Q | A | meta
---|---|---
Many cells in parallel If two or more cells of unequal voltages are connected in parallel (with the same terminal on the same side), is there a formula that gives the net potential difference?
Also, by Kirchhoff's loop law, if we go round the loop (the internal one, not the external one), we gain $V_1$ going in the direction of the current and lose $V_2$ going against it. This implies $$V_1-V_2=0$$ but $V_1\neq V_2$. What does this mean?
| You cannot connect ideal cells of unequal emf in parallel. The potential difference between the common ends would then have two or more different values at the same time, which is impossible. See Combination of ideal voltage sources.
All real cells have a non-zero internal resistance. This allows their terminal PD to differ from their emf. Terminal PD is the emf less the drop in potential across the internal resistance. For real cells in parallel the current through each cell adjusts so that the terminal PD has the same value in each branch.
Your mistake in your application of Kirchhoff's Voltage Law is that you have ignored the PD across the internal resistance in each cell. Or perhaps you have assumed that the cells have no internal resistance.
Thevenin's Theorem states that any network of resistors and cells is equivalent to a single resistor in series with a single cell. If the network consists of cells in parallel, each with an emf $V_i$ and an internal resistance $R_i$, then the voltage (ie emf) and resistance of the equivalent series resistor and ideal cell are given by
$$\frac{1}{R_{Th}}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}+...$$
$$\frac{V_{Th}}{R_{Th}}=\frac{V_1}{R_1}+\frac{V_2}{R_2}+\frac{V_3}{R_3}+...$$
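As a quick numerical sketch of these two formulas (the emfs and internal resistances below are assumed example values, not taken from the question):

```python
# Sketch: Thevenin equivalent of real cells in parallel (illustrative values assumed).
def thevenin_parallel(emfs, internal_resistances):
    """Return (V_th, R_th) for cells with the given emfs (V) and internal
    resistances (ohm) connected in parallel, using the formulas above."""
    g_total = sum(1.0 / r for r in internal_resistances)               # 1/R_th
    i_total = sum(v / r for v, r in zip(emfs, internal_resistances))   # V_th/R_th
    r_th = 1.0 / g_total
    v_th = i_total * r_th
    return v_th, r_th

# Two unequal cells: 1.5 V with 0.2 ohm and 1.2 V with 0.5 ohm (assumed values).
v_th, r_th = thevenin_parallel([1.5, 1.2], [0.2, 0.5])
print(f"V_th = {v_th:.4f} V, R_th = {r_th:.4f} ohm")
# The open-circuit terminal PD equals V_th; both branches then share this common PD.
```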
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/196802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Why does spinning a pen make it easier to remove it from the stand? I have a ballpoint pen stand on my desk. The pens are held inside their caps with the point down, like this one (but not as fancy):
If I try to simply pull up one pen, the friction between cap and pen is strong enough to lift the stand, instead of simply removing the pen from the cap.
But if I spin the pen while pulling, it comes out without even bumping up the stand.
Why does spinning the pen reduce the friction against the pull?
Info requested by the commenters:
*
*the cap tip is hollowed. The pen's body is hexagonal. No vacuum is at work here.
| For the same reason that if you try to push a block or your coffee mug across the table, it takes some force before it moves, but once it starts moving you can reduce the force you apply and the block/mug keeps moving: static friction is stronger than kinetic friction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/196867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Why is a sine wave considered the fundamental building block of any signal? Why not some other function? It is mathematically possible to express a given signal as a sum of functions other than sines and cosines. With that in mind, why does signal processing always revolve around breaking down the signal into component sine waves?
| You refer to Fourier series. The brilliance of Fourier was to use sines to express a function. You know that you can create any vector from the sum of some unit vectors; exactly the same thing happens here. The numbers you multiply the unit vectors by are the coefficients in the Fourier series.
To answer your question of why we use sin and cos: they have (mathematically) the same property as the unit vectors. In any book you see (but may not observe) that those unit vectors are orthogonal to each other. The same thing holds for sin and cos.
A very good video on F.S is this: https://www.youtube.com/watch?v=le_gMPJFyJ8
Go to 11:50 to see what I mean by the term orthogonal.
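A short numerical sketch of this orthogonality analogy (the interval and example signal are chosen for illustration):

```python
# On [0, 2*pi], sin(n x) and cos(m x) behave like orthogonal basis vectors.
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Inner product <f, g> = integral of f(x)*g(x) over one period [0, 2*pi]."""
    val, _ = quad(lambda x: f(x) * g(x), 0.0, 2.0 * np.pi)
    return val

print(inner(lambda x: np.sin(2 * x), np.cos))                   # ~0: different basis functions
print(inner(lambda x: np.sin(2 * x), lambda x: np.sin(3 * x)))  # ~0: different harmonics
print(inner(np.sin, np.sin))                                    # ~pi: "length squared" of sin(x)

# A Fourier coefficient is then just a projection onto a basis function,
# exactly like the component of a vector along a unit vector.
b1 = inner(lambda x: x, np.sin) / np.pi   # first sine coefficient of f(x) = x on [0, 2*pi)
print(b1)                                 # -2.0, matching x = pi - 2*sum_n sin(n x)/n
```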
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/196976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Reynolds Average Navier Stokes equations and turbulence scale To obtain the time average of an unsteady term like $\frac{\partial u_{i}}{\partial t}$ by definition we perform the following:
\begin{align}
\overline{\frac{\partial u_{i}}{\partial t}} &= \frac{1}{T}\int_{t}^{t+T} \frac{\partial }{\partial t}(U_i + {u}'_i)\, dt \\
& = \frac{U_i(x,t + T) - U_i(x,t)}{T} + \frac{u'_i(x,t + T) - u'_i(x,t)}{T}
\end{align}
where $U_i$ is the mean value of velocity in $x$-direction and $u'_i$ is the fluctuating part.
My question is: why does the term $\frac{u'_i(x,t + T) - u'_i(x,t)}{T}$ equal zero, so that
$$ \overline{\frac{\partial u_{i}}{\partial t}} = \frac{U_i(x,t + T) - U_i(x,t)}{T} = \frac{\partial U_{i}}{\partial t}$$
Somehow the reason is because $T$ effectively approaches $\infty$ on the time scale of the turbulent fluctuations so that it equals zero, but why isn't that the case for the first term?
| This is a very good question, which illustrates that Reynolds averaging is a very special form of averaging. Indeed the Reynolds averaging procedure assumes three properties of the averaging operator:
*
*Linearity: let $a,b$ be constants and $f,g$ observables; then $\overline{af+bg} = a \overline{f}+ b \overline{g}$.
*Commutes with derivatives: $\overline{\frac{\partial f}{\partial s}} =\frac{\partial\overline{ f}}{\partial s} $, for $s=x,y,z$ or $t$
*Factorising property: $\overline{f\overline{g}} = \overline{f}\overline{g}$.
As discussed here, these properties are not satisfied, in an exact sense, by many common averaging procedures. For example, the running average you suggest satisfies neither 2. nor 3. exactly, but as Kyle said correctly, in practice, when handling actual data, one simply chooses the smoothing period long enough, and one may also use a weighting function to minimise the problem. An example of an averaging operator which satisfies 1.-3. is the zonal average around a latitude circle of Earth, which is often considered in studying the general circulation of the atmosphere (the basic state in this case is zonally symmetric).
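A minimal numerical sketch of this point, using an assumed synthetic signal and window length: a finite boxcar running average is linear to machine precision, but only approximately satisfies the factorising property 3.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 20001)
f = np.sin(0.05 * t) + 0.3 * rng.standard_normal(t.size)   # slow mean + "turbulent" noise
g = np.cos(0.03 * t) + 0.3 * rng.standard_normal(t.size)

def running_mean(x, window=501):
    """Centred running average over `window` samples (simple boxcar)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

fbar, gbar = running_mean(f), running_mean(g)

# Property 1 (linearity) holds to machine precision:
print(np.max(np.abs(running_mean(2*f + 3*g) - (2*fbar + 3*gbar))))   # ~1e-15

# Property 3 (factorising), bar(f * gbar) == fbar * gbar, holds only approximately:
print(np.max(np.abs(running_mean(f * gbar) - fbar * gbar)))          # noticeably nonzero
```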
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Dirac delta function definition in scattering theory I'm studying scattering theory from Sakurai's book.
In the first pages he gets to the following expression:
$$\langle n|U_I(t, t_0)|i\rangle=\delta_{ni}-\frac{i}{\hbar}\langle n|V|i\rangle\int_{t_0}^t e^{i\omega_{ni}t'} dt',\tag{1.9}$$
where $U$ is the propagator in Dirac's interaction picture and $V$ is a potential operator.
So, given that scattered states are only defined asymptotically, we want to send $t \to \infty$ and $t_0 \to -\infty$, and I would say that the integral immediately becomes a Dirac delta, because that's just its integral representation! But he says: let's define a $T$ matrix such that:
$$\langle n|U_I(t, t_0)|i\rangle=\delta_{ni}-\frac{i}{\hbar}T_{ni}\int_{t_0}^t e^{i\omega_{ni}t'+\varepsilon t'} dt'.\tag{1.10}$$
And then keeps going.
I don't get this! Why do we need this small parameter $\varepsilon$? He then says that it's going to be sent to zero and that it makes sure the integral does not diverge. I don't quite get this prescription. Can anyone help me understand this strategy?
| I think the point is that if you took the limits to infinity without doing anything else, you'd be implicitly redefining the matrix element in order to make the equations consistent. So he just calls the redefined matrix element $T_{ni}$ instead of $V_{ni}$. Later he solves for $T_{ni}$ in terms of $V_{nj}$ (see the section "Solving for the T matrix").
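As a side illustration of the role of $\varepsilon$ (not part of Sakurai's text; parameters assumed): with $\varepsilon>0$ the time integral converges, and the Lorentzian it produces behaves as a nascent delta function as $\varepsilon\to 0$, which is how the naive delta-function limit is recovered in a controlled way.

```python
# Sketch (assumed parameters): the epsilon-regularised time integral produces a
# Lorentzian in omega; its area is fixed while its peak grows as epsilon -> 0,
# i.e. it behaves as a nascent delta function.
import numpy as np
from scipy.integrate import quad

def lorentzian(omega, eps):
    # eps/(omega^2 + eps^2): the weight associated with the regularised integral
    # (the integral over (-inf, 0] gives 1/(eps + i*omega))
    return eps / (omega**2 + eps**2)

for eps in (1.0, 0.1, 0.01):
    area, _ = quad(lorentzian, -np.inf, np.inf, args=(eps,))
    print(eps, area / np.pi, lorentzian(0.0, eps))   # area/pi stays 1, the peak grows like 1/eps
```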
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Are Hubble Telescope Images in true color? Like many others, I have marveled at the images made available from the Hubble Space Telescope over the years. But I have always been curious about the color shown in these images. An example is shown below. Are the colors we see, such as the yellows, blues, and so on, the true colors, or are they applied by some kind of colorization method to enhance the image quality for realism?
| I find myself now answering my own question, but only because the comment feature is not suited to this "comment". I have selected the answer by @HDE 226868 as my answer, primarily due to the linked Space.com reference. Very good answer to my question.
In particular, I thought this quote from the same page was important, as these reasons for adding color are closer to what I originally thought was being done:
The Hubble team uses color in three ways. First, for objects that would otherwise be too faint for the human eye to see, the team adds color to make the objects visible. Second, the team uses color to depict details that the human eye can't see, like astronomical features only visible in infrared or ultraviolet light. Third, color can highlight delicate features that would be otherwise lost.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 3,
"answer_id": 0
} |
Why is there an energy gap in superconductors? I'm a little out of my depth here...
I'm trying to understand quasiparticle tunnelling in superconductor-insulator-superconductor junctions. Many books use the "semiconductor model" to explain this:
(source: wikimedia.org)
These diagrams show the available quasiparticle states (with a large band gap due to the formation of Cooper pairs), the filled states, and the empty states.
My question with these diagrams is: shouldn't all the electrons exist as Cooper pairs? I assume that the lower band is filled with quasiparticles, since Cooper pairs would all be at the same energy level and quasiparticles obey Fermi-Dirac statistics, but I don't know where they're coming from.
Also, why is there an energy gap in the quasiparticle energy states? I understand that this gap corresponds to the energy needed to break Cooper pairs, but I don't understand why would you need to break Cooper pairs to raise the energy of quasiparticles.
Or is this "semiconductor model" not fully representative of the physics?
| The lower part is not filled with quasi-particles. At zero Kelvin, in zero magnetic field and with zero disorder, all free electrons condense and form the superconducting condensate. The semiconductor model now describes the breaking of Cooper pairs not as resulting in two electron-like excitations, but in one electron-like and one hole-like excitation. As you may be familiar with from semiconductors, one electron of energy $E$ above the Fermi level may be described by one hole with the same energy below.
Now, this is the equilibrium. Tunnelling processes however are very non-equilibrium processes! When you apply a bias across your junction you give rise to quasi-particle excitations, which result in a tunnelling current. That is why, even at 0 Kelvin, there is "electron" tunnelling...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to predict bound states in a 1D triangular well? Assume we have a (single) particle in a potential well of the following shape:
For $x \leq 0$, $V = \infty$ (Region I)
For $x \geq L$, $V = 0$ (Region III)
For $0 < x < L$, $V = -V_0\frac{L-x}{L}$ (Region II).
The potential geometry is reminiscent of the potential energy function of a diatomic molecule (with $x$ the intra-nuclear distance).
In Region II the potential energy is a field with (positive) gradient $\frac{V_0}{L}$.
A few observations:
In Region II, $V(x)$ is non-symmetric, so we can expect eigenfunctions without definite parity.
In Region II we can expect $\psi(0) = 0$.
We can also expect $\psi(\infty) = 0$, so the wave functions should be normalisable.
A quick analytic look at the Schrödinger equation in Region II using Wolfram Alpha’s DSolve facility shows the solutions involve the Airy Functions $A_i$ and $B_i$.
For $\frac{V_0}{L} = 0$, the problem is reduced to an infinite potential wall (not a well). Incoming particles from Region III would simply be reflected by the wall at $x = 0, V = \infty$. There would be no bound states.
And this raises an interesting question: for which value of $\frac{V_0}{L}$ is there at least one bound state and approximately at which value of the Hamiltonian $E$?
I have a feeling this can be related to the Uncertainty Principle, because aren't the confinement energies of bound particles in 1D wells inversely proportional to $L^2$? If so, would calculating a $\sigma_x$ not allow calculating a $\langle p^2 \rangle$ and thus a minimum $E$ for a bound state?
| Disclaimer: In this answer, we will just derive a rough semiclassical estimate for the threshold between the existence of zero and one bound state. Of course, one should keep in mind that the semiclassical WKB method is not reliable$^1$ for predicting the ground state. We leave it to others to perform a full numerical analysis of the problem using Airy Functions.
^ V
W|
a|
l|
l| L
-----|---|------------------> x
| /
| /
|/
-V_0 |
|
$\uparrow$ Fig.1: Potential $V(x)$ as a function of position $x$ in OP's example.
First let us include the metaplectic correction/Maslov index. The turning points at an infinitely hard wall and at an inclined potential wall have Maslov indices $2$ and $1$, respectively, cf. e.g. this Phys.SE post. In total $3$. We should then adjust the Bohr-Sommerfeld quantization rule with a fraction $\frac{3}{4}$.
$$ \int_{x_-}^{x_+} \! \frac{dx}{\pi} k(x)~\simeq~n+\frac{3}{4},\qquad n~\in~\mathbb{N}_0,\tag{1} $$
where
$$ k(x)~:=~\frac{\sqrt{2m(E-V(x))}}{\hbar}, \qquad
V(x)~:=~-V_0 \frac{L-x}{L}. \tag{2} $$
At the threshold, we can assume $n=0$ and $E=0$. The limiting values of the turning points are $x_-=0$ and $x_+=L$. Straightforward algebra yields that the
threshold between the existence of zero and one bound state is
$$V_0~\simeq~\frac{81}{128} \frac{\pi^2\hbar^2}{mL^2} \tag{3} .$$
$^1$ For comparison, the WKB approximation for the threshold of the corresponding square well problem yields
$$V_0~\simeq~\frac{\pi^2\hbar^2}{2m L^2} \tag{4} ,$$
while the exact quantum mechanical result is
$$V_0~=~\frac{\pi^2\hbar^2}{8m L^2} \tag{5} ,$$
cf. e.g. Alonso & Finn, Quantum and Statistical Physics, Vol 3, p. 77-78. Not impressive!
^ V
W|
a|
l|
l| L
-----|---|------------------> x
| |
| |
| |
-V_0 |---|
|
$\uparrow$ Fig.2: Corresponding square well potential as a function of position $x$. Each of the 2 infinitely hard walls has Maslov index 2.
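A quick numerical check of Eqs. (1)-(3), with $\hbar=m=L=1$ assumed for illustration: at the threshold value (3), the phase integral on the left-hand side of Eq. (1) should equal $3/4$.

```python
import numpy as np
from scipy.integrate import quad

hbar = m = L = 1.0
V0 = (81.0 / 128.0) * np.pi**2          # threshold value from Eq. (3)

def k(x, E=0.0):
    V = -V0 * (L - x) / L               # potential of Eq. (2)
    return np.sqrt(2.0 * m * (E - V)) / hbar

integral, _ = quad(lambda x: k(x) / np.pi, 0.0, L)
print(integral)                          # ~0.75 = n + 3/4 with n = 0, as in Eq. (1)
```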
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Would a solid door handle get hotter than a hollow one if there is a fire behind the door? I am a designer (mechanical engineer) who works for a fire company. My boss asked me to develop and build a new door handle and lock mechanism, which I did successfully. It has a 16 mm solid vertical handle on the right of the door.
My boss came and viewed the work, and the first thing out of his mouth was "We will need a hollow handle so that it doesn't get as hot." At this point we had a discussion as to whether changing the handle from solid to hollow would change the maximum temperature.
I argued that by only changing from solid to hollow none of the heat transfer mechanisms will decrease (convection, conduction & radiation) and that only the time it takes to reach maximum temperature (or ambient) will change.
If I am wrong can you please explain why. It just seems like I have the correct answer here.
Thank you for your time.
UPDATE: 07/08/15
Not too sure if all the people who answered this will see this I hope so. Thanks for your time.
Your answers were very good and what I expected. I totally agree a solid handle stores more energy. Which when touched will be available for transfer into your hand and the longer the handle is touched the more of this energy gets transferred and the more the user gets "burned". Veritasium did a great youtube video on this.
https://www.youtube.com/watch?v=hNGJ0WHXMyE
Fun fact: in the fire standards there is no maximum temperature requirement for any part of the fire door, whether it is touched by the user or not.
| A hollow handle is preferred. It will transfer less heat, as it has a reduced cross section. It cools off by radiation and convection at least at the same rate as a solid one, so the equilibrium temperature will be lower.
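A rough lumped-parameter sketch of the mechanism this answer describes (every dimension and coefficient below is an assumption for illustration, not data from the question): steady conduction from the hot door face along the handle, balanced by convection from the handle surface, gives a lower equilibrium temperature for the hollow handle because its conduction cross-section is smaller.

```python
import math

T_fire, T_amb = 600.0, 20.0      # assumed hot-face and room temperatures, degC
k_steel = 45.0                   # W/(m K), assumed conductivity
h = 15.0                         # W/(m^2 K), assumed convection coefficient
length = 0.10                    # m, assumed conduction path from the hot face
r_out, wall = 0.008, 0.002       # 16 mm handle, 2 mm wall if hollow (assumed)
A_surf = 2.0 * math.pi * r_out * length   # outer surface is the same for both handles

def equilibrium_T(A_cross):
    # k*A_cross*(T_fire - T)/length = h*A_surf*(T - T_amb)  ->  solve for T
    g_cond = k_steel * A_cross / length
    g_conv = h * A_surf
    return (g_cond * T_fire + g_conv * T_amb) / (g_cond + g_conv)

A_solid = math.pi * r_out**2
A_hollow = math.pi * (r_out**2 - (r_out - wall)**2)
print("solid :", equilibrium_T(A_solid))    # higher equilibrium temperature
print("hollow:", equilibrium_T(A_hollow))   # lower: smaller conduction cross-section
```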
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
Difference between $|d{\bf r}|$ and $d|{\bf r}|$ What is the difference between $|d{\bf r}|$ and $d|{\bf r}|$ and why are both of them not always equal to each other?
My question might seem stupid to some and will probably get downvoted, but I have thought about the question and still can't comprehend any difference between the two.
I was reading Irodov's Mechanics as extra reading, when I came upon this!
The book has given an example at the footnote but I still can't understand. :/
| As shown in the diagram, $|dr|$ represents the magnitude of the vector difference (which involves the rules of vector addition/subtraction) between $\vec{r_2}$ and $\vec{r_1}$, while $d|r|$ represents the difference between the magnitudes of the two vectors, which is simply the difference in their lengths.
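A small numerical illustration of the distinction (example vectors assumed): for a pure rotation the length does not change, so $d|r| = 0$, while $|dr|$ is clearly nonzero.

```python
import numpy as np

r1 = np.array([3.0, 4.0])        # |r1| = 5
r2 = np.array([0.0, 5.0])        # |r2| = 5, same length but rotated
dr = r2 - r1

print(np.linalg.norm(dr))                          # |dr|  = |r2 - r1| ~ 3.16
print(np.linalg.norm(r2) - np.linalg.norm(r1))     # d|r| = |r2| - |r1| = 0
```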
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/197989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What does it mean for a physical quantity if its mixed second partial derivatives are not equal? This goes for every problem (either in electromagnetism or fluid dynamics) that has to do with vector fields. Say we have a fluid flowing in a closed circular pipe (or an electromagnetic field, the concept does not matter). If its mixed second partial derivatives are not equivalent,
$$
\frac{\partial^2 \mathbf u}{\partial x\,\partial y}\neq\frac{\partial^2 \mathbf u}{\partial y\,\partial x}
$$
where $\mathbf u$ is the flow velocity vector, then what does this mean (physical, not mathematical meaning)?
I want an INTUITIVE (physical, not plain mathematics) understanding of what changes for the fluid (or EM field) from the situation in which they were equal. Give your own example if you think that this is the ideal way of explaining the stuff you have in mind.
Note: For those who don't know, elementary mathematics tell us that those second mixed partial derivatives should be equal in most cases, so my question has to do with an exception of this rule (especially in physics where we don't see this kind of behavior everyday).
| Singularities in functions often lead to non commuting second derivatives. As for a Physical interpretation I think the following exercise may help:
*
*The partial derivative can, from first principles, be written as
$$\frac{\partial f(x,y)}{\partial x} = \lim_{h\to 0}\frac{f(x+h,y)-f(x,y)}{h},$$
i.e. the function is incremented by $h$ and then the derivative is found.
(x,y+h). .(x+h,y+h)
(x,y). .(x+h,y)
When you take the partial wrt x and then wrt y you move horizontally and then vertically.
In the other case you first move vertically and then horizontally.
For nice continuous functions you reach the same place, but if the function does not have any singularities and the derivatives still do not commute, then that means you are in a space where
$$(x+y) \neq (y+x).$$
Such non-commuting spaces can lead to non-commuting derivatives as well. (All derivatives above are partial derivatives.)
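A numerical illustration of the role of singularities, using the classic textbook counterexample $f(x,y)=xy(x^2-y^2)/(x^2+y^2)$ (this function is not mentioned in the original answer): its second derivatives are discontinuous at the origin, and the two mixed partials there come out as $-1$ and $+1$.

```python
def f(x, y):
    return 0.0 if x == 0.0 and y == 0.0 else x * y * (x**2 - y**2) / (x**2 + y**2)

h1, h2 = 1e-5, 1e-3            # inner and outer finite-difference steps

def fx(x, y):                  # central-difference approximation of df/dx
    return (f(x + h1, y) - f(x - h1, y)) / (2 * h1)

def fy(x, y):                  # central-difference approximation of df/dy
    return (f(x, y + h1) - f(x, y - h1)) / (2 * h1)

d2f_dydx = (fx(0.0, h2) - fx(0.0, -h2)) / (2 * h2)   # d/dy (df/dx) at the origin
d2f_dxdy = (fy(h2, 0.0) - fy(-h2, 0.0)) / (2 * h2)   # d/dx (df/dy) at the origin
print(d2f_dydx, d2f_dxdy)                             # ~ -1.0 and ~ +1.0
```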
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/199352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 4
} |
What is the relationship between quantum physics and chaos theory? I am not a physicist, I am looking for a non-technical explanation.
Articles such as this one seem to hint at the fact that "macro reality" regulated by classical mechanics is somehow a pattern emerging out of quantum-level chaos. Is that correct? Can anything at a quantum level be defined as a complex system?
I apologize if there are inaccuracies in the way the question is formulated, feel free to tear it apart and answer as it fits best.
| There is a SHM wave equation based on Schroedinger's equation and Maxwell's equations for an elliptical wave function which produces chaos (Langton's ant).
It has Fermi statistics emerging from some of the variables and also has a beautiful thermodynamic Hamiltonian ($\theta$). It has the maths and the Octave code. This is one dimensional; when done in three dimensions some amazing properties emerge. I guess this answers the question about SHM and chaos.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/199402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Creating complex interference figures with simple sources 3D printers that use Stereolithography usually have to build a 3D object layer by layer, each layer being constructed by having a laser travel across the surface until it has hardened all the layer's interesting parts.
Thus I was wondering if it would be possible, theoretically (I am aware that in practice it would probably be impossible), using a large number of simple light sources, to instantly build the layer using interference.
Say for instance that you have $n$ sources $s_k(d,t)=S_k\cos(kd+\omega t)$ where $d$ is the distance to the source, and any two sources have to be separated by a distance of at least $\epsilon$. If we consider a simple case, we could consider that all the sources are in 2D, and on a centered circle (or square). Let's suppose that our space is filled with resin, which becomes solid when subjected to an intensity of at least $S_r$. What is the optimal resolution one could get?
If this problem is too complex to be answered here, has that been researched before ? I haven't managed to find relevant articles.
| In principle I think your idea is sound, but there are serious complications that you probably haven't recognized. The complications have to do with "coherence". So your n sources actually have functions
$s_k = S_k \cos(kd + \omega t + \phi_k)$
where $\phi_k$ are phase constants associated with each source. Even for "nice" sources like lasers they each have a different $\phi_k$. What is more, the $\phi_k$ are actually time dependent. This time dependence is generally very complicated, but can often be well modelled as constant but with random changes which occur at random times with some characteristic time, $\tau$, which we call the coherence time.
So, if you just walk down the hall to the lab, take two HeNe lasers and point them both at the same point on a smooth surface, you won't see any evidence of interference, because their coherence time (while longer than that of, say, an incandescent bulb) is much shorter than the time that your brain averages visual data over. So on sufficiently short time scales interference patterns would exist. But the patterns would change on very short time scales. There are ways around this (such as phase locking the lasers).
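A toy numerical sketch of this washing-out effect (source geometry, wavelength and phase statistics all assumed): with a fixed relative phase the two-source pattern has strong fringes, while averaging over a randomly drifting relative phase leaves an almost uniform intensity.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.5
k = 2 * np.pi / wavelength
x = np.linspace(-5.0, 5.0, 1000)          # points on a screen
d1 = np.hypot(x - 1.0, 10.0)              # distance from source 1 (at x = +1, screen 10 away)
d2 = np.hypot(x + 1.0, 10.0)              # distance from source 2 (at x = -1)

def intensity(phi):
    field = np.exp(1j * k * d1) + np.exp(1j * (k * d2 + phi))
    return np.abs(field) ** 2

coherent = intensity(0.0)                                  # fixed relative phase: fringes
averaged = np.mean([intensity(p) for p in rng.uniform(0, 2*np.pi, 2000)], axis=0)

print(coherent.max() - coherent.min())    # large contrast: fringes present
print(averaged.max() - averaged.min())    # ~0 contrast: pattern averaged away
```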
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/199651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is a bomb's shockwave strong enough to kill? I'm watching a movie, The Hurt Locker, and the first scene shows an IED explosion which kills a soldier. Of course movies don't depict explosions with maximum realism, but I noticed the debris and smoke / flame didn't reach him, and it made me curious about whether invisible aspects of an explosion - heat or concussive blast can be lethal (without carrying shrapnel).
How strong are the unseen forces from an explosion such as a road side bomb? Strong enough to be lethal?
| The blast overpressure of the explosion is a very strong shock wave which can kill humans. There are a number of ways an explosion without shrapnel can do harm to people:
*
*Rupturing of the hollow organs due to rapid compression and expansion by the shock wave.
*The body can get thrown through the air if a strong detonation occurs nearby. Impact, particularly of the head, can cause brain injury.
*Burns to the body due to heat or chemicals from the detonation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/199730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 4,
"answer_id": 2
} |
How does the Higgs field change with the expanding universe? Does the expanding universe really have an effect on the Higgs field? Say, is it like the matter getting diluted with the expanding universe? Or does the expansion not have any effect on the Higgs field, just like dark energy, which maintains a uniform energy density even as the universe expands?
| At the moment there does not exist a quantized unified theory of gravitation and the standard model. The only candidates to date are string theories, which are at a research level, and in which I am not able to form an answer.
There exist effective quantizations of gravity and effective models including the standard model in the studies of cosmology and the Big Bang model. In these models the continuing expansion of the universe does not affect the bound states of the universe: atoms, molecules, gravitational masses, galaxies, clusters. Even the gravitational field is much stronger than the effective "force" of expansion (the raisin-bread analogue for the expansion and the stability of bound states).
The Higgs field, once the energies became low enough to break the symmetry which gives zero masses at very high energies during the original expansion, belongs to the set of fields which define the bound states of atoms and molecules, together with the strong, weak and electromagnetic fields. Thus the Higgs field will also not be affected by the continuing expansion of the universe.
This is a hand-waving explanation within the known theories, which needs a theorist's input, imo. See one such by Lubos Motl.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/199932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many pairs of nuclei collide in heavy ion collisions? As each bunch of heavy ions consists of a large number of nuclei, it does not seem unlikely that multiple binary ion collisions will occur per crossing, as they do in p-p collisions. However, should this be the case, I do not see how a possible azimuthal anisotropy could be related to the elliptic flow, which to my understanding is defined for the individual binary nucleus-nucleus collisions and in terms of the reaction plane of the collision.
So, either the luminosity is tuned to allow for maximally one collision per crossing or there is a way to separate the individual collisions from each other.
So, my question is two-fold:
*
*How many pairs of nuclei collide in heavy ion collisions (at the LHC)?
*If $n>1$; how are they distinguished from each other?
| ALICE is a heavy ion experiment at CERN.
Here is a lead-lead collision:
One of the LHC's first lead-ion collisions, as recorded by the ALICE detector.
Thanks to the advances of computing, the vertex is determined by the measured tracks pointing back to it, even though there are thousands of tracks from each vertex. Certain tolerance assumptions have to be used to assign a track to a vertex, of course.
The average number of interactions per bunch crossing varies from 0.05 to 0.3. (page 9 in link)
So the experiment is designed to have one main vertex (one colliding pair of ions) per crossing by having a low luminosity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/200133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What range of light wavelengths can a typical mirror reflect? A typical mirror is capable of reflecting the full spectrum of visible light. Can it also reflect other wavelengths both longer and shorter? What is the range?
| The reflectance of a typical mirror depends on the metallic coating used, but it is usually aluminum, or silver in more expensive mirrors. Special optical coatings can be used to reflect or scatter EM waves at specific wavelengths.
Here is a photo of a mirror reflecting IR (700nm - 1mm wavelength):
Mirrors are also used to focus X-rays (.01nm - 10nm):
Thor Bjarnason states it well here:
"It depends what you make the mirror out of. If you're looking at
radio waves then the mirror will have to be made of thicker metal,
because as you increase the wavelength you also have to increase the
thickness of the metal to get the same reflectivity. That's actually
how satellite dishes work. They're basically a big curved mirror that
concentrates all of the microwaves coming down from the satellite."
Thus mirrors can reflect quite a large range of EM waves (about 0.1 nm - 1 cm in wavelength with specialized mirrors); however, it is very unlikely that the mirror in your bathroom can reflect much more than visible light and IR radiation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/200420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can a classical 50/50 probability be distinguished from a quantum superposition with a single measurement? $\renewcommand{\ket}[1]{|#1\rangle}$
A "false" (equally superimposed qubit) is created by mechanically firing with 50/50 probability a resonance photon at a Hydrogen atom qubit in the ground state. This qubit is sent to Alice and it now has 50/50 probability of being in state $\ket{0}$ or state $\ket{1}$, but it is not in a quantum mechanical superimposed state! Alice is also sent a "real" 50/50 superimposed qubit created by a Rabi process.
Is there any way that Alice can perform gates and measurements on the two qubits to determine which is which?
| The minimum probability of error for correctly identifying one of two states $\rho$ and $\sigma$ prepared with equal probability (where minimization is over POVMs and the pair of states is fixed) is related to the trace distance $\|\rho-\sigma\|_1$ between them via the Helstrom bound $p_\text{min}=\frac{1}{2}-\frac{1}{4}\|\rho-\sigma\|_1$. For some reason, it's very hard to find this actually stated anywhere, but it's definitely in Helstrom's book on quantum estimation theory. In particular, the minimum error probability is zero if and only if the two density operators have orthogonal support, when $\|\rho-\sigma\|_1=2$. The two density matrices in your example are $\rho=(1,1;1,1)/2$ and $\sigma=(1,0;0,1)/2$, so $\|\rho-\sigma\|_1=1$. The optimal measurement therefore has $p_\text{min}=1/4$. A measurement satisfying this bound is easy to find: just measure in the $\left\vert\pm\right\rangle$ basis, and if you measure $\left\vert+\right\rangle$ report $\rho$ and if you measure $\left\vert-\right\rangle$ report $\sigma$. With $p=1/2$, $\rho$ will be prepared. You'll always be right in this case. With $p=1/2$, $\sigma$ will be prepared, and then you have $p=1/2$ to measure $\left\vert+\right\rangle$ and incorrectly report that you received $\rho$, for an overall error probability of $\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$.
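A short numerical check of the numbers quoted above, assuming the standard definition of the trace norm as the sum of the absolute eigenvalues of $\rho-\sigma$:

```python
import numpy as np

rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # pure |+><+| state (the "real" superposition)
sigma = 0.5 * np.eye(2)                          # 50/50 classical mixture of |0> and |1>

eigvals = np.linalg.eigvalsh(rho - sigma)
trace_distance = np.sum(np.abs(eigvals))         # ||rho - sigma||_1
p_min = 0.5 - trace_distance / 4.0               # Helstrom bound

print(trace_distance)   # 1.0
print(p_min)            # 0.25, matching the optimal |+>/|-> measurement described above
```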
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/200625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Can you huddle next to a fridge in sub-zero temperatures and keep warm? There's a saying I've heard in so many places.. "It was so cold that we used to huddle next to our refrigerator to keep warm..." I had heard this phrase uttered some 30 or so years ago, and it's stuck with me ever since...
Which gets me thinking...
Imagine it's -40 degrees (Fahrenheit or Celsius, it's the same number for both scales). Your fridge is by comparison capable of blasting chilled air at +4 degrees Celsius (39.2 degrees Fahrenheit)... given the temperature difference between the environment and the refrigerator, could an average human with a body temperature of ~37 deg C potentially warm themselves by an open fridge blasting chilled air at +4 deg C in a surrounding environment of -40 deg C and keep "warm"?
| According to the second law of thermodynamics, sustained cooling below the ambient temperature requires work. The fridge's electric motor does this work to cool the air inside the fridge; at the same time it has to warm the air outside the fridge. This is how one can "huddle next to a fridge" to keep warm (you can easily find the warm part of your fridge, usually at its rear); you do not keep warm with the air cooled in the refrigerator.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/200715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why do physicists always give "event rates"? Many times I see plots for expected/measured "event rates", but what's the motivation for this? Why not generate/use plots for expected/measured event numbers/counts instead?
| The actual number of events measured will depend on how long an experiment is run, the efficiency of the detector, the size or thickness of the target, and the intensity of the incoming beam, among other things. Each of these is unique to a given experiment.
Science is done with the expectation of reproducibility, so these factors which distinguish one experimental setup from another are factored out of the analysis. What remains is a rate: per unit time, per incident particle, per unit target thickness, etc.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/200797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is the time period of an oscillator with varying spring constant? It is well known that the time period of a harmonic oscillator when mass $m$ and spring constant $k$ are constant is $T=2\pi\sqrt{m/k}$.
However, I would be interested to know what the time period is if $k$ is not constant. I have searched for hours on Google and come up with nothing. I am looking for an analytical solution.
| Here is a solution for a spring force that varies directly with displacement. It thus varies with time implicitly, but has no explicit dependence on time or any other variable.
Givens and Assumptions
*
*oscillator with mass $m$
*amplitude of oscillation $A$
*oscillator displacement, $x$, varies with time, but $x(t)$ is unknown
*spring applies force varying with displacement, $F(x)$
*The function $F(x)$ is an odd function, that is $F(-x) = -F(x)$ (otherwise the amplitude could be different in the positive and negative directions - see below for what to do in this case)
*equilibrium position is $x=0$, that is $F(0) = 0$ (for convenience only)
Objective
Find the period of oscillation, $T$
Solution
Starting from conservation of energy, the sum of the kinetic and potential energy of the mass must be equal to the total energy, which is constant.
$$KE(x)+PE(x)=E$$
$$KE(x)=\frac{1}{2}mv^2(x)$$
$$PE(x)=\intop_0^xdx'\,F(x')$$
So $PE(x)$ is the potential energy stored in the spring, with $x'$ as just an integration variable.
We can think of $PE(x)$ as another way of defining the force-displacement relationship of the spring. We can define the potential energy versus displacement or the force versus displacement, and getting the other one is fairly easy.
Now, at $x=A$, $KE(x=A)=0$, so $PE(A)=E$ is known.
And so we have
$$\frac{1}{2}mv^2(x)=PE(A)-PE(x)$$
Solving for $v(x)$,
$$v(x)=\sqrt{\frac{2}{m}\left(PE(A)-PE(x)\right)}$$
Because $v=\frac{dx}{dt}$, we can also write
$$dt=\frac{dx}{v(x)}$$
Integrating both sides, the time to go from a position $x_0$ to $x_1$ is
$$\Delta t = \intop_{x_0}^{x_1}\frac{dx}{v(x)}$$
In particular, we know the time required to go from $x=0$ to $x=A$ is $T/4$, so
$$T=4\intop_0^A\frac{dx}{v(x)}$$
$$T=4\intop_0^A\frac{dx}{\sqrt{\frac{2}{m}}\sqrt{PE(A)-PE(x)}}$$
which further simplifies to...
Final Result
$$T=\sqrt{8m}\intop_0^A\frac{dx}{\sqrt{PE(A)-PE(x)}}$$
Check of Result
For the linear case, $F(x)=kx$, so $PE(x)=\frac{1}{2}kx^2$ and $PE(A)=\frac{1}{2}kA^2$, which gives
$$T=\sqrt{8m}\intop_0^A\frac{dx}{\sqrt{\frac{k}{2}}\sqrt{A^2-x^2}}
=4\sqrt{\frac{m}{k}}\intop_0^A\frac{dx}{\sqrt{A^2-x^2}}$$
This integral can be looked up in a table, to obtain
$$T=4\sqrt{\frac{m}{k}}\left(\sin^{-1}(1)-\sin^{-1}(0)\right)=4\sqrt{\frac{m}{k}}\frac{\pi}{2}$$
$$T=2\pi\sqrt{\frac{m}{k}}$$
as expected. (QED)
We can dispense with the assumption that $F(x)$ is odd if we define two amplitude values: $A_+ > 0$ for the amplitude in the positive direction and $A_- < 0$ for the amplitude in the negative direction.
The total oscillator energy, $E = PE(A_+) = PE(A_-)$, so we can still call it $PE(A)$ as long as we remember what that means now.
Then, the period is twice the time required to go from $A_-$ to $A_+$, and so
$$T=2\intop_{A_-}^{A_+}\frac{dx}{\sqrt{\frac{2}{m}}\sqrt{PE(A)-PE(x)}}$$
$$T=\sqrt{2m}\intop_{A_-}^{A_+}\frac{dx}{\sqrt{PE(A)-PE(x)}}$$
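A quick numerical check of the final result (mass, spring constant and amplitude below are assumed for illustration): the formula reproduces $2\pi\sqrt{m/k}$ for a linear spring, and it applies equally to a non-linear force law such as $F(x)=kx^3$, where the period becomes amplitude-dependent.

```python
import numpy as np
from scipy.integrate import quad

def period(PE, A, m):
    """T = sqrt(8*m) * integral_0^A dx / sqrt(PE(A) - PE(x)), for an odd force law."""
    integrand = lambda x: 1.0 / np.sqrt(PE(A) - PE(x))
    val, _ = quad(integrand, 0.0, A)     # quad copes with the integrable endpoint singularity
    return np.sqrt(8.0 * m) * val

m, k, A = 1.3, 2.7, 0.9
print(period(lambda x: 0.5 * k * x**2, A, m))      # linear spring, F = k*x
print(2.0 * np.pi * np.sqrt(m / k))                # analytic result, should match

print(period(lambda x: 0.25 * k * x**4, A, m))     # quartic potential (F = k*x^3): depends on A
```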
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
How to form the spin triplet - singlet states from two electrons with spin not an eigenstate of $ S_z $ (spins not along the z-axis) For a two electron system, we know that the total $ J^2 $ states (Triplet - Singlet) are related to the $\uparrow \downarrow $ , $\downarrow \uparrow$ , $\uparrow \uparrow$ , $\downarrow \downarrow $ states as:
$$
\left(\begin{array}{c}
\left|11\right\rangle \\
\left|10\right\rangle \\
\left|1-1\right\rangle \\
\left|00\right\rangle
\end{array}\right)=\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\
0 & 0 & 0 & 1\\
0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0
\end{array}\right)\left(\begin{array}{c}
\uparrow\uparrow\\
\uparrow\downarrow\\
\downarrow\uparrow\\
\downarrow\downarrow
\end{array}\right)
$$
But what if we want the singlet-triplet not in terms of
$\uparrow = \left(\begin{array}{c}1\\0\end{array}\right)$ ,
$\downarrow =\left(\begin{array}{c}0\\1\end{array}\right)$ , but in terms of spin states that are at an angle $\theta$ from the $z$ axis?
For example, when $\hat\eta$ is at $60^\circ$ from the $z$ axis, the spin up and down states are
\begin{eqnarray*}
\nearrow & = & \left(\begin{array}{c}\frac{\sqrt{3}}{2}\\
\frac{1}{2}\end{array}\right)=\frac{\sqrt{3}}{2}\uparrow+\frac{1}{2}\downarrow\\
\swarrow & = & \left(\begin{array}{c}-\frac{1}{2}\\\frac{\sqrt{3}}{2}
\end{array}\right)=-\frac{1}{2}\uparrow+\frac{\sqrt{3}}{2}\downarrow
\end{eqnarray*}
By solving for $\uparrow$ and $\downarrow$ in terms of $\nearrow$ and $\swarrow$ and rewriting the $\uparrow \downarrow $ , $\downarrow \uparrow$ , $\uparrow \uparrow$ , $\downarrow \downarrow $ states in terms of the
$\nearrow \nearrow$ ,$\nearrow \swarrow$ , $\swarrow \nearrow$ , $\swarrow \swarrow$
we get (after some algebra): $$\left(\begin{array}{c}
\left|11\right\rangle \\
\left|10\right\rangle \\
\left|1-1\right\rangle \\
\left|00\right\rangle
\end{array}\right)=\left(\begin{array}{cccc}
\frac{3}{4} & -\frac{\sqrt{3}}{4} & -\frac{\sqrt{3}}{4} & \frac{1}{4}\\
\sqrt{\frac{3}{8}} & \sqrt{\frac{1}{8}} & \sqrt{\frac{1}{8}} & -\sqrt{\frac{3}{8}}\\
\frac{1}{4} & \frac{\sqrt{3}}{4} & \frac{\sqrt{3}}{4} & \frac{3}{4}\\
0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0
\end{array}\right)\left(\begin{array}{c}
\nearrow\nearrow\\
\nearrow\swarrow\\
\swarrow\nearrow\\
\swarrow\swarrow
\end{array}\right)$$
What is the general method of writing the Triplet - Singlet in terms of two electron states with spin at an arbitrary axis $\hat\eta (\theta)$ ?
Thank you!
| You are looking for the Wigner d-functions. They relate angular momentum eigenstates through rotation. As you can see in the link the definition is
$$d^{(j)}_{m,m'}(\theta) = \langle jm|e^{-i\theta J_y}|j m' \rangle$$
where $e^{-i\theta J_y}$ is a unitary rotation operator.
We'll have two sets of states: $|jm;0\rangle$ for the original basis and $|jm;\theta\rangle$ for eigenstates of momentum along the axis $\hat{\eta}(\theta)$. Let us assume that we've set up our basis so $\hat{\eta}$ lies in the $x-z$ plane. That way the two bases are related to each other by a rotation along the $y$ axis. The two sets of states are then related by a unitary rotation operator: $|jm;\theta\rangle = e^{-i\theta J_y}|jm;0\rangle $
You can look up the various $d$-functions here: http://pdg.lbl.gov/2002/clebrpp.pdf
Using $|jm;0\rangle$ as a complete set of states we can write
$$|jm;\theta\rangle = \sum_{m'} |jm';0\rangle\langle jm';0|jm;\theta\rangle$$
$$= \sum_{m'} |jm';0\rangle\langle jm';0|e^{-i\theta J_y}|jm;0\rangle $$
$$|jm;\theta\rangle= \sum_{m'} d^{(j)}_{m',m}(\theta) |jm';0\rangle$$
So you want to perform two steps. First, rotate your total angular momentum state into the basis you'd like to compare to with the $d$-functions. Then second, use the Clebsch-Gordan coefficients to decompose the total-angular momentum states into tensor-product states. The final result is
$$|jm;\theta\rangle= \sum_{j_1,j_2,m_1,m_2} \left(\sum_{m'} d^{(j)}_{m',m}(\theta) \langle j_1 m_1;j_2 m_2|j m'\rangle \right) |j_1 m_1; j_2 m_2;\theta\rangle$$
I'll leave you to compare this expression to the matrix you wrote out in your question, but you'll find that plugging in $\theta = 60^\circ$ gives you exactly what you already have.
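A numerical sketch that carries out both steps for the two-spin example in the question (conventions assumed: the tilted spinors are obtained by rotating $\uparrow$, $\downarrow$ about the $y$ axis by $\theta=60^\circ$); note that the singlet row comes out as $(0,\tfrac{1}{\sqrt{2}},-\tfrac{1}{\sqrt{2}},0)$, as required by the rotational invariance of the singlet.

```python
import numpy as np
from scipy.linalg import expm

theta = np.pi / 3                          # 60 degrees
sy = np.array([[0.0, -1j], [1j, 0.0]])
R = expm(-1j * theta * sy / 2.0)           # single-spin rotation operator about y

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ne, sw = R @ up, R @ down                  # the tilted spinors (the NE and SW arrows)

# Total-spin states in the z product basis (uu, ud, du, dd):
s = 1.0 / np.sqrt(2.0)
states = {"|11>":  np.array([1, 0, 0, 0.0]),
          "|10>":  np.array([0, s, s, 0.0]),
          "|1-1>": np.array([0, 0, 0, 1.0]),
          "|00>":  np.array([0, s, -s, 0.0])}

tilted_basis = [np.kron(a, b) for a in (ne, sw) for b in (ne, sw)]  # NE NE, NE SW, SW NE, SW SW

for name, psi in states.items():
    coeffs = [np.vdot(e, psi) for e in tilted_basis]   # <tilted basis state | total-spin state>
    print(name, np.round(np.real_if_close(coeffs), 4)) # reproduces the matrix in the question
```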
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Spherical and chromatic aberration correction I have some spherical lenses which are 5 mm, 1 mm and 0.5 mm in diameter, having 100x, 350x and 1000x magnification respectively. While looking at blood samples, I'm having big problems with spherical and chromatic aberration. The edges are completely out of focus and I get that rainbow-ish effect in the picture. How could I correct this?
The lenses we use are like these(just smaller): http://i.ytimg.com/vi/rUU-w18dsz0/maxresdefault.jpg
We put them in cardboard or paper holes and watch through our smartphones.
Here we looked at stained tissue with 1000x and blood cells with 350x.
| The main effect here is lack of field flatness or, stated in more geometric terms, the deviation between your imaging system's focal surface and the surface which you're imaging.
An imaging system generally images a plane onto the surface of an ellipsoid (approximately), i.e. light from a point source on a plane will converge to its tightest focus on an ellipsoidal surface. The "natural" curvature is back towards the lens system, as shown in an example lens I designed a long time ago below. So, if you're imaging through your ball lens onto the flat CCD smartphone camera chip, the image of your microscope slide is actually bent and so things get out of focus near the edges.
A big part of multielement microscope objectives is the correction that they make for this effect. It can be done, and it is very important in cameras and brightfield imaging systems. Objectives with higher field flatness (higher correction of this curvature) are often called by manufacturers "Plan" objectives.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bohr Model of the Hydrogen Atom - Energy Levels of the Hydrogen Atom Q.1) What is the idea of a stationary orbital of an electron? As it is said that energy is tangled with mass and vice versa, how is this energy always in the form of a cloud such as s, p, d, f? Where does this energy come from? Why doesn't the energy cloud vanish away, or what is stopping it from vanishing?
| First of all, let us make clear that the Bohr model posits orbits, with some quantization constraints imposed by hand, as the stationary assumption.
Orbitals for the hydrogen atom are the solutions of the Schrodinger equation with the hydrogen potential energy and helped develop quantum mechanical theory.
In QM the solution of the Schrodinger equation is a wave function, and the square of this wave function gives the probability of finding the electron at a specific (x,y,z). Orbitals are probability distributions with specific quantum numbers. The theory and the experiment agree.
A hydrogen atom in its ground state (n=1, s) will stay there forever unless the atom is hit by a photon whose energy equals the difference between the s state and another orbital. Thus the energy for higher states is supplied by a photon interaction. These higher states will decay to the ground state, releasing the energy as a photon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Eigenspaces of angular momentum operator and its square (Casimir operator) The casimir operator $\textbf{L}^2$ commutates with the elements $L_i$ of the angular momentum operator $\textbf{L}$:
$$
[\textbf{L}^2, L_i] = 0.
$$
However, the $L_i$ do not commute among themselves:
$$
[L_i, L_j] = i\hbar\epsilon_{ijk}L_k.
$$
This makes sense so far, but it leaves me wondering how their eigenspaces relate to each other. I remember some theorem that diagonalizable, commuting matrices share their eigenspaces. If those operators could be expressed as complex matrices (in the finite-dimensional case), they surely are diagonalizable. So it follows that $\textbf{L}^2$ has the same eigenspaces as the three $L_i$, but that would imply that they commute among themselves, which is not the case.
What am I missing? What is the relation between the eigenspaces of these operators?
| The $L_i$ has many eigenspaces corresponding to many eigenvalues. Each of those eigenspaces is also an eigenspace of the Casimir operator.
So they share common eigenspaces in the sense that there are eigenspaces that are eigen to both. But they don't share them in the sense that they are the same.
Look at the hydrogen atom. There are energy eigenspaces and there are Casimir eigenspaces. A Casimir eigenspace of eigenvalue $0$ contains vectors of every possible energy. And an energy eigenspace with eigenvalue $E$ contains vectors of many different angular momenta. But there are common eigenvectors that have a fixed Casimir eigenvalue and a fixed energy.
There exist common eigenvectors for commuting operators but that doesn't mean a random eigenvector of one is an eigenvector of the other.
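A small numerical illustration of this point in the spin-1 representation (with $\hbar=1$ assumed): $\textbf{L}^2$ commutes with every $L_i$, the $L_i$ do not commute with one another, and an eigenvector of $L_z$ is automatically an eigenvector of $\textbf{L}^2$ but generally not of $L_x$.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(L2, Lx), 0), np.allclose(comm(L2, Lz), 0))   # True True
print(np.allclose(comm(Lx, Ly), 1j * Lz))                           # True: [Lx, Ly] = i Lz, not zero

v = np.array([1.0, 0.0, 0.0])          # L_z eigenvector with m = +1
print(L2 @ v)                          # proportional to v: also an L^2 eigenvector (l(l+1) = 2)
print(Lx @ v)                          # not proportional to v: not an L_x eigenvector
```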
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Light Absorption of a glass I have $n$ (the refractive index of the glass sheet) and $t$ (the thickness of the glass sheet).
With this information, how can I find the amount of light absorbed by the glass sheet?
| Possibly some semantic confusion here. Glass, with a simple refractive index, does not "absorb" light, it is transparent. Therefore the amount of light that emerges on the other side, for a given angle of incidence, is independent of the thickness of the glass.
Instead, some of the incident light is reflected from the first boundary as the light enters the glass, and some is reflected from the second boundary as it exits the glass. Perhaps this is what you are calculating? Have a look at Fresnel Equations.
Of course, real materials do absorb/scatter light, but you need a complex refractive index to sort that out. Do you have a complex refractive index? If you do then the light is exponentially attenuated as it travels through the material, but the amount of intensity attenuation depends on the (vacuum) wavelength of the light, $\lambda$, and the path length through the glass, roughly as $\exp(-4\pi \kappa x/\lambda)$, where $\kappa$ is the imaginary part of the refractive index and $x$ is the path length (which will be $t$ for normal incidence, but larger for non-normal incidence).
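A minimal numerical sketch of this attenuation formula (wavelength, thickness and the imaginary indices below are assumed for illustration, and Fresnel reflection losses at the surfaces are ignored):

```python
import math

def transmitted_fraction(kappa, t, wavelength):
    """exp(-4*pi*kappa*t/lambda): intensity attenuation from the imaginary index kappa."""
    return math.exp(-4.0 * math.pi * kappa * t / wavelength)

wavelength = 550e-9          # green light, m (assumed)
t = 3e-3                     # 3 mm sheet at normal incidence, m (assumed)
for kappa in (1e-8, 1e-7, 1e-6):       # assumed imaginary parts of the refractive index
    print(kappa, transmitted_fraction(kappa, t, wavelength))
```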
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/201891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is the order of the gates making up the QFT on two qubits? The Quantum Fourier Transform consists of 2 gates. Controlled Phase Gates, and Hadamard gates. I'm assuming the Controlled Phase Gate is a combination of a Control Gate, and a Phase Gate.
But what is the order of operations on the Controlled Phase Gate? CNOT then Phase Change? Or Phase Change then CNOT?
| You seem to have confused the CNOT gate with the controlled phase gate. There's no CNOT gate involved in the implementation of a C-phase-gate (let's denote it CPG for short), as all we do with it is multiply by a phase factor. For example on two qubits it is defined as:
$$
U_{CPG}(\phi) |xy\rangle = \exp(i(x\land y)\phi)|xy\rangle
$$
Where $\land$ is the logical AND operation. Thus the multiplication by the phase factor $e^{i\phi}$ is only applied if the 2 input qubits are $|xy\rangle = |11\rangle.$ All 3 other possible inputs ($00$, $01$, $10$) remain unaffected. Its matrix form using $|00\rangle,$ $|01\rangle,$ $|10\rangle$ and $|11\rangle$ vectors as an orthonormal basis in $\mathbb{C}^4$ would be:
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & e^{i\phi}
\end{pmatrix}
Now in the Quantum Fourier Transform (QFT) setup, single qubit controlled phase gates are used. For example for the first qubit, after the Hadamard, we start by applying the first CPG gate, usually named $R_2=U_{CPG}(2\pi/2^2)$:
\begin{pmatrix}
1 & 0 \\
0 & e^{i2\pi/2^2}
\end{pmatrix}
Then we continue applying the remaining CPG's, i.e. $R_3,...$ all the way up to $R_k$ (for a $k$-qubit quantum computer). Note $R_k$ is given by:
\begin{pmatrix}
1 & 0 \\
0 & e^{i2\pi/2^k}
\end{pmatrix}
You proceed similarly for the remaining input qubits, until the last qubit for which there's only a Hadamard to apply.
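A numerical sketch of the two-qubit case (the circuit ordering below, Hadamard on the first qubit, then $R_2$, then Hadamard on the second qubit, then a final swap, is the standard textbook convention and an assumption not spelled out above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CR2 = np.diag([1, 1, 1, np.exp(1j * np.pi / 2)])        # controlled phase, phi = 2*pi/2^2
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# The circuit reads left to right; the matrices therefore multiply right to left.
U = SWAP @ np.kron(I, H) @ CR2 @ np.kron(H, I)

# Target: QFT on N = 4 basis states, F[j, k] = exp(2*pi*i*j*k/4) / 2.
j, k = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
F = np.exp(2j * np.pi * j * k / 4) / 2.0

print(np.allclose(U, F))    # True: this gate order reproduces the QFT matrix
```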
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/202273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Intepretation of area under velocity-time graph for a bouncing ball A typical velocity-time graph for a bouncing ball is shown below. I understand that the ball starts from rest at $t=0$, then it accelerates downwards and hits the floor at time $t_1$. Between time $t_1$ and $t_2$, the ball experiences an upward reaction from the floor and its velocity changes. At time $t_2$, the ball leaves the floor and bounces back.
Here is what confuses me. As the area bounded by the velocity-time graph can be interpreted as the distance traveled, I don't really understand what the interpretation of the blue and yellow pieces shaded in the graph below is. For a ball hitting the floor at a speed of about 10 metres per second, with a time of impact of about 0.1 seconds, the calculation shows this distance could be as large as 0.5 metre, way larger than the size of a tennis ball. If the time of impact is even larger, this distance would increase in proportion as well. But obviously during this time, the ball does not go anywhere but hits hard against the floor.
I guess the solution lies in the fact that the ball cannot be treated as a point mass anymore during the time of impact, as the velocities at different positions inside the ball would be all different. But I still have no idea about what these two coloured area represent. Is the distance interpretation wrong? Or should this graph be understood in a different way?
| No, all your reasoning is totally right. The conclusion isn't that the graphs are wrong, it's that the time of impact is less than 0.1 second. In this video, for example, the time of impact is just about 0.01 seconds.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/202385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Gravity and centre of mass Does gravity act entirely as if on the centre of mass? Often I have heard this, but it seems more realistic (even if less practical) if it acts on individual bits of matter, therefore weaker further away, shifting the "centre of gravitational attraction" closer to the attracting object than the centre of mass. Is this what the centre of gravity is?
| You are right: gravity acts on individual bits of mass, and is stronger towards the source of the gravitational field. The center of mass and center of gravity correspond if you assume constant gravitational field (and rigid bodies I would say).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/202583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Free phonon propagator in imaginary time The free phonon propagator in Matsubara space is given by
$$D^0(i\omega_n)=\frac{1}{M}\frac{1}{(i\omega_n)^2-\Omega^2}.$$
I want to derive its representation in imaginary time. I know the result should be
$$D^0(\tau) = -\frac{1}{2M\Omega}\frac{\cosh[\Omega(\beta/2-|\tau|)]}{\sinh(\Omega\beta/2)}.$$
This is what I've got:
$$D^0(\tau)=\frac{1}{\beta}\sum_{i\omega_n}e^{-i\omega_n\tau}D^0(i\omega_n) = \frac{1}{2M\beta\Omega}\sum_{i\omega_n}e^{-i\omega_n\tau}\left[\frac{1}{i\omega_n-\Omega}-\frac{1}{i\omega_n+\Omega}\right]$$
Now, I use the usual tricks involved in Matsubara summation: I interpret the sum to be over the poles of $n_B(z)=1/(e^{\beta z}-1)$ which have residue $1/\beta$, flip the integration contour (getting a negative sign), and carry out the integral by summing over the poles at $z=\pm\Omega$. This gives:
$$=-\frac{1}{2M\Omega}\left[e^{-\Omega\tau}\frac{1}{e^{\beta\Omega}-1} - e^{\Omega\tau}\frac{1}{e^{-\beta\Omega}-1}\right] = -\frac{1}{2M\Omega}\frac{\cosh[\Omega(\beta/2+\tau)]}{\sinh(\Omega\beta/2)}$$
which is correct for $-\beta<\tau<0$. How does the absolute value arise? What am I missing?
| I figured it out myself. The complex function used in the summation is
$$g(z) = e^{-z\tau}\left[\frac{1}{z-\Omega}-\frac{1}{z+\Omega}\right]$$
One can then write
$$\sum_{i\omega_n}g(i\omega_n) = \sum_{i\omega_n}\text{Res}[g(z)n_B(z)]_{z=i\omega_n} = \oint\mathrm dz\ g(z)n_B(z)$$
Then, one flips the contour and evaluates it as a sum over the poles of $g$. This can be done by the use of Jordan's lemma if $g(z)n_B(z)$ decays fast enough. And this is where things went wrong in my question:
$$g(z)n_B(z)\sim \frac{e^{-z\tau}}{e^{\beta z}-1}\sim \begin{cases}
\mathrm{Re}(z)>0: & e^{-(\beta+\tau)z}\to 0\\
\mathrm{Re}(z)<0: & e^{-z\tau}\to\begin{cases}
0, & \tau<0\\
\infty, & \tau>0
\end{cases}
\end{cases}$$
So if $\tau<0$, the integrand is exponentially small away from the origin of the complex plane and that's why the result matched in that case. However, for $\tau>0$, things fall apart. Then, instead of using the Bose function $n_B(z)$, one has to use a modified function,
$$\tilde n_B(z)=\frac{1}{e^{-\beta z}-1}$$
which will ensure that $g(z)\tilde n_B(z)$ decays for positive imaginary times. Indeed, this gives the desired result and the solutions for both $\tau<0$ and $\tau>0$ can be combined into one formula using $|\tau|$, as stated in the question.
I hope this helps someone avoid spending as much time on this triviality as I have.
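As a brute-force numerical check of this result (parameters assumed for illustration), truncating the bosonic Matsubara sum reproduces the closed-form $\cosh/\sinh$ expression:

```python
import numpy as np

M, Omega, beta, tau = 1.0, 1.3, 2.0, 0.7
n = np.arange(-200000, 200001)
wn = 2.0 * np.pi * n / beta                       # bosonic Matsubara frequencies

D_sum = np.sum(np.exp(-1j * wn * tau) / (M * ((1j * wn) ** 2 - Omega**2))) / beta
D_exact = -np.cosh(Omega * (beta / 2 - abs(tau))) / (2 * M * Omega * np.sinh(Omega * beta / 2))

print(D_sum.real, D_exact)      # agree to ~1e-5 or better (the tail of the sum falls off as 1/N)
print(abs(D_sum.imag))          # imaginary part cancels in the symmetric truncation
```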
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/202672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why don't photons split up into multiple lower energy versions of themselves? A photon could spontaneously split up into two or more versions of itself and all the conservation laws I'm aware of would not be violated by this process. (I think.) I've given this some thought, and a system consisting of multiple lower energy photons would have a significantly higher number of micro-states (and consequentially higher entropy) than one consisting of a single photon with that much energy. This would make the process more favorable.
Why does this not happen?
| It is certainly thermodynamically possible for a high energy photon to vanish
and a multiplicity of lower energy photons to be created. This is observable as a cascade of events (photoelectric absorption of a photon, followed by multiple
fluorescence photons) in thermalization of a high energy photon interacting
with matter. It isn't a simple photon-in, two-photons-out reaction, because
that doesn't balance as a particle reaction (can't conserve energy and
momentum and angular momentum).
The usual cascade that thermalizes
energy, in matter, from an X-ray photon, might generate some other photons, but mainly
generates phonons or unstable atomic states (excited electrons). There may be photons MUCH later; thermoluminescent devices accumulate X-ray exposure for weeks, to create IR photons when the radiation badge is in the reader.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/203067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 7,
"answer_id": 6
} |
Why can a black hole evaporate if objects need infinite time to reach the event horizon (as seen by a distant observer)? I am new to black holes, but I have a question about them:
*
*An object needs infinite time to reach the event horizon (as seen by a distant observer).
*A particle-antiparticle pair separates, and one of them drops into the black hole, causing the black hole to evaporate.
Then why does this not imply that black holes need infinite time to evaporate? Is there a contradiction? If not, what is the misconception here?
| *
*Firstly, particles can reach the event horizon in finite time in the frame of an observer infinitely far away (this is the frame of reference used for describing black hole radiation).
*But the above phenomenon of a particle reaching the event horizon in finite time has nothing to do with black hole radiation. Hawking radiation is a quantum effect. The pair of particles you are referring to is a particle-antiparticle pair created from the vacuum 'near the horizon', so there is no need for an extra supply of particles from the outside world!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/203483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Can an egg be forced into a bottle by lowering the pressure inside the bottle using cold air? I just asked a question about why ice-cold water inside a thermos results in what feels like suction on the cap. The answer stated that the cold water cools the air inside the thermos, thus slowing it down, thereby decreasing the pressure on the inside of the cap relative to the high atmospheric pressure pushing on it from the outside.
But what about the egg and bottle demonstration in which a hard-boiled egg is forced into a bottle with atmospheric pressure by lowering the pressure inside the bottle? In that demonstration, the air pressure is lowered by heating the air inside the bottle. Could the same effect be achieved by somehow rapidly cooling the air inside the bottle? Or if not a bottle, then a thermos?
|
the air [pressure] is lowered by heating the air inside the bottle.
I think you need to watch the video again. The air pressure is raised by heating the air, not lowered. This increase in pressure forces air out of the bottle, which is visible by the egg vibrating on the rim. The egg acts like a one-way valve. When the pressure inside is greater, it pushes the egg up, but only slightly. Once a gap appears, the air can rush by.
After the flame goes out, the air inside cools, lowering the pressure just like in the thermos. This time the egg cannot lift slightly to equalize the pressure and is instead pushed into the bottle by the unbalanced forces.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/203669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Reason for strange magnetic ball movement I was playing with a set of those neodymium magnet spheres and noticed a couple of strange behaviors (which I believe are related so I'm only positing a single question) and I was hoping to get an explanation for.
When I roll a single magnet in a straight line on a wooden table, it doesn't travel in a straight line, but 'wiggles', and changes velocity, on both planes, fairly significantly.
When I roll two magnets into each other (if I get the angle right), they join, and spin quite rapidly and for much longer than expected. Video ~15s in
I'd suspect the earth's magnetic field is applying this force / supplying the energy; can someone confirm this and include an explanation?
| I've randomly found a possible explanation of what is occurring for one of the effects I noticed.
For the case of the rapidly spinning magnet pair, it appears this is 'powered' by gravity via precession of the magnets. I can't find anything else to confirm or refute this though. Anyone care to weigh in?
http://amasci.com/amateur/beads.html - 'Mysterious Energy Source' heading
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/203886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Galvanic Cells and Electric Potential In a battery or a galvanic cell, is the electric potential of the battery due to a difference of charges between the two terminals, as in a capacitor? Is it the electric field due to this separation that drives the electrons? If yes, why do we call it the electromotive force (EMF) of the battery?
| Galvanic cells are driven by a chemical reaction known as a Redox reaction.
Schematically speaking, the cell contains an oxidiser $O$ and a reducing agent $R$, separated by a conductive membrane.
When the reducing agent reacts it loses electrons:
$R \to R' + n e^-$ (where $R'$ is the oxidised form of $R$)
When the oxidiser reacts it absorbs these electrons:
$O + n e^- \to O'$ (where $O'$ is the reduced form of $O$)
It's these electrons that cause the potential to arise at the two electrodes and the cell to be able to provide current (a flow of electrons).
The overall Redox reaction is:
$O + R \to O' + R'$
A battery is usually (but not always) a number of the same cells connected in series to obtain the desired output voltage. The reactions take place only when the circuit is closed, so that the electrons can flow from the anode to the cathode through the external circuit.
A typical system is the manganese dioxide ($MnO_2$), zinc ($Zn$) battery in which the oxidiser $MnO_2$ oxidises the $Zn$ metal. These batteries run out when either the oxidiser or reducing agent has been fully consumed in the redox reactions. In galvanic cells chemical energy is converted to electrical energy (when the cell is in use).
Another example of a galvanic (voltaic) cell is the $Zn/CuSO_4$ cell.
At the anode $Zn$ is oxidised to $Zn^{2+}$ (forming $ZnSO_4$), while at the cathode the $Cu^{2+}$ from the $CuSO_4$ solution is reduced to $Cu$. A permeable membrane allows transport of the sulphate ($SO_4^{2-}$) ions, while keeping the oxidation and reduction reactions separated.
The cell potential can be calculated as shown here.
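As a rough illustration of that last step, here is a minimal sketch for the $Zn/CuSO_4$ (Daniell) cell, using textbook standard reduction potentials and the Nernst equation; the ion concentrations below are just assumed example values:

```python
import numpy as np

E0_Cu = +0.34    # V, Cu^2+ + 2e- -> Cu  (reduction, cathode)
E0_Zn = -0.76    # V, Zn^2+ + 2e- -> Zn  (runs in reverse at the anode)
E0_cell = E0_Cu - E0_Zn                  # 1.10 V under standard conditions

R, T, F, n = 8.314, 298.15, 96485.0, 2
Q = 0.1 / 1.0                            # [Zn^2+]/[Cu^2+], assumed concentrations
E_cell = E0_cell - (R * T / (n * F)) * np.log(Q)
print(E0_cell, round(E_cell, 3))         # 1.10 V standard, ~1.13 V for these concentrations
```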
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/203963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
All possible photon wavelengths As far as I know, all photons come from electrons losing their energy. I remember from physics and chemistry classes that an electron can lose or gain only certain determined amounts of energy. I also remember that every nucleus has a finite number of energy levels. Since a photon is a wave (and I might have read this somewhere), I've always imagined that a photon comes from an electron "wobbling" as it settles into a lower orbit, and that there is no other way for a photon to be born.
These statements lead me to the conclusion that there is a finite number of possible photon wavelengths. Is that so? If not, where am I wrong? Thank you
| Photons can come from a variety of sources, some of which are indeed transitions of electron levels (or nucleus levels or molecular levels), which are indeed discrete (it is also not because of wobbling).
The most common way of having photons of arbitrary wavelengths is via Bremsstrahlung radiation. This is the radiation obtained when a charge is accelerated, which can emit photons of pretty much any energy level.
Another thing to keep in mind is that the wavelength depends on the frame. Even if you use energy transitions, if the bound system considered is moving, the emitted photon will have its wavelength shifted, which can also produce arbitrary wavelengths, depending on the speed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/204010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find constant acceleration with only initial speed and distance Given the problem:
"A car moving initially at 50 mi/h begins decelerating at a constant rate 60 ft short of a stoplight. If the car comes to a stop right at the light, what is the magnitude of its acceleration?"
While this problem seems simple, I can't seem to find the correct formula to use. Most formulas I am finding require the use of time (t) which is not given in the problem statement. What formula(s) do I use to solve this problem? Am I supposed to use distance as the unit of time somehow? Or should I use some sort of derivation to get the number needed?
| There are four kinematic equations that apply to this type of problem. One of those equations can be used to solve for acceleration when you don't know how long it took for the car to stop. The equation is:
$v_f^2 = v_i^2 + 2a\,\Delta x$
The initial velocity is given, the final velocity is zero, and the distance traveled is given. The only unknown, acceleration, is trivially easy to separate algebraically, and leads to an immediate answer for this type of problem.
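A quick evaluation with the numbers from the question (the only extra work is the unit conversion, done here in feet and then converted to SI):

```python
v_i = 50 * 5280 / 3600        # 50 mi/h in ft/s, about 73.3 ft/s
v_f = 0.0                     # the car comes to a stop
dx = 60.0                     # ft
a = (v_f ** 2 - v_i ** 2) / (2 * dx)
print(a, abs(a) * 0.3048)     # about -44.8 ft/s^2, i.e. a magnitude of ~13.7 m/s^2
```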
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/204103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Can I electrify a pin by applying current in its base? I imagine electric circuits as loops. So I wonder if it is possible to electrify a pin without connecting its two ends, instead only applying current at its base. But I want the current to run across its tip. Is that possible? And if yes, how will the current flow exactly, i.e. what will the path of the electrons be?
| If you charge a pin, most of the charge will accumulate in the tip, but you can't have current unless the charge is going somewhere. That's like asking for a waterfall without letting the water move.
You could take two pins, separated by an insulator, with joined tips, then apply voltage across the two bases. Then you'd have current flowing through the tip.
Beyond that, you'll need to explain what you're hoping to accomplish with this pin before people are likely to give you a better answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/205208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Hermitian 2x2 matrix in terms of pauli matrices In my studies, I found the following question: Show that any 2x2 hermitian matrix can be written as
$$
M = \frac{1}{2}(a\mathbb{1}+\vec{p}\cdot \vec{\sigma})
$$
with $a=Tr(M)$, $p_i = Tr(M\sigma_i)$ and $\sigma = \sigma_x \hat{i}+\sigma_y \hat{j}+\sigma_z \hat{k}$.
I did show that this equation works, but I want to know how to prove it just working with the fact that the Pauli matrices span a basis in 2x2 Hilbert space and that M is hermitian.
| First, check that the 2x2 hermitian matrices form a (finite dimensional) real vector space.
Convince yourself that the set $\{1,\sigma_i\}$ is linearly independent.
You may now either directly expand a generic hermitian matrix in terms of $\{1,\sigma_i\}$, or note that the dimension of the aforementioned space is four, thereby proving that $\{1,\sigma_i\}$ is indeed a basis.
Finally you want to check that $\{1,\sigma_i\}$ is an orthonormal basis with respect to the Hilbert-Schmidt inner product
$$ \frac{1}{2}\mathrm{Tr}\,\sigma_i \sigma_j $$ where we set $\sigma_0 \equiv 1$. This gives you the expression for the coefficients.
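A minimal numerical illustration of the resulting decomposition (not a substitute for the argument above; it just builds a random Hermitian matrix and reconstructs it from $a=\mathrm{Tr}(M)$ and $p_i=\mathrm{Tr}(M\sigma_i)$):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = A + A.conj().T                        # a generic 2x2 Hermitian matrix

a = np.trace(M).real
p = [np.trace(M @ s).real for s in sig]   # real because M and sigma_i are Hermitian
M_rec = 0.5 * (a * np.eye(2) + sum(pi * s for pi, s in zip(p, sig)))
print(np.allclose(M, M_rec))              # True
```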
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/205524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Physical interpretation of Convolution Let us say that we are interested in finding the voltage (potential difference) $y$ across the resistor.
The circuit consists of a battery, a resistor and an inductor. The problem can be solved by following the Laplace transform "recipe".
$$
-U + L\left(s\left(\frac{Y}{R}\right) -i(0)\right) + Y = 0 \implies \\
\implies Y = \frac{U + Li(0)}{1+ \frac{Ls}{R}} \implies \\
\implies y(t) = \frac{R}{L}\int_0^te^{-\frac{R}{L}\tau}u(t-\tau)\,d\tau + Ri(0)e^{-\frac{R}{L}t}
$$
Having obtained this answer, I am not sure how to interpret the convolution. I don't mean that I can't evaluate the integral, but rather that I don't understand why it takes the form it does. Is it adding up contributions from earlier EMFs from the battery?
| To add to the mathematical explanations given in the other answers, physically speaking, an inductor is a device that resists changes in current, but which gradually allows the current to change as time goes by. That is, if a sudden change is applied, the current through the inductor will gradually move towards the new value. If the potential across the inductor is increased, the current through it will increase slowly, and if the potential is reduced, the current will decrease slowly. Therefore, as time goes by, the influence of past changes diminishes (hence the exponential term multiplying the time-shifted external forcing). If additional changes happen at successive moments, they are added together (as the inductor is a linear device and obeys the principle of superposition). Therefore you are right: the convolution adds up contributions from earlier voltage forcing (here your battery) to determine the present output (which gives you the potential across the resistor), each weighted by a time-dependent exponential reduction.
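A minimal numerical check of this picture, for the special case of a step input $u(t)=U_0$ and assumed example values of $R$, $L$, $U_0$ and $i(0)$ (the convolution form should then reproduce the familiar exponential curve $y(t)=U_0(1-e^{-Rt/L})+Ri(0)e^{-Rt/L}$):

```python
import numpy as np

R, L, U0, i0 = 10.0, 0.5, 5.0, 0.2                # assumed example values
t = np.linspace(0.0, 0.5, 201)

def y_convolution(ti):
    tau = np.linspace(0.0, ti, 4001)
    f = np.exp(-(R / L) * tau) * U0               # u(t - tau) = U0 for a step input
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))   # trapezoid rule
    return (R / L) * integral + R * i0 * np.exp(-(R / L) * ti)

y_conv = np.array([y_convolution(ti) for ti in t])
y_exact = U0 * (1 - np.exp(-(R / L) * t)) + R * i0 * np.exp(-(R / L) * t)
print(np.max(np.abs(y_conv - y_exact)))           # tiny: the two forms agree
```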
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/205600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Intuition for spin 1/2 and 1 propagators The propagator for a spin 0 particle is (in momentum space, dropping $i\epsilon$ and other factors)
$$\frac{1}{p^2-m^2}$$
which has the intuition "the particle likes to be on-shell". But the propagators for spin 1/2 and 1 are more complicated; they are
$$\frac{\gamma^\mu p_\mu + m}{p^2 - m^2}$$
for spin 1/2 and
$$\frac{\eta_{\mu \nu} - p_\mu p_\nu / p^2}{p^2-m^2}$$
Is there an intuitive explanation for what the extra terms in the numerators do? I've been given no explanation besides "this is what falls out of the theory".
| To get a better intuition, consider a field with a general spin; this can be written as
\begin{align*}
\psi_\ell &\propto \sum_\sigma \int d^3 p \left( u_\ell (\vec{p},\sigma) e^{i p\cdot x}a(\vec{p},\sigma) + v_\ell (\vec{p},\sigma) e^{-i p\cdot x}a^\dagger (\vec{p},\sigma) \right)
\end{align*}
where $\ell$ is the spin index, and $\sigma$ is summed over all spin states.
then the propagator will have
\begin{align*}
\sum_\sigma u_\ell (\vec{p}, \sigma) u_m^*(\vec{p},\sigma)
\end{align*}
in its numerator (or equivalently $\sum vv^*$). So we are summing over $\textbf{polarizations}$.
So now remembering that the propagator in a crude sense measures the correlation between disturbances of the field at different positions, you can understand that the numerator accounts for the contribution of different modes of polarization in propagating the disturbance, where you have to sum over all of them to get the total correlation. For a spin-less particle there is only one mode...
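A small numerical illustration of such a polarization sum, for the massive spin-1 case (assuming the mostly-minus metric $\eta=\mathrm{diag}(1,-1,-1,-1)$ and momentum along $z$; on shell $p^2=m^2$, so up to an overall sign convention this is the numerator quoted in the question):

```python
import numpy as np

m, pz = 1.0, 0.7
E = np.sqrt(m ** 2 + pz ** 2)
p = np.array([E, 0.0, 0.0, pz])
eta = np.diag([1.0, -1.0, -1.0, -1.0])

eps = [np.array([0.0, 1.0, 0.0, 0.0]),         # transverse
       np.array([0.0, 0.0, 1.0, 0.0]),         # transverse
       np.array([pz / m, 0.0, 0.0, E / m])]    # longitudinal

pol_sum = sum(np.outer(e, e) for e in eps)     # sum over the three polarizations
target = -eta + np.outer(p, p) / m ** 2
print(np.allclose(pol_sum, target))            # True
```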
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/205767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does venus have plasma in its atmosphere? Is it hot enough on Venus for thermal collisions to ionize molecules? If not, what temperature would it have to be?
| See e.g. https://en.wikipedia.org/wiki/Saha_ionization_equation. Typical ionization energies for gases in planetary atmospheres are around 14 eV. The temperature on the surface of Venus is around 750 K, which corresponds to an energy of about 0.065 eV. The exponential factor in the Saha equation will therefore essentially completely suppress the existence of ionized atoms at that temperature.
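The size of that suppression is easy to see directly (a minimal sketch, using just the Boltzmann factor that dominates the Saha equation):

```python
import numpy as np

kB = 8.617e-5              # Boltzmann constant in eV/K
T, E_ion = 750.0, 14.0     # Venus surface temperature (K), typical ionization energy (eV)
print(kB * T)                          # ~0.065 eV
print(np.exp(-E_ion / (kB * T)))       # ~1e-94: thermal ionization is utterly negligible
```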
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/205946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Motion of center of mass I was reading about COM and forces and came upon this in my book.
If a projectile explodes in air into fragments following different paths, the path of the centre of mass remains unchanged. This is because during the explosion no external force (except gravity) acts on the COM.
My question is: even though the author realises that gravity is acting on the particle, he goes on to conclude that the path of the COM remains unchanged.
But I learned that the path will change whenever there is an unbalanced external force. Here gravity acts, so why has the author neglected its effect? (Or am I mistaken somewhere?)
| Because gravity was acting on the projectile before it exploded, it was already taken into account. It wasn't turned on at the time of the explosion.
The phrase "the path remains unchanged" is referring to the gravity-induced parabola that the object was on prior to the explosion, not to a straight line that it would have in the absence of gravity.
So the velocity will change due to the force of gravity, but the "path" will not in this case since it assumes the force to be there.
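A minimal numerical sketch of this (two equal-mass fragments given equal and opposite internal kicks at the moment of the explosion; all numbers are assumed example values): the centre of mass of the fragments stays exactly on the original gravity-induced parabola.

```python
import numpy as np

g = 9.81
v0 = np.array([30.0, 40.0])                    # assumed launch velocity (m/s)
down = np.array([0.0, -g])

def r_orig(t):                                 # trajectory if no explosion happened
    return v0 * t + 0.5 * down * t ** 2

t_e = 2.0                                      # explosion time
r_e, v_e = r_orig(t_e), v0 + down * t_e        # position and velocity just before it
dv = np.array([15.0, 7.0])                     # internal kick (equal and opposite)

def frag(t, sign):                             # equal-mass fragment after t_e
    dt = t - t_e
    return r_e + (v_e + sign * dv) * dt + 0.5 * down * dt ** 2

t = 3.5
com = 0.5 * (frag(t, +1) + frag(t, -1))
print(np.allclose(com, r_orig(t)))             # True: the COM stays on the parabola
```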
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is the K shell electron preferred in the photoelectric effect? I have read in many books, and on the Internet as well, that the photoelectric effect is only possible when an electron is emitted from the K shell of the metal. Why not other bound electrons?
| As others have pointed out, the premise is false. However, there is still an element of truth to it, which is pretty easy to explain. It is true that when a photon has enough energy to ionize either a tightly bound electron or a weakly bound electron, it has a much higher probability of doing the former. This higher probability is shown by the fact that the K-shell edges are huge, constituting order-of-magnitude increases in the cross-section.
The reason for this is that you can estimate the cross-section using first-order perturbation theory, and it involves $|\langle i|eEz|f \rangle|^2$, where i is the electron's initial state (bound in an atom), and f is its final state (ionized). As discussed in more detail in the answers to this question, the transition matrix element tends to be small because the wavelength of a gamma or high-energy x-ray tends to be too small to be well matched with the spatial scale of the electron wavefunctions. You can say it either way around: the cross-section goes up with photon energy for a fixed electron orbital, or it goes down with electron energy for a fixed gamma energy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
How to calculate the electric field due to a thin arc? I have been asked to calculate the electric field at the centre of a thin arc whose linear charge density varies with angle as $\lambda(\theta)=\lambda_0 \cos\theta$.
How I approached it: The angle subtended by the ends of the arc at the centre is $\theta$. I considered a very thin segment at an angle $\alpha$ with the vertical, subtending a small angle $d\alpha$ such that $d\alpha = \Delta \theta$. The sine components of the electric field due to all the points sum to $0$. All we are left with are the cosine components of the electric field. Thus I got an equation to integrate involving both $\cos\alpha$ and $\cos\theta$.
My problem: I am not sure whether the above approach to calculating the field is correct, and if it is correct, how should I proceed and what limits should I use? $-\dfrac{\theta}{2} \to +\dfrac{\theta}{2}$?
| It's a good idea to approach this as you did, and you're certainly correct to choose a thin segment at an angle $\alpha$. This small segment subtends an angle $d\theta$. Remember that $\alpha$ is some arbitrary value of the variable $\theta$.
The charge carried by this segment is then $dq=\lambda\, ds=\lambda R\, d\theta=(\lambda_0\cos\theta)R\, d\theta$, where $R\,d\theta$ is the arc length of the segment. This segment will produce an electric field
$d\vec{E}=\dfrac{k\cdot dq}{r^2}\hat{r}\quad\quad\quad$(you'll also see $d\vec{E}=\dfrac{k\cdot dq}{r^3}\vec{r}$, but these are the same),
where $k=\dfrac{1}{4\pi\epsilon_0}$. As you've said, symmetry arguments guarantee that electric fields perpendicular to the arc's central axis cancel ($\vec{E}_y=0$).
The $x$-components, on the other hand, add. The $x$-component of the electric field from your single element is, like you said,
$d\vec{E}_x=d\vec{E}\cdot\cos\alpha=d\vec{E}\cdot\cos\theta$
Again, remember that your $\alpha$ is just some particular value of the variable $\theta$. This leaves your $x$-component as
$dE_x=\dfrac{k\lambda R \cdot d\theta\cos\theta}{r^2}\\
=\dfrac{k(\lambda_0\cos\theta)\cos\theta\, R\cdot d\theta}{R^2}\quad\quad\text{($r=R$ is constant)}\\
=\dfrac{k\lambda_0}{R}\cos^2\theta\cdot d\theta.$
And yes, your integration limits are spot on.
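A quick numerical check of the superposition (a minimal sketch with assumed example values: it sums the vector contributions of the charge elements directly and compares the axial component with $\frac{k\lambda_0}{R}\left(\frac{\theta}{2}+\frac{\sin\theta}{2}\right)$, the result of carrying out the integral above):

```python
import numpy as np

k, lam0, R, th = 8.9875e9, 1e-9, 0.05, 2 * np.pi / 3   # assumed example values
a = np.linspace(-th / 2, th / 2, 200001)
da = a[1] - a[0]

dq = lam0 * np.cos(a) * R * da                   # charge of each arc element
pos = np.stack([R * np.sin(a), R * np.cos(a)])   # element positions, centre at origin
E = np.sum(k * dq * (-pos) / R ** 3, axis=1)     # Coulomb field of each dq at the centre

E_axial_exact = (k * lam0 / R) * (th / 2 + np.sin(th) / 2)
print(E[0])                       # ~0: the perpendicular components cancel by symmetry
print(-E[1], E_axial_exact)       # both ~266 N/C for these values
```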
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does a gun exert enough gravity on the bullet it fired to stop it? My question is set in the following situation:
*
*You have a completely empty universe without boundaries.
*In this universe is a single gun which holds one bullet.
*The gun fires the bullet and the recoil sends both flying in opposite directions.
For simplicity I'll take the inertial frame of reference of the gun. The gun fired the bullet from its center of mass so it does not rotate. We now have a bullet speeding away from the gun. There is no friction. The only thing in this universe to exert gravity is the gun and the bullet.
Would, given a large enough amount of time, the bullet fall back to the gun? Or is there a limit to the distance gravity can reach?
| For a somewhat extreme answer: How massive should the gun be to have an escape velocity larger than the bullet speed? I am assuming we're using a 357 Magnum fired from a Desert Eagle, which is actually on the low to mid end of the muzzle velocity scale:
source: http://wredlich.com/ny/2013/01/projectiles-muzzle-energy-stopping-power/
A Desert Eagle has a 15 cm barrel. Using the formula provided in other answers:
$$v_\mathrm e=\sqrt{\frac{2GM}{r}}$$
Fill in the numbers:
$$v_\mathrm e=\sqrt{\frac{2\times G\times M}{0.15\ \mathrm m}}$$
$$(410\ \mathrm{m/s})^2=\frac{2\times G\times M}{0.15\ \mathrm m}$$
$$1.68\times10^5\ \mathrm{m^2\ s^{-2}}=13\ \mathrm{m^{-1}}\times G\times M$$
$$M=1.9\times10^{14}\ \mathrm{kg}$$
Note: I am not sure how accurate this number is. I entered these variables into 2 online calculators. One of them came up with this answer (http://calculator.tutorvista.com/escape-velocity-calculator.html); the other one came up with a number with the same digits but many orders of magnitude smaller: $1889.4434\ \mathrm{kg}$ (https://www.easycalculation.com/physics/classical-physics/escape-velocity.php). I am not sure why these 2 numbers are so different.
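For what it is worth, evaluating the rearranged formula directly (a minimal sketch) supports the first calculator, $M=v_\mathrm e^2 r/(2G)\approx1.9\times10^{14}\ \mathrm{kg}$; the much smaller number likely came from a units mix-up.

```python
import numpy as np

G = 6.674e-11            # m^3 kg^-1 s^-2
v_e, r = 410.0, 0.15     # m/s, m
M = v_e ** 2 * r / (2 * G)
print(M)                            # ~1.9e14 kg
print(np.sqrt(2 * G * M / r))       # 410.0 m/s, consistency check
```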
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "93",
"answer_count": 5,
"answer_id": 3
} |
Negative Electric Capacitance I know electric capacitance is always positive (otherwise it doesn't make any physical sense).
But the capacitance is related to the potential difference, which can be positive or negative, depending on which terminal I choose first. Also, it depends on the charge, which can also have a negative sign.
So my question is, can I just take the positive value of whatever I get and not worry about the signs while doing the calculation?
Thanks!
| Wikipedia defines capacitance of parallel plates as
$$C=\frac{Q}{V},$$
where $\pm Q$ is the charge on the plates (one sign for each plate) and $V$ is the voltage between them.
In other words, yes, if you calculate a capacitance to be negative from, say, $Q<0$, then you can just take the magnitude and call the capacitance positive.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is a flexible space tower using coaxial superconducting rings possible? Suppose we stack coaxially, vertically a large number of rings, made of some high temperature superconducting material, and start a current through each of the rings in alternate directions (e.g. the 1st ring CW, 2nd ring CCW, 3rd ring CW, etc). Each ring will repel the rings adjacent to it, but since the repulsion is inversely proportional to the distance between rings to the 4th power, the effect on non-adjacent rings should be very minor. Using some simple mechanical restraining to keep the rings from sliding away horizontally from co-axiality, we should be able to see the condensed stack starting to rise up and each ring hovering above the one below it. Then, once such a tower is erected high enough, there could be various ways to attach "climbers" so as to use it as a space elevator.
The lower rings may require some cooling but above a certain altitude the low ambient temperature should make various materials superconducting without any additional cooling. Alternatively, extremely strong and lightweight conducting materials such as carbon nanotubes could be used so that the reduction in weight requires less current to hold up the tower.
What are the flaws in the above reasoning? (I presume there are many...)
| The problem is that the system of one ring balanced on another is an unstable equilibrium. The rings will slide sideways and fall off. You've spotted this and you state in your question:
Using some simple mechanical restraining to keep the rings from sliding away horizontally from co-axiality
But your mechanical restraining would be far from simple. You'd need some form of tower either running through the centres of the rings or running around their perimeters, and this tower would have to be rigid even when it was so long it reached out of the atmosphere. Making a long structure resist bending stresses is generally far harder than making it resist compressive stresses, so if you could build a tower like this it would probably stand up on its own and you wouldn't need the rings.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Perturbations on a fluid thread and hydrodynamic instability I know that, due to fluid instability, some perturbations appear (picture below).
My question is about the reason for the existence of these perturbations.
Why exactly do they appear? Shear stress or capillary pressure?
If the parameters I mentioned are in play, please explain the effect of each of them.
| The surface wave formed in a Rayleigh-Taylor instability is caused mainly by surface tension. Like I mentioned before, a liquid tends to minimize its surface area, and a liquid column of volume $V$ has more surface area than the $n$ suitably sized droplets of volume $V/n$ it breaks up into.
Initially, the film is uniform and surface tension will minimize the area by starting to form waves. The surface tension is related to the capillary pressure.
Old (slightly irrelevant) answer:
When I wrote this answer I was considering a different type of perturbation than the ones you meant.
Perturbations are part of any real system, caused by asymmetries in the system, changes in air pressure, someone walking by, a train speeding by, etc.
A perfectly undisturbed symmetric system as you describe in your first picture is very difficult to obtain experimentally. Such a system is therefore a purely theoretical situation.
An example of the influence of perturbations is in determining the transition from the laminar to the turbulent regime in a channel flow. Generally, we say this occurs at $\mathrm{Re}\approx2000$ but notice the approximation sign; due to perturbations caused by external factors, the transition may occur one day at $\mathrm{Re}\approx1900$ and another day at $\mathrm{Re}\approx2100$ for the same experiment.
Note that in the case of the Rayleigh-Taylor instability there is an assumed perturbation of the form:
$$e=e_0+\delta\left(t\right)\cos\left(\kappa x\right)$$
This means that the theoretical treatment assumes the perturbations are already present, i.e. the perturbations do not grow spontaneously from an initially undisturbed uniform film. Rectification: even in a perfectly symmetric system the perturbation will occur because of surface tension forces, albeit much more slowly than in the case where the system is perturbed by some external factor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/206965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Would a pipe from the surface to the Earth's exosphere suck all the atmosphere into space? If I built a tube from Earth's surface to the exosphere, would all the air be sucked out to space?
If this pipe reached a big planet, like Jupiter, would its gravity suck our atmosphere through the pipe?
If one end of the pipe were at the Earth's core, and the other in the exosphere, would the magma go up it, like a giant volcano?
| *
*No, it would not be sucked off, for the same reason that the earth has an atmosphere to begin with: gravity.
*No, for the same reason that Jupiter doesn't have a noticeable pull on you: the strength of gravity decreases with the inverse square of the distance.
*No. Gravity is too strong.
Your misconception seems to be coming from the idea of a vacuum and a straw. The vacuum itself is not what causes the sucking. It is the atmospheric pressure that causes sucking.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How force is transferred from one body to another Suppose there are 3 coins, namely 1, 2 and 3, as in the figure.
When coin $1$ strikes coin $2$, coin $2$ passes the force on to coin $3$ and coin $3$ moves away.
Case :1
How does this happen?
What exactly happens there to pass the force from coin $1$ to coin $3$?
How does the force cause movement?
I mean, when we push any object, why does it move?
Case 2:
What will happen if there are just 2 coins and coin $1$ strikes coin $2$ and coin $2$ moves?
How and why does coin 2 move in this case?
Please don't say that there is no opposite force or net force is not equal to 0.
| Here is a very simplified picture:
Let's say you have a slow-motion camera, and you can see in detail what happens in the fraction of a second it takes for this process to occur. What you would see is that when 1 and 2 start touching, they start deforming a bit.
This deformation is kinetic energy of 1 transforming into potential energy of 1 and 2... and just a bit later into potential energy of 3 as well, since 2 is compressed and exerts compression on 3 in turn. Instants later all this compression is transformed back into the force that puts 3 into motion.
Now why did 2 stay in its position? We are assuming that at least 1 and 3 have equal masses; if this were not the case, the result would be different. We are also assuming the knock was completely elastic. This means that all the energy stayed kinetic: no coin was left with internal energy, that is, none stayed deformed, not even vibrating.
Still, how do we explain that 2 stayed? Since 1 and 3 are identical, the deformation of 1 when hitting 2 is transmitted almost instantaneously to 3 through 2. This happens only when 2 is very small or very stiff (so that compression on one side travels so fast that no portion of 2 can move appreciably with respect to the other; this is close to an ideal solid).
If this were not true, then 2 would deform, the transmission of movement to 3 would take a longer time, and 2 would move some distance before completely giving its internal energy to the kinetic energy of 3.
So basically, the system behaves much as if 1 had hit 3 directly, and 2 remains a spectator.
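For completeness, a minimal sketch of the standard 1D elastic-collision formulas that this idealized picture corresponds to; equal masses simply exchange velocities, which is why the struck coin flies off with the incoming coin's speed.

```python
def elastic_1d(m1, v1, m2, v2):
    """Final velocities for a 1D perfectly elastic collision."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

print(elastic_1d(1.0, 2.0, 1.0, 0.0))   # (0.0, 2.0): equal masses swap velocities
print(elastic_1d(1.0, 2.0, 3.0, 0.0))   # (-1.0, 1.0): a lighter coin bounces back
```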
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Near energy In the null of a Hertzian dipole Since $\mathbf E = -∇Φ - ∂\mathbf A/∂t$ one expects an oscillating $\mathbf E$ field even in the null of a Hertzian Dipole unless the two right hand side terms cancel -- which they do in the far field of the null.
However, in the near field of the null, the terms do not completely cancel, leaving a residual oscillating E-field.
Since the null has, by definition, no $∇ × \mathbf A$ curl in the oscillating $\mathbf A$, there is no $\mathbf B$ thence no $\mathbf H$ field and therefore no $\mathbf E × \mathbf H$ and since $\mathbf E × \mathbf H$ is the only accepted definition for the dipole's Poynting vector, there is no accepted way for energy to be locally available at points along the dipole's null.
If one places a particle of charge $q$ and mass $m$ along the null, it must experience a force, $\mathbf F=q\mathbf E$ and thence acceleration $\mathbf F=m\mathbf a$.
Where does this energy come from, and how is it delivered without violating locality?
| The example by CuriousOne is spot on - we know a charged particle will accelerate in a static external electric field, so in general the Poynting vector based on the external field does not need to be non-zero when a charged particle gains kinetic energy.
The apparent problem with local energy conservation is caused by using the wrong expression for the energy flux density - you seem to assume it is a function of external fields only. The exact form of the expression depends on whether the charged particles are points or extended bodies, but in both cases it is not a function of external fields only.
For example, if the charged particles are extended (charge and current density are finite), the Poynting theorem is valid. The local version of this theorem states that
$$
\mathbf j\cdot\mathbf E = -\frac{\partial}{\partial t}\left(\frac{1}{2}\epsilon_0 E^2 + \frac{1}{2\mu_0} B^2\right) - \nabla \cdot (\mathbf E\times\mathbf B/\mu_0)
$$
where $\mathbf j$ is the electric current density and $\mathbf E$, $\mathbf B$ are the total EM fields. One cannot use only the external electric and magnetic fields and expect this equation to remain valid.
When a charged particle starts accelerating in an electrostatic external field, the external magnetic field is indeed zero, but the total magnetic field is not. This is because the accelerating particle itself has a non-zero magnetic field in its vicinity.
For example, if the fields of the particle are taken to be retarded, the region of space where the magnetic field is non-zero will be a sphere centered at the position of the particle when it started accelerating, and its boundary will expand at the speed of light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Why can scalars have a sign? I wondered to myself why some scalars have a sign, if they do not have a direction. After all, the plus and minus indicate the direction of the scalar on a one-dimensional axis.
So, for example, why can temperature have a sign? Why can't mass?
| The modern notions that separate "scalars" and "vectors" goes as follows:
*
*Scalars are elements of fields. Examples of fields include the rational numbers, the real numbers, and the complex numbers. Scalars can be added and multiplied and divided.
*Vectors are elements of vector spaces over fields. They are basically just lists of elements of fields (you can get fancier than that, but let's not). Velocity vectors, for example, are triples of real numbers. Vectors can be added and subtracted, but not multiplied or divided.
That's it! Here are some examples.
*
*Real scalars that can have a minus sign include the coulomb and gravitational potentials, as well as any other potential (like $\mathrm{Pe}=mgh$, $h$ can be negative).
*Complex numbers are scalars with no notion of positive or negative (you cannot say $i<0$ or $i>0$). They do have a notion of "direction", but in quantum mechanics for example, the "direction" of a complex number is meaningless (we say "the wavefunction is unique up to a phase"), so you really do have scalars with no possible meaning of "direction", but which still have $+1$ and $-1$ as scalars.
*Temperature in kelvin, and mass, are both [real] scalars almost always positive. But it's incorrect to say "that's the case because mass and temperature are scalars". There are other reasons that's the case.
Middle school teachers might tell you that "$-3$ cannot be a scalar, because it has a direction", but my example with the gravitational potential is a good counter-argument, and my "wavefunction" example seals the deal. If your teacher insists "$-3$ cannot be a scalar", memorize their sentence, remember it on their test, and forget it immediately afterwards :)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 3
} |
Why didn't accelerator mass spectrometry greatly improve the accuracy of carbon dating? My understanding of the limitation of radiometric dating is that background radiation swamps the radiation from C14 once the remaining atoms get few enough in number. Accelerator mass spectrometry seems to actually count every atom in the sample, meaning background radiation doesn't matter. Yet the advantage of AMS dating stated here is "can use smaller sample size", not "can give dates much farther into the past using the same sample size". Are the machines too expensive to build one that can test larger samples?
| After accepting an answer, I found an answer with a more experimental focus at talkorigins.org.
Why would the instrument reading be noisy if it counts every atom?
The second contribution, laboratory contamination, is largely due to sample chemistry (pretreatment, hydrolysis or combustion to CO2, and reduction to graphite), which generally introduces a small amount of modern carbon, typically at least 1 microgram [8, 12, 13, 14]. Thus a 1 mg sample of infinitely old carbon would measure at least 0.1 pMC (percent modern carbon) before background subtraction...
The third contribution, instrument background, has a number of sources. The main sources are generally the following:
*
*ion source “memory” of previous samples, due to radiocarbon sticking to the walls of the ion source, thermally desorbing, and then sticking to another sample
*mass spectrometer background, non-radiocarbon ions that are misidentified as radiocarbon, sometimes through unexpected mechanisms [16]
*detector background, including cosmic rays and electronics noise
Why can't the sample size be bigger?
The maximum allowed sample size is typically about 10 mg of carbon. Larger samples produce excessive CO2 pressure in the sealed tubes used in the process, causing tubes to explode and samples to be lost. Thus, even if larger samples like RATE’s “on the order of 100 mg” [6] are submitted to an AMS laboratory, only about 1 mg of carbon will actually undergo analysis. Though Baumgardner calls a 1 mg sample “tiny” [6], it is generally considered “large” by AMS laboratories [e.g., 5, 7, 8], with enough carbon to provide ion source current for about a day.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Mathematically, where does polarization current come from? Mathematically speaking, where does the polarization current in a material (due to time-varying polarization) come from?
Griffiths introduces the concepts of bound charges and bound currents first as a mathematical trick, and then argues for their physicality. But when it comes to the polarization current $J_p$, he postulates it based on physical arguments.
Is this the only way of arriving at it, or is there some mathematical procedure that brings it out, analogous to the bound charges and currents, which come about by the application of the integral theorems of vector calculus?
| Assuming the polarization is not strong enough to ionize your material, the overall bound charge is neutral. Let's exploit this fact by integrating over the volume and surface of the bound charges,
$$\int_V\rho_b\:d^3r + \int_{\partial V}\sigma_b\:d^2r = 0,$$
which, by the divergence theorem, is satisfied if
$$\rho_b = -\nabla\cdot \vec{P}, \:\:\:\: \sigma_b = \vec{P}\cdot\hat{n}.$$
We also know, on physical grounds, that this bound charge density due to the polarization must satisfy its own continuity equation, so
$$\frac{\partial \rho_b}{\partial t}+\nabla\cdot \vec{J}_p = 0 \\ \\ \frac{\partial}{\partial t}(-\nabla\cdot\vec{P})+\nabla\cdot\vec{J}_p = \nabla\cdot(\vec{J}_p-\frac{\partial\vec{P}}{\partial t}) = 0 \:\:\:\Rightarrow \:\:\: \boxed{\vec{J}_p = \frac{\partial \vec{P}}{\partial t}}$$
Physically, the polarization current comes about from the rate at which the material internally responds to the external electric field. Mathematically, this is found by simultaneously solving for overall bound charge neutrality with continuity imposed. Griffiths deduces $\vec{J}_p$ on physical grounds, and then checks that the continuity equation is satisfied, but you can just as well impose continuity to determine $\vec{J}_p$.
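A minimal symbolic check of that closure (for an arbitrary smooth $\vec P$, the choice $\vec J_p=\partial\vec P/\partial t$ together with $\rho_b=-\nabla\cdot\vec P$ satisfies the continuity equation identically):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
P = sp.Matrix([sp.Function(f'P_{c}')(x, y, z, t) for c in 'xyz'])   # arbitrary smooth P

div = lambda F: sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
rho_b = -div(P)                 # bound charge density
J_p = sp.diff(P, t)             # polarization current

print(sp.simplify(sp.diff(rho_b, t) + div(J_p)))   # 0 for any smooth P
```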
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/207987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to find kinetic energy given relativistic linear momentum? The relativistic energy of a particle is given by the expression
\begin{equation}
E^2 = m^2c^4 + p^2c^2
\end{equation}
The rest energy is $E_{0}=mc^2$ and the momentum is $p=mc$. In the rest frame, the kinetic energy is $T=E-mc^2$.
Ok, now in another frame of reference, we must include the Lorentz factor $\gamma$, where $\gamma=\frac{1}{\sqrt{1-v^2/c^2}}$.
In a different reference frame, momentum is $p=\gamma mc$ and the kinetic energy is $T=(\gamma-1)mc^2$.
Are these expressions correct? If so, I am confused. I have a question which asks me "The relativistic momentum is $p=mc$. What is the kinetic energy?".
Should I conclude this is $T=E-mc^2=mc$? That is, the kinetic energy is also $p=mc$? Or is the correct conclusion that $\gamma=1$ and therefore the kinetic energy is $T=0$?
| The expressions are not true in general. The first one should be $E^2 = m^2c^4 + p^2c^2$, and the momentum is in general $p = \gamma m v$. The rest energy is $E_0 = m c^2$ and it doesn't depend on the frame (by definition), and the kinetic energy is always $T=E-mc^2 = (\gamma - 1) mc^2$.
You are (understandably) confused because the question is not telling you that momentum is $mc$. You are being told that in a specific situation and in a specific frame, it just so happens that the momentum is equal to $mc$. You should be able to find the velocity from this, and then the kinetic energy.
Alright, since you're having trouble let's get our equations straight. First we define $\gamma$, which is a function of velocity $v$, as $1/\sqrt{1-v^2/c^2}$. The momentum $p$ of a particle with mass $m$ moving with velocity $v$ is given by $p = mv/\sqrt{1-v^2/c^2} = \gamma m v$. The expression $\gamma mv$ looks simpler but don't forget that $v$ is hidden inside $\gamma$.
There are two expressions for the energy. Obviously both are true and can be proved to be equal to each other; the only difference is whether you want the energy in terms of $p$ or $v$. So we have $E^2 = p^2c^2 + m^2c^4$ and $E = mc^2/\sqrt{1-v^2/c^2} = \gamma m c^2$. Kinetic energy $T$ is defined as $T = E-mc^2 = (\gamma-1)mc^2$. As always, this depends on $v$ through $\gamma$.
All these equations are true in any frame. The quantities themselves (such as $v$, $p$ or $E$) change when you change frames, but they change in such a way that all the equations remain correct.
Now, you have been told that in some particular frame, a particle is moving with $p=mc$. This will not be true in general, since the formula for $p$ is $mv/\sqrt{1-v^2/c^2}$; it just so happens that in our situation, $v$ is such that $p = mc$. This is an equation you can use to find $v$; knowing $v$, you can use the formula for kinetic energy $T$ (which, don't forget, depends on $v$) and find what you are being asked for.
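A minimal numerical sketch of that procedure (working in units where $m=c=1$: solve $\gamma\beta=1$ for the speed, then evaluate $T$):

```python
import numpy as np
from scipy.optimize import brentq

beta = brentq(lambda b: b / np.sqrt(1 - b ** 2) - 1.0, 0.0, 0.99)   # gamma*beta = 1
gamma = 1.0 / np.sqrt(1 - beta ** 2)
T = gamma - 1.0                       # kinetic energy in units of m c^2
print(beta, gamma, T)                 # 0.7071..., 1.4142..., 0.4142...
```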
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why don't constant motion charges produce waves? I'm a little confused about the origin of electromagnetic waves.
Although I can understand their origin mathematically, I get a little confused about the physical intuition of...
Information transfer is restricted to the speed of light; a local change in a field can only perturb distant regions once a light sphere has reached said regions
This makes perfect sense, but I can't understand then why only accelerating bodies produce EM waves, and not any motion which would affect a local field (such as linear, constant motion).
Of course, I realise constant motion is entirely relative, so if it produced EM waves I'd be suggesting that the experience of light is entirely subjective. But that's not the case, right?
How then does constant motion not produce EM waves, but also disturb a local field and not violate the speed limit on field propagation?
Also, besides appearing in an entirely separate physical model and mathematical framework, does the origin of gravitational waves (and the lack thereof for constant motion mass) follow a similar explanation?
Any and all input is appreciated!
| For any charged particle in uniform motion there is an inertial frame in which that particle is at rest, and vice versa. So if the particle shed energy as EM waves due to uniform motion you would have the odd situation that a motionless particle would also have to shed energy as EM waves. Likewise if a motionless particle doesn't create EM waves then neither can one in a state of uniform motion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
At $T \approx 0 \, \text{K}$, will all energy levels within the electronic band structure be occupied up to a certain level? I saw this in my teacher's lecture notes, and I don't understand what it means:
If we cool down a crystal to an absolute temperature of T ≈ 0K, all atoms of the crystal will exist at their ground states and all energy levels within the electronic band structure
will be occupied up to a certain level
At absolute zero, every electron is at its lowest energy level, so the higher levels are empty; why then are all energy levels within the electronic band structure occupied up to a certain level? I don't understand what this means. The higher levels are empty, so why are all the energy levels (including higher ones) occupied up to a certain level?
| The Pauli Exclusion Principle means that not every electron can be at the very lowest energy level.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding $\phi(k)$ Suppose you have $\Psi(x,0) = c(\psi_1 + \psi_2)$, where $\psi_n$ is the energy eigenfunction for quantum number $n$. I'm supposed to find $\phi(k,t)$ at $t = 0$. This is for an infinite square well from $0$ to $a$.
I'm not exactly sure how to do this. I assume since $V(x)=0$ from $0<x<a$ then $\phi(k,t)$ will not evolve in time, thus $\phi(k,t) = \phi(k)$.
$$\phi(k,0) = \frac{1}{\sqrt{2\pi}}\int_{0}^{a} \Psi(x,0) e^{-ikx} dx.$$
While I believe that I'm on the right track, the $\phi(k)$ I get is not correct, in the sense that when I find $|\phi|^2$ it is still complex, which can't be right. Can someone guide me in the correct direction?
| Firstly, the Schrödinger equation is:
$$H\Psi(x,t)=i\hbar\frac{\partial}{\partial t}\Psi(x,t)$$
so, every wave funtion does evolve in time.
In the case of the infinite square well (and, more generally, whenever the potential $V$ is independent of $t$), we can use separation of variables in the above Schrödinger equation. Each energy eigenfunction then evolves as
$$\psi_n(x,t)=e^{-\frac{iE_nt}{\hbar}}\psi_n(x,0),$$
so for your initial state $\Psi(x,t)=c\left(e^{-\frac{iE_1t}{\hbar}}\psi_1 + e^{-\frac{iE_2t}{\hbar}}\psi_2\right)$. The two phases differ, so $\Psi$ (and hence $\phi$) does evolve non-trivially in time.
Now, the wave function in momentum space is just the Fourier transform of $\Psi(x,t)$, with the integral over the $x$ variable as you presented.
Moreover, $|\phi|^2$ is always a positive real number for all complex $\phi$. So, if you find $|\phi|^2$ is complex after calculating, please re-check.
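A minimal numerical sketch for the infinite square well of the question (assuming $a=1$ and $c=1/\sqrt2$ for normalization): the Fourier transform can be done by simple quadrature, and $|\phi(k)|^2$ comes out real, non-negative and (by Parseval) normalized.

```python
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 4001)
psi = lambda n: np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
Psi0 = (psi(1) + psi(2)) / np.sqrt(2)

dx = x[1] - x[0]
w = np.full_like(x, dx); w[0] = w[-1] = dx / 2          # trapezoid weights
k = np.linspace(-80.0, 80.0, 2001)
phi = np.array([np.sum(w * Psi0 * np.exp(-1j * kk * x)) for kk in k]) / np.sqrt(2 * np.pi)

prob = np.abs(phi) ** 2                                  # automatically real and >= 0
print(prob.min() >= 0, np.sum(prob) * (k[1] - k[0]))     # True, ~1.0
```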
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the meaning of the spin of a particle being $1/2$ or $2$ or something? On which factors do these spin numbers depend? I have read a book. The writer wrote that if the spin of a particle is $\frac{1}{2}$, then we have to rotate it through $720$ degrees. Imagine two balls joined together. We rotate the two balls first through $180$ degrees and reach the joining point of the two balls. Then we rotate through another $180$ degrees and reach the end point of the second ball. In this way we have to rotate it through $720$ degrees. The writer also wrote that if the spin is $0$ then it will be like a sphere. I can't understand all of this. I am fifteen years old, so my questions may seem silly to other members. Please tell me about this.
| The spin of a particle is a number that describes its angular momentum. The earth orbits the sun, making years - that is orbital angular momentum. The earth spins on its own axis, making days - that is rotational angular momentum.
The spin of a particle is analogous to the latter of those two. It is not exactly alike, due to the quantum nature of spin, but it is the 'same idea'. The differences arise partly from the nature of spin being measured within a 2d vector space (which will require additional reading to really make sense).
The other part of your question, 'on what factors does the spin number depend', can be answered by turning the question around: 'what factors depend upon the spin number?', as that is the nature of fermions and bosons - the spin determines whether a particle is a fermion or a boson, and from this the properties of the particle arise. This is a somewhat circular statement, but without going terribly in depth it can just be assumed.
Bosons (which have integer spin) are force carriers, while fermions (which have half-integer spin) are the constituents of ordinary matter. If their spins were reversed, the properties would be very different from what we have observed.
To learn more about why that actually is, reading about the Pauli Exclusion principle and becoming familiar with quantum numbers will help. I think that with this information you will be able to go out and find the rest of the desired answers while actually gaining a familiarity with the basics of particle physics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Temperature of a falling meteor I am reading the "What if?" article https://what-if.xkcd.com/20/ and I'm interested in its scientific background. Mr. Munroe writes:
As it [the meteor] falls, it compresses the air in front of it. When the air is compressed, it heats it up. (This is the same thing that heats up spacecraft and meteors—actual air friction has little to do with that.) By the time it reaches the ground, the lower surface will have heated to over 500℃, which is enough to glow visibly.
How can one make such an estimate? I wanted to use $PV = nRT$, but I don't know the volume or the difference in pressure. I tried summing up the kinetic energy of all the air molecules bumping into the meteor, but the answer is nowhere near. Does anyone have an idea? Such an interesting problem.
| Ram pressure produces a large amount of atmospheric drag force through the compression of the air located ahead of the meteor. Its equation is:
$$P = \rho v^2$$
$P$ is the pressure, $\rho$ is the fluid density and $v$ is the velocity of the meteor.
The remainder of your answer might be found here: Drag and Heat. Rather than duplicate that, I will ask you to read it and hope it helps you.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Clarification about Wave-particle duality Okay, so I am learning about the double-slit experiment done with electrons. I saw this picture, which shows the interference pattern being built up slowly with an increasing number of electrons:
I just wanted to confirm whether I have the correct understanding. The fact that the first image has a random distribution, shows that each electron interferes with itself and strikes a point on on the screen which would be dictated by the probability function.
The interference pattern is the result of the same interference of many electrons and is a statistical property of many electrons.
Also, does this mean the electron travels as a wave, but then it obviously must strike as a particle since it hits a well defined spot on the screen?
|
The fact that the first image has a random distribution, shows that each electron interferes with itself and strikes a point on on the screen which would be dictated by the probability function.
Yes.
The interference pattern is the result of the same interference of many electrons and is a statistical property of many electrons.
Sort of. Each electron impact obeys (technically, samples) the probability distribution, which contains the interference. You need many hits for the probability distribution to become evident, but saying that the interference is exclusively a statistical phenomenon is slightly contentious.
Also, does this mean the electron travels as a wave, but then it obviously must strike as a particle since it hits a well defined spot on the screen?
Yes. There is a disparity in the evolution of quantum systems: wavelike, continuous, and linear ("unitary") when they're left 'by themselves' and discrete, particle-like, discontinuous, nonlinear, when they're 'measured'. The current state of affairs is not really satisfactory, as there isn't an ironclad rule to say which situations are 'systems by themselves' and which situations are 'measurements', so there's still much to understand here. The overall problem is known as the measurement problem, and while there's been some impressive progress recently, we're still far from anything like a satisfactory understanding of these matters.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/208818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Existence of bound states in 3D Yukawa potential For a 3D Yukawa potential
$$ V(r) = - \lambda { e^{-Mr} \over r}. $$
Bargmann's upper bound can be read as a necessary condition for the existence of at least one bound state; we want $N_l \geq 1$, and from $$\int r V(r) > (2 l +1) N_l$$ we get (for the $l=0$ wave)
$$\frac \lambda M > 1$$
(in units where $1=2m=\hbar$). And it looks a bit poor limit in this case. Is there a better bound, preferably a sufficient and necessary condition, or at least a good numerical approximation?
| Bennett, Herbert S. "Upper Limits for the Number of Bound States Associated with the Yukawa Potential" Journal of Research of the National Bureau of Standards,
Vol. 86, No.5, September-October 1981 (PDF):
The number of bound-state solutions of the Schrodinger equation for
the screened Coulomb potential (Yukawa potential), $-(C/r) \exp(-\alpha r)$, occurs frequently in theoretical discussions concerning, for
example, gas discharges, nuclear physics, and semiconductor physics.
The number of bound states is a function of $(C/\alpha)$. Three upper
limits for the number of bound states associated with the Yukawa
potential are evaluated and compared. These three limits are those
given by Bargmann, Schwinger, and Lieb. In addition, the Sobolev
inequality states that whenever $(C/\alpha) < 1.65$ no bound state
occurs. This agrees to within a few percent of the numerical
calculations of Bonch-Bruevich and Glasko. The Bargmann and Lieb
limits and the Sobolev inequality are substantially easier to evaluate
than the Schwinger limit. Among the three limits, the Schwinger limit
gives the most restrictive limit for the existence of only one bound
state and, therefore, is the best one to use for the approach to no
binding, i.e., $1.65 < (C/\alpha) < 1.98$. The Lieb limit is the best
among the three when $(C/\alpha) > 1.98$. The Bargmann limit is the
least restrictive.
(Emphasis mine.)
For more details, this may also be useful:
Brau, Fabian and Calogero, Francesco. "Upper and lower limits on the number of bound states in a central potential". Journal of Physics A: Mathematical and General, Volume 36, Number 48, 19 November 2003.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Would a tachyon be able to escape a black hole? Or at least escape from a portion of the hole inside the photon horizon?
| Your question only really makes sense for a localized tachyon, i.e. one whose wavefunction in position space is constrained to a finite region of space (i.e. has compact support) because that is the only kind that will "fit" inside a horizon of a black hole. And the answer to your question for this kind of tachyon is that it cannot escape a black hole.
The edges of a localized tachyon fulfilling the Klein Gordon equation with an imaginary mass parameter can only spread at a speed less than $c$, even though the plane waves (momentum eigenstates) that make it up have phase velocities greater than $c$. That is, if we plot the region where the wavefunction is nonzero, the boundary of this region can only grow at a speed less than $c$. This is a consequence of the Paley-Wiener theorem, which shows that the Fourier transform of any function with compact support cannot itself have a compact support and must include arbitrarily high frequency components. This is elegantly summarized in John Baez's Article "Do Tachyons Exist?". The theory making use of the Paley-Wiener theorem is summarized in QMechanic's answer to "Do Tachyons Move Faster than Light?.
Since the disturbance of a localized tachyon cannot spread faster than $c$, it therefore cannot escape the inside of a black hole's event horizon. In concluding this, we also need to assume (or at least I will, because I only know classical General Relativity) that the inside of a black hole is exactly as e.g. the Schwarzschild or Kerr solutions (in the Kerr case we need to limit the angular momentum so that there is an event horizon) to the Einstein field equations would describe a black hole: no talk of firewalls or other speculative recent quantum phenomena. We also need to assume it is valid to simply transcribe a solution to the Klein-Gordon (or other suitable wave equation) onto the spacetime inside a black hole. So these ideas will apply for a big black hole, where the spacetime inside the horizon is of very low curvature compared to the scale over which the tachyonic field is nonzero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What causes this pattern of sunlight reflected off a table leg?
My friend noticed an interference-like pattern around the table leg. However, we do know that interference patterns of sunlight produces rainbow colours. What seems to be happening here?
| I'm not convinced that variations in thickness are the cause. Variations in gloss (areas of specular reflection and areas of diffuse reflection) seem more likely: the "distribution requirements" are the same (in both cases the "defect" has to repeat at about equal distances), but the "thickness" hypothesis also requires that the curvature of the sections changes with height (because the focal length has to be proportional to the radius of the circles we see).
Edit: It looks like concentric circles, but a single spiral would look the same. A regular helix pattern of increased specularity would produce such an image. And a regular helix pattern is exactly what one can expect from the polishing process, in which pipes are rotated while they move along the grinding/polishing wheel or belt. While the process can be tuned to minimize them, periodic variations are an essential feature of that process: the center of the grinding/polishing wheel or belt moves in a helix over the surface of the pipe.
Such a helix of higher specularity will produce a regular spiral image regardless of the width and the spacing, or the position of the sun.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48",
"answer_count": 5,
"answer_id": 4
} |
Action max, min, or saddle? It is well known that $\delta S = 0$ lays the foundation for variational mechanics. But I am confused as to whether or not this S is a minimum, a maximum, or a saddle point. Some books address this issue by using the language of "Stationary Action" instead of the more well-known "Least Action". But that doesn't really solve the problem of identifying different types of possible extrema.
So my question is: under what occasions are $S$ minimum or saddle? Is it possible for $S$ to be a maximum? (I have not encountered a single scenario in which $S$ is a maximum) In other words, can we refine the constraints on the possible extrema of $S$?
| In analogy to ordinary calculus, you need to look at the second or quadratic variation; Gelfand and Fomin's Calculus of Variations does a good job of explaining it with a minimum of fuss.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Why is the induced current through an inductor greater when the switch is opened than when it is closed? This problem, which I saw somewhere, got me thinking. I thought very hard about it but couldn't reach any conclusion. (Here "opened" and "closed" are verbs: I mean that when the current is flowing in the circuit and we cut the key off, it's opened, and when there is no current in the circuit and we switch it on, it's closed.)
| Imagine a circuit consisting of a battery, a wire, a switch, and an inductor, all in series. For "resistor" you could simply sum the internal resistance of battery and wire - it doesn't really matter (I just don't like "unrealistic" circuits for simple explanations).
When you close the switch, current will attempt to flow. The most current that could flow (if the inductor were a perfect conductor) is $I_0=\frac{V}{R}$, but the inductor will try to resist the change in current and therefore generate a reverse e.m.f. that is initially no greater than V, the voltage of the battery (because when it reaches that value, there is no force left to drive current).
When the switch is opened, the current through the inductor attempts to go to zero "in an instant". Unfortunately, just generating a back e.m.f. of V will not be sufficient to stop the current change - the circuit is broken, and with an infinite resistance in the loop you need an infinite back e.m.f. to keep the current flowing.
In reality there will be a little bit of stray capacitance in any inductor (if only the turn-to-turn capacitance); that acts to create a "short circuit" for the current, so the change in current through the inductor when the switch is opened is not infinite, and a finite voltage spike ensues.
But either way, the back e.m.f. is indeed greater when the switch is opened than when it's closed, because the circuit impedance is greater.
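A tiny numeric sketch of that last point, ignoring the stray capacitance mentioned above (all component values are assumptions):

```python
# Why opening the switch produces the larger e.m.f. (illustrative numbers only).
V_batt = 12.0        # V, battery voltage
R_closed = 6.0       # ohm, series resistance with the switch closed
R_open = 1.0e6       # ohm, effective resistance of the open switch / stray path

I0 = V_batt / R_closed          # steady current established through the inductor
emf_on_closing = V_batt         # back-e.m.f. just after closing is at most the battery voltage
emf_on_opening = I0 * R_open    # e.m.f. needed to keep I0 flowing through the open switch

print(f"closing: back-emf <= {emf_on_closing:.0f} V")
print(f"opening: spike of order {emf_on_opening:.0e} V (capped in practice by stray capacitance)")
```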
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Amplitude of light across material boundaries Does the amplitude of the light ray decrease when it moves from a rarer to a denser medium?
I think that since amplitude depends upon the energy of the light ray, it should decrease. This is because of the kinetic energy of the light wave decreases (velocity decreases as light travels from rarer to denser medium), hence the energy of the wave falls.
This explanation does not seem convincing, could anyone provide some insights?
| Fresnel's equations sum up the behaviour of light across medium boundaries in terms of linear dielectric response. This problem was solved in the 19th century.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What experiments have been done that confirm $E=mc^2$? What experiments have been done that confirm $E=mc^2$?
Are there experimental results that contradict $E=mc^2$?
Or are experimental results consistently showing this famous formula to be true?
| Emilio's answer was also the first that came to my mind, but I was not quick enough to post. However, even more precise experiments come from particle accelerators, and similar devices.
https://en.wikipedia.org/wiki/Tests_of_relativistic_energy_and_momentum
The power of the magnets in the LHC is determined by the relativistic mass of the particles going through it, which is over 7000 times the mass these particles have at rest. If $E=mc^2$ were wrong, and they underestimated the mass, the particles would hit the outer wall of the accelerator.
http://lhc-machine-outreach.web.cern.ch/lhc-machine-outreach/lhc-machine-outreach-faq.htm
To answer the second question:
I am not aware of any experiment refuting $E=mc^2$. There may be problems near or in a black hole, where relativity slams into quantum stuff.
https://en.wikipedia.org/wiki/Criticism_of_the_theory_of_relativity
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/209919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Why do voltmeters and ammeters have high and low resistance respectively? I understand why voltmeters are connected in parallel and ammeters are connected in series, but why is it that to measure voltage, you must have high resistance, and to measure current, you must have low resistance? Perhaps this is not within the scope of the question, but how exactly do either devices measure what they measure and what do their resistances have to do with it?
| Theoretically, these requirements arise from the way you connect the measurement devices to the rest of the circuit.
A voltmeter is connected in parallel, as you said. Say that you are trying to measure the voltage drop across a resistor $R$ through which passes a current $i$. If the internal resistance of the voltmeter is comparable to $R$, then the current $i$ will be divided through both branches, and the voltage value you read will be different from when the resistor is connected alone. If you make the internal resistance very large, then the current flowing through the voltmeter will be negligible, and the voltage drop across the resistor will not change.
The same applies to ammeters; since you're connecting them in series with the rest of the circuit, you need the device's internal resistance to be negligible so that it doesn't affect the current flowing through the branch. If you connect a voltage source $V$ to a resistor $R$ and want to measure the current flowing through the resistor, and the ammeter's internal resistance is not negligible, the total resistance the voltage source "sees" will be different, so by Ohm's Law the current will also be different.
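A minimal numeric sketch of the voltmeter case (all values are made up): measuring the midpoint of a divider built from two equal resistors $R$, with a voltmeter of internal resistance $R_V$ across the lower one.

```python
# Loading effect of a voltmeter on a simple divider (illustrative values).
V_src = 10.0     # V, source voltage
R = 10e3         # ohm, each divider resistor

def reading(R_V):
    R_par = R * R_V / (R + R_V)          # lower resistor in parallel with the voltmeter
    return V_src * R_par / (R + R_par)   # divider formula with the meter attached

print("true midpoint voltage:", V_src / 2)
for R_V in (10e3, 100e3, 10e6):
    print(f"R_V = {R_V:>10.0f} ohm -> meter reads {reading(R_V):.3f} V")
```

The reading approaches the true value only as $R_V$ becomes much larger than $R$, which is the point made above; the ammeter argument is the mirror image, with a small series resistance instead.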
As for the actual electronics of these measurement devices, I'm afraid I can't help you, as I don't know them in detail myself.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to calculate force / torque on non-flat lever, i.e. dolly See attached image. The mass is being rotated on a lever where the pivot point (P) is a certain distance ($L_2$) from the right angle at the bottom. How do I calculate the force necessary to apply horizontally at point U to lift the mass in the worst case (i.e. where the rotation position requires the maximum force), ignoring for now the weight of the lever itself, friction, etc. The structure will never rotate counter-clockwise from its illustrated position, and will rotate up to $60^\circ$ clockwise. Also assume the mass will be distributed evenly across $L_1$, i.e. the center of the mass is in the center of
$L_1$.
For those interested: this is part of a robotics project. A string attached to a motor/pulley system will be pulling at point U. I'm trying to determine if the motor has sufficient stall torque and if so, how much mass we can reasonably expect to be able to lift.
| You need to balance the moments about point P. The horizontal force $F$ times $L_3$ will equal the mass $M$ times the horizontal distance $L_4$ between P and M.
$$F \times L_3 = M \times L_4$$
This calculates the force required in the current position you've shown.
Worst-case, an infinite force will be required after rotating $90^{\circ}$ from that shown, assuming that the mass M is a point mass located on the lower bar.
$$F \times 0 = M \times L_2$$
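A worked numeric sketch of the moment balance above, writing the weight $Mg$ explicitly (the mass and lengths are placeholder assumptions):

```python
# F * L3 = (M * g) * L4  ->  F = M * g * L4 / L3   (illustrative numbers)
g = 9.81       # m/s^2
M = 20.0       # kg, mass being lifted (assumed)
L3 = 0.80      # m, lever arm of the horizontal force about P (assumed)
L4 = 0.15      # m, horizontal distance between P and the mass (assumed)

F = M * g * L4 / L3
print(f"required horizontal force ~ {F:.1f} N")
```

As the structure rotates, the lever arm of the applied force shrinks toward zero, which is why the required force blows up near the $90^{\circ}$ position, as noted above.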
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Gaussian path integrals and convergence The Hamiltonian path integral in quantum mechanics, for a particle with coordinate $q$ and momentum $p$ and Hamiltonian $H=p^2/2m+V(q)$, is
$\int \mathcal{D}q(t)\,\mathcal{D}p(t)\;e^{i\int_0^T\left(p\dot{q}-\frac{p^2}{2m}-V(q)\right)dt}$
Now, to go to the Lagrangian formulation, it seems like the standard procedure is to complete the square for $p$ and then evaluate the gaussian integral to "integrate out" $p$. My question here is, why can we do that? The exponent is purely imaginary, and the gaussian integral should only be well defined if the real part of coefficient of $p^2$ is negative, right?
(In Peskin and Schroeder (Chapter 9, Functional Methods), when they evaluate the full path integral for a free field, they comment on this and say that convergence is guaranteed because the time $T$ is slightly imaginary. However, when they earlier did the above operation to integrate out $p$, they did not comment on this issue at all. Are these two cases different, or are they both solved by a slightly imaginary $T$ in some way?)
| The Gaussian integral with a purely imaginary exponent actually converges because of the increasingly fast oscillations. This Math.SE question has a bunch of (fully rigorous) proofs that $\int \sin(x^2)\, dx$ converges, which is of course the imaginary part of $\int \exp(ix^2)\, dx$. The real part can be similarly be proved to converge.
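A quick numerical sketch of that convergence (the limit is $\sqrt{\pi/8}\approx 0.6267$):

```python
# Partial integrals of sin(x^2) settle towards sqrt(pi/8) as the upper limit grows.
import numpy as np

target = np.sqrt(np.pi / 8)
for X in (5, 10, 20, 40):
    x = np.linspace(0.0, X, 2_000_001)                 # dense grid to resolve the oscillations
    y = np.sin(x**2)
    val = np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(x))  # trapezoidal rule
    print(f"integral up to {X:>2}: {val:.4f}   (limit {target:.4f})")
```

The partial integrals oscillate around the limit with an amplitude that dies off roughly like $1/(2X)$, which is the increasingly-fast-oscillation mechanism mentioned above.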
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can tidal forces significantly alter the orbits of satellites? I would assume that there are other larger, more significant, forces acting on artificial satellites, but can tidal forces drastically alter the orbit of a satellite over time?
I was thinking this could especially be an issue for a satellite in geostationary orbit, because they have to be extremely precisely positioned. However, I could see this being an issue for satellites in other orbits as well, just not to the same degree.
| The tidal force acting on a natural satellite, like the Moon around the Earth, is a result of the deformability of the Earth in response to the Moon; because of it the Moon slowly recedes from the Earth. In general these tidal forces can be accelerating or decelerating:
their orbital period is shorter than their planet's rotation. In other words, they revolve faster around the planet than the planet rotates. In this case the tidal bulges raised by the moon on their planet lag behind the moon, and act to decelerate it in its orbit.
Artificial satellites are so small that this type of effect disturbs their orbits only negligibly. After all, the Moon, with all its size, is still here and will stay in orbit (though at an ever greater distance), unless there is a collision with a third body or the Sun turns nova.
The energy losses due to friction with the matter in their orbit (there is no complete vacuum) are important and will mask any such effect, since the orbits are continually corrected for these losses, as WhatRoughBeast says in his/her answer.
The tidal bulges due to the Moon on the earth do affect satellites and have to be taken into account as discussed here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Why must the speed of the aether wind be so small compared to the speed of light? I was doing some reading on the Michelson-Morley Experiment. One of the principal equations for it is this one.
$$\frac { 2w }{ c } \times \frac { 1 }{ 1-\frac { { v }^{ 2 } }{ { c }^{ 2 } } }$$
Where v is the speed aether wind, c is the speed of light, and w is the distance light travels from point A to point B. The equation is then changed to this one.
$$\frac { 2w }{ c } \left( 1+\frac { { v }^{ 2 } }{ { c }^{ 2 } } \right) $$
The two equations are nearly equal, given the fact that if x is a very small number, 1+x is the same as 1/(1-x). So the second equation is dependent on the fact that the speed of the aether wind is very small compared to the speed of light. My question is: why did Michelson think that the speed of the aether is very slow compared to the speed of light. The text I was reading mentioned something about the timing of the eclipses of Jupiter's satellites, but didn't go into detail.
| If the velocity of the aether wind were a sizeable fraction of $c$, the apparent speed of light would depend strongly and obviously on the direction in which the measurement is taken. Since this is not observed, the aether wind velocity must be quite small, which requires a sensitive instrument to detect its effects. It was exactly this range of possible wind speeds that the Michelson-Morley experiment was designed to test, and the analysis required to understand the results is what you read.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What are the functions of these coefficients $c_1,c_2,c_3,c_4$ in $ \psi_{sp^3}= c_1\psi_{2s}+ c_2\psi_{2p_{x}} + c_3\psi_{2p_y}+ c_4\psi_{2p_{z}}$? Hybridised orbitals are linear combinations of atomic orbitals of same or nearly-same energies. Atomic orbitals interfere constructively or destructively to give rise to a new orbital which is what we call hybridised orbital.
This is the definition I'm quite acquainted with. But I couldn't understand one thing. What are $c_1,c_2,c_3,\ldots?$ For instance,
$$\psi_{sp^3}= c_1\psi_{2s}+ c_2\psi_{2p_{x}} + c_3\psi_{2p_y}+ c_4\psi_{2p_{z}}.$$
I've read many books, one of which states that these coefficients determine the directional properties of the hybrid, while other sources say these coefficients are normalization constants, that is, $$c_1^2 + c_2^2 + c_3^2 + \cdots = 1.$$
But what is the necessity of the sum of the square of the coefficients to be equal to $1?$
Here is the quote:
[...]
\begin{align}
ψ_1 &= c_{1,1} φ_1 + c_{1,2} φ_2 + ... + c_{1,n} φ_n\\
ψ_2 &= c_{2,1} φ_1 + c_{2,2} φ_2 + ... + c_{2,n} φ_n\\
\vdots\\
ψ_n &= c_{n,1} φ_1 + c_{n,2} φ_2 + ... + c_{n,n} φ_n
\end{align}
Here $n$ atomic orbitals (with their wave functions $φ_1, φ_2, ..., φ_n$) are used to construct n hybrid orbitals ($ψ_1, ψ_2, ..., ψ_n$) through a linear combination, where the coefficients $c_{1,1}, c_{1,2}, ..., c_{n,n}$ are normalization constants that must fulfil some requirements:
Hybrid orbitals must be normalized: $$ \sum_{k=1}^{n} c_{1,k}^2 = c_{1,1}^2 + c_{1,2}^2 + ... + c_{1,n}^2 = 1$$
I then compared the above with the quantum superposed state $$|\psi\rangle= c_1|1\rangle + c_2|2\rangle$$ where $|1\rangle,|2\rangle$ are orthogonal states. Here $c_1^2 + c_2^2= 1.$
So, is hybridization a superposition?
Can anyone please explain what these coefficients are actually meant for? Why should their square add to $1?$
| Yes, hybridization is just that the hybrid state $\psi$ is a superposition of the different orbital states $\phi_1,\dots,\phi_n$.
Since the orbital states $\phi_i$ are assumed to be normalized ($\lvert\lvert \phi_i \rvert\rvert^2 = 1$) and orthogonal to each other, for the hybrid state to be normalized the squares of the coefficients must sum to $1$.
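A tiny sketch of that statement for the $sp^3$ case, using the standard coefficients $c_i = \tfrac{1}{2}$ for one of the four hybrids (assuming the basis orbitals are orthonormal):

```python
# Normalization check for one sp3 hybrid, psi = (s + px + py + pz) / 2.
import numpy as np

c = np.array([0.5, 0.5, 0.5, 0.5])                # coefficients of s, px, py, pz
print("sum of |c_i|^2 =", np.sum(np.abs(c)**2))   # -> 1.0, so the hybrid is normalized
```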
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What is used to measure the spin of a particle? I was wondering what is the specific system or method that is used to measure spin of a particle? e.g. In a lab what would they use to tell what a particles spin is?
P.S. I am new to stack exchange so please tell me if I formatted this wrong or need to change anything about my question. Thank you very much.
| Once the stable particles, the electron and the proton, have had their spins determined by the Stern-Gerlach method as discussed in the other answer, one can start building up the spins of the other elementary particles and the resonances.
The spins of the particles have been determined by the angular distributions of decay products.
Example: the Higgs has been declared the Higgs because it is a scalar
Examining decay patterns. Spin-1 had been ruled out at the time of initial discovery by the observed decay to two photons (γ γ), leaving spin-0 and spin-2 as remaining candidates.
Figure caption: Distribution of $|\cos\theta^{*}|$ for events in the signal region defined by $122~\text{GeV} < m_{\gamma\gamma} < 130~\text{GeV}$. The data (dots) are overlaid with the projection of the signal (blue/dark band) and background (yellow/light histogram) components obtained from the inclusive fit of the data under the spin-0 hypothesis.
The various analyses also make extensive use of conservation of angular momentum, i.e. of knowing the spins of all the other particles except the one under study.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Primitive unit cell of fcc When I consider the primitive unit cell of a fcc lattice (red in the image below) the lattice points are only partially part of the primitive unit cell. All in all the primitive unit cell contains only one single lattice point.
My question is how much each point at the corners of the red primitive unit cell contributes? At every corner a point is only partially inside the red primitive unit cell such that all parts together form a single point. How big are these individual parts?
In principle it should be possible to calculate that, but I hope there are known results in the literature. Unfortunately I can't find any such thing...
| I'm guessing that the question is asking how you work out how many lattice points are in the cell. If so, the standard procedure is to displace the cell a small distance along each of the lattice vectors and then count the number of points the cell contains.
I'll illustrate this in 2D since my abilities to draw convincing 3D diagrams are limited. Consider this lattice:
I've drawn a possible unit cell. There's obviously one lattice point in the middle, and we could argue that each of the corner points contributes $\tfrac{1}{4}$ of a point, but this is a rather hit and miss way of trying to count the points. Instead just displace the cell a small distance along the two lattice vectors while keeping the size constant:
and it's now obvious that the cell contains two lattice points so it's a compound cell.
This always works, and in any number of dimensions though of course it's harder to visualise in 3D.
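For what it's worth, here is a small numerical sketch of the same trick in 2D (a square lattice with an extra point at the centre of each square, not the exact lattice drawn above):

```python
# Count lattice points in a slightly displaced 1x1 cell.
import numpy as np

# a patch of the lattice: corner points plus body-centred points
pts = [(i, j) for i in range(-2, 4) for j in range(-2, 4)]
pts += [(i + 0.5, j + 0.5) for i in range(-2, 4) for j in range(-2, 4)]
pts = np.array(pts, dtype=float)

eps = 0.01   # small displacement along both cell vectors
inside = (pts[:, 0] >= eps) & (pts[:, 0] < 1 + eps) & \
         (pts[:, 1] >= eps) & (pts[:, 1] < 1 + eps)
print("points in the displaced cell:", int(inside.sum()))   # -> 2, so it is a compound cell
```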
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/210963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Where does the force appear when considering object interactions in another reference frame? Imagine I am sitting on an asteroid with my buddy and drinking a beer. When the bottles are empty we throw them simultaneously in opposite directions perpendicular to the asteroid's movement. What will happen?
From the logical standpoint and from momentum conservation, our velocity should not change - the total momentum of two bottles is zero in the asteroid's frame of reference.
Suppose somebody is watching the asteroid from another reference frame (velocity not equal to zero). According to Newton's second law, the force is equal to the change of momentum over time. The mass of the asteroid changed (remember the bottles), so the momentum $M\times V$ changed. Where is the force?
| The mass of the asteroid changed, but the mass of the asteroid + bottles did not. Your outside observer would need to include the bottles in calculating total momentum; otherwise the system is not closed.
This is the same principle behind operating rockets in vacuum. We can change the momentum of a rocket by firing out mass (exhaust) in the opposite direction. It's the momentum of the whole rocket+exhaust system that is conserved.
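A small numerical sketch of the bookkeeping (all masses and speeds are made-up numbers):

```python
# Total momentum of asteroid + bottles is unchanged by the throw, in either frame.
M_ast, m_bottle = 1.0e6, 0.5    # kg (assumed)
u = 3.0                          # m/s, bottle speed relative to the asteroid, thrown along +/- y
V = 100.0                        # m/s, drift speed of the whole system in the observer's frame (along x)

# asteroid frame: bottles go +y and -y, the asteroid stays put
p_y_before = 0.0
p_y_after = m_bottle * u + m_bottle * (-u) + M_ast * 0.0
print("asteroid frame, p_y before/after:", p_y_before, p_y_after)

# observer's frame: everything keeps the same x-velocity V
p_x_before = (M_ast + 2 * m_bottle) * V
p_x_after = M_ast * V + 2 * m_bottle * V
print("observer frame, p_x before/after:", p_x_before, p_x_after)
```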
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Does a changing magnetic field impart a force on a stationary charged particle? Does a collapsing and re-establishing magnetic field impart a force on a stationary charged particle? Does the charge particle get repelled and or attracted? Does it move or spin?
| Yes, it will create a force. The force is directed solenoidally around the change in a magnetic field.
To see this, look at Maxwell's equation $\nabla \times \mathbf{E} = -\partial_t \mathbf{B}$. This is analogous to the equation from magnetostatics: $\nabla \times \mathbf{B} = \mu_0 \mathbf{J}$. Thus a changing magnetic field sources an electric field the same way a current sources a magnetic field.
So for a concrete example, suppose you have a solenoid and you turn on a current so the magnetic field strength increases at a constant rate. Then $\partial_t \mathbf{B}$ is constant in the solenoid, and the electric field you get will be the same as the magnetic field you would get from a constant $\mathbf{J}$ in the region of the solenoid. That is, the electric field you get will look like the magnetic field from a wire. So outside the solenoid, you will get an electric field wrapping around the axis of the solenoid. This electric field will cause a force on the charge. Note, the force is directed in a circle, but it will not cause circular motion. Instead, the charge will eventually spiral away from the solenoid.
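To put a (made-up) number on that example: outside the solenoid, Faraday's law $\oint \mathbf{E}\cdot d\boldsymbol{\ell} = -d\Phi_B/dt$ gives $|E| = R^2\,|dB/dt|/(2r)$ for $r > R$, so for instance:

```python
# Induced E field outside a long solenoid with a linearly ramping interior field.
R = 0.05          # m, solenoid radius (assumed)
dB_dt = 10.0      # T/s, ramp rate (assumed)
q = 1.602e-19     # C, charge of the stationary particle

def E_outside(r):
    return R**2 * dB_dt / (2.0 * r)    # from E * 2*pi*r = pi*R^2 * dB/dt

r = 0.10
print(f"|E| at r = {r} m: {E_outside(r):.3f} V/m, force on the charge: {q * E_outside(r):.2e} N")
```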
Notice that in some sense you would say the force is caused directly by the electric field, and it is only indirectly caused by the magnetic field. However, I am still going to say that a "yes" answer is more appropriate in this case, and anyway I think this indirect effect is what you were trying to get at in the first place.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
How does color (or reflection in general) work? I'm confused, does the absorption and emission determine the color of something? Or does that only happen when something is emitting energy?
When light hits an object, the photons get absorbed, then emitted with a different wavelength right?
| From a quantum mechanical perspective, all light scattering is a form of absorption and re-emission of light energy. Photons don't bounce off a surface.
When the energy (proportional to frequency, or color) of the light is far from resonance with an energy transition for the material, then the re-emission happens (nearly) instantly and the light maintains its coherence. Since the emitted light has the same energy as the absorbed light, this color is reflected by the material.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Novel atomic clocks: can quantum and many-body effects help? I am trying to learn if there are any proposals concerning the application of quantum and many-body effects to atomic clocks.
From what I understand, optical lattices have been used for timekeeping only due to a superior signal to noise ratio (SNR). The interactions between atoms are considered to be harmful as they introduce an extra frequency shift. I have a couple of questions related to this:
1) What is the reason for using a lattice instead of a continuum ultracold gas
in a trap?
2) Have people considered using quantum effects, like coherent superpositions of states (e.g., squeezed states) in order to improve these clocks?
3) Are there any many-body effects that actually help to increase the accuracy (besides increasing the SNR)?
| You might want to look up the chip-scale atomic clock made by Symmetricom. It "resonates" with the hyperfine transition using CPT (coherent population trapping), a rather exotic quantum-mechanical atom-light-field interaction. Its SNR is many times worse than that of the standard Rb atomic clock.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Zero gravity means zero friction? The frictional force acting on a body placed on a horizontal plane is
$F=\mu{R}$
where $R$ is the normal reaction and is equal to the weight of the body in this case. And $\mu$ is the coefficient of friction. But, if gravity is zero, then is the frictional force zero (ignoring all other friction due to atmosphere etc.)? If friction is zero, then how do astronauts in a spaceship experiencing zero gravity move?
(I saw so many question related to this on this site, but they give explanations on cosmological effects on it. But I here avoid all such effects and I want answer to only this specific question.)
| There's no friction in space?
Strange how every screw just fell out of the ISS!
What's actually going on is you are looking at the friction between an object and the surface it's resting on, neglecting any other force that might be pushing the surfaces together.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 4
} |
Interpretation of cosmological redshift I was trying to understand why we cannot explain the observed redshift of distant galaxies using special relativity and I came upon this article by Davis and Lineweaver.
Unfortunately when I arrive at section 4.2, where the authors explain why we cannot use special relativity to explain the observed redshift, I get stuck. In particular I don't understand this sentence:
"We calculate D(z) special relativistically by assuming the velocity in $v = HD$ is related to redshift via Eq. 2, so...".
What bothers me is the assumption that velocity is related to distance linearly. I was thinking that in a special relativistic model the basic assumptions were:
1)Relativistic Doppler shift formula
$$
1+z=\sqrt{\frac{1+v/c}{1-v/c}}
$$
2)Observed Hubble law
$$
z=\frac{H}{c} d
$$
Combining this two i get the following relation between velocity and distance
$$
\sqrt{\frac{1+v/c}{1-v/c}}-1=\frac{H}{c} d
$$ and not the one proposed in the article.
| The Hubble parameter is defined to be $\dot{a}(t)/a(t)$, where $a$ is the scale factor of the universe. If you wished to have a model where redshifts were not due to expansion, but actually just due to things moving away from us (and this is what Davis & Lineweaver are doing in the section of paper you refer to), then you could assume that $H = v/d$ is an equivalent statement.
Then, assuming that the redshift is only due to a velocity, special relativity tells us that the redshift $z$ is given by
$$ (1 + z)^2 = \frac{1 + v/c}{1 - v/c}$$
which can be rearranged to give eqn 2 in the reference you quote.
$$ v = c \frac{(1+z)^2 -1}{(1+z)^2 +1}$$
Inserting $v=Hd$ gives
$$ d = \frac{c}{H} \frac{(1+z)^2 -1}{(1+z)^2 +1}$$
The equation relating redshift and distance under the general relativistic universal expansion model is quite different to the relationship between redshift and distance in special relativity. The difference becomes apparent at high redshift, as explained in section 4.2 of the Davis & Lineweaver paper. Observations of course show that the relationship between distance and redshift is not the one derived above, which therefore favours the universal expansion interpretation of redshift.
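A quick sketch of that comparison, evaluating the special-relativistic relation above next to the naive linear Hubble law $d = cz/H$ (the value of $H$ is an assumption):

```python
# Special-relativistic distance-redshift relation vs. the naive linear Hubble law.
c = 3.0e5      # km/s
H = 70.0       # km/s/Mpc (assumed)

def d_SR(z):
    return (c / H) * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

for z in (0.1, 0.5, 1.0, 3.0):
    print(f"z = {z:>3}: d_SR = {d_SR(z):7.0f} Mpc,  c z / H = {c * z / H:7.0f} Mpc")
```

The special-relativistic curve flattens out (it can never exceed $c/H$), which is the behaviour that the data do not show.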
You can of course always hypothesise some ad hoc relationship between $H$ and $d$ (or equivalently $H$ and $t$) to make a model to match the data. I think Davis & Lineweaver's aim was merely to show that the flattening of the $z$ vs $d$ relation cannot just be due to the non-linearity of the $z$ vs $v$ relationship in special relativity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/211797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Why is torque the cross product of the radius and force vectors? I understand the torque vector to be the cross product of the radius (moment arm) and force vectors, but that means the torque would be perpendicular to the radius and force vectors, which makes no sense to me, e.g. a force applied tangent to the surface of a car tire creates a torque along the line of the axle.
I'm pretty sure I am just misunderstanding a simple formula, so I wanted to make sure.
And, when you use the formula for torque, is torque defined as a vector or just a scalar? I would think it would be a vector.
| You are right in saying that the torque points along the line of the axle. This doesn't make sense intuitively, but if you look at the formalism of the angular momentum vector, it becomes clear.
So angular momentum is defined as $L = r \times p$. And torque is defined as $\tau = r \times F$ . It is clear that $$\frac{d\vec{L}}{dt} = \frac{d\vec{r}}{dt} \times \vec{p} + \vec{r} \times \frac{d\vec{p}}{dt} = \frac{\vec{p}}{m} \times \vec{p} + \vec{r} \times \vec{F} = \vec{r} \times \vec{F} = \vec{\tau}$$
When you look at the car tire case. The friction creates a torque along the axle, which increases the angular momentum of tire in that axle direction. What does that mean? Since $$L=r \times p$$ and $r$ does not change, $p$ must increase. Which makes sense, since friction makes the wheel spin faster!
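Here is the wheel example with explicit numbers (placeholder values), just to see the direction come out along the axle:

```python
# Torque from a tangential force at the rim of a wheel.
import numpy as np

r = np.array([0.3, 0.0, 0.0])     # m, from the axle to the point where the force acts
F = np.array([0.0, 50.0, 0.0])    # N, tangential force at that point

tau = np.cross(r, F)
print("torque vector:", tau, "N*m")   # -> [0, 0, 15], i.e. along the axle (z axis)
```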
In summary, the direction of torque vector is determined by the definition of angular momentum and angular velocity vectors which capture more measurable kinematic quantities. This formalism is a bit counterintuitive at first. But you will eventually get used to it...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
What is a Christoffel symbol?
*
*What is a Christoffel symbol?
*I often see that Christoffel symbols describe gravitational field and at other times that they describe gravitational accelerations. Then, on some blogs and forums, people say this is wrong because Christoffel symbol is NOT a tensor and thus has no physical meaning. Which of these statements is the right one?
*What is the significance of a Christoffel symbol in differential geometry and General Relativity?
| How I see it:
The Christoffel symbols represent the corrections/changes to any parallel-transported vector (like a particle velocity) on a curved manifold (so that this vector "stays" on a geodesic of the manifold).
From the Einstein equations, energy/matter forges the structure of the (spacetime) manifold, and this structure is expressed by the metric tensor (which is the solution of the Einstein equations). The Christoffel symbols are directly linked to the metric tensor and so they "force" any moving particle to follow a geodesic on the manifold.
In that sense the Christoffel symbols can be seen as the components of a force field (the gravitational field) that at any point of the manifold will force the particle to follow the (curved) structure of spacetime. In this context the metric tensor can be seen as the source of the (gravitational field) potential.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Sphere of uniform charge density with a cavity problem Suppose we have a sphere of radius $R$ with a uniform charge density $\rho$ that has a cavity of radius $R/2$, the surface of which touches the outer surface of the sphere. The question was to calculate the field inside the cavity.
Naively, I used Gauss' law to determine that $\mathbf{E}=0$ inside the cavity. However, the solution I have stated that the field is actually the superposition of the field of the sphere without the cavity, and the field of the cavity, wherein the charge density is the negative of that of the original sphere. So,
\begin{align}
\mathbf{E}_{\text{inside sphere}}&=\frac{\rho}{3}(x,y,z)\\
\mathbf{E}_{\text{inside cavity}}&=-\frac{\rho}{3}(x+R/2,y,z)\\
\\
\Longrightarrow \ \mathbf{E}_{\text{net inside cavity}}&=\mathbf{E}_{\text{inside sphere}}+\mathbf{E}_{\text{inside cavity}}\\
&=-\frac{\rho R}{6}(1,0,0).
\end{align}
I am confused by the rationale of this approach, and also why Gauss' law gives the incorrect answer.
| The problem I see in your solution is with adding the $R/2$ term for the field inside the cavity (the negatively charged sphere that makes the "cavity"). The field inside of a uniformly charge sphere with charge density, $\rho$, can be found using Gauss' law to be:
$\vec{E}(r) = \frac{\rho r}{3\epsilon_0} \hat{r}$
Your notation is slightly different, but I think it is essentially the same thing. But what you notice, is that inside the sphere, the value of the electric field only cares about the charge density, since Gauss' law in this symmetric situation is only concerned with the charge inside your Gaussian surface. The same is true for the oppositely charged sphere, where the only difference should be a '-' sign up to the point $r=R/2$. Therefore, the electric fields are equal and opposite inside the cavity, so $\vec{E} = 0$ inside the cavity, and both superposition and Gauss' law produce the same result as they should.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Pendulum's motion is simple harmonic motion For a pendulum's motion to be simple harmonic motion (S.H.M.), is it necessary for the pendulum to have a small amplitude, or can S.H.M. be produced at large amplitudes as well?
If it is really necessary for S.H.M. to have small amplitudes, then why? Because even at large amplitudes there is a restoring force pulling the pendulum toward the mean position, and its acceleration is directly proportional to the displacement.
|
In the case of pendulum motion, when the angle of displacement is large (as shown in the figure), the restoring force $(mg\sin\theta)$ does not point exactly toward the equilibrium position. But the condition for S.H.M. is that the restoring force must be directed toward the equilibrium position at every instant. For a large angular displacement this condition is violated, and hence the motion of the pendulum no longer remains simple harmonic.
For this reason the angular displacement in S.H.M. must be kept smaller than about $4$ degrees.
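The usual quantitative way to see the small-amplitude condition is that the restoring force $mg\sin\theta$ is proportional to the displacement only while $\sin\theta \approx \theta$; a quick check of how good that approximation is:

```python
# Relative error of the small-angle approximation sin(theta) ~ theta.
import numpy as np

for deg in (1, 4, 10, 20, 45):
    th = np.radians(deg)
    err = (th - np.sin(th)) / np.sin(th)
    print(f"{deg:>2} deg: relative error {100 * err:.3f} %")
```

At a few degrees the error is well below a tenth of a percent, which is why small amplitudes are usually quoted as the condition.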
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Limitations of particle in cell method for high density plasma Are there any limitations of the particle-in-cell (PIC) method for high-density plasma? To be more specific, is modelling of a narrow channel of high-density plasma possible, or are there any limitations connected with the PIC approximation?
| This is a very complex and broad question.
There are first some technical difficulties:
*
*The memory consumption, as noted by others.
*The time it takes to compute.
They can be very high, because a very large number of particles may be required to limit the statistical noise. This noise can create many problems, such as numerical plasma waves, strong numerical heating, etc. These limitations can vary drastically from one specific problem to another. It is difficult to assess without knowing your situation.
Secondly, there are physical difficulties:
*
*Collisions, which can be treated but only with approximations that can fail.
*Collisional ionization, which can prove difficult when dealing with an exponentially growing number of electrons.
*Field ionization (same problem)
*Recombination and other atomic processes
*Radiation due to these atomic processes or due to Bremsstrahlung, and all the physics related to this radiation interacting with ions or electrons
*Nuclear reactions
*Quantum processes (pair creation, quantum Bremsstrahlung, etc.)
*... (an endless list really)
You have to define your problem clearly and identify which of these physical aspects require additional computation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Bernoulli principle and particle The Bernoulli principle describes the steady, incompressible flow of a fluid along a streamline. But it is stated for a particle of the fluid moving along a streamline. My question is: does a particle of fluid refer to a molecule or a group of molecules?
| The Bernoulli principle is nothing but $F=ma$ for small volumes of fluid.
In other words, the only thing that can accelerate some fluid is a difference in pressure, and vice-versa.
A molecule of a fluid (since it has temperature) is moving quite fast, but it doesn't get very far because it collides with other molecules.
Those constant collisions are called "pressure".
If the molecules up ahead are at lower pressure it means they are seeing fewer collisions per second, so the original molecule will tend to get bounced in that direction.
So that's how lower pressure makes fluid move.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/212881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Relativistic time dilation on Mars compared to Earth? What is the time dilation in Mars, compared to earth? Can we accurately calculate it? What information is needed to do these calculations?
| We can calculate the time dilation approximately using the weak field approximation. If the difference in the Newtonian gravitational potential between two points $A$ and $B$ is $\Delta\Phi$ then the weak field approximation tells us that the relative rate at which clocks at the two points tick is given by:
$$ \frac{\Delta t_A}{\Delta t_B} = \sqrt{1 - \frac{2\Delta\Phi_{AB}}{c^2}} \tag{1} $$
Let's be clear about the notation and sign conventions, taking Earth and Mars as an example. $\Delta\Phi_{AB}$ is the change in the potential energy going from the Earth ($A$) to Mars ($B$), and since going from the Earth to Mars means the potential energy becomes less negative, $\Delta\Phi_{AB} \gt 0$. That makes the right hand side of equation (1) less than one, so the fraction $\Delta t_A/\Delta t_B$ is less than one. This means time ticks more slowly on Earth than it does on Mars.
To calculate $\Delta\Phi$ simply:
*
*calculate the (positive) potential energy change to leave the Earth's surface (staying at the same distance from the Sun)
*calculate the (positive) potential energy change to move from the Sun-Earth distance outwards to the Sun-Mars distance
*calculate the (negative) potential energy change to descend to the surface of Mars (staying at the same distance from the Sun)
then add up the three potential energy changes to get the total $\Delta\Phi$ and plug it into equation (1). I'll leave this as an exercise for the reader.
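For what it's worth, here is a rough numerical sketch of that exercise; all the constants below (GM values, radii, mean orbital distances) are standard values I've assumed, not numbers from the question:

```python
# Gravitational time dilation of Earth clocks relative to Mars clocks (rough sketch).
GM_sun   = 1.327e20   # m^3/s^2
GM_earth = 3.986e14   # m^3/s^2
GM_mars  = 4.283e13   # m^3/s^2
R_earth, R_mars = 6.371e6, 3.390e6      # m, surface radii
a_earth, a_mars = 1.496e11, 2.279e11    # m, mean orbital radii
c = 2.998e8                             # m/s

dPhi  = GM_earth / R_earth                  # (1) climb off the Earth's surface
dPhi += GM_sun * (1/a_earth - 1/a_mars)     # (2) move out from Earth's orbit to Mars' orbit
dPhi -= GM_mars / R_mars                    # (3) descend to the Martian surface

ratio = (1 - 2 * dPhi / c**2) ** 0.5        # dt_Earth / dt_Mars from equation (1)
print(f"delta Phi ~ {dPhi:.2e} J/kg")
print(f"Earth clocks run slow by ~ {(1 - ratio) * 1e9:.1f} parts per billion")
```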
Strictly speaking this only calculates the gravitational time dilation and ignores the time dilation due to the orbital velocities of the Earth and Mars. In the weak field limit you can simply multiply together the time dilations due to gravity and orbital velocity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does the ray transfer matrix for a gap change, based on the index of refraction of the medium? The ray transfer matrix for a gap is typically:
$$\begin{bmatrix}
1 & d \\
0 & 1\\
\end{bmatrix}$$
If I know that my glass has a thickness $L$, does the ray of light that goes into it travel a distance $n' \, L$, where $n'$ is the refractive index of the glass? So would the $d$ inside the matrix change to $n' \, L$?
| No, you would use the actual thickness of the glass $L$. This is because your ray matrix should change the angle of the incoming ray at the boundary using Snell's law, and to calculate the propagation accurately beyond that, you will want to use the distance covered by the ray in the lab frame.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does the magnetic force depend on the reference frame? Force due to a moving charge = qvB
Imagine a charge moving with some velocity on Earth, and I calculate the force due to its magnetic field with the Earth as my reference frame.
An astronaut in space also calculates the force, but with space as the reference frame.
For ease, let's imagine that everything is happening in the XY-plane; then the force would be perpendicular to both of us, so our motion in the XY-plane would have no effect on the force.
The force calculated by me and the astronaut would be different... right? If so, how is it possible that the force produced is different when it should be the same?
| The magnetic field and velocity vectors are not Lorentz invariants, so yes, the resulting force is frame-dependent.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why are ultrasounds used for producing images of body organs? Low frequency waves can penetrate better than high frequency waves, then why are high frequency waves used in ultrasounds for sharper images?
The case is similar in the detection of flaws in metal blocks. Why are high frequency waves used here instead of low frequency waves?
| Ultrasound frequencies for diagnostic imaging range from about 1-10 MHz. Lower frequencies tend to be used for imaging of deeper tissues because they penetrate further as you rightly say. However, penetration is only one consideration. The "sharpness" of the imaging depends on the wavelength. Basically, you will not be able to resolve structures that are comparable to or smaller than the wavelength that you are using because diffraction effects will blur the results.
The speed of sound in human tissues is about 1500 m/s (it varies slightly between different types of tissues, which causes reflections, which is how ultrasound actually produces images). Thus the wavelengths at 1 and 10 MHz are 1.5 mm and 0.15 mm respectively, and it is easier to resolve fine details with ultrasound for structures nearer the "surface" of the body.
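Those numbers are just $\lambda = c/f$:

```python
# Ultrasound wavelengths in soft tissue.
c_tissue = 1500.0    # m/s
for f in (1e6, 10e6):
    print(f"f = {f / 1e6:>4.0f} MHz -> wavelength = {1e3 * c_tissue / f:.2f} mm")
```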
Another issue is that to image structures at a range of depths, a range of frequencies needs to be used in order to ensure that sound penetrates to, and is reflected off, interfaces at all those depths.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Electric field at $r=0$ How does classical physics justify the existence of an electric field at $r=0$?
Is this an edge case, an ambiguity, a "does not exist"?
Is this a trivial case or indicative of an actual fault in classical electrodynamics?
Obviously the math breaks down because the denominator is $r^2$... What I want to know is: is this significant or a trivial case?
| To make it simple, it does not exist, there are no real point-like classical charged particles. That's why we learn, for example, the electric field of a homogeneous charged sphere right after the one for a point charge.
To put it in another way. A point charge $e$ could be thought of as made up by many $de$ tiny charges at one spot, but you'll require an infinite amount of energy to bring two (or more) of these charges from infinity to a single and the same point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is specific heat really a constant? When we calculate heat lost or gained by a body according to equation $q=mc\Delta T$ where
*
*$m$ = mass of body
*$c$ = specific heat
*$\Delta T$ = temperature difference
In this equation, why do we take $c$ to be constant when we know $c$ depends on temperature? Why is the equation not $q=m\int c \,\mathrm{d}T$?
| No, specific heat varies with temperature. Only at temperatures well beyond the Debye temperature is the specific heat of the body a constant. This equation treats the specific heat as a constant because the temperature range of operation is assumed to be small enough for the constant-specific-heat assumption to hold. The final data that you get will not have as much noise as you are expecting. This equation is more of an engineering equation than one of physics. The original relation is much more complicated, and solving practical engineering problems with it would be unnecessarily difficult. The general variation is such that at temperatures much lower than the Debye temperature the specific heat has a cubic dependence on temperature, while at much higher temperatures it is constant.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/213960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can two 2500 K light bulbs replace one 5000 K bulb for growing plants indoors? In an effort to assist an old Greek woman I find myself in need of greater minds.
A 5000 Kelvin light bulb is required for her indoor fig plant. Can I get away with substituting two bulbs in separate fixtures, each emitting 2500 Kelvin?
All answers are greatly appreciated and I'm looking forward to the education.
| The output of a light bulb is characterized by (at least) two parameters: the Wattage, and the color temperature.
The wattage tells you how much total energy the bulb uses (and emits). The color temperature tells you how that energy is distributed. In principle, an incandescent object emits according to Planck's Law:
$$B(\lambda, T) = \frac{2hc^2}{\lambda^5}\frac{1}{e^{\frac{hc}{\lambda k_B T}}-1}$$
As temperature gets higher, this leads to "whiter" light (more blue).
You can't just put two 2500 K lamps together to make a 5000 K bulb: you will continue to have too much red, and not enough blue.
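A quick sketch with Planck's law above makes the point: the blue-to-red balance is set by the temperature alone, so adding more 2500 K bulbs scales the whole spectrum without ever matching the balance of a 5000 K source (the wavelengths below are rough stand-ins for blue and red):

```python
# Blue/red spectral ratio of a blackbody at 2500 K vs 5000 K.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

blue, red = 450e-9, 650e-9
for T in (2500, 5000):
    print(f"T = {T} K: B(450 nm) / B(650 nm) = {planck(blue, T) / planck(red, T):.2f}")
```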
Some attempts have been made to make light "more blue" - for example, the GE Reveal light bulb. And you can buy "grow lamps" which also are more blue.
Don't use 2500 K lamps. They will cause heating and evaporation, but lack the blue component that the plant uses for photosynthesis.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/214057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Particle in a box: value for wave function $u(x)$ when potential $V(x)$ is infinity The time-independent Schrödinger equation (TISE) is:
$$ -\frac{\hbar^2}{2m}\frac{d^2 u(x)}{dx^2}+V(x)u(x)=Eu(x) \hspace{15pt}$$
where $E$ is a constant.
Imagine now a infinity potential well as we can see on the following picture:
The potential $V$ is infinite for $x<0$ and $x>a$. I've seen in Gasiorowicz's book that $u(x)$ must be 0 in this $x$ interval, but it wasn't fully justified.
I thought about some possibilities, but all of them just take me to a place where $u(x)=0$ for $x<0$, $x>a$ is already an assumption. Can you explain to me why $u(x)=0$ for $x<0$, $x>a$?
| Remember that the potential $V(x)$ is related to a force $F(x)$ via:
$$F(x)=-\frac{dV(x)}{dx}.$$
At $x \leq 0$ and $x \geq a$, $V=+\infty$, so in these areas:
$$x \geq a:\quad F(x) = -\infty,$$
and:
$$x \leq 0:\quad F(x) = +\infty.$$
So an infinite force acts on the particle at both borders of these areas, preventing it from entering these areas. In these areas the probability of finding the particle is zero and thus:
$$P(x)=|u(x)|^2=0.$$
and:
$$u(x)=0.$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/214188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Can friction change the resonance frequency of a system? I am simulating the transient response of a mass-spring-damping system with friction. The excitation is given in the form of a base acceleration.
What I am not sure about is: can the friction change the resonance frequency of the system or will it affect only the response amplitude?
My first guess was that friction, being an external force, cannot change the resonance frequency of the system. What can happen is that there is no motion because the mass is stuck due to static friction, but if there is motion then the resonance frequency remains the same.
Then I saw that friction can be expressed as an equivalent damping,
$$c_{eq} = \mu N \,\operatorname{sign}(\dot{x}),$$
thus actually changing the natural frequency since it is expressed as
$$\omega = \omega_n \sqrt{1-2 \xi^2}$$
The fact is that the use of the equivalent damping is just an approximation to introduce the effect of friction into the equation of motion, is it not?
| The resonant frequency is equal to the natural frequency when no damping and no external force at all is applied to the system. When damping is applied, so that the amplitude now decays over a finite decay time, the resonant frequency drops slightly below the natural frequency, by an amount that depends on the magnitude of the damping.
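To separate the two effects numerically, here is a minimal sketch (not part of the answer above, with illustrative parameter values and static and kinetic friction coefficients taken equal) of the free decay of a mass-spring system with Coulomb friction. It measures the oscillation period from successive positive peaks and compares it with the undamped natural period:
```python
import numpy as np

m, k = 1.0, 100.0           # mass [kg], stiffness [N/m]
mu, g = 0.1, 9.81           # friction coefficient; friction force magnitude = mu*m*g
wn = np.sqrt(k / m)         # undamped natural frequency [rad/s]

dt, t_end = 1e-4, 5.0
n = int(t_end / dt)
x, v = 0.2, 0.0             # initial displacement [m] and velocity [m/s]
ts, xs = np.zeros(n), np.zeros(n)

for i in range(n):          # semi-implicit Euler with a simple stick condition
    ts[i], xs[i] = i * dt, x
    f_spring = -k * x
    f_fric = mu * m * g
    if abs(v) < 1e-3 and abs(f_spring) <= f_fric:
        a, v = 0.0, 0.0     # stuck: static friction balances the spring force
    else:
        a = (f_spring - f_fric * np.sign(v)) / m
    v += a * dt
    x += v * dt

# Period from successive positive peaks (threshold 0.02 m excludes the stuck phase)
pk = (xs[1:-1] > xs[:-2]) & (xs[1:-1] >= xs[2:]) & (xs[1:-1] > 0.02)
t_pk = ts[1:-1][pk]
print("measured period:", np.mean(np.diff(t_pk)))
print("natural period :", 2.0 * np.pi / wn)

# Coulomb friction removes amplitude (about 4*mu*m*g/k per cycle) and eventually
# sticks the mass, but the oscillation period stays at 2*pi/wn; viscous damping,
# by contrast, lowers the damped frequency to wn*sqrt(1 - zeta^2).
```
In this idealized model the dry friction only eats amplitude and eventually sticks the mass; it does not shift the free-oscillation frequency. The small shift described above is the familiar viscous-damping effect, which suggests that any frequency change introduced by an energy-equivalent viscous coefficient is a by-product of the approximation rather than of the dry friction itself.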
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/214320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Our Perception of Heat Our body temperature is roughly 37 degrees Celsius (that is, when we measure our body temperature externally, using a thermometer that reads the temperature of the skin, usually between the arm and the side of the torso), whereas most of us would say that 25 degrees is a pretty hot day. Why do we perceive a 25 degree day to be hot, when thermal energy should be flowing out of our 37 degree bodies and into the surroundings?
| You are correct in the thermodynamic sense: heat from a human body does indeed leave the body and flow into the surroundings. The body combats this by burning calories and producing more heat, keeping the internal body temperature constant.
I'm not a biologist, however:
The perception of a hot day arises because our nerves are telling our brain that the skin is at a given temperature. We are warm-blooded animals, so our bodies naturally generate heat. Assuming the body generates the same amount of heat every day, the brain may interpret a 25 °C day as warm because the body is producing the same amount of heat as usual, yet that heat is leaving the body into the surroundings at a slower rate.
Temperature sensing is a survival tool; it is used as a way of keeping the body at a constant temperature. As it gets hotter, your brain is, in a sense, telling you that it is getting harder to cool down. (Forgive my terminology.)
Think of it this way: if the day were as hot as the human body, you would be in danger of heat stroke.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/214432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Effects of inner painting on the temperature of a room Which room will stay warmer for a longer time on a winter night: the room whose interior is painted black, or the one painted white?
| It will make no measurable difference.
Heat in a room is lost by conduction and convection - not (in significant amounts) by radiation.
Radiative heat transport occurs between two objects at different temperatures, so the surface of the wall would have to be colder than the rest of the (objects in the) room for radiative transport to take place.
Assume a black sphere in the middle of the room at temperature $T_1$, and a wall at $T_2$. If the walls are "black" (maximum emissivity at the wavelengths of interest - note this is not just black in the visible since most thermal radiation is in the far IR) we can compute the rate of heat loss as
$$Q = \sigma\left(T_1^4-T_2^4\right)\,4\pi r^2$$
For a wall at 0 °C and a sphere at 20 °C, this results in a heat flux of about 100 $\rm{W/m^2}$ from the object to the (perfectly black) walls. In reality there are not walls on all sides - so the solid angle is smaller and the rate of heat transport is less - and furthermore the walls are not that cold, because if they were, there would be significant heat transport from the air.
For this you can refer to this engineering toolbox link. With a very modest rate of air flow you will have a heat transfer of about 20 W per square meter per degree of temperature difference. This would quickly result in a heating up of the wall surface to "something close" to the temperature in the room - at which point the radiative losses are even smaller.
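A minimal sketch (not part of the original answer) that reproduces the two estimates quoted above; the convective coefficient of 20 W/(m^2 K) is the assumed figure taken from that reference:
```python
sigma = 5.67e-8                    # Stefan-Boltzmann constant [W/(m^2 K^4)]
T_obj, T_wall = 293.15, 273.15     # 20 C object, 0 C wall, in kelvin

q_rad = sigma * (T_obj**4 - T_wall**4)    # net radiative flux to a perfectly black wall
h = 20.0                                  # assumed convective coefficient [W/(m^2 K)]
q_conv = h * (T_obj - T_wall)             # convective flux for the same 20 K difference

print(f"radiative flux : {q_rad:6.1f} W/m^2")   # ~100 W/m^2, as stated above
print(f"convective flux: {q_conv:6.1f} W/m^2")  # ~400 W/m^2: convection dominates
```
With convection moving several times more heat than radiation for the same temperature difference, the wall surface is quickly pulled toward room temperature and the radiative term shrinks further.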
It may already be obvious from the above but let me spell it out: radiative cooling would (very slightly) affect objects in the room; but the air has very little radiative interaction.
At "normal" temperatures, and with a house that is somewhat insulated, you can ignore the radiation component. This is not necessarily true if you have a flimsy tent - you may have come across the aluminized Mylar "emergency blankets" that provide radiative insulation. They do make a difference when that's all there is between you and a cold dark clear sky. Not the same thing at all.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/214503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |