Why are ropeway pillars tilted? While skiing I have noticed that ropeway pillars are usually tilted so as to be perpendicular to the slope (fig. 1). If gravity pulls straight down, why aren't they vertical, since they are supposed to support the ropeway's weight? Is there something more they "do"?
There are also pillars whose function is not to support weight but to "push" the rope down in order to keep it tense. Why is that so crucial? Why does the rope need to be tense?
fig.1
| The cable is pulling up the slope and the masses are pulling down vertically; the net force is slightly angled forward, and that's what the poles support.
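A minimal numerical sketch of that vector addition (my own illustration with assumed magnitudes, not part of the original answer): adding a cable pull directed up a 30° slope to a vertical load gives a resultant that is tilted away from the vertical, toward the uphill side.
```python
import numpy as np

slope = np.radians(30)   # slope angle (assumed)
T = 5.0e4                # net cable pull up the slope, N (assumed)
W = 2.0e5                # vertical load carried by the pillar, N (assumed)

cable_pull = T * np.array([np.cos(slope), np.sin(slope)])  # up the slope
weight     = np.array([0.0, -W])                           # straight down

resultant = cable_pull + weight
tilt = np.degrees(np.arctan2(resultant[0], -resultant[1]))
print(f"resultant load is tilted {tilt:.1f} degrees from vertical")
```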
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/450078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why do we care only about canonical transformations? In Hamiltonian mechanics we look for changes of coordinates that leave Hamilton's equations invariant: these are the canonical transformations.
My question is: why do we want to leave the equations invariant? I mean: we want to solve a differential equation (Hamilton's equations), so why restrict ourselves to only a certain kind of change of coordinates? Why do we care about the "form"? I suppose that sometimes there could exist some non-canonical transformation that lets me solve the equation... or not?
| It's for the same reason that Einstein sought a covariant form for the equations of gravity. The equations should not be dependent on our coordinatisation of space. This he called general covariance.
In classical mechanics, canonical transformations leave the Hamiltonian invariant, and this means they leave the time evolution of the system invariant, and hence the physics of the system is still the same.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/450187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Potential Difference of a battery - What does it mean? I have studied current electricity for a while now. When I look back at basic concepts, I am quite clear about what current, electrons, and resistance are. But I cannot form a picture of the potential difference or voltage of a battery. Also, in a circuit it is said that the potential drops across a resistance; why is that so?
What does it mean to have a potential difference? I asked my friends too, but none of them have quite understood the concept either. So, can you clarify the concept of p.d. using analogies or any other way that might make it easy?
| In short, this arose from a hydrodynamic analogy: the electrostatic potential is similar to pressure, the electrical resistance is similar to the hydraulic resistance of a pipe, and the electric current is similar to the flow rate of a liquid. Battery voltage is similar to hydraulic head. More details on this concept can be found in Maxwell's article On Faraday's Lines of Force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/450445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Is it easier to balance a free-standing ladder when standing on the bottom step or the top step? This video shows someone climbing a ladder not resting against a wall: https://youtu.be/bWOSbEtDAas . I was wondering if it's harder to balance the ladder when you're standing near the bottom, or standing near the top.
Intuition for why it might be easier near the bottom: the ladder tipping has less effect on your body's center of mass.
Intuition for why it might be easier near the top: climbing the ladder increases the moment of inertia of the you+ladder system. Analogy: it's easier to balance a vertical rake on the palm of your hand than it is to balance a vertical pencil.
| At the bottom. You will begin to fall as soon as the center of mass of the system passes through the vertical plane of the legs of the ladder. As you point out, this happens more easily up high by simple geometry.
You also mention a higher moment of inertia associated with being up higher. This is true, but it really only means that, as you fall, you accelerate slightly more slowly than you would have if you had been lower.
Consider a similar situation, which is easier: walking in platform shoes elevated 1cm or 1m?
In response to the rake vs pencil: here you are able to move the support point so the slower fall from the higher moment of inertia is very helpful. When balancing on a ladder, you can only lean your weight from the top.
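A minimal sketch of the moment-of-inertia point (my own illustration, not from the original answer): treating the person-plus-ladder as a point mass $m$ at height $h$ above the pivot, the tipping torque is $mgh\sin\theta$ and the moment of inertia is $mh^2$, so the angular acceleration is $(g/h)\sin\theta$; a larger $h$ gives a slower fall but does not change when the fall starts.
```python
import numpy as np

def tipping_angular_acceleration(h, theta_deg, g=9.81):
    """Angular acceleration of a point mass balanced at height h above the
    pivot once it has leaned theta degrees past vertical: torque m*g*h*sin(theta)
    divided by moment of inertia m*h**2 (the mass cancels)."""
    return g * np.sin(np.radians(theta_deg)) / h

for h in (0.5, 2.0, 6.0):            # illustrative heights in metres
    alpha = tipping_angular_acceleration(h, 5)
    print(f"h = {h:3.1f} m: alpha = {alpha:.3f} rad/s^2")
```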
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/450561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to build an antisymmetric selfdual tensor out of two 4-vectors? In problem C of section 1.4 of Ramond's Field Theory: A Modern Primer, we are asked to build a field bilinear in $\chi_L$ and $\psi_L$, two left-handed Weyl spinors, which transforms as the (1,0) representation of $\text{SL}(2,\mathbb{C})$. This representation is equivalent to the behavior of rank 2 tensors $B_{\mu\nu}$ which are antisymmetric and selfdual, i.e., $$B_{\mu\nu}=-B_{\nu\mu}\\B_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}B^{\rho\sigma}.$$
One can check that both $i\psi_L^\dagger \sigma^\mu\psi_L$ and $i\chi_L^\dagger\sigma^\mu\chi_L$ are 4-vectors. I think the correct way to proceed is to use them to build the field $B_{\mu\nu}$. My first approach was to antisymmetrize and selfdualize $\psi_L^\dagger \sigma^\mu\psi_L\chi_L^\dagger \sigma^\nu\chi_L$. This of course fails to be bilinear in the fields. Does anybody have a clue? Would something like $\psi_L^\dagger\sigma^\mu\sigma^\nu\chi_L$ work?
| Since both fields transform as $(\tfrac{1}{2},0)$ we can treat them as $SU(2)$ spinors and ignore everything else. So the question is how do you multiply two $j=\tfrac{1}{2}$ spinors to get a $j=1$ vector? Easy enough, just use regular Pauli matrices:
$$ b_L^{\pm,0} = \psi_{L\alpha} \sigma^{\pm,3}_{\alpha\beta} \chi_{L\beta}$$
or
$$ \vec{b}_L = \psi_{L,\alpha} \vec{\sigma}_{\alpha\beta} \chi_{L,\beta}$$
To see that this is equivalent to an anti-symmetric Lorentz tensor that satisfies $B_{\mu\nu} = \tilde{B}_{\mu\nu}$ write
$$B_{\mu\nu} = \begin{pmatrix}0 & -b_L^x & -b_L^y & -b_L^z \\ b_L^x & 0 & -b_L^z & b_L^y \\ b_L^y & b_L^z & 0 & -b_L^x \\ b_L^z & -b_L^y & b_L^x & 0\end{pmatrix}$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/450888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Spaghettification on quarks? Imagine a nucleon falls into a black hole. I would expect the gravitational force acting on each quark to be drastically different, but due to colour confinement wouldn't more pairs of quarks be spawned? Would this add even more mass to the black hole, and where does this energy come from?
| A similar issue arises outside the context of quantum gravity. This question is similar to asking where the energy comes from to elastically distort a solid object under gravitational tidal force as an object falls.
For example, consider Newton’s famous (but possibly apocryphal) apple falling from the tree. The Earth’s gravitational field is not uniform but radial, and dependent on the distance from the center of the Earth, so a very tiny deformation of the apple occurs, and this deformation changes the interatomic forces and the elastic energy associated with them.
So, assuming the elastic energy increases, where does it come from? Simply from the kinetic energy of the falling apple.
By this argument, I claim that no “extra” energy gets added to the black hole. If a neutron falls in from rest, the mass of the black hole simply increases by the neutron mass, 1.675e-27 kg.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/451105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What happens to the quantum information of a particle and an antiparticle when they annihilate? I understand that the quantum no-deleting theorem dictates that it's impossible to delete quantum information, so what happens to the quantum information of a particle and an antiparticle when they annihilate each other?
| First of all, the particles don't disappear: they turn into other particles. But it's not as simple as just "electron + positron -> two photons". A state containing an electron and a positron is really only well defined as $t \to -\infty$, when we can think of the two particles as being infinitely far apart and hence having no probability of annihilation. As soon as time starts to move, the state begins to evolve into a massive superposition of all the possible decay products: two photons, three photons, $n$ photons, and, if the energies are high enough, other particle-antiparticle pairs (muons, quarks, whatever), hadrons, Higgs bosons, and whatever you want. And don't forget, given the particle content you still have a lot of room to maneuver by distributing energies, momenta and spins among the different particles in the possible final states.
This complicated state is the one that holds all the information of the initial state, because the time evolution was just dictated by Schrödinger's equation, and is thus unitary. As soon as you actually observe some decay product, you collapse the state, and hence lose information.¹ If you just observe that the result is two photons with, say, $3\, \mathrm{GeV}$ of center-of-mass energy and some given momenta, can you deduce that the initial state was an electron and a positron? No, there are lots of possible initial states that result in two photons with that energy. You really need that huge superposition if you want to run the clock backwards and recover the initial state. That's where the information is.
¹ At least in the Copenhagen interpretation, but let's not get into that.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/451323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 2
} |
The reasoning behind doing series expansions and approximating functions in physics It is usual in physics that, when we have a variable that is very small or very large, we do a power series expansion of a function of that variable and eliminate the high-order terms. My question is: why do we usually make the expansion and then approximate? Why don't we just take the limit of that function as the variable becomes very small (tends to zero) or very large (tends to infinity)?
| The idea behind any expansion is to express a "complicated" function in terms of simpler ones. In the case of a series expansion, the simpler ones are polynomials. Thus for instance the function
$$
\frac{1}{a+x}-\frac{1}{a-x} \tag{1}
$$
is a difference of two approximately equal quantities when $x/a\to 0$, and so appears to be $0$ in that limit; but that's not really useful information, so it is convenient to re-express it as
$$
\frac{1}{a(1+x/a)}-\frac{1}{a(1-x/a)} \approx
-\frac{2 x}{a^2}-\frac{2 x^3}{a^4}
$$
which gives some additional information in this limit.
There are also multiple circumstances where some equations - say a differential equation - cannot be solved exactly but can be solved in some limit (often yielding a linearized equation or systems of equations), which still allows some qualitative understanding of the features of the solutions: this is the basis for perturbation theory. For instance, solving the Schrodinger equation for the Lennard-Jones potential
$$
V(r)= 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}
-\left(\frac{\sigma}{r}\right)^6\right]
$$
cannot be done analytically, but near the minimum $r_0=2^{1/6}\sigma$ one can expand to obtain
$$
V(r)\approx -\epsilon+ \frac{18\cdot 2^{2/3}\, \epsilon\, (r-r_0)^2}{\sigma^2}
$$
which, up to an unimportant constant and shift in $r$, is a harmonic oscillator potential for which the solutions are known. Thus, we can get some approximate insight (or at least some orders of magnitude) into the appropriate molecular transitions.
Of course great care must be taken to ensure that the assumptions behind the expansion are not ultimately violated by the solutions, i.e. one must understand that the resulting solutions are approximate and may fail badly in some cases.
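A quick symbolic check of the quadratic expansion quoted above (a sketch using sympy; not part of the original answer):
```python
import sympy as sp

r, sigma, eps = sp.symbols('r sigma epsilon', positive=True)

V = 4*eps*((sigma/r)**12 - (sigma/r)**6)   # Lennard-Jones potential
r0 = 2**sp.Rational(1, 6) * sigma          # location of the minimum (V'(r0) = 0)

print(sp.simplify(V.subs(r, r0)))                 # expect -epsilon
print(sp.simplify(sp.diff(V, r, 2).subs(r, r0)))  # expect 36*2**(2/3)*epsilon/sigma**2,
# half of which, 18*2**(2/3)*epsilon/sigma**2, is the coefficient of (r - r0)**2 above
```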
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/451588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 2
} |
$Q$-factor for damped oscillator (not driven)? How would this be defined?
Some of the Q-factor definitions I have encountered include:
$$Q=2\pi\frac{\text{Energy stored}}{\text{Mean power per cycle}}\\Q=2\pi\frac{\text{Energy stored}}{\text{Energy lost per period of oscillation}}\\Q=2\pi\frac{1}{\text{Fractional power lost per cycle}}$$
However, none of these seem to work for a non-driven, damped oscillator. The first two won't work because the energy stored is not constant, and unless the fractional power lost per cycle is constant (is it, and if so, how do you show that?) the third won't work either.
| The Q factor is $ 2\pi$ divided by the fraction of energy lost per cycle. If you drive the oscillator externally and keep the stored energy constant then you have to supply this fraction at every cycle.
Note: this definition does not require Q to be constant.
See https://en.m.wikipedia.org/wiki/Q_factor
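A minimal numerical sketch of this (my own, assuming the standard lightly damped oscillator $\ddot x + 2\gamma\dot x + \omega_0^2 x = 0$, whose energy decays as $e^{-2\gamma t}$): the fraction of energy lost per cycle is then essentially constant, and $2\pi$ divided by it reproduces the familiar $Q \approx \omega_0/(2\gamma)$.
```python
import numpy as np

omega0, gamma = 2*np.pi*100.0, 2.0        # rad/s; illustrative values with gamma << omega0
omega_d = np.sqrt(omega0**2 - gamma**2)   # damped oscillation frequency
T = 2*np.pi / omega_d                     # one period

# Energy of the free damped oscillator decays as exp(-2*gamma*t)
frac_lost_per_cycle = 1.0 - np.exp(-2*gamma*T)

print("Q from 2*pi / (fraction lost per cycle):", 2*np.pi / frac_lost_per_cycle)
print("Q = omega0 / (2*gamma)              :", omega0 / (2*gamma))
```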
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/451690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Hamiltonian description of a system I know that phase space is the Hamiltonian description of a system, where we deal with position and momentum on an equal footing. My question is: in this phase space, are position and momentum a basis for that system?
As far as I know they are independent in Hamiltonian dynamics, but how can I say that they are orthogonal basis functions? After all, we always draw the position and momentum axes perpendicular to each other!
| If the question is if the Hamiltonian phase space has the structure of a vector space equipped with a scalar product, the answer is negative, in general.
It is true that generalized coordinates and momenta of a system with $n$ spatial degrees of freedom are locally represented by $2 n$ real numbers and $\mathbb R^{2 n}$ can be seen as a $2 n$ dimensional vector space. But the impossibility, in general, of a global mapping of the phase space on $\mathbb R^{2 n}$ prevents the possibility of identifying coordinates and momenta as a vector.
A simple example which shows why there is such a limitation is the phase space of a rigid body in 3D. The coordinates represent three independent angles. However, the set of rotations is not a vector space because, in general, finite 3D rotations around different axes do not commute.
It turns out that the most natural structure of the Hamiltonian phase space is that of a symplectic differentiable manifold. This implies that, besides the failure of a general identification with a vector space, even at the local level the most important property of Hamiltonian coordinates is not related to the concepts of angle and scalar product, but to the concept of local volume.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/451817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Running coupling constants within a highly compressed object I wonder whether it is possible, in highly compressed objects such as neutron stars and black holes (I'm not sure that this applies to singularities), that the physical conditions within these objects partially mimic the conditions assumed to exist shortly after the big bang, particularly with regard to the boson-mediated interactions of the standard model.
My question is whether the assumed convergence of coupling constants at these scales is taken into account when we try to model these systems and whether (probably unanswerable) this may prevent the formation of a singularity.
As far as I know, current experimental evidence does give some support to the idea that at high enough energies, unification of coupling constants is a possibility.
To summarise, I am wondering if, beneath the event horizon of a collapsed star, there may still exist a recognisable physical object, due (in some handwavy way) to unification/convergence/modification of coupling constants?
| The short answer is no. Compressed objects form from gravitational interactions, and the unification of coupling constants at some unification scale would only alter the type of matter being compressed, and not the nature of the compression itself. In order to resolve a singularity, one would need a modification of General Relativity at high energies, which is the nature of the study of quantum gravity, the characteristic energy scales of which are much, much, higher than the expected unification scale of the standard model gauge theory.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/452106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If the universe is flat, does that imply that the Big Bang produced an infinite amount of energy? Too much density and the universe is closed, analogous to a sphere in four dimensions: you travel in a straight line and you end up where you started. Too little and you have a saddle: not sure about the destination if you travel in a straight line. Just the right amount and the topology is flat. The flat topology is infinite: you travel in a straight line forever.
If the topology is flat (and at this point all evidence indicates that it is, to within 0.4%), then multiplying the critical density by an infinite number of cubic meters gives you an infinite energy/stress: $$\rho_{CRIT}\ \mathrm{kg\,m^{-3}}\times \infty\ \mathrm{m^3}=\infty\ \mathrm{kg}$$
Is that a reasonable interpretation?
| If it were completely flat, you wouldn't talk about volume. The universe isn't completely flat; it does have a volume (like a pancake, it's called 'flat', but it definitely has a volume). For a truly flat object, your formula would need a surface density and a surface area, and you would get a finite answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/452460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Where does it go wrong when using $v^2 - u^2 = 2as$ down the incline for objects having different moments of inertia? Well, consider a situation where there is a sphere and a ring, of the same mass $M$ and radius $R$. They both start rolling down the inclined plane. We know their moments of inertia as well, $$I_\text{sphere}=\frac{2}{5}MR^2$$ and $$I_\text{ring}=MR^2$$ respectively. So we know that the sphere will have more translational kinetic energy, hence more velocity, so it will take less time to reach the bottom.
The question is: while using the equation for both, $$v^2 - u^2 = 2as, $$ the initial velocity is $0$ for both and the final velocities are different, but the acceleration and the distance traveled are the same. So where is the blunder happening?
And also with the equation $$v=u+at,$$ if the velocity of the sphere is greater, then what about the time? Why is the time taken less? Where are the equations going wrong, or is it me getting it wrong?
| Your kinematic equation of $v^2-u^2=2as$ is correct, but just like your question here you are neglecting the effects of friction, which gives rise to different accelerations for each object.
Considering the net force acting on each object, we actually have two forces with components acting down the ramp: gravity ($mg\sin\theta$) and friction ($f$). Without friction, the objects only will have the force $mg\sin\theta$ acting down the ramp, and there would be no net torque acting about the center of each object. Therefore, each object would slide without rolling down the ramp with the same acceleration and reach the bottom of the incline at the same time!
So, what you need to do is determine the net force acting on each object:
$$F_{net}=mg\sin\theta-f$$
However, just like the question of yours I referred to, by imposing the rolling without slipping condition, $a=\alpha R$, you are constraining friction to be a certain value for each object that depends on their moment of inertia $I$. This can be seen by considering the net torque on each object:
$$\tau_{net}=I\alpha=\frac aRI=fR$$
So we see that in order to have rolling without slipping it must be the case that
$$f=\frac{aI}{R^2}=\gamma ma$$
for $I=\gamma mR^2$
So we see that we end up with different frictional force for each object. Putting it all together:
$$mg\sin\theta-\gamma ma=ma$$
$$a=\frac{g\sin\theta}{1+\gamma}$$
Showing what you already knew: the larger value of $\gamma$ causes a lower acceleration, and hence a longer time to travel down the ramp when both objects roll without slipping.
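A quick numerical illustration of this result (my own sketch, with an assumed incline): using $\gamma_{\text{sphere}} = 2/5$ and $\gamma_{\text{ring}} = 1$, $a = g\sin\theta/(1+\gamma)$, and the time to roll a distance $s$ from rest is $t = \sqrt{2s/a}$.
```python
import numpy as np

g, theta, s = 9.81, np.radians(30.0), 2.0    # assumed incline angle and length

for name, gamma in [("sphere", 2/5), ("ring", 1.0)]:
    a = g * np.sin(theta) / (1 + gamma)      # acceleration for rolling without slipping
    t = np.sqrt(2 * s / a)                   # time to cover s from rest (s = a*t**2/2)
    v = a * t                                # final speed, consistent with v**2 = 2*a*s
    print(f"{name:6s}: a = {a:.2f} m/s^2, t = {t:.2f} s, v = {v:.2f} m/s")
```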
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/452670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Photons exerting force Gravity affects photons, and forces always work in pairs.
Does this mean that photons have a resultant force?
And would we be able to harness this resultant force to move objects using light?
|
Does this mean that photons have a resultant force?
Yes it does. If we send a light ray past a massive object and the path of the light ray is changed that means the momentum of the light is changed, and to conserve momentum the momentum of the massive object would also change. It would look like this:
This shows a star bending the light downwards, and the star will move upwards due to the reaction force. But:
And would we be able to harness this resultant force to move objects using light?
The momentum of light is tiny and for any mass heavy enough to significantly bend a light beam the deflection of the mass would be completely undetectable. So no, we can't use gravitational deflection of light to move objects.
For completeness we should note that you can use light to move objects and this is what optical tweezers do. This technique also exploits the momentum change of the light, but it is unrelated to gravitational deflection of light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/452880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are there any open questions in theoretical quantum physics? I am wondering if there are any open questions about the structure of quantum mechanics. If so, how do you know that this is an open question? Topics that come to mind are electron spin, probability amplitudes, and decoherence. I am interested in the foundations of the theory, and have studied quantum mechanics at the graduate level, yet I am somewhat curious about and unsure of the viewpoints of other scientists at this point in time (January 2019).
|
Are there any open questions in theoretical quantum physics?
There is at least this million dollar open question:
The successful use of Yang-Mills theory to describe the strong interactions of elementary particles depends on a subtle quantum mechanical property called the "mass gap": the quantum particles have positive masses, even though the classical waves travel at the speed of light. This property has been discovered by physicists from experiment and confirmed by computer simulations, but it still has not been understood from a theoretical point of view. Progress in establishing the existence of the Yang-Mills theory and a mass gap will require the introduction of fundamental new ideas both in physics and in mathematics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/452995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Could you use ferrofluid to electromagnetically propel a rocket, using it as a fuel? Could you accelerate ferrofluid through a solenoid to provide lift?
| A ferrofluid is basically tiny iron particles suspended in oil; so your proposal is effectively to use iron particles for reaction mass in a rocket. Yes, that can be done, but specific impulse is proportional to exhaust velocity. It is easier to accelerate very low-mass particles like ions to high velocities in a short distance than to accelerate relatively large particles like iron nanoparticles in the same distance. An accelerator for iron nanoparticles would need to be very long (and heavy) to attain the same exhaust velocity that can be attained with ions in an accelerator a few tens of centimeters long. So, it's a good but impractical idea.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
To what extent is the heat in the focal point due to visible light? When focusing sunlight on a piece of paper, e.g. with a magnifying glass, the paper will be charred and might eventually even burn (assuming low cloudiness). To what extent is the heat a result of the focusing of the visible light, rather than of other parts of the electromagnetic spectrum (i.e. ultraviolet or infrared) that are invisible to the naked eye?
Related to “If visible light has more energy than microwaves, why isn't visible light dangerous?”
| It's mostly just the visible light, with some infrared.
Typically, glass doesn't transmit so well outside the visible spectrum, as shown in the graph in this answer. Near infrared gets through ok, but ultraviolet transmittance drops off fairly quickly. And then you have to take dispersion into account: different wavelengths are in focus at different distances. So when the visible wavelengths are mostly in focus the infrared components won't be fully focused.
But it would be interesting (IMHO) to do the experiment, and see if a lens with relatively high UV transmittance, like a quartz lens, heats & burns stuff significantly faster than a similar lens of "normal" glass.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
What gives mass to black hole? I'd like to know: when a dying star collapses into a black hole, is there anything inside or on the event horizon that is interacting with the Higgs field?
| To answer the question in the title of this post: the mass of a black hole, and the gravitational field propagated by that mass, come from the mass of the object which originally collapsed and formed the black hole, plus any mass which fell into it after that.
Since the gravitational field of a black hole was in existence before the hole itself was formed, there seems to be no need to invoke the specific dynamics of the Higgs mechanism at the event horizon to account in some way for the mass of the black hole.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is there anything special about ebonite and fur? I'm from the Czech Republic, born in 1980. From elementary school, we all remember this mantra:
When ebonite rod is rubbed with fox fur, electrostatic charge is created.
Electrostatic charge is created by rubbing ebonite rod with fox fur.
Rubbing an ebonite rod with fox fur creates electrostatic charge.
Etc. ad nauseam.
So...
Is there anything special about the combination of ebonite and fox fur that makes it especially useful for teaching kids about electricity?
Does there even exist a clear distinction between things that do and things that don't create electrostatic charge by rubbing?
The irony: I can't remember ever hearing the word 'ebonite' in any other context than this particular strange example. (I never even knew what ebonite was until about 15 minutes ago when I googled it.)
| You're looking for the triboelectric effect.
The triboelectric series is an empirical table of materials in order, such that materials high on the list tend to give electrons to materials lower on the list. Fur is high, ebonite is low. Materials of similar index don't build up much charge separation from rubbing, while materials with largely different index do.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why do mirrors not follow Brewster's angle? Normally a material will have an angle at which the reflected light is completely polarized. Now say we have a mirror (implemented by a conductive silver coating) that reflects most of its incident light. https://physics.stackexchange.com/a/10925 says that this imperfect mirror will be mostly linearly polarized, but not at the Brewster angle. Why is this? The derivation for the Brewster angle assumes non-magnetic materials, but I believe it does not assume non-conductive materials.
| I think you may have misunderstood the answer to the question you cited. It says that light reflected from a silvered mirror will be mostly unpolarized. This is true whether the silver is on the front or back surface. There is a very slight polarization due to the less than ideal properties of the silver.
The front surface of a back-surface silvered mirror will reflect highly polarized light, but whatever gets past the front surface will be almost perfectly reflected by the silvered back surface.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Still Confused about Linear Momentum in a Circle
A point mass with mass $m$, at a distance $r$ from the centre, with constant tangential velocity and constant angular velocity, is swung around a circle ($p$ is its linear momentum).
Angular momentum is radius × linear momentum. It is conserved.
If $r$ is increased, linear momentum decreases. However, shouldn't linear momentum be conserved as well? Where does linear momentum go?
| Linear momentum is conserved when there is no force acting on the system. If you increase the radius, you will have to exert a force on the system. If this could be done without a force then you could accelerate the particle to high speed and then increase the radius to infinity. This would violate conservation of energy (and angular momentum).
In a circle, I like to imagine that the area of the triangle made by $r$ and $p$ must be conserved, just as in Kepler's second law. So when you double the radius, the linear momentum has to halve to conserve the area.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/453940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Mass hanging from spring in free fall
Q: A mass $m$ hangs from a massless spring connected to the roof of a box of mass $M$. When the box is held stationary, the mass-spring system oscillates vertically with angular frequency $\omega$. If the box is dropped and falls freely under gravity, why does the angular frequency increase?
So initially, we have $\omega=\sqrt{\frac{k}{m}}$, where $k$ is the spring constant. I read that the new angular frequency when in free-fall is $\omega'=\sqrt{\frac{k}{\mu}}$, where $\mu=\frac{Mm}{M+m}$ is the reduced mass, i.e. $\omega'=\sqrt{\frac{k(M+m)}{Mm}}$ which is clearly greater than $\omega$.
I'm not sure why this is true - I suspect it has something to do with the box oscillating (as it's no longer held stationary), but I'm not too familiar with the concept of reduced mass so I'd really appreciate a good explanation.
| A mass hangs by a spring from the roof of a box under gravity, and the mass-spring system oscillates vertically with angular frequency $\omega$.
Then the box is dropped and falls freely while the mass is moving downward.
The spring is stretched until the box and the mass are traveling at the same velocity. Then the spring pulls the mass and the box toward each other. There is no force to push them apart until they hit each other, and the problem does not say how elastic their collision is.
So the problem does not give the information needed to tell whether they will continue to oscillate or what the frequency will be if so.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to understand Berry curvature as Gaussian curvature in some limit? I would like to understand the Berry curvature and the Chern number from the viewpoint of mathematical geometry and topology.
I understand that in electronic QHE, there is a map from $k^2$ to a vector space where the eigenvectors "live". These eigenvectors are defined up to a phase (local gauge invariance).
Why is the Berry curvature defined like this?
$B=\epsilon_{ij}\langle\partial_{k_i}u|\partial_{k_j}u\rangle$
Is it standard in mathematics? Where can I find it?
My motivation comes from topology appearing in non-electronic systems such as photonics, acoustics or mechanics, where the Berry curvature plays a similar role to the one it plays in electronics. I would like to explain it without electronics.
| The Berry connection and Berry curvature only appear due to the wave nature of physical systems. That is why it also plays a role in photonics, acoustics and other classical wave equations.
The Berry connection and Berry curvature are a connection and a curvature in the mathematical sense on a vector bundle, commonly known as the Bloch bundle
\begin{align*}
\mathcal{E}_{\mathrm{Bloch}} = \bigsqcup_{k \in \mathrm{BZ}} \mathcal{H}_{\mathrm{rel}}(k) = \mathrm{span} \bigl \{ \varphi_1(k) , \ldots , \varphi_n(k) \bigr \} ,
\end{align*}
which is constructed from gluing together the eigenspaces
\begin{align*}
\mathcal{H}_{\mathrm{rel}}(k) = \mathrm{span} \bigl \{ \varphi_1(k) , \ldots , \varphi_n(k) \bigr \}
\end{align*}
spanned by the eigenfunctions associated to the eigenvalues below the characteristic energy or frequency; in solid state physics, this is the Fermi energy. Here we have assumed that your characteristic energy or frequency lies in a bulk band gap, because then the dimensionality of $\mathcal{H}_{\mathrm{rel}}(k)$ is independent of $k$ and the relevant subspace $\mathcal{H}_{\mathrm{rel}}$ of your Hilbert space depends analytically on $k$. (In classical waves, you need to pay attention to the bands with linear dispersion around $k = 0$ and $\omega = 0$, though.)
The Berry connection and Berry curvature are then, as mentioned before, just a connection and its associated curvature on this vector bundle.
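For a concrete feel, here is a minimal numerical sketch (my own, not part of the answer) that computes the Chern number of the lower band of a simple two-band lattice model, $H(\mathbf k)=\sin k_x\,\sigma_x+\sin k_y\,\sigma_y+(m+\cos k_x+\cos k_y)\,\sigma_z$, using the standard plaquette (link-variable) discretization of the Berry curvature over the Brillouin zone; the sum of the plaquette Berry phases divided by $2\pi$ should come out an integer, nonzero when $0<|m|<2$.
```python
import numpy as np

def lower_band_state(kx, ky, m):
    """Lower-band eigenvector of H = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz
    (an illustrative two-band model, not taken from the answer above)."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    H = np.sin(kx)*sx + np.sin(ky)*sy + (m + np.cos(kx) + np.cos(ky))*sz
    _, v = np.linalg.eigh(H)
    return v[:, 0]

def chern_number(m, N=60):
    ks = np.linspace(0, 2*np.pi, N, endpoint=False)
    u = np.array([[lower_band_state(kx, ky, m) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(N):
        for j in range(N):
            i2, j2 = (i + 1) % N, (j + 1) % N
            # gauge-invariant Berry phase around one plaquette of the k-grid
            loop = (np.vdot(u[i, j], u[i2, j]) * np.vdot(u[i2, j], u[i2, j2])
                    * np.vdot(u[i2, j2], u[i, j2]) * np.vdot(u[i, j2], u[i, j]))
            C += np.angle(loop)
    return C / (2*np.pi)

for m in (-1.0, 1.0, 3.0):
    print(f"m = {m:+.1f}: Chern number ~ {chern_number(m):+.2f}")
```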
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Quantum tunneling in a capacitor Ampère's law in Maxwell's equations includes the displacement current as Maxwell's correction.
Now consider a capacitor with a very thin separation. For an applied voltage, does it carry some tunneling current? If so, do we have to include this tunneling current in Maxwell's equations, which would again be a correction to them (inclusion of a tunneling current)?
| Quantum tunneling is defined when there are wavefunctions describing the particles in potential wells. The potential wells are defined in the solid plates of the capacitors, and any effect of tunneling can only happen over distances where a potential well can be modeled across the capacitor gap.
Quantum mechanical effects obey Heisenberg's uncertainty principle and are bounded by it:
$\Delta x\,\Delta p > h/2\pi$
When $h/2\pi$ is effectively zero, the problem is well described by classical models. Let's make an order-of-magnitude estimate:
$h/2\pi$ is of order $10^{-34}$ joule-seconds ($\mathrm{kg\,m^2/s}$).
The smallest distance between two metal plates that still forms a capacitor might be a few microns (1 micron = $10^{-6}$ meters).
A characteristic velocity of conduction electrons (their Fermi velocity) is of order ~$10^6$ meters/second.
The mass of the electron is ~$10^{-30}$ kilograms.
Putting these orders of magnitude together, the uncertainty principle is comfortably satisfied: $10^{-30}\gg 10^{-34}$.
That is why quantum mechanical effects are important for nanotechnology. For micron plate separations a tunneling model might give a barely measurable change in the AC currents. For smaller distances there would no longer be a capacitor to model.
With this estimate, I expect that the classical theory is unaffected for millimeter plate separation distances. DC currents will not exist, after transients, and AC currents will be well modeled by the classical Maxwell equations.
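A quick numerical restatement of that order-of-magnitude estimate (my own sketch, using the same illustrative numbers):
```python
hbar = 1.055e-34   # J*s, i.e. kg*m^2/s
dx   = 1e-6        # plate separation ~ 1 micron, in m
m_e  = 9.11e-31    # electron mass, kg
v    = 1e6         # characteristic electron velocity, m/s (order of magnitude)

dp = m_e * v       # momentum scale, kg*m/s
print(f"dx*dp = {dx*dp:.1e} J*s   vs   hbar = {hbar:.1e} J*s")
print(f"ratio = {dx*dp/hbar:.0f}  (>> 1, so a classical treatment is fine at this scale)")
```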
P.S. If you google "nanotechnology capacitors and quantum tunneling effects" a list comes up, and the problem is being considered at those dimensions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Will any charge oscillating in space create an EM wave? Would it be correct to say that any charge oscillating in space (regardless of the spatial amplitude) at a given frequency will emit an EM wave of the same frequency?
related: What change in an EM field is required to create an EM wave?
| It depends on the available space. In free space, yes. In a perfect cavity the proper cavity modes are discrete and start from some minimal frequency, so "slow" charge oscillations may not excite even the lowest cavity mode. The corresponding EM field is then purely reactive and may not propagate.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Relationship between CO2 concentration and thermal conductivity of the air in a volume? How much would the percentage of CO2 in a room need to change in order to make a measurable change (using the transient hot-wire method) to the thermal conductivity of the air inside the room? Are there other changes to the air composition that could drive a significant change in the thermal conductivity?
Thermal conductivity just happens to be the easiest air parameter for me to instrument. I'm looking for ways to modify the air in a sealed space and then sense when the space becomes open by measuring the conductivity (or whatever else will do it) as it equalises with the bulk atmosphere.
| I don’t think you can get enough CO2 into room air to have a signal and still breathe it.
Humidity might be a much stronger signal. Particularly at just above room temperature, the thermal conductivity varies significantly with humidity:
(From here, which has more info)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are problems with self-energy of point charge in classical electrodynamics solved by field quantization? Classical electrodynamics gives strange results when considering a moving charge in its self-generated field (the Abraham-Lorentz equation).
Some 50 years ago there were many efforts and publications about how to interpret those results, including works of Dirac and other prominent physicists.
My question is, whether these peculiarities are removed by the formalism of field quantization (QED). I have read that it is the case, but other sources state the opposite, so it seems to be controversial.
| No, in QED the main term of the self-action diverges and it is discarded, just like in CED.
P.S. In CED one can see that the main self-action term is a self-induction. It is not a desirable radiation reaction ("small") term, but an additional inertial ("big") term.
In QED it is less visible, but there it is still a self-induction term.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/454926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to entangle particles for their path? In the figure below, you have a source of entangled particles that then sends these particles in opposite directions.
Each stream of particles heads towards a double slit with a detector screen beyond the slits. Is it possible to entangle these particles with respect to their path, such that if a particle goes through slit A then its entangled pair must go through slit D, and if a particle goes through slit B, then its entangled pair must go through slit C?
| For the configuration you've drawn, no, that won't be possible.
If you modify the source so that it has the capability to point independently at slit A (resp. B, C, D) without illuminating the slit B (resp. A, D, C) in the process, though, then the answer is yes, that's a perfectly possible state. In this form (and unless you perform some form of quantum-eraser experiment and look at coincidence counts between the two screens) neither screen will present an interference pattern.
Entanglement in path is one of the main standard ways to produce entangled photons. For examples in practice, try e.g. this search.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/455658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do we neglect the $\hbar \omega/2$ in the Hamiltonian of the electromagnetic field? After the quantization of the electric and the magnetic field, we get the Hamiltonian of the electromagnetic field:
$$H= \hbar \omega(a^{\dagger}a +1/2) .$$
with $\hbar$ the Planck constant and $a^{\dagger}$ the creation operator.
Why can we neglect the term $\hbar \omega/2$ in many cases, e.g. when we want to describe the Rabi Hamiltonian, where we just take $H= \hbar \omega a^{\dagger}a$?
| The "quantization" procedure is ambiguous: it may contain terms proportional to $\hbar$ disappearing in the classical limit. So, we can consider Rabi Hamiltonian as a result of quantization too. The rest has been already explained in the previous posts.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/455835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Stability in Nuclear Shell Model As far as I understand, a particular sub-shell is filled with either protons or neutrons, $2(2l+1)$ of them, and never both together, since protons and neutrons fill up levels separately in the shell model. So, are the magic numbers (2, 8, 20...) achieved by filling either the energy levels corresponding to protons or those corresponding to neutrons? What if the proton energy levels are filled (e.g. $(1s)^2$ - 2 protons) and the neutron sub-shells are not (e.g. $(1s)^2 + (1p)^4$ - 6 neutrons)?
Will we still get the stability corresponding to magic number 2?
| Yes, a magic number is achieved when there is a magic number of either protons or neutrons. In your example of p=2 protons and n=6 neutrons ($^8$He), we might then expect it to have "magic" properties such as a preference for a spherical shape and a large gap in energy between the ground state and the first excited state.
For nuclei with a magic number of protons and a magic number of neutrons (for instance $^{16}$O with p=8 and n=8) these "magic" properties are further enhanced and we call them "doubly magic nuclei".
Magic numbers are useful references when thinking about the properties of nuclei but there are many subtleties to consider. For instance, in the $^8$He case you mention, there are n=6 neutrons. This number is not "magic" but it is enough to close the sub-shell level (p$_{3/2}$) which can give it more stability compared to $^7$He and $^9$He (also because of the pairing effect which provides more binding when an even number of protons or neutrons is present). Additional complications are that some magic numbers are stronger than others, and that as we study nuclei further from stability the magic numbers appear to change.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/456148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If electric fields add, and intensity is proportional to $E^2$, doesn't superposing light violate energy conservation? We know that the energy in a wave goes as the electric field amplitude squared ($E_0^2$), as given by the Poynting vector, but the amplitudes of electric fields add linearly. To see why this creates a paradox, consider 2 lasers, both with linearly polarized light of electric field $E_0$. The total energy coming out of these lasers is then proportional to $2E_0^2$. Now, suppose you find a way to add the amplitudes of these 2 waves coherently so that the crests and troughs match exactly, giving a net amplitude of $2E_0$. Now the energy carried by this wave would turn out to be proportional to $4E_0^2$, which is greater than the energy we started out with! How is this possible?
| This is the key point of interference in optics (Young's double slits or the Michelson interferometer...).
On a bright fringe, the intensity is $4I_0$, but this is compensated by the dark fringes, with zero intensity. Energy is conserved but with a different spatial distribution.
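To make this explicit (my own added step, for two equal beams of intensity $I_0$ each): the two-beam pattern averages back to the supplied power,
$$I(\varphi) = 4I_0\cos^2\!\frac{\varphi}{2}, \qquad \langle I\rangle = \frac{1}{2\pi}\int_0^{2\pi} 4I_0\cos^2\!\frac{\varphi}{2}\,d\varphi = 2I_0 ,$$
so the bright fringes at $4I_0$ and the dark fringes at $0$ together carry exactly the $2I_0$ that went in.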
Sorry, I now understand the question a little better.
Indeed, if we double the field, the power is multiplied by 4.
It seems to me that the problem is the belief that the power to be supplied, when we bring two antennas indefinitely close to each other, is simply the sum of the individual powers when they are alone. Very close together, they will interact with one another, and it will presumably take more power to make them radiate.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/456598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How does the heat capacity of an object come into play in thermal radiation? So say there's a cube in space acting as a blackbody.
Each side is 2 metres. Initial cube temperature is 400 Kelvin. Mass is 15 kg.
Say the heat capacity is 500 J/(kg·K). How would that affect thermal radiation? Or is it not a factor? How does it affect the rate of cooling? And how would you calculate this?
| The temperature determines the rate at which energy is radiated away (other factors, such as the emissivity, the temperature of the surroundings, and the geometry are also relevant).
The heat capacity determines how quickly the temperature changes as internal energy is changed. All other things being equal, a hot object with a high heat capacity will cool more slowly than one with low heat capacity because it must radiate away more energy to decrease its temperature.
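As a concrete sketch (my own, using the numbers from the question and assuming an emissivity of 1 and negligibly cold surroundings): the energy balance $mc\,\mathrm{d}T/\mathrm{d}t = -\sigma A T^4$ integrates to $T(t) = \left(T_0^{-3} + 3\sigma A t/(mc)\right)^{-1/3}$.
```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
A     = 6 * 2.0**2     # surface area of the 2 m cube, m^2
m, c  = 15.0, 500.0    # mass (kg) and specific heat (J/(kg K)) from the question
T0    = 400.0          # initial temperature, K

def T(t):
    """Temperature after t seconds, from m*c*dT/dt = -sigma*A*T**4 (emissivity 1, cold surroundings)."""
    return (T0**-3 + 3*sigma*A*t/(m*c))**(-1/3)

for t in (0, 60, 600, 3600):
    print(f"t = {t:5d} s  ->  T = {T(t):6.1f} K")
```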
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/456738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probability of obtaining an eigenvalue for a degenerate spectrum of continuous eigenstates I'm trying to find the probability of obtaining an eigenvalue $\lambda$ which has a degenerate set of eigenstates $\phi_1(x)$, $\phi_2(x)$, and $\phi_3(x)$. Now, I want to know if this approach is correct for doing this:
$$p=\left|\int\phi_1^*(x)\psi(x)dx\right|^2+\left|\int\phi_2^*(x)\psi(x)dx\right|^2+\left|\int\phi_3^*(x)\psi(x)dx\right|^2$$
Now I'm not sure whether this approach is the correct one. I know that in bra-ket notation I have to take a similar approach. But with these being continuous functions, I want to know if the norm has to be taken inside the integral or outside the integral.
| Your expression is correct. The probability is indeed given by
$$p=\left|\int\phi_1^*(x)\psi(x)dx\right|^2+\left|\int\phi_2^*(x)\psi(x)dx\right|^2+\left|\int\phi_3^*(x)\psi(x)dx\right|^2$$
The expression $\langle\phi\vert\psi\rangle$ is an inner product. For the case where you have a finite dimensional space, you have
$$\vert\phi\rangle = \begin{pmatrix}\phi_1 \\ \phi_2 \\. \\. \\ \phi_n \end{pmatrix}$$
Similarly, for $\vert\psi\rangle$ and so the bra-ket is given by
$$\langle\phi\vert\psi\rangle = \sum_a \phi_a^*\psi_a$$
In a continuous space, we rewrite the sum as an integral like $\int\phi^*(a)\psi(a)da$, and the squared modulus of this integral represents the probability (summed over the degenerate states, as above).
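A small numerical sketch of this recipe (my own example, not from the answer): on a ring of circumference $L$ the plane waves $e^{\pm 2\pi i m x/L}$ are degenerate in energy, so the probability of measuring the energy $E_m$ is the sum of the two squared overlaps, evaluated here with a simple numerical integral.
```python
import numpy as np

L, N = 1.0, 2000
x = np.linspace(0, L, N, endpoint=False)
dx = L / N

def phi(m):
    """Normalized momentum eigenstate on a ring of circumference L."""
    return np.exp(2j*np.pi*m*x/L) / np.sqrt(L)

def overlap(f, g):
    """Inner product  integral of f*(x) g(x) dx  over the ring (Riemann sum)."""
    return np.sum(np.conj(f) * g) * dx

# A normalized state psi, assumed purely for illustration
psi = 0.6*phi(1) + 0.8j*phi(-1)

m = 1   # the energy E_m is doubly degenerate: phi(+m) and phi(-m)
p = sum(abs(overlap(phi(s*m), psi))**2 for s in (+1, -1))
print(f"P(E_{m}) ~ {p:.3f}")   # expect 0.36 + 0.64 = 1.00 for this psi
```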
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/457069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does sound behave differently in water than in air? I noticed in some experiments at home that sound does not behave the same in water as in air. Is there a good scientific explanation for this?
I noticed that the sound sounded distorted in water but not in air.
I also used software that let me hear the sound as if I had ears meant for underwater. I do not have the files because they are wiped automatically after I am done.
| Human ears are evolved to furnish a good impedance match between sound waves traveling in air, and the nerve array inside your ear that turns vibrations into electrical impulses. This means that the greatest possible amount of sound wave energy will be conveyed to those nerves, across the greatest possible range of different frequencies.
The characteristic impedance of water as a sound-carrying medium is completely different from that of air. When you immerse your ear in water, there will be a significant impedance mismatch between your ear and the water. The sound waves in the water will be poorly matched to your ear, which will make the sounds faint, and the sounds you do hear will be distorted because some frequencies will be attenuated more than others.
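A rough numerical illustration of how different the two media are (my own sketch, using textbook values of the specific acoustic impedance $Z=\rho c$ and the normal-incidence intensity transmission coefficient $T = 4Z_1Z_2/(Z_1+Z_2)^2$ for an air-water interface):
```python
import math

Z_air   = 1.2  * 343.0     # rho*c for air,   ~ 4.1e2 kg/(m^2 s)
Z_water = 1000 * 1480.0    # rho*c for water, ~ 1.5e6 kg/(m^2 s)

T = 4 * Z_air * Z_water / (Z_air + Z_water)**2   # normal-incidence intensity transmission
print(f"transmitted intensity fraction ~ {T:.1e}  (about {-10*math.log10(T):.0f} dB of loss)")
```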
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/457644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
If I leave a glass of water out, why do only the surface molecules vaporize? If I leave a glass of water out on the counter, some of the water turns into vapor. I've read that this is because the water molecules crash into each other like billiard balls and eventually some of the molecules at the surface acquire enough kinetic energy that they no longer stay a liquid. They become vapor.
Why is it only the molecules on the surface that become vapor? Why not the molecules in the middle of the glass of water? After all, they too are crashing into each other.
If I put a heating element under the container and increase the average kinetic energy in the water molecules to the point that my thermometer reads ~100°C, the molecules in the middle of the glass do turn into vapor. Why doesn't this happen even without applying the heat, like it does to the surface molecules?
| From a thermodynamic point of view, at fixed pressure, vaporization takes place when the temperature exceeds the temperature of the change of state, $T_c(P)$.
Within the liquid, the pressure that is to be taken into account is the hydrostatic pressure. This pressure is a little greater than 1 bar, and the associated vaporization temperature is 100 °C.
At the surface (a layer a few mean free paths thick), the environment of the molecules is different: the pressure to be taken into account is the partial pressure of water vapor, which is related to the moisture content of the air. If the humidity is less than 100%, this pressure is well below 1 bar and evaporation takes place at a much lower temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/457717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Conceptual understanding of operators in QM Do operators in QM represent in some fashion the action of the measurement apparatus on a state being measured? Usually operators in QM are introduced as abstract transformations whose eigenvectors/eigenvalues are axiomatically the possible results of measurement, with an explanation along the lines of "because it works". However it seems like a coincidence that the operators that determine possible measurement results are, well, operators that transform states on which they act, as though the dynamical act of measurement itself were being modeled by a coarsely-grained apparatus-state interaction during the process of measurement, and the possible results of measurement are those fixed-point states for which the operator isn't "scrambling things up" during measurement (i.e the eigenfunctions). For example the momentum operator is associated with infinitesimal spatial translations, which makes sense because an apparatus that measures momentum has to in some fashion probe how a state translates in space without changing it. Has a view like this been fleshed out? It seems like it could shed some light on the measurement problem; it would make sense for the dynamical evolution of states being acted on by operators to eventually settle down (collapse) to the fixed points of the operator.
| I think that, in the same sense as previous members have answered, the Hamiltonian has a certain role to play in the time evolution of the system. Viewed in the Schrödinger picture it is the state of the system which evolves, but in the Heisenberg picture it is the operator, or rather the expectation value, which evolves in time. These two are completely equivalent to each other. I think it is an analogue of phase space volume in classical mechanics. And when any other operator commutes with the Hamiltonian, it is called a constant of motion. This is what I have inferred from it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/457908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 4
} |
Dot product in E&M
I'm learning graduate-level E&M. The textbook is the famous Jackson book. What I want to talk about now is pp. 295-298 in the 3rd edition. I attached a photo of p. 298.
It says (in the paragraph above eq. (7.15) and the footnote in the photo) that $\vec{n}\cdot \vec{n}=1$ doesn't mean $\vec{n}$ is a unit vector if $\vec{n}$ is a complex vector, and it discusses the form of $\vec{n}$ satisfying the above relation.
But it looks weird to me.
When I learned linear algebra/mathematical physics, I learned that in the complex domain it is more natural to define the inner product as
$\vec{a}\cdot\vec{b}=\sum_i a_i^\ast b_i$. If we used this definition, there would be no problem with $\vec{n}$ not being a unit vector. Why did Jackson stick to the real-domain definition of the dot product?
| The reason is quite simple: the use of the “standard” (i.e. real) dot product is the more familiar notation to physicists. Of course, you could absolutely rewrite everything in terms of some Hermitian inner product $\langle\cdot,\cdot\rangle$ and the equations might become a bit cleaner, but this would come at the expense of being slightly out-of-touch with your audience, who have been using one notation for a long time. And ultimately, a textbook needs to be understandable by a wide audience.
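A concrete example of Jackson's point (my own illustration): with the real, non-conjugating dot product, a complex vector can satisfy $\vec n\cdot\vec n = 1$ without having unit norm, e.g.
$$\vec n = (\cosh\alpha,\; i\sinh\alpha,\; 0), \qquad \vec n\cdot\vec n = \cosh^2\alpha - \sinh^2\alpha = 1, \qquad \vec n^{\,*}\!\cdot\vec n = \cosh^2\alpha + \sinh^2\alpha > 1 \quad (\alpha\neq 0).$$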
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/458121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What's the difference in a $P$-$V$ diagram that is curved versus one that is straight?
So what would the difference be between the graph above and one that has the same initial and final points but a curved path? I'm sure it has something to do with temperature, so does it mean the temperature is constant? Or is there something else going on?
| In an isotherm, which is a path through the $PV$ diagram where temperature $T$ is held constant, curved lines are formed. $P$ as a function of $V$ is given by the ideal gas law.
$$PV = NkT \Rightarrow \boxed{P = \frac{NkT}{V}}$$
So, the isothermal curves are similar to the shape of $y = 1/x$ in the first quadrant.
However, other sorts of curves exist, such as adiabatic curves, and, also, really any other sort of curve you'd like. Curves through a $PV$ diagram don't need to be isotherms, though they certainly could be.
A straight-line path, like the one in your picture, is very obviously not an isotherm. Thus, going along such a path requires a changing temperature.
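If it helps to see this concretely, here is a minimal numerical sketch (my own addition, with made-up endpoint values for one mole of ideal gas) that evaluates $T = PV/nR$ along a straight-line path; the temperature clearly varies, so the path cannot be an isotherm.

```python
import numpy as np

# Illustrative endpoints for a straight-line path on the P-V diagram
# (these values are assumptions, not from the original problem).
R = 8.314                     # J/(mol K), gas constant; n = 1 mol
P1, V1 = 2.0e5, 1.0e-2        # Pa, m^3
P2, V2 = 1.0e5, 3.0e-2

s = np.linspace(0, 1, 5)      # parameter along the straight line
P = P1 + s * (P2 - P1)
V = V1 + s * (V2 - V1)
T = P * V / R                 # ideal gas law: T = PV/(nR), n = 1

for Pi, Vi, Ti in zip(P, V, T):
    print(f"P = {Pi:9.0f} Pa, V = {Vi:.3f} m^3, T = {Ti:6.1f} K")
# T changes from ~241 K to ~361 K (and even passes through a maximum),
# so the straight-line path is clearly not an isotherm.
```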
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/458182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Are powers of the harmonic oscillator semiclassically exact? The Duistermaat-Heckman theorem, although too complex for me to completely grasp, states that under some conditions, the partition function for a special class of Hamiltonians is semiclassically exact. The harmonic oscillator is exact and should be an element of this class - the problem is that I don't completely understand the theorem, therefore can't generalize it.
I would like to know if, for systems of the form
$$ H(q,p) = \left( \frac{q^2+p^2}{2} \right)^\gamma \, , \quad \gamma \in \mathbb{N} \, ,$$
the partition function is semiclassically exact, and if this exactness implies that the semiclassical propagator is also exact.
| There are at least 2 types of partition functions:
*
*A finite-dimensional integral $$Z~=~\int \!dq~dp ~\exp\left\{-\frac{i}{\hbar}H(q,p)\right\}$$ with some $U(1)$ circle action, which OP didn't specify. Here a (Wick-rotated/oscillatory) Duistermaat-Heckman theorem applies.
*A loop-space functional integral $$Z~=~\int_{q(0)=q(T)}\!{\cal D}q~{\cal D}p ~\exp\left\{\frac{i}{\hbar}\int_0^T\! dt(p\dot{q}-H(q,p))\right\},$$ where the Hamiltonian $H(q,p)$ has no explicit $t$-dependence. Niemi-Tirkkonen equivariant localization to constant loops works generically away from caustics, cf. Refs. 1-3.
References:
*
*R.J. Szabo, arXiv:hep-th/9608068; section 4.6.
*A.J. Niemi & O. Tirkkonen, arXiv:hep-th/9206033; eq. (28).
*A.J. Niemi & O. Tirkkonen, arXiv:hep-th/9301059; eq. (3.23).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/458282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Friction acting as an internal force I was solving this problem in my assignment:
Assuming a frictional force F acts on the block of mass m, a force -F will act on the plank of mass M. Hence, the net work done by the frictional force should be zero, as friction is an internal force, but option D is marked incorrect. What's the error in my reasoning?
Thanks in advance.
| For the free-body system we can write these equations:
$$m\,\ddot{x}_m+F_\mu=0$$
$$\dot{x}_m=-\frac{F_\mu}{m}\,t+v_0\tag 1$$
$$M\,\ddot{x}_M-F_\mu=0$$
$$\dot{x}_M=\frac{F_\mu}{M}\,t$$
Where $F_\mu$ the friction force between the block and the plank.
the work done by the friction force on each body is
$W=\int \,F\, dx=\int_0^{t_s}\,\,F\,\frac{dx}{dt}\,dt$
where $F=-F_\mu$ for the block (friction opposes its motion) and $F=+F_\mu$ for the plank.
$t_s$ is the time it takes the mass $m$ to reach the end of the plank.
$\Rightarrow$
$$W_m=-F_\mu\int_0^{t_s}\left(\,-\frac{F_\mu}{m}\,t+v_0\right)\,dt=-F_\mu\left(-\frac{1}{2}\frac{F_\mu\,t_s^2}{m}+v_0\,t_s\right)\tag 2$$
and
$$W_M=F_\mu\,\int_0^{t_s}\,\left(\frac{F_\mu}{M}\,t\right)\,dt=\frac{1}{2}\frac{F_\mu^2\,t_s^2}{M}\tag 3$$
with equation (1) and the velocity $v_s$ that the mass $m$ reach at time $t_s$ we can calculate the final time:
$v_s=-\frac{F_\mu}{m}\,t_s+v_0\quad \Rightarrow$
$$t_s=\frac{m}{F_\mu}\,\left(v_0-v_s\right)\tag 4$$
with equation (4) in (2) and (3) we obtain for $W_m$
$$W_m=-\frac{1}{2}\,m\left(v_0^2-v_s^2\right)\quad, v_0 > v_s \Rightarrow\quad W_m < 0$$
and for $W_M$
$$W_M=\frac{1}{2}\,\frac{\left(v_0-v_s\right)^2\,m^2}{M}> 0$$
so friction does negative work on the block and positive work on the plank. Their sum is negative and equals minus the heat generated by the relative sliding. The net work done by friction is therefore not zero, even though friction is an internal force: the two forces $\pm F_\mu$ act on points that move through different distances.
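A quick numerical sanity check of these formulas (my own addition; the values of $m$, $M$, $F_\mu$, $v_0$ and $t_s$ below are arbitrary illustrative choices, with $t_s$ small enough that sliding is still in progress):

```python
# Numerical check of the work expressions above.
m, M = 1.0, 4.0        # kg
F_mu = 2.0             # N, kinetic friction between block and plank
v0 = 3.0               # m/s, initial speed of the block
t_s = 0.5              # s, chosen so the block is still sliding on the plank

v_s = v0 - F_mu / m * t_s                  # block speed at t_s
V_s = F_mu / M * t_s                       # plank speed at t_s

d_m = v0 * t_s - 0.5 * F_mu / m * t_s**2   # block displacement
d_M = 0.5 * F_mu / M * t_s**2              # plank displacement

W_m = -F_mu * d_m                          # friction (-F_mu) acting on the block
W_M = +F_mu * d_M                          # friction (+F_mu) acting on the plank

print(W_m, 0.5 * m * (v_s**2 - v0**2))     # both -2.5 J   (work-energy theorem on the block)
print(W_M, 0.5 * M * V_s**2)               # both  0.125 J (work-energy theorem on the plank)
print(W_m + W_M, -F_mu * (d_m - d_M))      # both -2.375 J: net work = -F_mu * relative sliding < 0
```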
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/458433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Different expressions for distance & displacement: $\int d|\vec r|$, $\int |d\vec r|$, and $|\int d\vec r|$ I came across these expressions in my book. And the book says that all these are different from each other.
The expressions are: $\int d|\vec r|$, $\int |d\vec r|$, and $|\int d\vec r|$
Are they different from each other?
I know that
*
*$|\int d\vec r|$ means magnitude of displacement,
*$\int |d\vec r|$ means total distance,
*But what about $\int d|\vec r|$?
I think it should mean total distance too, but I’m not sure if $\int d|\vec r|$ and $\int |d\vec r|$ have the same meaning, do they?
Edit : Some of the answers say that the question is not very clear, and that a little more explanation would help. I’m not sure what else to add, so I’m attaching a picture of that page
| Notation matters. You have probably seen $\int d|\vec{r}|$ written as $\int dr$, without vectors.
*
*In 1D, this is the same as $\int dx$. The letter used for the distance is not relevant.
*In more than one dimension, it is the integral over the radial coordinate, i.e. over the distance from the origin. So $\int d|\vec r|$ gives the change in that distance, $|\vec r_{\text{final}}|-|\vec r_{\text{initial}}|$, which is in general neither the total distance travelled nor the magnitude of the displacement (a short numeric illustration follows below). You also meet this radial integral when integrating over a region, for example a disc of radius $R$: $S=\int_0^{2\pi}d\varphi \int_0^R r\,dr=\pi R^2$.
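As a concrete illustration of how the three expressions differ (my own example, not part of the original answer): take half a revolution on a circle of radius $R$ centred at the origin and evaluate all three numerically.

```python
import numpy as np

# Half a revolution on a circle of radius R centred at the origin.
R = 2.0
t = np.linspace(0.0, np.pi, 100001)
r = np.stack([R * np.cos(t), R * np.sin(t)], axis=1)   # positions
dr = np.diff(r, axis=0)                                # displacement steps d r

int_d_abs_r = np.linalg.norm(r[-1]) - np.linalg.norm(r[0])   # ∫ d|r|
int_abs_dr = np.linalg.norm(dr, axis=1).sum()                # ∫ |dr|
abs_int_dr = np.linalg.norm(dr.sum(axis=0))                  # |∫ dr|

print(int_d_abs_r)   # ≈ 0       (|r| stays equal to R the whole time)
print(int_abs_dr)    # ≈ pi*R    (distance travelled along the arc)
print(abs_int_dr)    # ≈ 2*R     (magnitude of the net displacement)
```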
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/458586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
What is the relation between chemical potential and the number of particles? Chemical potential is defined as the change in energy due to change in the number of particles in a system. Let we have a system which is defined by the following Hamiltonian:
$$H = -t \sum_i^L c_i^\dagger c_{i+1} + V\sum_i^L n_i n_{i+1} -\mu \sum_i^L n_i$$
where $c^\dagger (c)$ are creation (annihilation) operators, $n$ is number operator, $t$ is hopping parameter, $V$ is nearest-neighbor interaction, $L$ is the total number of sites and $\mu$ is chemical potential.
What I understand by chemical potential is, if we set $μ=$some constant, then no matter how many sites ($L$) we add to the system, the number of particles will always be conserved. (Please correct me if I am wrong)
QUESTION:
What is the relation between chemical potential and the number of particles? i.e. if I set $μ = 10$ then how many particles are allowed in the system?
| At zero temperature, to find the relation between $\mu$ and the particle number you have to know the ground-state energy $E_N$ of the system with $N$ particles, and then $\mu= E_{N+1}-E_N$. Consequently you have to solve your Hubbard model exactly before anything else.
Once you have done this, you can approximate the definition of $\mu$ as
$$
\mu = \frac{\partial E}{\partial N}
$$
(where $E=E(N)=E_N$) and from this obtain $N$ as a function of $\mu$ by means of Legendre transformation. Set
$$
\Phi= E-\mu N
$$
Then
$$
\frac{\partial \Phi}{\partial \mu} = \frac{\partial E}{\partial\mu} -N-\mu \frac{\partial N}{\partial \mu}
$$
$$
=\frac{\partial E}{\partial N}\frac{\partial N}{\partial\mu } - N- \frac{\partial E}{\partial N}\frac {\partial N}{\partial \mu}
$$
$$
= -N
$$
At finite temperature a thermodynamic system with a fixed chemical potential must be in a grand canonical ensemble and therefore free to exchange particles with a reservoir. Consequently the particle number is not fixed but instead its average $<N>$ is determined by
$$
<N> = - \frac{\partial \Phi}{\partial \mu}
$$
where now
$$
\Phi \to E-TS-\mu N
$$
is the thermodynamic grand potential.
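To make the $N(\mu)$ relation concrete, here is a minimal sketch for the non-interacting limit $V=0$ of the Hamiltonian in the question (an assumption made purely to keep the example exactly solvable); at $T=0$ one simply fills every single-particle level lying below $\mu$.

```python
import numpy as np

def n_of_mu(mu, t=1.0, L=100):
    """Ground-state particle number vs chemical potential for the V = 0
    (free-fermion) limit of the Hamiltonian, with open boundary conditions."""
    # Single-particle hopping matrix: -t on nearest-neighbour bonds.
    h = np.zeros((L, L))
    for i in range(L - 1):
        h[i, i + 1] = h[i + 1, i] = -t
    eps = np.linalg.eigvalsh(h)        # single-particle levels eps_k
    # At T = 0 every level with eps_k - mu < 0 is occupied.
    return int(np.sum(eps < mu))

for mu in [-2.5, -1.0, 0.0, 1.0, 2.5]:
    print(mu, n_of_mu(mu))
# mu below the band bottom (-2t) gives N = 0, mu above the band top (+2t)
# gives N = L; in between N interpolates, and at mu = 0 the band is half filled.
```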
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Creation operator acting on a coherent state. Occupation number operator For a coherent state
$$|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}(a^{\dagger})^n}{n!}|0\rangle$$
I want to find a simplified expression for $a^{\dagger}|\alpha\rangle.$ I can only get this $$\begin{align}
a^{\dagger}|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}(a^{\dagger})^{n+1}}{n!}|0\rangle=e^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\sqrt{n+1}|n+1\rangle
\end{align}$$
or $$a^{\dagger}|\alpha\rangle=e^{-\frac{|\alpha|^{2}}{2}}a^{\dagger}e^{\alpha a^{\dagger}}|0\rangle.$$ Is it possible to get something more "beautiful" and "useful"?(I apologize for the unscientific lexicon.)
Ultimately, I want to find a simplified expression for $N|\alpha\rangle=a^{\dagger}a|\alpha\rangle,$ but I don't know such an expression for $a^{\dagger}|\alpha\rangle.$
| There's no easy expression for $a^\dagger\vert\alpha\rangle$, but since you are interested in $\hat N\vert \alpha\rangle$, the easy way is
\begin{align}
\hat N\vert\alpha\rangle &= e^{-\vert\alpha\vert^2/2}
\sum_n \frac{\alpha^n}{\sqrt{n!}}\hat N\vert n\rangle\, ,\\
&= e^{-\vert\alpha\vert^2/2}
\sum_n \frac{\alpha^n}{\sqrt{n!}}\,n\,\vert n\rangle\, . \tag{1}
\end{align}
What is simple and useful is
$$
\langle \alpha\vert a^\dagger =\alpha^*\langle \alpha\vert \tag{2}
$$
obtained by taking the transpose conjugate of $a\vert\alpha\rangle=\alpha\vert\alpha\rangle$. The calculation of $\langle N\rangle$ then easily follows.
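A quick numerical check of these statements in a truncated Fock space (my own addition; the truncation $N_{\rm cut}$ is an approximation, so $|\alpha|^2$ must be well below it):

```python
import numpy as np

N_cut = 60                     # Fock-space truncation (needs |alpha|^2 << N_cut)
alpha = 1.5 + 0.5j

n = np.arange(N_cut)
fact = np.cumprod(np.concatenate(([1.0], np.arange(1, N_cut))))   # n! for n = 0..N_cut-1

a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator in the Fock basis
num = a.conj().T @ a                      # number operator a† a

# Coherent state |alpha> = exp(-|alpha|^2/2) sum_n alpha^n/sqrt(n!) |n>
psi = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(fact)

print(np.linalg.norm(a @ psi - alpha * psi))          # ~1e-8: a|alpha> = alpha|alpha>
print((psi.conj() @ num @ psi).real, abs(alpha)**2)   # <N> = |alpha|^2 = 2.5
```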
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How big would a black hole have to be to absorb the Sun? How big would a slow moving black hole have to be to absorb or otherwise destroy the Sun?
| The odds of a roaming stellar black hole entering the solar system are less than 1 in a trillion. There are two predominant types of black hole in the universe. The first are the supermassive black holes found churning at the centers of galaxies. These don't pose any threat to us, at least until our galaxy collides with the Andromeda galaxy in a few billion years.
The other type are stellar-mass black holes, the smallest of which is just 3.2 solar masses.
If one of these passed near the solar system, it would perturb the Oort cloud such that it would shower the solar system with both massive and small comets.
If the black hole made its way into the solar system, we probably wouldn't notice it at first, unless it began to eat a gas giant and form an accretion disk.
By the time it reached the asteroid belt between Mars and Jupiter, the Earth would begin to be torn apart.
The sun and the black hole would begin to orbit each other. Gas would be sucked off the sun. As the sun lost mass it would begin to swell into a red giant. More and more mass would be lost until you were left with just a white dwarf. They would then orbit each other, radiating gravitational waves and further disrupting the sun, until it is finally ripped apart.
The closest known black hole is Cygnus X-1, 6000-7000 light years away, at 14.2 solar masses.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does the existence of electrons validate the integral form of electric fields? For an arbitrary charged object, it seems to be the case that we express it as a continuous sum (sum on the reals/integral) of point charges $dq$ that have a canonical Coulomb's law force.
That is to say, for an arbitrary charged object, we split it up into tiny $dq$'s (located at $\vec r'$, with the force exerted on reference point $P$ at $\vec r$ by them equal to..
$\text{let} \ \vec r - \vec r' = \vec \zeta$
$$F_{dq} = k \ dq \ \frac{\vec{\zeta}}{\zeta^2}$$
Implying..
$$\vec E = k \int \frac{1}{\zeta^2} \vec \zeta dq$$
But why do we assume that $dq$ exhibits the form $F_{dq}$? It's almost like there's a fundamentally point-like charged particle composing all charged objects.. aha! Electrons. But wasn't this theory established independent of electrons? How could we justify them without electrons? Do we need to? Is that even the justification for it? Why are we allowed to assume all charged objects are made of infinitesimal point charges and do electrons have anything to do with it?
| It was experimentally verified that assuming a continuous charge distribution is a good approximation for most, if not all, macroscopic electric phenomena. A continuous charge distribution is not equivalent to a distribution of charged point particles. Assuming a continuous charge distribution can actually be thought of as favoring the idea of continuous matter distributions, which is the opposite of the discreteness needed for the electron as a point particle.
Although it is true that electrons are point particles, if you want to rigorously deal with a collection of point charges you should not use an integral, because it is a "summation" assuming infinitely small distance between points, and that is not the case with the distribution of electrons on objects.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How do I derive the angular frequency of a simple pendulum through conservation of energy? Is it possible? I'm not exactly sure what I'm doing wrong.
So far I've gotten:
$mgl(1-\cos\theta) = \frac12\omega^2l^2$
Which then gives $\omega = \sqrt{\frac{2g(1-\cos\theta)}{l}}$ which is incorrect. Where am I going wrong??
| The problem is that the $\omega$ in $v=\omega r$ here is the instantaneous angular velocity $d\theta/dt$ of the swinging bob, which is not the same thing as the angular frequency of the oscillation. Energy conservation gives you $\dot\theta$ at a particular angle, so we cannot simply identify $d\theta /dt$ with the $\omega$ of simple harmonic motion.
The total energy of the system
$$E=\tfrac{1}{2}m\dot{\theta}^2 l^2-mgl\cos(\theta),$$
with $\theta$ measured from the downward vertical, so that the height of the bob is $-l\cos\theta$.
Since no non-conservative forces act on the system, the total energy is conserved:
$dE/dt=0$
Hence
$$m\dot{\theta}\, l^2\,\ddot{\theta}+mgl\sin(\theta)\,\dot{\theta}=0$$
Using the small-angle approximation $\sin(\theta)\approx\theta$, this becomes
$$m\dot{\theta}\, l\,[\,\ddot{\theta}\,l+g\theta\,]=0$$
Since $v=\dot{\theta}\,l$ is not identically zero, we can divide it out.
Hence
$$\ddot{ \theta}l+g\theta=0$$
Or $$\ddot{ \theta}+\omega^2 \theta=0$$
For $\omega=\sqrt{g/l}$
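As a cross-check (my own addition, with the assumed values $g=9.81\,\mathrm{m/s^2}$ and $l=1\,\mathrm m$), one can integrate the exact equation $\ddot\theta=-(g/l)\sin\theta$ and compare its period with the small-angle result $2\pi\sqrt{l/g}$:

```python
import numpy as np

g, l = 9.81, 1.0
T_small = 2 * np.pi * np.sqrt(l / g)      # small-angle prediction, ≈ 2.006 s

def period(theta0, dt=1e-5):
    """Period of the full pendulum (no small-angle approximation),
    found by timing a quarter swing from rest at theta0 down to theta = 0."""
    theta, theta_dot, t = theta0, 0.0, 0.0
    while theta > 0.0:
        theta_dot += -(g / l) * np.sin(theta) * dt   # semi-implicit Euler step
        theta += theta_dot * dt
        t += dt
    return 4 * t

print(T_small)          # 2.006 s
print(period(0.1))      # ≈ 2.007 s: agrees for small amplitude
print(period(1.5))      # ≈ 2.33 s: noticeably longer for large amplitude
```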
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to start moving in at $Re\ll1$ I find it difficult to see how something can accelerate (and therefore increase its velocity, e.g. start to move) in a $Re\ll 1$ situation.
It is customary, at low Reynolds numbers, to ignore inertial effects. This means that the nonlinear terms in the Navier-Stokes equations vanish. One can prove from the resulting equation (Stokes' equation) that, for example, if a sphere of radius $R$ is placed in a fluid of viscosity $\eta$ and local velocity $v$ (in a configuration at which $Re\ll 1$), the drag force experienced by the sphere will be
$$f_D=6\pi\eta R v$$
where $\eta$ is the dynamic viscosity of the fluid, $R$ the radius of the sphere.
What does this mean in terms of the force balance on the sphere?
Let's assume that I apply a driving force $F$ on the sphere, and I apply Newton's 2nd Law to this situation
$$F_{tot}=F - f_D = F - 6\pi\eta R \frac{dx}{dt} = m \frac{d^2 x}{dt^2}$$
where $f_D$ is the drag force. Does this imply that the motion of the sphere is given by the solution of the ODE above?
Or is the acceleration always going to be zero because of the absence of inertia? How can one justify this in the formalism used above? By imposing that the inertial mass $m=0$?
Edit: The following extract, from this well-known review on microswimmers, hopefully justifies the doubts about whether acceleration is meaningful or not in this context. The authors had first stated: "Since swimming flows are typically unsteady, we implicitly assume the typical frequency ω is small enough so that the frequency Reynolds number $\rho L^2 \omega/\eta$ is also small."
| You are right that simple back-and-forth flapping cannot propel anything at $Re\ll 1$: for any time-reversible (reciprocal) stroke, the forward motion is exactly neutralized by the backward motion (this is the "scallop theorem").
The only way to create propulsion with such a simple appendage is a non-reciprocal motion such as a screw. That's why small swimmers have a long tail which they rotate, instead of flapping it back and forth.
This all is explained well in this old National Committee for Fluid Mechanics video (at ~28:30 is the demonstration)
https://www.youtube.com/watch?v=51-6QCJTAjU
This picture below is from the video notes:
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Why some forces follow superposition principle? Let there be a system of $n$ source charges and a test charge $Q$. When we say superposition applies to electrostatic force, we conclude that the interaction between a given source charge and the test charge is independent of interaction between other source charges and the test charge. Why exactly it is the case? Also why some forces follow superposition?
| Superposition holds only because experiments show it to be true.
"Superposition is not a logical necessity, but an experimental fact"
Source of quotation:- Introduction to Electrodynamics, David J Griffiths
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Faster ways of computing feynman diagrams Obviously the machinery of QFT allows us to calculate processes, such as QED diagrams, to great precision, and whilst it is effective, it seems there are many processes that make calculations (say by hand) significantly slow.
Are there any recent developments in our machinery to compute Feynman diagrams that makes it faster to analytically compute matrix elements, widths and cross sections?
| There are a number of computer algebra systems for evaluating Feynman diagrams and doing other particle-physics calculations, such as FeynCalc, FORM, GiNaC, Package-X, etc.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Classical angular momentum components are numbers. Can they be generators of some symmetry group? In Quantum Mechanics (QM), angular momentum turn out to be the generator of rotational symmetry. This is trivial to see because in QM, angular momenta are defined by the commutation relations $$[J_j,J_k]=i\hbar\epsilon_{jkl}J_l.$$ One immediately recognises these as the generators of the rotation group. But in classical mechanics, angular momenta $L_i$ are numbers, or at best, functions of $x_j$ and $p_k$ as $L_i=\epsilon_{ijk}x_j p_k$. Can they be called the generators of some symmetry group because generators are usually differential operators or matrices?
| As you mention, they're not "just numbers" - they are functions of the coordinates and momenta. And, as such, they can indeed be used as generators of some symmetry group via the usual tool for the job in hamiltonian mechanics: the Poisson bracket.
Here, you won't be much surprised to learn that the mutual relationships between the angular momentum components are
$$
\{L_i,L_j\}=\epsilon_{ijk}L_k,
$$
where the Poisson bracket is defined as
$$
\{f,g\} = \sum_i \left[ \frac{\partial f}{\partial x_i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial x_i} \right],
$$
and the group action generated by the angular momentum on a function $f$ works as
$$
f \mapsto f+\delta\theta\,\{L_i,f\}
$$
in its infinitesimal version (for an infinitesimal rotation angle $\delta\theta$), and as
$$
f \mapsto\exp\left(\theta\{L_i,\cdot\}\right) f
$$
for a finite angle $\theta$.
And, of course, the group that they generate is simply $\rm SO(3)$, acting on the space of real functions on phase space.
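For the skeptical reader, here is a short symbolic check of these brackets (my own addition, using sympy; it is not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')
p = sp.symbols('p1 p2 p3')

def poisson(f, g):
    """Canonical Poisson bracket {f, g} in three dimensions."""
    return sum(sp.diff(f, x[i]) * sp.diff(g, p[i]) -
               sp.diff(f, p[i]) * sp.diff(g, x[i]) for i in range(3))

# L_i = eps_ijk x_j p_k
L = [x[1]*p[2] - x[2]*p[1],
     x[2]*p[0] - x[0]*p[2],
     x[0]*p[1] - x[1]*p[0]]

print(sp.simplify(poisson(L[0], L[1]) - L[2]))   # 0  -> {L1, L2} = L3
print(sp.simplify(poisson(L[1], L[2]) - L[0]))   # 0  -> {L2, L3} = L1
print(sp.simplify(poisson(L[2], L[0]) - L[1]))   # 0  -> {L3, L1} = L2
```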
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/459938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Pressure due the atmosphere Usually when we consider the pressure exerted by gas, there is nothing to do with the weight of the gas. On the contrary the atmospheric pressure is defined as the weight of the gasses. What is the difference here?
| This is actually a pretty interesting question.
For starters, it depends on the system you are analyzing. Atmospheric gasses are basically an open system that stays together due to the force of gravity. All the gas further down is supporting the weight of the above gasses. The force of this weight on top pressurizes all the air based on how much mass is above it. See for example John Rennie's linked possible duplicate to explain how it works on a macroscopic level.
In a closed container, we often don't consider the weight of the gas because the effects of the weight are minimal if the height of the container is not very large. In a very tall container, the pressure would also be affected by the weight. The gas on the bottom would have noticeably higher pressure than the gas above, because the gas on the bottom has to support all the weight of the gas above it.
The existence of the container isn't important; what is important is how tall the container is. If it is tall enough, then the weight of the gas will have a noticeable effect on the pressure distribution. Technically any closed container in gravity will have this, but unless the container is very tall the difference is negligible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Computation of $e^{i \hbar \omega a^{\dagger} a} a e^{-i \hbar \omega a^{\dagger} a}$ I need to compute terms like :
$$e^{i \omega t a^{\dagger} a} a e^{-i \omega t a^{\dagger} a}$$
Where $[a,a^{\dagger}]=1$ (they are the bosonic annihilation/creation operators).
I wonder if there is a simple formula for this. Indeed, when I try to compute the commutator:
$$[a,e^{i \omega t a^{\dagger} a}]. $$
I end up with something that doesn't look trivial.
For example:
$$[a^{\dagger} a, a] =-a .$$
But:
$$[(a^{\dagger} a)^2, a] =-2 a^{\dagger} a^2-a $$
So I don't know how I could compute the general term (and whether it is actually an easy thing to do...).
In summary: is there a simple expression for:
$$e^{i \omega t a^{\dagger} a} a e^{-i \omega t a^{\dagger} a}$$
and if so, is there a trick to compute it?
| Hint: there is a general identity
$$ \exp(\hat{X})\hat{Y}\exp(-\hat{X}) = \hat{Y} + \left[\hat{X},\hat{Y}\right] + \frac{1}{2!}\left[\hat{X},\left[\hat{X},\hat{Y}\right]\right] + \frac{1}{3!}\left[\hat{X},\left[\hat{X},\left[\hat{X},\hat{Y}\right]\right]\right] + ...\ ,$$
which I believe would be useful for your purposes.
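If you want to convince yourself numerically, here is a small check in a truncated Fock space (my own addition); because $a^\dagger a$ is diagonal in the Fock basis, the conjugation is exact even after truncation, and the result is $a\,e^{-i\omega t}$:

```python
import numpy as np

N_cut = 30
omega, t = 2.0, 0.37                      # arbitrary illustrative values
n = np.arange(N_cut)

a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator in the Fock basis
U = np.diag(np.exp(1j * omega * t * n))   # exp(i w t a†a); diagonal, so the exponential is exact

lhs = U @ a @ U.conj().T                  # e^{i w t a†a} a e^{-i w t a†a}
rhs = a * np.exp(-1j * omega * t)

print(np.max(np.abs(lhs - rhs)))          # ~1e-16: the two sides agree, i.e. a -> a e^{-i w t}
```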
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why is iron the peak of the binding energy curve? If Nickel-62 and Iron-58 have more binding energy per nucleon than Iron-56 does, then why is iron-56 shown as the peak of the binding energy curve? Also, does adding neutrons always make the atom more stable because it will increase the strong nuclear force but not add any more electrorepulsive force?
| *
*The "folk wisdom" that iron-56 has the highest binding energy per nucleon is in fact incorrect; both iron-58 and nickel-62 have a higher binding energy per nucleon, with nickel-62 being the highest. I can't do much better than citing an article on the subject:
M. P. Fewell, "The atomic nuclide with the highest mean binding energy". Am. J. Phys. 63, 653–658 (1995).
The author of that work traces this misconception back to texts on stellar nucleosynthesis in the '50s and '60s. Stellar nucleosynthesis does favor the production of iron over nickel, and the author postulates that this fact may have been conflated with the peak of the binding energy curve.
*We can roughly model nuclei as having a set of "proton energy levels" and "neutron energy levels". Since both protons and neutrons are spin-$\frac12$ fermions, this means that one can have at most two neutrons per energy level in the nucleus. Adding more neutrons to the nucleus will thus result in the neutrons being piled into higher-energy states.
However, neutrons can undergo beta-decay into protons: $n \to p^+ + e^- + \bar{\nu}$. Suppose a neutron is in a relatively high energy level in the nucleus, and there is a vacant proton energy level below it. It can be energetically favorable for this neutron to turn into a proton and drop into this lower energy level. Thus, nuclei with too many neutrons will tend to undergo beta decay. (The same argument shows why nuclei with too many protons will tend to undergo inverse beta decay.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 2,
"answer_id": 1
} |
How can I determine a planet's mass based only on information about its orbit and its parent star? I'm coding a video game with procedurally generated planetary systems and I want to make sure I'm at least somewhat scientifically correct. I've reached the part in my code where I know where a planet should be orbiting but don't know what mass to give it.
At this point, I have the mass of the star (in solar masses) and the orbital period (in Earth years) and orbital velocity of the planet (measured in Earth orbital velocities).
If there's no precise way of determining the planet's mass, is there a way to know more or less what range the mass should be in?
| The simple two-body problem won't give you bounds or constraints on planetary masses. However, one has to take into account the effects of planet-planet interactions which change the scenario. In particular, one has to take into account the effect of orbital resonances ( see the wikipedia page on this topic ) which may destabilize planetary orbits. The effect of a resonance does depend on the masses of the planets as well as on their period.
At the level of a realistic video game I would use this information just to avoid positions resulting in integer ratios of orbital periods and, after deciding distances, I would avoid using masses that result in a maximum planet-planet force much larger than the maximum value one can find in the Solar System (it should be the Jupiter-Saturn pair, but I did not check all the possible pairs). A sketch of such a mass-assignment routine is given below.
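Following that advice, here is a minimal sketch of a mass-assignment routine for the game (my own illustration: the mass range, the resonance tolerance, and the helper names are all assumptions chosen for gameplay, not physical laws, and only the resonance check is implemented here):

```python
import math
import random

M_EARTH_IN_SOLAR = 3.0e-6        # approx. one Earth mass expressed in solar masses
M_JUPITER_IN_SOLAR = 9.5e-4      # approx. one Jupiter mass in solar masses

def near_resonance(p1, p2, tol=0.05, max_int=5):
    """True if the two orbital periods are within tol of a small-integer ratio."""
    ratio = max(p1, p2) / min(p1, p2)
    return any(abs(ratio - a / b) < tol
               for a in range(1, max_int + 1)
               for b in range(1, max_int + 1))

def pick_mass(period_years, existing_periods, rng=random):
    """Assign a planet mass (in solar masses), rejecting near-resonant orbits.

    The log-uniform range (~0.1 Earth to ~10 Jupiter masses) is an assumption
    made for the game, not a physical constraint."""
    if any(near_resonance(period_years, p) for p in existing_periods):
        return None                      # caller should move or drop this orbit
    lo, hi = 0.1 * M_EARTH_IN_SOLAR, 10.0 * M_JUPITER_IN_SOLAR
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

print(pick_mass(1.0, [11.9]))    # Earth-like period next to a Jupiter-like one: OK
print(pick_mass(1.0, [2.02]))    # None: too close to a 2:1 resonance
```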
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why aren't particles constantly "measured" by the whole universe? Let's say we are doing the double slit experiment with electrons. We get an interference pattern, and if we put detectors at slits, then we get two piles pattern because we measure electrons' positions when going through slits. But an electron interacts with other particles in a lot of different ways, e.g. electric field, gravity. Seems like the whole universe is receiving information about the electron's position. Why is it not the case and the electron goes through slits "unmeasured"?
Bonus question: in real experiments do we face the problem of not "shielding" particles from "measurement" good enough and thus getting a mix of both patterns on the screen?
| Being able to distinguish which slit the electron went through is the factor that causes the wavelike interference pattern to disappear. Experiments show that the more the path can be determined, the more the result looks like two piles of classical particles rather than an interference pattern.
Here are some notes on a course where this is worked out explicitly for a Mach-Zehnder quantum interference experiment, where this continuum between "classical" and "quantum" is made mathematically explicit.
So yes, the more the experiment's electrons interact with the "universe" in a way that lets the "universe" gain information about which slit they went through, the more the "quantum interference pattern" disappears. This is a good intuition for why things at a macroscopic level behave classically: the individual quantum pieces interact with the environment so much that all of this "quantum-preserving" information leaks out.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66",
"answer_count": 6,
"answer_id": 2
} |
Diffusion equation Lagrangian: what is the conjugate field? Morse and Feshbach state on p. 313 without elaboration that the diffusion equation for temperature or concentration $\psi$ and its "conjugate" $\psi^*$ (quotation marks theirs) has the Lagrangian density:
$$L=-\nabla\psi\cdot\nabla\psi^*
-\frac{1}{2}a^2(\psi^*\frac{\partial\psi}{\partial t}-\psi\frac{\partial\psi^*}{\partial t}).
$$
I don't understand what the conjugate field, $\psi^*$, is. Since the classical (non-Schrödinger) field should be real, I suspect the conjugation symbol * refers to something other than complex conjugation. With a real field, $\psi^*=\psi$, and only $-\nabla\psi\cdot\nabla\psi$ remains, which would be the Lagrangian for the Laplace equation (steady state diffusion).
| The conjugate field ψ∗ is but the complex conjugate of ψ, so an extra degree of freedom to expedite derivation of the diffusion equation,
$$
\nabla^2 \psi = a^2 \partial_t \psi ,
$$
analogous to the Lagrangian of the free Schroedinger equation, real in that case--only.
*
*But, since this equation does not mix real and imaginary parts, take its imaginary part to be zero at the very end, and safely interpret ψ as a concentration, etc.
It is just computational expedience, namely extending to the complex plane (as one does, e.g., in electromagnetism), to avoid the quandary you observed: for a real field the term proportional to $a^2$, which is the imaginary part of the Lagrangian, simply drops out.
Alternatively, integrating by parts in the action and discarding the surface terms nets a Lagrangian density
$$
L= \psi^* (\nabla^2 -a^2 \partial_t )\psi,
$$
so ψ∗ may be thought of as an extraneous Lagrange multiplier gimmick to brutally enforce the diffusion equation as is, and concentrate on its real solutions.
Note $\int dx \psi $ is a constant in time, as physically required for your diffusing quantity.
A central solution of this equation underlying its propagator is
$$
\psi({\mathbf x},t)= \frac{a^3}{8 (\pi t )^{3/2}} ~ e^{-a^2 {\mathbf x}^2 /4t}
$$
which starts out at t =0 as a Dirac $\delta ({\mathbf x})=\psi({\mathbf x},0)$. As a result, any initial concentration profile f can be written as a linear superposition of such δs,
$$
\tilde \psi({\mathbf x},0)= \int d^3 y ~ f({\mathbf y} ) \delta ({\mathbf x} -{\mathbf y}) ,
$$
and propagated through each component thereof, by the above solution,
$$
\tilde \psi({\mathbf x},t)= \frac{a^3}{8 (\pi t )^{3/2}} ~\int d^3 y ~ \tilde \psi({\mathbf y},0 ) ~ e^{-a^2 ({\mathbf x}-{\mathbf y})^2 /4t} .
$$
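As a quick symbolic check (my own addition), one can verify with sympy that this kernel indeed satisfies $\nabla^2\psi=a^2\,\partial_t\psi$, using the radial part of the Laplacian:

```python
import sympy as sp

r, t, a = sp.symbols('r t a', positive=True)

# The central solution quoted above, written as a function of r = |x|.
psi = a**3 / (8 * (sp.pi * t)**sp.Rational(3, 2)) * sp.exp(-a**2 * r**2 / (4 * t))

laplacian = sp.diff(r**2 * sp.diff(psi, r), r) / r**2    # radial part of nabla^2
print(sp.simplify(laplacian - a**2 * sp.diff(psi, t)))   # 0: the kernel solves the equation
```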
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/460991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why did recombination make the universe transparent? It is commonly said that after the universe cooled enough for ionized Hydrogen to settle down into neutral Hydrogen, i.e. recombination, the universe became transparent. A reason I have heard for this is that most photons don’t have the right energy to be absorbed by H atoms.
But the free electrons before recombination weren’t absorbing the photons either, they were scattering them. Doesn’t light still scatter off bound electrons? For instance, my understanding is that Compton’s original experiment on Compton scattering used graphite as the source of electrons. Certainly then, photons were scattering off the electrons bound in carbon atoms?
I suspect the answer has something to do with the scattering cross section of bound electrons in neutral Hydrogen being much less than that of free electrons, but then why is that the case?
| When a photon has insufficient energy to move an electron from one energy level to a higher one (including to complete ionization), the photon may change direction, but not energy. During a period of time, say one second, about the same number of photons move towards the observer, just as if there were no atoms in the way. That is, for each photon that was heading for the observer and gets deflected to a different direction, there will (on average) be another photon not heading towards the observer which, after hitting an electron, gets deflected to move exactly towards the observer.
Only a very small fraction of the photons emitted from the last ionized atoms will raise an electron to a high enough energy level that photons of a different energy are subsequently emitted.
I hope you find this answer to be acceptable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/461233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why do electromagnetic waves have the magnetic and electric field intensities in the same phase? My question is: in electromagnetic waves, if we consider the electric field as a sine function, the magnetic field will be also a sine function, but I am confused why that is this way.
If I look at Maxwell's equation, the changing magnetic field generates the electric field and the changing electric field generates the magnetic field, so according to my opinion if the accelerating electron generates a sine electric field change, then its magnetic field should be a cosine function because $\frac{d(\sin x)}{dx}=\cos x$.
| This is one of those 'why' questions that physics can or cannot answer, depending on what you want from answer to 'why'.
If equations are a satisfactory explanation, then the Maxwell equations in Emilio's answer are a complete answer.
Unfortunately, not far beneath the surface of that answer is 'why do Maxwell's equations fit reality?' or 'why do fields behave the way they do, so that we can derive Maxwell's equations?'. Wigner, along with many other physicists, was similarly troubled by such questions.
It doesn't get any more intuitive if you go down further to QED to try to explain the classical behaviour.
At the lowest level, the answer is 'that's the way Nature behaves'.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/461393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 2
} |
How can $\frac{1}{4\pi R^2}\int_{S}V_{ext}(R)da= V_{ext}(0)$ be physically explained? I was working out problem 3.1 (4th edition) of Introduction to Electrodynamics by Griffiths, which asks for the average potential over a spherical surface due to a charge located inside the sphere (as well as verifying an equation).
I understand that, mathematically, one gets for $V_{av}$:
$$V_{av} = \frac{q}{4 \pi \epsilon R}$$
And if there is a bunch of $q$ inside the sphere one gets:
$$V_{av} = \frac{Q_{enc}}{4 \pi \epsilon R}$$
Griffiths shows that the average potential due to exterior charges is the same as if they were placed at the center.
I understand the Math to get such a result but I do not understand this result physically speaking.
How can the average potential due to an external (outside the sphere) charge $q$ be equal to the average potential due to a charge $q$ located at the center?
I agree that the potential does not have a physical meaning as such, but my intuition tells me that the difference between position a (say its value at the center of the sphere) and b (say its value at the position outside the sphere) should matter. Besides, the potential falls off like $1/r$. Thus I do not see how it is possible (physically speaking) that:
$$V_{ext}(R)= V_{ext}(0)$$
|
How can the average potential due to an external (outside the sphere) charge $q$
be equal to the average potential due to a charge $q$ located at the center?
It's not. The average potential over a spherical surface of radius $R$ due to a point charge a distance $z > R$ from the center of the sphere is $$V_\text{ave} = \frac{q}{4 \pi \epsilon_0 z}
$$
(see §3.1.4 of Griffiths). Meanwhile, the average potential over the surface if $z < R$ (i.e., the charge is inside the surface) is
$$
V_\text{ave} = \frac{q}{4 \pi \epsilon_0 R}.
$$
These quantities are not the same.
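A simple Monte Carlo check of both formulas (my own addition, in units where $q/4\pi\epsilon_0=1$ and $R=1$):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0
npts = 200_000

# Uniform random points on the sphere of radius R.
v = rng.normal(size=(npts, 3))
pts = R * v / np.linalg.norm(v, axis=1, keepdims=True)

def avg_potential(z):
    """Average of 1/|r - z e_z| over the sphere (units q/(4 pi eps0) = 1)."""
    d = np.linalg.norm(pts - np.array([0.0, 0.0, z]), axis=1)
    return np.mean(1.0 / d)

print(avg_potential(3.0), 1 / 3.0)   # charge outside: average = 1/z
print(avg_potential(0.4), 1 / R)     # charge inside:  average = 1/R
```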
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/461741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Unification of gravity and electromagnetism Have there been any attempts at unifying gravity and electromagnetism at least at classical level since Hermann Weyl's idea of gauge principle (1918)? We now have Standard Model which is very successful and many other theories. But gravity and electromagnetism are long range in nature and classical as well. Can these two be unified independent of weak and strong forces?
| Most physicists are not interested in unifying just gravity and electromagnetism, because electromagnetism is already fully unified with the weak nuclear force. They’re now sometimes just called the electroweak force.
Furthermore, the strong nuclear force has closer similarities to the electroweak force than gravity does, and Grand Unification of the strong and electroweak forces may be an easier next step.
Most physicists also have no interest in classical unification, when quantum physics is a more successful explanation of reality than classical physics. Classical electromagnetism isn’t even a correct theory, so why would we want to unify it with anything?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/462122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Why does the $\phi$-cubed theory have no ground state? In Srednicki's book, he claims that the $\phi^3$ theory has no ground state, hence this is not a physical theory.
My question is that I can't see why this system has no ground state, and I don't understand the explanation he gave either. For example, what does "roll down the hill" really mean? What is the case for a harmonic oscillator perturbed by a $q^3$ term? Maybe it's better if someone can explain it using the quantum harmonic case.
| Work on a spatial lattice of finite extent so that the field operators and the Hamiltonian are well-defined as (unbounded) operators on a Hilbert space. Consider any Hamiltonian of the form
$$
H = \epsilon^D \sum_x \Big(\Pi^2(x)+V\big(\phi(x)\big)\Big)
\tag{1}
$$
where $\epsilon$ is the lattice spacing, the sum is over all lattice sites, $D$ is the number of spatial dimensions, and $V(\phi)$ is an arbitrary polynomial. The commutation relation is
$$
\big[\phi(x),\Pi(y)\big]=i\frac{\delta_{x,y}}{\epsilon^D}.
\tag{2}
$$
Suppose that a ground state $|0\rangle$ exists. By definition, this is a state that satisfies
$$
\psi_\text{diff}\equiv
\frac{\langle \psi|H|\psi\rangle}{
\langle \psi|\psi\rangle}
-
\frac{\langle 0|H|0\rangle}{
\langle 0|0\rangle}
\geq 0
\tag{3}
$$
for all states $|\psi\rangle$. For any real number $a$, the unitary operator
$$
U(a)\equiv\exp\left(-ia\epsilon^D\sum_x\Pi(x)\right)
\tag{4}
$$
satisfies
$$
U^\dagger(a)\phi(x) U(a)=\phi(x)+a,
\tag{5}
$$
so $U^\dagger(a)H U(a)$ is the same as $H$ but with $\phi$ replaced by $\phi+a$ inside $V(\phi)$. Now consider the state
$$
|\psi\rangle\equiv U(a)|0\rangle
\tag{6}
$$
where $|0\rangle$ is the alleged ground state. Then the quantity (3) is
$$
\psi_\text{diff} =
\frac{\langle 0|\,\epsilon^D\sum_x\big(V_a(\phi(x))-V(\phi(x))\big)\,|0\rangle}{
\langle 0|0\rangle}
\tag{7}
$$
with $V_a(\phi)\equiv V(\phi+a)$. Now suppose that $V(\phi)$ is a cubic polynomial with non-zero cubic term. Then the quantity (7) is a cubic polynomial in the real variable $a$ with non-zero cubic term. Since $a$ is an arbitrary real number, this polynomial attains negative values for values of $a$ of the appropriate sign and with sufficiently large magnitude. This contradicts the assumption that $|0\rangle$ was a ground state, so this completes the proof.
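The same mechanism can be illustrated with the single anharmonic oscillator mentioned in the question, $H=\tfrac12 p^2+\tfrac12 q^2+\lambda q^3$ (my own numerical sketch, with an arbitrary $\lambda>0$): the expectation value of $H$ in a displaced Gaussian wave packet decreases without bound as the packet is pushed toward large negative $q$, so no state of lowest energy exists.

```python
import numpy as np

# Single-oscillator illustration: H = p^2/2 + q^2/2 + lam*q^3 (lam > 0 assumed).
lam = 0.1
q = np.linspace(-60, 60, 6001)
dq = q[1] - q[0]

def expect_H(q0, sigma=1.0):
    """<H> in a normalized Gaussian of width sigma displaced to q0 (hbar = m = 1)."""
    psi = np.exp(-(q - q0)**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(psi**2) * dq)
    dpsi = np.gradient(psi, dq)
    kinetic = 0.5 * np.sum(dpsi**2) * dq                       # <p^2>/2 for a real wavefunction
    potential = np.sum((0.5 * q**2 + lam * q**3) * psi**2) * dq
    return kinetic + potential

for q0 in [0.0, -5.0, -10.0, -20.0, -40.0]:
    print(q0, expect_H(q0))
# <H> decreases without bound (≈ 0.6, -0.9, -52, -605, -5611, ...) as the packet
# is displaced to large negative q: exactly the instability used in the field-theory argument.
```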
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/462496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can muons exist in space? Since muons produce when cosmic rays crash and collide into air molecules in the atmosphere, one would think that there would not be any muons in space. Is it true?
| They can exist in space, but only briefly. Muons are unstable, with a short lifetime of only 2.2 microseconds in their rest frame. They decay usually into an electron and two different kinds of neutrinos. Interplanetary and intergalactic space is not completely empty, and muons could be produced in space by occasional high-energy particle collisions, but the muons don’t last long there, or here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/462693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lagrange equations in a conservative system, understanding $\nabla_i$ For a system of multiple particles with conservative forces: $\mathbf{F}_i = - \nabla_i V$, with $V \equiv V(\mathbf{r}_1,\dots,\mathbf{r}_N)$ the potential in function of the position of the $N$ particles.
When considering constraints, we can transform our Cartesian coordinates to generalized coordinates. This results in the potential being a function of these generalized coordinates: $V(\mathbf{r}_1(q_k),\dots,\mathbf{r}_N (q_k))$.
For the generalized force $Q_k$ we find that $Q_k = - \frac{\partial V}{\partial q_k}$. Now my book says that
$$ \frac{\partial V}{\partial q_k} = \sum_i (\nabla_i V) \cdot \frac{\partial\mathbf{r}_i}{\partial q_k} \quad (1)$$
I understand where this formula comes from (chain rule). I'm also aware of what $\nabla_i$ means: $\nabla_i = (\frac{\partial}{\partial x_i},\frac{\partial}{\partial y_i},\frac{\partial}{\partial z_i})$ for partial derivation w.r.t. coordinates of the $i$-th particle. Let's write the full sum (1):
$$\frac{\partial V}{\partial q_k} = (\nabla_1 V) \cdot \frac{\partial \mathbf{r}_1}{\partial q_k} + \dots + (\nabla_N V) \cdot \frac{\partial \mathbf{r}_N}{\partial q_k}.$$
I don't understand what the $\nabla_i$ look like when written full out. For example $\nabla_1 V$, what is this equal to? Do I just take $\mathbf{r}_1$? But the nabla operator has three components, so what's up with them? How do I write these $\nabla_i$ out?
| FWIW, in this context of Lagrangian mechanics, one often writes the derivative of the $i$th particle position ${\bf r}_i$ as
$$ \nabla_i ~=~\frac{\partial}{\partial {\bf r}_i}. $$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/462913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Confusion over units in force equation? While discussing Newton's laws, our book says
Force is proportional to rate of change of momentum
so they say
F is proportional to mass * acceleration if mass is constant
So $F=kma$ where $k$ is a constant.
They then say we choose a unit of force such that it produces acceleration of $1\ \mathrm{m/s}^2$ in $1\ \mathrm{kg}$ mass so $1\ \mathrm{N}=k\cdot 1\,\mathrm{kg}\cdot 1\,\mathrm{m/s}^2$. Then they say $k=1$.
How is $k=1$? It should be $1\,\mathrm{N}/(1\,\mathrm{kg\, m/s}^2)$, which is different than just $1$. Force is always written as $F=ma$ not $F=kma$ which seems false.
This question is different because it asks about the actual concept of dimensions rather than about the number; the asker of the other question was confused about the choice of number, not of dimension.
| The unit of force in the International System is Newton (N), which is equal to kg m/s$^2$. Newton is a derived unit, all units can be expressed as a product of the seven base units of the SI. Therefore, the $k$ in the formula is dimensionless.
The book explains that you can choose $k=1$ so that force can be defined as being equal to the rate of change of momentum, instead of merely proportional to it. If early physicists had defined force as half the rate of change of momentum, the correct formula would have been $F'=\frac{1}{2}ma$, where $F'$ is the newly defined force. $F$ and $F'$ have the same dimensions (and units), but they have different magnitudes.
You can compare this to the concepts of radius and diameter: you can state that the radius of a circumference is proportional to the length of its perimeter by a factor of $\frac{1}{2\pi}$. You can also say that the diameter is proportional to the length of its perimeter by a factor of $\frac{1}{\pi}$. The radius and the diameter are both lengths (measured in $m$), but they are defined differently.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/463036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 0
} |
Bernoulli's equation basics While deriving Bernoulli's equation, we write the change in gravitational potential energy as $mg(h' - h)$, say, where $m$ is the mass and $h'$ and $h$ are the two heights. Why don't we consider the centre of mass in this case? I mean, why don't we have this term written as
$\frac{mg(h' - h)}{2}$? I feel I am having some problem understanding a concept here.
| In the derivation of Bernoulli's equation we consider a small element of a fluid so that we can assume that all fluid particles in the element has same pressure, velocity etc.
The potential of this small element is the one to be considered in Bernoulli's equation, the height of the element itself is neglected.
You asked what the potential energy in the left limb was. We cannot use Bernoulli's equation on such a large element; ideally we use it between two points. So you can determine the potential energy by integrating the potential of each small element, but you cannot plug that potential into Bernoulli's equation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/463260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does a colored filter reflect their color of light? At the moment I'm somewhat confused by the concept of colored filters; common sense states that they allow only their color of light to pass through(i.e. red filter lets red light through), but, if they appear to be a specific color, wouldn't that indicate that they reflect that color?
| I would expect that the highest quality filters do not reflect any of their colour (red, for example) and only appear red because everything behind them appears red, since they block out all the other wavelengths of visible light. A red filter like that would appear black in a completely blue room, even though it is not actually a black object; the same would apply to a blue filter in a red room.
This would be different for the filters likely to be used in a school classroom, where the filters do indeed reflect a small portion of their respective colour of light, so a red filter is genuinely red (as it appears).
Hope this helps!:-)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/463474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
How do you measure the chemical potential? It is clear how to measure thermodynamics quantities such as temperature, pressure, energy, particle number and volume. But I have no idea how to measure chemical potential.
Could someone please provide some examples of how one could measure the chemical potential?
| You can measure it indirectly by using other extensive quantities and applying thermodynamic relations (see https://en.wikipedia.org/wiki/Table_of_thermodynamic_equations). For instance, you could use $$\mu = (\frac{\partial G}{\partial N})_{p,T}$$
As for measuring it directly, that is not possible.
You can check the answer in Is there a tool to measure the chemical potential of a system? for a reason about the last point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/463572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Why the entropy change is not zero in the irreversible adiabatic process? Why the entropy change is not zero in the irreversible adiabatic process?
...while it is defined as the integral of the heat added to the system over its temperature.
| Although entropy change is defined in terms of a reversible differential transfer of heat divided by the temperature at which the heat is transferred, you can have entropy change without heat transfer. Since entropy is a state function independent of the path between states, you can calculate it by assuming any reversible path connecting the states.
A classic example of an irreversible process causing entropy change not involving heat transfer is the free adiabatic expansion of an ideal gas.
A rigid insulated chamber is partitioned into two equal parts. Half the chamber contains an ideal gas. The other half is a vacuum. An opening is created in the partition allowing the gas to freely expand into the evacuated half. Since the chamber is insulated, there is no heat transfer ($Q=0$). Since the expansion of the gas does not expand the boundaries of the chamber, there is no boundary work ($W=0$). Consequently, per the first law, the change in internal energy is zero ($\Delta U=0$). Being an ideal gas, where a change in internal energy depends only on a change in temperature, there is therefore no change in temperature.
The end result is that the volume has doubled, the pressure has halved, and the temperature is unchanged.
Although no heat transfer has occurred, the process is obviously irreversible (you would not expect the gas to spontaneously return to its original half of the chamber). But we can determine the entropy generated by taking any convenient reversible process to return the gas to its initial conditions so that the total entropy change of the system is zero. The obvious choice here is to remove the insulation and perform a reversible isothermal compression. To do that requires heat transfer to the surroundings. That amount of heat represents “lost work”, that is, the work that could have been done if the free expansion of the gas was replaced by a reversible adiabatic expansion.
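To put a number on it (this step is implied but not written out above): along the reversible isothermal compression used to return the gas from $2V_0$ back to $V_0$, the heat rejected is $Q_{rev}=nRT\ln 2$, so the entropy generated in the original free expansion is
$$\Delta S = nR\ln\frac{V_2}{V_1}=nR\ln 2>0,$$
even though $Q=0$ in the actual irreversible process.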
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/463988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
When the voltage is increased does the speed of electrons increase or does the electron density increase? I am just a high school student trying to self study, please excuse me if this question sounds silly to you.
I know that current is a product of the speed of electrons and the electron density.When current is increased it either means that the speed of electrons has increased or it means that the number density of the flowing electrons has increased.
I also know that voltage is directly proportional to current and when voltage increases(without no change in the resistance) the current will also increase.
But my question is, when voltage increases does an increase in the speed of electrons contribute for an increase in current or does an increase in electron density contribute for it.
If it isn't that black and white, then in what proportion will each of the two components increase? Does it randomly increase?
Related question:Say the electron density of a circuit that lights a light bulb increases.When this happens what change will we see in the brightness of the light bulb?I know that when the speed of electrons increase the brightness increases but what will happen when the electron density increases?
| Current is the amount of charge (electrons) passing a point in a wire per unit time. Voltage is the amount of energy in joules carried by every coulomb of charge moving through the wire. An increase in current translates to an increase in the speed of the electrons moving past our reference point. The electron density in a wire remains relatively constant, even at high wire temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
How much time does it take for a broken magnet to recover its poles? I understand that when you cut a magnet you end up with 2 magnets but I wonder how much time does it take to the magnetic domains to rearange and form the new pole. I know the answer may vary depending on the size of the magnet, the material, and some other variable so I'm searching for an answer as general as possible and how the variables may affect the answer.
| It takes zero time because no domains need to rearrange when a permanent magnet breaks in two. The spins in each half are still aligned and still produce a magnetic field.
The idea that magnets have “poles” is a misconception. There are no magnetic poles in nature, or at least none that we have found. (And physicists have looked hard for them.) This is the meaning of one of Maxwell’s equations,
$$\nabla\cdot\mathbf{B}=0.$$
The magnetic field lines of a magnet are loops than run through the interior of the magnet and then loop back around outside. The so-called “poles” are just where the field lines happen to emerge from the interior to the exterior, or return back inside. When you break a magnet, the field lines simply come out and go in in two new places, so that each half has its own loops and its own “poles”.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Why is air pressure higher in winter than in summer? At the top of a mountain, say Mt Everest, atmospheric pressure is low.
So shouldn't the same thing be true for winter season.
I.e air pressure in winters should be lesser than that in summers.
But it's the opposite.
Can someone please explain why ?
| The pressure profile of the Earth's atmosphere, from the surface all the way to its outermost layer, has a decreasing (negative) gradient. Why is that? It's gravity. The strength of gravity on a mass is inversely proportional to the square of the distance from the gravity source. Air layers closer to the planet weigh more than layers farther away, because the planet pulls those air molecules more strongly than the ones above, and also because all the rest of the atmosphere from that point up is weighing down on anything at that point. As for temperature, it isn't much of a factor affecting gravitational pull, but since gases contract when colder, we can say there is more air weighing down per $m^2$ than before. I'm not so sure about this last point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Fourier transform property in Feynman 1986 Dirac Memorial Lecture In his famous 1986 Dirac Memorial Lecture, Feynman refers to a Fourier transforms theorem holding in case F(w) satisfies "certain properties", while being restricted to positive frequencies only:
as I am interested in better understanding said "certain properties", I have Googled around in search of a source for said theorem, but could not find a way to single it out from hundreds of search results (mostly related to DFT techniques). Would anybody know the exact name of such a theorem, and/or of any publication on Fourier transform properties describing it?
| His point is that if the integral defining $f(t)$ converges for all real $t$, then $f(t)$ on the real axis is the boundary value of a function that is analytic in the lower half plane. (Observe that taking $t\to t-i\tau$, $\tau>0$, improves the convergence of the integral.) Now analytic functions that have a limit point of zeros (as is guaranteed by their vanishing in a finite interval) in the interior of their domain of analyticity have to vanish everywhere in that domain. What is not immediately clear to me is to what extent this is true for functions vanishing on the boundary of their domain, as the boundary limits can be quite singular. For such limits, I suggest that you look up "Hardy Space" on Wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cavity optomechanics Hamiltonian In cavity optomechanics the radiation pressure exerted by light moves a mirror in a cavity. Because of that the resonance frequency of the cavity changes due to change in length of the cavity (cavity frequency, $\omega_{cav} = n\pi c/L$, $L$ is the length of the cavity). The Hamiltonian of the system is given by two harmonic oscillators i.e. the cavity mode and the mechanical mode coupled by the optomechanical Hamiltonian [as discussed in this review article, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.1391, given by Eqs. (18)-(20)]:
$H = \hbar \omega_{cav}a^\dagger a + \hbar \Omega_{m}b^\dagger b - \hbar g_0 a^\dagger a (b + b^\dagger)$.
What I don't understand is that, since the cavity length $L$ is changed due to the radiation pressure, the cavity modes are now changed. So, should the modes not be represented by different creation and annihilation operators because the cavity modes are changing dynamically? How can we use the same annihilation (creation) operator '$a$' ('$a^\dagger$') for the optical mode in the Hamiltonian?
| *
*As was correctly pointed out in the other answer, only one mode is considered coupled to the oscillator, which is why one need not use additional creation/annihilation operators.
*The frequency of this mode does change as the mirror moves! The wavelength of the cavity mode is however huge compared to the displacement of the mirror, so this change can be neglected. Yet, it is sometimes included - you will probably meet such Hamiltonians, although this seriously complicates the math.
*Finally, cavity-mirror system is only one realization of such a Hamiltonian. In some cases this problem does not arise at all.
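A brief sketch of where the coupling term comes from (the standard linearization; here $x$ denotes the mirror displacement and $x_{\mathrm{zpf}}$ its zero-point amplitude, notation assumed rather than taken from the question): expanding the length-dependent cavity frequency to first order,
$$\omega_{cav}(L+x)=\frac{n\pi c}{L+x}\approx\omega_{cav}\left(1-\frac{x}{L}\right),$$
and writing $x=x_{\mathrm{zpf}}(b+b^\dagger)$, the term $\hbar\,\omega_{cav}(L+x)\,a^\dagger a$ splits into $\hbar\omega_{cav}a^\dagger a-\hbar g_0 a^\dagger a(b+b^\dagger)$ with $g_0=\omega_{cav}x_{\mathrm{zpf}}/L$. The same operators $a$, $a^\dagger$ are kept throughout; only the mode frequency is treated as displacement-dependent, and higher orders in $x/L$ are dropped.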
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Demonstration of the completeness of an orthonormal set of functions I find this concept of completeness a little bit dense when it comes to proving this property for some set of orthonormal functions. In one of my classes, my professor proved this for the orthonormal set of functions $\left\{ \sqrt{2/L} \sin( n \pi x/L) \right\}$, but it did not convince me, even though I can't tell if there is something wrong mathematically speaking. He started from the very condition of completeness, i.e.,
$$\sum_n \frac{2}{L}\sin(\frac{n\pi}{L}x')\sin(\frac{n\pi}{L}x)=\delta(x-x')\;\;\;\;\;\;\;\;\;\;\;(1)$$
and he supposed that, since the set is complete, one can describe any function in terms of it. He then wrote that
$$\delta(x-x')=\sum_nC_n\frac{2}{L}\sin(\frac{n\pi}{L}x)\;\;\;\;\;\;\;\;\;\;\;(2)$$
Then, taking advantage of the orthogonality of the set, on the interval $0\leq x\leq L$, from the equation (2)
$$\int_{0}^{L}\delta(x-x')\frac{2}{L}\sin\left(\frac{m\pi}{L}x\right)\mathrm{d}x=\sum_nC_m \delta_{m,n}=C_m$$
$$\therefore \frac{2}{L}\sin\left(\frac{m\pi}{L}x'\right)=C_m\;\;\;\;\;\;\;\;\;\;\;(3)$$
and replacing (3) into (2), one gets the condition for completeness in (1). Even if this is correct, I can't tell why. Also, I would like to know how the proof of completeness would be carried out taking the same condition but in Dirac's notation, that is, $\sum_n |\phi_n\rangle\langle\phi_n|=1$, but I have no idea how to proceed.
| Do you accept that known complete sets of functions, i.e. delta functions, exist?
If so, proving that your alternate set can reproduce the delta functions is enough. Since combining functions is linear, you can thereby always use your functions to make the deltas you need to make anything else.
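A minimal sketch of the Dirac-notation version the question asks about (assuming the standard identifications $\phi_n(x)=\langle x|\phi_n\rangle$ and $\langle x|x'\rangle=\delta(x-x')$): sandwiching the completeness relation between position eigenstates gives
$$\langle x|\Big(\sum_n|\phi_n\rangle\langle\phi_n|\Big)|x'\rangle=\sum_n\phi_n(x)\,\phi_n^*(x')=\langle x|x'\rangle=\delta(x-x'),$$
which, for the real functions $\sqrt{2/L}\,\sin(n\pi x/L)$, is exactly condition (1).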
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/464948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If sound is a longitudinal wave, why can we hear it if our ears aren't aligned with the propagation direction? If a sound wave travels to the right, then the air molecules inside only vibrate left and right, because sound is a longitudinal wave. This is only a one-dimensional motion. If our ears are oriented perpendicular to this oscillation, e.g. if they are pointing straight up, how can we hear it?
| Sound travels outwards from a source in all directions. The waves that are set in motion are spherical.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/465203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 9,
"answer_id": 0
} |
Why does a car's steering wheel get lighter with increasing speed I've noticed it is difficult to turn the wheels of a car when the car is stationary, especially cars without power steering, which is why the power steering was invented. However, I've noticed it becomes feather light when traveling at speed (some models even stiffen the steering wheel electronically at speed). So, why does a car's steering wheel get lighter with increasing speed?
| As others have posted, the forward rotation of the wheel reduces the 'scrubbing'. However, there is an opposing effect that should be mentioned: the gyroscopic effect, which would cause the steering to become more difficult to turn the faster the wheels rotate.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/465280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 3
} |
Can solar mirrors be used for amateur telescopes? Recently I have been fascinated by astronomy and telescopes. I have a small refractor telescope, but it cannot see very far, so I was interested in buying a larger telescope. Unfortunately, I cannot afford to purchase a good quality telescope, so I was trying to think of how I can make one by myself for a cheaper price.
So, my question is, is it possible to use this kind of parabolic solar reflector as a primary mirror for a decent amateur telescope? I am not looking for perfect, crystal clear images. Rather, I just want to see the Moon and the solar system close up for myself! :)
PS: I saw this thread with a similar question, but the answer was focused on high-quality professional telescopes. I'm not looking to do any science with this telescope. Just observing!
| No, it isn't. Just by looking at the reflection pattern in the photo, you can see that the consistency of focus across the surface of the mirror is lousy- certainly good enough for lighting ants on fire, but nowhere near good enough to produce good images.
If you live in or near a big city, you probably have an amateur astronomy club nearby. Many members of these clubs make their own telescopes from kits by grinding their own mirrors (it is possible!) at modest cost.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/465590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Virtual images - Work of a brain or work of a lens? I am just a high school student trying to self study, so please excuse me if this question sounds silly to you.
Many people tell me that virtual images are formed when two rays that are diverging appear to come from a point, therefore our brain thinks that it is coming from an object even though there is no object there.
However, I think that a virtual image is formed because two diverging rays converge at a point when made to go through the crystalline lens in our eye to form a real image on our retina. I think that our brain has nothing to do with the formation of virtual images.
What is actually going on here?
| Both of you are correct.
The diverging rays converge at the retina and we see an image.
The brain's role is to locate the source (the point from which the rays appear to come). Since there is no real source, our brain just extrapolates the diverging rays and makes them meet at a point.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/465680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Work when there is more than 1 force I know that for an object with an applied force, the work done is
$$W = Fd \cos \theta.$$
I was wondering what would happen when there is another force (e.g. friction)? Is it better to say that the work done for a general case is
$$W = F_{net} d \cos\theta.$$
| I think @garyp has answered the question in his comment. That is, we can discuss either the work done on the object by a specific force $F$ or the work done by the total net force $F_{net}$. Therefore, how the work is calculated depends on which force you want to discuss.
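A concrete case may help (a minimal example of my own, not tied to any particular problem): a block dragged a distance $d$ by an applied force $F$ along the motion, against kinetic friction $f$. Then
$$W_F=Fd,\qquad W_{fric}=-fd,\qquad W_{net}=(F-f)\,d=\Delta KE,$$
so each force has its own work, and only the work done by the net force equals the change in kinetic energy (the work–energy theorem).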
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/465793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Do Maxwell relations hold during a phase transition? Maxwell relations are found by taking mixed derivatives of a thermodynamic potential. Does this mean that they do not hold at a first-order phase transition, where the thermodynamic potential is discontinuous?
| You are right: if, at a phase transition, the second derivatives of a thermodynamic potential do not exist at that point, then Maxwell's relations are no longer valid there.
However, the sets of points of non-analyticity are confined to hyper-surfaces in the thermodynamic state space, which partition it into regions of analyticity (pure phases or regions of coexistence). Thus, at each point of such a hyper-surface, left- and right-limits exist (finite or infinite), and for applications this is the only relevant thing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why does a prism split light at an angle instead of curving it? I assume that when light goes through matter, it doesn't really slow down, but the waveform is pushed back due to some resonance with the atoms.
EDIT: Interference is probably a better word than resonance here
I also assume that the above effect is responsible for the refractive index of materials.
But according to these assumptions, shouldn't light rays curve more as they go deeper through matter? In other words, shouldn't that effect be cumulative with the thickness of matter the light goes through?
However, light doesn't bend at different angles if it goes through thicker glass. So where did I go wrong?
| At first, all the light components of different wavelengths move in air along the same straight line. After they fall on the surface of the prism, they get split according to their wavelengths. Because the prism has a uniform coefficient of refraction throughout its interior, the light components of different wavelengths each move through the prism along their own straight lines. Back in air, they again move along straight lines with the constant velocity $c_0$. Thus, the light components are split by the prism according to their wavelengths.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Are Electrons in a Circuit Subject to Newton's Third Law? Consider a simple electrical circuit made up of a battery, an incandescent bulb, and wire. The battery and bulb are equal in mass and are on opposite sides of a circle made up by the wire. Lastly, the circuit is operating and floating freely in microgravity.
Since an electromotive force propels objects with mass (electrons) around the circuit, can we expect the circuit, given enough battery life, to eventually rotate in the opposite direction of the electrons due to Newton’s third law of motion?
| Newton's laws do apply. The overall system will not rotate. While the electrons are rotating in one direction, the rest of the gear will indeed be rotating imperceptibly slowly in the other. But the rotation is constant, as the propulsive EMF is being exactly opposed by collision forces between electrons and metal atoms.
I would suggest that these collisions would also destroy any alignment of electron spins, negating any cumulative effect there (but I am not that much of a physicist, so I can't really argue that case). A shame, as an experiment to see whether (mass) electron spin alignment is reflected in a counteracting macroscopic material angular momentum would be an interesting one.
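To put a number on "imperceptibly slowly", here is a rough, hedged estimate (the current, loop radius and circuit mass below are made-up illustrative values, not from the question). For a loop of radius $r$ carrying current $I$, the drifting electrons carry total angular momentum $L_e=(m_e/e)\,I\,2\pi r^2$, which the rest of the circuit must balance.

```python
import math

# Illustrative (assumed) values -- none of these are specified in the original question
I = 1.0        # A, current in the loop
r = 0.1        # m, loop radius
M = 0.1        # kg, mass of the circuit (battery + bulb + wire)

m_e = 9.109e-31   # kg, electron mass
e   = 1.602e-19   # C, elementary charge

# Angular momentum carried by the drifting electrons: L = (m_e/e) * I * 2*pi*r^2
L_electrons = (m_e / e) * I * 2 * math.pi * r**2

# Treat the circuit crudely as a hoop of radius r: moment of inertia ~ M r^2
I_circuit = M * r**2
omega = L_electrons / I_circuit   # rad/s, counter-rotation rate of the circuit

print(f"electron angular momentum ~ {L_electrons:.2e} kg m^2/s")
print(f"circuit counter-rotation  ~ {omega:.2e} rad/s")
```

With these numbers the counter-rotation comes out of order $10^{-10}$ rad/s — far too small to notice, consistent with the answer above.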
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
EW theory vertices I'm trying to undertand the following vertex:
Initial state of up and anti-down quarks with a final state made of a $W^+$ boson. Does it go with the left or the right projector? I think that from the Lagrangian it should go with the left projector, but the vertex with $e^+$ and $W^-$ in the initial state and $\bar{\nu}_e$ in the final one goes with the right projector, and this is not read off from the Lagrangian.
According to my professor, the case of an R-positron with a $W^-$ giving an R-antineutrino has a vertex that goes with $\gamma^\mu P_R$, so what I want to know is how to extract this vertex if the Lagrangian does not contain that term, just $P_R\gamma^\mu P_L = \gamma^\mu P_L$.
| It would be helpful if you just wrote the Lagrangian terms you are looking at.
Destroying a Left-handed u and R antidown to create a W+ corresponds to the vertices
$$
W_\mu^- \overline {d} P_R\gamma^\mu P_L u + W_\mu^+ \overline {u} P_R\gamma^\mu P_L d ,
$$
while destroying a R-positron and a W- to yield a R-antineutrino to
$$
W_\mu^-\overline {e} P_R\gamma^\mu P_L \nu + W_\mu^+ \overline{\nu} P_R\gamma^\mu P_L e ~~~,
$$
where you focus on the first term in each line.
Note that half the species, the R leptons/quarks and L antileptons/antiquarks, are simply missing from these couplings. (In an impossible, notional world with no masses, these components would be missing everywhere, all spinors would be Weyl, and projectors would be superfluous.)
Further note that L is in no way privileged over R: it is simply our convention of mooring chirality on leptons/quarks instead of antileptons/antiquarks. (Sometimes this convention is subverted in QFT texts or GUT arraying.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is a delta resonance decay not a radioactive decay A delta resonance decays as given in http://hyperphysics.phy-astr.gsu.edu/hbase/Particles/delta.html . I wonder, why is it not a radioactive decay? In principle, most/all decays should be radioactive as it is a quite broad description:
Radioactive decay (also known as nuclear decay, radioactivity or
nuclear radiation) is the process by which an unstable atomic nucleus
loses energy (in terms of mass in its rest frame) by emitting
radiation, such as an alpha particle, beta particle with neutrino or
only a neutrino in the case of electron capture, or a gamma ray or
electron in the case of internal conversion.
https://en.wikipedia.org/wiki/Radioactive_decay
| The key point is that the technical definition is limited to atomic nuclei; the delta particle is definitely not a nucleus because its lifetime is too short for it to play any kind of role in an atom.
Said otherwise, it's impossible to form an atom out of a delta particle, and as such you don't consider the decay of a delta particle to be the decay of an atomic nucleus. It's really a matter of definition/convention.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is there so much iron? We all know where iron comes from. However, as I am reading up on supernovas, I started to wonder why there is as much iron as there is in the universe.
*
*Neither brown dwarfs nor white dwarfs deposit iron.
*Type I supernovas leave no remnant so I can see where there would be iron released.
*Type II supernovas leave either a neutron star or a black hole. As I understand it, the iron ash core collapses and the shock wave blows the rest of the star apart. Therefore no iron is released. (I know some would be made in the explosion along with all of the elements up to uranium. But would that account for all of the iron in the universe?)
*Hypernovas will deposit iron, but they seem to be really rare.
Do Type I supernovas happen so frequently that iron is this common? Or am I missing something?
| The solar abundance of iron is a little bit more than a thousandth by mass. If we assume that all the baryonic mass in the disc of the Galaxy (a few $10^{10}$ solar masses) is polluted in the same way, then more than 10 million solar masses of iron must have been produced and distributed by stars.
A type Ia supernova results in something like 0.5-1 solar masses of iron (via decaying Ni 56), thus requiring about 20-50 million type Ia supernovae to explain all the Galactic Fe.
Given the age of the Galaxy of 10 billion years, this requires a type Ia supernova rate of one every 200-500 years.
The rate of type Ia supernovae in our Galaxy is not observationally measured, though there have likely been several in the last 1000 years. The rate above seems entirely plausible and was probably higher in the past.
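The back-of-the-envelope numbers above are easy to reproduce; the sketch below just redoes the arithmetic with representative values (the specific inputs — $3\times10^{10}$ solar masses of polluted gas, 0.7 solar masses of Fe per event, 10 Gyr — are assumptions of the estimate, not measurements).

```python
# Rough Fe-budget arithmetic, mirroring the estimate in the answer above
fe_mass_fraction = 1.2e-3      # solar iron abundance by mass (a little over 1/1000)
disc_baryonic_mass = 3e10      # solar masses polluted to solar abundance (assumed)
fe_per_sn_ia = 0.7             # solar masses of Fe (via Ni-56) per type Ia SN (0.5-1 range)
galaxy_age_yr = 1e10           # years

fe_needed = fe_mass_fraction * disc_baryonic_mass   # total Fe to explain, in solar masses
n_sn = fe_needed / fe_per_sn_ia                      # number of SNe Ia required
years_per_sn = galaxy_age_yr / n_sn                  # average interval between events

print(f"Fe required: ~{fe_needed:.1e} solar masses")
print(f"SNe Ia required: ~{n_sn:.1e}")
print(f"-> about one SN Ia every ~{years_per_sn:.0f} years")
```

With these inputs the estimate lands at roughly one event every couple of hundred years, in line with the range quoted above.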
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/466889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 4,
"answer_id": 0
} |
Maxwell Tensor Identity In Schawrtz, Page 116, formula 8.23, he seems to suggest that the square of the Maxwell tensor can be expanded out as follows:
$$-\frac{1}{4}F_{\mu \nu}^{2}=\frac{1}{2}A_{\mu}\square A_{\mu}-\frac{1}{2}A_{\mu}\partial_{\mu}\partial_{\nu}A_{\nu}$$
where:
$$F_{\mu\nu}=\partial_{\mu} A_{\nu} - \partial_{\nu}A_{\mu}$$
For the life of me, I can't seem to derive this. I get close, but always with an extra unwanted term, or two.
Anyone have a hint on the best way to proceed?
| The relation as you state it does not hold. Only the spacetime integrals of the two sides of the equation are equal, under suitable boundary conditions. So this would be an error.
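To make the hint concrete (a standard manipulation, with total derivatives/boundary terms dropped): expanding the square,
$$-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}=-\frac{1}{2}\left(\partial_\mu A_\nu\,\partial^\mu A^\nu-\partial_\mu A_\nu\,\partial^\nu A^\mu\right),$$
and integrating each term by parts inside $\int d^4x$ (discarding surface terms) gives
$$\int d^4x\left[\frac{1}{2}A_\nu\,\Box A^\nu-\frac{1}{2}A_\nu\,\partial^\nu\partial_\mu A^\mu\right],$$
which is the quoted expression. As an equality of integrands, without the integral, it holds only up to total derivatives — hence the "extra unwanted terms".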
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/467007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Hawking radiation and mass annihilation Now, I just heard that the particle-antiparticle pairs that zip in and out of existence every Planck second both have positive mass. If that is so, how does Hawking radiation work? Do black holes lose mass when the particle with negative mass falls into the black hole and cancels out some of the positive mass in it? But if both particles are positive, the black hole should gain mass instead of losing it. What exactly is going on here?
| The quantum fluctuations you are referring to are 'virtual pairs'. One could object that their creation is a violation of the conservation of energy principle, but this is allowed in quantum mechanics due to the brevity of their existence. Their creation and annihilation normally leaves no net gain or loss in the total energy of the system, but when one particle is within the event horizon the orphaned counterpart cannot annihilate with it, and thus must exist. This particle, now real, accounts for some new energy in the system, and we cannot draw that from the vacuum; it is thus taken from the black hole, accounting for a very small loss of energy. The creation of the surviving particle requires more energy than the consumption of its counterpart provides to the black hole.
I hope this helps, feel free to request further specificity if needed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/467398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does the green function for the wave equation in three dimensions preserve the ordering of noises between a speaker and a listener I was provided with the following equation in class for the Green's function of a three dimensional wave equation:
However, I am confused as to how this form of the Green's function preserves the ordering of noises between a speaker and a listener. Any explanation would be much appreciated.
| Implicit in this solution is the fact that the origin of the coordinate system is located at the speaker, at the moment in time in which they emit a pulse of sound.
In other words, the speaker is located at the origin, so $ \mathbf{r}_{speaker} = \langle 0, 0, 0 \rangle $, and they emit a pulse of sound at $ t = 0 $.
I assume by ''ordering of noises'' you mean the order in time.
It's best illustrated with an example. Consider a listener located at $ \mathbf{r}_{listener} = \langle 10, 0, 0 \rangle $.
For simplicity, set $ y $ and $ z $ to zero, to only look at sound on the x-axis. Then $ \mathbf{r} = \langle x, 0, 0 \rangle $.
When $ |\mathbf{r}| - ct = |x| - ct = 0 $, a pulse will exist at the listener's location (remember that the $ \delta $ function is nonzero only when its argument is zero). Plugging in $ x = 10 $, you find that at $ t = 10 / c$, a pulse will arrive at the listener's location.
Also implicit in this solution is the fact that time is restricted to taking only non-negative values ($ t > 0 $).
For any given location at some distance $ |{\mathbf{r}}|$, the pulse will pass that location at the $ t > 0 $ that satisfies $ t = |{\mathbf{r}}| / c $, which is a later time than $ t = 0 $.
In this sense, the temporal ordering between the speaker and listener is preserved, because a listener at any location distinct from the speaker's will hear the sound at some time later than the time of speaking.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/467643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do we ignore the magnetic field vector during polarization? Most light sources in nature emit unpolarized light. Natural light sources consist of a very large number of randomly oriented atomic emitters, which emit polarized light randomly. A linear polarizer is a device whose input is light of any polarization state and whose output is linearly polarized light. Natural light can be represented by 2 independent, orthogonally linearly polarized waves of the same amplitude,
$$E_{x}(t)=E_{0}\sin(\omega t)$$
$$E_{y}(t)=E_{0}\sin(\omega t+\varepsilon )$$
In this case, the intensity is $E_{0}^{2}$, and each linearly polarized component contributes $\frac{E_{0}^{2}}{2}$.
My question is: why is only $E$ considered in polarization and not $B$?
Thank You!
| Well, when we linearly polarise an electromagnetic wave (taking the linear case just for the sake of convenience), both the electric and magnetic fields oscillate in a fixed direction, so we can say that both the electric and the magnetic field are polarised. Obviously, a light ray cannot have its electric field oscillating in one direction and its magnetic field in just a random direction. It's just a matter of choice and convenience. We have equipment that is better suited to measuring the electric field than the magnetic field, so why talk of two different vectors when one is sufficient for all purposes?
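A one-line way to see why specifying $\mathbf E$ is enough (for a plane wave in vacuum, which is the case considered here): Maxwell's equations fix
$$\mathbf B=\frac{1}{c}\,\hat{\mathbf k}\times\mathbf E,$$
so once the propagation direction $\hat{\mathbf k}$ and the polarization of $\mathbf E$ are given, the direction and magnitude of $\mathbf B$ carry no independent information.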
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/467827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Strong empirical falsification of quantum mechanics based on vacuum energy density It is well known that the observed energy density of the vacuum is many orders of magnitude less than the value calculated by quantum field theory. Published values range between 60 and 120 orders of magnitude, depending on which assumptions are made in the calculations. Why is this not universally acknowledged as a strong empirical falsification of quantum mechanics?
| Our back of the envelope prediction for the order of magnitude of the vacuum energy is indeed very wrong! However, keep in mind that
*
*It is possible to precisely fine-tune free-parameters of the theory to match the measurement. This is achieved through a delicate cancellation between so-called tree-level parameters and corrections. When we make the back of the envelope calculation, we implicitly assume that such cancellations don't occur.
*This isn't a test of quantum mechanics per se, but a test of a particular theory that obeys a combination of quantum mechanics and special relativity. Such theories are called quantum field theories. There are many such theories, as we may introduce lots of types of fields and let them interact in lots of different ways.
So, quantum mechanics isn't falsified as measurements of the vacuum energy don't directly test it. And even the theories that the measurements do test aren't falsified because we can find extremely fine-tuned combinations of parameters that match observations.
The fact that fine-tuning is required is considered problematic and arguably means that our theories might be somewhat implausible; read about naturalness/fine-tuning in physics for more information.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/467939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 9,
"answer_id": 1
} |
Why a softer phonon is easier to get excited In Kittel's book, "Some other structure B may have a softer or lower frequency phonon spectrum than A. As the temperature is increased the phonons in B will be
more highly excited (higher thermal average occupancies) than the phonons in
A."
Can you let me know why a softer phonon is easier to excite? Is it just because it has a lower $hf$? Or is it due to other reasons? Thanks a lot!
| Phonons with low energy are called "soft" for this reason. It means the same.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/468265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Flux received by a negative charge Consider two charges $+q$ and $-Q$ placed at a distance; note that the charges $q$ and $Q$ differ in magnitude.
My question: is the number of flux lines received by $-Q$ proportional to its own charge, or does the $+q$ charge have anything to say at all?
According to Gauss's law,
$$\oint_S \mathbf{E}\cdot d\mathbf{A}=\frac{q_{enc}}{\varepsilon_0}$$
(equation image from Britannica), the LHS depends on the field, which includes contributions external to the Gaussian surface, while the RHS of the equation depends on the charge enclosed within the Gaussian surface.
| The answer would be "yes, but...".
In an universe with global neutral electrical charge, the flux through a closed surface is proportional to the charge inside the surface, but it means that it is also proportional to the charge outside the surface, because both are opposite but equal in absolute value.
Your drawing shows only two opposite charges, and the rest of the universe is not supposed to matter - it is not included in the model. Therefore all the flux received by the negative charge originates in the positive one - that is, all lines that end on the negative charge start on the positive charge.
However, although you can say that the flux depends on the charge inside the closed surface or on the charge outside the closed surface, it doesn't depend on how those charges are distributed. Therefore, the flux that a given negative point charge receives doesn't depend on whether the opposite charge is close to it or very far away.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/468360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Can quantum deletion error-correcting codes be constructed? I'm wondering whether or not we can construct quantum deletion error-correcting codes. The quantum deletion error is defined by the partial trace. If we can, could anyone give an example?
| Such a code was not discovered until a few months ago.
Ayumu Nakayama and Manabu Hagiwara discovered the first example.
It encodes one qubit to eight qubits.
The details are written in the following paper.
The First Quantum Error-Correcting Code for Single Deletion Errors,
Ayumu Nakayama, Manabu Hagiwara,
IEICE Communications Express.
Its DOI is
https://doi.org/10.1587/comex.2019XBL0154
An improvement is found in the following paper.
A Four-Qubits Code that is a Quantum Deletion Error-Correcting Code with the Optimal Length,
Manabu Hagiwara, Ayumu Nakayama.
The paper is available from
https://arxiv.org/abs/2001.08405
This code encodes one qubit to four qubits.
Its encoder and decoder are given.
The paper proved there are neither two qubits deletion codes nor three qubits deletion codes. It means the four qubits code is optimal for code length.
These papers define deletion errors as partial trace operations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/468518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Effect of earth's rotation in ballistics For this purpose, let's consider the Earth's rotation constant. Does the Earth's rotational momentum get transferred to any object (a missile, for example) that gets launched? If so, why do we have to consider the Earth's rotation when launching missiles? Wouldn't the missile just follow the Earth's rotation? (Btw, sorry for any grammar mistakes, I'm from a non-English-speaking country.)
|
Does the Earth's rotational momentum get transferred to any object (a missile, for example) that gets launched?
Yes
That is why they build rocket launch sites as close to the equator as possible, so that they can use that velocity to help reach orbital velocity. At the equator the Earth is moving roughly 1000 miles per hour, and low earth orbit is about 17000, so you get about 6% of the speed you need for free.
Getting right on the equator isn't always easy when you consider where the lower stages fall back to Earth, so in most cases it's "as close as we can get". So you have launch sites in Florida for the US, French Guiana for Europe, and Kazakhstan for the USSR (which made more sense when the USSR still existed).
When launching into polar orbits, this may work against you, so you see launch sites for those satellites in more northern locations.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/468682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Difference between voltage, electrical potential and potential difference I am seriously having a hard time visualizing these two concepts in my mind.
The first source of this confusion was two parallel plates that were connected to a power supply, charged, then disconnected from the power supply and then separated from each other; strangely, the potential difference increased, but why? I have learned that the electrical potential is $$ V= k \frac{q}{d} $$ and when the distance increases the potential at a point must drop. So why does the potential difference increase? Doesn't every point between these plates feel less stress as the plates move apart, and doesn't this mean the potential is dropping, and so the potential difference as well?
| For parallel plates where the separation $d$ is much smaller than the dimensions of the plates (diameter for circular plates), and the potential difference between the plates is $V$, the electric field $E$ is given by
$$E=\frac{V}{d}$$
Where the electric field $E$ is directed from the + plate to the – plate.
Capacitance, $C$ is electrically defined as the amount of charge $q$ on the plates per volt across the plates, or
$$C=\frac{q}{V}$$
In terms of the physical characteristics of a capacitor, the capacitance is given by
$$C=\frac{εA}{d}$$
Equating the last two equations gives us
$$V=\frac{qd}{εA}$$
Where $ε$ is the electrical permittivity of the medium between the plates.
Substituting $V$ from the last equation for $V$ in the first equation gives us
$$E=\frac{q}{εA}$$
The last equation shows that the electric field strength between the plates does not depend on the plate separation. Now returning to the first equation expressed in terms of potential difference we have
$$V=Ed$$
Since $E$ is constant, increasing the separation increases the electrical potential difference. This makes sense when you consider the following definition of potential difference, or voltage:
The potential difference, $V$, is defined as the work (joules) per unit charge $q$ (coulomb) required to move the charge between the points.
The force on a charge $q$ between the plates is $qE$. The work required to move the charge from one plate to another is
$$W=qEd$$
The work per unit charge is
$$\frac{W}{q}=Ed=V$$
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/468938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Intuitive "story" explaining how orientation of spin axis affects up/down observation? Is there a "convenient fiction" that explains why the angle of an electron's spin axis affects the probability of it being observed in a spin up or spin down state?
By "convenient fiction", I mean a story or image that provides useful intuition to novices, even though it may not be technically accurate. For example the analogy of water flowing through a pipe is a convenient fiction used to introduce the concepts of current and voltage.
I imagine the electron being sent through a Stern-Gerlach device. It makes sense that the closer the spin axis is to vertical, the more strongly the electron is drawn up or down. But I don't see what would induce the electron to ever move in the "unexpected" direction. For example, if the axis is 5 degrees off vertical, what could ever induce it to move down?
Watching this Veritasium2 video leads me to imagine that the electron is constantly flipping its spin axis; but, that doesn't seem to explain how the angle of the axis affects the probability of being measured in the up or down position.
https://www.youtube.com/watch?v=v1_-LsQLwkA&t=334s
| I am afraid there is no way to get an intuition. The Stern-Gerlach experiment implies a description of reality in terms of a superposition of states. In the case of the electron spin, you have Up and Down states related to the direction in which you measure. A surrogate for intuition could be the coefficients of the base kets (base states) specifying the superposition. In fact, the probability of finding the electron Up or Down is the modulus squared of the related coefficient.
However, the image that the electron moves up or down is not justified. In quantum mechanics one speaks of the collapse of the wave function during the measurement process.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/469562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What are equations of state in thermodynamics? So I am having real trouble understanding what equations of state are and how we form them. My issue stems from reading multiple sources. So I understand that an equation of state is used to build a relationship between variables to describe a state of a system.
For example $P=P(V,T)$, where $P=Nk_BT/V$, is a function of state, but then I started reading about Gibbs, Helmholtz, enthalpy, etc., and suddenly I am very confused.
So looking at Gibbs, for example, we have two equations
$$G=E+PV-TS$$
and
$$dG=-SdT+VdP$$
but both describe the state of the system. I did not think an equation of state could be a differential equation, but according to one book I have read, $dE$ is related to an equation of state, and this has caused me great confusion.
| An equation of state is a relation between intrinsic quantities of a system in equilibrium.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/469746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
What is $Z$ useful for in a CFT? As an example, the partition function of a free boson on a torus with modular parameter $\tau$ is,
$$Z(\tau,\bar{\tau}) = \frac{1}{|\eta(\tau)|^2}.$$
In quantum field theory, the partition function allows us to compute correlators and in statistical physics one can also compute quantities of interest, such as entropy.
Now, in CFT, what is the partition function useful for, practically/physically speaking? Many books make a big deal of the derivation and then not use it for anything.
| In two-dimensional CFT, the torus partition function is useful because it sometimes encodes the space of states, and because modular invariance of the partition function constrains that space of states. The partition function encodes the space of states provided the CFT is rational (i.e. we have finitely many representations of the symmetry algebra), and the symmetry algebra is simple enough. (For example, the Virasoro algebra, as opposed to larger W-algebras.)
However, the torus partition function does not tell you everything about the CFT. Actually, there can be different CFTs that share the same partition function. For example, Liouville theory and the free boson (a.k.a. the linear dilaton theory) both have the central charge as a continuous parameter, but their torus partition functions do not even depend on that parameter. Furthermore, the partition function only depends on one complex variable $\tau$, how could it encode correlation functions that depend on many variables such as the fields' positions and quantum numbers?
For applications, the most interesting quantities are typically $N$-point correlation functions on the sphere, not the torus partition function. There is no way to deduce the former from the latter. The partition function is often given too much prominence, including in well-known books, and you are right to wonder why.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/470341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How does $r$ depend on $\varphi$ in the Schwarzschild metric? I am confused about the Wikipedia derivation of the equation
for geodesic motion in the Schwarzschild spacetime. The derivation of this equation involves a variation with respect to the longitude $\varphi$ only, and then a variation with respect to the time $t$ only.
My question is: how can we vary with respect to $\varphi$ only, when $r$ clearly depends on $\varphi$ (since $dr/d\varphi\neq0$), so that a variation of $\varphi$ leads to a variation of $r$ as well?
| Before varying a Lagrangian or an action, all variables are considered independent. The classical equations of motion then correspond to the subset of possible trajectories that leave the action stationary. That is, before varying the action, $r$ and $\varphi$ are indeed independent coordinates. They only depend on one another after you require that the action is stationary.
To be a little more precise, the action $S$ is a functional which takes possible trajectories and returns a real number. Let $\Gamma$ (the "phase space") be the set of all possible trajectories. Within $\Gamma$, $r$ and $\varphi$ are completely independent. Now, let $\Sigma$ be the subset of $\Gamma$ that minimizes the action (we call $\Sigma$ the "shell"). $\Sigma$ simply consists of all solutions to the classical equations of motion. In general, coordinates on $\Sigma$ will have some nontrivial relation to each other.
This is exactly like how the coordinates $x,y,z$ on $\mathbb{R}^3$ are completely independent, but if I were to embed a sphere into $\mathbb{R}^3$, then, on this subset, $x,y,z$ depend on each other in a nontrivial way.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/470585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If atmospheric pressure is 76 cm of $\text{Hg}$, why won't 76 cm of mercury stay in an open tube when suspended in air? If we hold a tube in air with the closed end up and the open end downwards, containing mercury up to a length of 76 cm, why does the mercury not stay in place? Shouldn't atmospheric pressure exert a force equal and opposite to its weight and balance it?
| I never experimented with mercury but I did with water. Here a summary of results.
Take a small bottle, fill it with water and invert it keeping the mouth closed, e.g. with a piece of cardboard. Now remove the cardboard, sliding it horizontally: you'll see that the water stays put if the bottle's mouth diameter is under a certain threshold, about 1 cm (I don't remember the exact data - it's been years). Bottle size has a minor influence.
My interpretation involves surface tension but in a complex way. Assume the surface of separation between water and air is initially plane. The question is: will that surface be stable against unavoidable perturbations?
Any perturbation will deform that surface to a curved one, with two effects:
*
*lowering water's centre of gravity, thus decreasing its potential energy
*increasing surface's area, thus increasing surface tension energy.
Water's state will be a stable one if any perturbation increases energy - unstable otherwise.
Then some non-trivial mathematics is required to show that the former effect prevails above a certain diameter and the latter below it. My numerical calculations, using the density and surface tension of water, exhibited satisfactory agreement with experiment.
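A rough scale for that ~1 cm threshold (added as a plausibility check, assuming room-temperature water values $\gamma\approx0.072\ \mathrm{N/m}$, $\rho\approx10^3\ \mathrm{kg/m^3}$): the competition between gravity and surface tension is governed by the capillary length
$$\ell_c=\sqrt{\frac{\gamma}{\rho g}}\approx 2.7\ \mathrm{mm},$$
and perturbations of the flat interface with wavelength below $\lambda_c=2\pi\ell_c\approx1.7\ \mathrm{cm}$ are stabilized by surface tension. An opening of order 1 cm cannot accommodate an unstable mode, which is consistent with the threshold found in the experiment.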
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/470858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |