Why do electrons orbit protons? I was wondering why electrons orbited protons rather than protons orbiting electrons.
My first thought was that it was due to the small gravitational attraction between them, which would cause the orbit to be very close to the proton (or nucleus). The only other idea I have is that the strong interaction between protons and neutrons has something to do with this.
I have heard that the actual answer is due to something in QM, but haven't seen the actual explanation. The only relation to QM that I can think of is that due to a proton's spin and the fact that they are fermions, the atomic orbitals should be somewhat similar. Do protons have the same types of orbitals, that are just confined by the potential of the strong force?
A related question that came up while thinking of this as a gravitational interaction: do the proton and electron noticeably orbit each other (as the sun orbits the earth just as the earth orbits the sun), or is any such contribution essentially nullified by the uncertainty in the location of the electron (and possibly of the proton as well)?
| Technically the electron and proton are both orbiting the barycenter of the system, both in classical and quantum mechanics, just as in gravitational systems.
You find the same dynamics for the system if you assume the proton and electron are moving independently about the barycenter, or if you convert to a one-body problem of a single "particle" with the reduced mass
$$
\mu = \frac{m_p m_e}{m_p + m_e } \approx m_e \left(1 - \frac{m_e}{m_p}\right).
$$
However, the proton is nearly 2000 times more massive than the electron.
If we assume that the proton is fixed and infinitely massive, and model our atom using $\mu=m_e$, we introduce errors starting in the fourth decimal place. Usually that's good enough.
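A sketch of the size of that error, using CODATA-style mass values (the exact decimals are illustrative, not from the answer):

```python
# Compare the reduced mass to the bare electron mass to see where the
# "fixed, infinitely massive proton" approximation breaks down.
m_e = 9.1093837e-31  # electron mass, kg
m_p = 1.6726219e-27  # proton mass, kg

mu = m_p * m_e / (m_p + m_e)
relative_error = 1 - mu / m_e  # fractional error of taking mu = m_e

print(f"mu/m_e = {mu / m_e:.7f}")
print(f"relative error ~ {relative_error:.1e}")  # ~5e-4: fourth decimal place
```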
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Would using Cherenkov radiation for lighting be feasible? Could Cherenkov radiation be used for general illumination, for example to replace LEDs, light bulbs, etc.? I.e., are there, or could there be, methods to produce a substantial amount of visible light with Cherenkov radiation:

* using devices compact and cheap enough,
* safely, and
* energy-efficiently,

to actually make any sense? What other problems could there be with using Cherenkov radiation for this purpose?
| Interesting question!
Cherenkov radiation would definitely be inefficient for illumination. You only get Cherenkov radiation from charged particles moving faster than the local speed of light in a medium. If you have a transparent medium with index of refraction $n=2$ and you're sending fast electrons through it, you'll only get Cherenkov radiation while the electrons have $v>0.5c$, or $$\gamma > \frac{1}{\sqrt{1-0.5^2}} = 1.15.$$
Since the kinetic energy is $T = (\gamma-1)mc^2$, this means the first $79\,\mathrm{keV}$ of electron energy contributes absolutely nothing to the Cherenkov illumination. Seems like a lot of energy to waste.
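A quick way to reproduce that threshold number from the formulas above, for any refractive index $n$ (the water value is my addition, not from the answer):

```python
# Cherenkov threshold kinetic energy for electrons in a medium of index n:
# emission requires v > c/n, i.e. beta > 1/n.
import math

m_e_c2_keV = 511.0  # electron rest energy in keV

def cherenkov_threshold_keV(n):
    """Kinetic energy below which an electron emits no Cherenkov light."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * m_e_c2_keV

print(cherenkov_threshold_keV(2.0))   # ~79 keV, as in the answer
print(cherenkov_threshold_keV(1.33))  # water: ~264 keV
```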
Furthermore, Cherenkov radiation tends to be concentrated in the UV, so you'd need some wavelength-shifting process to convert the light to the visible range.
The most serious complaint to me is that if the transparent material is thin enough that most of the light escapes, it won't be thick enough to stop the radiation that's causing the Cherenkov emission in the first place. I think it'd be hard to commercialize a product which involved either (a) a mass equivalent to many radiation lengths in water or (b) hard radiation leaking out into the illuminated area.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Temperature in CFT Non-vanishing temperature can break conformal symmetry (can anyone show this point explicitly?). My question is: in AdS/CFT the temperature of the boundary field theory is non-zero, so why is the boundary field theory, whose conformal symmetry is broken, still a conformal field theory?
AdS/CFT correspondence tells you (among other things) the dual geometry for each state of the boundary theory. As @Arnold says (check the link in the comment), a finite temperature state ("spontaneously") breaks Lorentz, and hence conformal, invariance. That's okay because excited states can break many symmetries of the theory (e.g. the p-shell of the electron cloud around a hydrogen atom breaks the SO(3) symmetry). So it's not like you're losing the conformal symmetry describing the dynamics.
(In fact, I wonder whether one might be able to use something like the ideas behind spurion analysis -- on the finite temperature state, since the temperature is the only thing "breaking" the conformal symmetry, spontaneously :-?)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Born's rule and Schrödinger's equation In non-relativistic quantum mechanics, the evolution of the quantum state is given by Schrödinger's equation, and measurement of the state of a particle is itself a physical process. Thus, it should be governed by Schrödinger's equation.
But we predict probabilities using Born's rule.
Do we use Born's rule just because it becomes mathematically cumbersome to account for all the degrees of freedom using the Schrödinger equation, so that we turn to approximations like Born's rule instead?
So, is it possible to derive Born's rule from Schrödinger's equation?
| Your question is misconceived. The wavefunction does not collapse. Rather, when you measure a system, each of the possible outcomes of that measurement happens. Each outcome is associated with a different version of the measurement apparatus. Those different versions of the measurement apparatus can't interact with one another or exchange information. The way in which the wavefunction is sliced up into different versions is a result of the fact that information can only be copied from one system to another if it is instantiated in a set of mutually orthogonal projectors:
http://arxiv.org/abs/1212.3245.
Quantum mechanics isn't a stochastic theory. The probability of a measurement outcome doesn't refer to the chances of picking it out of a hat. Rather, the Born rule is a measure over the set of outcomes of a measurement that satisfies the constraints imposed by decision theory:
http://arxiv.org/abs/quant-ph/9906015.
If you wanted to bet on the outcome of a quantum mechanical experiment the way to do this that would make you money would be to use the Born rule to calculate the amount you expect to make.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 6
} |
Is it possible for two holograms to be on the same holographic film? I was wondering whether you could use a laser of a specific frequency to write a hologram on a film, and then record over the same film with a laser of another frequency than the first.
Is it possible for two holograms to be on the same holographic film?
(So much that you can read an image or another depending on the frequency of the laser you use.)
| From MIT Museum Collections: Mini Kiss II
This hologram was made in 1975 of a person blowing a kiss and is made from multiple exposures (at least 16) on the film so that it appears the person is moving.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why aren't classical phase space distribution functions always delta functions? The phase space distribution function (or phase space density) is supposed to be the probability density of finding a particle around a given phase space point. But, classically, through Hamilton's equations, the system's time evolution is completely determined once the initial conditions are specified. So for a 2D phase space, why isn't the distribution function always the same:
$$f(x,p,t)=\delta(x-x(t)) \ \delta(p-p(t))$$
I know that this thinking has to be wrong, and I am definitely confusing some things. I would like to ask for clarification.
Note that various density kernels (like the Gaussian kernel used in classical phase space) have delta functions as a limit. Physically, classically, delta (Dirac) distributions are the correct ones. Mathematically, smoother distributions (e.g. Gaussians) may be used to calculate integrals and take limits (the limits should be the same as with delta distributions).
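A toy numerical illustration of this limit (the observable $f(x)=x^2$ and all numbers are my assumptions, purely for illustration): averaging $f$ against an ever-narrower Gaussian approaches the point value $f(x_0)$, which is what integrating against a delta function would give.

```python
# A narrow Gaussian kernel acts more and more like a delta function:
# the smeared average of f approaches f(x0) as sigma -> 0.
import math

def gaussian(x, x0, sigma):
    return math.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def smeared_average(f, x0, sigma, lo=-5.0, hi=5.0, n=10000):
    # crude midpoint Riemann sum of f(x) * gaussian(x, x0, sigma) dx
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) * gaussian(lo + (i + 0.5) * dx, x0, sigma) * dx
               for i in range(n))

f = lambda x: x ** 2
vals = [smeared_average(f, 1.0, s) for s in (0.5, 0.1, 0.02)]
print(vals)  # tends to f(1.0) = 1.0 (for this f it is exactly 1 + sigma^2)
```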
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Failure of the Steady State Theory I was reading a journal of astronomy and came to the most famous opponent of Big Bang theory:
The Steady State Theory:
The 20th-century theory was proposed by Hoyle, Gold and Bondi. The theory is based on the Perfect Cosmological Principle, which states that the universe has no change in its appearance and is homogeneous. It is isotropic in nature. When an old star dies, a new star replaces it. So everything remains the same. According to the theory, the universe has neither a beginning nor an end. The universe was and always will be the same through the whole of time.
Then I was surprised when the journal wrote that this theory has no scientific existence now. It is obsolete today. While reading this, I have found nothing wrong. Isn't the universe isotropic? Why did the Steady State theory fail?
The main problem for steady state theory now would be: how can steady state theory produce dark matter and baryonic matter?
That is a big problem for the theory. It would be hard to find a conservation law for this process.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/142724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
What does it mean that quantum teleportation can be classically simulated? Quoting here from Quantum Computation by Nielsen and Chuang :
(Gottesman–Knill theorem) Suppose a quantum computation is performed which involves only the following elements: state preparations in the computational basis, Hadamard gates, phase gates, controlled-NOT gates, Pauli gates, and measurements of observables in the Pauli group (which includes measurement in the computational basis as a special case), together with the possibility of classical control conditioned on the outcome of such measurements. Such a computation may be efficiently simulated on a classical computer.
A few lines further:
Consider that interesting quantum information processing tasks like quantum teleportation (Section 1.3.7) and superdense coding can be performed using only the Hadamard gate, controlled-NOT gate, and measurements in the computational basis, and can therefore be efficiently simulated on a classical computer, by the Gottesman–Knill theorem.
Does this mean that quantum teleportation can be efficiently simulated on a classical computer? What does that mean?
| I don't really know what answer you expect here.
As you have found out yourself, the Gottesman-Knill theorem tells you that stabilizer circuits can be efficiently simulated by a classical computer. Teleportation can be implemented that way, hence you can efficiently simulate it on a classical computer.
What does that mean? Well, give the computer a stabilizer-circuit description of the state: it can perform the teleportation protocol, and the simulation time will be polynomial in the number of qubits. This also means that using teleportation alone can't give you any exponential speedups.
Note however that there is a very small caveat: You cannot prepare all states with stabilizer circuits, i.e. if you want to first create a special state and then teleport it, the time of the simulation doing both might increase more than polynomially in the dimension of the state!
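As an illustration, here is a brute-force state-vector check that the teleportation circuit with classically controlled corrections works. To be clear, this is NOT the efficient stabilizer simulation the Gottesman–Knill theorem refers to — it is exponential in the number of qubits — and the amplitudes $a=0.6$, $b=0.8i$ are arbitrary values I chose:

```python
# Verify that teleportation (CNOT, H, measure, classically controlled X/Z)
# moves an arbitrary qubit state from wire 0 to wire 2, in every branch.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def lift(gate, wire):
    """Embed a single-qubit gate on `wire` of a 3-qubit register (wire 0 = MSB)."""
    mats = [I2, I2, I2]
    mats[wire] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    U = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> (2 - w)) & 1 for w in range(3)]
        if bits[control]:
            bits[target] ^= 1
        U[bits[0] * 4 + bits[1] * 2 + bits[2], i] = 1
    return U

a, b = 0.6, 0.8j                                    # arbitrary normalized state
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+> on qubits 1 and 2
psi = np.kron(np.array([a, b]), bell)               # |psi> on qubit 0

state = lift(H, 0) @ cnot(0, 1) @ psi               # Alice's Bell-basis rotation
outputs = []
for m0 in (0, 1):                                   # enumerate all four outcomes
    for m1 in (0, 1):
        proj = np.zeros(8, dtype=complex)           # project qubits 0,1 onto (m0, m1)
        for k in (0, 1):
            idx = m0 * 4 + m1 * 2 + k
            proj[idx] = state[idx]
        if m1:
            proj = lift(X, 2) @ proj                # classically controlled corrections
        if m0:
            proj = lift(Z, 2) @ proj
        out = 2 * np.array([proj[m0 * 4 + m1 * 2 + 0],
                            proj[m0 * 4 + m1 * 2 + 1]])  # renormalize (prob 1/4 each)
        outputs.append(out)
print(outputs)  # every branch leaves qubit 2 in state (a, b)
```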
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Electric and magnetic fields' phase difference shift in linearly polarized electromagnetic waves I am a high school student and we are currently studying electromagnetic theory. In my textbook I read that the oscillating electric and magnetic fields have a phase difference equal to π/2 rad near the source (for example, an antenna), while away from it they agree in phase.
Is this true? If so, why and how does this happen?
| Possibly you are talking about the difference between the "far field" and "near field" solutions for the simple oscillating electric dipole.
Often when dealing with such a system, if we are looking at the field more than a few wavelengths away from the dipole (or more formally, $kr \gg 1$ or $r \gg \lambda/2\pi$) then the solution looks like a spherically expanding electromagnetic wave; the E-field and B-field are in phase, mutually perpendicular and at right angles to the outward propagation direction. For a dipole moment aligned with the z-axis, the E-field is polarised in the $\theta$ (poloidal) direction and B-field is in the $\phi$ (toroidal) direction.
But if $r \leq \lambda$ then the solution is more complicated. The E-field has both a $\theta$ and a radial component. The B-field is just toroidal, but contains two terms with differing radial dependencies.
In these extra terms for the nearby fields, the E-field becomes much more dominant (in transverse electromagnetic waves it is normally $c$ times bigger). Furthermore it is out of phase with the B-field by $\pi/2$. You can see this from the equations below - when $r$ is small, the first term in $B_{\phi}$ dominates the B-field, whereas it is the first (even stronger) terms in the $E_r$ and $E_{\theta}$ components that dominate the E-field.
These are different in magnitude from the B-field by a factor that includes $i$ and hence are out of phase by $\pi/2$.
Perhaps this is what you mean?
The Maths:
The solutions for the E- and B-field from a simple oscillating dipole are
$$E_{r} = \frac{p_0 \cos\theta}{4\pi \epsilon_0} \frac{k^2 \exp(ikr)}{r}\left[ \frac{2}{k^2r^2} - \frac{2i}{kr} \right]$$
$$E_{\theta} = \frac{p_0 \sin\theta}{4\pi \epsilon_0} \frac{k^2 \exp(ikr)}{r}\left[ \frac{1}{k^2r^2} - \frac{i}{kr} -1 \right]$$
$$B_{\phi} = \frac{p_0 \sin\theta}{4\pi \epsilon_0} \frac{k^2 \exp(ikr)}{r}\left[ - \frac{i}{kr} -1 \right] \sqrt{\mu_0 \epsilon_0}$$
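To see the π/2 near-field shift quantitatively, one can evaluate just the bracketed factors of $E_\theta$ and $B_\phi$ above (the common prefactors drop out of any phase comparison). This is my own sketch, not part of the original answer:

```python
# Phase difference between E_theta and B_phi of an oscillating dipole,
# as a function of kr, using the bracketed factors from the expressions above.
import cmath

def phase_diff_deg(kr):
    e_theta = 1 / kr**2 - 1j / kr - 1      # bracket of E_theta
    b_phi = -1j / kr - 1                   # bracket of B_phi
    return abs(cmath.phase(e_theta / b_phi)) * 180 / cmath.pi

print(phase_diff_deg(0.01))  # near field: ~90 degrees out of phase
print(phase_diff_deg(100))   # far field: ~0 degrees, fields in phase
```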
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
The influence of the antenna height I am working on a model of a transmitter. The transmitter is attached to the wheel of the vehicle and thus constantly changes its height. In other words, the capacitance between the antenna and the ground is changing.
What happens when the transmitter is in most up and most down position. Is there significant loss of the signal? Which effects are of crucial importance?
In particular case, I am dealing with TPMS (Tire Pressure Monitoring System). Transmitters are attached on the wheel inside the tire. The receiver is somewhere inside the car. The operating frequency is 433.92 MHz (ISM band). So, the wavelength of 70 cm which is comparable with the radius of the wheel.
I will give a more general answer unless other information is added (then I can update).
It would depend on the type of antenna, the frequency/wavelength of transmission and how this wavelength compares to the height of the wheel and the rpm (I would say the height mostly).
If the wavelength is comparable to the height of the wheel there will be serious distortions; if not, the difference is negligible.
Let's say we have a simple example of wavelength $a$ and wheel height $b=\frac{2}{3}a$ (comparable).
Then the antenna at the highest point will have a height of $\frac{5}{3}a$, which effectively alters the antenna transmission range.
On the other hand, let's say the wavelength is $a$ but the wheel height is $b=10^{-5}a$;
in this case any difference in transmission wavelength will be of the order of $10^{-5}$, negligible.
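For the TPMS numbers quoted in the question, a quick check of the wavelength (the 0.3 m wheel radius is an assumed, typical value):

```python
# Wavelength at the 433.92 MHz ISM frequency from the question, compared
# with an assumed wheel radius of ~0.3 m.
c = 299_792_458.0   # speed of light, m/s
f = 433.92e6        # Hz
lam = c / f
print(lam)          # ~0.69 m, i.e. ~70 cm as the question says
print(0.3 / lam)    # wheel radius / wavelength ~ 0.43: comparable scales
```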
You might want to check this answer as well
A basic primer on antennas
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Distribution of current of a rotating cone
If I have a hollow cone (a surface with no bottom cover) as the one in the picture. The cone has surface charge density $\sigma$. It rotates around the symmetry axis with angular velocity $\omega$. I want to find the distribution of current on the surface of the cone.
What I mean with distribution of the current is the following. I can write the current as :
$$I=\frac{dq}{dt}=\frac{dq}{dl}\frac{dl}{dt} \tag{1}$$
That is useful in other situations, like when I have a wire with linear charge density $\lambda$:
I can use eq. (1) to find the current. As I can write $q=\lambda l$, $\frac{dq}{dl}=\lambda$ and $I=\frac{dq}{dt}=\lambda v$.
The same follows for a charged sheet of width $b$ with surface charge density $\sigma$. As I can write $q=\sigma a=\sigma bl$, $\frac{dq}{dl}=\sigma b$ and $I=\frac{dq}{dt}=\sigma bv$.
But what can I do with a cone? I can write $q=\sigma a=\sigma \frac{2\pi r}{2l}$. But the length $s_n$ of one circular wire of current varies from 0 to $2\pi r$, different from before, where the width was constant. What can I do?
My attempt: with the length that varies, $s=2\pi r_{change}$, now $q=\sigma \frac{s}{2l}$, $\frac{dq}{dl}=\sigma \frac{1}{2l}$ and $I=\frac{dq}{dt}=\sigma \frac{v}{2l}$.
It's clear that $v=\omega r$ since it is circular motion.
| Do not open the cone. Think of it in the profile view : You have an isosceles triangle.
Now move along the axis of the cone, say a distance $x$ and take an element $\mathrm{d}x$.
Somewhat like this :
This small element is similar to the rectangle you described. With length as $2\pi r(x)$ and width $\mathrm{d}x$.
You also know the velocity with which the charge moves :
$$v = \omega r(x)$$
$r(x)$ is the radius at that point and $\omega$ the angular velocity.
Thus the current would be
$$I = \sigma \omega r(x)\mathrm{d}x$$
for that rectangle.
$r(x)$ can be found by triangle similarity:
$$r(x) = R / H \cdot x$$
where $R$ is the base radius and $H$ the height of the cone
So the current varies with position on the cone and to find the overall current, integrate from $x =0$ to $x= H$.
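Carrying out that integral numerically with illustrative parameter values (the closed form $I=\sigma\omega R H/2$ follows directly from the linear $r(x)$; the specific numbers below are my assumptions):

```python
# Integrate dI = sigma * omega * r(x) dx from x = 0 to H with r(x) = R x / H,
# and compare with the closed form I = sigma * omega * R * H / 2.
sigma, omega, R, H = 2.0, 3.0, 0.5, 1.2  # arbitrary illustrative units

n = 100_000
dx = H / n
# midpoint Riemann sum over thin rings of width dx
I_num = sum(sigma * omega * (R * (i + 0.5) * dx / H) * dx for i in range(n))
I_exact = sigma * omega * R * H / 2

print(I_num, I_exact)  # should agree closely
```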
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Clausius statement of the 2nd Law I'm slightly messed up with the Clausius statement of the 2nd Law.
I've seen at least two versions, which seem to be conceptually different.
a) It is impossible to transfer heat from a colder body to a hotter body without any other effect.
b) It is impossible to transfer heat from a cold (thermal) reservoir to a hot (thermal) reservoir without any other effect.
I would like to use the Clausius statement to rule out heat transfer from a colder body to a hotter body, where both have finite thermal capacity and no work is put in. If we suppose this transfer could happen, then the hotter body would become hotter and the colder body would become colder. So there actually is some other effect (temperature changes). How does a) rule out this process? Obviously b) does the job.
| Imagine two systems. A and B. A is in a higher energy state (say is warmer) than B.
Clausius says you cannot transfer energy from B to A without a corresponding change in another system which is neither A nor B (let's say 'C').
But you can transfer energy from A to B without affecting or invoking C.
The 'other effect' bit implies another system/state/body.
Simples.
A.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is this expression for the kinetic energy of a spinning disk revolving about a second axis correct? My question is motivated from a question from another user. You can see the configuration of the rotating system here: https://physics.stackexchange.com/q/143377/.
I am not interested in all the complicated arguments of his question, but only of the expression for the total kinetic energy. My answer was that the rotational KE can be expressed as the addition of the KE of the center of mass plus the KE relative to the center of mass, which results in this expression:
$$E_k=\frac{1}{4}mr^2\omega_2^2+\frac{1}{2}md^2\omega_1^2$$
(Notice that this result is independent of the sign of $\omega_2$).
But the original OP claims that the right expression is
$$E_k=\frac{1}{2}md^2\omega_1^2+\frac{1}{2}mr^2(\omega_1-\omega_2)^2,$$
based on answers obtained on other forums (which I checked), and even the moderators of those forums seem to agree with it. The OP does not know enough physics to come up with his own answer, but still does not believe mine, for the reasons given above.
So, my question is: am I missing something pretty obvious here? Which of the expressions is the correct one (if any)?
Thanks!
| Take the reference frame as centered in the fixed axis. The $R$ that connects the origin to the centre of the spinning disk forms an angle $\phi$ with the horizontal. Now, inside the disk of radius $r$, the angle of a certain point mass is given by the angle it forms inside the spinning circle, which we'll call $\theta$.
Now take as generalised coordinates those angles and write the radius vector as a decomposition in $\mathcal{X}$ and $\mathcal{Y}$:
$$\mathcal{X}: r_x = R \cos \phi + r\cos \theta \\
\mathcal{Y}: r_y = R \sin \phi - r \sin \theta ,$$
which means that
$$\dot{r}_x = -R \dot{\phi} \sin \phi - r \dot{\theta} \sin \theta \\
\dot{r}_y = R \dot{\phi} \cos \phi - r \dot{\theta} \cos \theta .$$
Summing up the squares,
$$ \dot{r}^2_y + \dot{r}^2_x = R^2 \dot{\phi}^2 \cos^2 \phi + r^2 \dot{\theta}^2 \cos^2 \theta - 2 rR\dot{\theta} \dot{\phi} \cos \theta \cos \phi \\
+ R^2 \dot{\phi}^2 \sin^2 \phi + r^2\dot{\theta}^2 \sin^2 \theta + 2 rR\dot{\theta} \dot{\phi} \sin \theta \sin \phi \\
= R^2 \dot{\phi}^2 + r^2\dot{\theta}^2 - 2Rr\dot{\phi} \dot{\theta} \cos(\phi + \theta) . $$
The kinetic energy will then be
$$T = \frac{m}{2}[ R^2 \dot{\phi}^2 + r^2\dot{\theta}^2 - 2Rr\dot{\phi} \dot{\theta} \cos(\phi + \theta)] .$$
My answer is coming out pretty different from both of the ones you're aiming for, and I'm also using generalized coordinates and Lagrangian mechanics instead of Newtonian mechanics (so I'm not integrating anything, really). I think this extra term that appeared does have a meaning, because if you consider the disk spinning in one direction while being rotated in the other, then the kinetic energy should decrease in some specific configuration. If you only keep the squared angular-velocity terms, then this is impossible.
P.S.: I can surely be doing something VERY wrong here.
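One way to check the algebra above is to differentiate the position numerically and compare with the closed-form kinetic energy (all numerical values below are arbitrary choices for the check, not from the thread):

```python
# Verify T = (m/2)[R^2 phi'^2 + r^2 theta'^2 - 2 R r phi' theta' cos(phi+theta)]
# by finite-differencing the coordinates r_x, r_y given above.
import math

R, r, m = 2.0, 0.5, 1.0
phi_dot, theta_dot = 1.3, -0.7  # constant angular rates (illustrative)

def pos(t):
    phi, theta = phi_dot * t, theta_dot * t
    return (R * math.cos(phi) + r * math.cos(theta),
            R * math.sin(phi) - r * math.sin(theta))

h, t = 1e-6, 0.4
(x1, y1), (x2, y2) = pos(t - h), pos(t + h)
v2 = ((x2 - x1) / (2 * h)) ** 2 + ((y2 - y1) / (2 * h)) ** 2  # central difference
T_num = 0.5 * m * v2

phi, theta = phi_dot * t, theta_dot * t
T_formula = 0.5 * m * (R**2 * phi_dot**2 + r**2 * theta_dot**2
                       - 2 * R * r * phi_dot * theta_dot * math.cos(phi + theta))
print(T_num, T_formula)  # should agree to high precision
```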
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Tesla Coils - Is there a risk that the discharge can create x-rays? I've built a Tesla coil that stands about 3 ft tall and uses a spark gap as the interrupter for the primary circuit. Judging by the size of the streamers it's reaching at least a million volts.
Someone once told me that you have to be careful with Tesla coils because they can create x-rays. I had been skeptical, but then read about how x-rays can be produced by unwinding scotch tape. So now I am somewhat concerned.
So are harmful x-rays a risk with Tesla Coil operation, and if so how can I easily test my system to see if it's safe?
Put a cover in front of the spark gap to shield your eyes from direct exposure to the UV light it produces. The risk is similar to arc welding: you don't want to stare at the spark gap while it's running. The more powerful the Tesla coil, the greater the risk. Run the coil for short periods of time somewhere well ventilated; that's the only way to deal with the ozone.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Bosonic Schrödinger field When second-quantizing the Schrödinger field
$$\psi(r,t) = \sum_i \phi_i(r)b_i(t),\quad\mbox{and}\quad \psi^{\dagger}(r,t) = \sum_i \phi_{i}(r)^* b_i^{\dagger}(t),$$
we have the commutation relations $[\psi(r,t),\psi^{\dagger}(r',t)]= \delta(r-r')$. Now I want to show that $[b_i,b_j^\dagger] = \delta_{i,j}$.
I tried substituting the expression for $\psi$ into the commutator and got
$$\sum_{i} \sum_{j} \phi_i(r) \phi_j(r')^* [b_i(t), b_j(t)^{\dagger}] = \delta(r-r'),$$
but I don't quite see how this could help me. Does anybody here have an idea how to show this?
| The $\phi_i(r)$ form an orthonormal basis of (square-integrable) functions on $\mathbb{R}$, i.e. you should have a relation like
$$
\int dr\,\phi_i(r)\phi_j^*(r)=\delta_{ij}.
$$
This is what you need in order to expand $\psi(r,t)$ and $\psi^\dagger(r,t)$ the way you did above. You can use this to write the $b_i(t)$ in terms of the $\psi(r,t)$ in the following way:
$$
\int dr\,\phi_i(r)^*\psi(r,t)=\int dr\sum_{j}\phi_i^*(r)\phi_j(r)b_j(t)=\sum_j \delta_{ij} b_j (t)= b_i(t).
$$
Similarly, you get
$$
\int dr\,\phi_i(r)\psi^\dagger(r,t)= b_i^\dagger(t).
$$
So, therefore, you have
$$
[b_i(t),b_j^\dagger(t)]=\int dr\,dr'\,\phi_i^*(r)\phi_j(r')[\psi(r,t),\psi^\dagger(r',t)]=\int dr\,dr'\,\phi_i^*(r)\phi_j(r')\,\delta(r-r')\\
=\int dr\,\phi_i^*(r)\phi_j(r)=\delta_{ij}.
$$
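The orthonormality step above can be checked numerically with any concrete orthonormal basis; here is a sketch using plane-wave modes on a ring of circumference $L$ (an assumed basis of my choosing, not one from the post):

```python
# Check \int dr phi_i(r) phi_j^*(r) = delta_ij for the modes
# phi_k(r) = exp(2 pi i k r / L) / sqrt(L) on [0, L), by a Riemann sum.
import cmath

L = 2.0
N = 2000
dr = L / N

def phi(k, r):
    return cmath.exp(2j * cmath.pi * k * r / L) / L ** 0.5

def overlap(i, j):
    return sum(phi(i, n * dr) * phi(j, n * dr).conjugate() * dr for n in range(N))

print(abs(overlap(3, 3)))  # ~1 (same mode)
print(abs(overlap(3, 5)))  # ~0 (distinct modes)
```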
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/143917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't a star's core cool down when it expands as a red giant? When a star starts to run out of hydrogen to fuse, it begins to collapse due to gravity until the central core temperature rises to $10^8~\text{K}$.
Then, due to the force generated by the fusion of helium, the star expands again and becomes a red giant.
So, why doesn't the expansion cool the core?
| The simplest answer is that in order to maintain helium fusion, a certain pressure and temperature are necessary. Therefore, given the fact that helium fusion is occurring in the core, and the mass pushing down on the core is X from dynamic concerns, you therefore must conclude that the temperature in the core is Y, irrespective of the size of the core.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Why are permanent magnets permanent? Let me see if I get it right. When an iron bar is attracted by a permanent magnet it becomes a magnet itself because all of its magnetic domains start to point in the same direction. When the iron bar is no longer attracted by the permanent magnet, it is no longer a magnet itself because its magnetic domains point in different directions again.
When iron is heated up to the Curie temperature and cooled down, all of its magnetic domains also start to point in the same direction. (If I am not wrong, the atomic structure does not change.)
So why is it permanent in the second case and not in the first ? (Correct me if I messed up something here)
When we apply a magnetic field to a ferromagnetic substance and then remove it, a small fraction of the magnetization remains. To completely demagnetize it, an opposite magnetic field is needed; the strength of this field is called the coercivity of the material, and it is different for different materials. Coercivity is low for iron, so it demagnetizes easily, and it is very high for the materials from which permanent magnets are made. So a greater magnetic field is needed to demagnetize a permanent magnet.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Kitchen floor dries faster with lights on? My mother used to leave the lights on in the kitchen after washing the floor, saying that it would dry faster.
Does this really happen, or is it just a superstition? If true, how substantial is the effect?
| In principle yes, but the effect is usually marginal. It also depends on how powerful your lights are compared to the size of the kitchen (a 1000 Watt flood light in a home kitchen will probably have a noticeable effect on the speed of drying).
* Basically, the floor dries through evaporation, i.e. the water on the floor goes into the gaseous phase ('becomes vapour') as long as the air in the kitchen is not saturated with water.
* In other words, water continues to evaporate until there is no water left or until the equilibrium vapour pressure of water in the kitchen's air has been reached.
* The equilibrium vapour pressure in turn depends on the air's temperature. Higher air temperature means higher equilibrium vapour pressure (the air can 'hold' more water).
* Adding additional sources of heat, such as leaving the lights on, increases the temperature and thus increases the 'water capacity' of the air.
The question essentially boils down to how much the temperature of the air increases in the kitchen by leaving the lights on. This however not only depends on the power of the lights (one can assume that all power is converted to heat in the end) but also on the size of the kitchen (how much air needs to be heated up) and the thermal isolation of the kitchen (how much heat goes e.g. through the window).
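As a rough, loss-free upper bound on that temperature rise (every number below is an assumption chosen for illustration — real kitchens lose heat through walls and windows, so the actual rise is smaller):

```python
# Loss-free upper bound on how fast lights warm a kitchen's air.
P = 100.0     # total light power, W (assumption)
V = 30.0      # kitchen air volume, m^3 (assumption)
rho = 1.2     # air density, kg/m^3
c_p = 1005.0  # specific heat of air, J/(kg K)

m_air = rho * V
dT_dt = P / (m_air * c_p)  # K per second, ignoring all losses
print(dT_dt * 3600)        # ~10 K per hour as an upper bound
```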
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why are the poles of a magnet reversed when a magnet is split into two? This question may be odd, but today i noticed something when my little magnet split into two pieces.
The poles reversed, so the pieces could not be put back into "one piece" again (because the poles were reversed).
What is the physical explanation of this?
|
This question may be odd, but today i noticed something when my little magnet split into two pieces.
As you clarified in the comments to the question, it was a refrigerator magnet. Those magnets have the poles on the flat sides, so that they stick to the iron metal of the doors. When they break, it will be into two flat pieces, which will have north next to north and south next to south, and therefore repel.
The poles reversed, so they could not get into "one piece" again (because the poles was reversed).
No, the poles did not reverse because the break cuts perpendicularly the north and south surfaces.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to find an effective spring constant of a quadratic potential If a potential energy is given like $U(r)=A^3/r^2+2B^3r$, how do I find the effective spring constant using Taylor Expansion?
I compared spring constant $k$ to be equal to second derivative of potential energy with respect to $r$.
Am I going in the correct direction?
| Given the potential $U(r)=A^3/r^2 + 2B^3 r$, the effective spring constant can be defined as the second derivative of $U(r)$ evaluated at the equilibrium point. Hence,
$$U''(r)= \frac{6A^3}{r^4}$$
If $r_0$ is our equilibrium point, then $k_{\mathrm{eff}} = 6A^3/r^4_0$. Another way to perform the calculation is to compute the Taylor Series about the equilibrium point $r_0$, obtaining,
$$U(r) = \left( \frac{A^3}{r^2_0} + 2B^3 r_0 \right) + \left(2B^3 - \frac{2A^3}{r^3_0} \right)(r-r_0) + \frac{3A^3 (r-r_0)^2}{r^4_0} + \mathcal{O}(r-r_0)^3$$
If we compare the $r^2$ order term to $\frac{1}{2}k_{\mathrm{eff}}(r-r_0)^2$, we obtain the same result, $k_{\mathrm{eff}} =6A^3 /r^4_0$. From a geometric point of view, we are defining the spring constant as the curvature of the potential at the equilibrium point, which is a minimum of the potential, and by the second derivative test, $k_{\mathrm{eff}} \geq 0$.
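Both routes (the direct second derivative and the Taylor series) can be cross-checked symbolically; a minimal sketch with sympy, using the same symbols as above:

```python
import sympy as sp

r, r0, A, B = sp.symbols('r r_0 A B', positive=True)
U = A**3 / r**2 + 2 * B**3 * r

# Route 1: k_eff as the second derivative of U, evaluated at equilibrium r0
k_eff = sp.diff(U, r, 2).subs(r, r0)          # 6*A**3/r_0**4

# Route 2: Taylor-expand U(r0 + x) to second order; the x**2 coefficient
# should equal k_eff / 2
x = sp.symbols('x')
quad_coeff = sp.series(U.subs(r, r0 + x), x, 0, 3).removeO().coeff(x, 2)

assert sp.simplify(2 * quad_coeff - k_eff) == 0
print(k_eff)
```

The assertion confirms that both definitions of the effective spring constant agree, as the answer states.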
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does Earth behave like Natural Bar Magnet? What is the reason for the Earth to behave like a bar magnet and have poles (North and South poles)?
| When we say that the Earth's field behaves "like a bar magnet", we have in mind a field that is approximately dipolar. So while it is true that a geodynamo powers the field (see this Wikipedia page), we need to say a little more. First, the rotation of the Earth organizes the flow, tending to align it with the rotation axis. Second, the magnetic field from a finite source looks increasingly like a dipole as we get further away from it; in a multipole expansion, the higher the order of the term, the faster it drops off. The surface of the Earth is at a radius that is roughly twice the radius at the core-mantle boundary, enough distance for a substantial reduction in non-dipole contributions (see Merrill and McElhinny, "The Magnetic Field of the Earth: Paleomagnetism, the Core, and the Deep Mantle", Academic Press 1998, chapter 2). Finally, the magnetic field in the core could have a large toroidal component, but we know little about its strength because toroidal components don't pass the core-mantle boundary (Merrill & McElhinny chapter 9.2).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Do satellites need to have their orbits externally maintained? Is the speed of a satellite self maintained ? Or it needs anything to be done externally ?
Condition : The satellite is in the orbit where it is held in equilibrium by the force of gravity and its centrifugal force.
| All that is needed is gravity. There is absolutely no need to invoke the centrifugal force to explain an orbit. This argument doesn't even make sense because when it is used, the satellite is shown as moving.
A nice simple way to look at orbits is via Newton's cannon. Imagine a cannon atop the only mountain on an airless and, except for this one mountain, spherical planet. Fire the cannon and the cannonball will fly a bit before falling to the surface of the planet. Add a bit more gunpowder and the cannonball will fly a bit further. Add even more and now the cannonball flies partly around the curved surface of the planet before finally hitting the surface. Eventually you'll have the cannonball flying halfway around the planet before it hits. What happens if you add a bit more gunpowder? The cannonball is now in orbit. It will eventually come all the way around the planet and hit the cannon from behind.
That said, there are lots of things that perturb a satellite's orbit. Satellites in low Earth orbit fly through the Earth's upper atmosphere. This eventually slows the satellite down to the point where it enters the thicker part of the atmosphere and burns up. To counter this, satellites in low Earth orbit occasionally need to boost themselves back up to a higher altitude.
Other perturbations include the non-spherical shape of the Earth, gravitational perturbations from the Moon and the Sun (and Jupiter and Venus and ...), solar radiation pressure, general relativity, and others. While the satellite might still be in orbit, these perturbations tend to move a satellite from the orbit it is supposed to be in to some other orbit. As is the case for satellites in low Earth orbit, other satellites occasionally need to boost themselves to bring themselves back to the desired orbit.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do we add the spin angular velocity and orbital anglar velocity when asked to calculate total angular velocity of Gyroscope? Normally when we talk of angular velocity we mean how the angle of a vector changes with time with respect to an origin.Thus the oribital angular velocity of gyroscope makes sense to me.However I find that we add another type of angular velocity -spin angular velocity- to find total angular velocity.This seems a bit ambiguious as this angular velocity is not due to change in angle about our origin about which we calculated the orbital angular velocioty.Thus adding both to get angular velocity seems confusing to me.
| Imagine yourself as the center post of the gyro: you lean 15 degrees to the right, and you have a bucket of water that you spin over your head (this represents the spin of the gyro). As it spins you will see the angle of the bucket, and can then have a friend estimate that angle.
*
*If you left the bucket at the same angle as you lean, you might fall over, so spin the bucket as if it were at the top of your head when you were standing upright. Success!
*Note: Hope this helps you see the angle of the spin.
*You will notice that beyond a point the spin will no longer keep you up; this is due to the speed of the spin and the mass you are spinning. Have fun, but don't get hurt.
Who says Physics has to be boring.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/144915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Have you ever seen a mosquito in your car? It's not capable of flying at 60 mph, so how come it can still keep up with the car as if it weren't trying? For instance, a fly is staying at the same spot, then your car suddenly moves forward at a rate of 60 mph. The mosquito IS staying at the same spot, but for some reason it moves along with the car as if nothing happened.
| Actually, there are two factors responsible for helping the mosquito keep up with the car:
*
*If we consider the car to be already moving at 60 mph, the air inside the car is also moving at 60 mph. As the wings of the mosquito are subject to air resistance, its inertia will be overcome by the forward-moving air until the mosquito itself attains a constant velocity of 60 mph.
*If the car is accelerating, a second effect applies: the acceleration will cause more air to collect at the back of the car than at the front, so the density of air at the back will be higher. Hence, to move backward the mosquito has to displace more air per unit volume than it does to move forward (or to remain stationary). This difference in air density between different sections of the car will help the mosquito keep up with it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Newtonian tidal forces and curvature Today in my physics class, my lecturer said something which confused me. He said:
"Newtonian tidal forces are reinterpreted as a manifestation of curvature in General Relativity".
Now I know what tidal forces are (an effect of the force of gravity); a good example is the ocean tides caused by the Moon. However, I do not see how this shows curvature in the GR sense.
| Have a look at my answer to How to explain centripetal force in terms or relativity because much of the discussion there is relevant.
Consider what we mean by a tidal force. Suppose you're floating around in space and you arrange a number of marbles around you so they lie on the surface of a perfect sphere. Now monitor the shape of the surface marked out by those marbles. If the shape changes with time from a sphere to an ellipsoid you would conclude that there must be a force acting on the marbles to pull them apart. In the Newtonian interpretation this is the tidal force.
Now consider the GR interpretation, and this is where the discussion of geodesics in the answer I linked above comes in. Each marble follows a geodesic. In flat spacetime geodesics that are originally parallel remain parallel, so if the marbles are initially stationary with respect to each other they remain stationary with respect to each other, and the sphere does not change shape. However if spacetime is curved then initially parallel geodesics may not remain parallel, but can diverge or converge. Because each marble is following a different geodesic, and the geodesics might not remain parallel, the marbles may move apart and the sphere change shape. No force is acting - it's just that the individual marbles are following different geodesics.
And this is (I would guess) what your lecturer means. Newton would see the sphere change shape and conclude there must be a force acting. Einstein would see the sphere change shape and conclude that spacetime was curved.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Transverse doppler effect in light In most books to explain transverse Doppler effect the following example is given:
Consider a source that emits flashes at frequency f0 (in its own frame), while moving across your field of vision at speed v. There are two reasonable questions we may ask about the frequency you observe:
• Case 1:
At the instant the source is at its closest approach to you, with what frequency
do the flashes hit your eye?
• Case 2:
When you see the source at its closest approach to you, with what frequency
do the flashes hit your eye?
In the first case we observe from the trains frame, while in the second we do not.
The explanation for doing this is given as follows. If we observe from the ground frame the following error is supposed occur:
The error can be stated as follows. The time dilation result, $\Delta t = \gamma\,\Delta t_0$, rests on the assumption that the $\Delta x_0$ between the two events is zero. This applies fine to the two emissions of light from the source. However, the two events in question are the absorptions of two light pulses by your eye (which is moving in the source frame), so $\Delta t = \gamma\,\Delta t_0$ is not applicable. Instead, $\Delta t_0 = \gamma\,\Delta t$ is the relevant result, valid when $\Delta x = 0$.
Here $x_0$, $t_0$ are the coordinates in the moving frame, and $\gamma$ is the dilation factor.
My question is: for which events, and why, is $\Delta x_0$ not equal to 0? And why, when we observe from the moving frame, is $\Delta x$ supposedly 0?
| There is no pure transverse Doppler effect in your two cases, because there is always a classical Doppler shift whenever the distance between the source and the observer changes with time.
See: Investigations on the Theory of the Transverse Doppler effect
blog.sciencenet.cn/blog-267101-748804.html 2013-12-11.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to know if a vehicle is moving without any external source of information? The situation is the following:
I'm inside a vehicle (plane or a car, it doesn't matter) and I need to know if the vehicle is moving at a constant speed BUT I cannot perceive any external change like visual changes, vibration, etc.
How can I know if the vehicle is moving? Can I really know?
Additional question Can I know my speed?
| Are you traveling in a vacuum? If not, use a pitot tube
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 8,
"answer_id": 6
} |
Why won't a block less dense than water fully submerge? Suppose we have an object of volume $1\: \mathrm{m^3}$. Mass of that object is $500\: \mathrm{kg}$, which means that the density of the object is $500\: \mathrm{kg/m^3}$.
If the object is in water it will float with half of its volume ($0.5\: \mathrm{m^3}$) submerged (assuming that the density of water is $1000\: \mathrm{kg/m^3}$; as the object's density is half that of water, half of it will be submerged).
From the Archimedes principle we know that the object will displace water of the same mass as itself. So the object will displace $500\: \mathrm{kg}$ of water, and $500\: \mathrm{kg}$ of water = $0.5\: \mathrm{m^3}$ of water.
We also know that the lost weight of an object = weight of water displaced by that object.
It means that the object will lose all of its weight in water, and as the buoyant force is the same as the weight of the object, the object should be totally submerged in water. But that is not possible; it will be submerged to only half of its volume. But how?
If the weight of displaced water is equal to the weight of the object, shouldn't it be totally submerged?
|
From the Archimedes principle we know that the object will displace water of the same mass as itself. So the object will displace 500 kg of water, and 500 kg of water = 0.5 m3 of water.
We also know that the lost weight of an object = weight of water displaced by that object.
The object does not lose any weight. It is pushing down with its weight. The water is pushing back up with an equal and opposite force: the weight of the $0.5\: \mathrm{m^3}$ of displaced water. Equilibrium. As the object is $1\: \mathrm{m^3}$, half of it is out of the water, since that half displaced no water.
It means that the object will lose all of it's weight in water and as buoyant force is same as the weight of that object, the object should be submerged totally in water.
You are double counting. No weight/mass is lost. It is just that the forces acting on the body, gravity and buoyancy, are equal.
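The force balance described above is easy to check numerically: setting the weight equal to the buoyant force gives a submerged fraction of $\rho_{object}/\rho_{water}$. A small sketch with the numbers from the question:

```python
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2 (cancels out of the fraction; kept for clarity)

def submerged_fraction(rho_object):
    """Fraction of volume below the waterline for a floating object.

    Equilibrium: rho_object * V * g = rho_water * (f * V) * g
    =>  f = rho_object / rho_water
    """
    return rho_object / RHO_WATER

volume = 1.0   # m^3
mass = 500.0   # kg
f = submerged_fraction(mass / volume)
print(f)       # 0.5 -> half the volume is under water

# The buoyant force on that submerged half exactly balances the weight:
weight = mass * G
buoyancy = RHO_WATER * (f * volume) * G
assert abs(weight - buoyancy) < 1e-9
```

The point of the assertion is that equilibrium is reached with only half the volume submerged; no "lost weight" needs to be invoked.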
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Atmosphere model Im working on project where I should simulate glider soaring. The goal is to create gliders that will look for regions with hot upwinds using evolution algorithms. That shouldn't be problem.
What I have problem with, is how to simulate the atmosphere with wind and temperature? I've read that meteorological simulators divide space into 3D matrix and compute temperature, pressure and wind speed for every cell.
What would be the simplest atmosphere model I could use with a 3D matrix? Please provide equations and an example of how to compute it.
Something like this http://hint.fm/wind/ but in 3D would be perfect, though it could be simpler. I thought about a matrix holding temperatures, where differences between adjacent cells would give me a vector with wind direction and speed, but I'm not sure if that would work and whether it is too simplistic for my simulation.
| Take a look at wikipedia article on numerical weather simulation and Atmospheric physics
In general the simulations involve complicated models and need fine-tuning and error compensations.
The idea though is simple. Start with simple models of the weather (see for example Lorenz equations)
Some references:
*
*http://www.slideshare.net/yotings/simulating-weather-numerical-weather-prediction-as-computational-simulation
*http://www.meteo.unican.es/en/research/climate_models
*http://www.eolss.net/sample-chapters/c02/e6-03a-04-03.pdf
In general there are no standard models for weather modeling and prediction; some are better approximations, or faster, or more suitable for a specific application.
Hope this is helpful
UPDATE:
If only wind and temperature are needed, one should use the equations of fluid dynamics (e.g. Navier-Stokes) and the thermodynamic equations of state for the temperature
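To make the 3D-matrix idea from the question concrete, here is a deliberately minimal sketch: a temperature grid updated by explicit heat diffusion, with a very crude "wind" taken to point down the local temperature gradient. This is not a physical atmosphere model (no pressure, no buoyancy, and periodic boundaries); all constants are illustrative.

```python
import numpy as np

# Toy 3D temperature grid with explicit heat diffusion.
nx = ny = nz = 16
T = np.full((nx, ny, nz), 288.0)       # ~15 C everywhere
T[6:10, 6:10, 0] = 310.0               # a hot patch on the "ground" layer

alpha, dt, dx = 1.0e-2, 0.1, 1.0       # diffusivity, time step, cell size

def step(T):
    # 6-neighbour Laplacian with periodic boundaries via np.roll
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6 * T) / dx**2
    return T + alpha * dt * lap        # forward-Euler diffusion update

for _ in range(100):
    T = step(T)

# Crude wind proxy: flow from hot cells toward cold cells
gx, gy, gz = np.gradient(T, dx)
wind = -np.stack([gx, gy, gz], axis=-1)
print(T.max(), wind.shape)             # heat has spread; wind is (16, 16, 16, 3)
```

A real simulation would replace the gradient "wind" with a fluid-dynamics solver, but this shows the per-cell update structure the question describes.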
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What is the entropy of a pure state? Well, zero of course. Because
$S = -\text{tr}(\rho \ln \rho)$ and $\rho$ for a pure state gives zero entropy.
But... all quantum states are really pure states right? A mixed state just describes our ignorance about a particular system. So how can properties like entropy and temperature come out of ignorance of information? That doesn't make sense to me.
|
A mixed state just describes our ignorance about a particular system
I don't think you can call our inability to access a pure state of any system an ignorance about a particular system. I think of a pure state as a mathematical abstraction that can only be related to reality by application of the Born rule, which either reflects our fundamental ignorance (akin to Kant's noumenal) or indeed a lack of an adequate theory (hidden variables), but in any case not an ignorance about a particular system. Thermodynamic properties arise not from ignorance about the pure state, but from discarding or ignoring accessible information, which translates into entropy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 5,
"answer_id": 4
} |
How many more galaxies are out there in the Universe (beyond the observable radius)? Let's say that the number of large galaxies in the observable universe is $n$ (approximated to 350 billion).
If the universe is homogenous and isotropic, what are the estimations for the total number of large galaxies in it?
$5n$, $10n$, $50n$?
| Somewhere between zero and infinity, if one believes the eternal inflation scenario. BTW, Max Tegmark covers some of this here
Eternal inflation posits that in the false vacuum from which our own universe inflated there may be any number of others doing the same, beyond our event horizon, all with various combinations of starting conditions and fundamental constants. It is one of the multiverse scenarios that Tegmark covers. As an aside, a multiverse can also arise from a single cyclic universe if it cycles through all possible states sequentially.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Frequency of an open air column Given only the length of an organ pipe to be $2.14 m$, is it possible to find what frequency it vibrates at? If I use the equation $f=\frac{v}{\lambda}$, does the $v$ apply to the speed of sound in the organ pipe or in air?
| The speed of sound should apply to $v$, because the sound waves are travelling through the air after they leave the organ pipe.
The speed of sound is approximated by the following formula:
$$
v = 331.3 + 0.606T
$$
Where $T$ is the temperature in degrees Celsius, and $v$ is the velocity in meters per second. In your case, suppose you're at room temperature (~25 degrees Celsius), then the speed of sound would be:
$$
\begin{align}
v &= 331.3 + 0.606(25)
\\&=346.45m/s
\end{align}
$$
For an open pipe the fundamental wavelength is $\lambda = 2L = 2(2.14m) = 4.28m$. Now, to solve for the frequency:
$$
\begin{align}
f &= \frac{v}{\lambda}\\\\
&=\frac{346.45ms^{-1}}{4.28m}\\\\
&\approx 80.95s^{-1}\\\\
&=8.10\times10^1 \ Hertz
\end{align}
$$
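A quick numerical version of this calculation, assuming a pipe open at both ends (fundamental $\lambda = 2L$) and the 25 °C temperature used above:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air, m/s (valid near room temperature)."""
    return 331.3 + 0.606 * temp_c

def open_pipe_fundamental(length_m, temp_c=25.0):
    """Fundamental frequency of an open-open pipe, where lambda = 2 * L."""
    return speed_of_sound(temp_c) / (2.0 * length_m)

f = open_pipe_fundamental(2.14)
print(round(f, 2))   # about 80.95 Hz
```

If the pipe were instead closed at one end, the fundamental would use $\lambda = 4L$ and come out an octave lower.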
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/145988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Dirac operator partial integration When you have an action with bosonic $X$ and fermionic $\psi$ (Majorana) fields and perform a SUSY transformation $\epsilon$ (the constant, infinitesimal parameter of transformation, a real, anticommuting spinor) can you do normal partial integration on the Dirac operator just like you're used doing with a derivative? For example:
$$
\begin{align}
S &= \int \mathrm d^{2}\sigma\; \bar{\epsilon}[\left(\gamma^{\alpha}\partial_{\alpha}\gamma^{\beta}\partial_{\beta}\right)X(\sigma)]\psi(\sigma)\\
&= \int \mathrm d^{2}\sigma\; \bar{\epsilon}[{\not}\partial{\not}\partial X(\sigma)]\psi(\sigma)\\
&= -\int \mathrm d^{2}\sigma\; \bar{\epsilon}[{\not}\partial X(\sigma)][{\not}\partial\psi(\sigma)] + \text{boundary terms}
\end{align}
$$
This would make calculations much easier because I'm lost with all matrix multiplications and properties of the Gamma matrices...I wanted to check because I couldn't find any information about this.
| In general you cannot, but in your special case it works out.
You should be aware of what the objects in your expression actually are, and how they relate to each other. $X$ is a bosonic field and as such does not feel the presence of gamma matrices at all. Your first line could be rewritten as
$$ S = \int \mathrm d^2 \sigma \, \bar \epsilon \gamma^\alpha \gamma^\beta \psi(\sigma) \, \partial_\alpha \partial_\beta X(\sigma) $$
The expression $\bar \epsilon \gamma^\alpha \gamma^\beta \psi$ does not have any free spinor indices! Now, you can use partial integration to get
$$ S = -\int \mathrm d^2 \sigma \partial_\beta \left(\bar \epsilon \gamma^\alpha \gamma^\beta \psi(\sigma) \right) \partial_\alpha X(\sigma)$$
which is
$$ S = -\int \mathrm d^2 \sigma \, \bar \epsilon \gamma^\alpha \not \partial \psi(\sigma) \, \partial_\alpha X(\sigma)$$
and this is your last line.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Question on Shockley's equation for FETs I'm currently studying FETs (Field Effect Transistors) in Navy school. What I know so far is that in FETs, $V_{gs}$ is reversed biased, creating a depletion zone. What this means in plain English is that the more negative the gate is with respect to the source, the more narrower the channel becomes, leading to more resistance in the drain to the source, so the drain to source acts like a resistor. There is a point when current flows constant and we call this the saturation point and denote this as $v_{gs(off)}$
We are introduced to this equation and I have no idea where it comes from:
$I_d = I_{dss} \left(1 - \frac{V_{gs}}{v_{gs(off)}} \right)^2$
Here $I_d$ is the current of the drain and $I_{dss}$ is the drain to source saturation. Can someone shed some light as to how we can get this equation?
| The exact equations for the I-V characteristics of transistors are derived using quantum mechanics. Several approximations can be used, one of which is based on the Schottky barrier analysis
This reference here derives the I-V linear and quadratic approximation (in saturation) for FET transistors.
Another reference here
UPDATE:
As @QMechanic pointed, Electrical Engineering should be better suited for this question.
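For reference, the square-law from the question is straightforward to evaluate; a small sketch (the $I_{dss}$ and $v_{gs(off)}$ values are made-up illustrative numbers, not from any datasheet):

```python
def drain_current(v_gs, i_dss, v_gs_off):
    """Shockley's square-law for a JFET in saturation.

    Valid for v_gs_off <= v_gs <= 0 (n-channel); past pinch-off the
    channel is fully closed, so I_d is taken as 0.
    """
    if v_gs <= v_gs_off:
        return 0.0
    return i_dss * (1.0 - v_gs / v_gs_off) ** 2

I_DSS, V_GS_OFF = 10e-3, -4.0   # illustrative: 10 mA, -4 V

print(drain_current(0.0, I_DSS, V_GS_OFF))    # I_d = I_dss at V_gs = 0
print(drain_current(-2.0, I_DSS, V_GS_OFF))   # a quarter of I_dss at half pinch-off
print(drain_current(-4.0, I_DSS, V_GS_OFF))   # 0 at pinch-off
```

Note the quadratic shape: halving the gate-source voltage toward pinch-off quarters the drain current.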
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why would different metals glow red at different temperatures? According to everything I've been taught about incandescence and black-body radiation, and some quick Googling to confirm I'm not crazy, just about everything, regardless of composition, should start glowing red at about the same temperature- 798K, the Draper point, where sufficient power in the black-body radiation curve crosses into the visible spectrum to be visible.
I have just been informed by a metallurgist friend, however, that different metals in his experience begin to glow red at wildly different temperatures; typically, just below their melting points. For example, apparently aluminum glows red at much lower temperatures than steel.
My hypothesis so far: The metals in question are far from perfect black bodies (reasonable, since most metals are shiny), and differing levels of emissivity in the low end of the visible spectrum require different temperatures to raise total emission in that range to visible levels.
This, however, does not explain why there should be any connection between glow-point and melting point.
Am I close to correct? Is there another better explanation? Or is my friend simply crazy?
| There is no direct relation between melting point and the colour of light produced; it's just that some heat energy is used in breaking intermolecular forces and a part of it is transferred to the atoms. So for a higher melting point, a bigger part of the energy is used in breaking intermolecular attraction and changing the state of matter.
The rest is explained by Max Planck's explanation of black-body radiation. When energy is given to an atom, its valence electrons get excited and jump to higher energy levels, then return to the original shell by emitting electromagnetic radiation of different wavelengths: first low-energy radiation such as infrared, then red, and so on as the heat increases.
And your question about different light in different elements is explained by the law of conservation of energy: neglecting the energy lost in breaking intermolecular forces, the same energy given to two different atoms produces the same light, no matter which element it is.
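The black-body side of this can be made quantitative with Planck's law: spectral radiance at a deep-red wavelength rises extremely steeply with temperature, which is why the visible glow "switches on" over a narrow range near the Draper point for an ideal black body. The wavelength and temperatures below are illustrative:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a black body, W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

# Radiance at 650 nm (deep red) grows by orders of magnitude over a few
# hundred kelvin; real metals shift the apparent onset via their emissivity.
for T in (600, 800, 1000):
    print(T, planck_radiance(650e-9, T))
```

Running this shows the 800 K value is roughly four orders of magnitude above the 600 K value, consistent with a sharp perceived onset of glowing.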
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 3,
"answer_id": 2
} |
Why does the "counting rule" of band theory fail to predict the conduction properties of some materials? I'm a little confused by the description I commonly hear about the electron counting rule in band theory. The general statement I find is that a "solid with an odd number of electrons per unit cell is a metal, while an even number of electrons could be an insulator or a metal". However, in materials such as CuO and VO2, there are two or more (even number) of formula units per unit cell, so regardless of, e.g., VO2 having a 3d1 configuration, two formula units would mean two d1 electrons and therefore an even number of electrons per unit cell. For some reason, "counting arguments" still imply that these materials should be metals, but I'm not sure how this is the case if the number of electrons per unit cell is the determining factor (according to band theory). I think this rule must be misstated. Can someone clear this up for me?
Note: I am aware that these are NOT metals - I'm just trying to understand why the band theory "counting argument" would suggest that they are.
| Are the bands filled for these materials?
Filled bands do not contribute to transport.
If a band is filled at $t=0$ it remains filled for all times
(a consequence of the Liouville theorem).
Of course, no transport also means the material has to be an insulator.
I hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Usage of Poisson's equation? I revisited electrostatics and I am now wondering what the big fuzz about Poisson's equation
$$\nabla^2 \phi = -\frac{\rho}{\varepsilon_0}$$
is. Wiki says
One of the cornerstones of electrostatics is setting up and solving problems described by the Poisson equation. Solving the Poisson equation amounts to finding the electric potential φ for a given charge distribution.
while the text I am currently studying [1] defines $\phi$ via
$$\phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\iiint_{\mathbf{r}' \in \mathbb{R}^3} \frac{\rho(\mathbf{r}')}{\|\mathbf{r}-\mathbf{r}'\|} dV' \ .$$
So why would you go all the way to a complicated partial differential equation when there is already a closed-form formula available (which solves exactly what the Poi.Eq. is good for according to wiki)? Is that integral not generally valid or what am I missing?
EDIT: I found out that the integral above is called the d'Alembert solution of Poisson's equation, but that doesn't answer my question about the application-importance of the latter.
[1] Equation (1.21) in http://users.ox.ac.uk/~math0391/EMlectures.pdf
| The integral you wrote integrates $\rho$ over the whole space. This is impossible to calculate if $\rho$ is not known in the whole space.
For example, when the charge $\rho$ is known only inside some finite region enclosed by a metallic shell, the shell is known to have constant potential $\phi$ on its inner surface. This information is useless in calculating the integral because the potential does not occur in it and the charges outside are still unknown.
But together with the Poisson equation, the boundary condition determines the potential $\phi$ at all points inside uniquely. There are methods to find $\phi$ based on this, both analytical and numerical.
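A minimal numerical sketch of this point: a 1D Poisson problem with grounded boundaries, solved by Jacobi relaxation. The boundary condition $\phi = 0$ enters the scheme directly, whereas the bare integral formula would need the (unknown) induced charges. Grid size, charge profile, and $\varepsilon_0 = 1$ units are all illustrative:

```python
import numpy as np

# Solve d2(phi)/dx2 = -rho on [0, 1] with phi(0) = phi(1) = 0 (eps0 = 1).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rho = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # a slab of charge in the middle

phi = np.zeros(n)
for _ in range(20000):
    # Jacobi update of the interior; the endpoints stay pinned at 0
    phi[1:-1] = 0.5 * (phi[:-2] + phi[2:] + h**2 * rho[1:-1])

# The potential peaks over the charged region and vanishes on the boundary,
# exactly the kind of information the bare integral formula cannot use.
print(phi[0], phi[-1], phi[n // 2])
```

For this slab the analytic peak value is $0.045$ (in these units), which the relaxation reproduces; analytic methods such as separation of variables or Green's functions exploit the same boundary data.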
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What's Optimal About Six Legs According to Physical Laws? In many respects the insects can be regarded as the most successful class of animals in evolutionary terms. And one of the most common features of insects is that they (mostly) all have six legs.
Not discounting other traits, is there something about six legs that has helped insects achieve this success?
Can we use physical laws to analyze and determine an optimality of having six legs - perhaps such as stability?
| I can think of two possible reasons: first, you can have half your legs up in the air at one time (as in walking - two on one side and one on the other, then change) and still be perfectly stable (3 legs = most stable, like a tripod); and second, if a predator chews off a leg on either side, you still have two legs (so you can still walk). I think those arguments are borderline biomechanical, rather than physical...
The first argument has some solid scientific backing - see for example http://web.neurobio.arizona.edu/gronenberg/nrsc581/powerpoint%20pdfs/cpg.pdf . It doesn't take a lot of brains to walk with six legs... in fact it can be done almost entirely with "local" neurons. That's a good thing when you don't have a lot of brains.
Quoting from https://answers.yahoo.com/question/index?qid=20090418111020AA75mgR :
Generalizing, insects walk with a metachronal gait and, with speed, a tripod gait - which involves a tripod stance - 2 legs on one side of the body and one on the other remain stationary while the other legs move forward, then the stationary legs walk as the others take a stance. In this way, walking involves maximum stability with a minimum of neural coordination. In fact, ganglia and other nerves and sensors located on each leg may contribute as much to the actual walking movement as the brain does. It's a very easy, stable and adaptable locomotory system which evolved from the basic arthropod body plan with 2 pairs of limbs on each body segment.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's the difference between frequency domain and time domain spectra?
If I have a mechanical oscillator and want to observe its dynamical behavior, is there any additional information in observing it in the time domain rather than the frequency domain? Normally we look at the frequency-domain spectrum (the power spectral density) as the information about the oscillator. In fact, I'm solving for the dynamical behavior of two coupled mechanical oscillators, like the picture above. Someone told me that I could get different information from the time domain than from the frequency domain. In my opinion, the difference between the time domain and the frequency domain is just a Fourier transform. So what's the difference?
| There are no differences. The frequency domain is used as a mathematical transformation tool (Fourier, Laplace) to solve differential equations that would be too complex to handle directly in the time domain.
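To make the "same information" point concrete, here is a minimal sketch (pure Python, naive $O(N^2)$ DFT; the sampling numbers are made up for the demo): a sinusoid spread over the whole time-domain record collapses to a single spectral peak, and nothing is lost because the transform is invertible.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2) -- fine for a small demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Sample a 5 Hz sinusoid for 1 s at 64 Hz.
fs = 64
N = 64
signal = [math.sin(2 * math.pi * 5.0 * n / fs) for n in range(N)]

spectrum = dft(signal)
peak_bin = max(range(N // 2), key=lambda k: abs(spectrum[k]))
print(peak_bin)  # -> 5: the 5 Hz component shows up as a single peak
```

Running the inverse transform on `spectrum` would recover `signal` to floating-point accuracy, which is exactly the sense in which the two domains carry the same information.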
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
What is the weight of the Philae lander on the Churyumov–Gerasimenko comet compared to earth? We know the payload mass of the Philae lander was 21kg.
We know the mass of the Churyumov–Gerasimenko comet is roughly 1 x 10^13kg.
We know the mass of Earth is roughly 5.9x10^24kg.
I've heard one estimate of the weight of the lander as 100 earth grams. Looking at the ratios (1/(1 x 10^11)) that doesn't seem right to me.
My question is: What is the weight of the Philae lander on the Churyumov–Gerasimenko comet compared to earth? (And does it matter that the comet is Duck shaped? ie does the weight change depending on where on the comet it is?)
| As you rightly pointed out, the fact that 67P is oddly-shaped should alter its gravitational attraction on various parts of the comet.
That said, if we were to go by Wikipedia in a rather off-hand manner, we find that the lander is $100\text{ kg}$ (as @fibonatic rightly pointed out) and 67P has an acceleration due to gravity of $\textbf{g'} = 10^{-3}\text{ m/s}^2$. Its weight $W$ would therefore be simply a calculation of $W = m\,\textbf{g'}$, giving us $W = 10^2 \times 10^{-3}\text{ N} = 0.1\text{ N}$. On Earth, a weight of $0.1\text{ N}$ is what a mass of $0.1/9.8 \approx 0.01\text{ kg}$ feels, i.e. about $10$ earth grams; the quoted "$100$ earth grams" comes from reading $0.1$ N as $0.1$ kgf, which slips a factor of $g_{earth}$.
[P.S. I will update this answer with better sources than Wikipedia as soon as I find time.]
Edit 1: This ESA webpage seems to confirm the figures.
Edit 2: Calculating $\textbf{g'}$
I made some calculations:
Using the formula $\textbf{F} = M\,\textbf{g'} = \frac{GmM}{r^2} \Rightarrow \textbf{g'} = \frac{GM}{r^2}$ we can calculate the acceleration due to gravity on 67P (m being the comet's mass and M that of our lander). The above ESA page gives us this figure:
Seeing how the dimensions vary wildly, I decided to consider a mean of, say, 3.5km as diameter and 1.75km as $r$. 67P's mass is, of course, $10^{13}\,kg$ which gives us,
$$
\textbf{g'} = \frac{6.67 \times 10^{-11} \times 10^{13}}{\left( 1.75 \times 10^3 \right)^2} \approx 10^{-3} ms^{-2}
$$
A more precise answer, is, of course, $0.217 \times 10^{-3} ms^{-2}$ but since we have been very liberal in our assumptions of mass and radius, I think we ought to simply consider the order of magnitude, $10^{-3} ms^{-2}$. This pdf file contains some simulation data that agrees with our result.
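The arithmetic above is easy to reproduce. A small sketch (all input figures are the rough assumptions stated in this answer; using the radius-based $\textbf{g'}\approx 2\times10^{-4}\text{ m/s}^2$ rather than the round $10^{-3}$ figure):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_comet = 1e13     # mass of 67P, kg (rough figure from the question)
r = 1.75e3         # assumed mean radius of 67P, m
m_lander = 100.0   # Philae mass, kg
g_earth = 9.81     # m/s^2

g_comet = G * M_comet / r**2              # surface gravity of the comet
weight = m_lander * g_comet               # weight on the comet, in newtons
earth_equiv = weight / g_earth * 1000.0   # mass (grams) with the same weight on Earth

print(g_comet)      # ~2.2e-4 m/s^2
print(weight)       # ~0.022 N
print(earth_equiv)  # a couple of earth grams
```

Since the comet is far from spherical, the result is only an order-of-magnitude estimate, and it does change depending on where on the "duck" the lander sits.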
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Who plays the role of centrifugal force in an inertial frame of reference? It is noteworthy to quote a sentence from my book,
It is a misconception among the beginners that centrifugal force acts on a particle in order to make the particle go on a circle. Centrifugal force acts only because we describe the particle from a rotating frame which is non-inertial.
Yes, the statement is undoubtedly right. But one thing that is annoying me is that If the particle moves on a circle due to centrifugal force from a rotating frame, what is the cause for motion of the particle on a circle from an inertial frame ? If there were only centripetal force , the particle would go towards the centre and never it would move in a circle. So, in order to move the particle on a circle , who will play the role of centrifugal force when viewed from inertial frame?? Please help.
| In an inertial frame the only force that causes a particle to move in a circular motion is the centripetal force. The reason that a particle does not "fall" into the center is because it has some tangential velocity, so it moves away from the center tangentially as it is falling towards it. The relationship between the centripetal acceleration and tangential velocity is $a = v^2/r$. Remember that acceleration is just the rate of change of velocity, and it is velocity that dictates the direction the object is travelling. The centripetal acceleration is really only changing the direction of the velocity vector (not the magnitude) such that the velocity is always in a direction tangential to the circular motion.
Satellites, for instance, are in constant free fall (at centripetal acceleration $g$), but they move fast enough tangentially that they keep missing the ground as they fall towards the Earth; that is why their orbits are stable.
There is a "reactive" centrifugal force in the inertial frame, but it is a reactive force (per Newton's Third Law). It is applied by the particle to the object applying the centripetal force and is equal in magnitude and opposite in direction to the centripetal force. An example would be if one person was spinning another person and the two are held together by a rope, the tension in the rope is equal to the reactive centrifugal/centripetal force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is a photon really massless? If a photon travels at the speed of light and it's massless, then it must have no energy; but this is not the case, as we see in the photoelectric effect. Also, help me to understand what photons are made of and how they are created.
|
If a photon [is] massless then it must have no energy
This is not the case. One way to think of mass is as nothing more than a convenient name for rest energy. Photons are indeed massless and thus have zero rest energy. This is not an issue because according to special relativity, they do not come with a rest frame.
Please note that assuming we denote rest mass by $m$, the well-known $E=mc^2$ is not the whole story - the general formula reads
$$
E^2 = m^2c^4 + p^2c^2
$$
In principle, you could think of three types of particles, depending on the relative values of energy and momentum (written here in natural units with $c=1$):
*
*$E^2 > p^2$: massive particles, $v < c$
*$E^2 = p^2$: massless particles (eg photons), $v = c$
*$E^2 < p^2$: tachyonic 'particles', $v > c$
The last variant is hypothetical and not really particle-like (such excitations cannot be properly localized and would manifest more like an action at a distance).
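The three cases can be played around with numerically. A small sketch (the function name and unit conventions are my own): the speed follows from $v/c = pc/E$, so a massless particle moves at $c$ for any momentum, with finite energy $E = pc$.

```python
import math

def speed_fraction(m, p, c=1.0):
    """v/c of a free relativistic particle: v/c = pc/E, E = sqrt(m^2 c^4 + p^2 c^2)."""
    E = math.sqrt((m * c * c) ** 2 + (p * c) ** 2)
    return p * c / E

print(speed_fraction(m=1.0, p=1.0))  # massive: strictly below 1
print(speed_fraction(m=0.0, p=1.0))  # massless: exactly 1, whatever the momentum
```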
what are photons made of
As far as we know, they are elementary particles. They are excitations of a bosonic quantum field and not made out of anything.
how are they created
Through processes that involve the electromagnetic interaction in general and accelerating, vibrating or jumping electrons in particular.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/146975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Structure of white light? White light is a mixture of different wavelengths.
If so, what will be the structure of a beam of white light? Is there a separation between the different colours? What does the mixture actually mean?
Does a beam of white light show any spacing between the different wavelengths?
| White light is composed of photons of varying energies. The photons themselves do not have to have any perpendicular spatial separation as your picture suggests. Rather, it is possible that the photons of different energies are coming along a single path, one after another.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to create a parachute large enough to stop all velocity? This idea came to me while playing Kerbal Space Program. I noticed that the larger my parachute was, the slower my rocket would fall back down to Kerbin. I would like to know if it is possible to create a parachute so large in the real world that it might stop all velocity, essentially making whatever is attached to it float in mid-air. Common sense is telling me "no," but I could always be wrong, and I would love some explanation behind whether or not it is possible.
| A parachute is a device specifically designed to create viscous friction.
Viscous friction generates a force that:
*
*is oriented opposite to the velocity;
*is proportional to (a certain power of) the velocity [*].
So the falling velocity will increase until the drag force (pointing upwards) becomes equal to the weight of the falling object (pointing downwards). This equilibrium velocity can be reduced by increasing the drag, but it cannot be brought to zero (short of infinite drag), because zero velocity would mean zero drag force.
If you want to stop the motion you need another force; this can be the buoyant force (but then you have an aerostat). Another possibility is an upward air flow: then you will be falling with respect to the air while staying steady with respect to the ground.
[*] Typically $F\propto v$ for small Reynolds number and $F \propto v^2$ for big Reynolds number
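To see why the equilibrium velocity can be reduced but never killed, one can sketch the big-Reynolds-number case, where $mg = \frac{1}{2}\rho C_d A v^2$ at equilibrium (the drag coefficient, density, and masses below are illustrative assumptions, not measured parachute data):

```python
import math

def terminal_velocity(mass, area, Cd=1.5, rho=1.225, g=9.81):
    """Speed at which quadratic drag balances weight:
    m g = (1/2) rho Cd A v^2  ->  v = sqrt(2 m g / (rho Cd A))."""
    return math.sqrt(2.0 * mass * g / (rho * Cd * area))

# Quadrupling the canopy area only halves the equilibrium speed;
# v -> 0 would require A -> infinity.
for area in (25.0, 100.0, 1.0e6):
    print(area, terminal_velocity(mass=90.0, area=area))
```

Even an absurd square-kilometre canopy still leaves a small but nonzero fall speed, which is the answer's point about needing a second force (buoyancy or an updraft) to truly hover.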
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 8,
"answer_id": 6
} |
How two prove the second sound velocity is $1/\sqrt{3}$ times than the first sound From Landau two hydrodynamics model in superfluid, we have the result
$c_1^2=\frac{\partial P}{\partial \rho}|_T$ and $c_2^2=\frac{\rho_s s^2 T}{\rho_n c}$.
In the zero temperature limit, how to relate those two quantities, to get the conclusion that $c_1^2=3c_2^2$?
| Let's consider a gas of particles which all move with the same constant speed $V$. For such a gas we can derive the speed of sound from $c_s^2=\frac{\partial P}{\partial \rho}$, where $P$ is the pressure and $\rho$ the density of the gas. Let $p$ denote momentum; the momentum that the gas transfers to a wall of area $S$ in a small time $\Delta t$ is $\Delta p=2 m V_x \cdot \frac{n}{2} V_x \Delta t S$, so $P=\frac{F}{S}=\frac{\Delta p}{\Delta t S}= m n V_x^2 = \rho V_x^2 =\frac{\rho V^2}{3}$, and the speed of sound takes the form $c_s^2=\frac{V^2}{3}$.
The same picture occurs for second sound. There are two types of excitations in a superfluid: phonons and rotons. At low temperature the main contribution comes from the phonons, because the roton contribution is suppressed by the energy gap, $n_{rot}\sim e^{-\frac{\Delta}{T}}$. The phonon spectrum has the form $E(p)=u p$, where $u$ is the speed of (first) sound, a constant. (This means all excitations move with the same speed $|u|$.) In your notation, $u=c_1$.
Second sound is a temperature wave in this gas of excitations, all of which move with the same speed.
Using the argument of the first paragraph, it is then easy to obtain $c_2^2=\frac{c_1^2}{3}$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the conductivity of a wire in a vacuum decrease over time? Does the conductivity of a wire in a vacuum decrease over time, say over the period of years or decades? In other words: Does current degrade a wire, making it less conductive? If so, by how much, and why does this occur? Does it have something to do with the 2nd Law of Thermodynamics?
(I'm not looking interested in mechanical effects of currents on wire degradation, but thermodynamic effects.)
| Some corrosion always takes place (pure gold is not used for transmission lines, AFAIK:-) ), so the conductivity decreases with time, although for some materials this effect can be very small (http://books.google.com/books?id=IWn9uuISVIoC&pg=PA109&lpg=PA109&dq=transmission+line+corrosion+conductivity&source=bl&ots=Qt5Z3a9Irs&sig=0hoDGt_x9F-UtTUqk8eDReqfEbw&hl=en&sa=X&ei=xRlsVIOKOoqhNsiPgqAE&ved=0CEQQ6AEwCTgU#v=onepage&q=transmission%20line%20corrosion%20conductivity&f=false )
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Statistical physics and momentum conservation In statistical physics one usually looks at energy as a conserved quantity and e.g. in the canonical ensemble assumes a constant average energy of the ensemble. Now why don't we usually do this for other conserved quantities like momentum? Why not do a 'canonical' ensemble with momentum exchange? Is it more complicated or simply never useful?
| I think it wouldn't be useful. In statistical mechanics you want to model the microscopic behaviour of a thermodynamical system, and in the laws of thermodynamics there is no mention of momentum. But I believe nothing forbids you from talking about the average momentum of a statistical-mechanical system.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
What does the exponential decay constant depend on? We know the law of radioactivity:
$$N=N_0e^{-\lambda t}$$
where $\lambda$ is the exponential decay constant. My question is: what does this constant depend on?
| The constant is a function of the stability of the nucleus, and is experimentally determined for every isotope. In other words - every kind of nucleus has its own value of $\lambda$ and there is no way (that I know) to get an accurate value for it, other than measurement.
But there are some nuclear physicists roaming who will put me out of my misery, I'm sure...
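Although $\lambda$ itself must be measured, once it (or equivalently the half-life) is known, the decay law from the question is easy to evaluate. A small sketch (the carbon-14 half-life below is the standard measured value; the function names are my own):

```python
import math

def decay_constant(t_half):
    """lambda = ln 2 / t_half, from N(t) = N0 * exp(-lambda * t)."""
    return math.log(2.0) / t_half

def remaining_fraction(lam, t):
    """N(t) / N0 for decay constant lam (t in the same units as 1/lam)."""
    return math.exp(-lam * t)

# Carbon-14: half-life ~5730 years -- a measured input, not a derived one.
lam = decay_constant(5730.0)
print(remaining_fraction(lam, 5730.0))   # 0.5 after one half-life
print(remaining_fraction(lam, 11460.0))  # 0.25 after two
```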
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Why do silicon solar cells only produce ~0.6V when the band gap of silicon is ~1.1V? I've been researching this and can't quite figure out where the lost voltage is going. When silicon is excited by a photon within its absorption spectrum, it will always have an internal potential of 1.1 V, as per the band gap. Why is the p-n junction only able to extract roughly half of this?
| I'll talk about why silicon diodes usually have threshold voltage of about $0.6V$. For the potential generated by silicon solar cells the argument is much the same.
Built-in potential is what determines threshold voltage. This potential can be calculated using the formula:
$$q\varphi_{in}=kT\ln\frac{n_{n0}p_{p0}}{n_i^2},$$
where $k$ is the Boltzmann constant, $n_i$ is the intrinsic concentration of charge carriers, and $n_{n0}$ and $p_{p0}$ are the concentrations of the dominant charge carriers (i.e. holes in p-Si and electrons in n-Si).
If we take room temperature, then $kT=0.025\text{ eV}$, $n_i=10^{10}\,\text{cm}^{-3}$, and taking doping concentrations of $10^{15}\text{ cm}^{-3}$, and taking into account that at room temperature almost all the doping atoms are thermally ionized, we have $n_{n0}=p_{p0}=10^{15}\,\text{cm}^{-3}$, so we calculate the built-in potential to be
$$q\varphi_{in}=0.57\text{ eV}.$$
Taking higher doping concentration, e.g. $10^{16}\text{ cm}^{-3}$, we'll get $q\varphi_{in}=0.69\text{ eV}$.
See this wiki page for some more about doping and charge concentration.
Have a look at the band diagram of pn junction with doping of $10^{15}\text{ cm}^{-3}$ (from that same wiki page):
We can see that on bias of $0.59\text{ eV}$ the bands are almost flattened, so increasing the bias will just tilt the whole bands, which corresponds to ohmic conductance region of diode voltage response.
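The numbers in this answer can be reproduced in a couple of lines (a sketch; the function name and defaults are my own, with concentrations in cm^-3, $n_i=10^{10}\text{ cm}^{-3}$, and $kT=0.025$ eV at room temperature):

```python
import math

def built_in_potential(Nd, Na, ni=1.0e10, kT=0.025):
    """q * phi_in = kT * ln(n_n0 * p_p0 / ni^2); concentrations in cm^-3, result in eV."""
    return kT * math.log(Nd * Na / ni**2)

print(built_in_potential(1e15, 1e15))  # ~0.58 eV
print(built_in_potential(1e16, 1e16))  # ~0.69 eV
```

The logarithm is why the built-in potential moves so slowly with doping: each factor-of-10 increase in both concentrations adds only about $2 \times 0.025 \times \ln 10 \approx 0.12$ eV.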
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is the world we are living in discretized? I do not know how to use professional words to ask my question, so I will try to use layman's language. Please bear with me for a moment.
A ROUGH GUESS
The world our eyes are seeing every moment is a picture reflected in our eyes. I guess our eyes are like cameras that are taking pictures "continually". I suppose there is a frequency to this picture-taking; say it is 1/10000 s, and let's assume the time it takes to capture a picture is negligible. Something like that.
THE QUESTION
My question is: if we take a picture at 0/10000 s, 1/10000 s, 2/10000 s, etc., how do I know that the world exists between the times 1/10000 s and 2/10000 s?
So now:
*
*If my guess is wrong, then what is the real picture? What is
happening in reality?
*If my guess is correct, how do we know the world exists continuously? How would you use experimental methods to prove it? Or might there, theoretically, be many worlds coexisting with ours in our time gaps?
EDIT
I feel I still have not got a satisfying answer to my first question. Could anyone explain to me: do our visual organs (i.e., our eyes) function continuously or discretely?
| According to quantum mechanics the time evolution of the universe is described by a path integral that will sum over all histories. If we consider a robot whose processor runs at a clock cycle of $\tau$ to simplify things, then all the possible time evolutions during that period of $\tau$ will contribute to explain the robot's observations, including the Moon changing to green cheese and back again within that time interval of $\tau$. However such strange histories only make extremely negligible contributions to the path integral.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What does it mean to say "a paramagnetic material is attracted to an external magnetic field?" I'm just having a hard time wrapping my head around what actually goes on when a paramagnetic material is exposed to an external magnetic field. I understand that the individual dipoles line up so that they point in the direction of the field, but why does that need to happen for there to be magnetic attraction? And what exactly is being attracted in the first place? If I imagine the dipoles are caused by little rings of current, are the rings themselves pulled in the direction of the field? Also, what happens if a dipole starts out pointed in exactly the opposite direction to the external field?
| Let me explain the difference using a simple model of an atom - as consisting of electrons revolving in orbits around a heavy nucleus.
When a material is exposed to a magnetic field, the electrons move in such a manner as to oppose it. The material mildly repels the magnetic field.
However, if a material has unpaired electrons, it has a net molecular magnetic moment. An external magnetic field tries to align the molecule moments along its direction while the thermal agitation tries to randomize them. The competition between the ordering effect of the magnetic field and the disordering effect of heat results in a temperature dependent magnetic attraction of the material to the field, called paramagnetism. All materials are diamagnetic while those with unpaired electrons are paramagnetic. When paramagnetism exists, it overshadows diamagnetism by several orders of magnitude.
As alluded by @TZDZ, a complete understanding of magnetism requires a quantum mechanical treatment. However, this simple model usually suffices for a first introduction to the topic.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/147930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Are there eight or four independent solutions of the Dirac equation? I edited the question as a result of the discussion in the comments. Originally my question was how to interpret the four discarded solutions. Now I'm taking a step back and hope that someone can clarify in what sense it is sensible to discard four of the eight original solutions of the Dirac equation.
From making the ansatz ${\mathrm{e}}^{+ipx}$ and ${\mathrm{e}}^{-ipx}$, with $E=\pm \sqrt{ (\vec p)^2 +m^2} $ we get eight solutions of the Dirac equation. $u_1, u_2, u_3 , u_4$ and $v_1,v_2,v_3,v_4$.
Conventionally, the four solutions ($u_3, u_4, v_3, v_4$) following from $E=- \sqrt{ (\vec p)^2 +m^2}$ are said to be linearly dependent on the remaining four solutions with $E=+\sqrt{ (\vec p)^2 +m^2}$; of those, two ($u_1,u_2$) are commonly interpreted as particle solutions and two ($v_1,v_2$) as antiparticle solutions.
Nevertheless, in order to be able to construct chirality eigenstates we need the other four solutions and I'm unsure in how far we can then say that four of the eight solutions are really linearly dependent. A chiral eigenstate must always be of the form $ \psi_L= \begin{pmatrix} f \\ -f \end{pmatrix} $ for some two component object $f$. In order to construct such an object we need all eight solutions. For example $\psi_L= u_1 - u_3$, as can be seen from the explicit form of the solutions recited below.
In addition, I'm unable to see that the eight solutions are really linearly dependent, because for me this means that we can find numbers $a,b,c,d,e,f,g,h \neq 0$, such that $a u_1 + b u_2 + c u_3 +d u_4 + e v_1 + f v_2 + g v_3 + h v_4=0$. As pointed out in the comments, this can be done, but only for one point in time. Is this really enough? In what sense is then for example the basis used in the Fourier expansion $\sum_n (a_n e^{in x} + b_n e^{-in x}) $ linearly independent? With the same reasoning we could find numbers for one $x$ to show that all these $e^{in x}$ and $e^{-in x}$ are linearly dependent...
The explicit solutions
This is derived for example here
Two solutions follow from the ansatz ${\mathrm{e}}^{-ipx}$ with $E=+ \sqrt{ (\vec p)^2 +m^2}$ and two with $E=- \sqrt{ (\vec p)^2 +m^2}$ .
In the rest frame the solutions are
$$ E=+ \sqrt{ (\vec p)^2 +m^2} \rightarrow u_1 = \begin{pmatrix} 1 \\ 0 \\0 \\ 0 \end{pmatrix} {\mathrm{e}}^{-imt} \qquad u_2 = \begin{pmatrix} 0 \\ 1 \\0 \\ 0 \end{pmatrix} {\mathrm{e}}^{-imt} $$
$$ E=- \sqrt{ (\vec p)^2 +m^2} \rightarrow u_3 = \begin{pmatrix} 0 \\ 0 \\1 \\ 0 \end{pmatrix} {\mathrm{e}}^{-imt} \qquad u_4 = \begin{pmatrix} 0 \\ 0 \\0 \\ 1 \end{pmatrix} {\mathrm{e}}^{-imt} $$
Analogously, from the ansatz ${\mathrm{e}}^{+ipx}$ we get four more solutions.
$$ E=+ \sqrt{ (\vec p)^2 +m^2} \rightarrow v_1 = \begin{pmatrix} 0 \\ 0 \\1 \\ 0 \end{pmatrix} {\mathrm{e}}^{imt} \qquad v_2 = \begin{pmatrix} 0 \\0 \\0 \\ 1 \end{pmatrix} {\mathrm{e}}^{imt} $$
$$ E=- \sqrt{ (\vec p)^2 +m^2} \rightarrow v_3 = \begin{pmatrix} 1 \\ 0 \\0 \\ 0 \end{pmatrix} {\mathrm{e}}^{imt} \qquad v_4 = \begin{pmatrix} 0 \\ 1 \\0 \\ 0 \end{pmatrix} {\mathrm{e}}^{imt} $$
Examples for chiral eigenstate are, with some two component object $f$
$$\psi_L = \begin{pmatrix} f \\ -f \end{pmatrix} \hat = u_1 - u_3 = \begin{pmatrix} 1 \\ 0 \\0 \\ 0 \end{pmatrix} {\mathrm{e}}^{-imt} - \begin{pmatrix} 0 \\ 0 \\1 \\ 0 \end{pmatrix} {\mathrm{e}}^{-imt} \qquad \text{ or } \qquad \psi_L = \begin{pmatrix} f \\ -f \end{pmatrix} \hat = u_2 -u_4 $$
$$\psi_L = \begin{pmatrix} f \\ -f \end{pmatrix} \hat = v_1 - v_3 \qquad \text{ or } \qquad \psi_L = \begin{pmatrix} f \\ -f \end{pmatrix} \hat = v_2 - v_4 $$
And similar for $\Psi_R = \begin{pmatrix} f \\ f \end{pmatrix}$.
Are four of the eight solutions really dependent? If yes, how can this be shown explicitly? Any source, book, or pdf would be awesome. Is it possible to interpret the solutions $(u_3,u_4,v_3,v_4)$ that can be discarded for many applications, but that are needed in order to create chirality eigenstates?
| this is not an answer to your question per se but it might be relevant.
The solutions you've listed are not the general solutions of the Dirac equation. These are only the solutions for a particle at rest. The general solutions have terms dependent on the momentum, energy and mass.
The chirality eigenstates are not derived from the spinors, but rather from the gamma-5 matrix, with the help of the helicity eigenstates in the ultrarelativistic limit ($E \gg m$). Check Mark Thompson's book on Modern Particle Physics.
The solutions of the Dirac equation are made of a spinor times a plane wave. The spinors themselves are the 4-component vectors; they are what is linearly independent. If you check the general forms of the spinors (not yours, which are at rest), without the plane-wave part, you'll see that indeed only four of them are linearly independent. All of the above is derived from first principles in Thompson's book.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Habitable zones around other stars I have a question about measuring the boundaries of habitable zones on other planets.
Is it okay to assume that, if the Sun's habitable zone starts at a distance $R_0$ and its luminosity is $L_0$, we can calculate the inner boundary of the habitable zone of any other star with luminosity $L$ as $$R= R_0 \sqrt{\frac{L}{L_0}}\,?$$
The formula was derived from $$F=\frac{L}{4\pi R^2},$$ where $F$ is the flux and $R$ is the distance to the star. I assume that the flux at the boundary should remain the same as it was in the Solar System, thus $F_1=F_2$.
If those are incorrect assumptions, I would like to know what I am missing.
Thank you!
| Let me see if I understand the derivation.
$$F=\frac{L}{4 \pi R^2}$$
becomes
$$F_\odot=\frac{L_\odot}{4 \pi R_\odot^2}$$
and
$$F_{\text{ other star}}=\frac{L_{\text{ other star}}}{4 \pi R_{\text{ other star}}^2}$$
and so setting them equal means
$$\frac{L_\odot}{4 \pi R_\odot^2}=\frac{L_{\text{ other star}}}{4 \pi R_{\text{ other star}}^2}$$
and
$$\frac{L_\odot}{R_\odot^2}=\frac{L_{\text{ other star}}}{R_{\text{ other star}}^2}$$
We re-arrange to get
$$\frac{R_{\text{ other star}}^2}{R_\odot^2}=\frac{L_{\text{ other star}}}{L_\odot}$$
$$\frac{R_{\text{ other star}}}{R_\odot}=\sqrt{\frac{L_{\text{ other star}}}{L_\odot}}$$
which is your equation
$$R_{\text{ other star}}=R_\odot \sqrt{\frac{L_{\text{ other star}}}{L_\odot}}$$
So the derivation seems to check out mathematically.
Logically, you also seem to be fine. It makes total sense that the luminous flux should be equal in both cases, and Wikipedia agrees with you twice, here:
Whether a body is in the circumstellar habitable zone of its host star is dependent on the radius of the planet's orbit (for natural satellites, the host planet's orbit), the mass of the body itself, and the radiative flux of the host star.
and here
Astronomers use stellar flux and the inverse-square law to extrapolate circumstellar-habitable-zone models created for the Solar System to other stars.
I think you're fine.
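A quick numerical sketch of the scaling relation (the function name and the round luminosity figures are illustrative assumptions, not catalogue values):

```python
import math

def hz_boundary(L_ratio, R0=1.0):
    """Scale a Solar-System habitable-zone boundary R0 (in AU) to a star of
    luminosity L = L_ratio * L_sun, keeping the stellar flux the same."""
    return R0 * math.sqrt(L_ratio)

# Illustrative round luminosities, in units of L_sun.
for name, L in (("Sun-like", 1.0), ("dim red dwarf", 0.01), ("bright A star", 25.0)):
    print(name, hz_boundary(L, R0=0.95))  # inner boundary, AU
```

The square root means a star 100 times dimmer pulls the boundary in only by a factor of 10, which is why habitable zones of red dwarfs end up so close to the star.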
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What do we see while watching light? Waves or particles? I'm trying to understand quantum physics. I'm pretty familiar with it, but I can't decide what counts as an observation that causes particle-like behaviour (at least when it comes to light). So the question is: what do we see with our eyeballs?
*
*We point a laser (or any kind of light source) at the wall. We see its path from point A to B. Do I "see" a particle or a wave?
*Now take an ordinary object. It seems that observing its "pieces" is what influences their behaviour. Does this mean that while we're watching light, it acts like a particle at the quantum level?
| It would be physically impossible to be able to "see" light as anything other than a particle (photon). The only time photons, or any other subatomic particle for that matter, can be described as a wave is when we are NOT looking at them.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 5,
"answer_id": 0
} |
Triple integral $\iiint_{\mathbb{R}^3} d^{3}q ~\delta^{3}(\vec{q})\frac{(\vec{p}\cdot\vec{q})^2}{q^{2}} $ involving Dirac Delta function
I am trying to find $$\iiint_{\mathbb{R}^3} d^{3}q ~\delta^{3}(\vec{q})\frac{(\vec{p}\cdot\vec{q})^2}{q^{2}},$$ where $\vec{p}$ is some fixed vector.
The answer should be $\frac{p^2}{3}$. Below is my attempt, which seems to lead to the wrong answer $\frac{p^2}{2}$.
Attempt: Let's align $q_{z}$ with $\vec{p}$, so we measure $\theta$ with respect to $\vec{p}$. Since there is no $\phi$ dependence, I can write $$\delta^{3}(\vec{q})=\frac{\delta(q)\delta(\theta)}{2\pi q^{2}\sin(\theta)}.$$
Therefore I have
$$p^{2}\int_{0}^{\infty} dq \delta(q)\hspace{1mm}\int_{-\pi}^{\pi}d\theta\hspace{1mm} \delta(\theta)\cos^2\theta .$$
I understand $$\int_{0}^{\infty}\delta(q)dq = \frac{1}{2},$$ if we treat $\delta(q)$ as a limiting case of a symmetric Gaussian distribution, while the $\theta$ integral is $1$. So my answer to the question is $\frac{p^2}{2}$, which is different from the correct answer $\frac{p^2}{3}$.
So my questions are:
*
*What went wrong in my derivation?
*How do you derive and justify the answer $\frac{p^2}{3}$ from first principles?
$$\delta^3(\vec q)=\frac{\delta(q)\delta(\theta)}{2\pi q^2\sin(\theta)}$$
is wrong. The delta function is spherically symmetric, and thus has no $\theta$ dependence. Simply use $$\delta^{3}(\vec q)=\frac{\delta(q)}{2\pi q^2}$$ instead. Use the Jacobian ($q^2 \sin(\theta)$) when you switch coordinate systems (from Cartesian to spherical), and you should get the result.
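The $p^2/3$ result can also be checked numerically: replace $\delta^3(\vec q)$ by a narrow isotropic 3D Gaussian and average $(\vec p\cdot\vec q)^2/q^2$ over its samples; by symmetry the width drops out, and the angular average of $\cos^2\theta$ gives the $1/3$. A Monte Carlo sketch (pure Python; the sample count, width, and choice of $\vec p$ are made up):

```python
import math
import random

random.seed(0)

p = (0.0, 0.0, 2.0)                 # fixed vector, |p|^2 = 4
p2 = sum(c * c for c in p)

# Narrow isotropic Gaussian standing in for delta^3(q); its width is
# irrelevant because (p.q)^2 / q^2 depends only on the direction of q.
sigma = 1e-3
N = 200_000
total = 0.0
for _ in range(N):
    q = [random.gauss(0.0, sigma) for _ in range(3)]
    q2 = sum(c * c for c in q)
    pq = sum(a * b for a, b in zip(p, q))
    total += pq * pq / q2

print(total / N, p2 / 3.0)  # both ~1.33: the integral is p^2/3, not p^2/2
```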
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Has a body angular momentum and torque only in a circular path? In my book (Principles of Physics by Resnick, Halliday, Walker), the authors wrote the following, in different contexts:
For torque, the path need no longer be a circle and we must write the torque as a vector $\vec{\tau}$ that may have any direction. .... Note carefully that to have angular momentum about $O$ , the particle does not itself have to rotate around $O$ .
That's what they wrote, but I am really confused about why. In fact, I can't imagine torque and angular momentum without circular motion. Why did they say so? What is the reason? Please explain.
|
That's what they wrote. But I am really confused why they wrote so. In fact I can't imagine torque & angular momentum without circular motion. Why did they tell so? What is the cause?? Please explain.
A planet in an elliptical orbit has angular momentum alright, there is no problem.
The magnitude of the angular momentum does not change (it is conserved): the decrease of the lever arm is exactly compensated by the increase of the magnitude of the momentum.
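This can be checked numerically for a Kepler orbit (a sketch in units with $GM=1$; the initial speed is chosen below the circular value so the orbit is a genuine ellipse): the angular momentum about the centre stays constant even as the radius varies.

```python
import math

# Kepler orbit with GM = 1, started tangentially below circular speed,
# so the path is an ellipse.  The force is central, so the torque about
# the origin vanishes and L = x*vy - y*vx must stay constant.
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8
dt = 1e-4
L0 = x * vy - y * vx

rmin = rmax = 1.0
for _ in range(200_000):
    r = math.hypot(x, y)
    rmin, rmax = min(rmin, r), max(rmax, r)
    ax, ay = -x / r**3, -y / r**3   # inverse-square, centre-pointing force
    vx += ax * dt                   # semi-implicit Euler step
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(L0, x * vy - y * vx)  # unchanged, although the orbit is not a circle
print(rmin, rmax)           # the radius really does vary
```

The spread between `rmin` and `rmax` confirms the path is non-circular, while the angular momentum matches its initial value to high accuracy.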
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
What are the definition and examples of topological excitation? I read topological excitation in wiki, while it's too brief. What is the precise definition of topological excitation? And can give me some examples and explain why they are topological excitation? Are there some references which give explainations in detail ?
| There is no better definition than what Wikipedia offers - in general, a topological excitation is a (field) state, i.e. a localized quantity since fields depend on spacetime, whose integral is a topological invariant.
One prime example are Yang-Mills theories in 4D, where the integral $\int \mathrm{Tr}(F\wedge F)$, as essentially the second Chern class of the underlying principal bundle, is a topological invariant and tells you which instanton is the local vacuum belonging to the $F$ in question, since perturbation (being a small and smooth addition) about any given minimum of the action will not change a discrete (topological) quantity. One would then call the instanton the topological excitation, since its value of the action is only this topological quantity. For more on 4D instantons as vacua/mediation between vacua, see my answer here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
All geodesics are inextendable? I think the title is true, because geodesics has a tangent vector with a constant length parametrized by an affine parameter.
Probably, it is easier to think about timelike or spacelike geodesics. In this case, its affine parameter measures the length of the curve. It is difficult to image such kind of curve has a future endpoint (or past endpoint).
Is it correct?
| A geodesic $\gamma$ is extendible only if $\gamma$ is incomplete, in the realm of semi-Riemannian manifolds.
Suppose $\gamma$ is complete and extendible, say $\gamma:(-\infty,\infty)\to M$. Suppose, without loss of generality, that $p$ is a future endpoint. Then there is $t_0\in\mathbb{R}$ such that for all $t\ge t_0$, $\gamma(t)\in U_p$, where $U_p$ denotes an arbitrary neighborhood of $p$ in $M$. Let $U_p=\{x\in M:d(x,p)<\varepsilon\}$, where $d$ is the distance function defined in [O'Neill, Semi-Riemannian Geometry, Definition 15.5]. Then, as we assumed, $\gamma(t_0)\in U_p$ and $\gamma(t_0+2\varepsilon)\in U_p$, and hence,
$$ d(\gamma(t_0+2\varepsilon),\gamma(t_0))\le d(\gamma(t_0+2\varepsilon),p)+d(\gamma(t_0),p)<2\varepsilon,$$
which contradicts the fact that $\gamma$ is a geodesic.
As a result, $\gamma$ could not be both complete and extendible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Why do the high frequency waves have the most number of modes? While reading the Wikipedia page on the Ultraviolet Catastrophe, I came across how Rayleigh and Jeans applied the equipartition theorem. They said that each mode must have the same energy. Now as the number of modes is greatest at small wavelengths, i.e. large frequencies, the energy radiated will be infinite.
What is a mode, actually? And why do the large frequencies have the most modes? Please help me by explaining these.
I need a math-free explanation.
| Edited and simplified (though at this point there are other very good answers):
Consider a cube of edge length L in which radiation is being reflected and re-reflected off its walls. Standing waves occur for radiation of a wavelength λ only if an integral number of half-wave cycles fits into an interval in the cube. In other words, electromagnetic standing waves in a cavity at equilibrium with its surroundings cannot take just any shape. The solution to the wave equation must give zero amplitude at the walls, since a non-zero value would dissipate energy and violate our supposition of equilibrium.
The basic solutions are sinusoidal standing waves, that is, they have peaks and valleys. The peaks become valleys and the valleys become peaks as time goes on, but the "edges" (nodes) between peaks and valleys remain at fixed spatial positions. Also, at the walls there is always a node.
More energetic waves have higher frequencies, that is, more peaks and valleys along a given length. The energy actually depends on the total number of peaks when you consider the three perpendicular directions in space, because the wave is three-dimensional. You can have 100 peaks in the vertical direction, 10 in the horizontal and 25 in depth.
The larger the numbers you choose, the more combinations of three numbers there are that, when squared and added, result in a given specific total. A single mode consists of one specific wave with three specific numbers (and thus one combined total). There are many ways to obtain different waves of the same energy by combining three numbers into the same large total.
Thus, a larger total means more energy, and more potential combinations of smaller numbers that can combine into that single total. The larger the energy, the larger the number of modes (remember, one mode is a specific combination of three numbers).
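The counting argument above can be sketched numerically (a small script where the cutoff $N$ stands in for a given combined total):

```python
# Count the standing-wave modes (n_x, n_y, n_z), each n_i >= 1, whose
# combined number n_x^2 + n_y^2 + n_z^2 does not exceed N^2.  The
# count grows roughly as N^3: high frequencies admit far more modes.
def count_modes(N):
    return sum(1
               for nx in range(1, N + 1)
               for ny in range(1, N + 1)
               for nz in range(1, N + 1)
               if nx * nx + ny * ny + nz * nz <= N * N)

for N in [5, 10, 20, 40]:
    print(N, count_modes(N))
```

Doubling the cutoff roughly multiplies the mode count by eight, which is the cubic growth behind the Rayleigh–Jeans divergence at high frequency.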
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/148732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Ball lightning: How are they formed? Ball lightning: How are they formed?
According to some Chinese researchers:
These strange balls of electricity are seen during intense thunderstorms as glowing orbs. They can be yellow, white, red, orange, purple or green and accounts report them passing through glass windows without leaving a hole.
How can the shape of lightning be a sphere? How does it form?
| Ball lightning appears as glowing orbs that seem to occur during thunderstorms, usually following a lightning strike. They can be white, yellow, orange, red or blue in color.
There's no scientific explanation for balls of lightning, although there are several proposed theories.
The most popular current theory, proposed by John Abrahamson at the University of Canterbury in Christchurch, New Zealand, suggests that ball lightning is the result of a chemical reaction of silicon particles burning in the air.
When lightning strikes the ground, silicon that occurs naturally in soil combines with oxygen and carbon and turns into pure silicon vapor. As the vapor cools, the silicon condenses into a fine dust. The particles in this fine dust are attracted to each other by the electrical charge created by the lightning strike, binding together into a ball.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Strong CP Problem So, as far as I know, the Strong CP Problem in QCD results from the theta angle term in the action: $i\theta\int_X F_\nabla\wedge F_\nabla$, where $\nabla$ is the gauge connection and $X$ is a manifold on which the theory is defined. This term obviously breaks CP symmetry for a non-zero choice of theta angle. Correct me if I am wrong.
At any rate, experimental evidence has shown CP symmetry to be a consistent aspect of QCD, and the Strong CP Problem is essentially to discover CP violation or to prove CP symmetry in the lagrangian. Now, I am wondering about the particulars of various solutions to the problem. In particular, is it necessary to fabricate a new particle such as the axion, or are there (hypothetically) simpler and more easily verifiable ways of solving the problem?
Also, how important is the problem? In other words, would making experiment and theory consistent warrant a Nobel Prize? Or is it simply an irritating discrepancy that is not fundamentally important to our understanding of the Standard Model?
| "Would making experiment and theory consistent warrant a Nobel Prize? Or is it simply an irritating discrepancy that is not fundamentally important to our understanding of the Standard Model?"
As far as I can tell:
It depends on what the experimental results will be :) If strong CP is found to be broken it is quite important, since CP violation is considered a cornerstone of understanding the matter-antimatter asymmetry.
If it is preserved and no further explanation is found to work, then it's an intriguing feature since there's a term in the SM Lagrangian which simply doesn't manifest.
If axions are found, it's obviously the most interesting case.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Best way to heat something in aluminum foil? Let's say we have a wet piece of paper, wrapped in aluminum foil, that we need to heat up in the fastest and most energy efficient way possible (no flamethrower).
What would that be?
Details regarding the methods would be highly appreciated.
| Not sure how hot you want it, but a hairdryer or heat gun would be quick. It is not very energy efficient while blowing, but it turns on and off quickly, so it might be efficient overall by not having to be on for long.
edit - ok for 250 degrees C - heat gun probably not hot enough.
I would put an oven on at close to max temp and put the foil in, but even then that may not be hot enough for you....
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Why does the light reflecting off of ocean water sometimes appear 'smoother'? Looking out the window at some water in the Harbour - I noticed that some parts of the water appear 'smoother' than others.
My question is: Why does the light reflecting off of ocean water sometimes appear 'smoother'?
| Most likely what you are seeing is a thin film of oil floating on the surface of the water that comes either from natural underwater sources, runoff from the shore, or from ships. The oil breaks the surface tension of the water and reduces traction forces from the wind - thus ripple amplitudes are reduced or entirely diminished.
This phenomenon has been known for ages, going back at least to early Greek civilization, which learned to quell the surface of the sea by throwing (I suppose olive) oil overboard.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Must Matter Particles Have A Hard Edge? It's my understanding that electrons are particles, and it's also my understanding that their location while orbiting an atom cannot be determined precisely and must be determined by statistics and probability, almost like electrons can be in multiple places at the same time. That made me think, hmm, could electrons exist more as smears instead of hard-edged particles? A smear can be in more than one place at a time. The only difference is a smear doesn't have a hard edge like a spherical particle would. It would sort of "blend" matter and space. I'm also wondering if perhaps smears would demonstrate wavelike properties that hard-edged particles can't. Is there any knowledge out there that states that matter particles either must have a hard edge or can't have a hard edge?
| A particle does not need to have a hard edge. It can for example, be a density function, which sort of fades to zero.
One might note that waves can intersect each other and come out as if the other wave was not there.
Particles with hard edges are more an artefact of our minds than 'what's really there', until we get some real evidence otherwise. Note however, that the scattering rules would still apply if the particle is nebulous, because instead of hitting a solid wall and bouncing off, one essentially rolls up some steep hill and rolls down at a different angle.
It's just that tennis balls are easier to understand, but they might not be the correct model at femtometre-scale physics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Cooling a satellite Satellites are isolated systems; the only way for them to transfer body heat to outer space is thermal radiation. There are solar panels, so there is a continuous energy flow into the system, and there is no airflow to carry the accumulated heat away to outer space easily. What kind of cooling systems are used in satellites?
| Typically, satellites use radiative cooling to maintain thermal equilibrium at a desired temperature.
How they do this depends greatly on the specifics of the satellite's orbit around Earth. For instance, sun-synchronous satellites typically always have one side in sunlight and one side in darkness. These are particularly easy to keep cool because you can apply a white coating to the Sunward side and a black coating to the dark side. The white coating has a low value for radiative absorption while the black coating has a high value for radiative emission. This means it can absorb as little light as possible while emitting more thermal radiation.
Different types of satellites have different strategies for cooling, but in general, cooling is achieved by applying functional coatings to the spacecraft that lower or raise the absorptivity/emissivity/reflectivity of its different surfaces. While designing a satellite, the space engineers perform thermal analyses and lots of calculations to determine which surfaces need to have what absorption values in order for the satellite to maintain the desired temperature.
It's hard for me to be more specific than this. But this is the reason any good space engineer knows how to find a coating with the desired absorptivity/emissivity values within a day or two.
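The kind of thermal calculation mentioned above can be sketched in miniature. The following is a toy flat-plate radiative-equilibrium model with made-up coating values — not any real spacecraft design procedure — just to show how absorptivity and emissivity set the equilibrium temperature:

```python
# Rough radiative-equilibrium sketch for a sunlit flat plate that
# absorbs sunlight on one face and emits thermally from both faces:
#     alpha * S * A  =  eps * sigma * T^4 * (2A)   =>   solve for T
sigma = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
S = 1361.0         # W m^-2, solar constant near Earth

def plate_temperature(alpha, eps):
    return (alpha * S / (2 * eps * sigma)) ** 0.25

print(plate_temperature(0.2, 0.9))   # low-absorptivity (white-ish) coating
print(plate_temperature(0.9, 0.9))   # high-absorptivity (black) coating
```

Lowering the ratio $\alpha/\varepsilon$ of a surface directly lowers its equilibrium temperature, which is why coating selection is the primary passive cooling tool.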
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/149832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 4,
"answer_id": 1
} |
Width of the decay of Higgs boson into dimuon According to Standard model, the partial width of the decay of Higgs into dimuon (up to tree level) is:
$$\Gamma\approx\frac{m_H}{8\pi} \left(\frac{m_{\mu}}{\nu}\right)^2$$
with the Higgs mass $m_H=125\ \mathrm{GeV}$, muon mass $m_{\mu}=0.106\ \mathrm{GeV}$, and the vacuum expectation value of the Higgs field $\nu=246\ \mathrm{GeV}$, the decay width is apparently extremely small. Then why is the width of the resonance peak in the plot from ATLAS so wide? If it's due to experimental errors, then is there any meaning in comparing it with the theoretical result? I'm having trouble understanding this. Could somebody please explain it for me?
| It is mainly measurement and detector errors that make up the width in the plots you show. The Monte Carlo simulates the detector resolution and folds in the theoretical values when it says that the width agrees. The real width is expected to be much smaller.
In this reference we see that the real width is only given as a bound by the experiments:
the CMS experiment has gotten the closest yet to pinning it down, constraining the parameter to < 17 MeV with 95% confidence. This result is some two orders of magnitude better than previous limits: stronger evidence that this boson looks like the Standard Model Higgs boson.
.....
For a Higgs mass of ~125 GeV, the Standard Model predicts a Higgs width of ~4 MeV. Quite a low width, especially when compared to its compatriots, the W and Z bosons (with ~2 GeV and ~ 2.5 GeV widths, respectively). Before this new result, the best limit on Higgs width had it under 3.4 GeV, based on direct measurements.
So you were correct to be puzzled. The partial widths add up to the total width; that is how the width of the invisible neutrino decays of the Z was found, by doing the sum and subtracting from the total. Leptonic machines have much better accuracies than hadronic ones. That is why the next collider will be a leptonic one: to study the Higgs accurately and nail down discrepancies with the Standard Model. Hadronic machines are just discovery machines.
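To see just how tiny the tree-level width in the question's formula is compared with detector resolution, one can plug in the quoted numbers (a quick sketch):

```python
import math

# Tree-level H -> mu+ mu- width from the question's formula,
# Gamma = (m_H / 8 pi) * (m_mu / v)^2, in GeV.
m_H, m_mu, v = 125.0, 0.106, 246.0   # GeV
gamma = m_H / (8 * math.pi) * (m_mu / v) ** 2
print(gamma * 1e6)  # in keV: roughly 1 keV
```

A width of order a keV is many orders of magnitude below the GeV-scale dimuon mass resolution of the detectors, so the visible peak width is entirely instrumental.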
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Klein-Gordon propagator integral in the light-like case In Kerson Huang's Quantum Field Theory From Operators to Path Integrals (Amazon link), pages 28 and 29, he calculates the propagator in the following cases: time-like, space-like and light-like. First he integrates the time-component of $k$, and arrives at this expression:
$$
\Delta_F(x)=\frac{i}{4\pi^2}\int_0^\infty dk\,\frac{k^2}{\omega_k}\frac{\sin kr}{kr}e^{i\omega_k|t|}
$$
Then he gets the Bessel function in the time-like and the space-like case:
By Lorentz invariance $\Delta_F(x)$ can only depend on
$$s\equiv x^2=t^2-\mathbf r^2\tag{2.83}$$
For $s>0$, we can put $\mathbf r=0$ to obtain the representation
$$
\Delta_F(x)=\frac{i}{4\pi^2}\int_0^\infty dk\frac{k^2}{\omega_k}e^{i\omega_k\sqrt{s}}=\frac{m}{8\pi\sqrt{s}}H_1^{(1)}(m\sqrt{s})\tag{2.84}
$$
For $s\le 0$, we put $t=0$ to obtain
$$
\Delta_F(x)=\frac{i}{4\pi^2}\int_0^\infty dk\frac{k^2}{\omega_k}\frac{\sin k\sqrt{-s}}{k\sqrt{-s}}=-\frac{im}{4\pi^2\sqrt{s}}K_1(m\sqrt{-s})\tag{2.85}
$$
At last, he gets the result in light-like case: a delta function:
where $H_1^{(1)}$ and $K_1$ are Bessel functions. In the time-like region $s>0$ the function describes an outgoing wave for large $s$. This corresponds to the $i\eta$ prescription in (2.80). The $-i\eta$ prescription would have yielded an incoming wave. In the space-like region $s<0$ it damps exponentially for large $|s|$. On the light cone $s=0$ there is a delta function singularity not covered by the above formulas:
$$\lim_{x^2\to0}\Delta_F(x)=-\frac{1}{4\pi}\delta(x^2)$$
Can anyone explain how he obtained the delta function? I don't understand the limit $s=0$ because the Bessel function is divergent there.
| I'll only consider the case $s \leq 0$. Consider your original integral;
$$
\Delta_{F}(s) \ = \ \frac{i}{4 \pi^2 r} \int_0^\infty dk \frac{k\ e^{i |t|\sqrt{k^2 + m^2}}}{\sqrt{k^2 + m^2 }} \sin(rk)
$$
If we're careful, we notice that this integral is quite naughty; it doesn't converge for $\mathrm{Im}(|t|)=0$. To see why this is the case, change the integration variable to $E = \sqrt{k^2 + m^2}$:
$$
\Delta_{F}(s) \ = \ \frac{i}{4 \pi^2 \sqrt{-s}} \int_m^\infty dE \sin\left(r\sqrt{E^2 - m^2}\right) e^{ i |t| E }
$$
We just have an integrand that wildly oscillates as we take $E$ further and further to the right.
Now that we agree that the integral diverges; consider the following integral which I've stolen out of Gradshteyn and Rhyzik:
$$
\int_0^\infty dz\ \frac{z e^{ - a \sqrt{ z^2 + c^2 } } }{ \sqrt{ z^2 + c^2 } } \sin( b z ) \ = \ \frac{ b c }{ \sqrt{ a^2 + b^2 } } K_{1}\left( c \sqrt{ a^2 + b^2 } \right)
$$
This is integral 3.914.9 in G+R for those interested. It converges only for $\mathrm{Re}(a) > 0$ and $\mathrm{Re}(c) > 0$. Meaning if we set $a = i|t|$, $b = r$ and $c = m$, we can't use the above formula.
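For parameters that do satisfy the convergence conditions, the G+R formula itself can be verified numerically (a sketch assuming SciPy is available, with arbitrary sample values $a=b=c=1$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

# Numerical check of G+R integral 3.914.9 for a = b = c = 1,
# which satisfies Re(a) > 0 and Re(c) > 0.
a, b, c = 1.0, 1.0, 1.0
integrand = lambda z: (z * np.exp(-a * np.sqrt(z**2 + c**2))
                       / np.sqrt(z**2 + c**2) * np.sin(b * z))
numeric, _ = quad(integrand, 0, np.inf)
closed = b * c / np.sqrt(a**2 + b**2) * k1(c * np.sqrt(a**2 + b**2))
print(numeric, closed)  # the two values agree
```

The agreement confirms the closed form we are about to analytically continue via the $\epsilon$-regulator.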
The way we remedy the situation, is we pick a tiny $0 < \epsilon \ll 1$ and fix $a = - \epsilon + i|t|$ (as well as $b= r$ and $c=m$). We then have an $\epsilon$-regulated $\Delta_{F}^{\epsilon}(x)$, where we understand that $\lim\limits_{\epsilon \to 0^+} \Delta_{F}^{\epsilon}(x) = \Delta_{F}(x)$. We get:
$$
\Delta_{F}^{\epsilon}(x) \ = \ \frac{i}{4 \pi^2 \sqrt{-s}} \int_m^\infty dE \sin\left(r\sqrt{E^2 - m^2}\right) e^{ - ( \epsilon - i |t| ) E }
$$
Using the above G+R integral we arrive at:
$$
\Delta_{F}^{\epsilon}(x) \ = \ \frac{i}{4\pi^2} \frac{ m }{ \sqrt{ ( i t + \epsilon )^2 + r^2 } } K_{1}\left( m \sqrt{ (it + \epsilon)^2 + r^2 } \right) \ = \ \frac{i}{4\pi^2} \frac{ m }{ \sqrt{ - t^2 + r^2 + i \epsilon } } K_{1}\left( m \sqrt{ - t^2 + r^2 + i \epsilon } \right)
$$
In the second equality, we've completed the squares, and noted that $\epsilon$ times anything will look like $\epsilon$. We see that we have the same function you gave, except for the $i\epsilon$'s.
Note that $K_1(z) \approx \frac{1}{z}$ near $z=0$. This means that near the light-cone $t^2 - r^2 \to 0$, we can write the above as;
$$
\Delta_{F}^{\epsilon}(x) \ \approx \ \frac{i}{4 \pi^2} \frac{1}{-t^2 + r^2 + i \epsilon}
$$
Finally, we have the principal-value prescription (see page 113 of Weinberg's 'Quantum Theory of Fields: Volume 1'):
$$
\lim_{\epsilon \to 0^+} \frac{1}{z \pm i \epsilon} = \mathcal{P}\frac{1}{z} \mp i \pi \delta(z)
$$
where $\mathcal{P}$ means principal value. Hence, near the light-cone;
$$
\Delta_{F}(x) \ \approx \ \lim_{\epsilon \to 0^+} \frac{i}{4 \pi^2} \frac{1}{-t^2 + r^2 + i \epsilon} \ = \ \frac{i}{4\pi^2} \mathcal{P}\frac{1}{-t^2 + r^2} + \frac{1}{4\pi} \delta(-t^2 + r^2)
$$
So we have extracted a delta function in the limit $\epsilon \to 0^+$. This is valid not only at the light-cone, but everywhere else, so we can now write:
$$
\Delta_{F}(x) \ = \ \frac{i}{4\pi^2} \frac{ m }{ \sqrt{ - t^2 + r^2 } } K_{1}\left( m \sqrt{ - t^2 + r^2 } \right) + \frac{1}{4\pi} \delta(-t^2 + r^2)
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does the CMB signal get weaker over time? If the universe is infinite or flat, then this isn't true (I guess).
But if the universe is finite, then as it expands, wouldn't the CMB signal get weaker at any given point over time?
| If we define a scale factor $a$ for the universe (could be the distance between two galaxies), then this scale factor will change with time. This is also true in flat or infinite universes, so long as the Hubble parameter is $>0$ (i.e. the universe is expanding).
The energy density contained in the cosmic microwave background will scale as the energy of the CMB photons divided by the volume they occupy.
$$ u_{\nu} \propto \frac{h\nu}{a(t)^3},$$
where of course the volume of a chunk of space increases as $a^3$. This assumes that the number of photons is a conserved quantity.
At the same time we know that photons are being redshfted - their wavelengths are stretching in exactly the same way as $a$. That is $\lambda \propto a$. But for photons $\nu = c/\lambda$, so $\nu \propto a^{-1}$.
Overall then, we see that the energy density of the CMB decreases as $a^{-4}$. Thus in an expanding universe, as $a$ increases, the energy density of the CMB decreases a lot and the "CMB signal" that you refer to will indeed get weaker.
Another effect is that the temperature of the CMB will decrease. It is currently at about 2.7 K, but for a blackbody spectrum, the temperature follows Wien's law such that $T \propto \lambda_{\rm peak}^{-1}$, where $\lambda_{\rm peak}$ is the peak wavelength of the blackbody spectrum. But we have already seen that wavelength is proportional to the scale factor, so as $a$ increases, $\lambda_{\rm peak}$ increases and $T$ decreases as $a^{-1}$. See also CMBR temperature over time?
The relevant timescale on which this occurs can be estimated as follows:
$$ \frac{du_{\nu}}{dt} \propto -4a^{-5} \frac{da}{dt}$$
$$ \frac{d u_{\nu}/dt}{u_{\nu}} = -4 \frac{da/dt}{a}$$
But the ratio of $da/dt$ to $a$ is the Hubble parameter, currently thought to be about 70 km/s per Mpc (or about $2.3\times10^{-18}$ s$^{-1}$).
$$\frac{d u_{\nu}/dt}{u_{\nu}} = -4 H_0$$
and the timescale to measure a fractional change $\Delta u_{\nu}/u_{\nu}$ is therefore
$$ \Delta t \simeq \frac{-1}{4 H_0} \frac{\Delta u_{\nu}}{u_{\nu}}$$
So a 1 percent change in the energy density of the CMB will occur over 35 million years.
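Plugging in the numbers above (a quick sketch):

```python
# Time for a 1% fractional drop in the CMB energy density,
# Delta t ~ (1 / 4 H_0) * (Delta u / u), with H_0 from the answer.
H0 = 2.3e-18                  # s^-1, roughly 70 km/s per Mpc
seconds_per_year = 3.156e7
dt_myr = 0.01 / (4 * H0) / seconds_per_year / 1e6
print(dt_myr)  # -> ~34 million years
```

The fading is real but utterly negligible on human timescales.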
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between electromagnet and solenoid? What is the difference between electromagnet and solenoid? Both these terms seem as the same thing to me. The only difference that I can find seems to be that an electromagnet contains a soft iron core. I'm sure there must be some other difference between the two and I hope someone can clear this matter up for me.
| An electromagnet is just that: an (insulated) wire wrapped around an iron core that produces a magnetic field when current is passed through it. A solenoid, in the sense used here, uses an electromagnet to perform a mechanical function.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
Non-geodesic circular orbit? From N. Straumann, General Relativity
Exercise 4.9: Calculate the radial acceleration for a non-geodesic circular orbit in the Schwarzschild spacetime. Show that this becomes positive for $r>3GM$. This counter-intuitive result has led to lots of discussions.
This is one of those problems where I have absolutely no clue what to do. Since it says non-geodesic, I can't use any of the usual equations. I don't know what equation to solve. Maybe I solve $\nabla_{\dot\gamma}\dot\gamma=f$ with $f$ some force that makes $\gamma$ non-geodesic. But I don't know where to go from there if that's the way to do it.
Also any specific links to discussions?
Any help would be greatly appreciated.
EDIT: So I tried solving $\nabla_u u=f$ with the constraints $\theta=\pi/2$, $u^\theta=0$ and $u^r=0$. lionelbrits has explained I must also add $\dot u^r=0$ to my list. This all leads to
$$(r_S-2Ar)(u^\varphi)^2+\frac{r_S}{r^2}=f^r$$
($A=1-r_S/r$, notation is standard Schwarzschild) The problem with this is that the $u^\varphi$ term is negative for $r>3m$. So somewhere a sign got screwed up and for the life of me I don't know where it is. A decent documentation of my work: http://www.texpaste.com/n/a6upfhqo, http://www.texpaste.com/n/dugoxg4a.
| The equivalence principle tells us that we can evaluate $\nabla_u u$ in a co-moving reference frame and that for geodesics we should find no acceleration (to the occupants of an elevator in free-fall, the contents seem to be experiencing no acceleration). Therefore, if we evaluate this when we are not along a geodesic (elevator sitting on earth), we find that it is not zero. Because it is a vector, if it is non-zero in one frame, it must be non-zero in another. In other words, yes, $f^r$ is what you have to calculate. The ingredient that you are missing is that $r=\mathrm{const}$ for a circular orbit implies that $\dot{u}^r = 0$. This is not a local thing, it is simply because you are forcing the orbit to be circular.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What limits the doping concentration in a semiconductor? Si and Ge can be blended in any ratio, $\mathrm{Si}_x\mathrm{Ge}_{1-x},\ 0\le x\le 1$. So can $\mathrm{In}_x\mathrm{Ga}_{1-x}$.
So what exactly causes doping impurities inside Si/Ge/etc. to saturate at $\sim 10^{19}\ \mathrm{cm^{-3}}$?
| These are actually two questions in one. On one hand, certain materials are miscible, as for $\mathrm{Si}_{1-x}\mathrm{Ge}_x$ or likewise for $\mathrm{In}_x\mathrm{Ga}_{1-x}\mathrm{As}$. Depending on the phase diagram, some materials can be mixed, while some are not soluble and would segregate, like Steve B mentioned before. Some materials can be mixed despite a miscibility gap if the growth process does not take place at thermal equilibrium. The material is then frozen in its mixed form and, depending on the energy barriers, cannot segregate at room temperature.
The second question is the doping limit, where doping means the introduction of "foreign" species. In fact what counts is not the dopant concentration but the electron or hole concentration. These saturate, depending on the material somewhere in the range between 3e18 and 2e19/cm³. If you put more dopants in, you will not necessarily get more free electrons. This can be due to self-compensation effects: Si in GaAs can go onto a Ga site and create an electron, which is usually favored, but at high concentrations, some Si atoms may also go onto As sites and therefore create holes. For P doping of Si, there are no different crystal sites, but saturation effects could occur due to P clustering, which also can lead to different behavior.
Doping behavior sometimes is not very intuitive to understand. Si in GaAs prefers to sit on the Ga site, although Si would be "larger" than Ga, but C prefers to go onto As sites, although C is "smaller" than Si and could therefore comfortably occupy a Ga site.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Why is Graphene So Strong? There has been a lot of news about Graphene since its discovery in 2004. And as we are all told it is a revolutionary material which is very strong, conductive and transparent.
But what is it about the structure of Graphene which makes it so strong?
| What makes graphene so strong is the electrostatic forces resulting from delocalized electrons flowing through positively charged carbon atoms. This difference in charge creates a strong electrostatic attraction that holds graphene together. This phenomenon also explains why it is such a strong conductor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Doesn't a box holding a vacuum weigh the same as a box full of air? This was recently brought up, and I haven't been able to conclude a solid answer.
Let's say we have two identical boxes (A and B) on Earth, both capable of holding a vacuum and withstanding 1 atm outside acting upon them.
A is holding a vacuum, while B is filled with air (at 1 atm). Shouldn't they weigh the same as measured by a scale?
Current thought process
The following thought experiment suggests they'd have the same weight, but I haven't formulaically shown this — and everyone has disagreed so far.
Take a box like B (so it's full of 1 atm air) and place it on a scale. Here's a cross section:
+------------+
| |
| |
| | <-- B box
| |
+------------+
***********************
| | <-- scale
Now, taking note of the scale readings, start gradually pushing down the top "side" (rectangle/square) of the box (assume the air can somehow escape out of the box as we push down)
| |
+------------+
| |
| |
| |
+------------+
***********************
| |
Then
| |
| |
+------------+
| |
| |
+------------+
***********************
| |
etc., until the top side is touching the bottom of the box (so the box no longer has any air between the top and bottom sides):
| |
| |
| |
| |
+------------+
+------------+
***********************
| |
It seems to me that:
1) pushing the top of the box down wouldn't change the weight measured by the scale.
2) the state above (where the top touches the bottom) is equivalent to having a box like A (just a box holding a vacuum).
Which is how I arrived to my conclusions that they should weigh the same.
What am I missing, if anything? What's a simple-ish way to model this?
| The way I would think about this just for a quick answer:
A balloon filled with air gradually sinks. Now if you took the same balloon, made it rigid, and sucked all the air out of it while it kept the same volume, it would float straight up. So I would say that the balloon filled with air weighs more.
Same would go for boxes.
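A rough numerical sketch of the difference (box volume and air density are assumed round numbers): both boxes displace the same outside air, so buoyancy is identical, and the scale difference is just the weight of the air enclosed in box B.

```python
# Sketch (assumed numbers): compare scale readings for two rigid 1 m^3 boxes,
# one evacuated (A) and one holding air at 1 atm (B). Buoyancy is the same
# for both, so B is heavier by the mass of the air it encloses.
rho_air = 1.2      # kg/m^3, density of air at roughly room conditions (assumed)
V = 1.0            # m^3, box volume (assumed)
g = 9.81           # m/s^2

enclosed_air_mass = rho_air * V          # extra mass inside box B
extra_weight = enclosed_air_mass * g     # extra scale reading for B, in newtons

print(f"Box B reads {extra_weight:.2f} N (~{enclosed_air_mass:.2f} kg) more than box A")
```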
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 18,
"answer_id": 12
} |
What is the origin of CMB fluctuations? I have read somewhere that CMB (cosmic microwave background radiation) fluctuations in temperature are linked to mass distribution fluctuations in the early universe (at ~350000 years after Big Bang, which is of course when the cosmic radiation was emitted), and that is used to explain the formation of large structures (galaxies, clusters of galaxies..). Why is that so?
| The CMB is a snapshot of the state of the universe at the moment when the universe cooled enough to allow protons to capture electrons to form atoms, thereby allowing light to travel unimpeded for the first time - prior to this the universe was a remarkably uniform distribution of plasma. But there were minute variations, which appear to be a Gaussian random field (http://en.wikipedia.org/wiki/Gaussian_random_field). The hotter spots reveal regions of slightly higher density, so it's theorized that these became centers of gravity for stars/galaxies to coalesce. This gives us a starting point to map the evolution of the universe over time to the present era.
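The "Gaussian random field" mentioned above is easy to illustrate numerically: filter white noise in Fourier space with a chosen power spectrum. This is a minimal sketch with an assumed power-law spectrum $P(k)\sim k^{-2}$ (the exponent is illustrative, not the measured CMB spectrum).

```python
import numpy as np

# Generate a 2-D Gaussian random field by coloring white noise in Fourier space.
rng = np.random.default_rng(0)
n = 64
kx = np.fft.fftfreq(n)
ky = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
k[0, 0] = 1.0                  # avoid division by zero; mean mode zeroed below

power = k ** -2.0              # assumed power-law spectrum
power[0, 0] = 0.0              # no mean offset

noise = np.fft.fft2(rng.standard_normal((n, n)))
field = np.fft.ifft2(noise * np.sqrt(power)).real   # "hotter" and "colder" spots
```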
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/150976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Induced EMF of a rectangular loop should be zero? Considering the shape of a rectangular loop in a changing magnetic field:
Would the induced $\epsilon$ be zero? A rectangular loop is a combination of wires in series forming that shape. Each wire in this loop induces an $\epsilon$ that opposes the others', so they should all cancel out?
Here is the diagram adjusted with polarities:
EDIT:
Examples of induced $\epsilon$ canceling out:
A -
B -
Where there are two separate conductors that are wired in series together, each in the same magnetic field, that experience the same flux change over the same time period.
| I think your problem comes from the fact that a 'wire' is an idealized conductor with zero resistance. A loop formed by such wires does indeed not allow for any voltage drops to appear.
Any attempt to induce a voltage (such as changing the magnetic flux) would immediately create an induced current, which in turn would cancel any change in flux.
Conclusion: In an idealized loop such as you have presented, there would indeed not appear any emf, simply because it is not possible to change the magnetic flux through it. This is what we observe in superconductors.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/151054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Is Parity really violated? (Even though neutrinos are massive) The weak force couples only to left-chiral fields, which is expressed mathematically by a chiral projection operator $P_L = \frac{1-\gamma_5}{2}$ in the corresponding coupling terms in the Lagrangian.
This curious fact of nature is commonly called parity violation and I'm wondering why? Does this name make sense from a modern point of view?
My question is based on the observation:
A Dirac spinor (in the chiral representation) of pure chirality transforms under parity transformations:
$$ \Psi_L = P_L \Psi = \begin{pmatrix} \chi_L \\ 0 \end{pmatrix} \rightarrow \Psi_L^P = \begin{pmatrix} 0\\ \chi_L \end{pmatrix} \neq \Psi_R$$
Chirality is a Lorentz invariant quantity and a left-chiral particle is not transformed into a right-chiral particle by parity transformations.(The transformed object lives in a different representation of the Lorentz group, where the lower Weyl spinor denotes the left-chiral part.)
I understand where the name comes from historically (see the last paragraph), but wouldn't "chirality violation" make much more sense from a modern point of view?
Some background:
Fermions are described by Dirac spinors, transforming according to the $(\frac{1}{2},0) \oplus (0,\frac{1}{2})$ representation of the (double cover of the) Lorentz group. Weyl spinors $\chi_L$ transforming according to the $(\frac{1}{2},0) $ representation are called left-chiral and those transforming according to the $(0,\frac{1}{2})$ representation are called right-chiral $\xi_R$. A Dirac spinor is (in the chiral representation)
$$ \Psi = \begin{pmatrix} \chi_L \\ \xi_R \end{pmatrix}$$
The effect of a parity transformation is
$$ (\frac{1}{2},0) \underbrace{\leftrightarrow}_P (0,\frac{1}{2}),$$
which means the two irreps of the Lorentz group are exchanged. (This can be seen for example by acting with a parity transformation on the generators of the Lorentz group). That means a parity transformed Dirac spinor, transforms according to the $(0,\frac{1}{2}) \oplus (\frac{1}{2},0) $ representation, which means we have
$$ \Psi = \begin{pmatrix} \chi_L \\ \xi_R \end{pmatrix} \rightarrow \Psi^P = \begin{pmatrix} \xi_R \\ \chi_L \end{pmatrix} $$
Now we can examine the effect of a parity transformation on a state with pure chirality:
$$ \Psi_L = P_L \Psi = \begin{pmatrix} \chi_L \\ 0 \end{pmatrix} \rightarrow \Psi_L^P = \begin{pmatrix} 0\\ \chi_L \end{pmatrix}$$
This means we still have a left-chiral spinor, only written differently, after a parity transformation and not a right-chiral. Chirality is a Lorentz invariant quantity. Nevertheless, the fact that only left-chiral particles interact weakly is commonly called parity violation and I'm wondering if this is still a sensible name or only of historic significance?
Short remark on history
I know that historically neutrinos were assumed to be massless, and for massless particles helicity and chirality are the same. A parity transformation transforms a left-handed particle into a right-handed particle. In the famous Wu experiment, only left-handed neutrinos could be observed, which is where the name parity violation comes from. But does this name make sense today, now that we know that neutrinos have mass and therefore chirality $\neq$ helicity?
| I guess it is because, first of all, you change the sign of $\vec x$ to $-\vec x$ in physical space (this is the parity transformation in a nutshell). All the peculiar algebra concerning left- and right-chiral fields comes from the $J = 1/2$ representation of the Lorentz group, so the transformation rules are defined as representatives of the parity transformation of physical space.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/151352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 4
} |
Is time travel impossible because it implies total energy in the universe is non-constant over time? I have always argued with my friends regarding time travel that it is impossible. My argument has been that it would break the principle that all the energy in the universe is constant, since when one travels to a different time, the universe at that time requires extra energy to accommodate the extra person. Similarly, the total energy of the universe at that person's original time will be less.
I would like to know whether I'm thinking correctly? Has anybody ever experimented or proved anything in similar veins?
| Time travel is not impossible because of conservation of energy, time travel is not impossible at all. Entropy does not allow you to go backwards in time, but forwards does not present any problems.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/151428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 8,
"answer_id": 7
} |
What is the cause for the validity of Newton's third law? What, specifically, causes newton's third law? For instance, if I push on a wall, why is it that I experience a force in the opposite direction?
I seem to vaguely understand that it has something to do with electronic repulsion or molecular compression (maybe that's completely wrong, I don't know). As a related question, what would happen if two objects that were infinitely immovable (at the molecular level, they cannot be broken or compressed) collided with each other?
| We start by noting that force is the rate of change of momentum. Let's suppose you and I are floating in space (so we are the only two interacting bodies) and you're pushing me so I feel a force $F_{me}$, then:
$$ F_{me} = \frac{dp_{me}}{dt} $$
where $p_{me}$ is my momentum.
But we know that momentum is conserved, so since you are the only thing interacting with me your momentum, $p_{you}$, must be changing in the opposite sense to balance out the changes in my momentum. In other words:
$$ \frac{dp_{you}}{dt} = - \frac{dp_{me}}{dt} $$
And since force is rate of change of momentum that means there is a force on you:
$$ F_{you} = \frac{dp_{you}}{dt} = - \frac{dp_{me}}{dt} = -F_{me} $$
So the two forces are equal and opposite just as Newton's third law tells us.
The details of exactly how the forces are transmitted depend on exactly how the two bodies are interacting, but whatever the interaction the changes in momentum must be equal and opposite, and therefore the forces are equal and opposite.
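A minimal numerical sketch of this argument (the masses and force profile are made up): integrate equal-and-opposite forces on the two bodies and watch the total momentum stay constant.

```python
# Track two bodies exchanging momentum through an interaction force F(t).
# Because momentum is conserved, the momentum one body gains per step is
# exactly the momentum the other loses, which is Newton's third law.
dt = 1e-3
p_you, p_me = 0.0, 0.0            # initial momenta, kg m/s

for step in range(1000):          # 1 second of pushing
    F_on_me = 5.0                 # N, you push me (assumed constant)
    F_on_you = -F_on_me           # equal and opposite, from momentum conservation
    p_me += F_on_me * dt
    p_you += F_on_you * dt

total = p_you + p_me
print(total)                      # stays 0: one body's gain is the other's loss
```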
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/151539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does gravity affect water permeability? Suppose I have an approximately rectangular prism composed entirely of folded paper. If I place 600 lbs (250 kg) on top of these rectangular sheets of paper, the paper should compress. How does this affect water permeability across the sheet? Is the rate at which the fluid flows across the membrane slowed down?
| You elaborate in a comment:
We assume capillary action of flow of fluid, as any type of pressure may alter composition of the paper. The paper is similar to paper towels, except the sheets are folded.
The height $h$ of a column in a capillary with radius $r$ is given by
$$
h = \frac{2\gamma \cos\theta}{\rho g r}
$$
where $\gamma$ is the surface tension, $\theta$ is the contact angle, $\rho$ is the liquid density and $g$ the local acceleration due to gravity. You could presumably measure the ratio $\cos\theta/r$ for your paper towels by putting one in water and measuring how high the wetness climbs; you can get $\gamma,\rho,g$ from other sources.
If you model your paper towels as an array of narrow capillary tubes, you might instead
think of the product
$$
\rho g h = \frac{2\gamma\cos\theta}{r}
$$
as a sort of "capillary pressure," which pulls the liquid up into the capillary gaps in the paper. You might then use this capillary pressure in the laminar flow equations to estimate the flow rate across each paper towel.
If you can successfully model the behavior of an uncompressed paper towel you can start to consider the compressed ones. There will be several competing effects. Most notably you'll be changing the distance between adjacent layers of paper (surely you've noticed how a folded or two-ply paper towel dries better than a single layer). You'll also crush the paper somewhat and change the structure of the internal capillaries.
You'll want to be careful what you mean by "rate of flow." I think it's plausible that a compressed block of paper would see the dry end get damp faster than an uncompressed block, but that the uncompressed block of paper has a higher volume flow rate for a given pressure head on the wet side.
This might be a case where it's simpler just to do the experiment.
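As a starting point for such an experiment, the rise height and "capillary pressure" from the formulas above are easy to evaluate; here is a sketch with assumed round numbers for water and a 0.1 mm effective pore radius (not measured for any real paper towel).

```python
import math

# Capillary rise of water in a narrow tube, using the formula from the answer.
gamma = 0.072        # N/m, surface tension of water near room temperature
theta = 0.0          # rad, contact angle (perfect wetting assumed)
rho = 1000.0         # kg/m^3, water density
g = 9.81             # m/s^2
r = 1e-4             # m, assumed effective capillary radius (0.1 mm)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
capillary_pressure = rho * g * h     # the "pull" on the liquid, in pascals

print(f"rise height h = {h:.3f} m, capillary pressure = {capillary_pressure:.0f} Pa")
```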
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/151639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
What will happen to a permanent magnet if we keep the same magnetic poles of two magnets close together for a long time? What will happen to a permanent magnet's magnetic field or magnetic strength if we keep the same magnetic poles of two permanent magnets facing each other for a long time?
Will any magnetic loss happen over the long period of exposure or does the magnetic strength remain the same?
Sorry if my logic is wrong. Please explain this.
| In my experience (with Ferroxcube materials) nothing happens. In fact, to change the magnetic properties the magnetic domains inside must be reoriented. But the force exerted by the second magnet is not strong enough to do so. But one can magnetize a non-magnetic piece of iron (for instance the tip of a screwdriver) by moving it over a magnet.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/152729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 1
} |
Why does room temperature water and metal feel almost as cool as each other? From what I've read about heat, temperature and conductivity, I understand that the reason water at room temperature feels colder than most other things at the same temperature (like wood, air, cotton) is because of its higher thermal conductivity. That is, it transfers heat quickly from my body to itself, as well as within itself.
(Assuming the thermal conductivity is the only reason why different materials feel colder or warmer), what I don't understand is why metals feel about as cold as water, while their thermal conductivities are 100-to-200 times higher than that of water (Water's is ~0.58 W/mK, the values for metals range from 50 to 400).
I suppose there is more to why some materials at identical temperatures draw heat away faster; what is it?
| The parts of your body that generate heat and that can sense temperature and the loss of heat are insulated from the environment by a layer of dead skin cells. The total thermal conductivity to the environment is the thermal conductivity of the materials that you touch in series with the thermal conductivity of this layer of skin. Since this layer has a rather poor thermal conductivity itself, the sensation of touching different materials with much better thermal conductivity will not differ much. If this skin layer is broken, however, temperature and heat conductivity differences are felt much stronger, usually in a rather painful manner.
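A rough numerical sketch of this series-conduction picture (all thicknesses and conductivities below are assumed round numbers): the skin layer's thermal resistance dominates, so the roughly 700x conductivity ratio between metal and water shrinks to a modest difference in heat flux.

```python
# Treat touch as steady heat flow through two slabs in series: the dead-skin
# layer plus a thin layer of the touched material. Resistances add in series.
d_skin, k_skin = 5e-4, 0.2       # m, W/(m K): assumed dead-skin layer
d_mat = 1e-3                     # m: assumed thickness of the touched layer
k_water, k_metal = 0.58, 400.0   # W/(m K)

def flux_per_kelvin(k_mat):
    # heat flux per unit area per kelvin ~ 1 / (sum of thermal resistances)
    return 1.0 / (d_skin / k_skin + d_mat / k_mat)

q_water = flux_per_kelvin(k_water)
q_metal = flux_per_kelvin(k_metal)
print(q_metal / q_water)   # well under the ~700x ratio of conductivities
```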
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/152812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Simple example showing why measurement & interaction are different Does someone know of a clear (pedagogical) example where one can really see (with the math) where interaction and measurement are not synonymous in quantum mechanics?
*
*I know that every measurement involves a certain interaction with the outside world (e.g. momentum gain from a photon), which results in the system collapsing into one of its eigenstates, meaning a pure state.
*On the other hand, it is also known that not all interactions result in collapsing into eigenstates of the system, so they are in principle very different from what we call "measurement".
It would definitely be nice to also see a bit of the math behind it, maybe for simplicity just restricting it to operator algebra and showing how measurement and interaction are defined, shedding a clear light on their difference. I must admit, from a purely physical point of view, I don't know their difference either.
| QM says that interactions are not always measurements. The canonical example is Schrödinger's cat. We have countless trillions of interactions per microsecond for hours - the state of the system is a very complex huge wave function which lives in a Hilbert space one can only describe as awesome. Then the box is opened, and the cat is dead or alive - but its state is complex.
This absurd construction (for instance that the individual smell molecules both exist and don't exist for hours) is ridiculous. QM is almost certainly wrong, and there exists some complexity limit where interactions and measurement are the same thing.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/152906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 6,
"answer_id": 5
} |
Solar spectrum units Why is intensity $I$ on a graph of the solar spectrum always showed in units of $[\mathrm{W/m^2/nm}]$ instead of simply $[\mathrm{W/m^2}]$? (The y-axis on the graph.)
It is apparently shown as intensity per wavelength, but why add this extra specification? For me it just complicates matters (it is not clear to see at which wavelength the intensity is greatest e.g.), so what is the point?
| $W/m^2$ would be the total power per unit area, summed over all wavelengths. When you use $W/m^2/nm$ you are explicitly saying that it corresponds to a specific part of the spectrum (nm is a unit of wavelength). Which is what the graph you posted is showing. The first one is called "irradiance", the one plotted here is called "spectral irradiance". For more details you can see here.
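A small sketch of how the two quantities relate (the curve below is a made-up toy shape, not real solar data): integrating spectral irradiance in $W/m^2/nm$ over wavelength gives back a plain irradiance in $W/m^2$.

```python
import numpy as np

# Toy spectral irradiance curve, units W/m^2/nm, over 300-2500 nm.
wavelength_nm = np.linspace(300.0, 2500.0, 500)
spectral_irr = 1.5 * np.exp(-((wavelength_nm - 500.0) / 400.0) ** 2)

# Trapezoidal rule: irradiance = integral of I(lambda) d(lambda)  ->  W/m^2
irradiance = float(np.sum(0.5 * (spectral_irr[1:] + spectral_irr[:-1])
                          * np.diff(wavelength_nm)))
print(f"total irradiance ~ {irradiance:.0f} W/m^2")
```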
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/152963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How are momentum and position operators dependent on the chosen inertial frame? How are momentum and position operators in quantum mechanics dependent on the chosen inertial frame of reference?
| They are defined once you have fixed an inertial reference frame and a Cartesian orthonormal coordinate system co-moving with it. Changing the inertial reference frame, the new operators are related with the initial ones by means of a strongly continuous projective-unitary representation of the (connected) Galileo group $G \ni g \mapsto U_g$,
$$P'_k = U_g P_k U_g^\dagger\:, \quad X'_k = U_g X_k U_g^\dagger$$
The representation is projective in the sense that $U_gU_{g'} = e^{i\alpha_M(g,g')}U_{gg'}$ where the real phase $\alpha_M(g,g')$ depends on the total mass $M$ of the physical system and there is no way to get rid of that phase (differently from the case of Poincaré group).
For instance a boost transformation $g_v$, at $t=0$, i.e., $x\to x$, $p\to p+Mv$, produces
$$U_{g_v} P_k U_{g_v}^\dagger = P_k + Mv_kI\:,\quad U_{g_v} X_k U_{g_v}^\dagger = X_k$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Length contraction in cyclic space Consider a flat universe with at least one finite cyclic spatial dimension: travel x meters in one direction, and you will end up back where you started.
For an object that is of small size relative to the scale of the cyclic dimension, relativistic length contraction ought to work out just fine; the cyclic nature of the space doesn't matter, and the object appears contracted in its direction of motion.
For sufficiently large objects, however, there appears to be a paradox. Consider a solid rod of length x, oriented along the cyclic dimension, so that it wraps around the universe and reconnects with itself. Topologically, it's a circle, but everywhere straight and flat. If the rod is accelerated along its length, it should appear to contract; this, however, would make it not long enough to span the cyclic space; thus, one should expect a discontinuity where two ends of the rod will break apart. There is, however, no unique location at which this discontinuity could occur.
So, what's going on? Is a flat cyclic spacetime simply not possible? Or am I missing some deeper understanding of special relativity?
| There is no length contraction in circular movements! Length contraction is only in the direction of movement.
A cyclic universe would be, e.g., a 3-dimensional universe curved in a fourth dimension, or a cyclic dimension curved in a second dimension. Thus length contraction could happen only locally, where the curving is not perceivable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
QFT and violation of Heisenberg uncertainty principle In some QFT books is said that a free electron can emit a virtual photon as long as it reabsorbs the photon and returns to its original state within a time:
$$\Delta t<\dfrac{\hbar}{2\Delta E}$$
That inequality DOES VIOLATE the Heisenberg Uncertainty Principle. Why is that POSSIBLE? If it were said in a time
$$\Delta t\geq \dfrac{\hbar}{2\Delta E}$$
I would not be so puzzled.
| The first relation you give is simply wrong (a typo in the textbook?). For an on-shell stable particle $\Delta E=0$, so it leads to $\Delta t< \infty$, meaning that any value from 0 to infinity would be possible. That's nonsense: a stable particle cannot live for 0 s. The second formula is the right one: the more the mass (or energy) is shifted from its nominal value, the shorter the particle lives.
@riemannium: (I edit my answer because I don't have enough permission to post a comment about your second post!). You seem to make a distinction in nature between a real particle and a virtual particle. For me, there is no such distinction because all particles can actually be considered as virtual. For instance, consider a photon produced by a star that you detect on earth. I'm sure that you would qualify this photon as real. However, between its production and its detection which destroys it, a finite time has been spent. So, this particular photon has had a finite lifetime. Hence, because of Heisenberg uncertainty, this photon can be in principle slightly off-shell $E^2 - p^2c^2 \ne 0$. Thus this photon is finally virtual! If you think this way, you will see that all particles are actually (more or less) virtual.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
How are these types of time dilation related? How are these two phenomena related (if at all):
1. Gravitation slowing down time
2. High speed slowing down time
| I believe the answer you are looking for is in the link below.
If you take how many Schwarzschild Radii you are from the center of a mass (or how deep you are in a gravitational field) it is equal to how many times faster light is going squared.
$$x = y^2$$
so if you are 4 (x) Schwarzschild radii from the center of a mass then to calculate the equivalent time dilation due to velocity you would solve for y which would be two in this case showing that the speed of light is going two times faster than the observed object.
let me know if this helps.
https://physics.stackexchange.com/questions/150542/time-dilation-geometry
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
For creating beats how small the difference should be between the two frequencies It is said that to create beats we need to superpose two "slightly" different frequencies, and the beat frequency is their difference.
1- My question is: why do we need slightly different frequencies? Why not a large difference?
2- Also, how slightly different? What is the limit on the maximum difference?
| By the superposition principle we arrive at
$$y_{total} = \left[\, 2A\cos\!\left(2\pi \tfrac{\Delta f}{2}\, t\right)\right]\cos\!\left(2\pi f_{av}\, t\right)$$
(source: unsw.edu.au)
The term inside the [] brackets can be considered as the slowly varying function that modulates the carrier wave with frequency $f_{av}$. (It is indeed an example of amplitude modulation or AM.) This function--the modulation of the amplitude--is the green wave in the diagram. It has frequency Δf/2, but notice that there is a maximum in the amplitude or a beat when the green curve is either a maximum or a minimum, so beats occur at twice this frequency. (One cycle of the green curve is from time (i) to time (v). There are beats at (i), (iii) and (v), and quiet spots at (ii) and (iv).)
So the beat frequency is simply Δf: the number of beats per second equals the difference in frequency between the two interfering waves.
If the beats occur more often than roughly 20 or 30 times per second, we no longer hear them as beats: our ears are not fast enough to respond to events that quickly. (Nor are our eyes: we cannot recognise a light that is flashing 30 times per second.)
Consider, for example, what happens when we play two tones with frequencies 400 Hz (approximately the note G4) and 500 Hz (approximately the note B4). The resultant waveform will look rather like a wave of 450 Hz whose amplitude varies at a rate of 100 times per second. But that is not what we hear: we hear the chord G4 plus B4 .
Reference here 1 and 2:
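The product-to-sum identity behind the formula above is easy to verify numerically; the amplitude and frequencies below are just example values.

```python
import numpy as np

# Check: cos(2*pi*f1*t) + cos(2*pi*f2*t)
#        == 2*cos(2*pi*(df/2)*t) * cos(2*pi*f_av*t),
# with beats heard at rate df = |f1 - f2|.
A = 1.0
f1, f2 = 400.0, 410.0                      # Hz (example tones)
f_av, df = (f1 + f2) / 2, abs(f1 - f2)

t = np.linspace(0.0, 1.0, 44100)
direct = A * np.cos(2 * np.pi * f1 * t) + A * np.cos(2 * np.pi * f2 * t)
factored = 2 * A * np.cos(2 * np.pi * (df / 2) * t) * np.cos(2 * np.pi * f_av * t)

print(np.max(np.abs(direct - factored)))   # ~0: the two forms agree
```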
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Why does the tongue stick to a metal pole in the winter? Since the Christmas season is here, I would like to ask a question about the movie "A Christmas Story." In one of the subplots of the movie, Ralphie's friends were betting each other that their tongue would stick to a frozen pole. Finally, the kid did it and it stuck to the pole.
Why does this happen? I believe that there is a physics explanation for this.
| The reason is the same as why a metal pipe feels colder than a wooden plank at the same temperature: thermal conduction.
The heat from your tongue (including the moisture) is absorbed faster than your body can replenish it. This has the effect of freezing your saliva in the tongue's pores to the metal surface (which itself isn't too smooth at small scales). Doing that will net you this:
The remedy is actually quite simple: get some warm water and pour it where your tongue is stuck and you'll be free.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 1,
"answer_id": 0
} |
Temperature of electroweak phase transition How does one estimate the temperature at which the electroweak phase transition (EWPT) occurred? Somewhere I have read it is around 100 GeV, but the reason was not explained.
| We calculate the free energy (density) for the Higgs field $\phi$ at finite temperature. In the Standard Model, this looks like
$\mathcal{F}_{SM}(\phi,T) = -\frac{\pi^2}{90}g_* T^4+V_{SM}(\phi, T) \ ,$
where $g_*$ is the number of degrees of freedom in the SM ($g_*=106.75$).
The potential has the form
$V_{SM}(\phi,T) = D(T^2-T_0^2)\phi^2 - ET\phi^3+\frac{\lambda_T}{4}\phi^4\ ,$
with $D,E,T_0^2,\lambda_T$ some factors depending on particle masses, coupling constants, the Higgs v.e.v. and temperature.
At the phase transition (PT), there are two degenerate minima of the potential. One sits at $\phi=0$, where we are in the symmetric phase, the other is at $\phi=\phi_0$, where we are in the broken phase. If my quick calculation is correct, this leads to a critical temperature $T_c\approx 163 GeV$.
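For reference, here is a sketch of how the degeneracy condition fixes $T_c$, using only the potential quoted above. Factor the potential as
$V_{SM}(\phi,T) = \phi^2\left[D(T^2-T_0^2) - ET\phi + \frac{\lambda_T}{4}\phi^2\right]\ .$
A second minimum degenerate with the one at $\phi=0$ requires the bracket to have a double root at some $\phi_c\neq 0$, i.e. a vanishing discriminant:
$E^2T_c^2 = \lambda_{T_c} D\,(T_c^2-T_0^2) \quad\Rightarrow\quad T_c^2 = \frac{T_0^2}{1-E^2/(\lambda_{T_c} D)}\ ,\qquad \phi_c = \frac{2ET_c}{\lambda_{T_c}}\ .$
Evaluating $D$, $E$, $\lambda_T$ and $T_0$ with Standard Model masses and couplings then yields the number quoted above.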
Note that in this case, the order parameter of the PT $\phi_c/T_c$ is very small. This means for one thing, that the PT is only very weakly first order and else, that perturbation theory is no longer reliable and we need to do non-perturbative calculations.
(Though the procedure is standard, I took this paper from Carena, Megevand, Quirós and Wagner as reference, just because it was the closest at hand, not because I particularly like it, which I don't btw.)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 1
} |
Why do floating water drops form spheres? Consider a drop of water floating in an inertial frame in STP air (e.g., the ISS). Intuitively, the equilibrium shape of the drop is a sphere.
How would one prove that? Is it equivalent to showing that the minimal surface area for a simply connected volume in $\mathbb{R}^3$ with a sufficiently smooth boundary is that of a sphere, i.e., the result of the isoperimetric inequality?
| The droplet wants to minimise its surface energy. This energy is proportional to its surface area. So the equilibrium shape is that which minimises the surface area for fixed volume (the bulk density is fixed by the temperature and pressure).
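A quick numerical illustration of the minimization claim: for the same enclosed volume, the sphere's surface area (hence surface energy) beats, e.g., the cube's.

```python
import math

# Compare surface areas of a sphere and a cube enclosing the same volume.
V = 1.0  # arbitrary fixed volume

r = (3 * V / (4 * math.pi)) ** (1 / 3)      # sphere radius for volume V
area_sphere = 4 * math.pi * r ** 2          # = (36*pi)^(1/3) ~ 4.836 for V = 1

a = V ** (1 / 3)                            # cube edge for volume V
area_cube = 6 * a ** 2                      # = 6.0 for V = 1

print(area_sphere, area_cube)
```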
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Minimum spread of frequency and wavelength in neodymium laser What is the equation linking the minimum spread in wavelength and frequency of a pulsed laser, in relation to the laser's pulse time and operating wavelength?
For example:
If a Neodymium laser operates at a wavelength of $1\times10^{-6}$ m and the laser is operated in
pulsed mode, emitting pulses of duration $3\times10^{-11}$ s,
what is the minimum spread in frequency and wavelength?
| Obtain the frequency spread from the energy-time uncertainty relation. Then argue that, via the energy-frequency relation, the wavelength spread can be calculated by extension. Just a trial. Thank you.
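A hedged numerical sketch of that recipe, assuming the bound $\Delta E\,\Delta t \ge \hbar/2$, i.e. $\Delta\nu_{min} = 1/(4\pi\,\Delta t)$, and the standard conversion $|\Delta\lambda| = \lambda^2\,\Delta\nu/c$. (Some texts use $\Delta\nu\,\Delta t \approx 1$ instead, which changes the answer by an order-one factor.)

```python
import math

c = 3.0e8            # m/s, speed of light
lam = 1.0e-6         # m, operating wavelength from the question
dt = 3.0e-11         # s, pulse duration from the question

dnu = 1.0 / (4 * math.pi * dt)        # minimum frequency spread, Hz
dlam = lam ** 2 * dnu / c             # corresponding wavelength spread, m

print(f"dnu ~ {dnu:.3g} Hz, dlam ~ {dlam:.3g} m")
```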
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/153927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Force of a Train Imagine that there are two trains and the first train is twice as long as the second train. They have the same mass per unit length and they are traveling at exactly the same speed.
If the first train hit me, would it hit me with twice as much force as the second train? These are two distinct situations: 1) I am hit by the first train only, 2) I am hit by the second train only.
Force is mass times acceleration, so if one train has twice the mass, then it seems likely that it would have twice the force. But I am not sure.
| First of all you should note that Newton's law says when $F$ acts on a mass $m$, then that mass will move with acceleration $a$.
Here, we should apply the laws of collision and, by using the conservation of momentum, find out what your velocity will be after the collision. Before the collision we have $p_{tot}=mv$ and after the collision $p_{tot}'=mv'+MV$, where $M$ and $V$ are your mass and speed, and $m$ and $v$ are the mass and speed of the train. Also, for energy (the common factor $\frac{1}{2}$ cancels), we have $$mv^2=mv'^2+MV^2$$Now by setting $p_{tot}=p_{tot}'$ and solving the equations, we find $$V=\frac{2mv}{M+m}.$$
Now you can see that the more massive train will give you more speed (and more momentum), so its collision is harder: the change in your momentum is greater (recall $F=\frac{dp}{dt}$). On the other hand, since $M \ll m$, this excess is tiny, and we may say the effect of the two trains is nearly the same.
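A quick numerical check of the formula above, with made-up masses for the person and the trains (the longer train has twice the mass of the shorter one):

```python
# Elastic-collision result from the answer: V = 2*m*v / (M + m).
M = 70.0          # kg, person (assumed)
m = 4.0e5         # kg, short train (assumed)
v = 20.0          # m/s, train speed (assumed)

def V_after(m_train):
    return 2 * m_train * v / (M + m_train)

V_short = V_after(m)       # hit by the short train
V_long = V_after(2 * m)    # hit by the train with twice the mass

print(V_short, V_long)     # both extremely close to 2*v = 40 m/s
```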
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does lunar module need the same amount of fuel for landing and take off? Let's assume there is no atmosphere, and let's assume there is no change in weight due to fuel consumption: will a rocket need the same amount of fuel for landing on a planet as for takeoff?
In theory - I think - you need the same escape velocity to get a rocket to orbit as you need to brake it to 0 speed after free fall from the orbit, but this changes if the descent is slower than free fall, is that right?
What is the real-world (Moon) situation in the case of the lunar module? (extrapolating for the fact that the Apollo Lunar Module leaves the descent stage behind)
| Landing is gravity-assisted, so it requires less energy. A spacecraft on the Moon has used up fuel, so its mass is less. If it is significantly less, then it takes less energy to lift the lighter craft back into orbit than it took to land the heavier craft.
Escape velocity applies only to non-powered projectiles, such as shooting a canon ball. There is no reason why you could not lift a craft at a constant velocity of 1m/s, if you apply a constant force.
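A back-of-the-envelope illustration of the mass argument using the Tsiolkovsky rocket equation (all numbers below are made-up, only roughly Apollo-like; this is not the actual Apollo budget):

```python
import math

def propellant_for(dry_mass, delta_v, v_e):
    # Tsiolkovsky rocket equation: delta_v = v_e * ln((dry + prop) / dry)
    return dry_mass * (math.exp(delta_v / v_e) - 1.0)

v_e, dv = 3050.0, 1900.0       # hypothetical exhaust velocity and per-leg delta-v
ascent_dry = 2200.0            # ascent stage alone; the descent stage is left behind
ascent_prop = propellant_for(ascent_dry, dv, v_e)

# the descending craft must carry the full ascent stage (dry mass + its fuel)
descent_dry = 2100.0 + ascent_dry + ascent_prop
descent_prop = propellant_for(descent_dry, dv, v_e)
print(ascent_prop, descent_prop)  # the descent burn needs far more propellant
```

Because the descending craft carries everything the ascending craft will later need, the descent leg costs several times the propellant of the ascent leg even for the same delta-v.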
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
How would gravitons couple to the Stress-Energy tensor? How would gravitons couple to the Stress-Energy tensor $T^{\mu\nu}$? How did physicists arrive at this result? I've read that it follows from the analysis of irreducible representations of the 4-dimensional Poincaré group, but is this accurate?
| The stress-energy tensor is, up to multiplicative factors, defined by $\frac{\delta S}{\delta g^{\mu\nu}}$, where $S$ is the action and $g_{\mu\nu}$ is the metric. When people talk about the graviton, they talk about quantizing the metric around its classical solution, so we consider field values $g_{\mu\nu} = g^{(c)}_{\mu\nu} + h_{\mu\nu}$, where $h$ is considered a small perturbation (there are a lot of gauge fixings left out here). In order to evaluate the action for this new field $h$, we simply plug $g$ into the action and collect terms involving $h$. To lowest non-trivial order in $h$, we can Taylor expand:
$$S(g) = S(g^{(c)}) + \int\!\frac{\delta S}{\delta g^{\mu\nu}} h^{\mu\nu} + \int\!\frac{1}{2} h^{\alpha\beta} \frac{\delta^2 S}{\delta g^{\alpha\beta} \delta g^{\mu\nu}} h^{\mu\nu} + O(h^3).$$
Notice the second term is just $\int\!dx\, T_{\mu\nu} h^{\mu\nu}$, as advertised. The third term is the kinetic term for the $h$ field, and gives a wave equation. Now, these gravitons are essentially free spin-2 particles moving in a classical GR background, with no interactions, because we have truncated the expansion at order 2. Once we try to add higher orders, however, quantum corrections require ever increasing powers of $h$ with no unique prescription to render their coefficients finite. The theory is said to be non-renormalizable.
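As a concrete illustration of the advertised term (a sketch only; signs and factors of the coupling $\kappa$ vary with conventions), expanding about flat space one writes:

```latex
g_{\mu\nu} = \eta_{\mu\nu} + \kappa\, h_{\mu\nu},
\qquad
S_{\mathrm{int}} \sim \frac{\kappa}{2}\int d^4x\; h^{\mu\nu}(x)\, T_{\mu\nu}(x),
```

so the graviton field $h_{\mu\nu}$ couples linearly to $T_{\mu\nu}$, much as a photon $A_\mu$ couples to a current $J^\mu$.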
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to reconstruct the dependence of the potential from a coordinate? An ion moves along the x-axis of a black box with a speed $V$ and returns in a time
$$T=a V^b$$
where $a$ and $b$ are some known constants. Having this, can we reproduce the dependence of a field potential $U(x)$ of this box?
So far I have managed to do this:
Adding to an initial velocity some $dV$ we get the increase in a time $dt$ and so $$T+dt=a (V+dV)^b \, .$$
We can derive $dt$ by subtracting the initial $T$ value
$$dt=a(V+dV)^b-aV^b \, .$$
Using the equivalence formula for $dV$ approaching zero we find
$$dt=aV^b(1+b\frac{dV}{V})-aV^b=abV^{b-1}dV \, .$$
We can find the acceleration
$$\frac{dV}{dt}=\frac{V^{1-b}}{ab} \, .$$
Similarly, as $V=(T/a)^{\frac{1}{b}}$ we have
\begin{align}
dV
&= \left(\frac{T+dt}{a} \right)^\frac{1}{b}
- \left(\frac{T}{a} \right)^\frac{1}{b} \\
&= \left(\frac{T}{a} \right)^\frac{1}{b} \left(1+\frac{dt}{bT} \right) - \left( \frac{T}{a} \right)^\frac{1}{b} \\
&=\frac{T^\frac{1-b}{b}dt}{ab} \, .
\end{align}
This yields again
$$\frac{dV}{dt}=\frac{T^\frac{1-b}{b}}{ab} \, .$$
Now I assume it is our acceleration which the field imparts to the particle, thus
$$\frac{dV}{dt}=\frac{f}{m}=\frac{1}{m}\frac{dU}{dx} \, .$$
From there I am not sure whether or not I can integrate the equation
$$dU=m\frac{T^{\frac{1-b}{b}}}{ab} \, dx \, .$$
So, is the time $T$ there a constant with respect to $dx$ or not? The answer I get from the last equation is
$$U(x)=U(0)+m\frac{T^{\frac{1-b}{b}}}{ab}x \, .$$
I am confused by two things here:
*
*The difference between the answers derived from the second and the first approach:
$$\frac{dV}{dt} = \frac{V^{1-b}}{ab} \quad \text{and} \quad \frac{dV}{dt} = \frac{T^\frac{1-b}{b}}{ab}$$
*The possibility of integrating that way.
Are there other ways to get the definite answer for this task?
| No, I don't think that you proceed correctly. You need the relationship between force and distance, and this is what you should integrate.
Please follow my formulas. I understand that $V$ is the velocity of the ion (not of the box).
From your formula $T=aV^b$, I deduce the acceleration $A$ at the end of the trip, assuming something that is not written in the exercise: inside the box the velocity obeys the same rule that we see at the ends of the box. As Lionel says, between $v = V$ and $v = 0$ only half of the time passes, $T_1 = T/2$, so that
$$V=\left( \frac{2T_1}{a}\right)^{1/b} \Longrightarrow \, \, A= \frac{dV}{dT_1} = 2^{1/b}\frac{1}{ab} \left( \frac{T_1}{a}\right)^{1/b-1}$$
Given the acceleration, the force is $F = mA$, as you say.
Now, you have to calculate the space that the ion travelled inside the box.
$$ X = \int_0 ^{T_1} V \ dT' = 2^{1/b}a\int_0 ^{T_1} \frac {dT'}{a} \ \left( \frac{T'}{a}\right)^{1/b} = 2^{1/b}\frac {ab}{1 + b} \ \ \left( \frac{T_1}{a}\right)^{(1+b)/b}.$$
Now you can express the force as a function of $X$ by eliminating between them the time $T_1$, and then integrate and find the potential. Let's do it:
$$\left( \frac{T_1}{a}\right) = 2^{-1/(1+b)}\left( \frac {1+b}{ab} X\right) ^{b/(1+b)}$$
Substituting in the field strength $E = mA/q$, where $q$ is the charge of the ion,
$$ E = 2^{2/(1+b)} \frac {m}{q} \frac {1}{ab} \left( \frac {1+b}{ab} X\right) ^{(1-b)/(1+b)}$$
This can be easily integrated over $X$:
$$U(X) = 2^{2/(1+b)} \frac {m}{q} \frac {1}{ab}\,\frac{ab}{2} \left( \frac {1+b}{ab} X \right) ^{2/(1+b)} $$
$$ = 2^{(1-b)/(1+b)} \frac {m}{q} \left( \frac {1+b}{ab} X \right) ^{2/(1+b)} $$
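As a numerical sanity check (not part of the original answer), one can verify that a power-law potential $U(x)\propto x^{2/(1+b)}$ indeed reproduces $T = aV^b$: the ratio $T/V^b$ should come out the same for any entry speed. All constants below are illustrative:

```python
import math

def round_trip_time(V, b, k=1.0, m=1.0, q=1.0, steps=20000):
    # Time for an ion entering at speed V to turn around and come back
    # in the power-law potential U(x) = k * x**(2/(1+b)).
    n = 2.0 / (1.0 + b)
    # turning point: (1/2) m V^2 = q k x_t**n
    x_t = (0.5 * m * V * V / (q * k)) ** (1.0 / n)
    # T = 2 * integral_0^{x_t} dx / v(x),  v(x) = sqrt(V^2 - 2 q k x^n / m);
    # the midpoint rule copes with the integrable singularity at x = x_t
    h = x_t / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += h / math.sqrt(V * V - 2.0 * q * k * x ** n / m)
    return 2.0 * total

# T / V^b should be the same constant a for every entry speed
b = 2.0
r1 = round_trip_time(1.0, b) / 1.0 ** b
r2 = round_trip_time(3.0, b) / 3.0 ** b
print(r1, r2)
```

The prefactors drop out of this check, so it tests only the functional form $X^{2/(1+b)}$ of the potential.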
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The work-energy theorem Well here's the question.
From some previous exercises we know that
\begin{align}
A&=\int F\;ds,\\
&=\int ma\;ds, &&(F=ma)\\
&=\int m \frac{dv}{dt}\;ds, &&(a=dv/dt)\\
&=m \int_{v_1}^{v_2}v\; dv,\\
&=m \frac{v_2^2}{2}-m \frac{v_1^2}{2},\\
&=W_2-W_1, &&(W_i=\frac12mv_i^2)\\
&=\Delta W.
\end{align}
Meanwhile, for potential energy we have the situation shown in the figure:
\begin{align}
A&= \int m a\;ds,\\
&= \int m \frac{dv}{dt}\;ds,
\end{align}
Here the professor did something like:
$$ds \times \cos \alpha =-dh$$
and then the equation goes
\begin{align}
A&=- \int m \frac{dv}{dt}\;dh,\\
&=- \int m v \;dv,\\
&=-m \int v \text{ }dv
\end{align}
and up to
$$A=-\Delta W_p$$
Now what I'd like to understand from you is a logical explanation for
$$ds \times \cos \alpha=-dh$$
I'd be very grateful!
| The height $h$ is probably the vertical displacement pointing downwards. Therefore:
$$
h = \left(-\mathbf{\hat j}\right)\cdot\mathbf s = -|\mathbf{\hat j}||\mathbf s|\cos\alpha = -s\cos\alpha
$$
Now we can derive:
$$
\frac{dh}{ds} = -\frac{d}{ds}\left(s\cos\alpha\right) = -\cos\alpha
$$
Therefore, multiplying both sides by $ds$, we get:
$$
dh = -\cos\alpha \, ds
$$
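A quick numerical check of the sign convention (illustrative numbers; the path is a single straight downhill segment): the work done by gravity, $mg\,ds\cos\alpha$, equals $-mg\,dh$ once $dh = -ds\cos\alpha$ is used, which is the content of $A=-\Delta W_p$ for this segment.

```python
import math

# gravity does work F·ds = m g ds cos(alpha) on a straight downhill
# segment; with the height change dh = -ds cos(alpha) this is exactly
# -m g dh, i.e. A = -ΔW_p (all numbers illustrative)
m, g = 2.0, 9.81
ds, alpha = 5.0, math.radians(30)
dh = -ds * math.cos(alpha)           # height drops while moving down
work = m * g * ds * math.cos(alpha)  # F·ds along the segment
print(abs(work - (-m * g * dh)) < 1e-9)
```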
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why are position and velocity enough for prediction and acceleration is unnecessary? In classical mechanics, if you take a snapshot and get the momentary positions and velocities of all particles in a system, you can derive all past and future paths of the particles. It doesn't seem obvious why the position and its first derivative are enough and no further derivatives are needed.
For some reason the accelerations (forces) can be expressed by formulas that only mention the positions and velocities of particles. For example, the gravitational force only requires knowing positions, but the magnetic part of the electromagnetic force needs velocities as well. Why doesn't anything need the second derivative (acceleration)?
Does this say something about the universe or rather about our way of analysis?
Could we come up with a theory that only requires a snapshot of the positions? Could we devise a set of concepts and formulas where the second derivative is also required for prediction and instead of forces we'd be talking about stuff that induces third derivatives of motion?
Does modern physics (e.g. relativity) have something to say about this curious thing?
| The reason that you only need to specify initial position and velocity to exactly solve the equations of motion for a system is simply because Newton's Second Law (which is the equation governing motion in Classical Mechanics) is a second-order differential equation. The upshot is that to solve a 2nd-order ODE, you basically need to take 2 integrals. Each integral will have exactly one undetermined constant of integration, so by specifying those numbers with your initial conditions, you have uniquely specified your problem's solution.
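A minimal numerical sketch of this point: to integrate a second-order equation of motion you hand the integrator exactly two numbers, $x_0$ and $v_0$; the acceleration is computed from the force law at every step, never supplied as data. (Unit-mass harmonic oscillator, my own toy example.)

```python
import math

def trajectory(x0, v0, dt=1e-4, t_end=1.0):
    # Integrate Newton's second law for a unit-mass harmonic oscillator
    # (force law F = -x). Only the initial position and velocity are
    # supplied; the acceleration is *derived* from the state each step.
    x, v = x0, v0
    for _ in range(int(round(t_end / dt))):
        a = -x        # second-order ODE: a comes from the force law
        v += a * dt   # first integration constant <-> v0
        x += v * dt   # second integration constant <-> x0 (semi-implicit Euler)
    return x

# the analytic solution with x(0)=1, v(0)=0 is cos(t)
print(trajectory(1.0, 0.0), math.cos(1.0))
```

The two constants of integration of the ODE correspond one-to-one with the two numbers $v_0$ and $x_0$ you must specify.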
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
} |
Does wave interference happen only to same frequency waves? As the title says, from books and results from internet, I find that examples of wave interference always have the same frequency, only different in phase constant.
So, I'd like to know whether wave interference happens only between waves of the same frequency.
| Examples in textbooks, on the internet, and elsewhere usually use two waves of the same frequency, for experiments such as the double-slit experiment or to show complete constructive or destructive interference. But all EM waves will interfere with one another to varying degrees. If we are considering the frequencies in white light, the interference is usually time-averaged, so ultimately it has little visible effect between different wavelengths.
A useful mental comparison to make whenever interference is mentioned is water waves on a flat surface that are being perturbed. If you generate a longer wavelength in one position and a shorter one a distance away, will they still combine and interfere? It will not be as simple a combination as two waves of the same frequency, but you will still get moments of constructive and destructive interference.
The net result is a more complex wave, and you can get all kinds of shapes from combining waves. A nice site with a bit about the same effect but for sound waves can be found here:
Hyperphysics Beat Frequencies
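A quick check of the trig identity behind beats, with two made-up tones a few hertz apart: the superposition of unequal frequencies is a fast carrier at the mean frequency times a slow envelope at half the difference frequency, which is exactly the alternation of constructive and destructive interference described above.

```python
import math

f1, f2 = 440.0, 444.0  # two illustrative tones, 4 Hz apart
for t in (0.01, 0.123, 0.37):
    s = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2): carrier at the mean
    # frequency modulated by an envelope at half the difference frequency
    env = 2 * math.sin(math.pi * (f1 + f2) * t) * math.cos(math.pi * (f1 - f2) * t)
    assert abs(s - env) < 1e-9
print("beat frequency:", abs(f1 - f2), "Hz")
```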
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
In what order will the magnetic quantum number be filled For example, the electron configuration for Cu(II) ion is [Ar]3d9. So only the 3d shell matters to the total orbital angular momentum of the ion. For 3d shell there are 5 possible values of $m_l : 0,\pm 1,\pm 2$. So how will the 9 electrons fill the 10 slots? What will the total orbital angular momentum $L$ be?
| Sorry, never mind. Hund's rule answers this directly.
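For what it's worth, the filling order Hund's rule implies can be sketched in a few lines (the function and model are my own simplification: occupy all $m_l$ slots singly from $+l$ down, then pair up from $+l$ again):

```python
def hund_ground_L(n_electrons, l):
    # singly occupy m_l = l, l-1, ..., -l first (maximize S),
    # then add the paired electrons starting from m_l = l again
    ml = list(range(l, -l - 1, -1))
    first_pass = ml[:n_electrons]
    second_pass = ml[:max(0, n_electrons - len(ml))]
    return abs(sum(first_pass) + sum(second_pass))

# Cu(II) is [Ar]3d9: nine electrons, l = 2
print(hund_ground_L(9, 2))  # L = 2, i.e. a D term (the single d-hole carries l = 2)
```

The nine electrons fill all five $m_l$ values singly and then pair up $m_l = 2, 1, 0, -1$, leaving one unpaired electron at $m_l = -2$, so $L = 2$.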
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/154628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |