Why isn't the Gear predictor-corrector algorithm for integration of the equations of motion symplectic? Okumura et al., J. Chem. Phys. 2007 states that the Gear predictor-corrector integration scheme, used in particular in some molecular dynamics packages for the dynamics of rigid bodies using quaternions to represent molecular orientations, is not a symplectic scheme. My question is: how can one prove that? Does it follow from the fact that the Gear integrator is not time-reversible (and if so, how can one show that)? If not, how do you prove that an integration scheme is not symplectic?
| Take a look at the notes on lectures 1 and 2 of Geometric Numerical Integration found here. Quoting from Lecture 2
A numerical one-step method $y_{n+1} = \Phi_h(y_n)$ is called symplectic if, when applied
to a Hamiltonian system, the discrete flow $y \mapsto \Phi_h(y)$ is a symplectic transformation for all sufficiently small step sizes.
From your link you have
$$x(t+h) = x(t) + h \dot{x}(t) + h^2 \left\{\frac{3}{24}f(t+h) +\frac{10}{24}f(t) -\frac{1}{24}f(t-h) \right\}$$
and
$$\dot{x}(t+h) = \frac{x(t+h) - x(t)}{h} + h \left\{\frac{7}{24}f(t+h) +\frac{6}{24}f(t) -\frac{1}{24}f(t-h) \right\}$$
Now take $\omega(\xi,\eta) = \xi^T J \eta$ where
$J = \left(\begin{array}{cc} 0 & \mathbb{I} \\ \mathbb{-I} & 0 \end{array}\right)$. Then the integrator is symplectic if and only if $\omega(x(t),\dot{x}(t))=\omega(x(t+h),\dot{x}(t+h))$ for sufficiently small $h$.
All that you need to do is to fill in the values of $x(t+h)$ and $\dot{x}(t+h)$ from the integrator, and show that this condition does not hold.
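To see concretely what that condition tests, here is a minimal numerical sketch for the harmonic oscillator $H=(p^2+x^2)/2$ (so $f(x)=-x$). It uses forward Euler and symplectic Euler rather than the Gear scheme itself; the same Jacobian test $M^T J M = J$ applies to any one-step method, since for a linear problem one step is a linear map $M$ acting on $(x,p)$:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = 0.01

# Forward Euler: x' = x + h p, p' = p - h x.
M_euler = np.array([[1.0, h],
                    [-h, 1.0]])
# Symplectic Euler: p' = p - h x, then x' = x + h p'.
M_symp = np.array([[1.0 - h*h, h],
                   [-h, 1.0]])

def is_symplectic(M):
    """One-step linear map M is symplectic iff M^T J M = J."""
    return np.allclose(M.T @ J @ M, J)

print(is_symplectic(M_euler))  # False: here M^T J M = (1 + h^2) J
print(is_symplectic(M_symp))   # True
```

For a nonlinear force one applies the same test to the Jacobian of the step map; for the Gear corrector above the condition fails at finite $h$.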
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Quantum mechanics as classical field theory Can we view the normal, non-relativistic quantum mechanics as a classical fields?
I know, that one can derive the Schrödinger equation from the Lagrangian density
$${\cal L} ~=~ \frac{i\hbar}{2} (\psi^* \dot\psi - \dot\psi^* \psi) - \frac{\hbar^2}{2m}\nabla\psi^* \cdot \nabla \psi - V({\rm r},t) \psi^*\psi$$
and the principle of least action. But I also heard, that this is not the true quantum mechanical picture as one has no probabilistic interpretation.
I hope the answer is not to obvious, but the topic is very hard to Google (as I get always results for QFT :)). So any pointers to the literature are welcome.
| Indeed, the true Lagrangian for the Schrödinger equation takes this form:
$${\cal L}=i \hbar\psi^*\dot\psi-\frac{\hbar^2}{2m}|\nabla\psi|^2-V({\bf x},t)\psi^*\psi$$
and the action becomes
$$S=\int dtd^3x{\cal L}.$$
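Varying this action with respect to $\psi^*$ (with $\psi$ and $\psi^*$ treated as independent fields) indeed returns the Schrödinger equation; a quick sketch of the Euler-Lagrange computation:

```latex
% Euler-Lagrange equation for the field \psi^*:
%   \frac{\partial\mathcal L}{\partial\psi^*}
%     - \partial_t\frac{\partial\mathcal L}{\partial\dot\psi^*}
%     - \nabla\cdot\frac{\partial\mathcal L}{\partial(\nabla\psi^*)} = 0
\frac{\partial\mathcal L}{\partial\psi^*} = i\hbar\dot\psi - V\psi, \qquad
\frac{\partial\mathcal L}{\partial\dot\psi^*} = 0, \qquad
\frac{\partial\mathcal L}{\partial(\nabla\psi^*)} = -\frac{\hbar^2}{2m}\nabla\psi
\quad\Longrightarrow\quad
i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi .
```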
A Lagrangian for the Schrödinger equation has a meaning only in a quantum field theory context, when you do a second quantization on the Schrödinger wavefunction. This applies in a lot of fields, mostly in condensed matter and, generally speaking, many-body physics. In this case, you have to generalize this equation to the Pauli equation and work with spinors and anticommutation rules to describe ordinary matter.
Then, the probabilistic interpretation applies to the states in the Fock space for the many-body problem you are considering.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 5,
"answer_id": 3
} |
Analyticity and Causality in Relativity A few weeks ago at a conference a speaker I was listening to made a comment to the effect that a function (let's say scalar) cannot be analytic because otherwise it would violate causality. He didn't make this precise as it was a side comment, and although it intrigued me (I had never heard that said before) I didn't give it much thought until this morning.
Now that I think about it, it actually seems quite obvious. Just look in $1+1$ Minkowski space: suppose $f$ is analytic around some point $(t_0, x_0)$, then I can find some ball about this point such that at another point $(t,x)$ in the ball but outside the light cone of $(t_0,x_0)$ we have that $f(t,x)$ is completely determined by the value of the function and its derivatives at $(t_0,x_0)$. This seems to be against the spirit of causality.
If the above is correct, does anyone know when this was first discussed? I imagine that it would have been quite a long time ago. It's interesting because until this conference I had never heard it said before. Perhaps it is deemed to be uninteresting/obvious?
| Analytic functions are functions which are locally given by a convergent power series.
Analyticity of a function does not imply that knowing the values of all its derivatives at one point determines the value of the function at every other point.
In particular, for any values of $y_0$ and $y_1$ one can construct an analytic function such that $y_0=f(x_0,t_0)$ and $y_1=f(x_1,t_1)$.
Consequently, the argument that analyticity would break causality is not valid.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis? This question was listed as one of the questions in the proposal (see here), and I didn't know the answer. I don't know the ethics on blatantly stealing such a question, so if it should be deleted or be changed to CW then I'll let the mods change it.
Most foundations of statistical mechanics appeal to the ergodic hypothesis. However, this is a fairly strong assumption from a mathematical perspective. There are a number of results frequently used in statistical mechanics that are based on Ergodic theory. In every statistical mechanics class I've taken and nearly every book I've read, the assumption was made based solely on the justification that without it calculations become virtually impossible.
Hence, I was surprised to see that it is claimed (in the first link) that the ergodic hypothesis is "absolutely unnecessary". The question is fairly self-explanatory, but for a full answer I'd be looking for a reference containing development of statistical mechanics without appealing to the ergodic hypothesis, and in particular some discussion about what assuming the ergodic hypothesis does give you over other foundational schemes.
| You may be interested in these lectures:
Entanglement and the Foundations of Statistical Mechanics
The smallest possible thermal machines and the foundations of thermodynamics
held by Sandu Popescu at the Perimeter Institute, as well as in this paper
Entanglement and the foundations of statistical mechanics.
There it is argued that:
*
*"the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary" (the ergodic hypothesis is one way to ensure the equal a priori probability postulate)
*instead, a quantum basis for statistical mechanics, based on entanglement, is proposed. In the Hilbert space, it is argued, almost all states are close to the canonical distribution.
You may find in the paper some other interesting references on this subject.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114",
"answer_count": 6,
"answer_id": 4
} |
Rigorous proof of Bohr-Sommerfeld quantization Bohr-Sommerfeld quantization provides an approximate recipe for recovering the spectrum of a quantum integrable system. Is there a mathematically rigorous explanation why this recipe works? In particular, I suppose it gives an exact description of the large quantum number asymptotics, which should be a theorem.
Also, is there a way to make the recipe more precise by adding corrections of some sort?
| Perhaps it can be derived from the approximation of the density of states
$$ N(E)= \sum_{n=0}^{\infty}\theta(E-E_{n})\approx \frac{1}{2\pi \hbar}\iint_{V}\theta(E-H)dxdp $$
where $H = p^{2}/2m + V(x)$ is the Hamiltonian of the particle and $\theta(x)$ is the Heaviside step function.
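As a sanity check of this counting formula, here is a sketch for the harmonic oscillator with $\hbar=m=\omega=1$, i.e. $H=(p^2+x^2)/2$: the phase-space area below energy $E$ is $2\pi E$, so $N(E)\approx E$, which matches the exact count of levels $E_n = n + 1/2$ below $E$:

```python
import numpy as np

# Harmonic oscillator with hbar = m = omega = 1, i.e. H = (p^2 + x^2)/2.
hbar = 1.0
E = 10.3

# Semiclassical count: phase-space area below energy E, over 2*pi*hbar.
x = np.linspace(-np.sqrt(2*E), np.sqrt(2*E), 200001)
p = np.sqrt(np.maximum(2*E - x**2, 0.0))
area = 2 * np.sum((p[1:] + p[:-1]) / 2 * np.diff(x))  # trapezoid rule for oint p dx
N_semi = area / (2*np.pi*hbar)

# Exact count of levels E_n = hbar*(n + 1/2) below E.
N_exact = int(np.floor(E/hbar + 0.5))

print(round(N_semi, 1), N_exact)  # 10.3 10
```

The semiclassical count differs from the exact one by less than a level, in line with the Maslov-corrected Bohr-Sommerfeld rule being exact for this potential.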
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 4
} |
which letter to use for a CFT? In math, one says "let $G$ be a group", "let $A$ be an algebra", ...
For groups, the typical letters are $G$, $H$, $K$, ...
For algebras, the typical letters are $A$, $B$, ...
I want to say things such as "let xxx be a conformal field theory" and "let xxx $\subset$ xxx be a conformal inclusion".
Which letters should I use? What is the usual way people go about this?
Here, I'm mostly thinking about chiral CFTs, but the question is also relevant for full (modular invariant) CFTs.
| There is, I think, no really standard symbol for the generic (chiral) CFT used universally, but there is within the different formalizations.
*
*When chiral CFTs are modeled by vertex operator algebras, the standard symbol is usually "$V$" (for obvious reasons), as user388027 notes in his reply.
*When chiral CFTs are modeled as conformal nets, then (as you know), the standard symbol is usually "$\mathcal{A}$" or "$\mathfrak{A}$" (for *A*lgebra of observables).
A randomly picked standard reference with this usage is Gabbiani, Fröhlich, Operator algebras and CFT
It seems to me that most authors who need and use the notion of CFT more abstractly tend to write things like
$$
CFT_1 \to CFT_2
$$
For instance, see here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Convert state Vectors to Bloch Sphere angles I think this question is a bit low brow for the forum. I want to take a state vector $ \alpha |0\rangle + \beta |1\rangle $ to the two Bloch angles. What's the best way? I tried to just factor out the phase from $\alpha$, but then ended up with a divide by zero when trying to compute $\phi$ from $\beta$.
| You are probably dividing by $\alpha$ at some point to eliminate a global phase, leading to your divide by zero in some cases. It would be better to get the phase angles of $\alpha$ and $\beta$ with $\arg$, and set the relative phase $\phi=\arg(\beta)-\arg(\alpha)$. Angle $\theta$ is now simply extracted as $\theta = 2\cos^{-1}(|\alpha|)$ (note that the absolute value of $\alpha$ is used). This is all assuming that you want to get to
$$|\psi\rangle = \cos(\theta/2)|0\rangle + \mathrm{e}^{i\phi}\sin(\theta/2)|1\rangle\,,$$
which neglects global phase.
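The recipe above can be sketched in a few lines (the function name and conventions here are mine, not from the question); note that using `arg` of each amplitude never divides by anything:

```python
import numpy as np

def bloch_angles(alpha, beta):
    """Map alpha|0> + beta|1> to Bloch angles (theta, phi), dropping global phase."""
    theta = 2 * np.arccos(np.clip(abs(alpha), 0.0, 1.0))  # clip guards rounding noise
    phi = (np.angle(beta) - np.angle(alpha)) % (2 * np.pi)
    return theta, phi

print(bloch_angles(0, 1))                         # |1>: theta = pi, phi = 0
print(bloch_angles(1/np.sqrt(2), 1j/np.sqrt(2)))  # theta = pi/2, phi = pi/2
```

The first case is exactly the one that breaks the divide-by-$\alpha$ approach: for $\alpha=0$, `np.angle(0)` is simply `0.0` and no division occurs.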
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
How to write a paper in physics? I really like to do research in physics and like to calculate to see what happen. However, I really find it hard to write a paper, to explain the results I obtained and to put them in order. One of the reasons is the lack of my vocabulary.
*
*How do I write physics well? I think that writing physics is more dependent on an author's taste than writing mathematics is.
*Are there any good references I can consult when writing?
*Or could you give me advice and tips on writing a paper?
*What do you take into account when you start writing a paper?
*What are your strategies on the process such as structuring the paper, writing a draft, polishing it, etc?
*In addition, it is helpful to give me examples of great writing with the reason why you think it is good.
*Do you have specific recommendations?
| I never forgot my old lecturer Robert Barrass and his book Scientists Must Write. He never stood a chance with me.
I still use the basic, 'Theory, diagram, experiment, results and conclusions' approach, otherwise I am lost!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 6,
"answer_id": 3
} |
How much water is destroyed in photosynthesis, relative to the world's supply? Water is involved in photosynthesis. How much water are we talking about compared with the total amount of water on Earth? Is it enough to have an effect on the average age of water molecules?
| According to http://ga.water.usgs.gov/edu/earthhowmuch.html the total volume of water on the earth is $1.386\times 10^9$km$^3$, which is about $1.4 \times 10^{21}$kg (I'm rounding because I don't know the average temperature and therefore density of the water).
According to http://en.wikipedia.org/wiki/Biomass_(ecology)#Global_rate_of_production the annual photoautotrophic production of biomass is 104.9 billion tonnes C/yr. Assuming this is all carbohydrate, one carbon atom is associated with roughly one water molecule (monosaccharides are C6H12O6), so the weight of water associated with that weight of carbon is $1.6 \times 10^{14}$kg.
So if you assumed all water gets cycled through the biosphere you'd conclude that the average life of a water molecule is 8.75 million years.
I must admit this is shorter than I expected.
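The arithmetic can be reproduced in a few lines; the 8.75 figure comes from rounding the water flux up to $1.6\times10^{14}$ kg, while keeping more digits gives roughly 8.9 million years:

```python
# Rough numbers from the answer above.
total_water_kg = 1.4e21                     # total water on Earth
carbon_kg_per_yr = 104.9e9 * 1000           # 104.9 billion tonnes C/yr in kg
water_kg_per_yr = carbon_kg_per_yr * 18/12  # ~one H2O (18 u) per C atom (12 u)

print(f"{water_kg_per_yr:.2e} kg/yr")                           # ~1.57e+14 kg/yr
print(round(total_water_kg / water_kg_per_yr / 1e6, 1), "Myr")  # ~8.9 Myr
```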
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How much more effective is it to stir in both directions? I have been told that industrial mixing machines (say, for cake batter) switch directions periodically, first stirring in one direction, then the other, because this mixes the material more thoroughly.
I imagine (but don't know for sure) that stirring in only one direction will tend to create helical structures in the mixed material, where each helix is more or less uniform but two helices might be quite different from one another; and that switching directions tends to break up and mingle these helices. Is this at all correct?
Is there a way to quantify the effectiveness of different methods of stirring? If so, how much better is it to stir in alternating directions, and how often should one switch directions?
| Mixing means, and requires, turbulence.
Single-direction stirring can settle into a fairly laminar regime at least some of the time. Abruptly reversing direction breaks up that order for a while. So would abrupt stops and starts, or just running the machine in a mode where the motion of the blades has a highly turbulent Reynolds number.
How much reversal do you need? Well, that's the crux of it, and the devil is in the details. I presume that they settle this question empirically. Engineers can be very simple that way.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Did anyone claim that quantum theory meant lasers would never work I've been reading 'How the Hippies saved Physics', which describes a design for a superluminal communication device, of which the crucial part was a laser which duplicated an incoming photon many times. The reason this won't work is what is now known as the no-cloning theorem - a quantum state can't be duplicated in this way. It may appear that a laser can do this, but it can't.
The thing is that I have vague memories of reading that when the laser was first talked about, it was claimed that quantum theory forbade such a device. What I'd like to know is whether there were such claims, and if so were they based on the idea that a laser would duplicate a quantum state.
| As far as I know, initially the main requirement for lasing was population inversion. It can easily be shown that this is not possible for a pure two (energy) level system. I suppose this is what you are referring to.
However, since then, using quantum interference in multi-level systems, one can have lasing without inversion (LWI), e.g. LWI in atomic vapor. A laser does not duplicate a quantum state, nor am I aware of any such device that can do this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/27994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
'Applications' of surface tension
What are some common applications, uses, exploitations of the properties of surface tension?
Here is what I mean. A water strider can walk on water; that is a consequence of surface tension, but it is not human-made.
On the other hand, I heard that in the construction of some tents, the upper cover of the tent is the rain protector. It is not really impermeable, but if water is placed on it then the water surface tension does not let the water pass through the fine, small pores of the tent cover. However, if you touch the cover while water is on it, you break the surface tension and water passes through.
I would say that the above fact is a clever use of the effect of surface tension. Are there any other known applications, or interesting experiments regarding the surface or interfacial tension?
| Take a look at this paper
Tears of Venom: Hydrodynamics of Reptilian Envenomation
Reptiles use surface tension to eject venom from their fangs. See also this and this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A certain regularization and renormalization scheme In a certain lecture of Witten's about some QFT in $1+1$ dimensions, I came across these two statements of regularization and renormalization, which I could not prove,
(1) $\int ^\Lambda \frac{d^2 k}{(2\pi)^2}\frac{1}{k^2 + q_i ^2 \vert \sigma \vert ^2} = - \frac{1}{2\pi} \ln \vert q_i \vert - \frac{1}{2\pi}\ln \frac{\vert \sigma\vert}{\mu}$
(..there was an overall $\sum _i q_i$ in the above but I don't think that is germane to the point..)
(2) $\int ^\Lambda \frac{d^2 k}{(2\pi)^2}\frac{1}{k^2 + \vert \sigma \vert ^2} = \frac{1}{2\pi} \left(\ln \frac{\Lambda}{\mu} - \ln \frac{\vert \sigma \vert }{\mu} \right)$
I tried doing dimensional regularization and Pauli-Villar's (motivated by seeing that $\mu$ which looks like an IR cut-off) but nothing helped me reproduce the above equations.
I would glad if someone can help prove these above two equations.
| Perhaps, since your integral is logarithmically divergent, you could do the following:
$$ \int_{0}^{\infty}\frac{kdk}{k^{2}+a^{2}}\to \int_{0}^{\infty}\frac{kdk}{k^{2}+a^{2}}- \int_{0}^{\infty}\frac{dx}{x+b}+\int_{0}^{\infty}\frac{dx}{x+b} $$
then the integral $$ A=\int_{0}^{\infty}\frac{kdk}{k^{2}+a^{2}}- \int_{0}^{\infty}\frac{dk}{k+b}$$ is convergent, so we only need to regularize
$ \int_{0}^{\infty}\frac{dx}{x+b} $, which can be done using Ramanujan resummation to get $ \sum_{n=0}^{\infty} \frac{1}{n+b}= -\Psi (b) $; now use the Euler-Maclaurin summation formula.
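As a cross-check of equation (2) itself, independent of the resummation route: the angular integral leaves $(1/2\pi)\int_0^\Lambda k\,dk/(k^2+|\sigma|^2)$, which is elementary. A quick numeric confirmation (with an arbitrary value standing in for $|\sigma|$):

```python
import math

# Equation (2): int^Lambda d^2k/(2pi)^2 1/(k^2 + s^2), s = |sigma|.
# The antiderivative of k/(k^2+s^2) is (1/2) ln(k^2+s^2), so the exact
# cutoff integral is I(Lambda) = (1/4pi) ln((Lambda^2+s^2)/s^2), which for
# Lambda >> s tends to (1/2pi)(ln Lambda - ln s); that equals
# (1/2pi)(ln(Lambda/mu) - ln(s/mu)) for any reference scale mu.
s = 1.3
for Lam in (1e3, 1e6, 1e9):
    exact = math.log((Lam**2 + s**2) / s**2) / (4*math.pi)
    rhs = (math.log(Lam) - math.log(s)) / (2*math.pi)
    print(Lam, exact - rhs)  # difference -> 0 as Lambda grows
```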
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Dielectric in a parallel plate capacitor Uniform charge: each atom has charge $q$.
Magnitude of dipole moment is $q s$, where $s$ is the distance the nucleus is shifted.
According to my notes, the charge on the surface of a dielectric in between the plates is $N q s S$, where $N$ is the number of dipoles and $S$ is the surface area of the plate.
But surely this should be $N q s$, because the charge on the surface should be the same irrespective of the surface area (because we are using the number of charges on the surface $N$).
| There are two misconceptions present in your explanation of the problem.
*
*$N$ is not the number of dipoles, but their volumetric density
*$Q$ is not the total charge, but the equivalent charge at the boundaries of the dielectric.
The idea is that (a) a dielectric of area $A$ and height $L$, polarized homogeneously along its height, and (b) two plane-parallel plates of area $A$, separated by $L$, with charges $N A p$ and $-N A p$, produce macroscopically the same electric field ($N$ is the volumetric density and $p = q s$ is the polarization of one dipole).
This effect can be relatively simply understood if you imagine that you have charges of volumetric density $N$ homogeneously distributed all along material. Initially positive and negative charges are on the same positions, all material is electrically neutral and polarization equals zero (picture left). Now you pull all positive charges for $s/2$ up and all negative charges for $s/2$ down (picture right), so you actually get total dipole moment $P' = N V p = N A L q s$.
Figure: red = positive charge, blue = negative charge, violet = neutral.
What is the effect of such movements? The bulk of dielectric material remains neutral in terms of charge, but you do get excess charge $Q = N A s q$ at the top and excess charge $-Q = -N A s q$ at the bottom of the dielectric ($A s$ is the volume at the top or bottom where only one type of charge is present).
The point of this simple derivation is that surface charge density $\sigma = \frac{Q}{A} = N s q$ equals polarization volumetric density $P = \frac{P'}{A L} = N q s$, i.e. $\sigma = P$. (Polarization density is by definition total dipole moment of the dielectric divided by its volume.)
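A trivial numeric restatement of that last identity, with illustrative made-up numbers (none of these values come from the question):

```python
# Illustrative, assumed numbers: check that the surface charge density
# sigma = N*q*s equals the polarization density P = (total dipole)/(volume).
N = 3e28          # dipole volumetric density, m^-3
q = 1.602e-19     # charge per atom, C
s = 1e-15         # displacement of the nucleus, m
A, L = 1e-4, 1e-3 # plate area (m^2) and dielectric thickness (m)

sigma = N * q * s                    # excess charge per unit area at the surface
P = (N * (A * L) * q * s) / (A * L)  # total dipole moment over volume

print(sigma)                           # ~4.8e-06 C/m^2
print(abs(sigma - P) < 1e-12 * sigma)  # True: the two expressions coincide
```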
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
This sentence makes no sense, electrostatics and electrons moving in a conductor - current
I highlighted the part where the confusion is.
The sentence said that the potential difference is 0, yet it then immediately talks about how electrons can have motion.
What are they trying to say?
| dmckee's answer has nailed it, but to try and put it more simply (and less accurately) suppose instead of electrons in a wire you were looking at gas molecules in a tube. If there's no pressure drop along the tube, i.e. no voltage drop along the wire, there will be no net flow of gas. But this doesn't stop the gas molecules whizzing about at random in all directions. It's just that because the motion is random it all balances out and there's no net flow of gas.
If you now turn up the voltage, i.e. create a pressure gradient along the tube, gas will now start flowing just as electric current flows in a wire. The gas molecules are still whizzing around like crazy, but now there is a net flow along the tube just as you get a net flow of electrons in the wire.
Actually my analogy isn't so weird. Electrons in a metal can indeed be approximated as a free electron gas. The equivalent of the gas molecule velocities is the Fermi velocity of the electrons.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
X-Ray crystallography using Bragg's Law I was looking up X-Ray crystallography using Bragg's Law:
$2d\sin\theta = n\lambda$
and I can understand the values of everything except this integer value $n$.
As far as my research got, $n$ is used to describe the atom spacing in the crystal lattice, but I don't understand how you'd express $n$ or how it would describe it.
Could someone please explain this to me?
Note: diagrams tend to be very useful in developing my understanding and if anyone has any reference to a video that might help as well. Thanks.
| Constructive interference occurs when the waves reflected from two different "layers" differ in path length by an (any!) integer number of wavelengths: $n$ is that integer.
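For a concrete feel (the wavelength and spacing below are assumed example values, not from the question): with Cu K-alpha radiation, $\lambda = 1.54$ Å, on planes $d = 2.0$ Å apart, each integer order $n$ gives its own reflection angle until $n\lambda/2d$ exceeds 1:

```python
import math

lam = 1.54  # X-ray wavelength in angstroms (Cu K-alpha; assumed example)
d = 2.0     # lattice-plane spacing in angstroms (assumed example)

# Bragg's law: 2 d sin(theta) = n lam. Each integer order n with
# n*lam/(2d) <= 1 produces a distinct diffraction angle.
n = 1
while n * lam / (2 * d) <= 1:
    theta = math.degrees(math.asin(n * lam / (2 * d)))
    print(f"n = {n}: theta = {theta:.1f} deg")
    n += 1
```

So for these numbers only the first- and second-order reflections exist; the third would need $\sin\theta > 1$.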
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is the roaring in a roaring fire? I was just starting a barbecue fire by blowing on the smouldering coals when I realised I had no idea what the sound was actually caused by. I can make the sound by blowing at almost any flame I can think of, and I guess it is perhaps related to the increased oxygen consumption and a turbulent flow. Why does a disturbed flame make a sound?
| The roar is indeed due to turbulence.
When a solid (or liquid) burns it isn't the solid that burns. The heat causes the solid to vaporise or emit vapour and it's the vapour that burns. When you have a steady flame the vapour burns smoothly. However, when you blow on it you make the vapour flow, and therefore the flame, turbulent. Under these circumstances the vapour burns as, in effect, a series of tiny explosions and this causes the roar.
I couldn't find a basic article on this subject (for once Wikipedia let me down), but if you Google for "flame turbulence sound" you'll find lots of scientific papers on the subject.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How do we recognize hardware used in accelerator physics When I see a new accelerator in real life or in a picture, I always find it interesting to see how many things I can recognize. In that way, I can also get a small first idea of how the accelerator is working.
Here is a picture, I have taken of LEIR at CERN
Help me to be able to recognize even more stuff than I can now (I will post a few answers myself).
Suggested answer form:
*
*Title
*Images
*One line description
*Link
| Ion pump
Ion pumps are used to pump away rest gas in beam tubes at very low pressure.
http://en.wikipedia.org/wiki/Ion_pump_(physics)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
On constancy of cometary orbits How are comets able to keep to a nearly fixed orbital period, though they lose a certain amount of mass during each perihelion passage?
| What exactly do you mean by a "nearly fixed orbital period"? For most comets the deviations from an orbit calculated based solely on gravitational parameters are on the order of fractions of a day per apparition, but for Comet 1P/Halley it is about four days. In any case, these deviations are well observed.
The paper Cometary Orbit Determination and Nongravitational Forces (D. K. Yeomans, P. W. Chodas, G. Sitarski, S. Szutowicz, and M. Królikowska) provides a nice overview (including some history). From the above linked:
[Friedrich] Bessel (1836) noted that a comet expelling material in a radial sunward direction would suffer a recoil force, and if the expulsion of material did not take place symmetrically with respect to perihelion, there would be a shortening or lengthening of the comet's period depending on whether the comet expelled more material before or after perihelion ... Although Bessel did not identify the physical mechanism with water vaporization from the nucleus, his basic concept of cometary nongravitational forces would ultimately prove to be correct.
Also from the above paper:
The breakthrough work that allowed a proper modeling of the nongravitational effects on comets came with Whipple's introduction of his icy conglomerate model for the cometary nucleus (Whipple, 1950, 1951). Part of his motivation for this model was to explain the so-called nongravitational accelerations that were evident in the motion of Comet Encke and many other active periodic comets. That is, even after all the gravitational perturbations of the planets were taken into account, the observations of many active comets could not be well represented without the introduction of additional so-called nongravitational effects into the dynamical model. These effects are brought about by cometary activity when the sublimating ices transfer momentum to the nucleus.
The paper goes on to detail the development of models and also summarizes a variety of observational studies over the last half century, and makes for an interesting read on the subject.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Time Reversal Invariance in Quantum Mechanics I thought of a thought experiment that had me questioning how time reversal works in quantum mechanics and its implications. The idea is this ... you are going forward in time when you decide to measure a particle. The particle then collapses to the observed state. Now if physics were the same in reversed time, then if we stop and reverse time and measure that very same particle again ... I would imagine that since the wave function has collapsed we ought to measure the same thing. What this says to me is that given some time evolution in the + direction, if we measure a particle and it collapses the wave function, then if we reverse the arrow of time to go in the - direction we ought to get the same answer as before. The future/present affects the past. This means if we theoretically had a time machine and went back in time, we would have traveled into a different past.
Another implication of this thought experiment is that the future would be indistinguishable from the past and would henceforth be the same. I would imagine that this is consistent with the 2nd law of thermodynamics, since physics dictates that entropy only increases ... going in the reverse direction of time to decrease entropy would violate the laws of physics. Has anyone else out there thought about this?
From my studies in quantum mechanics, I don't remember any postulates stating anything like this, but this all makes sense to me. Are there any theories out there that go along these lines?
| No!
Time invariance holds in quantum mechanics ONLY when the wave function does not collapse. This means that once you make any measurement, time invariance is destroyed. There is no time invariance in the presence of an observer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
} |
Why is the sky not purple? I realise the question of why this sky is blue is considered reasonably often here, one way or another. You can take that knowledge as given. What I'm wondering is, given that the spectrum of Rayleigh scattering goes like $\omega^4$, why is the sky not purple, rather than blue?
I think this is a reasonable question because we do see purple (or, strictly, violet or indigo) in rainbows, so why not across the whole sky if that's the strongest part of the spectrum?
There are two possible lines of argument I've seen elsewhere and I'm not sure which (if not both) is correct. Firstly, the Sun's thermal emission peaks in the visible range, so we do actually receive less purple than blue. Secondly, the receptors in our eyes are balanced so that we are most sensitive to (roughly) the middle of the visible spectrum. Our eyes are simply less sensitive to purple light than to blue.
| All light is Rayleigh scattered, it's just that short wavelength light is scattered more. The bluest light we can see has a wavelength of about 400nm while the reddest has a wavelength of about 700nm, so there is roughly a factor of ten increase in the scattering going from the red to the blue end of the spectrum.
So the light we see from the sky isn't a pure wavelength, it's a mix of all the colours but weighted towards the blue end of the spectrum. For the sky to be purple we'd need the scattering of purple light to be much stronger than red light, not just a factor of ten.
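The factor-of-ten claim is easy to check; with $I \propto \lambda^{-4}$, normalized to red light at 700 nm:

```python
# Relative Rayleigh scattering strength, I ~ 1/lambda^4, normalized to 700 nm.
for lam_nm in (400, 450, 500, 550, 600, 650, 700):
    print(lam_nm, "nm:", round((700 / lam_nm) ** 4, 1))
```

The 400 nm end scatters about 9.4 times as strongly as the 700 nm end, but nowhere in the visible range does the ratio get large enough to make the scattered mix look purple.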
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "77",
"answer_count": 3,
"answer_id": 0
} |
The Planck constant $\hbar$, the angular momentum, and the action Is there anything interesting to say about the fact that the Planck constant $\hbar$, the angular momentum, and the action have the same units or is it a pure coincidence?
| Although the answers so far to this questions are very interesting and informative, I think from an analytical point of view, your question is not quite sensible.
In a mathematical structure, one could argue that there are no "coincidences"; everything is related through the fundamental basis. Now in practice, the answers explain why "$\hbar$", "angular momentum" and "the action $S$" are related. But if "mass $m$", "position $x$" and "momentum $p$" had the same units, then there would also be an explanation for that, because these are parts of a physical theory, put into mathematical terms.
So if you ask "Is there anything interesting to say about the fact that ℏ, the angular momentum and the action have the same units or is it a pure coincidence?" (and you do), then the answer is "Yes.", optionally followed by an elaboration of the mathematical structure of the theory, a search for a common denominator.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/28957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 2
} |
How can a Photon have a "frequency"? I picture light ray as a composition of photons with an energy equal to the frequency of the light ray according to $E=hf$. Is this the good way to picture this? Although I can solve elementary problems with the formulas, I've never really been comfortable with the idea of an object having or being related to a "frequency". Do I need to learn quantum field theory to really understand this?
| No. Sometimes photons exhibit properties of a particle, and other times they exhibit properties of a wave, therefore having a "frequency" while at the same time being particles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 2
} |
Is the classical Doppler effect, for light shift, $1-v/c$, exact? What is it an approximation of? Is the classical Doppler effect for light shift equal to $1-v/c$ exact or an approximation of a classical formula? I know that it is an approximation of the relativistic formula, but what was the corresponding classical formula? I ask this because in Einstein's On the Electrodynamics of moving bodies he derives $\sqrt\frac{1-v/c}{1+v/c}$ and notes that it is different from the classical case. I'm not exactly sure what formula he is comparing it to.
| He is comparing $\sqrt{\frac{1-v}{1+v}}$ to the classical Doppler shift $(1-v)$ (where $v$ is the velocity divided by $c$, since I use units where $c=1$). The formula you give, $\frac{1-v}{1+v}$, doesn't have a classical interpretation, and Einstein's formula reduces to Doppler's at slow speeds.
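A quick numerical check of the "reduces to Doppler's at slow speeds" claim (my own sketch, with $v$ in units of $c$):

```python
import math

def relativistic_doppler(v):
    # Einstein's factor sqrt((1 - v) / (1 + v)), v in units of c
    return math.sqrt((1 - v) / (1 + v))

v = 0.01
# The difference from the classical (1 - v) is of order v^2, i.e. the
# two formulas agree to first order in v.
difference = relativistic_doppler(v) - (1 - v)
assert abs(difference) < v ** 2
```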
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Should any theory of physics respect the principle of conservation of angular momentum or linear momentum? Is it possible that a theory that can describe the universe at the Planck scale violates things that we now consider fundamental in nature? For example, can it violate rotational and translational invariance, so that momentum would subsequently not be conserved? Should one consider these invariance principles so fundamental that we must choose the Lagrangian to respect them?
| Conservation laws are valid in theory because they rest on solid and innumerable data. An experiment finding non-conservation of a law supported by theory would immediately invalidate the theory.
Our experimental experience is that the two laws you mention, conservation of angular momentum and momentum are such universal laws within the data we can access.
Theories are a different matter. Theories can be extended to variable and parameter regions where experiments cannot go at present, or possibly ever. This does not necessarily mean that the extension of these theories will hold willy-nilly in the experimentally unexplorable regions (i.e. that the conservation laws should also hold there experimentally). It is only necessary that, in the limit where we know from the data that the conservation laws hold, the theories for the extended region reproduce the behavior of the standard theories, i.e. conserve these laws.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Can every particle be regarded as being a combination of black holes and white holes? Can the statement be regarded as true that every particle, or element, in the universe can be regarded as a combination of a black hole and a white hole in variable proportion?
| You first have to understand what a "white hole" is. It's the time reverse of a black hole. It was rightly pointed out in previous answers that white holes violate the second law of thermodynamics. Now, like anything in thermodynamics, this makes them unlikely but not impossible (unlikely here usually means unlikely even in an astronomical number of universes ...). But if you keep that aside, there is a more beautiful picture of what a white hole is: it is a quantum superposition of all possible black holes of about the same size. There are so many ways to make a black hole, and they come in so many microstates, that the quantum superposition corresponding to a white hole is astronomically unlikely, ... but not impossible.
Such considerations could be important if you ask what the relation is between elementary particles and black holes. This has been answered already: if you try to make such a comparison you would have to realise first that all known particles are so light that those black holes would be tiny, in fact billions of times smaller than the smallest distance conceivable in physics: the Planck length. Because of this, comparing known particles with black holes, or imagining them as being built from black holes, black, white or otherwise, is not considered to be a fruitful exercise in theoretical physics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Gamma Ray Bursts What is the maximum frequency of the Gamma Rays produced during supernovae? And how are these detected by telescopes without getting some serious damage done?
| A quick Google for "gamma ray burst spectrum" found lots of hits including http://arxiv.org/abs/1201.2981, which contains a collection of spectra from gamma ray bursts in the appendix. The maximum energies detected are around 10MeV, which seems a lot but remember that the LHC accelerates particles to around a million times more energy than this.
Remember that although GRBs are fantastically energetic, they're a long long way away, so by the time the radiation reaches the earth it's so weak that it can't do any damage. Given the debate in the comments about what constitutes a gamma ray telescope you might like to have a look at http://www.nasa.gov/mission_pages/swift/main/index.html. This describes the NASA Swift satellite, which is used to detect GRBs.
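To connect this back to the "maximum frequency" actually asked about, here is a conversion of the ~10 MeV figure via $f = E/h$ (constants are CODATA values; the 10 MeV input is the figure quoted above):

```python
h = 6.62607015e-34    # Planck constant, J s
eV = 1.602176634e-19  # one electron-volt in joules
E_max = 10e6 * eV     # ~10 MeV in joules
f_max = E_max / h     # frequency in Hz, roughly 2.4e21 Hz
```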
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Grassmann Variables Representation? It might be a silly question, but I was never mathematically introduced to the topic. Is there a representation for Grassmann Variables using real field. For example, gamma matrices have a representation, is it not possible for Grassmann Variables? The reason for a representation is, then probably it will be easier to derive some of the properties.
| I think that this Wikipedia article tells it all.
The only problem is that for $n$ (I mean $\theta_1,\theta_2,...\theta_n$) Grassmann numbers you will need to use $2^n\times 2^n$ matrices.
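To make the $2^n \times 2^n$ claim concrete, here is a small construction of my own (a Jordan-Wigner-style sketch, not taken from the linked article) for $n = 2$: put a nilpotent $2\times 2$ block in slot $k$ with $\mathrm{diag}(1,-1)$ factors before it, and the resulting generators square to zero and anticommute:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent: A @ A = 0
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def theta(k, n):
    # theta_k = Z x ... x Z (k times) x A x I x ... x I  (Kronecker products)
    factors = [Z] * k + [A] + [I2] * (n - k - 1)
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

t0, t1 = theta(0, 2), theta(1, 2)
assert np.allclose(t0 @ t0, 0) and np.allclose(t1 @ t1, 0)  # theta_k^2 = 0
assert np.allclose(t0 @ t1 + t1 @ t0, 0)                    # anticommutation
```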
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Is it possible to have incommensurable but equally valid theories of nature which fits all experimental data? Is it possible to have mutually incommensurable but equally valid theories of nature which fits all experimental data? The philosopher of science Paul Feyerabend defended this seemingly outrageous thesis and made a very strong case for it. In such a case, is it impossible to decide which of the incommensurable competing theories is the "right" one? In that case, does "anything goes"?
| From your link:
if theories are incommensurable, there is no way in which one can compare them to each other in order to determine which is more accurate .
So the decision is based on accuracy, and for physical theories experimental accuracy. As time goes on, accuracy on measurements increases as well as methods of measuring are improved; there will always be propositions for measurements which will distinguish these theories once more accuracy is obtained.
Two physical theories that are disparate in the beginning have often been found, as time progresses and after long thinking by theorists, to be a subset or a transform of each other; in that case the problem is eliminated.
It is only a philosophical question, imo, not a practical one.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Lenses (refractor) or mirrors (reflector) telescope? What differentiates, in terms of practical quality, not technical implementation, a refractor from a reflector telescope?
Why would one prefer a refractor over a reflector, when reflectors come with such large diameters at a smaller price?
| Refractors suffer from fewer optical aberrations than reflectors because they have only two elements in their optical assembly, making it easier to align and maintain collimation.
The biggest problem with refractors is chromatic aberration, which can be corrected to a certain degree by Apochromatic lenses, but isn't completely eliminated.
For deep sky objects which are visually faint, chromatic aberration is not a significant problem, and hence refractors may be preferred.
For solar system objects and brighter DSOs, the chromatic aberration might become significant.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 2
} |
Electrostatic Pressure Concept There was a Question bothering me.
I tried solving it but couldn't, so I finally went up to my teacher and asked him for help. He told me that there was a formula for electrostatic pressure $\rightarrow$
$$\mbox{Pressure}= \frac{\sigma^2}{2\epsilon_0}$$
And we just had to multiply it by the projected area = $\pi r^2$
When I asked him about the pressure thing he never replied.
So what is it actually? Can someone derive it / explain it please?
| When charge is given to a conductor, then due to the mutual repulsion between charges on different parts of the conductor, a net force acts at each point on the surface of the charged conductor, directed normally outward. This mechanical force developed per unit area on the surface of a charged conductor is called electrostatic pressure (or electrostatic stress).
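Putting numbers into the two formulas from the question (the surface charge density and radius below are hypothetical values of my own, just to show the arithmetic):

```python
import math

epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m
sigma = 1e-6                  # hypothetical surface charge density, C/m^2
r = 0.1                       # hypothetical radius, m

pressure = sigma ** 2 / (2 * epsilon_0)  # electrostatic pressure, N/m^2
force = pressure * math.pi * r ** 2      # times the projected area pi r^2, N
```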
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 5
} |
Does friction decrease as objects move faster against each other? I was told that the faster two objects move against each other, the less the friction between them would be… compared to if they were moving slower. So does friction decrease when the body is moving faster or does it remain the same. Does the speed affect friction in any way?
| Friction is not a fundamental force itself, rather it is a macroscopic collective effect of the interactions between atoms and molecules of the two surfaces, dominantly electromagnetic interactions. Yet in reality, it can depend on a large number of other factors such as the relative speeds of the two surfaces, the way the atoms or molecules are arranged in the two solids, and so on. The coefficients of static and kinetic friction pretty much summarize these unimaginably complex interactions for most common materials, and we use those coefficients to simplify our calculations without taking into account all the complex interactions. And there surely is a bound within which these coefficients can really yield satisfactory results. Outside the applicable conditions, they are mere nonsense. So those coefficients cannot be taken too seriously; they do not correspond to a fundamental law of nature, rather they summarize the results of a large number of experiments for the purpose of making our calculations easier.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Entropy increase and end of the universe While taking thermodynamics our chemistry teacher told us that entropy is increasing day by day (as per the second law of thermodynamics), and that when it reaches its maximum the end of the world will occur. I didn't see such an argument in any of the science books; is there any probability for this to be true?
| What you're describing is the theory of the heat death of the universe, which has been speculated about since the 1850s. However, as explained here, objects at astronomical scales are often self-gravitating, and that gives them unintuitive thermodynamic properties like a negative heat capacity. This usually gives more structured systems as entropy increases, and negates the idea of heat death.
Furthermore, given the fact that the universe is currently thought to be forever expanding and that the majority of the entropy is/will be in black holes, the estimated time-scale for such thermal equilibrium to occur is huge (of the order of $10^{100}$ years), which gives us vastly enough time to change our cosmological theories about the end of the universe...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Direct observations of a black hole? I'm not very knowledgeable about physics generally, but know that nothing can escape a black hole's gravitational pull, not even light (making them nearly invisible?).
My question is: What has been obtained from direct observation of a black hole to prove their existence? Please note that I'm not questioning the existence of black holes, but am just curious as to how we have come to realize it.
| This animation from UCLA's Galactic Center Group shows stars near the galactic core in images taken from 1995 to 2011. You can clearly see they are orbiting a small and massive object.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 3,
"answer_id": 1
} |
Lorentz invariance and the vacuum expectation value of fields with spin > 0 I had a question about Moduli space, which I was reading about here, but then I read this sentence:
"Lorentz invariance forces the vacuum expectation values of any higher
spin fields to vanish."
Can someone explain how exactly this happens? Or at least suggest an exercise to carry out?
| There is a Lorentz transformation that maps a spacelike vector $u$ to $-u$.
If $A(x)$ is a field of spin 1 with $\langle u \cdot A(0)\rangle = c$ then applying the Lorentz transform we find $-c=c$ and hence $c=0$. Doing this for all spacelike vectors implies $\langle A(0)\rangle = 0$, and translation invariance then gives $\langle A(x)\rangle =0$ for all $x$.
For other spins the argument is similar. You are welcome to try the spinor case as an exercise.
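As a concrete instance of the first sentence (my own illustration): a rotation by $\pi$ about the $z$-axis is a proper Lorentz transformation taking the spacelike vector $u = (0,1,0,0)$ to $-u$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)
c, s = np.cos(np.pi), np.sin(np.pi)
L = np.array([[1, 0, 0, 0],
              [0, c, -s, 0],
              [0, s, c, 0],
              [0, 0, 0, 1.0]])          # rotation by pi in the x-y plane

assert np.allclose(L.T @ eta @ L, eta)  # preserves the metric: a Lorentz map
u = np.array([0.0, 1.0, 0.0, 0.0])      # spacelike vector
assert np.allclose(L @ u, -u)           # mapped to minus itself
```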
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why/how does an electron emit a photon when decelerating? I've had two special relativity courses so far but none really gave me a clear description of the process.
| Ahhh... let's see!
Any electron that changes speed or direction emits blue light. When an electron moves to different valence levels it emits blue and white light.
For an electron to change speed or direction it must have interaction with other particles or other EMFs. If the electron is spinning around an atom it will behave according to black body radiation theory. If it is a free electron
then it can retain broader bands of energy or absorb energy differently than if in a valence level restricted model, and if free, will never uniformly absorb and emit because angle of particle interaction and external field strengths encountered and the starting speed of every electron will vary. Valence electrons require specific photons to jump valence levels. So a free electron can be hotter, faster when absorbing, cooler and slower when re emitting. So the electron is not creating the photons itself. It is absorbing, deflecting, transferring energies that have already been given to it from external source.
At rest, and cold an electron doesn't emit photons because it isn't absorbing energy and changing its energy state. For an electron to move something has to hit it.
The source of light when changing direction is related to the emf electrons carry. This means then any photon coming from an electron is not the electron's photon, it was given addition energies through heating, absorption, and acceleration of speeds.
For anything to change speed or direction it must give up its momentum, and since there is very little mass in electrons, most of the energy is created by temperature and velocity; a change in any of these will cause emission of light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Would the rate of ascent of an indestructible balloon increase as a function of its altitude? Assume a balloon filled with hydrogen, fitted with a perfect valve, and capable of enduring vacuum (that is to say, it would retain its shape, and be so well insulated that the extremes of temperature at high altitudes and in space would have little effect) were to be launched.
As long as the balloon were in atmosphere it would ascend upwards (and also affected by various winds/currents, and gravity). As the balloon passed through increasingly rare atmosphere, would it rise faster?
| The change in height with respect to time is given by the buoyant force divided by the viscosity, for slow rising objects, and neglecting the initial acceleration. From this differential equation we can see that, regardless of changes in viscosity, the stable attractor of this system is the height where the buoyancy is zero. That is the height where the density of the rising object equals the ambient density of the gas through which it is rising.
Why viscosity and not drag? Drag increases quadratically with speed, while the viscous force from displacing the surrounding gas increases linearly with speed. Thus at sufficiently small speeds viscous constraints will be larger than drag constraints. One can see this by comparing walking to driving a car at highway speeds. When walking the constraints are viscous, but when driving a car at highway speeds the constraints are drag.
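The "stable attractor" point can be put in numbers. A minimal sketch assuming an exponential atmosphere $\rho(h) = \rho_0 e^{-h/H}$; the scale height and the balloon's mean density below are illustrative values of my own:

```python
import math

rho0 = 1.225         # sea-level air density, kg/m^3
H = 8500.0           # atmospheric scale height, m
rho_balloon = 0.30   # hypothetical mean density of the balloon, kg/m^3

# The balloon stops rising where the ambient density equals its own density:
h_eq = H * math.log(rho0 / rho_balloon)  # equilibrium altitude, ~12 km here
```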
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/29985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Why do magnetic field lines go from North to South? Why do magnetic field lines come from north to south outside of the magnet?
Do any magnetic field lines come from south to north? If so, in which direction?
What is the reason for magnetic field lines?
| A picture is worth a thousand words.
Iron filings display the "lines", like small dipoles as @PhysGrad has mentioned. The compasses are larger dipoles and the permanent magnet itself is the largest. One can imagine tiny dipoles following "lines", so in a sense they exist to the accuracy of the experiment.
The image displays the need for a convention; one could call it red and blue poles.
Actually the convention is a bit bizarre: it was adopted by watching a compass pointing to the Earth's north, which makes the magnet at the Earth's north pole a south magnetic pole!
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
How does a strobe lamp stop a fast moving object? A strobe lamp can be used to seemingly stop a fast moving object when calibrated.
Commonly used in quality assurance during production to inspect otherwise non-observable assembly line activity.
What causes this effect in observations?
| Well, it doesn't really stop the motion. If you time the strobes to coincide with the revolution period of a wheel, the wheel will make one exact revolution between strobes and will appear to be stationary. This is called sampling in signal processing (you might want to read about the Nyquist sampling rate).
On a different note, one has to be careful not to undersample, otherwise motion artifacts will occur. For example, the fact that stagecoach wheels turn backwards in western movies is because of the fact that the sampling rate of the movie (24fps) undersamples the motion. This effect can be very important, e.g. if you are a doctor evaluating the video of a beating heart for diagnosis, very high frame rates are used in this case.
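The backwards-wheel effect reduces to a little wrap-around arithmetic (the frame rate and wheel speed below are my own example numbers):

```python
fps = 24.0              # film frame rate, frames/s
wheel_rev_per_s = 23.0  # wheel rotation rate, rev/s

per_frame = wheel_rev_per_s / fps          # revolutions between frames
# Wrap into (-1/2, 1/2] of a revolution: the eye picks the smallest motion.
apparent = (per_frame + 0.5) % 1.0 - 0.5
assert apparent < 0  # negative: the wheel appears to turn backwards
```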
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Do infrared rays pass through polarized glass? Actually I had asked in another post "Do infrared rays pass through active shutter glasses?" but someone just commented that infrared rays don't pass through polarized glass. If infrared rays don't pass through polarized glass, can someone explain the reason or give a reference link to look through?
| Infra-red radiation will pass through a polarised medium just like visible light does, i.e. the component at right angles to the plane of polarisation will be blocked. So there's nothing special about the fact the glasses are polarised.
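For the polarisation part, what applies (at any wavelength the polariser actually transmits, infra-red included) is Malus's law, $I = I_0 \cos^2\theta$; a minimal sketch:

```python
import math

def malus(I0, theta):
    # transmitted intensity through an ideal polariser at angle theta
    return I0 * math.cos(theta) ** 2

assert math.isclose(malus(1.0, 0.0), 1.0)                         # aligned: all passes
assert math.isclose(malus(1.0, math.pi / 2), 0.0, abs_tol=1e-12)  # crossed: blocked
```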
However most materials are only transparent over a restricted range of wavelengths. Optical glass only transmits light with a wavelength from about 300nm to around 2500nm. There's a good article on the optical properties of glass here.
However I think passive 3D glasses are made from conventional polarisers, which are made from a polymer called polyvinyl alcohol. I struggled to find much about the spectra of PVA polarisers, though I found this article that lists some polarisers that work down to 3000nm. So it seems PVA polarisers will transmit at least the near infra-red. However the term infra-red covers a huge range of wavelengths from 750nm - 1mm. I can't think of anything, even air, that is transparent right across this range of wavelengths so everything absorbs infra-red light to some extent.
Generally speaking, ultra-violet light is absorbed because it has enough energy to excite the electrons in atoms and infra-red light is absorbed because it has the right energy to excite molecular vibrations. Visible light has too much energy to excite molecular vibrations, but too little to excite electrons in atoms, so that's why most things are transparent to visible light (and probably why organisms like humans evolved to use it). Most things that are opaque, like say paper or chalk, don't absorb the light but scatter it by multiple reflections at air/solid interfaces. The exceptions are things like transition metal complexes and organic dyes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Changing the Half-Life of Radioactive Substances Is there a way to extend or reduce the half-life of a radioactive object? Perhaps by subjecting it to more radiation or some other method.
| The simple answer is no, we can't change the half life. There's no technology available to us that can affect energy levels in the nucleus enough to make a change to the half life.
Having said that, I've always wondered if the Mossbauer effect could change the half life. Mossbauer spectroscopy measures tiny changes in the energy levels of nuclei due to their chemical environment. If you can change the spacing of energy levels in a radioactive nucleus you could in principle change the probability of transition between them and therefore change the half life. However I've never heard of this effect being observed, and I suspect the shifts of energy levels would be too small to make any significant difference. You can only observe the shifts because Mossbauer spectroscopy is exquisitely sensitive.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 2
} |
Collision of a black hole & a white hole A black hole and white hole experience a direct collision.
What happens? What shall be the result of such a collision?
| Another amateur answer: the energy of a White Hole is convex and the energy of a Black Hole is concave, so they cannot approach each other. Two black holes can approach one another; two white holes can approach one another. But white holes and black holes are kept apart magically, in much the same ways that matter and antimatter are kept apart.
The White Hole and the Black Hole are the same, just at different stages of their life -- the White Hole is the Expansive Stage; the Black Hole is the Recessive or Withdrawn Stage. White Holes become Black Holes become White Holes become Black Holes over and over again during their life cycle.
A Black Hole in one dimension is effectively a White Hole in an Opposite Dimension. A Black Hole in this dimension is effectively building an anti-Universe in an Opposite Dimension.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 8,
"answer_id": 4
} |
Distorted colors of Google StreetView photographs near electric power lines This is a followup to my question:
Cyclist's electrical tingling under power lines
Some users presented a convincing picture that the electric shocks under power lines are primarily from the electric fields, not the magnetic one, because the frequency is just too low, and the body must be considered as a capacitor to estimate the currents etc.
Google Maps just included pretty much all of the Czech Republic to the Street View. That's why I could look at the place where I had felt the electric shocks. First, the mast over there clearly corresponds to 400 kV according to a list of masts (model 8a-3), a pretty high voltage. But it's even more interesting to see what the Google car was seeing in front of itself at the same point where I experienced the shocks.
A pretty nice colorful distortion. It is strongly correlated with the masts so I guess it has something to do with the electric fields, too. Am I right? If I am right, what are the electric fields that may cause similarly strong effects in the digital cameras used by Google or others? There are lots of capacitors in those digital cameras, aren't there?
Why is the ordering of the anomalous colors yellow, cyan, violet (from the top towards the bottom where the treetops and roofs are located)? Is it linked to different capacities or voltages or other electric parameters of the three color-sensitive segments of the digital camera?
A blog URL related to the Street View anomalies in Czechia.
| Just discovered that the link can move around.
As following the power lines does not show this effect, it is not associated with them.
It seems to be associated with a tree line, even a tree.
either:
a) a temporary glitch in the camera program building up the picture, since it is not on all views.
b) Much less probable, somebody had been spraying the trees for something and it is a diffraction pattern from the settling spray.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 1
} |
Path traced out by a point While studying uniform circular motion at school, one of my friends asked a question:
"How do I prove that the path traced out by a particle such that an applied force of constant magnitude acts on it perpendicular to its velocity is a circle?" Our physics teacher said it was not exactly a very simple thing to prove.
I really wish to know how one can prove it. Thank you!
| I think you can prove it if you can show that the acceleration vector $\vec{a}$ is decomposed into two components
$$\vec{a} = \dot{v}\, \hat{e} + \frac{v^2}{r} \hat{n}$$
one tangential to the motion along the unit direction vector $\hat{e}$ and one perpendicular to it along the unit direction $\hat{n}$, with tangential speed $v$, change in speed $\dot{v}$, and path radius of curvature $r$.
So if the path were not a circle, there would be a component of $\vec{a}=\frac{\vec{F}}{m}$ along the tangential vector $\hat{e}$, changing the speed $v$ as it moves along. The whole thing hinges on the fact that speed does not change in uniform circular motion.
The above comes from the math of differential geometry along a curve, and it is related to accelerations here.
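The argument can also be checked numerically. The sketch below is my own (not from the answer): integrate a particle under a constant-magnitude force kept perpendicular to its velocity, and verify that the speed stays constant and the point stays on a circle of radius $v^2/a$:

```python
import math

dt, steps = 1e-4, 100000
x, y = 1.0, 0.0     # start on the unit circle
vx, vy = 0.0, 1.0   # unit speed, so with |a| = 1 the radius v^2/a is 1

for _ in range(steps):
    speed = math.hypot(vx, vy)
    ax, ay = -vy / speed, vx / speed  # unit-magnitude force, perpendicular to v
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

assert abs(math.hypot(x, y) - 1.0) < 1e-2    # still on the unit circle
assert abs(math.hypot(vx, vy) - 1.0) < 1e-2  # speed unchanged
```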
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Does $p=mc$ hold for photons? Given that $E=hf$ and $p=hf/c=h/\lambda$, then if $p=mc$, where $m$ is the (relativistic) mass, $E=mc^2$ follows directly as an algebraic fact. Is this the case?
| Here's another way to think about it (personally, I think this addresses the question most directly):
$E = hf$ and $p = \frac{hf}{c}$ both apply to photons. What those get you is simply that $E = pc$, so you can conclude that $E = pc$ should be valid for photons. And it is.
Now, your question is worded to ask whether you can start with $p = mc$ and plug in $E = pc$ to get $E = mc^2$. But I think what you really want to know is, can you start with $E = mc^2$ and use it with $E = pc$ to derive $p = mc$?
The answer is, of course, no. $E = mc^2$ doesn't apply to photons. In fact, there is no case in which $E = mc^2$ and $E = pc$ both apply to the same object. So you can never validly combine them. The former is for objects at rest, for which $p = 0$, and the latter is for massless objects, for which $m = 0$, and which always move at the speed of light. As others have shown, they're both special cases of $E^2 = p^2 c^2 + m^2 c^4$.
Incidentally, I can't think of a single physical system for which $p = mc$ is satisfied.
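The photon relations above are easy to check numerically; the wavelength here is an arbitrary example value:

```python
# Check that E = hf and p = h/lambda together give E = pc for a photon,
# and that E = pc is the m = 0 special case of E^2 = (pc)^2 + (mc^2)^2.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

lam = 500e-9         # wavelength of a green photon, m (example value)
f = c / lam
E = h * f            # E = hf
p = h / lam          # p = h/lambda

# E = pc holds for the photon:
print(E - p * c)     # ~0

# The general relation with m = 0 reduces to the same energy:
m = 0.0
E_general = ((p * c)**2 + (m * c**2)**2) ** 0.5
print(E_general - E) # ~0
```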
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Why isn't it allowed to use a flash when taking pictures in a certain place? When I go to, for example, a museum I try to take some pictures.
Sometimes the museum staffs forbid me to use a flash. Do you know the reason? I don't think it is related to photo-electric effect, right?
| It's actually astonishing to see how much damage a camera flash can do to black/dark colored objects!
This is a pretty good demonstration.
So I imagine that if a photographer takes a picture with a powerful flash gun close to a black object (e.g. a dark painting), it could cause the painting to undergo combustion and get irreversibly damaged. It makes sense that most museums and galleries are paranoid about such occurrences.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
} |
What's the difference between Fermi Energy and Fermi Level? I'm a bit confused about the difference between these two concepts. According to Wikipedia the Fermi energy and Fermi level are closely related concepts. From my understanding, the Fermi energy is the highest occupied energy level of a system in absolute zero? Is that correct? Then what's the difference between Fermi energy and Fermi level?
| The Fermi level is the state with a 50% chance of being occupied by an electron at the given temperature of the solid; at absolute zero the occupancy of the states below it is 100%.
The Fermi energy is the corresponding energy of the Fermi level.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 5,
"answer_id": 3
} |
Non-linear QM and wave function collapse I heard that there have been some propositions about describing the collapse of the wave-function by adding non-linear terms, but I couldn't find anything in any textbooks or even articles (probably those propositions never reached a good level of consistency). However, I'd like to read about it. Could someone send me a reference?
| Roger Penrose advanced the notion that gravity causes wave function collapse, giving handwavy arguments involving the Schrodinger-Newton equation (one particular flavor of the nonlinear Schrodinger equation).
The references I'm aware of:
*
*Roger Penrose, "On Gravity's Role in Quantum State Reduction", General Relativity and Gravitation 28 5 (1996) 581-600. DOI:10.1007/BF02105068
*Roger Penrose, "Quantum computation, entanglement and state reduction", Phil. Trans. R. Soc. Lond. A 356 no. 1743 (1998) 1927-1939. DOI:10.1098/rsta.1998.0256
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/30982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Do regular sky clouds stop the sun's ultraviolet rays? I am having an argument with an ex-Doctor of Medicine about the amount of ultraviolet light reaching us under the clear day sun and under a 100% cloudy sky.
To what extent can we say that sky clouds stop ultraviolet light from the sun?
| Clouds don't absorb light (much), they reflect and refract it, and this applies to uv light in the same way as visible light. So a 100% cloudy sky will block uv light in the same way it blocks visible light. A quick Google suggests that heavy cloud cover will remove 80-90% of uv light. Anyone disputing this should try sunbathing on a cloudy day :-)
I've heard occasional claims that broken cloud cover can actually enhance uv levels in the unshaded areas by reflecting uv light into the breaks. However I've never come across any studies that prove this happens. While it seems vaguely plausible I'd be surprised if the effect was very big.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Principal value integral I am reading A. Zee, QFT in a nutshell, and in appendix 1 he has:
Meanwhile the principal value integral is defined by:
$$\int dx\,{\cal P}{1\over x}f(x)~=~ \lim_{\epsilon \rightarrow 0} \int dx\, {x\over x^2+\epsilon^2}f(x)$$
Please can someone explain to me why this is the case? As I understood it the principal value integral is rather defined as
$$\int_a^b dx\,{\cal P}{1\over x}f(x)~=~ \lim_{\epsilon \rightarrow 0^+} \int_a^{-\epsilon} dx\, {1\over x}f(x)+\lim_{\epsilon \rightarrow 0^+} \int_{\epsilon}^b dx\, {1\over x}f(x),$$
where $a<0<b$. But as far as I can see these two definitions are not equivalent.
| Note that the right spelling is "principal value".
The formulae aren't identical but the results are the same whenever both definitions yield a well-defined expression. What matters is that we remove the leading logarithmic divergence on both sides from $x=0$ and we do so in a symmetric way with respect to $x\to -x$.
If you denote the second definition-based integral $Cut(\epsilon)$,
$$ Cut(\epsilon) = \left(\int_{a}^{-\epsilon}+\int_{\epsilon}^b\right) \frac{dx}x f(x)$$
then I claim that there exists a weighting function $g(y)$ such that
$$\int_0^{\infty} g(y) Cut(y) dy = \int dx\,\frac{x}{x^2+\epsilon^2}f(x) $$
so it reduces to the first definition-based integral. The function $g(y)$ is supported for $y$ of the same order as $\epsilon$ so the limit has the same effect on both expressions.
You may also see the equivalence of both expressions if you just Taylor-expand $f(x)$ near zero. Assuming that $f(x)$ is finite and well-behaved near $x=0$, it's easy to prove that both definitions yield the same result. The real purpose of the "principal value" terminology deals with branches of functions of complex variables. So you may also imagine that $f(x)$ is a meromorphic or holomorphic function of a complex $x$. The indefinite integral $\int f(x)/x$ has a logarithmic singularity around $x=0$ and one needs to define on which branch we are at. The principal value takes the average of the results one would get on the $+i\pi$ and $-i\pi$ branches for the logarithm of the negative numbers.
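For the skeptical reader, the agreement of the two prescriptions is also easy to check numerically; here is a sketch with $f(x)=e^x$ on $[-1,2]$ as an arbitrary smooth test function:

```python
import numpy as np

# Compare the symmetric-cut definition with the x/(x^2 + eps^2)
# regularisation for the PV of integral of f(x)/x on [a, b], f(x) = exp(x).
f = np.exp
a, b = -1.0, 2.0

def midpoint(g, lo, hi, n):
    """Composite midpoint rule (avoids evaluating at the endpoints)."""
    x = np.linspace(lo, hi, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    return np.sum(g(mid)) * (hi - lo) / n

def pv_cut(eps):
    # Remove a symmetric interval (-eps, eps) around the singularity.
    return (midpoint(lambda x: f(x) / x, a, -eps, 200_000)
            + midpoint(lambda x: f(x) / x, eps, b, 200_000))

def pv_lorentz(eps):
    # Replace 1/x by the regularised kernel x / (x^2 + eps^2).
    return midpoint(lambda x: x / (x**2 + eps**2) * f(x), a, b, 2_000_000)

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pv_cut(eps), pv_lorentz(eps))
# Both columns converge to the same limit, Ei(2) - Ei(-1) ~ 5.17,
# as eps -> 0, illustrating the equivalence of the two definitions.
```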
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
What is the ontological status of Faddeev Popov ghosts? We all know Faddeev-Popov ghosts are needed in manifestly Lorentz covariant nonabelian quantum gauge theories. We also all know they decouple from the rest of matter asymptotically, although they "superficially" interact over finite time periods.
*
*So, what is their ontological status?
*If they don't really exist, can we just formulate a local theory without them?
*OK, we can using spin foams?
*So are they not really needed?
*If they do "exist" out there, then say a state with an electron and no ghosts is "actually" different in "reality" from the state with same electron with the same momentum, but with a ghost added?
*According to the formalism, they are different states. But observationally, we can never tell the difference. Both states will always give the same observable results for any physical observation. So are they the same state or not?
*Do we just take the partial trace over the ghost sector?
*But that's not gauge invariant except asymptotically, is it?
| I'd say BRST ghosts have more elements of reality compared to longitudinal gauge bosons, which are pure gauge. Look at the inner product structure. Physical states belong to the BRST cohomology of BRST-closed modulo BRST-exact states. Pure-gauge longitudinal gauge bosons are BRST-exact and have zero inner product with any other BRST-closed state. BRST ghosts have nonzero inner products with themselves and other BRST-closed states. BRST ghost states are merely BRST-closed. BRST-closed states with different ghost numbers have zero inner product between themselves, even while having nonzero norms.
The BRST ghost structure is independent of the choice of gauge-fixing. Different choices of gauge fixings correspond to different extended Hamiltonians but these Hamiltonians can only differ by BRST-exact quantities. So, different gauges only differ by BRST-exact differences to be quotiented over.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
How long was a day at the creation of Earth? Since the Earth is slowing its rotation, and as far as I know, each day gets 1 second longer about every 1.5 years, how long was an Earth day near the formation of the Earth (4.5 billion years ago)?
I wouldn't assume to just do 4.5b/1.5 and subtract, because you would think the rate of change is changing itself, as seen here from Wikimedia. It is a graphical representation of data from the International Earth Rotation and Reference Systems Service. They decide when it's time for a leap second (the last one being on Jun 30, 2012). The data can be found here.
| Shouldn't each day be one second longer every 1.5 years, if the Earth is rotating slower? Assuming your info is accurate, (1 sec)/(1.5 years) * (4.5 billion years) = The Number of Seconds Shorter The Day Was 4.5 billion years ago. Subtract that from the number of seconds in a day now. I would convert the final answer to minutes.
But your numbers are wrong and lead to an absurd result. "The average day has grown longer by between 15 millionths and 25 millionths of a second every year" http://www.popsci.com/jessica-cheng/article/2008-09/ive-heard-earths-rotation-slowing-how-long-until-days-last-25-hours
So, do the same calculation with these numbers instead.
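A back-of-the-envelope version of that calculation (note the naive assumption of a constant rate, which the question itself flags as doubtful):

```python
# Linear extrapolation: day length 4.5 billion years ago, assuming the
# day has lengthened at a constant rate ever since (a crude assumption).
seconds_per_day_now = 24 * 3600      # 86400 s
age_years = 4.5e9

day_then = {}
for rate in (15e-6, 25e-6):          # seconds of lengthening per year
    day_then[rate] = seconds_per_day_now - rate * age_years

print(day_then[15e-6] / 3600)   # ~5.25 hours: a very short early day
print(day_then[25e-6] / 3600)   # negative! the rate cannot have been constant
```

The lower rate gives a plausible ~5-hour early day, while the upper rate gives a negative day length, which is another way of seeing that the slowdown rate cannot have been constant over the Earth's history.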
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Can a huge gravitational force cause visible distortions on an object? In space, would it be possible to have an object generating such a huge gravitational force that an observer (not affected directly by the gravitational force and the space-time distortion) could see some visual distortions (bending) of another small object placed near it?
(e.g.: a building on a very huge planet would have its lower base a different size than its roof).
We assume that the object would not collapse on itself because of the large gravitational force.
| Also, a neutron star itself is an example of this effect--its gravity is so strong that it causes matter to begin to collapse, and protons and electrons to combine via inverse beta decay. Anything less dense than a nucleus sitting on the surface of a neutron star will eventually find itself smashed flat on the star's surface.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Friction at zero temperature? By the fluctuation-dissipation theorem (detailed-balance for Langevin equation), $$\sigma^2 = 2 \gamma k_B T$$ where $\sigma$ is the variance of noise, $\gamma$ is a friction coefficient, $k_B$ is Boltzmann's constant, and $T$ is temperature. So in principle, one can have $\gamma\neq 0$ while $T=0$ and $\sigma=0$.
Is it indeed possible to experimentally achieve a system whose temperature and noise approach zero, but whose friction coefficient $\gamma$ does not approach zero?
*
*If yes, what would be an example of such a system? What is the physical source of friction for such a system?
*If not, why not? Is there some sort of "quantum" correction to the fluctuation-dissipation theorem that rules out such zero-noise, non-zero friction systems?
| Isn't the friction here the mechanical analogy to the resistance in a circuit? At $T=0$ the voltage noise is zero but you still have the finite property 'resistance'.
In more general terms, the dissipation is given by the imaginary part of a generalized susceptibility $\chi$ of your physical system. So as long as your system can dissipate energy you can have non-zero friction at any temperature.
An example is a particle in a liquid under Brownian motion (fluctuation and dissipation for Brownian motion).
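As an illustration of the theorem in the question, here is a minimal Langevin sketch (all parameters are arbitrary illustrative values): with noise variance $\sigma^2 = 2\gamma k_B T$ the stationary velocity variance reproduces equipartition, and setting $T \to 0$ removes the noise while leaving the friction term intact, which then simply damps any initial velocity.

```python
import numpy as np

# Euler-Maruyama integration of m*dv = -gamma*v*dt + sigma*dW with
# sigma^2 = 2*gamma*kB*T; the stationary <v^2> should equal kB*T/m.
rng = np.random.default_rng(0)
m, gamma, kT = 1.0, 5.0, 0.5
dt, nsteps = 1e-3, 500_000

sigma = np.sqrt(2 * gamma * kT)
v = 0.0
vs = np.empty(nsteps)
for i in range(nsteps):
    v += (-gamma * v * dt + sigma * np.sqrt(dt) * rng.standard_normal()) / m
    vs[i] = v

var = vs[nsteps // 2:].var()   # discard the transient
print(var, kT / m)             # both ~0.5 (equipartition)

# At T = 0, sigma = 0: the friction term survives and damps the motion.
v = 1.0
for _ in range(10_000):
    v += -gamma * v * dt / m
print(v)                       # decays exponentially toward zero
```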
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Cannon on spacecraft: hitting yourself Some Soviet space stations reportedly had anti-aircraft cannons installed. Could such a cannon hit the firing space station accidentally on a subsequent orbit? The muzzle velocity of the cannon is under 700 m/s, significantly slower than orbital velocity so the projectiles should have similar orbits to the station.
| The space station could shoot itself, but it's extremely unlikely to happen by accident.
Assuming your space station is in a circular orbit, you can calculate its position in polar coordinates as a function of time $(r(t), \theta(t))$ very easily, since $r$ is constant and $\theta = 2\pi t/\tau$, where $\tau$ is the orbital period. When you fire the cannon the shell is in a different orbit, and specifically it's in an elliptical orbit $(r'(t), \theta'(t))$. For the station to shoot itself the two orbits must intersect, i.e. at some time $t$ you have simultaneously:
$$r(t) = r'(t)$$
$$\theta(t) = \theta'(t)$$
The problem is that for a generic elliptical orbit the expressions for $r'(t)$ and $\theta'(t)$ are not at all simple so there is no easy way to solve the above simultaneous equations and work out at what time, if ever, they intersect. There's certainly no obvious reason to suppose they should intersect.
You can see that it is possible for the spaceship to shoot itself. If you fire the cannon radially outwards the shell will fall behind the space station. If you fire in the direction of motion the shell will move ahead of the space station. So there must be some angle in between where the shell hits the space station. However this will be the exception rather than the rule.
I'm aware this isn't a great answer since I can't solve the equations of motion and give you a rigorous answer. If anyone else can do this I'd be very interested to see the calculation.
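Here is a rough numerical sketch of the experiment described above (illustrative values: point-mass Earth, planar motion, ~400 km circular orbit, 700 m/s muzzle velocity); it propagates the station and several shells and records each shell's closest approach to the station after the initial separation:

```python
import numpy as np

GM = 3.986e14                  # Earth's gravitational parameter, m^3/s^2
r0 = 6.771e6                   # orbit radius for ~400 km altitude, m
v_circ = np.sqrt(GM / r0)
T = 2 * np.pi * np.sqrt(r0**3 / GM)   # orbital period, ~92 min

def accel(p):
    r = np.linalg.norm(p)
    return -GM * p / r**3

def propagate(p, v, dt, n):
    """Leapfrog (kick-drift-kick): returns the position at every step."""
    out = np.empty((n, 2))
    for i in range(n):
        v = v + 0.5 * dt * accel(p)
        p = p + dt * v
        v = v + 0.5 * dt * accel(p)
        out[i] = p
    return out

dt = 1.0
n = int(2 * T / dt)            # follow both bodies for two orbits
p0 = np.array([r0, 0.0])
v0 = np.array([0.0, v_circ])   # prograde circular motion
station = propagate(p0, v0, dt, n)

for angle_deg in (0, 45, 90, 135, 180):
    ang = np.radians(angle_deg)
    # 700 m/s muzzle velocity, rotated by `ang` from the direction of motion
    dv = 700.0 * np.array([-np.sin(ang), np.cos(ang)])
    shell = propagate(p0, v0 + dv, dt, n)
    d = np.linalg.norm(shell - station, axis=1)
    print(angle_deg, d[n // 4:].min())   # closest approach (m), ignoring launch
```

The closest approaches are generally large, consistent with the claim that a self-hit is possible only for finely tuned firing angles rather than by accident.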
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If a superconductor has zero resistance, does it have infinite amperage? If amps = volts / ohms, and ohms is 0, then what is x volts / 0 ohms?
| In the world we live in, with the accuracies we can achieve, it is an observed fact that R=V/I.
Infinities need careful interpretation if they happen in the physical world.
In this form, when there is no current, one talks of infinite resistance (as seen also on the potentiometers sold).
When one reverses the equation to the form I=V/R, one has to be careful to check whether there can be any material where R is 0. There are no such everyday materials, because they are composed of atoms tied together by electromagnetic forces which will always display some resistance to a change of state at normal temperatures.
But there exist special materials under special conditions, superconducting materials and superconductivity, which take advantage of the quantum mechanical behavior of certain metals, and there one achieves practically zero resistance and very high currents indeed, according to the voltage applied.
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V/I. If the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature.
The current in the superconductors is not found by this simple formula, but theories have been developed and methods of measuring it use the magnetic fields generated.
The LHC uses high power superconducting magnets to achieve the high magnetic fields it needs. The problem is technological, keeping the superconductors cooled and the high power needed under control.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 2
} |
Are there any objective wavefunction collapse theories which are local and forbid superluminal signalling? Are there any objective wavefunction collapse theories which are local and forbid superluminal signalling? GRW is nonrelativistic and nonlocal.
| No, and there cannot be. Imagine entangling two particles, sending one of them off to a colleague on Mars, and then measuring them both at almost the same time (according to Earth's reference frame, for the sake of argument). An objective collapse theory would say that whoever measures their particle first "collapses" the joint wavefunction of the two particles, putting the other person's particle into a definite state, which they then measure. Whatever speed the wavefunction collapse "propagates" at, it must be fast enough to reach the other experimenter before she makes her measurement. Since the two measurements are at a space-like separation, this speed has to be faster than light, there just isn't any way around it.
Note that in any objective collapse theory, the state that the second particle ends up in has to depend on the action taken by the first experimenter, otherwise it's impossible to explain the results of entanglement experiments. This means that different interpretations of what "measurement" is, and what causes the "collapse" cannot change the conclusion that the "collapse" has to happen superluminally.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Non-Newtonian Fluid Stop a Bullet? I just saw a YouTube video about Non-Newtonian fluids where people could actually walk on the surface of the fluid but if they stood still, they'd sink. Cool stuff.
Now, I'm wondering: Could a pool of Non-Newtonian fluid stop a bullet? Why or why not?
If so, if you put this stuff inside of a vest, it would make an effective bullet-proof vest, wouldn't it?
| BAE Systems have already done this. Annoyingly there seems to be some problem on the BAE web server at the moment, but there's a description here with links to the BAE site. Alternatively Google for something like "liquid armour site:baesystems.com".
Dilatant fluids are very good at absorbing energy as forcibly shearing them requires evaporating the water between the particles, and this absorbs a lot of energy. There is more info about dilatant fluids in the answers to Why do non-Newtonian fluids go hard when having a sudden force exerted on them?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Why can't gauge bosons have mass? Clearly, a mass term for a vector field would render the Lagrangian not gauge-invariant, but what are the consequences of this? Gauge invariance is supposed to be crucial for the renormalisation of a vector field theory, though I have to say I'm not entirely sure why.
As far as removing unphysical degrees of freedom - why isn't the time-like mode $A_0$ a problem for massive vector bosons (and how does gauge invariance of the Lagrangian ensure that this mode is unphysical for gauge bosons)?
| Let me answer a closely related question:
Consider a U(1) gauge theory with massless gauge bosons: can any small perturbation give the gauge boson a mass?
Amazingly, the answer is NO. The masslessness of the gauge boson is topologically robust.
No small perturbation can give the gauge boson a mass.
For detail, see my article.
Let me make the statement more precise. Here we consider
a compact U(1) gauge theory with a finite UV cutoff (such as a lattice gauge theory)
that contains gapless gauge bosons at low energies.
Then no small perturbation to this compact U(1) gauge theory with a finite UV cutoff
can give the gauge boson a mass, even a perturbation that breaks the gauge invariance.
So the masslessness of the gauge boson is a stable universal property of a quantum phase.
Only a phase transition can make the gauge boson massive.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/31994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
} |
How to capture electromagnetic radiation/waves? If I wanted to find out what kind of electromagnetic waves "travel" through my room and at which frequency, what kind of equipment would I need? Suppose I want to view frequencies from 0 Hz to 6 GHz.
| The oscillating magnetic field associated with an EM wave will induce a voltage in any electrical conductor that it passes through. So in principle all you need to do is stretch a piece of wire across your room then measure the voltage across it. However, as usual, the devil is in the detail.
If you've ever listened to a radio in your room then you've done this experiment for a limited range of wavelengths. Radio is transmitted as an electromagnetic wave, the wave induces an oscillating voltage in the aerial and the electronics in the radio amplify and process this voltage to extract the sound. But a radio will only receive a limited range of wavelengths, and this is the main problem you'll run into. If you want to receive and measure EM waves of all frequencies from zero to 6GHz you'll need some specialist kit.
The usual way this is done is with a spectrum analyser, but you'll pay a lot for a good quality one. You could do it on the cheap by looking for a second hand oscilloscope and building your own filters to select specific frequencies for measurement.
Whichever way you do it, you'll still run into other problems. It's very difficult to measure low frequency waves because the voltages they induce in your aerial will be very low. Ultra low frequencies require very large aerials to detect them.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does gravity slow the expansion of the universe? Does gravity slow the expansion of the universe?
I read through the thread http://www.physicsforums.com/showthread.php?t=322633 and I have the same question. I know that the universe is not being stopped by gravity, but is the force of gravity slowing it down in any way? Without the force of gravity, would space expand faster?
Help me formulate this question better if you know what I am asking.
| For most of the 20th century, it was thought that gravity slowed expansion. Many weighed in on the value of omega, it being thought that the universe was so close to critical balance that it could not be determined whether it would expand forever or collapse. That mystery should have evaporated with the discovery that the universe is accelerating - yet science TV shows are still promulgating the mystery as a fine-tuned feature of the universe. Fine tuning of conflicting factors that leads to perfect balance indicates something is wrong, unless one thinks of creation in the same vein as a potter working on clay. Some years ago the idea evolved that gravity and expansion are not opposites in conflict, but rather one depends on the other - that they appear to be independent, perfectly balanced factors is due to the fact that neither Newtonian gravity nor GR addresses the cause of the acceleration factor G that needs to be inserted into both formalisms to predict the motion of one body in the vicinity of another. Embellishing upon an idea first put forth by Richard Feynman suggesting that gravity might be explained as a pseudo force, one can make a quick calculation using Friedmann's equation to express G as 3H^2/4(pi)p, where p is the density of the Hubble sphere and H is the Hubble constant. In an accelerating (q=-1) universe, the value of G then reduces to (c^2)/4(pi)R[S], where S represents the Hubble sphere as an area density of one kg/m^2 and R is the effective Hubble scale.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 3
} |
Introduction to differential forms in thermodynamics I've studied differential geometry just enough to be confident with differential forms. Now I want to see application of this formalism in thermodynamics.
I'm looking for a small reference, to learn familiar concepts of (equilibrium?) thermodynamics formulated through differential forms.
Once again, it shouldn't be a complete book, a chapter at max, or an article.
UPD Although I've accepted David's answer, have a look at the Nick's one and my comment on it.
| I'm afraid that from the aesthetic side, there is not too much differential geometry to discover in (equilibrium) thermodynamics (at least at an undergrad level, and if you don't want to bother with the conceptual question of how to properly define the idea of heat in the most abstract situations). I suppose any book on thermodynamics has some sections which make use of the mathematical properties that come from holding one parameter constant, and so on.
So I suggest that starting with the axioms and the potentials, you involve yourself with the following basic statements, which make "heavy use" of the formalism:
*
*Maxwell relations
*Gibbs-Duhem equation
*Gibbs–Helmholtz equation
(The articles all contain the derivations too)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 1
} |
What entities create a gravitational field? It is well known that masses create a gravitational field. Photons are affected by gravitation, but do they generate a gravitational field as well? What about the other gauge bosons?
Do gravitons create a gravitational field?
|
Do gravitons create a gravitational field?
There's an interesting section in MTW's "Gravitation" describing how the GR equations can be arrived at by considering a massless spin-2 field ("gravitons") in flat spacetime and iterating corrections from considering that the non-zero stress-energy tensor for this field is a source of this field, i.e., gravitons gravitate.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What voltages are used to "safely" shock someone (as in a carnival game)? I've had this debate with some coworkers. What voltage (rough order of magnitude) is used to safely shock people?
"Safe" is a vague term, but as an example, there are arcade games where you hold onto two rods and you are hit with a jolt that grows in intensity (the challenge being to hold on until the end). I've also seen this sort of thing at a museum where you touch two contacts to "feel the jolt of an electric eel". Not painful, but you definitely feel some force going through you.
Right now, due to different interpretations of some fundamental laws of electricity, we have guesses of "about 40 volts" and "about 40,000 volts", so please explain why one value is used over another.
| In general there is not any safe voltage. The danger mostly depends on the current flowing through your body, especially the heart and the duration of the current flow.
So for any kind of trick device you want to use a relatively high voltage, so that wet or dry hands do not make a large difference, and limit the duration of the pulse to less than a few milliseconds. There is a diagram on p. 7 of the Code of Practice for the Safe Use of Electricity that illustrates this. For less than a few milliseconds a few mA are considered safe, but the safe current decreases rapidly for longer pulses.
To be on the safe side I would do quite a bit of background research into local regulations and similar devices before trying something like this on other people. Especially the taser should be a warning example, it was considered relatively safe but a few people died from the application, so be cautious.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 0
} |
Which universe had a beginning? The universe or the observable universe? When we say the universe had a beginning, do we mean the entire universe or the observable universe? Or did both of them have a beginning?
| The Penrose-Hawking theorem, strictly applied, only indicates that if we have a trapped null surface, which we get with an enclosing boundary larger than the Hubble radius, there has to be a singularity or closed timelike curve sometime in the past in some subregion. Strictly speaking, not even the entire observable universe needs to have begun from a singularity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is the physical meaning of the diffusion coefficient? In Fick's first law, the diffusion coefficient is a velocity, but I do not understand the two-dimensional concept of this velocity. Imagine that solutes are diffusing from one side of a tube to the other (this would be the same as persons running from one side of a street to the other) to unify the concentration across the tube.
Here we have a one-dimensional flow in the x direction. The diffusion coefficient should define the velocity of solutes or persons across the tube or street. How does the two-dimensional velocity do this? I wish to understand the concept so that I can picture the actual meaning of the diffusion coefficient.
| The diffusion coefficient $D$ is a constant relating the spreading $\left\langle x^{2}\right\rangle$ to the time $t$ over which it spreads out. This relation can be clearly seen in the diffusion of a single point source, as follows.
Let's consider the homogeneous diffusion equation:
$$u_t = D u_{xx} \tag{1}$$
The solution is given by:
$$u(x,t) = \int K(x,t,x')u_o(x')dx' \tag{2}$$
where $u_o(x')$ is the initial condition and $K(x,t,x')$ is the diffusion kernel (or Green's function):
$$K(x,t,x')=\frac{1}{\sqrt{4\pi D t}}\exp\left[-\frac{(x-x')^2}{4 D t}\right] \tag{3}$$
For a single point source $u_o(x')=\delta(x')$, we have the solution:
$$u(x,t) = K(x,t,0) =\frac{1}{\sqrt{4\pi D t}}\exp\left[-\frac{x^2}{4 D t}\right] \tag{4}$$
The solution is shown in Fig. 1. Its second moment (same as variance since $\left\langle x\right\rangle=0$) is given by
$$\left\langle x^{2}\right\rangle=\int x^2 u(x,t)dx=2Dt \tag{5}$$
Therefore, it clearly shows that the growth of the squared width $\left\langle x^{2}\right\rangle$ of the Gaussian is linearly proportional to the time $t$, with rate given by $2D$.
Fig. 1
The $x_{rms}=\sqrt{\left\langle x^{2}\right\rangle}$ defines a length scale of the spreading. If we have another length scale $\ell$, we would expect that when $x_{rms}\gg\ell$ or $Dt \gg \ell^2/2$, then the system behaviour is the same as a single Gaussian with the only one length scale $x_{rms}$.
Let's consider two point sources located at $\pm \ell/2$, so the solution is
$$ u(x,t) = \frac{1}{2}(K(x,t,-\ell/2) + K(x,t,\ell/2)) \tag{6}$$
The result is shown in Fig. 2. As time increases, the two peaks spread out and eventually merge into one once the time is large enough. After a long time, the solution can be described approximately by:
$$ u(x,t) \approx K(x,t,0) \tag{7}$$
Fig. 2: Plot of Eq. (6) at times $Dt = 0.01, 0.1, 1$ in units of $\ell^2/2$, from top to bottom.
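As a quick numerical sanity check of Eq. (5) (a sketch: the grid extent, step count, and the values of $D$ and $t$ are arbitrary choices here), one can integrate $x^2 K(x,t,0)$ on a grid and compare with $2Dt$:

```python
import math

def kernel(x, t, D):
    """Diffusion Green's function K(x, t, 0) of Eq. (4)."""
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

def second_moment(t, D, L=50.0, n=40001):
    """Trapezoid estimate of <x^2> = int x^2 u(x, t) dx over [-L, L]."""
    h = 2.0 * L / (n - 1)
    total = 0.0
    for i in range(n):
        x = -L + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * x * x * kernel(x, t, D)
    return total * h

D = 0.7
m2 = second_moment(1.0, D)  # Eq. (5) predicts 2 * D * t = 1.4
```

Repeating the check at other times confirms the linear growth in $t$ with rate $2D$.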
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
What does symplecticity imply? Symplectic systems are a common object of studies in classical physics and nonlinearity sciences.
At first I assumed it was just another way of saying Hamiltonian, but I also heard it in the context of dissipative systems, so I am no longer confident in my assumption.
My question now is, why do authors emphasize symplecticity and what is the property they typically imply with that? Or in other more provocative terms: Why is it worth mentioning that something is symplectic?
|
Why is it worth mentioning that something is symplectic?
This question is a little like asking why it's worth mentioning that an electric field is in the room.
As an important characteristic, I'd point out that if you have a symplectic structure, you have a Poisson algebra. That means that not only can the functions
$$f:P\in \mathcal M\ \longrightarrow\ f(P)\in\mathbb{R}$$ on your manifold do things like
$$(f,g,h,P)\ \longrightarrow\ f(P)g(P)+h(P),$$
but also like
$$(f,g,P)\ \longrightarrow\ \{f,g\}(P).$$
Consequently, if you add a symplectic structure to your function algebra, some awesome results occur. Notice that the structure as well as the manifold you consider might be wild, but the Poisson bracket has some qualities to it which are true in general.
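To make the extra operation concrete, here is a toy sketch (the test observables, the evaluation point, and the finite-difference step are arbitrary choices): on $\mathbb{R}^2$ with canonical coordinates $(q,p)$, the bracket $\{f,g\} = \partial_q f\,\partial_p g - \partial_p f\,\partial_q g$ can be evaluated numerically, and properties such as $\{q,p\}=1$ and antisymmetry checked directly.

```python
def poisson(f, g, q, p, h=1e-5):
    """Canonical Poisson bracket {f, g} at (q, p) via central differences."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

coord_q = lambda q, p: q
coord_p = lambda q, p: p
f = lambda q, p: q * q * p     # arbitrary test observable
g = lambda q, p: p * p + q     # another arbitrary test observable

b_qp = poisson(coord_q, coord_p, 0.3, -1.2)  # the defining relation {q, p} = 1
b_fg = poisson(f, g, 0.3, -1.2)              # analytically 4*q*p**2 - q**2 = 1.638 here
b_gf = poisson(g, f, 0.3, -1.2)              # antisymmetry: should equal -b_fg
```

This bilinear, antisymmetric product is exactly the structure that pointwise sums and products of functions do not give you by themselves.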
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 4
} |
uncertainty of fields with many harmonic modes In most basic-level introductions to the quantum harmonic oscillator formulation of fields, it is assumed that the commutation relations for the field variables $p_m$, $q_m$ are
$$ \lbrack p_m , q_n \rbrack = \delta_{m n} i \hbar $$
which seem to imply that each individual mode holds an uncertainty relation like $ \Delta p_m \Delta q_m \ge \hbar $
now, uncertainties of field values with many modes must be expressed like (assuming the vacuum state, where $\langle E \rangle = \langle E_k \rangle = 0$):
$$ \langle E^2 \rangle = \langle \psi | ( \sum_k{ E_k } )^2 | \psi \rangle = \sum_k{ \langle \psi | E_k^2 | \psi \rangle } $$
but since each mode has some uncertainty in vacuum, it seems to imply that the uncertainty of the net field is infinite, which clearly does not make any sense
Any idea where my assumptions are going wrong?
| The fluctuation in a field at a point is infinite in any field theory, for the reason you state. This is why you need to smear the field over a region with a test function for it to have finite fluctuation, and the reason that the fields are characterized as operator-valued distributions.
If you look at the expected value of the square of the field at a point, you consider the point split regulated version:
$$ \langle \phi(x)\phi(0)\rangle = G(x)$$
and take the limit $x\rightarrow 0 $. This is clearly infinite, since G(x) goes as $1\over x^{d-2}$ or as a log in 2d. It is only finite in 0+1 dimensions (quantum mechanics). If you smear the field and look at the square of the smeared operator, you get
$$ \langle \int f(x) \phi(x) \int f(y)\phi(y)\rangle = \int f(x)f(y) G(x-y) d^dx d^dy $$
This is completely finite, since the G(x-y) singularity is always softer than the volume. So free fields always produce well-defined fluctuations after smearing by test functions.
This was analyzed by Bohr and Rosenfeld (for the electromagnetic field) in the early 1930s, at the beginning of quantum field theory.
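A crude mode-sum sketch can make this visible (an illustration only, not the Bohr-Rosenfeld analysis: the mass, the cutoffs, and the Gaussian smearing profile are arbitrary choices). For a massive scalar, the unsmeared $\langle\phi(0)^2\rangle \propto \int^\Lambda k^2\,dk/(2\omega_k)$ keeps growing as the cutoff $\Lambda$ is raised, while the Gaussian-smeared version saturates:

```python
import math

def point_fluct(cutoff, m=1.0, n=20000):
    """Unsmeared mode sum: int_0^cutoff k^2 / (2 sqrt(k^2 + m^2)) dk (angular factors dropped)."""
    h = cutoff / n
    s = 0.0
    for i in range(1, n + 1):
        k = (i - 0.5) * h  # midpoint rule
        s += k * k / (2.0 * math.sqrt(k * k + m * m))
    return s * h

def smeared_fluct(cutoff, sigma=1.0, m=1.0, n=20000):
    """Same sum weighted by a Gaussian test-function profile |f~(k)|^2 = exp(-k^2 sigma^2)."""
    h = cutoff / n
    s = 0.0
    for i in range(1, n + 1):
        k = (i - 0.5) * h
        s += math.exp(-k * k * sigma * sigma) * k * k / (2.0 * math.sqrt(k * k + m * m))
    return s * h

u10, u20 = point_fluct(10.0), point_fluct(20.0)      # keeps growing roughly like cutoff^2
s10, s20 = smeared_fluct(10.0), smeared_fluct(20.0)  # already converged
```

Raising the cutoff further changes the smeared value by essentially nothing, while the point value keeps growing without bound.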
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is there some connection between the Virial theorem and a least action principle? Both involve some 'averaging' over energies (kinetic and potential) and make some prediction about their mean values. As far as the least action principles, one could think of them as saying that the actual path is one that makes an equipartition between the two kinds of energies.
| As that lovely article linked by dfan says, the virial theorem comes from varying the action $S[x]$ by $x\rightarrow(1+\epsilon)x$:
$$\frac{1}{T}\delta S = \frac{1}{T}\epsilon\int_{0}^{T} dt\{m\dot{x}^2 -x\frac{\partial V}{\partial x}\}$$
This is a variation of the action and therefore must vanish, up to some boundary terms, if $x$ is a solution of the equations of motion. But the equation $\delta S=0$ is just the virial theorem:
$$2\langle T\rangle ~=~ \langle x\cdot \frac{\partial V}{\partial x}\rangle~=~- \langle x\cdot F\rangle,$$
where the angle brackets mean time average.
The only remaining issue is the neglect of the boundary terms. This enforces the condition on the virial theorem that the motion be bounded and that I take a long enough time average. If both of these conditions are true, then I can take $T\rightarrow \infty$. Since everything is bounded, the boundary terms remain finite as $T\rightarrow \infty$, and therefore their contribution to $\frac{\delta S}{T}$ goes to zero, leaving us with the virial theorem.
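The time-average statement is easy to verify numerically for a bounded orbit (a sketch: the quartic potential $V = x^4$, the step size, and the total time are arbitrary choices; for this potential the theorem predicts $2\langle T\rangle = \langle x\,\partial V/\partial x\rangle = 4\langle V\rangle$):

```python
def accel(x):
    return -4.0 * x ** 3          # F = -dV/dx for V(x) = x^4

def rk4_step(x, v, dt):
    """One classical 4th-order Runge-Kutta step for x'' = accel(x)."""
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

x, v, dt, steps = 1.0, 0.0, 2e-3, 300000   # mass m = 1, energy E = V(1) = 1
sum_2T = sum_xVp = 0.0
for _ in range(steps):
    x, v = rk4_step(x, v, dt)
    sum_2T += v * v                # 2T = m v^2 with m = 1
    sum_xVp += x * 4.0 * x ** 3    # x dV/dx
avg_2T, avg_xVp = sum_2T / steps, sum_xVp / steps
```

Both averages come out near $4/3$, and the small mismatch shrinks as the averaging time grows, exactly as the boundary-term argument says.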
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 1
} |
Looking for a list of possible subatomic particle collisions This is going to be a strange question, but here we go. I'm working on a computer puzzle game that will simulate subatomic particle collisions. I am not a physicist by training, but I do dabble. I would like the game to be loosely based on reality, and I've been trying very hard to find a resource that could outline, simply put, what happens when:
proton <-> proton collides
electron <-> positron collides
electron <-> electron collides
...etc.
It would be even more interesting to see what happens at varying energy levels (as energy, in the game, will be a resource). If you think think this question is a waste of time, I apologize in advance.
| In a sense these "games" already exist; they need large computing power and are called high-energy physics Monte Carlo simulations.
These are very complicated simulations of the reality of the experiment and include all the detector effects.
At the first level of the core of these HEP Monte Carlos there exist tables of "complete" possibilities of scattering products: all interactions are simulated with their probabilities according to the physics known at the time of the experiment, with correct balancing of all quantum numbers and conservation laws.
I suppose if you started reading the code of these programs you might extract just the generators and the tables and these could become the core of your game.
If you are serious you should try to find somebody familiar with GEANT to collaborate with, or else spend a lot of time reading up on the programs.
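The table-driven core described above can be sketched in a few lines (everything here is invented for illustration: the channel names and probabilities are not real physics data):

```python
import random

# Hypothetical branching table for one collision type: outcome -> probability.
# The channel names and numbers are invented for illustration, not real data.
CHANNELS = {
    "elastic scatter": 0.60,
    "pion pair production": 0.25,
    "photon emission": 0.15,
}

def sample_outcome(rng, table):
    """Draw one outcome according to the table's probabilities."""
    r = rng.random()
    acc = 0.0
    for outcome, p in table.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against rounding when r is very close to 1.0

rng = random.Random(42)  # fixed seed for reproducibility
draws = [sample_outcome(rng, CHANNELS) for _ in range(20000)]
freq = {k: draws.count(k) / len(draws) for k in CHANNELS}
```

A real generator would make the table depend on the incoming particles and the collision energy, which is where the bulk of the physics input lives.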
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/32949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How is a cathode ray tube different from beta minus radiation? In beta minus the result is one neutron in the nucleus changing to a proton, plus an electron and an anti-neutrino being sent off.
The antineutrino is indifferent to our health. So I guess what makes a beta source dangerous compared to a cathode ray tube must be a difference in the kinetic energy of the emitted electrons?
| Beta emitters, or indeed most radioactive materials, aren't especially dangerous unless they get into your body. For example iodine 131 (a beta emitter) is concentrated in the thyroid and causes destruction of the thyroid and/or a cancer there. Likewise plutonium (an alpha emitter) is most dangerous when particles are inhaled because they lodge in the lung and cause tissue damage and probably cancer. (This is how Alexander Litvinenko was murdered, though with a different alpha emitter, polonium-210.)
Anyhow the electrons in a CRT don't make it through the glass, so even if you pressed yourself up against the TV screen you still wouldn't be hit by any electrons. There used to be sporadic rumours that the collisions of electrons with the glass screen could generate X-rays, but as I recall the X-rays generated are barely measurable and certainly not dangerous to health. If you removed the glass from a TV (and somehow managed to maintain the vacuum) the electrons would kill any tissue you exposed to them, just like iodine 131. I suppose you could kill yourself that way, though I imagine the vacuum would kill you first.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Ideal 2D Unicycle Kinematics A particle is connected to a massive wheel by a rigid rod. The wheel can roll without slipping on a horizontal surface. The particle is free to rotate around the centre of the wheel.
I believe the system has two degrees of freedom:
The centre of the wheel and the particle each have x- and y- positions, and the wheel has an angle of rotation. The constraints are:
* The wheel is on a horizontal surface, fixing its y-coord
* The wheel cannot slip, so its x-coord is directly proportional to its angle of rotation
* The particle is a fixed distance from the centre of the wheel
Is this correct?
This leaves two generalised coordinates, which I have taken to be the angle of rotation of the wheel and the angle between the rod and the y-axis.
After struggling (and failing) with a Newtonian approach I constructed a Lagrangian for the system and applied the Euler-Lagrange equation(s), using the angles as generalised coordinates. After much algebra out popped two second-order non-linear differential equations.
To my surprise, I could eliminate the wheel rotation completely from one equation, leaving it concerning only the rod angle and its derivatives. What, if at all, is the significance of this?
And finally, I would like to simulate the situation computationally. Is there a general way of simulating rigid constraints acting on rigid bodies/particles, or must one find and solve (numerically) the differential equations governing the system?
| The significance of the separability of the differential equations is simple:
The difficulty involved in balancing a unicycle in 2D is independent of how fast you are going (in 3D, you have to balance sideways as well, and only there does the wheel speed come in handy).
Regarding the simulation, I'm not aware of any free, good, mechanical simulation tools, but solving the differential equations shouldn't be difficult - you could first try to see if there is an analytic solution, and if not use the many available tools for solving ODE's.
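Since the derived equations of motion are not written out here, a stand-in illustrates the workflow (a sketch: the pendulum-like equation $\ddot\theta = -(g/l)\sin\theta$, the parameters, and the step size are all arbitrary choices, not the actual rod-angle equation). A hand-rolled Runge-Kutta integrator is enough, and the conserved energy gives a built-in accuracy check:

```python
import math

def pendulum_rhs(theta, omega, g_over_l=9.81):
    """Stand-in 2nd-order ODE theta'' = -(g/l) sin(theta), written as a 1st-order pair."""
    return omega, -g_over_l * math.sin(theta)

def rk4(theta, omega, dt):
    """One 4th-order Runge-Kutta step."""
    k1t, k1w = pendulum_rhs(theta, omega)
    k2t, k2w = pendulum_rhs(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w)
    k3t, k3w = pendulum_rhs(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w)
    k4t, k4w = pendulum_rhs(theta + dt * k3t, omega + dt * k3w)
    return (theta + dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6.0,
            omega + dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0)

def energy(theta, omega, g_over_l=9.81):
    """Conserved energy per unit (m l^2): should stay constant along the solution."""
    return 0.5 * omega * omega - g_over_l * math.cos(theta)

theta, omega, dt = 0.5, 0.0, 1e-3
e0 = energy(theta, omega)
for _ in range(100000):  # integrate 100 time units
    theta, omega = rk4(theta, omega, dt)
drift = abs(energy(theta, omega) - e0)
```

The same pattern, with the right-hand side swapped for the two coupled equations from the Lagrangian, answers the simulation question: one numerically integrates the reduced equations rather than enforcing the rigid constraints directly.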
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is a good introductory book on quantum mechanics? I'm really interested in quantum theory and would like to learn all that I can about it. I've followed a few tutorials and read a few books but none satisfied me completely. I'm looking for introductions for beginners which do not depend heavily on linear algebra or calculus, or which provide a soft introduction for the requisite mathematics as they go along.
What are good introductory guides to QM along these lines?
| Feynman's Six Easy Pieces is an excellent introduction to quantum mechanics. For a more thorough analysis (and some philosophical ruminations), I'd recommend The Dancing Wu Li Masters by Gary Zukav. For an easy-to-understand discussion of the weirdness of quantum mechanics, Fred Kuttner and Bruce Rosenblum's Quantum Enigma: Physics Encounters Consciousness is excellent.
Here's an Amazon list I put together with some books I've found helpful.
* Robert Kroese, author of Schrödinger's Gat
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "103",
"answer_count": 19,
"answer_id": 12
} |
wave superposition of electrons and quarks Is quantum wave superposition of electrons and quarks possible?
If not, can different types of elementary particles be mixed in wave superposition?
| This is a good question. No experiment has shown mixing between leptons (such as the electron) and quarks. I'm using the word "mixing" here in the same way you used superposition (though more often the word superposition is used to refer to energy states). There is certainly experimental evidence for the superposition of other particles (e.g. neutrino oscillations).
Mixing between leptons and quarks is also not allowed by the standard model since it would violate a good number of symmetries (the symmetries themselves being assumptions of the standard model).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Where 2 comes from in formula for Schwarzschild radius? In general theory of relativity I've seen several times this factor:
$$(1-\frac{2GM}{rc^2}),$$
e.g. in the Schwarzschild metric for a black hole, but I still don't know where the 2 in this factor comes from.
| On the one hand, for a nonrelativistic particle moving in an external gravitational field, the Lagrangian has the form:
$$
L=-mc^{2}+\frac{m\mathbf{v}^{2}}{2}-m\phi,\quad\quad(1)
$$
where $m$ is a mass of particle, $\phi$ is a gravitational potential. On the
other hand, general relativity requires the following action for the particle:
$$
S=-mc\int ds,\quad\quad\left( 2\right)
$$
where
$$
ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu},\quad dx^{\mu}=\left( cdt,d\mathbf{r}
\right) ,
$$
thus for a static field ($g_{i0}=0$)
$$
ds^{2}=g_{00}c^{2}dt^{2}+g_{ij}dr^{i}dr^{j}=\left( g_{00}c^{2}+g_{ij}
v^{i}v^{j}\right) dt^{2}.
$$
Let's find the leading correction to the metric in the nonrelativistic limit,
i.e., in the $c\rightarrow\infty$ limit:
$$
g_{00}=1+h_{00}+O\left( c^{-4}\right) ,\quad g_{ij}=-\delta_{ij}+O\left(
c^{-4}\right) ,
$$
so that $h_{00}=O\left( c^{-2}\right) $, hence
\begin{align*}
ds & =dt\sqrt{g_{00}c^{2}+g_{ij}v^{i}v^{j}}=cdt\sqrt{1+h_{00}-\mathbf{v}
^{2}/c^{2}+O\left( c^{-4}\right) }\quad\quad\quad\left( 3\right) \\
& =cdt\left( 1+h_{00}/2-\mathbf{v}^{2}/2c^{2}+O\left( c^{-4}\right)
\right) .
\end{align*}
Using the action (2) we obtain the following Lagrangian:
$$
S=\int Ldt=\int dt\left( -mc^{2}-mc^{2}h_{00}/2+m\mathbf{v}^{2}/2+O\left(
c^{-4}\right) \right) .
$$
Comparison with the Lagrangian (1) yields:
$$
\frac{c^{2}h_{00}}{2}=\phi,\quad\Rightarrow\quad h_{00}=\frac{2\phi}{c^{2}
}=-\frac{2GM}{c^{2}r}.\quad\quad\left( 4\right)
$$
Hence, one can see that the factor $2$ appears due to the square root
operation in the interval (3).
It is worth noting that this is not directly connected with the singular points of the metric. The expansion (4) is gauge independent, but the position of the singularity is not. For example, the Schwarzschild metric in harmonic coordinates has the form (see, e.g., S. Weinberg, Gravitation and Cosmology, eq. (8.2.15)):
$$
ds^{2}=\frac{1-GM/rc^{2}}{1+GM/rc^{2}}c^{2}dt^{2}-\frac{1+GM/rc^{2}
}{1-GM/rc^{2}}\,\frac{G^{2}M^{2}}{r^{4}c^{4}}\left( \mathbf{r}\cdot
d\mathbf{r}\right) ^{2}-\left( 1+\frac{GM}{rc^{2}}\right) ^{2}
d\mathbf{r}^{2},
$$
which has singularities at the points:
$$
r=\pm\frac{GM}{c^{2}}.
$$
However the expansion always has the form of eq. (4):
$$
g_{00}=\frac{1-GM/rc^{2}}{1+GM/rc^{2}}=1-\frac{2GM}{rc^{2}}+O\left(
c^{-4}\right) .
$$
The Schwarzschild gauge is the simplest gauge such that $g_{00}$ has
exactly the form of the expansion (4):
$$
g_{00}=1-\frac{2GM}{rc^{2}}.
$$
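A quick numerical check of the last expansion (pure arithmetic; the sampled values of $u = GM/rc^2$ are arbitrary): the difference between $g_{00}$ in harmonic coordinates and the weak-field form $1 - 2u$ is exactly $2u^2/(1+u)$, i.e. of order $u^2$, so the two agree through first order.

```python
def g00_harmonic(u):
    """g_00 in harmonic coordinates, with u = GM/(r c^2)."""
    return (1.0 - u) / (1.0 + u)

def g00_weak_field(u):
    """Leading weak-field form 1 - 2 GM/(r c^2)."""
    return 1.0 - 2.0 * u

# (1 - u)/(1 + u) = 1 - 2u + 2u^2 - ..., so the residual should scale like u^2
residuals = [(u, abs(g00_harmonic(u) - g00_weak_field(u))) for u in (1e-2, 1e-3, 1e-4)]
```

Each tenfold decrease in $u$ shrinks the residual a hundredfold, confirming the $O(c^{-4})$ claim.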
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Cosmological constant of standard model of cosmology and observational data I am curious whether the current Lambda-CDM model of cosmology matches well with observational data, especially expansion of the universe.
How well does Lambda-CDM defend its established status from other models, such as quintessence (quintessence can be said to extend Lambda-CDM, but there are some models against the standard model, I guess.)?
| We don't know how the relationship between gravity and dark energy changes over time as gravity decreases (from the rest of the universe), because one cancels out the other to a degree we don't know.
It is not reasonable to assume that as the universe expands more strings of dark energy magically appear to keep the density constant.
Einstein originally proposed the idea of a cosmological constant because it was needed to maintain a static universe, neither expanding nor contracting, and he called it the biggest blunder of his career.
There's no way that dark energy can keep a constant density in an expanding universe. If the observations are correct that the universe is expanding faster, it's because as gravity decreases, the universe only needs a smaller push to accelerate.
If dark energy is not at constant density, which doesn't seem likely due to the way other energy behaves in the expansion, and if it changes from a pushing to a pulling force in the future, like when it gets sucked into a black hole, then the fate of the universe changes to an endless cycle.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Maxwell-Boltzmann distribution and total energy per unit volume We know that
$$n(E) ~=~ \frac {2 \pi (N/V)}{(\pi k_B T)^{3/2}} E^{1/2} e^{-E/(k_B T)} dE,$$
where $V$ is total volume.
If then, how do we derive total energy per unit volume from this equation?
| Integrate $n(E)\,E$ over all possible energies. Since $n(E)$ as written already contains the factor $N/V$, the result, $\int_0^\infty E\,n(E)\,dE = \frac{3}{2}\frac{N}{V}k_B T$, is directly the average energy per unit volume.
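A quick numerical check (a sketch: $k_B T$ and $N/V$ are set to 1, and the integration cutoff is an arbitrary choice) confirms both the normalization and the equipartition value $\frac{3}{2}\frac{N}{V}k_B T$:

```python
import math

def n_of_E(E, kT=1.0, density=1.0):
    """Maxwell-Boltzmann energy distribution with N/V = density."""
    return 2.0 * math.pi * density / (math.pi * kT) ** 1.5 * math.sqrt(E) * math.exp(-E / kT)

def integrate(f, a, b, n=100000):
    """Midpoint rule on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

norm = integrate(lambda E: n_of_E(E), 0.0, 60.0)           # should give N/V = 1
u_density = integrate(lambda E: E * n_of_E(E), 0.0, 60.0)  # should give (3/2) kT (N/V) = 1.5
```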
Ali
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
prove that flat shape minimizes a functional The following functional arises in an information theoretic problem that I work on currently.
$I(G(\omega)) = \int\limits_{-\kappa\pi}^{\kappa\pi}d\omega \frac{A}{G(\omega)+A}-\frac{| \int\limits_{-\kappa\pi}^{\kappa\pi}d\omega \frac{A}{G(\omega)+A}\exp(-i\omega)|^2}{ \int\limits_{-\kappa\pi}^{\kappa\pi}d\omega \frac{A}{G(\omega)+A}}$,
where $\kappa<1$, $A>0$, and $G(\omega)\geq 0$.
Now I would like to minimize $I(G(\omega))$ under the constraint of unit area of $G(\omega)$, i.e., $\int\limits_{-\kappa \pi}^{\kappa \pi}d\omega\, G(\omega)=1$.
My hypothesis is that a flat $G(\omega)=\frac{1}{2\kappa\pi}$ is optimal, but I cannot prove that (Matlab hints towards it).
| This hypothesis is not right. The first integral alone $\int {A\over G(\omega) +A } d\omega$ has a local minimum at the constant function (and a global minimum too), so that its first variation is zero, but the numerator of the second term doesn't have a vanishing variational derivative, so it isn't extremal for a constant, so there are easy counterexamples close to a constant function.
To see that a constant locally minimizes the first integral, you can expand in a Taylor series in G
$$ \int 1 - {G\over A} + {G^2\over A^2} \, d\omega $$
The first two terms are constant (since you have a constraint on the total integral), and the last term is a positive definite quadratic form, so the constant is a local minimum. You can show it's a global minimum too using convexity arguments (the second variation of the functional is everywhere positive definite).
Knowing this, the first variation of the numerator near the constant function is all you need to check. This variation is proportional to
$$2\mathrm{Re} (\int e^{i\omega'-i\omega} \delta G(\omega) d\omega'd\omega) $$
which is not zero for general $\delta G$. If you want an explicit counterexample, use a little positive bump for $\delta G$ anywhere that $\cos(\omega)$ is negative, or a little negative bump where $\cos(\omega)$ is positive (the real part can be taken inside the integral, so you can see what a $\delta G$ perturbation does explicitly).
The solution to your extremization problem can be given as a nonlinear integral equation using the method of Lagrange multipliers. Add the term
$$ \int \lambda G $$
to your functional, and minimize the sum. The equation doesn't simplify very much, and should be solved numerically; for this purpose, you can just make a grid and minimize the values by steepest descent, starting from the constant. You can also solve the integral equation from the calculus of variations by iteration, starting from a constant, and this is roughly equivalent.
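The counterexample can be checked numerically (a sketch: the values of $A$, $\kappa$, the perturbation size, and the quadrature grid are all arbitrary choices). Perturbing the constant by a zero-mean multiple of $\cos\omega$ minus its mean, which preserves the unit-area constraint, changes $I$ at first order, so one sign of the perturbation lowers it:

```python
import math

def I_functional(G, A, kappa, n=4000):
    """Evaluate I(G) on [-kappa*pi, kappa*pi] with the midpoint rule."""
    a = kappa * math.pi
    h = 2.0 * a / n
    d = 0.0                # int A / (G + A)
    num_re = num_im = 0.0  # int A / (G + A) * exp(-i w)
    for i in range(n):
        w = -a + (i + 0.5) * h
        val = A / (G(w) + A)
        d += val * h
        num_re += val * math.cos(w) * h
        num_im -= val * math.sin(w) * h
    return d - (num_re ** 2 + num_im ** 2) / d

A, kappa, eps = 1.0, 0.9, 1e-3
const = 1.0 / (2.0 * kappa * math.pi)              # the unit-area constant G
c = math.sin(kappa * math.pi) / (kappa * math.pi)  # mean of cos(w) on the interval
delta = lambda w: eps * (math.cos(w) - c)          # zero mean: area constraint preserved

I0 = I_functional(lambda w: const, A, kappa)
Ip = I_functional(lambda w: const + delta(w), A, kappa)
Im = I_functional(lambda w: const - delta(w), A, kappa)
```

One of the two perturbed values comes out strictly below the constant's value and the other strictly above, which is exactly a nonvanishing first variation.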
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Non-conservative Electric Field I was watching this video from Walter Lewin and while watching these two videos, I noticed there is a "contradiction" in what he is doing. All links direct you exactly to where he begins, so you don't have to search it yourself
LR Circuit
http://www.youtube.com/watch?v=UpO6t00bPb8#t=10m22s
Resistor Circuit (Kirchoff's Law)
http://www.youtube.com/watch?v=59eTiTa9Tvk#t=3m02s
In the LR Circuit video, he sets his current to run CCW and traverses CCW. Then he mentions about the electric field present in the circuit and he sets up the equation:
$$+IR - V = -L\frac{\mathrm{d} I}{\mathrm{d} t} $$
In the Resistor Circuit (Kirchoff's Law) video, he talks about potentials going up and down. He prepared beforehand for us the direction of the current in the circuit. He begins at point P, he traverses the circuit CCW (in the direction of the current). This time, however, he didn't mention anything about the electric field present in the circuit. As he traverses the outer loop, he gets the equation:
$$-6 + 20 - 1 - 2 - \varepsilon_2 - 4 = 0$$
Notice that he gets $-6$ this time when he traverses in the direction of the current? This is the opposite of the LR Circuit video, where he gets $+IR$ when he traverses in the direction of the current.
Could someone please clarify what is going on? Thank you very much. I am unable to progress because of this confounding concept
| Lewin does appear to be using a different sign convention in the two lectures. In the first he takes a voltage drop as positive while in the second he takes a voltage drop as negative.
But it doesn't matter which convention you use as long as you are consistent. The point is that if you go round any closed loop the total voltage change must add up to zero. It doesn't matter whether you take resistors as negative and batteries as positive or the other way round. Either way the voltage changes must total to zero.
In any case in the first lecture Lewin makes the point that you can't always tell which way the current will flow. But if you get the direction of the current wrong all that happens is all your voltage differences change sign and they still add up to zero.
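The consistency point can be checked with plain arithmetic on the outer-loop equation (treating $\varepsilon_2$ as the unknown): whichever overall sign convention you adopt, the loop total is zero.

```python
# Loop terms as read CCW in the lecture: -6 + 20 - 1 - 2 - eps2 - 4 = 0
known = [-6.0, 20.0, -1.0, -2.0, -4.0]
eps2 = sum(known)          # the unknown EMF that closes the loop: eps2 = 7
loop = known + [-eps2]

# Opposite convention: every sign flips, but the loop still sums to zero
flipped = [-t for t in loop]
```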
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What are the units or dimensions of the Dirac delta function? In three dimensions, the Dirac delta function $\delta^3 (\textbf{r}) = \delta(x) \delta(y) \delta(z)$ is defined by the volume integral:
$$\int_{\text{all space}} \delta^3 (\textbf{r}) \, dV = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \delta(x) \delta(y) \delta(z) \, dx \, dy \, dz = 1$$
where
$$\delta(x) = 0 \text{ if } x \neq 0$$
and
$$\delta(x) = \infty \text{ if } x = 0$$
and similarly for $\delta(y)$ and $\delta(z)$.
Does this mean that $\delta^3 (\textbf{r})$ has dimensions of reciprocal volume?
As an example, a textbook that I am reading states:
For a collection of $N$ point charges we can define a charge density
$$\rho(\textbf{r}) = \sum_{i=1}^N q_i \delta(\textbf{r} - \textbf{r}_i)$$
where $\textbf{r}_i$ and $q_i$ are the position and charge of particle $i$, respectively.
Typically, I would think of charge density as having units of charge per volume in three dimensions: $(\text{volume})^{-1}$. For example, I would think that units of $\frac{\text{C}}{\text{m}^3}$ might be possible SI units of charge density. If my assumption is true, then $\delta^3 (\textbf{r})$ must have units of $(\text{volume})^{-1}$, like $\text{m}^{-3}$ for example. Is this correct?
| Yes. The Dirac delta always has the inverse dimension of its argument. You can read this from its definition, your first equation. So in one dimension $\delta(x)$ has dimensions of inverse length, in three spatial dimensions $\delta^{(3)}(\vec x)$ (sometimes simply written $\delta(\vec x)$) has dimension of inverse volume, and in $n$ dimensions of momentum $\delta^{(n)}(\vec p)$ has dimensions of inverse momentum to the power of $n$.
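The dimensional statement can be made concrete with a nascent delta (a sketch: the Gaussian family and the particular widths are arbitrary choices). The area stays 1, a dimensionless number, for every width, while the peak height carries the inverse length: halving the width doubles the height.

```python
import math

def nascent_delta(x, w):
    """Normalized Gaussian of width w: a nascent delta with dimensions 1/[length]."""
    return math.exp(-x * x / (2.0 * w * w)) / (w * math.sqrt(2.0 * math.pi))

def area(w, L=10.0, n=100000):
    """Midpoint estimate of the integral over [-L, L]."""
    h = 2.0 * L / n
    return sum(nascent_delta(-L + (i + 0.5) * h, w) for i in range(n)) * h

area_narrow = area(0.01)  # dimensionless area stays 1 ...
area_wide = area(1.0)
peak_ratio = nascent_delta(0.0, 0.5) / nascent_delta(0.0, 1.0)  # ... while the peak scales as 1/w
```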
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 2,
"answer_id": 1
} |
The appearance of volume $V$ in the Fourier series representation of a periodic cubic system In the textbook Understanding Molecular Simulation by Frenkel and Smit (Second Edition), the authors represent a function $f(\textbf{r})$ (which depends on the coordinates of a periodic system) as a Fourier series. I quote from page 295 of the text:
Let us consider a periodic system with a cubic box of length $L$ and volume $V$. Any function $f(\textbf{r})$ that depends on the coordinates of our system can be represented by a Fourier series:
$$f(\textbf{r}) = \frac{1}{V} \sum_{\boldsymbol{\ell} = -\infty}^{\infty} \tilde{f}(\textbf{k}) e^{i \textbf{k} \cdot \textbf{r}} \; \; \; \; \textbf{(12.1.6)}$$
where $\textbf{k} = \frac{2\pi}{L}\boldsymbol{\ell}$ with $\boldsymbol{\ell} = (\ell_x, \ell_y, \ell_z)$ are the lattice vectors in Fourier space. The Fourier coefficients $\tilde{f}(\textbf{k})$ are calculated using
$$\tilde{f}(\textbf{k}) = \int_V d\textbf{r} \; f(\textbf{r}) e^{-i\textbf{k} \cdot \textbf{r}} \; \; \; \; \textbf{(12.1.7)}$$
Now, the authors use equation (12.1.6) to write the electric potential $\phi(\textbf{r})$ in Fourier space:
$$\phi(\textbf{r}) = \frac{1}{V} \sum_{\textbf{k}} \tilde{\phi}(\textbf{k}) e^{i\textbf{k} \cdot \textbf{r}}$$
The authors write:
In Fourier space, Poisson's equation has a much simpler form. We can write for the Poisson equation:
$$-\nabla^2 \phi(\textbf{r}) = -\nabla^2 \left( \frac{1}{V} \sum_{\textbf{k}} \tilde{\phi}(\textbf{k}) e^{i\textbf{k} \cdot \textbf{r}} \right) = \frac{1}{V} \sum_{\textbf{k}} k^2 \tilde{\phi}(\textbf{k}) e^{i\textbf{k} \cdot \textbf{r}} \; \; \; \; \textbf{(12.1.8)}$$
My question is, why is the $\frac{1}{V}$ factor present in equations (12.1.6) and (12.1.8)? What is the significance of the $\frac{1}{V}$ factor in $\phi(\textbf{r}) = \frac{1}{V} \sum_{\textbf{k}} \tilde{\phi}(\textbf{k}) e^{i\textbf{k} \cdot \textbf{r}}$?
In contrast, the article on Wikipedia does not include this prefactor. I realize that that article is dealing with the general case, whereas here we are considering a system with a cubic box of volume $V$. But shouldn't the units of $\phi(\textbf{r})$ be the same as those of $\tilde{\phi}(\textbf{k})$? The $\frac{1}{V}$ seems to preclude $\phi(\textbf{r})$ and $\tilde{\phi}(\textbf{k})$ having the same units.
Do you have any advice? Thanks.
| I) Let us just consider $1$ dimension for simplicity. (The generalization to higher dimensions is straightforward). Then the volume factor $V$ is just a length factor $L$.
II) The standard Fourier series formulas can be derived from $(12.1.7)$ and $(12.1.6)$ by taking the length $L$ to be $L=2\pi$. Then $(12.1.7)$ and $(12.1.6)$ become the standard Fourier series formulas
$$\tag{12.1.7'} c_{n} ~=~ \frac{1}{2\pi}\int_{-\pi}^{\pi} \! dx~ f(x) e^{-in x}, $$
$$\tag{12.1.6'} f(x)~=~\sum_{n\in\mathbb{Z}} c_n~e^{in x}
~=~f(x+2\pi), $$
via the identifications
$$\ell~=~ n~\in~\mathbb{Z}, \qquad \tilde{f}(\ell) ~=~2\pi c_{n} . $$
III) Going back to $3$ dimensions, the $1/V$ normalization in $(12.1.6)$ is important. Of course, in another convention, it could be put in $(12.1.7)$ instead, or alternatively, symmetrically as $1/\sqrt{V}$ in both formulas $(12.1.6)$ and $(12.1.7)$.
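A 1d sketch of the bookkeeping (the period and sample function are arbitrary choices; $L$ plays the role of $V$): with $\tilde f$ defined with no prefactor as in $(12.1.7)$, the inverse sum needs the $1/L$ to reproduce $f$, and it is $\tilde f(0)/L$, not $\tilde f(0)$, that equals the mean of $f$, which is exactly the units mismatch the question asks about.

```python
import cmath, math

L = 3.0  # the period; plays the role of V in one dimension
f = lambda x: 2.0 + math.cos(2.0 * math.pi * x / L) + 0.5 * math.sin(6.0 * math.pi * x / L)

def coeff(n, m=6000):
    """f~(k_n) = int_0^L f(x) exp(-i k_n x) dx, as in (12.1.7), by the midpoint rule."""
    k = 2.0 * math.pi * n / L
    h = L / m
    return sum(f((j + 0.5) * h) * cmath.exp(-1j * k * (j + 0.5) * h) for j in range(m)) * h

coeffs = {n: coeff(n) for n in range(-5, 6)}

def reconstruct(x):
    """f(x) = (1/L) sum_n f~(k_n) exp(i k_n x): note the 1/L prefactor of (12.1.6)."""
    return sum(c * cmath.exp(1j * 2.0 * math.pi * n / L * x) for n, c in coeffs.items()).real / L

err = max(abs(reconstruct(x) - f(x)) for x in (0.1, 0.7, 1.9, 2.6))
mean_f = coeffs[0].real / L  # f~(0)/L, not f~(0), is the mean of f (here 2.0)
```

Dropping the `/ L` in `reconstruct` returns $L$ times the original function, which is the dimensional mismatch made visible.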
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Topological phase Can anybody tell me, if generically any system, which is solely described by a topological field theory, resides in a topological phase? I cant find any clear notion of topological phase. Only topological phase of matter, but I mean any kind of system.
Thanks for your help.
| Topological order is a new kind of order in zero-temperature phases of quantum spins, bosons, and/or electrons. The new order corresponds to patterns of long-range quantum entanglement. Topological order is beyond the Landau symmetry-breaking description. It cannot be described by local order parameters and long-range correlations. However, topological orders can be described/defined by a new set of quantum numbers, such as ground state degeneracy, non-Abelian geometric phases of degenerate ground states, quasiparticle fractional statistics, edge states, topological entanglement entropy, etc.
Fractional quantum Hall states and quantum string liquids are examples of topologically ordered phases.
The low-energy effective theory of a topological phase happens to be a topological quantum field theory. In nature, topological quantum field theory always appears as the low-energy effective theory of a topological phase of quantum spins, bosons, and/or electrons. By definition a topological phase is always a quantum phase of quantum spins, bosons, electrons, etc., i.e., a topological phase is always a quantum state of matter.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is there an analogue of configuration space in quantum mechanics? In classical mechanics coordinates are something a bit secondary. Having a configuration space $Q$ (manifold), coordinates enter as a mapping to $\mathbb R^n$, $q_i : Q \to \mathbb R$. The primary thing is the manifold itself and its points.
On the contrary, in quantum mechanics the classical coordinates become operators $\hat q_i$. And I never encountered any sort of "manifold abstraction" for the position operator. Is there a coordinate-free approach to the position operator in non-relativistic quantum mechanics?
| If you work with a path integral to define the quantum system, then the path integral sums over paths that live on the manifold. The weight of each path is given by the action, an integral along that path, which is defined without referring to a specific coordinate system.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How can one know if a theory allow action at a distance effects or not? 1-In general, if a theory has action at a distance effects, where can that appear exactly in the theory?
2-Does it appear in the dynamical law of the theory? (does it appear in Newton's 2nd law? where can it be spotted?)
3-Does it appear in the force law of the interaction? (it is said that Newton's law of gravitation, $\displaystyle F\sim\frac{m_1m_2}{r^2}$, supports action at a distance effects. How can one see that from the form of the law?)
4-Before special relativity, causality roughly means that causes always come before effects. Now if the force law allows action at a distance, as in Newtonian gravity in 3, then the interaction is instantaneous. It seems that the words "before" and "after" lose their meaning in this case; how then is causality defined?
| Take a look at Newton's law in the form of gravitational potential $$\nabla^{2}\phi=4\pi G\rho$$ Let's say we change the mass density of whatever our gravitating object is. Now, the gravitational field is instantly changed - everywhere, everyone feels a different potential, and a different gravitational force.
Like you said in 3), it also appears in the force law. Let's say you grabbed one of the masses and yanked it backwards, increasing the radius. Even if the other object is at the other end of the observable universe, it instantly feels a weaker gravitational force. This is instantaneous.
4) Right. Newton himself said the idea that gravity occurred instantaneously everywhere was 'philosophically absurd'. However, Newton's laws worked excellently to describe the motion of planets, and the trajectory of objects through gravitational fields. There were some attempts to find a 'speed of gravity' by formulating a more complete theory of gravity, such as Le Sage gravity, but none really caught on. Of course, this was until general relativity, which solves the problem of gravitational action at a distance.
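To make the instantaneity concrete, note that the Newtonian force law takes only the current separation as input; there is no propagation delay anywhere in the formula. A minimal sketch (the masses and distances are illustrative numbers, not from the answer):

```python
# Newtonian gravity: the force depends only on the instantaneous
# separation r; the formula contains no propagation delay.
G = 6.674e-11  # m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    return G * m1 * m2 / r**2

m1 = m2 = 1.0e3  # kg (illustrative)
F_before = newton_force(m1, m2, r=1.0)
F_after = newton_force(m1, m2, r=2.0)  # yank one mass back: r doubles

print(F_before / F_after)  # -> 4.0, and the change is felt "everywhere" at once
```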
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/33977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What is Dalitz decay? What is Dalitz decay?
I know there are the Dalitz decay $\pi^0 \to e^+ + e^- + \gamma$, the $\omega \to \pi^0 + e^+ + e^-$ decay, and maybe more. But is there a rule to say which decays are Dalitz decays and which are not?
Is there a rule to say which particle can decay by Dalitz decay and which does not?
| A particle's Dalitz decay means the particle decays to a massless gauge boson and two massless fermions. You can find this definition in 1308.0422.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is it wrong to talk about wave functions of macroscopic bodies? Does a real macroscopic body, like a table, a human or a cup, permit description by a wave function? When is it possible and when not?
For example in the "Statistical Physics, Part I" by Landau & Lifshitz it is argued that such systems must be described via the density matrix (chapter I, about statistical matrix). As far as I got it, roughly speaking, macroscopic bodies are so sensible to external interaction that they never can be counted as systems, one have to include everything else to form a system. Is my interpretation right?
When is it wrong to talk about wave functions of bodies that surround us?
| Almost always, one cannot write a wavefunction for a macroscopic object even in principle, because if something is macroscopic, it means that it is usually entangled strongly with the environment (i.e., decohered by it, as others have pointed out). By definition of entanglement, if two systems are entangled, then the combined system cannot be written as the product of wavefunctions of each system, i.e., one cannot associate any wavefunction with the individual systems.
The exception is of course for very carefully prepared systems at low temperatures where you try to minimize the decohering influence of the environment. Perhaps a table near absolute zero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 3
} |
Negative Mass and gravitation Since Newtonian gravity is analogous to electrostatics, shouldn't there be something called negative mass? Also, a moving charge generates a magnetic field, but why doesn't a moving mass generate some analogous field?
| Moving mass does generate gravitation differently from stationary mass. This is the "gravitomagnetic" effect predicted by Lense and Thirring in the 1920s and measured by Gravity Probe B:
http://en.wikipedia.org/wiki/Gravitoelectromagnetism
It is related to the "frame dragging" effect that you hear about with respect to spinning black holes. There, there is a spin-dependent radius where an observer will be outside the horizon, and able to escape to infinity, but will not be able to remain at rest relative to infinity, even with an infinitely strong rocket: they will be forced to co-rotate with the black hole.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Space Expansion vs. Relative Motion Given 2 objects moving at some velocity $v$ relative to one another, is it possible to determine whether they are moving or whether the space between them is expanding?
| Thinking about this question, one is eventually led to the initial attempts to formulate Mach's principle. In that hypothesis, which predates the creation of general relativity but came after special relativity was established, Ernst Mach speculated that if there is no special frame of reference from which to measure absolute velocities, there should not be absolute accelerations either; objects will have an inertial response only when one tries to accelerate them relative to the "distant background of stars". In a sense, general relativity captures that sort of behavior in the frame-dragging effect confirmed by Gravity Probe B (although the data from that probe are still unclear as to whether we live in a universe with non-zero torsion or not).
But I think it is worth mentioning that the dilemma in your question has a somewhat similar flavor to Mach's dilemma, and Mark's answer is a proof of that; indeed, because all the objects are moving consistently, we classify it as a space expansion. Even if the movements were not perfectly uniform, we would say that the space expansion describes the large-scale behavior of spacetime, while individual galactic motions describe the granular scale.
Who knows? Maybe there is some hidden symmetry, still eluding us, that is relevant to inertia and cosmology.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
What causes a spark to move along rods that are not parallel? I took my son to a science museum where they had a gadget that many of us probably saw in movies involving a mad scientist. The gadget had two metal rods about two inches apart at the bottom. The rods were about six feet long, and four inches apart at the top. An electric spark would start at the bottom where the rods are about two inches apart. Then the spark would move up to the end where the rods are about four inches apart. The spark took about three seconds to get from one end to the other. After the spark got to the end, it would start again at the end where the rods are close together. I am fairly sure the spark is caused by a high voltage between the rods. What causes the spark to start at one end, and move to the other end?
| The device you describe is called a Jacob's Ladder. You are correct that it is high voltage between the rods that produces the initial spark at the bottom of the ladder where the gap between the rods is the narrowest. Then the ionized air heats up, becoming less dense, so it rises. The current path rises as well because once a breakdown of the air has occurred, that ionized path is always the path of least resistance between the rods.
An easy way to test this explanation would be to place the rods horizontally instead of vertically. Presumably no movement along the rods from the narrow separation end to the wide separation end would be observed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
What exactly does the holographic principle say? Does the holographic principle say that, given a spatially enclosing boundary satisfying the Bousso condition on expansion parameters, the log of the number of microstates in its interior is bounded by $A/4$ (equivalently, the number of microstates is bounded by $\exp\{A/4\}$), where $A$ is its area? Or does it say something stronger, namely that the state restricted to the boundary uniquely determines the state restricted to the interior? Can we have the former without the latter?
| There's no a priori reason why we can't have the former without the latter. The stronger version was only introduced to resolve the black hole information loss "paradox".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 1
} |
If the electron is point like, then what is the significance of the classical radius of the electron? What is the physical meaning/significance of the classical radius of the electron if we know from experiments that the electron is point like?
Is there similarly a classical radius of the photon? The W and Z bosons?
| That wiki article itself provides the answer:
In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy - not taking quantum mechanics into account.
So since the photon is massless and uncharged (it doesn't interact with itself), its "classical radius" would be zero. The W bosons are charged and could similarly be given classical radii, but the Z boson is neutral and could not. In fact, that article also gives the formula for finding the classical radius of any charged particle:
$$r=\frac{1}{4\pi\epsilon_0}\frac{q^2}{mc^2}$$
where $q$ is the charge and $m$ is the mass.
Edit in response to Revo's comment:
Care to clarify what it is about the quoted section that you don't understand?
The electrostatic interaction between charged particles contributes to their potential energy, e.g. the repulsion between like-charged particles is implemented by their "wanting" to move farther apart in order to decrease their potential energy. So if one were to assemble a sphere of charge, it would cost energy to hold it together since the same-charged parts repel each other. The bigger the sphere is, the less energy it takes because the charge is allowed to be more spread out, as it wants. This electrostatic energy is then viewed as the "source" of the electron's mass via Einstein's mass-energy equivalence relation. So given the electron's charge, one can solve for the size of the sphere necessary to get the correct electrostatic energy that is equivalent to the electron's mass.
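As a quick numerical check of the formula above (using CODATA values for the constants, which are my addition, not the answer's), plugging in the electron's charge and mass reproduces the familiar classical electron radius of about $2.82\times 10^{-15}$ m:

```python
import math

# r = (1 / (4 pi eps0)) * q^2 / (m c^2), with CODATA constants
eps0 = 8.8541878128e-12  # F/m
q = 1.602176634e-19      # C  (electron charge magnitude)
m = 9.1093837015e-31     # kg (electron mass)
c = 299792458.0          # m/s

r_e = (1.0 / (4.0 * math.pi * eps0)) * q**2 / (m * c**2)
print(r_e)  # ~2.8179e-15 m, the classical electron radius
```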
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Why do photons travel? Photons travel at the speed of light.
Is there a known explanation of this phenomenon, and if yes, what is it?
Edit:
To be clearer, my question is why do photons travel at all. Why do they have a speed?
| Kind of as an expansion on what drake said, this can be explained in several ways. For example:
In electromagnetism, we know that Maxwell's equations govern electromagnetic radiation. From Maxwell's equations you can derive the EM wave equation
$$\frac{\partial^2\vec{E}}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2\vec{E}}{\partial t^2}$$
(and the same for $\vec{E}\to\vec{B}$) which has solutions corresponding to waves that travel at light speed. As the quanta of these waves, photons will also travel at light speed.
In special relativity, the energy of a particle is related to its mass via $E = \gamma mc^2$. Photons are massless, but they have finite energy. The only way both of these facts can be true without rendering $E = \gamma mc^2$ outright incorrect is if $\gamma$ is undefined, and since $\gamma = 1/\sqrt{1 - v^2/c^2}$, the only way to make $\gamma$ undefined is to have $v = c$.
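The second argument can be sketched numerically: $\gamma$ grows without bound as $v \to c$, so a finite energy $E = \gamma m c^2$ at $v = c$ is only consistent with $m = 0$. A minimal illustration (units with $c = 1$ are my choice):

```python
import math

def gamma(v, c=1.0):
    # Lorentz factor; undefined (division by zero) at v = c
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for v in (0.0, 0.9, 0.99, 0.9999):
    print(v, gamma(v))  # gamma grows without bound as v -> c

# At v = c the denominator vanishes: gamma(1.0) raises ZeroDivisionError,
# so E = gamma*m*c^2 can only stay finite there if m = 0.
```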
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 0
} |
Parity, how many dimensions to switch? Parity is described in Wikipedia as flipping of one dimension, or - in the special case of three dimensional physics - as flipping all of them.
Is there any simple rule that generalises both for any dimension? Like: "Flip an odd number of dimensions."?
| I think David is being a bit harsh, because I had to read the Wikipedia article a couple of times to see what they were getting at.
As the article states at the beginning, a parity transformation is the flip of a single spatial co-ordinate. In effect it's like looking in a mirror: when you look in a mirror your height and width co-ordinates are unchanged but the depth (normal to the mirror) is flipped.
However in 3-D flipping all three co-ordinates is equivalent to a rotation plus a reflection, so it also flips the parity. It isn't the same as flipping a single co-ordinate, but it changes the parity in the same way.
I must admit I'm not sure how this extends to higher dimensions. In 3D two flips is equivalent to a rotation around the axis normal to the two axes being flipped, but in >3D obviously there is more than one axis normal to the two axes being flipped and my grasp of hyper-dimensional geometry isn't good enough to work out what happens. However I think that you are correct and an odd number of flips will always change the parity while an even number leaves it unchanged.
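The conjectured rule can in fact be checked in any dimension with determinants: flipping $k$ axes is the diagonal matrix with $k$ entries equal to $-1$, whose determinant is $(-1)^k$, so orientation (parity) is reversed exactly when $k$ is odd. A small brute-force sketch:

```python
# Flipping k axes in n dimensions is the diagonal matrix with k entries
# equal to -1; its determinant is (-1)**k.  Parity (orientation) is
# reversed exactly when the determinant is -1, i.e. when k is odd.
def det_of_flips(n, k):
    diag = [-1] * k + [1] * (n - k)
    det = 1
    for d in diag:
        det *= d
    return det

for n in range(1, 6):       # dimensions 1..5
    for k in range(n + 1):  # number of flipped axes
        assert det_of_flips(n, k) == (-1) ** k
        assert (det_of_flips(n, k) == -1) == (k % 2 == 1)

print("an odd number of flips reverses parity in every dimension checked")
```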
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is it intuitive that the conserved quantity from time symmetry is what we know as energy? Is there an easy (aka intuitive) way to understand that the conserved quantity from time translation symmetry is just what we call energy?
In other words, we use two definitions of energy. One comes from Noether's theorem, and I've been told this is the fundamental one. The other is what you learn in school and is what appears in the examples below. The question is how to connect these two definitions.
Examples
*
*I can lift weights, so they get more energy.
*I can boil water for tea, so it gets more energy.
*I can burn $CaO$ with carbon, so I get carbide, which has more energy.
*..
If I define energy as conserved quantity, how do I arrive at my examples..?
(Well, "easy/intuitive" is in the eye of the beholder. Thank you nevertheless.)
Side question: We have energy conservation in thermodynamics. I have never seen a Lagrangian formulation of thermodynamics. Can I only hope for an answer of my main question in theories that have such a formulation?
| Answer to your side question:
Energy conservation indeed follows from the symmetry properties of the Lagrangian. The first law of thermodynamics is a little more than energy conservation. It says that although the heat change $\delta Q$ in the system and the work done $\delta W$ on it are inexact differentials, their sum $dU$ is an exact differential. That is, although the total heat change and work done depend on the path taken by a thermodynamic transformation, their sum is the same for all paths. Their sum $U$ is therefore a state function, called the system's internal energy.
The first law of thermodynamics guarantees the existence of an internal energy function.
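A concrete illustration of the path-independence claim, with a monatomic ideal gas and two different two-step paths between the same states (all numbers below are illustrative choices of mine):

```python
# Monatomic ideal gas: U = (3/2) P V.  Take two different two-step paths
# from state A = (P1, V1) to state B = (P2, V2).
P1, V1 = 1.0e5, 1.0e-3  # Pa, m^3
P2, V2 = 2.0e5, 2.0e-3

def U(P, V):
    return 1.5 * P * V

dU = U(P2, V2) - U(P1, V1)

# Path 1: isobaric expansion at P1, then isochoric pressure rise at V2.
W1 = P1 * (V2 - V1)  # work done by the gas
Q1 = dU + W1         # first law: dU = Q - W

# Path 2: isochoric pressure rise at V1, then isobaric expansion at P2.
W2 = P2 * (V2 - V1)
Q2 = dU + W2

print(W1, W2)            # 100.0 200.0 -> work is path-dependent
print(Q1, Q2)            # 550.0 650.0 -> heat is path-dependent
print(Q1 - W1, Q2 - W2)  # 450.0 450.0 -> their difference dU is not
```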
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
Why does heterodyne laser Doppler vibrometry require a modulating frequency shift? In the Wikipedia article on laser Doppler vibrometry (and in other texts such as Optical Inspections of Microsystems), it states that a modulating frequency must be added so that the detector can measure the interference signal with frequency $f_b + f_d$. Why couldn't you remove the modulating frequency $f_b$ and interfere the two beams with frequencies $f_0$ and $f_0+f_d$ to produce a signal with frequency $f_d$ at the detector? I haven't been able to find any reasoning on the subject.
My first idea was that the Doppler frequency might fall inside the laser's spectral linewidth and thus not be resolvable, but for a stabilized low-power CW laser (linewidth on the order of kHz) and a typical $f_d$ in the tens of MHz range, I don't see this being an issue.
| To talk about heterodyne interferometry in your example you would need $f_1-f_2=f_0$; this is impractical because $f_0$ is an optical frequency. If you utilize $f_0$ directly, then we talk about homodyne detection.
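A toy numerical sketch of the distinction, using audio-range stand-in frequencies rather than optical ones (my own illustration, not from the answer): a square-law detector only picks up the difference frequency of the two beams, and without the offset $f_b$ the homodyne beat at $|f_d|$ cannot distinguish an approaching target from a receding one.

```python
import cmath
import math

fs, N = 8192, 8192                 # 1 s of samples; every tone below fits
f0, fd, fb = 1000.0, 25.0, 200.0   # stand-ins for optical, Doppler, offset

def beat_power(f_ref, f_sig, f_probe):
    """Single-bin DFT magnitude (per sample) of the square-law intensity."""
    X = 0j
    for n in range(N):
        t = n / fs
        s = (math.cos(2 * math.pi * f_ref * t)
             + math.cos(2 * math.pi * f_sig * t)) ** 2  # photodiode ~ |E|^2
        X += s * cmath.exp(-2j * math.pi * f_probe * t)
    return abs(X) / N

# Homodyne: reference at f0, signal at f0 + fd -> strong line at fd.
print(beat_power(f0, f0 + fd, fd))      # ~0.5
print(beat_power(f0, f0 + fd, fd + 7))  # ~0 (no line away from fd)

# Homodyne cannot tell approach from recession: f0 - fd beats at fd too.
print(beat_power(f0, f0 - fd, fd))      # ~0.5 as well

# Heterodyne: shift the reference by fb -> the beat moves off to |fb - fd|,
# so positive and negative Doppler shifts give different beat frequencies.
print(beat_power(f0 + fb, f0 + fd, fb - fd))  # ~0.5
```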
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Does the uncertainty principle apply to photons? Wikipedia claims the following:
More generally, the normal concept of a Schrödinger probability wave function cannot be applied to photons. Being massless, they cannot be localized without being destroyed; technically, photons cannot have a position eigenstate and, thus, the normal Heisenberg uncertainty principle does not pertain to photons.
Edit:
We can localize electrons to arbitrarily high precision, but can we do the same for photons? Several sources say "no." See eq. 3.49 for an argument that says, in so many words, that if we could localize photons then we could define a current density which doesn't exist. (Or something like that, I'll admit I don't fully understand.)
It's the above question that I'd like clarification on.
| In addition to what was discussed already, and besides the fact that the Schrödinger formalism is not relevant for photons, a good place to start in my view is Roy Glauber's work (or some other introductory text to quantum optics). There you'd see different uncertainty relations arising, such as between photon number and phase, etc.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/34947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 0
} |
When do the von Neumann projections occur and what causes them? (Transferred as a separate question from comments in Scott Aaronson’s gravitational decoherence question)
Reversing gravitational decoherence
The modern answer seems to be that they never occur, and that therefore nothing causes them. This leads on (or back) to Everett and MWI. A closely related idea is that there is only one collapse, just as there is only one wave function, but it covers the whole universe, all time and all space.
An older answer, at least sometimes called Copenhagen, is that collapses are merely Bayesian updating, and hence do not occur as physical processes.
A third answer, associated with Wigner and von Neumann himself, is that the observer causes collapses by the act of observation.
GRW suggest that collapses occur gradually, due to a nonlinear modification of the Schrodinger equation.
I do not find these answers satisfactory. See related question, “Can you count collapses?”
I am interested in answers that do not modify QM, (unlike GRW), but (like GRW) nevertheless regard collapse as a real physical process that can be observed, and counted.
Can someone provide references to papers that discuss collapses that meet this criterion?
Are there other answers to the collapse question not mentioned above?
| I think you might find a series of blog posts that I wrote recently useful. They also point to a paper that is available on arxiv and is currently in the review pipeline. See http://aquantumoftheory.wordpress.com
The posts specifically discuss how the collapse postulate with the Born rule can emerge from unitary quantum mechanics without the need of additional postulates.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Time period of torsion oscillation For the oscillation of a torsion pendulum (a mechanical motion), the time period is given by
$T=2\pi\sqrt{\frac{I}{C}}$, which is a result of the angular acceleration $\alpha=\frac{d^2\theta}{dt^2}=-\left(\frac{C}{I}\right)\theta$, where $C$ is the torsional constant (restoring couple per unit twist) of the string.
Do we relate $T=\frac{2\pi}{\omega}$ and $\alpha$ to find the time period of the torsion pendulum? I can't really understand how they arrived at $2\pi\sqrt{\frac{I}{C}}$.
| There are lots of different examples of oscillatory systems that have essentially the same mathematical form. Let's start by just looking at one type of differential equation:
$a = \frac{d^2 x}{dt^2} = -\omega^2 x$
This equation has a general solution (you can check this)
$x(t) = A \sin (\omega t + \phi)$
which oscillates with a period of $T=2\pi/\omega$ since the system will be in exactly the same state at any time $t$ and $t + 2\pi/\omega$. So now if we find physical systems that are described by an acceleration equation that looks like this, we know exactly what the solution is.
For example, in a mass spring system we would have
$ma=-kx \rightarrow \frac{d^2 x}{dt^2} = -\left(\frac{k}{m}\right) x$
which is exactly the same form as we had before with $\sqrt{\frac{k}{m}}=\omega$, so plugging this into our equation for period we get $T=2\pi\sqrt{\frac{m}{k}} $
The same applies for your torsion pendulum, you switch the position $x$ with angular position $\theta$ (this doesn't change how to solve the differential equation) and have
$\frac{d^2\theta}{dt^2} = -\left(\frac{C}{I}\right) \theta = -\omega^2 \theta$
as your differential equation with $\omega = \sqrt{\frac{C}{I}}$, it has the same solution, and plugging this $\omega$ into your period formula you get
$T=2\pi \sqrt{\frac{I}{C}}$
This shows that once you solve the general form of one type of differential equation you can apply it to any other type of system that has the same form for its equation (this is also used in circuits, perturbations to an orbit, or pretty much anything else that is slightly pushed away from a stable equilibrium position)
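As a numerical sanity check of the final formula (the values of $C$ and $I$ below are arbitrary choices of mine), integrating $\ddot\theta = -(C/I)\theta$ directly and measuring the time between successive same-direction zero crossings reproduces $T = 2\pi\sqrt{I/C}$:

```python
import math

C, I = 2.0, 0.5  # arbitrary torsional constant and moment of inertia
T_formula = 2.0 * math.pi * math.sqrt(I / C)  # = pi for these values

# Integrate theta'' = -(C/I) theta with velocity Verlet and time the
# interval between two successive downward zero crossings.
dt = 1.0e-4
theta, v = 1.0, 0.0  # release from rest at theta = 1
a = -(C / I) * theta
t, crossings = 0.0, []
while len(crossings) < 2:
    theta_old = theta
    v += 0.5 * a * dt
    theta += v * dt
    a = -(C / I) * theta
    v += 0.5 * a * dt
    t += dt
    if theta_old > 0.0 >= theta:  # downward zero crossing
        # linear interpolation for the crossing time
        crossings.append(t - dt * theta / (theta - theta_old))

T_measured = crossings[1] - crossings[0]
print(T_formula, T_measured)  # both close to pi
```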
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How can one prove that the number of images formed by the reflecting surfaces of two plane mirrors at right angles to each other is 3? How can one prove that the number of images formed by two plane mirrors at right angles to each other is 3?
Is there a mathematical proof for the same?
| Sticking to 2D for simplicity, the transformation matrices for reflections in the x = 0 and y = 0 lines are:
$$ R_x = \left( \begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix} \right) $$
$$ R_y = \left( \begin{matrix} 1 & 0 \\ 0 & -1 \end{matrix} \right) $$
Any combination of these transformations can be given by $R_x^m R_y^n$ where $m$ and $n$ are integers giving the number of each reflection.
But both the reflections are their own inverses i.e. $R_xR_x = I$ and $R_yR_y = I$. If this isn't intuitively obvious you can prove it by multiplying out the matrices above. So for any integer $m$, $R^m$ is equal to $R$ if $m$ is odd or $I$ if $m$ is even. That means there are only three distinct combinations that are not the identity:
*
*$R_x$
*$R_y$
*$R_xR_y$
That's why there are three and only three reflections.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Why does observation collapse the wave function? In one of the first lectures on QM we are always taught about Young's experiment and how particles behave either as waves or as particles depending on whether or not they are being observed. I want to know what about observation causes this change?
| An electron, indeed any particle, is neither a particle nor a wave. Describing the electron as a particle is a mathematical model that works well in some circumstances while describing it as a wave is a different mathematical model that works well in other circumstances. When you choose to do some calculation of the electron's behaviour that treats it either as a particle or as a wave, you're not saying the electron is a particle or is a wave: you're just choosing the mathematical model that makes it easiest to do the calculation.
The next question is: OK, what is an electron then? At the moment our best description is that the electron is an excitation of a quantum field. Using quantum field theory allows us to calculate the behaviour of electrons whether they happen to be involved in particle-like or wave-like interactions. This doesn't mean that the electron is a quantum field, and we'll almost certainly replace quantum field theory by something even more complicated, e.g. some future development of string theory.
The collapse of the wavefunction is a separate issue, and one that has generated much debate over the years. I think the general consensus is that the collapse of the wavefunction is a manifestation of a more general process called decoherence.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/35328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 7,
"answer_id": 4
} |