Why is a ZrO2 reservoir useful for reducing tungsten's work function in Schottky field-emitters? Schottky emitters (field-assisted thermionic emitters) use a tungsten filament for thermionic emission, as well as a barrier-lowering electric field to reduce the effective work function of the filament. Commercial tips are made of <100> tungsten crystal, with a ZrO2 reservoir that is said to be useful in reducing tungsten's <100> work function. But if the emission is at the tip apex (pure tungsten, far from the zirconia), how is the zirconia reservoir capable of lowering the effective work function at the apex? (And why is the whole tip not made out of zirconia in the first place?) (source)
Because zirconium oxides increase in electrical conductivity at higher temperatures, which essentially lowers the energy barrier for electrons to move around, lowering the energy required to get them to the tip. It also preferentially lowers the work function of the tungsten (100) surface, which reduces the spread of the beam from a <100>-oriented filament. The reduction in work function from Zr comes from monolayers of Zr covering the entire emitter. The large agglomeration of Zr in the middle of the tip acts as a reservoir to replenish any evaporated Zr sites.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/291808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does one show that the Pauli matrices together with the unit matrix form a basis in the space of complex 2 x 2 matrices? In other words, show that a complex 2 x 2 matrix can in a unique way be written as $$ M = \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$ If $$M = \Big(\begin{matrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{matrix}\Big)= \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$ I get the following equations $$ m_{11}=\lambda_0+\lambda_3 \\ m_{12}=\lambda_1-i\lambda_2 \\ m_{21}=\lambda_1+i\lambda_2 \\ m_{22}=\lambda_0-\lambda_3 $$
To show that $\{I, \sigma_i\}$ is a basis of the complex vector space of all $2 \times 2$ matrices, you need to prove two things: (1) that $\{I, \sigma_i\}$ are linearly independent, and (2) that every complex $2 \times 2$ matrix can be written as a linear combination of $\{I, \sigma_i\}$. To prove point 1, you need to show that the only four complex numbers $a_0,a_1,a_2,a_3$ such that $$a_0 I + a_1 \sigma_1 + a_2 \sigma_2 + a_3 \sigma_3 = 0$$ where $0$ is the zero matrix, are $a_0=a_1=a_2=a_3=0$. To prove point 2, you need to show that every complex $2 \times 2$ matrix $M$ can be written as $$M = c_0 I + c_1 \sigma_1 + c_2 \sigma_2 + c_3 \sigma_3 $$ where $c_0,c_1,c_2,c_3$ are complex numbers. Your equations are correct, but what do you need to show in order to prove 2?
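Since the four equations in the question can be solved explicitly for the $\lambda_k$, the decomposition (and its uniqueness) is easy to check numerically. A minimal sketch in NumPy; the test matrix is arbitrary:

```python
import numpy as np

# Identity plus the three Pauli matrices
I = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def decompose(M):
    """Return (l0, l1, l2, l3) with M = l0*I + l1*sx + l2*sy + l3*sz.
    The formulas just invert the four linear equations in the question."""
    l0 = (M[0, 0] + M[1, 1]) / 2
    l3 = (M[0, 0] - M[1, 1]) / 2
    l1 = (M[0, 1] + M[1, 0]) / 2
    l2 = (M[1, 0] - M[0, 1]) / (2j)
    return l0, l1, l2, l3

M = np.array([[1 + 2j, 3], [4j, 5]], dtype=complex)  # arbitrary test matrix
l0, l1, l2, l3 = decompose(M)
R = l0 * I + l1 * sx + l2 * sy + l3 * sz
assert np.allclose(R, M)  # the reconstruction matches M exactly
```

Because the linear system is invertible, the coefficients are unique, which is exactly what point 2 (plus point 1) amounts to.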
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Is acceleration continuous? The extrapolation of this Phys.SE post. It's obvious to me that velocity can't be discontinuous, as nothing can have infinite acceleration. And it seems pretty likely that acceleration can't be discontinuous either - that jerk must also be finite. All 4 fundamental forces are functions of distance, so as the thing exerting the force approaches, the acceleration must gradually increase (even if that approach/increase is at an atomic, or sub-atomic level) e.g. in a Newton's Cradle, the acceleration is still electromagnetic repulsion, so it's a function of distance and not changing instantaneously, however much we perceive the contact to be instantaneous. (Even if we ignored the non-rigidity of objects.) Equally I suspect that a force can't truly "appear" at a fixed level. Suppose you switch on an electromagnet: if you take the scale down far enough, does the strength of the EM field "build up" from 0 to (not-0) continuously, or does it appear at the expected strength? Assuming I'm right, and acceleration is continuous, then jump straight to the infinite level of extrapolation ... Is motion mathematically smooth? Smooth: infinitely differentiable at all points.
Acceleration cannot be infinite. Things need a force to accelerate an object, so to infinitely accelerate an object you would need an infinite force. Also, try wording the question better.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
General method of deriving the mean field theory of a microscopic theory What's the most general way of obtaining the mean field theory of a microscopic Hamiltonian/action? Is the Hubbard-Stratonovich transformation the only systematic method? If the answer is yes, then what necessitates our mean field parameter being a bosonic quantity? Is the reason that all directly observable physical quantities should commute?
Actually Wikipedia has an answer for your question, https://en.wikipedia.org/wiki/Mean_field_theory which will tell you how to build a mean field approximation self-consistently based on the Bogoliubov inequality. If you want to know more details about the fundamental inequality, you can go through the book Statistical Mechanics: A Set of Lectures, written by Feynman. Hope it helps.
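As a concrete illustration of the self-consistency that the Bogoliubov/variational route leads to, here is a sketch for the mean-field Ising ferromagnet, whose self-consistency equation is $m=\tanh(\beta J z m)$ (the parameter values below are arbitrary illustrations, not tied to any particular material):

```python
import math

def mean_field_magnetization(beta, J=1.0, z=4, tol=1e-12):
    """Solve the Ising mean-field equation m = tanh(beta*J*z*m)
    by fixed-point iteration, starting from a nonzero seed."""
    m = 0.5
    for _ in range(10000):
        m_new = math.tanh(beta * J * z * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Below T_c (beta*J*z > 1) the iteration finds a nonzero magnetization;
# above T_c it collapses to m = 0.
assert mean_field_magnetization(beta=1.0) > 0.9
assert abs(mean_field_magnetization(beta=0.1)) < 1e-6
```

The same loop structure (guess the order parameter, recompute it from the effective single-site problem, repeat) is what "self-consistent mean field" means in general.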
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Why do some chemicals take electrons from other chemicals? How can some chemicals, if they have an equilibrium of electrons, take away electrons from other chemicals? One example I believe is placing a small amount of gallium on top of some aluminum and watching the aluminum melt. Why does the gallium, if it is in an equilibrium state, need more electrons?
There is no electron exchange when you put gallium on top of aluminum. The known observed reaction is that aluminum diffuses into the gallium because it has a very high solubility there. The tendency of an atom/molecule to take electrons away from others is related to the concept of electronegativity. See Electronegativity. In the end it is due to the fact that the total energy of the reactants is lower after the electron transfer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is a pendulum in dynamic equilibrium? When obtaining the equation of a pendulum following classical mechanics (Virtual Work) we state that: The system is in equilibrium, therefore $\textbf{F} = 0$ and the Virtual Work is $$\textbf{F} \cdot \delta \textbf{r} = 0\tag{1}$$ But, is a pendulum in equilibrium? I mean, the velocity of the pendulum changes with time, so how can we say that the pendulum is in equilibrium? Often the expression $$\textbf{F} - m \ddot{\textbf{r}} = 0\tag{2}$$ is also used to express this equilibrium, but it isn't an equilibrium at all, since the only thing we do is move the inertial force to the left-hand side of Newton's second law $\textbf{F} = m \ddot{\textbf{r}}$. Goldstein says in his book that equation (2) means: that the particles in the system will be in equilibrium under a force equal to the actual force plus a "reversed effective force" $- m \ddot{\textbf{r}}$. What does this mean, and how does it apply to the pendulum?
The equilibrium Goldstein is referring to is the equilibrium between the actual force $\vec F$ acting on the particle and the inertia force $-m\vec a$, i.e., $$\vec F-m\vec a=0.\tag 1$$ The idea, due to d'Alembert, is to extend the applicability of the virtual work principle from statics to dynamics and in some sense to transform the problem of motion to the problem of equilibrium. Note that this is consistent with the fact that we can always go to the reference frame where the particle is at rest (thus static) and in this frame we need to introduce a fictitious force (which is in equilibrium with the interaction force). The above idea seems to be trivial however it is not. The whole point is that, since constraint forces do no virtual work, then Eq. (1) implies $$(\vec F_s-m\vec a)\cdot\delta \vec r=0,$$ where $\vec F_s$ are the specified or impressed forces and $\delta\vec r$ is a virtual displacement. This is actually the so-called d'Alembert Principle, and that is what Goldstein is intending to apply for the pendulum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does a trumpet operate using an open air column or a closed air column Just as the title states. I could not find a coherent answer online. Many thanks in advance
The trumpet is a closed air column according to this source: Closed Air Column A closed-end instrument is an instrument in which one of the ends of the metal tube containing the air column is covered. An example of an instrument which operates on the basis of closed-end air columns is the clarinet. Some instruments which operate as open-end air columns can be transformed into closed-end air columns by covering the end opposite the mouthpiece with a mute. Even some organ pipes serve as closed-end air columns. As we will see, the presence of the closed end on such an air column will affect the actual frequencies which the instrument can produce. If both ends of the tube are uncovered or open, the musical instrument is said to contain an open-end air column [my emphasis]. The above line would not seem to me to apply here, except perhaps to some woodwind instruments. I have no musical background; my answer is based on the similarity of design and operation of the clarinet and the trumpet.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Young's Double Slit Experiment - Problem Can it be proved that there would be no change in the "fringe width" when the main illuminated (source) slit is shifted to a position which makes an angle of $\theta$ with its original position? My try - I first found the fringe width in a normal double slit with the position of the source slit unchanged. However, when I tried the new position of the slit, I got stuck and couldn't equate the two values of fringe width.
The fringe pattern or spacings will not change. As long as D, d and the wavelength stay the same, the position or angle of the light source will not matter. The only thing that may change is the location of the maximum bright spot on the detection screen, but the pattern will remain the same. I cover this on pages 5 and 6 of my paper "single edge certainty" at billalsept.com
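This claim is easy to check numerically: tilting the source by $\theta$ adds a constant path difference $d\sin\theta$, which translates the pattern along the screen without changing the spacing $\lambda D/d$. A minimal sketch (all parameter values arbitrary):

```python
import numpy as np

lam, D, d = 500e-9, 1.0, 1e-3  # wavelength, screen distance, slit separation

def intensity(y, theta):
    # total path difference: source tilt term + usual screen-position term
    delta = d * np.sin(theta) + d * y / D
    return np.cos(np.pi * delta / lam) ** 2

y = np.linspace(-5e-3, 5e-3, 200001)

def fringe_spacing(theta):
    """Mean distance between adjacent intensity maxima on the screen."""
    I = intensity(y, theta)
    peaks = y[1:-1][(I[1:-1] > I[:-2]) & (I[1:-1] > I[2:])]
    return np.mean(np.diff(peaks))

# The spacing equals lam*D/d for both source positions; only the offset moves.
assert np.isclose(fringe_spacing(0.0), lam * D / d, rtol=1e-3)
assert np.isclose(fringe_spacing(0.2), lam * D / d, rtol=1e-3)
```

The tilt term is independent of $y$, so it can only shift the cosine's phase, never its period: that is the whole proof in one line.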
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is nuclear waste more dangerous than the original nuclear fuel? I know the spent fuel is still radioactive. But it has to be more stable than what was put in and thus safer than the uranium that we started with. That is to say, is storage of the waste such a big deal? If I mine the uranium, use it, and then bury the waste back in the mine (or any other hole) should I encounter any problems? Am I not doing the inhabitants of that area a favor as they will have less radiation to deal with than before?
The uranium that was mined was heavily diluted with other elements, and goes through an extensive refining process to produce nuclear fuel. Nuclear waste has 90%-99% of the uranium concentration of the refined nuclear fuel. Thus, it is far more radioactive than the raw mined material and the mine itself is no longer a suitable repository for it. Additionally, nuclear waste contains decay products that are not produced from natural decay, and those can be much more dangerous than the uranium and other elements found naturally in the mine.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/292958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "150", "answer_count": 8, "answer_id": 3 }
What's the reference frame for displacement when calculating work? I know that one way to calculate work is force*(displacement in the direction of the force). But what reference frame does that displacement value come from? One where the object starts at rest? Here's an example to clarify. Imagine two identical rockets in space, each with the same amount of fuel. At $t=0$, the first rocket is at rest in the observer's reference frame, and the second rocket is already moving forward at some speed. They both start their engines and burn out all of their fuel, exerting identical, constant forces. When their engines stop, the second rocket has traveled farther than the first, despite both having burned the same fuel and therefore done the same work/energy. So in that situation, what reference frame would you use for each rocket to measure its displacement? The two rockets do indeed do the same amount of work, right? Am I misunderstanding something?
As a matter of fact, you may use any reference frame for calculating work in general. But if you want to go with the idea of displacement, you may use any inertial frame of reference.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/293128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is frequency discrete? We know that $E = h\nu$, where $E$ = photon energy, $h$ = Planck constant, $\nu$ = frequency. We also know that photon energy $E$ can only come in discrete values (quanta). Does the combination of these two assumptions then determine that frequency $\nu$ can only come in discrete values as well? ====== Note on research: There are Phys.SE questions that are similar to mine, but none seem satisfactory, in terms of explaining how the equation can only take on discrete values.
You say: We also know that photon energy E can only come in discrete values (quanta). but this is not true. It is generally true that the energy of a bound system takes discrete values, but the energy of a free system has a continuous range and can take any value. Since for such a system the energy is not discrete it follows that the frequency is not discrete either.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/293237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's the equation for frustrated total internal reflection? Could someone provide an equation for calculating frustrated internal reflection? Like for a partially reflective laser mirror or a beam splitter. I believe that it depends on the refractive indexes of a first medium and a third medium if a second medium separating the two (like a mirror) is thin enough that evanescent wave coupling allows light to be transmitted through to the third if it's in a certain angle range. However, I couldn't find an equation to calculate this. I found one for total internal reflection but not for this. I could use some help on this, or otherwise something that explains this. Thank you.
Frustrated total reflection obviously means that the usual total reflection at a surface to a medium with lower refractive index $n$ becomes less than total because the thickness of the lower $n$ medium becomes comparable to the evanescent wave damping length penetrating the lower $n$ medium. This can be calculated by using the Fresnel equations with multiple surfaces. See Fresnel equations
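A full treatment needs the multiple-interface Fresnel coefficients, but the dominant behaviour is the exponential decay of the evanescent wave across the gap, roughly $T \sim e^{-2\kappa d}$ with $\kappa = \frac{2\pi}{\lambda}\sqrt{n_1^2\sin^2\theta - n_2^2}$. A minimal sketch of that scaling (the numbers and the crude $e^{-2\kappa d}$ form are illustrative, not the exact transmission formula):

```python
import math

def evanescent_kappa(n1, n2, theta_deg, lam):
    """Decay constant of the evanescent wave in the low-index gap.
    Only defined beyond the critical angle, where n1*sin(theta) > n2."""
    theta = math.radians(theta_deg)
    s = n1 * math.sin(theta)
    if s <= n2:
        raise ValueError("below the critical angle: no evanescent wave")
    return 2 * math.pi / lam * math.sqrt(s**2 - n2**2)

# Glass (n=1.5) / air gap (n=1.0), 45 degrees, 633 nm:
kappa = evanescent_kappa(1.5, 1.0, 45.0, 633e-9)

# Rough transmission scaling with gap thickness d:
for d in (100e-9, 300e-9, 600e-9):
    print(f"d = {d:.0e} m  ->  T ~ {math.exp(-2 * kappa * d):.3f}")
```

For quantitative work one would replace the $e^{-2\kappa d}$ estimate with the three-layer Fresnel result, which also contains interference prefactors; the exponential above is only the envelope.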
{ "language": "en", "url": "https://physics.stackexchange.com/questions/293663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gravitons unaffected by gravitational lensing? Photons have energy, and so they are affected by gravitational lensing. More generally, they feel the force of gravity. Gravitons have energy too, but it seems preposterous to assume they will swerve due to other gravitational fields. This should violate the inverse-square law. But how can they go free? What is actually happening?
Gravitons are definitely affected by gravitational fields (i.e. there is 'back-reaction'); this is part of why general relativity is so hard to solve in general situations (e.g. numerical solutions to the field equations). In terms of the inverse square law: First, and most importantly, the inverse square law comes from the symmetry and dimensionality of the problem: it's an inverse square law because 'field lines' in spherical symmetry pass through surface areas proportional to $r^2$. Second, note that the inverse square law is only an approximation which definitely breaks down in the strong-field regime (which is also where gravitational back-reaction would be important). I think you can interpret the deviations from $r^{-2}$ as resulting from the geometry of space-time losing some of that spatial symmetry in the strong-field regime (i.e. measuring devices will start to disagree on $r$, etc)... perhaps this could even be interpreted as a 'back-reaction' where the existence of the gravitational field starts to cause geometric effects which then lead to deviations from $r^{-2}$. (Hopefully someone wiser about the field equations can comment on whether that's fair or not.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/293884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Phonons and heat conduction What is the physical picture of heat conduction in a poor conductor? In particular, I'm curious about the role of phonons in conduction in poor conductors. I know that phonons (within the harmonic approximation) move without being scattered and would lead to infinite thermal conductivity. This problem is resolved by including anharmonic terms in the Hamiltonian so that there are phonon-phonon scatterings. (1) But how do the phonon-phonon scatterings reduce the thermal conductivity? I wish to understand this both physically and mathematically. The expression for conductivity can depend upon various quantities and scatterings must be affecting one of those. (2) How does this phonon picture explain the fact that when we heat a poor conductor the heat propagates gradually from the hotter to the cooler end? If they are delocalized collective excitations, shouldn't they heat up all parts of the substance at the same time? I don't have a condensed matter background and therefore, a detailed but not-too-technical answer will be helpful.
Ballistic propagation can be observed but that needs special conditions. Normally, transport is diffusive. At low temperatures, scattering is dominated by defects in the lattice. Even isotopes have an effect: diamond with a reduced content of $^{13}$C has higher thermal conductivity than diamond with the natural isotope composition. This scattering determines the mean free path. While $\lambda_{free}$ may be approximately independent of temperature at low temperatures, the conductivity increases with temperature because the phonons carry more energy (proportional to $C_v$). At high temperatures, thermal conductivity decreases because of the shorter mean free path caused by phonon-phonon interaction. When the variations in atomic bond lengths get larger, anharmonic terms become important. The wave equation is not linear anymore, waves do not always pass through each other anymore, and there is a probability that new waves (phonons) will be created.
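The competition described above is captured by the standard kinetic-theory estimate $\kappa \approx \frac{1}{3} C_v v \lambda_{free}$: at high temperature the collapsing mean free path wins over the saturating heat capacity. A sketch with made-up illustrative numbers (not measured values for any real material):

```python
def thermal_conductivity(C_v, v_sound, mean_free_path):
    """Kinetic-theory estimate: kappa = (1/3) * C_v * v * lambda.
    C_v in J/(m^3 K), v in m/s, lambda in m -> kappa in W/(m K)."""
    return C_v / 3.0 * v_sound * mean_free_path

# Illustrative regime comparison for a crystalline solid:
low_T = thermal_conductivity(C_v=1e5, v_sound=5000, mean_free_path=1e-6)
high_T = thermal_conductivity(C_v=2e6, v_sound=5000, mean_free_path=1e-9)

# At high T, phonon-phonon (Umklapp) scattering shrinks the mean free
# path by orders of magnitude, outweighing the larger heat capacity:
assert high_T < low_T
```

Scattering thus enters the conductivity through $\lambda_{free}$, which answers question (1) mathematically: more phonon-phonon events mean a shorter $\lambda_{free}$ and hence a smaller $\kappa$.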
{ "language": "en", "url": "https://physics.stackexchange.com/questions/293995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
Small nucleus emission from a larger nucleus Like alpha decay, is there the possibility of a small (n,z) nucleus coming out of a large (N,Z) nucleus? Why don't lithium and beryllium decay out of big nuclei as helium does?
Yes, this is possible. It is the case of $^{223}\textrm{Ra}$ for instance, which can decay through an $\alpha$ process with a lifetime of $\sim$ 11 days, but also through the emission of a $^{14}\textrm{C}$ nucleus. However, this decay mode is extremely disfavored (branching ratio $\sim 10^{-9}$). There are two factors at play here. One is energetic, because the height of the energy barrier sets the amplitude of the tunneling process. Another factor is that it is much less likely for a nucleus larger than Helium to form in the large nucleus and escape. This increases the lifetime for this mode by several orders of magnitude. References [1] Lund/LBNL Nuclear Data Search, http://nucleardata.nuclear.lu.se/toi/nuclide.asp?iZA=880223 [2] Introductory nuclear physics, Kenneth Krane, section 8.4, "theory of $\alpha$ emission".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/294125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Partition function for classical indistinguishable particles and Bose particles We have two particles that can be in either level $E_0 = 0$ or in level $E_1$. If we treat them as Bose particles, then the partition function will be: $$ Z = 1 + e^{-\beta E_1} + e^{-2\beta E_1}, $$ whereas if we treat them as classical indistinguishable particles we'd get: $$ Z = \frac{(1+e^{-\beta E_1})^2}{2!} = \frac{1}{2} + e^{-\beta E_1} + \frac{e^{-2\beta E_1}}{2}. $$ Why the discrepancy?
It comes from the second partition function: dividing by $2!$ does not count all possible two-particle states correctly. Indeed, the possible states are: both particles in $0$, one particle in $0$ and the other in $E_1$ (a single state, since they are indistinguishable), and both in $E_1$. The division by $2!$ correctly removes the double counting of the mixed state, but it also wrongly halves the two states in which both particles occupy the same level.
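The mismatch can be checked directly; a minimal sketch (choosing $\beta = E_1 = 1$ for concreteness) shows the two partition functions differ by exactly half of the double-occupancy terms:

```python
import math

beta, E1 = 1.0, 1.0
x = math.exp(-beta * E1)  # Boltzmann factor for the upper level

Z_bose = 1 + x + x**2            # three distinct Bose states
Z_classical = (1 + x) ** 2 / 2   # Gibbs-corrected distinguishable count

# The difference is exactly half of the same-level (double-occupancy)
# contributions 1 and x**2, which the 2! division wrongly suppresses:
assert math.isclose(Z_bose - Z_classical, (1 + x**2) / 2)
```

In the dilute/high-temperature limit the mixed terms dominate and the two expressions agree, which is why the Gibbs $1/N!$ prescription works classically.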
{ "language": "en", "url": "https://physics.stackexchange.com/questions/294540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why will we never run into a magnetic field that falls off as $\frac 1 {r^2}$? For example, Walter Lewin says in many lectures that we will never find a magnetic field $B\propto \frac 1 {r^2}$ - why is this? I believe it must be related to $\nabla \times E= -\partial_t B$, but I don't see why this would make the previous impossible.
A magnetic field of the form $$ \boldsymbol{B} \propto \frac{\boldsymbol{\hat{r}}}{r^2} $$ is impossible because $$ \nabla \cdot \left( \frac{\boldsymbol{\hat{r}}}{r^2} \right) = 4 \pi \delta(\boldsymbol{r}), $$ so a magnetic field of this form would violate Maxwell's equations, one of which is $$ \nabla \cdot \boldsymbol{B} = 0. $$ It seems that a magnetic monopole might produce a magnetic field like this, but magnetic monopoles are forbidden in classical electromagnetism.
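The statement that $\hat{\boldsymbol r}/r^2$ is divergence-free away from the origin (with the $4\pi\delta(\boldsymbol r)$ living entirely at $r=0$) can be verified symbolically; a quick sketch with SymPy in Cartesian components:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# B proportional to r_hat / r^2 = (x, y, z) / r^3
Bx, By, Bz = x / r**3, y / r**3, z / r**3

div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)

# Vanishes everywhere except the origin; the delta function at r = 0
# is invisible to this pointwise calculation.
assert sp.simplify(div_B) == 0
```

So such a field would carry all of its (nonzero) divergence at a single point, i.e. a monopole source, which $\nabla \cdot \boldsymbol{B} = 0$ forbids.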
{ "language": "en", "url": "https://physics.stackexchange.com/questions/294640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does the de Broglie-Bohm interpretation explain quantum uncertainty? I have heard recently that the de Broglie-Bohm theory, or pilot wave theory, is an acceptable alternative to the Copenhagen interpretation. But how does it explain Heisenberg's uncertainty principle? Doesn't uncertainty depend on the Copenhagen interpretation?
It's exactly the same in Bohmian mechanics, only the reasoning is different. From Dürr et al. (1992) - DOI: 10.1007/BF01049004: From a general perspective, perhaps the most noteworthy consequence of our analysis concerns absolute uncertainty (Section 11). In a universe governed by Bohmian mechanics there are sharp, precise, and irreducible limitations on the possibility of obtaining knowledge, limitations which can in no way be diminished through technological progress leading to better means of measurement. This absolute uncertainty is in precise agreement with Heisenberg's uncertainty principle. But while Heisenberg used uncertainty to argue for the meaninglessness of particle trajectories, we find that, with Bohmian mechanics, absolute uncertainty arises as a necessity, emerging as a remarkably clean and simple consequence of the existence of trajectories. Thus, quantum uncertainty, regarded as an experimental fact, is explained by Bohmian mechanics, rather than explained away as it is in orthodox quantum theory. Essentially, as Bohmian mechanics is deterministic and relies on a universal wave function, it's impossible to separate our measuring equipment (or ourselves) from the quantity being measured, hence it's never fully in equilibrium, which gives rise to the uncertainty.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/295156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Confusion between two different definitions of work? I'm doing physics at high school for the first time this year. My teacher asked us this question: if a box is slowly raised from the ground to 1m, how much work was done? (the system is only the box) Using the standard definition, $W = Fd\cos(\theta)$, the work should be 0, because the sum of the forces, the force due to gravity and the force of the person, is 0. However, using the other definition he gave us, $W = \Delta E$, work is nonzero. $\Delta E = E_f - E_i$ , so that would be the box's gravitational potential energy minus zero. My teacher might have figured it out but class ended. Does anyone have any insight?
Work is done by something, on something. If you put the weight inside a box (so you can't see it), with the rope sticking out of the top, and you pull on the rope, you can say "I am doing work on something in the box". You don't know what the something is - gravity, a gang of minions, a very long spring, a paddle wheel in a bath of treacle, ... and it doesn't matter. When you look inside the box, you will see that something else is also pulling on the box - but it is pulling in the opposite direction to the motion of the box. So gravity is doing negative work on the box, and we can say that the box + earth gains potential energy. If you look at you, the box, the earth all together - then no net work was done on the total system (what you would have if you put you, the weight and the earth all in a really big box). No external forces acting on the contents of the box (for the purpose of this explanation) -> no net work. What actually happened is that your work was converted to potential energy of the weight, and the total energy of the system you+weight+earth is unchanged.
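The bookkeeping in this answer can be written out with numbers; a minimal sketch for a box raised slowly through 1 m (values arbitrary; "slowly" means $\Delta KE \approx 0$):

```python
g = 9.81          # m/s^2
m, h = 2.0, 1.0   # kg, m

W_hand = m * g * h       # applied force up, displacement up: positive work
W_gravity = -m * g * h   # gravity down, displacement up: negative work
W_net = W_hand + W_gravity

delta_KE = W_net         # work-energy theorem applied to the box alone
delta_PE = -W_gravity    # potential energy of the box+Earth system

assert W_net == 0.0           # box starts and ends at rest: no net work
assert delta_PE == W_hand     # the hand's work became potential energy
```

Both of the teacher's statements hold at once: $W_{net} = \Delta KE = 0$ for the box, while the hand's $+mgh$ of work shows up as potential energy of the box+Earth system.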
{ "language": "en", "url": "https://physics.stackexchange.com/questions/295245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
Why do excited states decay if they are eigenstates of Hamiltonian and should not change in time? Quantum mechanics says that if a system is in an eigenstate of the Hamiltonian, then the state ket representing the system will not evolve with time. So if the electron is in, say, the first excited state then why does it change its state and relax to the ground state (since it was in a Hamiltonian eigenstate it should not change with time)?
The atomic orbitals are eigenstates of the Hamiltonian $$ H_0(\boldsymbol P,\boldsymbol R)=\frac{\boldsymbol P^2}{2m}-\frac{e^2}{R} $$ On the other hand, the Hamiltonian of Nature is not $H_0$: there is a contribution from the electromagnetic field as well $$ H(\boldsymbol P,\boldsymbol R,\boldsymbol A)=H_0(\boldsymbol P+e\boldsymbol A,\boldsymbol R)+\frac{1}{8\pi}\int_\mathbb{R^3}\left(\boldsymbol E^2+\boldsymbol B^2\right)\,\mathrm d\boldsymbol x $$ (in Gaussian units with $c=1$, and where $\boldsymbol B\equiv\nabla \times\boldsymbol A$ and $\boldsymbol E\equiv -\dot{\boldsymbol A}-\nabla\phi$) Therefore, atomic orbitals are not stationary: they depend on time and you get transitions between different states. The point is that what determines time evolution is the total Hamiltonian of the system, and in Nature, the total Hamiltonian includes all forms of interactions. We usually neglect most interactions to get the overall description of the system, and then add secondary effects using perturbation theory. In this sense, the atom is very accurately described by $H_0$, but it is not the end of the story: there are many more terms that contribute to the real dynamics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/295365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 3, "answer_id": 0 }
Question on the Proof of Hohenberg-Kohn Theorem Between Equation (5) and Equation (6) of the original paper titled "Inhomogeneous Electron Gas" by P. Hohenberg and W. Kohn, there is a sentence stating that: "Now clearly (unless $v'(\mathbf{r}) - v(\mathbf{r})=\text{constant}$) $\Psi'$ cannot be equal to $\Psi$ since they satisfy different Schrodinger equations." This statement basically says: two different Hamiltonians have different ground-state wavefunctions. This is a necessary condition of Hohenberg-Kohn theorem 1. But I'm not convinced by this statement, nor can I find a proof of it. Is there a proof of this statement? Thank you in advance for providing any references or comments. Correction: After thinking about this question, I find that this statement has a strict condition: the two Hamiltonians differ by an external potential $v(\mathbf{r})-v'(\mathbf{r})$ which is not a constant. With this condition, the proof is not hard.
(The notations follow the original paper by Hohenberg and Kohn.) Suppose there are two Hamiltonians $H_1 = T+U+V_1$ and $H_2=T+U+V_2$, where $T = \frac{1}{2}\int\nabla \psi^\dagger\nabla \psi \, d^3 r$, $U = \frac{1}{2}\int\frac{1}{|\mathbf{r}-\mathbf{r}'|}\psi^\dagger(\mathbf{r})\psi^\dagger(\mathbf{r}')\psi(\mathbf{r}')\psi(\mathbf{r})\,d^3 r \, d^3 r'$ and $V_i = \int v_i(\mathbf{r}) \psi^\dagger(\mathbf{r})\psi(\mathbf{r}) \, d^3 r$, $(i=1,2)$. Note the precondition: $v_1(\mathbf{r})$ and $v_2(\mathbf{r})$ differ by more than a constant. We can prove that $H_1$ and $H_2$ don't have the same ground-state wavefunction by reductio ad absurdum. Let's assume they have the same ground state $\Psi$, i.e., $H_1 \Psi = E_1 \Psi$ and $H_2 \Psi = E_2 \Psi$. $\Rightarrow (H_1 - H_2) \Psi = (E_1 - E_2) \Psi$ $\Rightarrow (V_1-V_2) \Psi = \epsilon \Psi$, ($\epsilon = E_1 - E_2$ is a constant.) Now plug in the expression for $V_i$ and $\psi^\dagger(\mathbf{r})\psi(\mathbf{r})=\sum^{N}_{i=1}\delta(\mathbf{r}-\mathbf{r}_i)$: $\Rightarrow \int \big(v_1(\mathbf{r}) - v_2(\mathbf{r})\big) \sum_{i=1}^N \delta(\mathbf{r}-\mathbf{r}_i)\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\,d^3 r = \epsilon \Psi (\mathbf{r}_1,\ldots,\mathbf{r}_N)$ $\Rightarrow \Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N) \bigg(\sum_{i=1}^N \big(v_1(\mathbf{r}_i) - v_2(\mathbf{r}_i) \big) - \epsilon\bigg) = 0$ Since $\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)$ is not identically zero: $\sum_{i=1}^N \big(v_1(\mathbf{r}_i) - v_2(\mathbf{r}_i) \big) = \epsilon \Rightarrow v_1(\mathbf{r}) - v_2(\mathbf{r})=\text{constant}$, which contradicts our precondition.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/295590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Is general relativity a background dependent theory in five dimensions? I read the article What is a background-free theory? by John Baez and was wondering whether, if I add a fifth dimension to a background-independent theory like general relativity, I get a background-dependent theory like Maxwell's equations. The only difference: in Maxwell's equations you have electromagnetic fields; in five dimensions you have spacetime fields - or spacetime fluid-flows, or whatever you want to call them. I couldn't find good arguments against or in favor of this viewpoint.
By a five-dimensional extension of general relativity that unifies it with electromagnetism, you presumably mean Kaluza-Klein theory or something very similar. As explained here, K-K is indeed background-dependent; as with string theory decades later, this is considered a problem.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/296157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Several strange phenomena when a magnetron interacts with light bulbs Is this video real or is it a hoax? What physics is going on here? https://youtu.be/POGSEG20hkg In the video, there appears to be a magnetron set up, and someone is using fluorescent light bulbs and potatoes to play with it. 00:18 - Eighteen seconds into the video, a fluorescent light bulb seems to turn on when the magnetron touches the glass of the bulb. What causes the bulb to illuminate? 00:28 - Twenty-eight seconds in, the antenna on the magnetron acts like a candle flame. Is this an ordinary flame or some physical effect of the microwaves? 01:05 - One minute and five seconds in, a hollow glass tube, possibly a double-ended high-power bulb, is placed close to the antenna and only lights up after an initial spark at one end. What extra effect does the initial spark have that makes it light up?
I'm not sure what the gas is, but it's the same as your local neon signs. There is nothing special about what he is up to: this is just exciting argon, neon and similar gases with a high-frequency electromagnetic field. It could be low frequency too; the wavelength is not my area of knowledge, but the basic principle is not too obscure: it is a field interaction with the gases in the tube.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/296287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Maxwell equations from Euler-Lagrange equation: I keep obtaining the wrong equation I'm deriving the Maxwell equations from this Lagrangian: $$ \mathscr{L} \, = \, -\frac{1}{4} F^{\mu \nu}F_{\mu \nu} + J^\mu A_\mu \tag{1}$$ My signature is $$(+ - - -)\tag{2}$$ and $$ F^{\mu \nu} \, = \,\left(\begin{matrix}0 & -E_x & -E_{y} & -E_{z} \\ E_{x} & 0 & -B_{z} & B_{y} \\ E_y & B_{z} & 0 & -B_x \\ E_z & -B_{y} & B_{x} & 0\end{matrix}\right)\tag{3} $$ My procedure is almost exactly the same as this one: https://physics.stackexchange.com/a/14854/121554 But he has a $+\frac{1}{4}F^{\mu \nu}F_{\mu \nu}$ in the lagrangian. So, while he obtains the right equation $$\partial_\mu F^{\mu \nu}\, = \, J^\nu,\tag{4}$$ I carry a minus sign till the end and my final equation is $$ J^\nu \,=\, -\partial_\mu F^{\mu \nu}\, = \, \partial_\mu F^{\nu \mu}\, ; \tag{5} $$ Which is clearly wrong if you write it down explicitly in terms of the fields, the charge and the currents. Is my Lagrangian wrong for my metric and my definition of the electromagnetic field tensor? We spent some time talking about that Lagrangian at lesson and the professor gave a lot of importance to that minus sign in order to have a positive kinetic term. Am I missing something? I can write down all my calculations if requested but they are basically the same as in the link I provided above.
I suspect the error is in your source term: with reference to Jackson's "Classical electrodynamics", the correct Lagrangian density is $$ {\cal L}=-\frac{1}{16\pi} F_{\alpha\beta}F^{\alpha\beta}-\frac{1}{c}J_\alpha A^\alpha\, , $$ which differs from yours by a sign in the source term. (The other factors $1/16\pi$ and $1/c$ are linked to the use of Gaussian units.) The article you link to also has the same sign for both terms in the Lagrangian density.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/296552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is bench pressing your bodyweight harder than doing a pushup? Why does bench pressing your own bodyweight feel so much harder than doing a push-up? I have my own theories about the weight being distributed over multiple points (like in a push-up) but would just like to get a definite answer.
When doing pushups, you're not lifting all of your body weight the full distance. Your heels don't move any appreciable amount and a point halfway between shoulders and toes only moves about half the height. Consider an iron bar, 1m in length. Lifting the whole thing 1m requires twice as much energy as lifting it half a meter. Lifting one end to a height of 1m and leaving the other end on the floor means lifting the mass half a meter on the average.
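The center-of-mass argument above can be sanity-checked with a few lines of arithmetic. The sketch below treats the body as a uniform bar hinged at the toes, which is an idealization (a real body's mass is concentrated toward the torso, so the real ratio differs), and the 70 kg / 0.4 m numbers are made up for illustration:

```python
def lift_work(mass_kg, com_rise_m, g=9.81):
    """Work against gravity = mass * g * (rise of the center of mass)."""
    return mass_kg * g * com_rise_m

M, h = 70.0, 0.4  # hypothetical: 70 kg body, 0.4 m range of motion

# Bench press: the whole mass rises by h, so the center of mass rises h.
bench = lift_work(M, h)

# Push-up, uniform-bar model: toes stay on the floor and the shoulders
# rise by h, so the center of mass (the bar's midpoint) rises only h/2.
pushup = lift_work(M, h / 2)

print(bench / pushup)  # ratio of the two efforts, ≈ 2
```

The same halving shows up however you slice the bar: each slice rises in proportion to its distance from the toes, averaging to half the shoulder height.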
{ "language": "en", "url": "https://physics.stackexchange.com/questions/296650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 6, "answer_id": 3 }
Electric field associated with moving charge I have recently started to learn about the electric field generated by a moving charge. I know that the electric field has two components: a velocity term and an acceleration term. The following image is of the electric field generated by a charge that was moving at a constant velocity, and then suddenly stopped at x=0: I don't understand what exactly is going on here. In other words, what is happening really close to the charge, in the region before the transition, and after the transition? How does this image relate to the velocity and acceleration components of the electric field?
According to Special Relativity, information travels at the speed of light and this case is no different. The information here refers to the position of the particle at a certain time. Let me explain. When the charge was at x=1, its field lines were radially outward. When the charge reaches x=0, the information that the charge has reached that point hasn't been conveyed to the region outside the circle in the figure. Hence if the field lines outside the circular region are extrapolated, they intersect at x=1.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/296904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 1 }
Why is there a Cardy formula in 2D CFT? In 2d CFTs, we have the Cardy formula which tells us the number of states, which can be derived from the partition function by using modular invariance. What special property of 2D CFTs make it possible to derive such formula?
This question was just bumped to the homepage, so let me try to give a physical answer to explain why modular invariance is particular to two dimensions. The Cardy formula tells you something about the density of states of a CFT. In order to count the states of a CFT in $d$ dimensions, you naturally consider the thermal partition function $Z_{S^{d-1}}(\beta,R)$ on a sphere $S^{d-1}$, where the sphere has radius $R$. "Thermal" means that we are working in Euclidean time, compactified on a circle of length $\beta$. The Hamiltonian in the thermal direction is the generator of dilatations $D$, so $$ Z(\beta,R) = \sum_{\text{all states}} e^{-(\beta/R) \Delta}. $$ In the limit $\beta/R \ll 1$ there is barely any exponential suppression, so the sum is sensitive to all states, and you can extract thermodynamical information about the CFT. In the opposite limit $\beta/R \gg 1$ only a few terms contribute significantly. In $d=2$ something special happens. "Space" $S^{d-1}$ is a circle $S^1$ of length $L = 2\pi R$. So the whole manifold is just a rectangle (or to be precise a torus, since we have periodic boundary conditions). Nothing happens if you swap $L$ and $\beta$, so we get an identity $$ Z(\beta,L) = Z(L,\beta). $$ Since the theory is scale invariant we can rescale and drop the second argument, which gives $$ Z(\delta) = Z(\delta^{-1}), \quad \delta = \beta/L. $$ This means that you can say something about a difficult thermodynamic limit $\delta \ll 1$ from a trivial limit $\delta \gg 1$, and this leads to identities like Cardy's formula. The crucial ingredient was that there is a symmetry between the spatial $S^1$ and the thermal $S^1$ in 2d, whereas in higher d we cannot swap $S^{d-1}$ and $S^1$. I have glossed over some technical details, especially in neglecting the so-called Weyl anomaly. However, the above logic should explain what is special about $d=2$.
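To sketch how the identity $Z(\delta) = Z(1/\delta)$ turns into Cardy's formula (a heuristic outline under simplifying assumptions: left- and right-movers are lumped together, and the vacuum energy on the circle is taken as $-\frac{2\pi}{L}\frac{c}{12}$; conventions vary between references):

```latex
% In the trivial limit \delta \to \infty the vacuum dominates, so
% Z(\delta) \approx e^{2\pi c \delta / 12}.  Modular invariance then
% fixes the hard limit \delta \to 0:
Z(\delta) = Z(1/\delta) \xrightarrow{\ \delta \to 0\ }
  \exp\!\left(\frac{2\pi c}{12\,\delta}\right)
  = \exp\!\left(\frac{\pi c L}{6\beta}\right).
% Writing Z(\beta) = \int dE\, \rho(E)\, e^{-\beta E} and inverting by
% saddle point yields the Cardy density of states:
S(E) = \log \rho(E) \approx 2\sqrt{\frac{\pi c L E}{6}}\,.
```

The saddle is easy to check: with $S(E) = 2\sqrt{aE}$ and $a = \pi c L/6$, stationarity of $S(E) - \beta E$ gives $E = a/\beta^2$ and hence $\log Z = a/\beta$, reproducing the small-$\delta$ behavior above.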
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Lorentz transformation, problem with derivation I have a question about the Lorentz transformation: In the derivation it's said that two systems S and S' should overlap at $t=0$ and $x=0$. We get the following transformation rules: $t'=\gamma_0(t-v_0x/c^2)$ $x'=\gamma_0(x-v_0t)$ $y'=y$ $z'=z$ My question is: What happens if at $t=0$ they don't overlap? Can I just add a constant to both the time and coordinate $x$ transformation equations, to account for the misalignment at $t=0$? $t'=\gamma_0(t-v_0x/c^2) \color{Red}{+ T}$ $x'=\gamma_0(x-v_0t) \color{Red}{+ X}$
To give a less "groupy" answer: always think first about the 3D-analogue to what you're doing in 4D. So in 3D we have these translations and rotations which are linear transforms preserving $x^2 + y^2 + z^2;$ in 4D we in addition have these "boosts" and all three are linear transforms preserving $w^2 - x^2 - y^2 - z^2$ where $w = c t.$ So just "downgrade" the boost to a rotation and ask yourself what you'd do in 3D. So in 3D we know these really easy rotation matrices $R$ to rotate a vector about the origin. What do you do when you want to rotate your points about a point $\vec r_0$ that is not the origin? You form something complicated, $\vec r' = \vec r_0 + R(\vec r - \vec r_0),$ where you first translate your coordinates to the origin, then rotate, then translate them back so that for $\vec r = \vec r_0$ we have also $\vec r' = \vec r_0.$ And this procedure will work equally well for Lorentz boosts, $r_2^\mu = r_0^\mu + L^\mu_{~~\nu} (r_1^\nu - r_0^\nu).$ However: also reflect that the rotation about the point that is not the origin is really disorienting if you're at the origin, now all the things that you're talking about locally are rotated to some strange point $\vec p = (I - R) ~\vec r_0,$ very difficult to use. So in practice, what we do in 3D and 4D is to choose some origin which is helpful to us: in 3D it is usually a point on an object which we're keeping track of; in 4D it is usually some instantaneous event which both observers can agree happened. And then even though both observers find this a little clumsy for the other points, we get a really nice translation between them that makes the math super-easy.
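Here is a minimal numerical sketch of the "translate, rotate, translate back" recipe, in 2D for brevity (the Lorentz version is the same pattern with the rotation matrix replaced by a boost matrix and the pivot $\vec r_0$ by the agreed-upon origin event; the specific numbers are just for illustration):

```python
import numpy as np

def rotate_about_point(r, r0, theta):
    """Rotate the 2D point r by angle theta about the pivot r0:
    r' = r0 + R (r - r0), i.e. translate, rotate, translate back."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return r0 + R @ (r - r0)

r0 = np.array([2.0, 1.0])   # pivot, deliberately NOT the origin
r  = np.array([3.0, 1.0])   # a point one unit to the right of the pivot

# The pivot is a fixed point of the transformation, for any angle:
print(rotate_about_point(r0, r0, 0.7))       # → [2. 1.]

# Rotating by 90° carries the point to one unit *above* the pivot:
print(rotate_about_point(r, r0, np.pi / 2))  # → [2. 2.]
```

The strange offset mentioned above is visible in this form too: expanding gives $\vec r' = R\vec r + (I - R)\vec r_0$, so an observer sitting at the origin sees everything shifted by $(I-R)\vec r_0$.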
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Momentum and energy as a function of time If a constant force $F$ acts on a particle of rest-mass $m_0$, starting from rest at $t=0$, then what is its total momentum $p$ as a function of time? What is the corresponding energy $E$ as a function of time? So I know $p=\gamma mu$ and $E=\gamma mc^2$ I know that $t'=\gamma (t-(v/c^2)x)$ I rearranged to get $\gamma$ by itself and setting $t=0$ I get $\gamma = t'/(-(v/c^2)x)$ My new equations are $p=t'mu/((-u/c^2)x)$ and $E=t'mc^2/(-1/c)x$ Are these new equations correct? I'm hesitant about this as no part of this equation mentions point of reference, but I couldn't find any other way to relate momentum and energy to time.
It depends on what you mean by a constant force. If this means that the accelerating observer feels a constant force, i.e. a constant acceleration $a=F/m$, then this is the relativistic rocket problem. As discussed in this question the velocity measured by a non-accelerating observer is given by: $$ v = \frac{at}{\sqrt{1 + (at/c)^2}} $$ The momentum is then simply given by: $$ p = \gamma m v $$ Alternatively if you mean that the force is constant in the non-accelerating observer's frame then this means that $dp/dt$ is constant for the non-accelerating observer, so we end up with the boring result that the momentum is just proportional to time, $p = Ft$. Of more interest in this situation would be the velocity as a function of time, which is obtained by solving: $$ \frac{d}{dt}\left(\gamma m v\right) = F $$ for the constant force $F$. This does have a closed-form solution: integrating once gives $\gamma m v = Ft$, and solving for $v$ yields $$ v = \frac{(F/m)\,t}{\sqrt{1 + \left(Ft/mc\right)^2}}, $$ which is the same functional form as the first case with $a$ replaced by $F/m$, so the two readings of "constant force" actually describe the same motion.
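Both cases can be checked numerically. For constant proper acceleration, substituting $v(t)$ from above into $p = \gamma m v$ collapses to $p = mat$, i.e. momentum grows linearly forever even though $v$ saturates below $c$ (illustrative numbers, not from the question):

```python
import math

c = 299_792_458.0  # m/s

def rocket_v(a, t):
    """Velocity in the inertial frame for constant proper acceleration a."""
    return a * t / math.sqrt(1.0 + (a * t / c) ** 2)

def momentum(m, v):
    """Relativistic momentum p = gamma * m * v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * v

m, a = 1000.0, 9.81          # a 1000 kg ship at 1 g proper acceleration
for t in (1e6, 1e7, 1e8):    # coordinate time in seconds
    v = rocket_v(a, t)
    assert v < c                                     # never reaches c
    assert math.isclose(momentum(m, v), m * a * t)   # p = m*a*t
    print(f"t = {t:.0e} s   v/c = {v / c:.4f}   p = {momentum(m, v):.3e}")
```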
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to distinguish "system" and "environment" in quantum decoherence Quantum decoherence distinguishes the whole big system into "system" and environment, and shows how system, when density matrix is traced over environment, comes to be decoupled from environment. But this requires distinguishing environment from system, and I do not get how clear separation is possible. Doesn't the fact the whole big system is quantum should bring caution to separating systems arbitrarily, especially considering special relativity effects?
Quantum decoherence distinguishes the whole big system into "system" and environment, and shows how system, when density matrix is traced over environment, comes to be decoupled from environment. This summary is wrong. The system is coupled to the environment. As a result of that coupling interference is suppressed and you can trace over the environment to get a mixed state instead of a pure state. But this requires distinguishing environment from system, and I do not get how clear separation is possible. Doesn't the fact the whole big system is quantum should bring caution to separating systems arbitrarily, especially considering special relativity effects? Separation between systems is not arbitrary. You can interact with the environment without interacting with the system and vice versa. So they are separate systems. You may be thinking that quantum mechanics is non-local and you can change the state of system A by interacting with system B. In reality, quantum mechanics is entirely local: https://arxiv.org/abs/quant-ph/9906007. You might think that Bell's theorem implies that quantum mechanics is non-local, but if so you are wrong. Bell's theorem implies that if systems are described by stochastic variables, then to match the predictions of quantum mechanics they would have to interact non-locally. But in quantum mechanics, systems are described by Heisenberg picture observables, not stochastic variables and the observables change locally.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Quasistatic and Reversible processes I'm having trouble understanding the quasistatic process concept. I understand that for any process we have well-defined initial and final states, and the problem is in specifying the path; the path is important because we need to know it in order to calculate work or heat, since they vary depending on the path. But I don't quite understand how implementing the process slowly can keep the system in equilibrium at all points during the process. In other words, how does the system change but at the same time remain in equilibrium? Also I read in my textbook that if a piston compresses a gas very fast, that results in a higher-pressure region near the piston's surface, and hence the pressure is not uniform in the gas. I just find all of that confusing and I want to understand why the concept of quasistatic processes is important and what problem there would be in the theory of thermodynamics if we didn't define processes this way. (We can't apply integration if there is not a set of points or a path for the process? And if we want a path we have to assume that at any point the pressure (or any property) is uniform, and we can only assume that if the process is done extremely slowly, so that it's almost not happening and the system isn't changing!)
The quasistatic hypothesis is what lets you apply the equations, such as the gas laws, at every point of the transition (at every point in space and at every moment in time), because you know that at every point the gas is in equilibrium. This is necessary especially when you have to integrate them: you need the equation you are integrating to be valid over the whole interval of integration. The idea of making changes happen slowly implies that the equations remain at least approximately true throughout, because the changes are small.
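The point about needing a valid equation of state along the whole path shows up clearly in a work integral. The sketch below (assumed scenario: one mole of ideal gas compressed isothermally from 2 L to 1 L) compares the quasistatic work, where $p = nRT/V$ holds at every instant, with a sudden compression against a constant external pressure, where it does not:

```python
import math

n, R, T = 1.0, 8.314, 300.0   # one mole of ideal gas at 300 K
V1, V2 = 2.0e-3, 1.0e-3       # compress from 2 L to 1 L (in m^3)

# Quasistatic isothermal compression: p = nRT/V is valid at every point
# of the path, so the work done ON the gas is the integral of p dV:
W_quasistatic = n * R * T * math.log(V1 / V2)

# Sudden compression: the piston pushes at the constant final pressure
# p2 = nRT/V2 the whole way; the gas is NOT in equilibrium en route.
p2 = n * R * T / V2
W_sudden = p2 * (V1 - V2)

print(W_quasistatic, W_sudden)   # ≈ 1729 J vs ≈ 2494 J
assert W_sudden > W_quasistatic  # the irreversible path costs more work
```

Same endpoints, different work: exactly the path dependence the question asks about, and the quasistatic number is only computable because the equation of state holds along the path.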
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Could Dark Matter be used as reaction mass for a propulsion device? Assuming a best-case scenario, where humans are able to discover a way to interact with Dark Matter, could we use dark matter like a rowboat uses the water? Assuming Dark Matter is made of WIMPs, they are going to be moving with respect to a propulsion device, but assuming your device was able to accelerate the particles in a specified direction, could this device solve the problem of exponentially increasing rocket weight in relation to Delta-V? Is the density of Dark Matter in our galaxy high enough to provide stuff to push on? It would be pretty cool since it would be like water, but only interact with your engine, passing through your ship, unaffected.
Assuming there's a way to put a force on dark matter particles, then it's just an application of Newton's third law. Airplanes push air backwards to move forward and down to move up. The problem I see is that any vehicle that interacts with dark matter to a significant degree will experience a drag force from the dark matter particles that it doesn't accelerate. This spaceship won't be able to coast through space, but will require constant power to maintain a constant speed with respect to the local part of the galaxy--just like an airplane or boat experiences drag in air and water.
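As for whether the galactic density is high enough: a back-of-the-envelope estimate suggests not. All numbers below are assumptions for illustration: the commonly quoted local dark-matter density of about 0.3 GeV/cm³, a perfectly efficient 100 m² "scoop", and a ship moving at roughly the galactic orbital speed:

```python
# Local dark-matter density: ~0.3 GeV/c^2 per cm^3 (commonly quoted)
GeV_per_c2_in_kg = 1.783e-27
rho = 0.3 * GeV_per_c2_in_kg / 1e-6   # ≈ 5.3e-22 kg/m^3

A = 100.0      # scoop cross-section in m^2 (hypothetical)
v = 2.2e5      # speed through the halo, m/s (~galactic orbital speed)
dv = v         # expel every collected particle backwards at that speed

mdot = rho * A * v    # mass swept up per second
thrust = mdot * dv    # F = mdot * delta-v, rocket-style

print(f"mass flow ≈ {mdot:.1e} kg/s, thrust ≈ {thrust:.1e} N")
# Even with these generous assumptions the thrust is of order 1e-9 N.
```

So the "water" is extraordinarily thin: nanonewtons of thrust for a huge scoop, which also puts the drag worry above into perspective.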
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does an ElectroDynamic Tether (EDT) clear space debris? Earlier today (9 December 2016), the Japan Aerospace Exploration Agency (JAXA) launched their Kounotori Integrated Tether Experiments (KITE) into orbit. What I understand from the description is that it will have a 20 kg weight at the end of a 700 m tether. If I understand correctly, the current mission is one of measurement (of induced current and voltage) rather than an attempt at actually clearing space debris. However, the technology is touted as a promising candidate to deorbit space debris at low cost. In doing some searching, I have not yet found a clear explanation for how that would work. My question: How would this actually work for that purpose?
Clearing (large) debris objects with current sats would take a large amount of fuel. Enough that doing more than one or two is unlikely to be possible. If you carried more fuel, doing the first one is more expensive (in fuel terms). Electrodynamic tethers allow you to take power and electrons and use that to "push" against the earth's magnetic field for propulsion. With solar panels, you can now (theoretically) generate low levels of thrust indefinitely in low earth orbit with zero propellant. This could allow enough maneuverability to catch up to some deorbit candidates, attach or otherwise grab them, then thrust again until close to deorbit. Then raise back up and repeat. See also: http://erps.spacegrant.org/uploads/images/2015Presentations/IEPC-2015-301_ISTS-2015-b-301.pdf As for "I don't understand how it affects the debris which is presumably in some close orbit": KITE doesn't. From the PDF: Primary objectives of KITE are to obtain data on the fundamental characteristics of the original EDT components In other words, a full-up deorbit system would need lots of things (some method of control, approach, and capture of debris included). But KITE was only designed to test the tether and its ability to provide zero-fuel thrust. Assuming that portion becomes more viable in the future, the additional problem of capturing and de-orbiting debris would need to be worked on.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why should the perturbation be small and in what sense? In time-independent perturbation theory, one writes $$\hat{H}=\hat{H}_0+\lambda \hat{H}^\prime$$ where $\lambda H^\prime$ is a "small" perturbation. * *Why should the perturbation be small for perturbation theory to work? *Both $\hat{H}_0$ and $\hat{H}^\prime$ are operators. Therefore, what does it mean to say the perturbation is "small"? I think, saying $\lambda \hat{H}^\prime\ll \hat{H}_0$ is meaningless. *Is it that the matrix elements of $\lambda\hat{H}^\prime$ much smaller than that of $\hat{H}_0$ in the eigenbasis of $\hat{H}_0$? If yes, why is such a mathematical requirement necessary? In other words, what if the matrix elements of $\lambda\hat{H}^\prime$ are comparable to that of $\hat{H}_0$?
Actually, perturbation theory can work very well even if $\lambda$ is not small. For example, for the harmonic oscillator with a linear potential term added, the perturbation series terminates at second order and reproduces the exact result. The proper way to look at it is that you assume the Hamiltonian, the eigenfunctions and the energies are analytic functions of $\lambda$, and can be Taylor expanded. Then you solve for the expansion coefficients. Such expansions can work well even at large $\lambda$, provided that the higher-order terms cancel out or vanish. Whether performing such a perturbation expansion is legitimate is mostly a matter of trial and error, since rigorous mathematical estimates are exceedingly hard to obtain.
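The oscillator-plus-linear-term example is easy to verify numerically (a sketch in units with $\hbar = 1$; the exact shift $-\lambda^2/(2m\omega^2)$ follows from completing the square in the potential). Note that $\lambda$ is deliberately chosen large:

```python
import math

m, w, lam = 1.0, 1.0, 5.0   # lam is deliberately NOT small
hbar = 1.0

def x_elem(k, n):
    """Oscillator matrix element <k|x|n>; nonzero only for k = n ± 1."""
    if k == n + 1:
        return math.sqrt(hbar / (2 * m * w)) * math.sqrt(n + 1)
    if k == n - 1:
        return math.sqrt(hbar / (2 * m * w)) * math.sqrt(n)
    return 0.0

def E0(k):
    """Unperturbed energy of level k."""
    return hbar * w * (k + 0.5)

n = 3  # any level gives the same shift

first_order = lam * x_elem(n, n)   # <n|lam*x|n> = 0
second_order = sum((lam * x_elem(k, n)) ** 2 / (E0(n) - E0(k))
                   for k in (n - 1, n + 1))

exact_shift = -lam ** 2 / (2 * m * w ** 2)
print(first_order, second_order, exact_shift)
assert first_order == 0.0
assert math.isclose(second_order, exact_shift)
```

The two second-order terms are $\lambda^2 \frac{n}{2m\omega^2}$ and $-\lambda^2\frac{n+1}{2m\omega^2}$, so the $n$-dependence cancels and every level shifts by the same exact amount, for any $\lambda$.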
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Sound waves: frequency, speed, and wavelength I need to understand the relationship between the three. When does the wavelength stay the same as there is a change in one of the other two variables? When does it change as there is a change in one of the other variables? My understanding is this: If the speed changes due to a shift in the medium, let's say increases, only frequency increases. But if the frequency changes due to a change in the source of vibrations, wavelength would change to compensate for it, since there is no change in medium, correct? I just need to know if this is right, because I am not sure if I am getting it, and if there is anything about this relationship that I am missing. Thank you:)
The frequency of a sound wave will stay the same when sound passes from one medium to another because each compression or rarefaction in the first medium will produce exactly one compression or rarefaction in the second medium. Typically the speed of a wave depends mainly on the properties of the medium, so in this case speed and wavelength would change while the frequency stays constant. A number of sound sources produce specific frequencies based on standing waves in some medium - for example most musical instruments or the human voice. When we want to change these frequencies we typically change the length of the resonating system, say by effectively shortening a guitar string or sliding a trombone to change the length of an air column. In that case the medium is unchanged (except for length), meaning the speed of the sound is fixed. In this case as the wavelength changes the frequency changes while the speed stays constant. As for a case where the wavelength would stay constant while the frequency and speed change, consider tuning a guitar. The wavelength is determined by the length of the string, which is constant, but the tension of the string changes, which changes the speed of the wave. This results in a changing frequency. Drums can be tuned in a similar way by adjusting the tension in the drum skin. In some drums, for example the tabla, the tension in the drumskin is manipulated while playing to change the frequency of the sound.
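All three cases follow from $v = f\lambda$. A tiny numerical illustration of the first one (assumed textbook speeds, roughly 343 m/s in air at 20 °C and 1480 m/s in water): a 440 Hz tone crossing from air into water keeps its frequency, so its wavelength must stretch in proportion to the speed:

```python
def wavelength(speed, frequency):
    """v = f * lambda  =>  lambda = v / f."""
    return speed / frequency

f = 440.0                       # Hz, set by the source and unchanged
v_air, v_water = 343.0, 1480.0  # m/s, typical textbook values

lam_air = wavelength(v_air, f)      # ≈ 0.78 m
lam_water = wavelength(v_water, f)  # ≈ 3.36 m

print(lam_air, lam_water)
# With f fixed, wavelength scales exactly with the wave speed:
assert abs(lam_water / lam_air - v_water / v_air) < 1e-12
```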
{ "language": "en", "url": "https://physics.stackexchange.com/questions/297961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An accelerating charge emits EM waves, but how can this be explained in terms of photons? I was reading this response to a question involving EM radiation due to an accelerating charge. A charge's oscillations disturb its electric field, and this effect propagates at the speed of light. If a charge is oscillating back and forth then it is emitting oscillating electric fields, or EM waves. Can this effect be explained in terms of a photon? If I wiggle a charge around, how is a quantized packet of energy created, and what determines how often these light packets are emitted?
If you wiggle a charge around, you have transferred energy to this charge. In the end, if you follow it down to the basics, this transfer happens by the exchange of photons (see the discussions in PSE about the touch of a hand to something). So it should not be wrong to say that the transferred photons are somehow sitting on the charge. This holds even for charges in a magnetic field, where the charge has to have kinetic energy to get deflected by the magnetic field. In the case of accelerations, say inside an antenna rod, as well as under the influence of a magnetic field, the charge emits electromagnetic radiation. It was found empirically that this emission always happens in packages. Until now, no deeper explanation has been found for why this happens in packages.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If time dilation can slow time down, is there a way to speed time up? Okay, I know the title is really confusing but I couldn't find words to explain it, sorry. Pretty much what I mean is: if I get in a lightspeed spaceship moving away from earth, time slows down for me. So one year for me will be 20 earth years or whatever. But is there a way where I can reverse this? Where if I get on that same craft and travel for a year, it will only be a few months on earth? I know this is just a random thought.
No, there is no way to get the reverse effect in special relativity. This is because being at rest maximizes the time that elapses for an object in relativity (a consequence of the Lagrangian formulation of special relativity). So if your question is: your friend sits on earth for a few (say three) months, and you want to know the most you can age in the time it takes your friend to wait three months, then the answer is three months, and this is accomplished by sitting on earth doing nothing. In general relativity, you can use a gravitational field to accomplish what you want. Assuming you are already on earth, you just need to go to a region of lower spacetime curvature, such as outer space, and then wait there. Time will pass faster for you, and if you wait long enough, more time will have passed for you than for your friend when you come back to earth. This was illustrated in the movie Interstellar, where we saw a man on a spaceship age much faster than the people on the surface of the planet being orbited.
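The special-relativity half of this can be made concrete: over a fixed stretch of Earth-frame coordinate time, a clock moving at constant speed $v$ records the proper time $\tau = t\sqrt{1 - v^2/c^2}$, which is largest at $v = 0$, so no choice of speed lets the traveler age more than the friend at rest (illustrative numbers):

```python
import math

c = 299_792_458.0  # m/s

def proper_time(t, v):
    """Proper time elapsed on a clock moving at constant speed v
    while coordinate time t passes in the Earth frame."""
    return t * math.sqrt(1.0 - (v / c) ** 2)

t = 20.0  # Earth years (any time unit works, it cancels)
for v_frac in (0.0, 0.5, 0.9, 0.99):
    tau = proper_time(t, v_frac * c)
    print(f"v = {v_frac:4.2f} c  ->  traveler ages {tau:5.2f} yr")
    assert tau <= t  # never more than the friend who stayed at rest
```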
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Is the topological index of a self-adjoint operator always zero? By the Atiyah-Singer index theorem, the index of a self-adjoint operator D (e.g., a Hamiltonian) is given by Index(D) = dim Ker(D) − dim Ker(D*), where D* is the adjoint operator of D. Since D is self-adjoint, D=D*, we conclude that Index(D)=0. Is this conclusion right? Can we define a non-zero index for a self-adjoint operator?
Your conclusion is right. The index defined in the way you did, of course, cannot be non-zero for a self-adjoint operator. One can try to define some other index, if one wants, but I am afraid that it would have nothing to do with the Atiyah-Singer theorem.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
High (very high) frequency sound waves = heat? I had this question in mind and did not quite find the answer on google. If heat is a vibration of particles in matter, and sound is a pressure wave moving through particles (causing them to fluctuate), could a very high frequency sound wave raise the temperature of the matter in which it is travelling?
Technically speaking there is a difference between high frequency sound waves and heat, though they can have similarities. Heat is typically stochastic vibrations -- random vibrations that nobody can predict. Sound waves are typically well ordered vibrations, which still have some structure to them. Accordingly there are ways you can use many sound waves which you cannot use heat for. Now quite often high frequency sounds can be absorbed by a medium, which really means that they were turned into random vibrations -- heat. However, we typically think of the sound as something other than heat as long as we can find ways to use its structure. If you were to think of all the particles of a material as little tiny loudspeakers, and they were all emitting noise (Brownian noise, to be specific), then the connection between sound waves and heat would probably be pretty close to perfect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can we get Pauli Exclusion Principle from QFT? I am learning QFT and fermion statistics. I am confused about whether the Pauli Exclusion Principle is a fundamental rule or whether it can be deduced from QFT. I saw a sentence from wiki but I don't understand it. In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin.
A fermion is a state $|\vec p,\sigma\rangle$ with half-integer spin, i.e., such that $$ J|\vec p,\sigma\rangle=\sigma|\vec p,\sigma\rangle\qquad\text{with}\qquad \sigma\in\mathbb N+\frac12 $$ where $J$ is the angular momentum operator (generator of rotations). Therefore, upon a rotation by an angle $2\pi$ around an arbitrary axis, $$ U(2\pi)|\vec p,\sigma\rangle=\mathrm e^{i\pi}|\vec p,\sigma\rangle=-|\vec p,\sigma\rangle $$ Finally, if you have a two-fermion system, interchanging them is equivalent to rotating the system by an angle $2\pi$, and therefore $$ |\vec p_1,\vec p_2\rangle=U(2\pi)|\vec p_2,\vec p_1\rangle=-|\vec p_2,\vec p_1\rangle $$ To make this suggestive argument precise, one needs the CPT theorem, which can be proven in an axiomatic framework using just a couple of properties of quantum fields. For more details, see PCT, Spin and Statistics, and All That, by Streater and Wightman. Spin, Statistics, CPT and All That Jazz by Baez might be of interest too (see the last entry for an explanation of what a rotation in imaginary time has to do with the spin and statistics).
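The sign flip under a full rotation is just the arithmetic fact that $e^{2\pi i\sigma} = -1$ for every half-integer $\sigma$ (and $+1$ for every integer), which a few lines can confirm:

```python
import cmath
import math

for n in range(5):
    sigma = n + 0.5  # half-integer spins: 1/2, 3/2, 5/2, ...
    phase = cmath.exp(2j * math.pi * sigma)
    assert abs(phase - (-1.0)) < 1e-12   # a 2*pi rotation gives -1

    # Integer spin (bosons) picks up +1 under the same rotation:
    assert abs(cmath.exp(2j * math.pi * n) - 1.0) < 1e-12
```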
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Can a rocket with no forces acting upon it except a single push force with constant acceleration keep accelerating forever? I was wondering why a rocket with no opposing forces acting upon it couldn't keep accelerating given that it has the potential to release enough energy to maintain its acceleration at all costs. I have heard that any object with mass cannot reach the speed of light. Is that true? If so, why?
"Why?" This is the question physics can't answer. Physics describes the universe, it does not explain it. Given some postulates that you are willing to accept as true without proof or explanation, one can draw logical conclusions, but they all trace back to the unexplained postulates. From an observer in an inertial frame ("on earth", so to speak) the speed of the rocket will approach the speed of light but not exceed it. But the observer on the rocket will always feel the acceleration. The astronaut will always feel pushed back against his seat. He will observe that he is going faster and faster forever. It seems like a paradox. We don't know why it works that way. We can "explain" it if you are willing to accept Einstein's theory of relativity, but we can't explain why the theory is such a good "explainer" of the universe. That just seems to be the way the universe works.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why can a red laser beam reflect in a blue surface? I've been taught that a blue object only reflects the blue fraction of light, and all the other colors are absorbed. So what happens with a red laser?
Your teacher was oversimplifying. A surface looks blue because it reflects more light in the blue portions of the spectrum than in other portions. But less is not none. It's very rare for a surface to be completely non-reflective. Your laser pointer is much brighter than the ambient light. So even if most of the light is absorbed you still get enough reflected to produce a red dot.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/298797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What make an egg explode in the microwave I think it's related to the air pouch in the egg, but I'd like to have a full physics explanation. What are the forces in presence? What are the tricks to prevent the explosion?
Here is a layman's site about explosions in microwaves. An explosion happens when water in the food is heated rapidly, producing steam. When there's no way for the steam to escape, an explosion takes place. Any food that has a skin or membrane can explode in the microwave, according to Snider, a professor at the University of Delaware. Hot dogs, eggs and potatoes are just a few common examples. In order to reduce the odds of food exploding in your microwave, you want to give the steam a place to escape. Simply take a fork and pierce the food item several times, Snider suggests. Microwave ovens use frequencies chosen specifically to raise the energy of the water molecules which exist as part of the lattice of edibles, in order to cook them. If the temperature of water goes over 100 °C it turns into steam. If steam cannot escape due to a membrane, or the thick shell of the egg, the pressure goes up, and continued heating generates an explosion. To avoid exploding eggs, I think if you set them in a small bowl, crack them and stick a pin through the inner membrane the pressure will be relieved. I have not tried it. It is easier to boil them in water.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/299107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is the frequency of a photon related to the gravitational force it exerts? I know that even though a photon has no mass it still has energy. From what i understand the mass and velocity of a photon have no bearing on the amount of energy it emits, rather its energy is dependent solely on the frequency that the particular photon is resonating. How is this frequency and the energy being emitted by it related to the gravitational force exerted from the photon? Does a photon resonating at a high frequency and short wavelength emit more of a gravitational effect than a photon resonating at a lower frequency?
Yes, a photon's energy $E = \hbar \omega$ and momentum $p = (\hbar/c) \omega$ contributions to the stress-energy tensor of general relativity are both directly proportional to its frequency $\omega$.
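As a quick sanity check on that proportionality, one can evaluate $E=\hbar\omega$ and $p=\hbar\omega/c$ at two frequencies and confirm both scale linearly with $\omega$ (a sketch; the sample frequencies are arbitrary):

```python
hbar = 1.054571817e-34   # reduced Planck constant in J*s
c = 2.99792458e8         # speed of light in m/s

def photon_energy_momentum(omega):
    """Photon energy E = hbar*omega and momentum p = E/c for angular frequency omega."""
    return hbar * omega, hbar * omega / c

E1, p1 = photon_energy_momentum(1.0e15)   # sample angular frequency
E2, p2 = photon_energy_momentum(2.0e15)   # doubled
print(E2 / E1, p2 / p1)   # both ratios are 2: energy and momentum scale with frequency
```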
{ "language": "en", "url": "https://physics.stackexchange.com/questions/299258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the minimal discrete model of wave propagations? If one takes the step size of an $n$-dimensional symmetric random walk to be infinitesimal, then the transition probability becomes the heat kernel. Thus, symmetric random walks are discrete, or microscopic, models of heat/diffusion. The heat equation and wave equation are merely different in the time derivative. So what is the minimal discrete/microscopic model for wave propagations, analogous to random walks?
I'm not exactly sure what you're looking for, but here's how I think about this at a discrete level (this follows the Wikipedia article on the wave equation). Consider a line of springs each of mass $m$ and length $h$, with spring constant $k$. The distance the mass located at $x$ is displaced from equilibrium is denoted by $y(x)$. Newton's second law for the mass at location $x+h$ gives $$F = m \frac{d^2 y(x+h)}{dt^2}.$$ From Hooke's law, the force balance on this mass is given by $$F = F^{x+2h}-F^{x}$$ where the superscript means the force exerted by the spring on that side of the mass under consideration. Explicitly, $$F = F^{x+2h}-F^{x}=k([y(x+2h)-y(x+h)]-[y(x+h)-y(x)]).$$ Finally, we take the number of springs to be $N$, with the total mass being $M=Nm$, the total spring constant being $K=k/N$ (springs in series) and the total length defined as $L=Nh$. Therefore, we have $$\frac{d^2 y(x+h)}{dt^2}=\frac{KL^2}{M}\frac{ y(x+2h)-2y(x+h)+y(x)}{h^2}.$$ Taking the limits $h \to 0, N\to \infty$ and defining $c^2 =KL^2/M$, we have the wave equation $$y_{tt}-c^2 y_{xx}=0.$$
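The discrete chain can also be simulated directly before taking the continuum limit: stepping the coupled masses forward in time with a leapfrog scheme shows an initial bump splitting into left- and right-moving pulses, the signature behaviour of the wave equation. A minimal sketch (all parameters are illustrative choices, not taken from the derivation):

```python
import math

N, h, c = 200, 1.0, 1.0            # number of masses, spacing, wave speed
dt = 0.5 * h / c                   # time step obeying the CFL stability limit
y = [math.exp(-0.05 * (i - N // 2) ** 2) for i in range(N)]  # initial bump, at rest
y_prev = y[:]                      # zero initial velocity

for _ in range(100):
    y_next = [0.0] * N             # fixed ends: y = 0 at both boundaries
    for i in range(1, N - 1):
        lap = y[i + 1] - 2 * y[i] + y[i - 1]           # discrete y_xx (times h^2)
        y_next[i] = 2 * y[i] - y_prev[i] + (c * dt / h) ** 2 * lap
    y_prev, y = y, y_next

# the bump splits into two half-height pulses travelling in opposite directions
print(round(max(y), 2))
```

Note the discrete Laplacian here is exactly the $y(x+2h)-2y(x+h)+y(x)$ combination from the spring-force balance above.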
{ "language": "en", "url": "https://physics.stackexchange.com/questions/299657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is Sachdev-Ye-Kitaev (SYK) Model model important? In the past one or two years, there are a lot of papers about the Sachdev-Ye-Kitaev Model (SYK) model, which I think is an example of $\mathrm{AdS}_2/\mathrm{CFT}_1$ correspondence. Why is this model important?
The other answers already pointed out very important properties, but there is a further aspect related to black hole physics. Namely, $AdS_2/CFT_1$ is the relevant holographic description of four dimensional extremal black holes; for instance, the near horizon limit of an extremal Reissner–Nordström black hole is $AdS_2 \times S^2$. Holographic techniques allowed the comparison of five dimensional black hole microstates between the small and large string coupling regimes. The same technology is not available for 4d black holes, and one must use other tools like the supersymmetric quantum mechanics on the worldvolume of the intersecting branes forming the black hole, or string theory scattering amplitudes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/299959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 3, "answer_id": 0 }
Why is the natural singularity $r=0$ in Schwarzschild geometry a spacelike one? Why is the natural singularity $r=0$ in Schwarzschild geometry a spacelike one?
Nice question. Topologically, a singularity isn't a point or set of points. It's treated as a hole in the manifold. Therefore it doesn't have its own topology or geometry. We can't even say what its dimensionality is. So if we want to define what is a spacelike or timelike singularity, we need to define it in terms of the nearby spacetime, which is a point-set and does have a geometry. A timelike singularity is one such that there exists an observer (i.e., a timelike world-line) who has it both in his past and in his future light cones. Given that definition, I think it should be pretty clear why a black hole singularity is not timelike. It's in your future light cone, because you can fall into it. It's not in your past light cone, because we don't observe things popping out of it. Black hole singularities can form by gravitational collapse. If timelike singularities could form by gravitational collapse, it would be shocking, because the laws of physics can't predict what could pop out of such a singularity, and therefore the laws of physics would lose their power to predict what happens in our universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/300260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Diatomic molecule in an electric arc Molecular nitrogen is heated in an electric arc, and it is found spectroscopically that the relative populations of excited vibrational levels is $f_0/f_0=1.0; f_1/f_0=0.2; f_2/f_0=0.04; f_3/f_0=0.008; f_4/f_0=0.002$. Is the nitrogen in thermodynamic equilibrium with respect to vibrational energy? What is the vibrational temperature of the gas? Is this necessarily the same as the translational temperature? I found this problem in Mcquarrie's Statistical Mechanics and didn't understand the role of the electric arc for the heating of the gas. Knowing that the vibrational temperature for $N_2$ is 3374 K and the relative population of all excited states are given by $$ f_{n>0}=\sum_{n=1}^{\infty}\frac{e^{-\beta h\nu(n+1/2)}}{q_{vib}}=1-f_0=e^{-\beta h\nu}=e^{-\Theta_v/T} $$ When a molecule is heated, the ratio $\Theta_v/T$ is lower at high temperatures and the population of the vibrational excited states is augmented. So, if $N_2$ is placed in an electric arc for heating, there exists some perturbation on vibrational energy levels, like a Stark effect, that makes a perturbation on the thermodynamic equilibrium?
One can see by inspection that the vibrational occupancy numbers form a geometric series, with a constant ratio of 0.2 between successive levels. A harmonic oscillator has evenly spaced energy levels, so a constant ratio between successive populations is exactly what the Boltzmann factor predicts: one can conclude that the gas is in thermodynamic equilibrium with respect to vibration, at some vibrational temperature. To calculate that temperature, one would need to know the vibrational frequency of the molecule (equivalently, its characteristic vibrational temperature $\Theta_v$).
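Taking the constant ratio 0.2 between successive levels together with the vibrational temperature $\Theta_v = 3374$ K quoted in the question, the gas temperature follows from $e^{-\Theta_v/T}=0.2$. A quick check (assuming those two numbers):

```python
import math

theta_v = 3374.0   # vibrational temperature of N2 in kelvin, quoted in the question
ratio = 0.2        # constant ratio f_{n+1}/f_n read off from the spectroscopic data

# Boltzmann populations of a harmonic oscillator: f_{n+1}/f_n = exp(-theta_v / T)
T = theta_v / math.log(1.0 / ratio)
print(round(T))   # about 2096 K
```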
{ "language": "en", "url": "https://physics.stackexchange.com/questions/300758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nature of metallic bonding in solid state What is the reason behind attraction of metal kernels & free electrons in electron sea model?
In metallic solids, the constituent particles are orderly arranged positively charged metal ions (called kernels) surrounded by a sea of free electrons. These electrons are mobile, are evenly spread throughout the crystal, and flow through the crystal like water in a sea. They come from metal atoms that have low ionization energy and can easily lose their valence electrons. These electrons are free to move in all directions like the molecules of a gas. The mobile electrons are simultaneously attracted by the positive ions (kernels) and hence hold these positive ions together. The force that holds the metal ions together is called the metallic bond. The greater the number of mobile electrons, the greater the force of attraction, and hence the stronger the metallic bond, resulting in high melting and boiling points.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is really instantaneous? How can a body travel at an instant and what does instantaneous speed tells us? What really is meant by speed of an object at an instant if an object does not travel at an instant? I would like a mathematical explanation.
Instantaneous speed (and indeed many other instantaneous concepts) is a bit of a formalism, but in layman's terms, the instantaneous speed is the ratio of distance covered to time, taken over a very small time interval. Formally, we say that the time over which the distance is measured is actually infinitesimally small, so we can represent instantaneous speed as the following limit: $$v=\lim\limits_{\Delta t\to 0}\frac{\Delta x}{\Delta t}$$
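The limit can be made concrete numerically: for the example position function $x(t)=t^2$, the average speed over a shrinking interval at $t=1$ approaches the instantaneous speed $2$. A sketch:

```python
def x(t):
    return t * t   # example position function x(t) = t^2

t0 = 1.0
for dt in (0.1, 0.01, 0.001, 1e-6):
    avg_speed = (x(t0 + dt) - x(t0)) / dt
    print(dt, avg_speed)   # tends to the instantaneous speed dx/dt = 2 at t = 1
```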
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Thermodynamics Question: Does measuring the temperature of an object change its temperature? Suppose that I want to measure the temperature of an object, such as a pot of hot water. When I stick the thermometer into the pot, I know that the temperature measured by the thermometer is its own temperature when it reaches thermal equilibrium, which, according to the Zeroth Law of Thermodynamics, is equal to the temperature of the object (the pot of hot water) at thermal equilibrium. Does this imply that the temperature of the object that I am now measuring is different than the initial temperature, and if so, is the change significant? Also, can I somehow use the information of the system at thermal equilibrium to find the initial temperature of the object?
Short answer is "yes", but in general the change is insignificant.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Algebra behind the wave function properties In lecture, for the tunnelling wave function $$ \psi(x) = C_1\cosh(x/l)+C_2\sinh(x/l)$$ the current density is $$ J = h/(2mi) [ \psi^*(\Delta\psi) - (\Delta\psi)^*\psi] $$ Here is my problem, lecture says that $J$ is equivalent to: $$J=(h/m)Im[\psi^*(\Delta\psi)] \tag{1} $$ and $$J={h/(ml)}Im[C_1^*C_2] \tag{2} $$ What is the algebra behind the equation (1) and (2)? How they was derived (especially equation (2))?
First, your expression for the current is incorrect. It should be $\nabla$ everywhere, not $\Delta$ (first derivative, not second). To derive (1), you just use the formula $Im(z)=\frac{z-z^*}{2 i}$ for a complex $z$. As for (2), you just substitute the expression for the wave function into (1) and evaluate for $x=0$. I did not check all the coefficients.
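Both forms can be checked numerically at $x=0$, where $\psi=C_1$ and $\psi'=C_2/l$, so equation (1) reduces directly to (2). A sketch with $\hbar=m=l=1$ and arbitrary complex coefficients (the values are illustrative):

```python
C1, C2 = 1.0 + 2.0j, 0.5 - 1.0j   # arbitrary complex coefficients (illustrative)
hbar = m = l = 1.0

psi = C1            # psi(0)  = C1*cosh(0) + C2*sinh(0) = C1
dpsi = C2 / l       # psi'(0) = (C1/l)*sinh(0) + (C2/l)*cosh(0) = C2/l

# definition: J = (hbar / 2mi) * (psi* psi' - psi'* psi), evaluated at x = 0
J_def = (hbar / (2j * m)) * (psi.conjugate() * dpsi - dpsi.conjugate() * psi)
# closed form (2): J = (hbar / (m*l)) * Im[C1* C2]
J_formula = (hbar / (m * l)) * (C1.conjugate() * C2).imag

print(J_def.real, J_formula)   # the two expressions agree
```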
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we assume weight acts through the center of mass? The weight of a body acts through the center of mass of the body. If every particle of the body is attracted by earth, then why do we assume that the weight acts through the center of mass? I know that this is true but I can't understand it. Does it mean that the earth does not attract other the other particles of the body ? Wouldn't it mean that girders would not need any support at the periphery if we erect a pillar at the center?
The other answers here, which show that gravity does not exert a torque on an object, are correct. However, they rely on the following implicit step of logic to get to the answer the OP wants: An object that has a force acting on it, but no torque acting on it appears as if it is being pulled from its center of mass. This is true in the case of ideal rigid bodies only. In the case of elastic objects, OP is absolutely correct, in that gravity does indeed act on each individual particle in the object. This is why girders bend under their own weight, among other things.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 6, "answer_id": 1 }
How does a small object move with constant velocity when drag force is equal to its weight? When drag force ($bV$) equals to object's weight (mg) then upward and downward force becomes equal. As a result the object comes to rest. If this is true, how is a body moving with constant velocity?
Isaac Newton: "An object at rest will remain at rest unless acted on by an unbalanced force. An object in motion continues in motion with the same speed and in the same direction unless acted upon by an unbalanced force." This law is often called "the law of inertia".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/301961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How matter waves travel faster than light? I read this in my physics textbook. It says that matter waves travel faster than light. Why is it so? Also are matter waves G-Waves?
I read this thing in my physics textbook. They said matter waves travel faster than light, why is it so?

Matter waves is a confusing terminology coming from the de Broglie quantum mechanical description of particles, $\lambda = h/p$, which is what matter is at the quantum mechanical level. Here $\lambda$ is the wavelength of the probability distribution that gives the probable location of the particle in space, when it is measured. It is not the particle that is waving, i.e. changing sinusoidally in space and time, but the probability of finding it at that (x,y,z,t). Special relativity imposes the limit of the velocity of light on any motion of particles, in all frameworks, classical and quantum mechanical. Thus there is no way the statement as you have written it is correct, unless there is some reference to the phase velocity, which can be different for light (classical framework) in a medium. For a particle it would again refer to the probability distribution and would be a non-measurable quantity, just a mathematical description.

Also are matter waves G-Waves?

No, they are probability waves of particles in the quantum mechanical framework, the underlying framework of nature. G waves are gravitational waves. There also exist gravity waves, but that is another story. Just saw your edit. The statement "matter waves travel faster than light" as written in the book is wrong. Period.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is current density (J) in Ampere's law derivable? In Ampere's law: $$ \nabla\times\mathbf{B}=\mu_0\mathbf{J} +\mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t} $$ the current density is listed explicitly as a separate term from the change in electric field. My understanding of the history (perhaps completely wrong), is that the $J$ term was determined first, and then the $E$ term was added later (by Maxwell?) to account for displacement current. As the $J$ term is physically a set of moving charges, which each produce a time-varying electric field, why isn't the $E$ term sufficient to calculate the magnetic field? That is, to determine the magnetic field from a set of moving charges, couldn't you determine the magnetic field of a single moving charge from: $$ \nabla\times\mathbf{B}=\mu_0\epsilon_0\frac{\partial\mathbf{E}}{\partial t} $$ and then the total magnetic field of a current would be the sum of magnetic fields from many moving charges?
Just by looking at the two equations that you wrote down, you get immediately that for them to be consistent you need to have $\mathbf{J}=0$. In other words the two systems of equations are not equivalent (unless you are in the trivial case $\mathbf{J}=0$). Looking at the whole structure of Maxwell's equations + Lorentz forces on the charges, you can see that they consist of a set of coupled equations between EM fields and matter fields (i.e. charges and hence currents). In general it is a very difficult problem to solve unless you have additional symmetries (like rotational symmetry or time independence). It is moreover true that you cannot solve the equations one variable at a time, independently of each other, because these are coupled differential equations: a change in the charge distribution causes a change in the fields, both E and B, but not in an independent way (they are related!), then this causes a change in the charge distribution and so on. This is why a generic solution is difficult to write (at least explicitly). However in certain situations it could be meaningful to consider some fields as given, non dynamical, and solve the equations for the other fields, freezing somehow some degrees of freedom.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Confusion with partial derivatives as basis vectors So I have seen that the directional derivative can be written as $$ \frac{df}{d\lambda} = \frac{dx^i}{d\lambda}\frac{df}{dx^i} $$ And we can identify $ \frac{d}{dx^i} $ as basis vectors and $ \frac{dx^i}{d\lambda} $ as components. What I don't understand is why is $\frac{df}{d\lambda} $ considered a vector? It's a derivative of a function w.r.t. a parameter and surely that's not a vector? I.e. In vector notation the directional derivative is given by a dot product $$ \frac{df}{d\lambda} = \hat{n} \cdot \nabla f $$ which is a scalar but in tensor notation that seems to not be the case?
The directional derivative $\frac{df}{d\lambda}$ is not actually a vector in the space spanned by the $x^i$. What the source was trying to say was that in the abstract vector space spanned by the partial derivative operators, $\frac{d}{d\lambda}$ can be thought of as a vector. Reference: http://www.physicspages.com/2013/02/10/tangent-space-partial-derivatives-as-basis-vectors/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How does a Galilean telescope form an enlarged image even though it has a diverging lens? I have been reading about Galilean telescope and the picture in the book is something like this: After rays pass through the converging lens, there is a real image formed which is intercepted by the diverging lens but as I learnt before, diverging lens cannot form an enlarged image. So, is the ray diagram inaccurate?
The angular magnification of a telescope $M$ is defined as the ratio of the angle subtended by the image of the object when looking through the telescope $b$ to the angle subtended by the object when looked at with the unaided eye $a$. $$M=\dfrac ba$$ Those angles are often called visual angles and they determine the size of the image which is formed on the retina. The bigger the visual angle, the bigger the image formed on the retina, and the bigger the "object" being viewed is perceived to be. I have annotated your diagram, which clearly shows that $b>a$, which means that the angular magnification of such a telescope is greater than one, i.e. the Galilean telescope magnifies. The final image can be formed at infinity as shown in the ray diagram below. $f_1$ is the focal length of the convex lens which converges the incoming rays and $f_2$ is the focal length of the concave lens which diverges the incoming rays. Again the visual angle for the final image $u'$ is greater than the visual angle for the object being observed $u$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Why does stacking polarizers of the same angle still block more and more light? I have some sheets of polarization film. They came in a big box, all stacked at the same angle. I noticed that the entire stack of them lets almost no light through, even though they're all at the same angle. I pulled out two, and those two also block more light than just one. Why? Is this because I have low-grade polarizers? Or because lining them up at EXACTLY the same angle is impossible? Or because the light that gets through the first one is not really polarized exactly to its angle — it's just that less of it is polarized away from its angle than before? If it's because these are low-grade polarizers, can anyone recommend a linear polarizer that I can stack several of in a row at the same angle and still have a 100% probability of the light getting through? I feel like I'm probably just misunderstanding polarization theory so please correct me.
In addition to the other answers: even if you had extremely clear glass (like that used for optical fibers), stacking the sheets would make them become opaque more quickly than their transmission coefficient alone would suggest. This is only for completeness, because the effect on real polarizing filters is dominated by their transmission coefficient, as Odano said. The reason is that at every boundary surface (the seam between two glasses) light is reflected; the amount is approximately 4%. So after 17 glasses only 50% of the light is transmitted, even with perfectly clear glass.
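The compounding of the ~4% loss per pane is easy to check; the sketch below reproduces the "17 glasses gives about 50%" figure, assuming perfectly clear glass so that reflection is the only loss:

```python
transmission_per_pane = 0.96   # ~4% reflection loss per pane, as in the answer above

n, total = 0, 1.0
while total > 0.5:
    total *= transmission_per_pane
    n += 1
print(n, round(total, 3))   # 17 panes drop the transmission just below 50%
```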
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 1 }
Can you touch something which is massless? Can one touch massless things? If not, then why does light get scattered by the tiny particles present in air? If light is massless, how can it hit particles or dust and get scattered?

OR

Light does not need any medium to travel, so why does it change its path when changing medium?
The underlying level of nature is quantum mechanical, and the theory that describes the behavior of matter is the standard model of particle physics. All classical behaviors emerge from this underlying quantum mechanical level. The photon is a massless particle, and classical electromagnetic waves, for example light, emerge from a huge number of confluent photons.

can one touch mass-less things?

The particles interact, and the interaction can be felt as a "touch": for example, light falling on your hand and felt as heat is the interaction of innumerable photons with your skin. The interaction with the retina of the eye, and the transfer of that interaction to the brain, builds an image of the world in our brains.

If not then why the light get refracted by the tiny particles present in air

The massless photons scatter off the tiny particles, and the light built up by them changes direction/refracts because of the interaction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/302991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How does the sun really produce light in terms of waves? Electromagnetic waves are caused by changing electric and magnetic fields, and these are caused by a charge possible oscillating like an antenna or a varying current etc. My question is, with the sun, where is this source that causes the electric and magnetic fields to oscillate. Everywhere I've read stated that it was due to the energy released from nuclear fusion, but when looking at the process of nuclear fusion there are no charges produced. How is this 'energy' supposedly producing the same effect as an electron oscillating?
This is not technically very exact, since the reasoning is classical, but the energy you're talking about mostly turns into heat, and when something is hot its constituents (ionic cores and electrons) move around, collide, and oscillate violently. Since these constituents are charged, you essentially have a bunch of oscillating charges sending out light, because that is what accelerating charges do.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Max damage to target by projectile The image above is that of a past exam question. Unfortunately I am having trouble deriving a solution, my method is as follows: It is clear that the launch velocity for max range ($\theta = 45˚$) is: $$ V_{max} = {(30g)}^{1/2} $$ I assumed the landing velocity symmetric to the launch velocity, then applied the coefficient of restitution in the y-plane: $$ v_{land} = Vcos\theta i - Vsin\theta j $$ $$ v_{bounce} = Vcos\theta i + eVsin\theta j $$ Using the range equation given: $$ 45m = \frac{V^2sin2\theta}{g}+\frac{eV^2sin2\theta}{g} $$ This gives no meaningful answer for $\theta$ for any value of $v$. What is wrong with my thinking? This is all assuming max velocity is used. EDIT I believe the source of the given expression is as follows: In the vertical direction: $$u=Vsin\theta, t = T_{flight}, a = -g, s_y=0$$ Therefore:$$s = ut + \frac{1}{2}at^2$$ $$ T_{flight} = \frac{Vsin\theta}{g} $$ Now in the horizontal plane the range is: $$ s=u_xt = Vcos\theta T_{flight} = \frac{\frac{1}{2}V^2sin\theta cos\theta}{g} = \frac{V^2sin2\theta}{g} $$ Why cannot the $V^2$ term not be left as is seeing as it looks like it just expresses the magnitude of the launch velocity?
Your 1st equation (just before your EDIT) is correct, it must be your arithmetic which is going wrong. There is a solution for 1 bounce. The formula for range can be written in terms of the horizontal and vertical components of velocity $v_x, v_y$ as $R=\frac{2v_x v_y}{g}$. After each bounce $v_y$ is reduced to $ev_y$ while $v_x$ is unchanged. So after $n$ bounces the range will be $R(n)=\frac{2v_xv_y}{g}(1+e+e^2+...+e^n)=(1+e+e^2+...+e^n)R_0 \sin2\theta$ where $R_0=\frac{v^2}{g}=30\,\mathrm m$ is the maximum range without bouncing $(\theta=45^{\circ})$. To reach a range of $45\,\mathrm m$ the superball must bounce at least once. Maximum damage to the target requires maximum KE, which is accomplished by maximising the final vertical component of velocity $e^n v_y$ (since $v_x$ is constant), which requires minimising the number of bounces $n$ (because $e \lt 1$). For $n=1$ bounce there is a solution for $\theta$.
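For a concrete number one needs the coefficient of restitution, which was only given in the figure and is not visible in this excerpt, so the sketch below assumes $e=0.8$ purely for illustration. With one bounce, $(1+e)R_0\sin 2\theta = 45\,\mathrm m$ gives two possible launch angles:

```python
import math

R0 = 30.0      # maximum no-bounce range v^2/g in metres (from the question)
target = 45.0  # distance to the target in metres
e = 0.8        # coefficient of restitution: NOT in the excerpt, assumed for illustration

s = target / (R0 * (1 + e))          # sin(2*theta) required for a single bounce
theta_low = 0.5 * math.degrees(math.asin(s))
theta_high = 90.0 - theta_low        # the complementary launch angle also works
print(round(theta_low, 1), round(theta_high, 1))
```

Of the two roots, the higher angle gives the larger final vertical speed at the target, which matters for the "maximum damage" part of the question.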
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
The position of an element on a periodic sound wave So I am looking at the equation used to locate of a small element relative to its equilibrium position on a periodic sound wave. The equation is defined as below: $s(x, t) = A\cos(kx - wt)$ Now I understand why the use of a sinusoidal function, but the equation is expressed in terms of a Cosine and why not simply use a sin function? Is it just a convention or there is more?
It is a convention with some "method to its madness". We often use complex notation for waves: $$y = A e^{i(kx - \omega t)}\tag1$$ Now we know that $$e^{i\theta} = \cos\theta + i\sin\theta$$ So it follows that the real part of (1) is a cosine function...
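Numerically, the real part of $e^{i\theta}$ matches $\cos\theta$ (and the imaginary part matches $\sin\theta$) at any phase, which is why the cosine convention pairs naturally with the complex notation. A quick check:

```python
import cmath, math

for theta in (0.0, 0.7, 2.5, -1.3):
    z = cmath.exp(1j * theta)            # e^{i*theta} = cos(theta) + i*sin(theta)
    assert math.isclose(z.real, math.cos(theta), abs_tol=1e-12)
    assert math.isclose(z.imag, math.sin(theta), abs_tol=1e-12)
print("Re e^{i theta} = cos(theta) at every sampled phase")
```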
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Example of compact operators in quantum mechanics Can anyone give an non-trivial example of compact operators in quantum mechanics? Of course, any operator on a finite-dimensional Hilbert space is compact.
Compact operators often appear in integral equations and can be viewed as continuous generalizations of matrices, where the corresponding integral kernel must not be too singular and must decay fast enough at infinity. An example is the Lippmann–Schwinger equation in quantum scattering theory, see https://en.wikipedia.org/wiki/Lippmann%E2%80%93Schwinger_equation .
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Grashof number as a ratio of buoyant and viscous forces The Grashof number is supposed to be a ratio of buoyant forces to viscous forces. I find this hard to believe, since if $$F_b=\beta g \rho \Delta T$$ is the buoyancy force, the definition of the Grashof number, $$\text{Gr}=\frac{\beta g\Delta T L^3}{\nu^2},$$ implies that the viscous force is something like $\frac{\rho}{L^3}\nu^2$, instead of something linear in $\nu$. How is this supposed to be the viscous force?
Don't take those intuitive notions of dimensionless numbers as ratios of forces too seriously. Those kinds of statements are to be understood as vague metaphors more than anything else. But, clearly the expression $\frac{\rho}{L^3}\nu^2$ has the dimension of a force, and clearly this force depends on viscosity. That's pretty much all there is to say about this. How exactly viscous and buoyancy forces arise in convection problems depends on the boundary conditions and will be complex in general.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Phase Transitions and Bubble Nucleation The potential for a first order phase transition is shown below The phase transition occurs from the spontaneous formation of bubbles. Inside the bubbles the field value is at the "true vacuum" and outside the bubble the field value is at the "false vacuum". In many texts, a second order phase transition is described to occur in a smooth fashion. My question is can bubble nucleation occur in a second order phase transition? Or is a first order phase transition necessary?
I will just address the question from mean-field theory with weak fluctuations, which is I think the only regime where the bubble-nucleation picture makes sense. Below you see a picture of a Landau-Ginzburg potential (taken from Cardy's book "Scaling and Renormalization in Statistical Physics", which I highly recommend) for a continuous phase transition. Note that the symmetric minimum at the origin and the two symmetry-breaking minima never co-exist, so there is no way to draw the diagram like in the question. So I would say bubble nucleation requires a first-order transition. On the other hand, in the ordered phase we have two different minima, and in a low temperature state there are typically "domains" of either one with some domain walls between them. The two domains are equally favorable, so they don't spontaneously nucleate, but if we break the symmetry explicitly, we can tilt the scales and cause one domain to have slightly lower energy (e.g. by applying a magnetic field to a ferromagnet), and then we will have spontaneous nucleation of domains sitting in the lower of the two minima. Something else you might look into in the setting of self-organized criticality is avalanches, which are a bit like bubble nucleation. They happen when part of a system relaxes from a supercritical to a subcritical state near a continuous phase transition. Their sizes reflect the scaling behavior of the critical point. Here is a starting place.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does torque produce a force on the axis of rotation? If a door is rotated about its fixed axis in (outer) space, a force parallel to the door on the hinges will arise due to centripetal force on the centre of mass and conservation of momentum (Newton's third law). But any torque on the door will create a force on the hinges which is equal to $t/r$ or torque divided by radius. I'm looking both an intuitive and mathematically based explanation for this fact. I can sort of 'see' why, but my understanding is vague and uncertain.
Pure torque does not produce any forces. So it is not true that "any torque on the door will create a force on the hinges".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/303997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 3 }
Can a ship float in a (big) bathtub? I am confused. Some sources say it is possible at least theoretically ( http://www.wiskit.com/marilyn/battleship.jpeg ) and some say it is not true ( http://blog.knowinghumans.net/2012/09/a-battleship-would-not-float-in-bathtub.html ) Is it necessary or not that there exists an amount of water around the ship that weights at least the same as the weight of the ship?
Sammy Gerbil and Pirx have already answered the question correctly. I will only include a minor statement here, since the whole confusion seems to revolve around the concept of "weight of displaced water". "Weight of displaced water" is the weight of water that would have to occupy the submerged volume of the body, if the body were to be not present. It has nothing to do with the amount of water that is already there when the floating object is present.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 8, "answer_id": 2 }
Are the diffusion terms conservative? Generally the diffusion terms are of the form $$D = \dfrac{\partial}{\partial x} \left(\mu \dfrac{\partial u}{\partial x} \right) .$$ Is this this term conservative or nonconservative?
This form is conservative in the sense that, if you approximate the right hand side with a central finite difference approximation (using $\mu$ at the boundary of each grid cell and $u$ at the center of each cell), the finite difference approximation will automatically conserve mass. For those of us who solve diffusive problems using numerical methods, this is what a conservative form of the diffusion terms represents. An example of the non-conservative form would be if we differentiated by the product rule to obtain the mathematically equivalent form: $$D=\mu\frac{\partial ^2u}{\partial x^2}+\frac{\partial \mu}{\partial x}\frac{\partial u}{\partial x}$$If this were expressed in finite difference form, the finite difference scheme would not automatically conserve mass. Such a version would be regarded as non-conservative.
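To see the difference numerically, here is a small NumPy sketch (grid size and the random fields are arbitrary illustrative choices). Summing the flux form over a periodic grid telescopes exactly to zero, so the total "mass" $\sum_i u_i\,\Delta x$ is conserved by the update, while the product-rule form leaves a large residual:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
dx = 1.0 / N
u = rng.random(N)              # arbitrary field
mu = 1.0 + rng.random(N)       # spatially varying diffusivity

up, um = np.roll(u, -1), np.roll(u, 1)      # periodic neighbours u_{i+1}, u_{i-1}
mup, mum = np.roll(mu, -1), np.roll(mu, 1)

# Conservative (flux) form: d/dx( mu du/dx ), with mu at the cell faces
mu_face_r = 0.5 * (mu + mup)   # mu at i+1/2
mu_face_l = 0.5 * (mu + mum)   # mu at i-1/2
D_cons = (mu_face_r * (up - u) - mu_face_l * (u - um)) / dx**2

# Non-conservative (product-rule) form: mu u'' + mu' u'
D_noncons = mu * (up - 2*u + um) / dx**2 \
          + (mup - mum) / (2*dx) * (up - um) / (2*dx)

# The flux form telescopes over a periodic grid: the sum is zero to round-off
print(abs(D_cons.sum() * dx))     # effectively zero (round-off only)
print(abs(D_noncons.sum() * dx))  # large residual in general
```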
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
4-Vector Potential Notation How am I supposed to interpret this notation: $$F^{uv} = \partial^uA^v-\partial^vA^u$$ I know that $\partial^u = (\frac{1}{c}\frac{\partial}{\partial t},- \vec\nabla)$ So for example for the potential $$A=\left(\begin{matrix} 0 & 0 & 0& E_z\\ 0 & 0 & B_y & 0\\ 0 & -B_x & 0 & 0\\ E_z & 0&0&0\end{matrix}\right)$$ So to compute $F^{23} = \partial^u\left(\begin{matrix}0\\B_y\\0\\0\end{matrix}\right) -\partial^v \left(\begin{matrix}0&0&B_y&0\end{matrix}\right) = 0 - - B = B$. Is this the correct way to do it? I'm just getting confused by the notation.
The 4-potential $A_\mu$ is a four-vector, not a matrix. Set the speed of light $c=1$ and it is defined as $$ A^\mu = (\phi,\vec{A})\,, $$ in which $\phi$ is the electric potential and $\vec{A}$ vector potential. The magnetic field is given by $$ \vec{B} = \nabla \times \vec{A}\,, $$ and the electric field is given by $$ \vec{E} = -\frac{\partial \vec{A}}{\partial t}-\nabla \phi\,. $$ Thus the electromagnetic tensor (as you write in the question description) is $$ F^{\mu\nu} = \partial^\mu A^\nu-\partial^\nu A^\mu = \left(\begin{array}{cccc} 0&-E_1&-E_2&-E_3\\ E_1&0&-B_3&B_2\\ E_2&B_3&0&-B_1\\ E_3&-B_2&B_1&0 \end{array}\right)\,. $$ This tensor is gauge invariant. Under the transformation $A_\mu\rightarrow A_\mu+\partial_\mu\chi$, $F^{\mu\nu}$ will not change. The Lagrangian of electromagnetic field is $$ \mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + A_\mu J^{\mu} $$ and the corresponding Euler-Lagrange equation is Maxwell equation $$ \frac{\partial \mathcal{L}}{\partial A_\nu}-\partial_\mu\frac{\partial \mathcal{L}}{\partial \partial_\mu A_\nu}=0~~\Rightarrow ~~\partial_\mu F^{\mu\nu} = J^\nu\,. $$ Actually, the 4-vector potential $A_\mu$ can be viewed as the connection on a fiber bundle (something like the Christoffel symbol in general relativity), and thus the tensor $F_{\mu\nu}$ is the curvature. In quantum field theory we have the covariant derivative $D_\mu=\partial_\mu + ieA_\mu$, it is similar to the covariant derivative on curved spacetime $\nabla_\mu=\partial_\mu+\Gamma_{\mu\nu}^\lambda$.
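If it helps to check the index bookkeeping, here is a short SymPy sketch (my own illustration, not part of the answer) that builds $F^{\mu\nu}=\partial^\mu A^\nu-\partial^\nu A^\mu$ from a symbolic 4-potential with the $(+,-,-,-)$ metric and verifies the $E$ and $B$ entries of the matrix above:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
phi = sp.Function('phi')(*X)
A1, A2, A3 = (sp.Function(f'A{i}')(*X) for i in (1, 2, 3))
A_up = [phi, A1, A2, A3]          # contravariant A^mu, with c = 1

eta = sp.diag(1, -1, -1, -1)      # metric, signature (+,-,-,-)

def d_up(mu, f):
    # partial^mu = eta^{mu mu} partial_mu for a diagonal metric
    return eta[mu, mu] * sp.diff(f, X[mu])

F = sp.Matrix(4, 4, lambda m, n: d_up(m, A_up[n]) - d_up(n, A_up[m]))

# E = -dA/dt - grad(phi); check F^{01} = -E_1
E1 = -sp.diff(A1, t) - sp.diff(phi, x)
assert sp.simplify(F[0, 1] + E1) == 0

# B = curl A; check F^{12} = -B_3
B3 = sp.diff(A2, x) - sp.diff(A1, y)
assert sp.simplify(F[1, 2] + B3) == 0
print("F^{mu nu} reproduces the E and B entries of the matrix")
```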
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What did the big bang "look like"? I've been reading here for a while now and something I always see is people saying "the big bang happened everywhere" or "the center of the universe is where you are", explaning that the big bang didn't happen from a single point, but everywhere at once. The problem is that I am unable to get an "image" of what that might look like in my head. What does it mean when the universe expands everywhere at once? I know that this might make sense from a mathematical point of view, but what would it actually look like?
The argument that there is no center of the universe is only logically valid if the universe is actually infinite in size and mass, or if the universe is a torus (i.e. it loops back onto itself: you would return to your starting point if you traveled in one direction for long enough). If the universe has finite mass and isn't a torus, then if you go in one direction for long enough you will either see a gradual decrease in the density of matter or run into some sort of wall. In both these cases the shape of the universe would likely be a sphere. If it is a sphere, then the universe would have a center. That point would be where the big bang actually occurred.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Momentum of an electron acting as a wave Was working on a problem with electrons acting as waves in diffraction. Part of the question asked me to calculate the momentum of the electron. Since I was dealing with waves I used the following equation: $h=pλ \implies p = h/λ$ Since $λ = v/f$ we can substitute that in, resulting in $p = hf/v$. Substituting in the de Broglie $h = E/f$ into the above equation we get $p = E/v$. Since we're talking about electrons the only energy that the electron has is kinetic so we can substitude $E = 0.5mv^2$ into the equation giving us $p = 0.5mv^2/v = 0.5mv$. I've repeat that, $p = 0.5mv$. Any 4th-grade physicist knows that momentum is $mv$ so on one hand, I have mv and on the other I have a derivation saying the momentum is $0.5mv$. Is there a mistake in my derivation I'm not seeing? P.S: I noticed something a bit later. $p = E/v \implies E = pv = mv^2$. See any similarities between this and another infamous equation in the realm of relativity?
From Einstein's famous equation we have, $E=mc^2$ From classical mechanics we have, $E = \frac{mv^2}{2}$ Equate both of them (both are E, right?) and you'll get $mc^2 = \frac{mv^2}{2}$ $c^2 = \frac{v^2}{2}$ All objects in the universe are moving at $c\sqrt{2}$ Yes, they are moving FASTER than light. Oh dear! Physics does not work! Do you notice the mistake? You cannot equate arbitrary equations with one another even if the quantities carry the same symbol and have the same dimensions. $E = h\nu$ is an equation which describes the energy carried by an electromagnetic wave of frequency $\nu$. Are electrons photons? No. How can you equate the kinetic energy of an electron with the energy associated with a photon of a particular frequency? Does that even make sense?
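To put the answer's point in numbers: the spurious factor of $1/2$ appears exactly when $\lambda = v/f$ is combined with $E_{kin} = hf$, because the phase velocity of the electron's matter wave is not $v$. A quick numeric check (the electron speed is an arbitrary non-relativistic choice):

```python
import math

h = 6.62607015e-34    # Planck constant, J s
m = 9.1093837015e-31  # electron mass, kg
v = 1.0e6             # m/s, arbitrary non-relativistic speed

p_classical = m * v

# Correct route: de Broglie wavelength lambda = h / p
lam = h / p_classical
p_from_wave = h / lam
print(p_from_wave / p_classical)   # 1.0

# Flawed route from the question: treat E = hf with E = kinetic energy
# and lambda = v / f (valid only for a wave of phase speed v, which the
# electron's matter wave is not)
E_kin = 0.5 * m * v**2
f_wrong = E_kin / h
lam_wrong = v / f_wrong
p_wrong = h / lam_wrong
print(p_wrong / p_classical)       # 0.5 -- the spurious factor
```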
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why don't humans burn up while parachuting, whereas rockets do on reentry? I guess it has something to do with their being both a high horizontal and a vertical velocity components during re-entry. But again, wouldn that mean there is a better reentry maneuver that the one in use?
The distances and speeds involved are materially different. On the scale of a parachute dive, the atmospheric density doesn't change much (and is relatively high). A parachutist quickly reaches a terminal velocity where the drag from the air matches the pull of gravity. In a re-entry, you're approaching in a much less dense atmosphere, and you're going much faster. At these speeds, drag warms you up much faster. Also, you're plowing into the atmosphere, and that means you're increasing the drag. Between these effects, you see substantially more heating. A parachutist dropping from orbit would have the same issues with burning up. There are some interesting things that are done regarding reentry maneuvers. The Chinese had one lunar orbiter which skipped off the atmosphere. The idea was simple. If the orbiter were to re-enter our atmosphere directly, it would receive too much heating. Instead, it was allowed to just enter the rarefied upper fringes of our atmosphere, bleed off some of its velocity (into heat) before skipping off the atmosphere similar to a stone on a pond. This gave it time to get rid of some of that heat before a second re-entry brought it down safely.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/304992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 2 }
Where does all the heat go during winter? I do not understand where actually the heat in our surroundings go during the winter season. Is it radiated out into space? I know it cannot coz global warming would not be a issue then. It might get absorbed but where? I tried figuring it myself but couldn't please help.
Just imagine heat escaping out of Earth in all directions at the same rate. But due to the tilt of the Earth, the sun's rays hit a larger surface area of the Earth in one hemisphere and a smaller area in the other. As a result, one hemisphere experiences more heating from the sun while the other experiences less.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How force exerted by spring is always opposite to the direction of displacement in Hooke's law Suppose a spring lying on a horizontal table, displaced from its equilibrium length by an external agent. The external agent is removed, the spring will head back to its equilibrium length. Here, the direction of spring force and displacement will be same. But according to Hooke's law, $$\mathbf{F}=-k\Delta\mathbf{x}$$ The minus sign tells us that the force exerted by spring is always opposite to the direction of displacement. How is this? Please explain the reason for the minus sign. Thanks.
The external agent is removed, the spring will head back to its equilibrium length. Here, the direction of spring force and displacement will be same. No! $x$ is not direction of change. It is just direction. * *If the spring is stretched to the left, then $x$ points leftwards. *Force points rightwards, because it tries to go back to original length. *That force makes the end move back towards the right. *When the end has moved a bit, the remaining stretching is still towards the left, but it is becoming smaller. *When it has moved half the way, the remaining stretching is still towards the left, but it is getting smaller towards the right. *When the end is almost back at original position, the stretching is still a tiny bit towards the left, but almost disappeared. $x$ shows the direction of stretching; not the direction in which the stretching changes. (That would rather be some kind of "velocity"). And the force will always appear to pull back to original size, so always opposite to the stretching.
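A minimal numeric illustration of the sign convention (the spring constant is an arbitrary illustrative value): the force $F = -kx$ always has the opposite sign to the displacement $x$ measured from the natural length, whichever way the end was displaced:

```python
# F = -k x always points opposite to the displacement x from the
# natural length, regardless of which way the end is currently moving.
k = 10.0  # N/m, illustrative

for x in (-0.2, -0.1, 0.1, 0.2):   # metres; negative = stretched leftwards
    F = -k * x
    assert F * x < 0               # force opposes displacement
    print(f"x = {x:+.1f} m  ->  F = {F:+.1f} N")
```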
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Can a (micro) black hole be used to make a microscope? We have seen that black-holes can be used as a telescope. Is there a chance that light bending from a micro black-hole can be used to create a microscope?
Gravitational lenses would be a very poor choice for use in an optical instrument. For optical instruments we require that the lenses focus parallel rays of light to a point - the focal point: This happens because the farther a light ray is from the optical axis the more strongly it is bent. However for a gravitational lens the farther the light is from the lensing object the more weakly it is bent. The light rays focussed by a micro black hole would look more like this: Instead of a focal point a gravitational lens has a focal line, and this means it doesn't produce images in the way a conventional lens does. Consequently it would be of little use in a microscope.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Confusing working of lens Why do lens don't splits light into its seven constituent colors, like Prism? * *Why is lens left is correct, not right one? *How does lens came to know that rays are coming from infinity or are at Focus and converge/diverge them at different point accordingly?
All the other answers that lenses do show chromatic aberration are perfectly true, but usually they do not show it to anything like the same degree as a prism. This is because prisms are typically operated with light at much higher incidence angles to their interfaces than lenses are. For an incidence angle of $\theta$, the refraction or angular deviation wrought by the interface is, from Snell's law: $$\Delta\theta=\arcsin\left(\frac{n_2}{n_1}\sin\theta\right)-\theta$$ So that a change in that deviation owing to a wavelength induced refractive index shift is: $$\mathrm{d}_{r_n} \Delta\theta =\frac{\sin\theta}{\sqrt{1-r_n^2\,\sin^2\theta}}$$ Where $r_n=n_2/n_1$ and this quantity increases with incidence angle, especially if total internal reflexion is approached. At least one of the incidence angles in a prism is of the order of $45^\circ$; one seldom allows an angle anything like as high as this in lens design. The reason for this is that spherical aberration is roughly caused by the nonlinearity in Snell's law; if Snell's law were $\theta_1/\theta_2=n_2/n_1$, then spherical lenses would truly focus rays to a point. Whenever one has severe refraction in lens design, one adds a great deal of aberration which must be nulled elsewhere in the lens system; one thus tends to end up with finely balanced differences of large aberrations and the design becomes exquisitely sensitive to the positioning of lens elements. Thus one only ever sees it in applications where high optical powers in few surfaces are needed and the cost justifies someone's hand tweaking of lenses as the system is built. Typically in miniaturized, high cost optics like microscope objectives. As John Rennie says, the stacking of different materials can compensate for chromatic aberration.
A spherical surface with different materials either side will be converging at wavelengths where the refractive index on the side of the center of curvature is greater than that on the other, diverging at wavelengths where this side's index is the lesser of the two, and the surface yields no power at the wavelength where the two indices are equal. Thus one can choose such surfaces to offset the wavelength dependent optical power elsewhere in the system. "Achromatic" systems bring two wavelengths to a common focus (usually at either end of the visible spectrum), apochromats bring three wavelengths to a common focus, and I have in the past designed a system bringing seven wavelengths to a common focus. Needless to say, that was a highly specialized application.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Guided waves equations In Griffiths's Introduction to Electrodynamics, monochromatic guided waves are proposed to have the form $$\mathbf{\tilde{E}}(x,y,z,t)=\mathbf{\tilde{E}}_0(x,y)e^{i(kz-\omega t)}$$ $$\mathbf{\tilde{B}}(x,y,z,t)=\mathbf{\tilde{B}}_0(x,y)e^{i(kz-\omega t)}$$ where $$\mathbf{\tilde{E}}_0=E_x\mathbf{\hat{x}}+E_y\mathbf{\hat{y}}+E_z\mathbf{\hat{z}}$$ $$\mathbf{\tilde{B}}_0=B_x\mathbf{\hat{x}}+B_y\mathbf{\hat{y}}+B_z\mathbf{\hat{z}}$$ Then the following is stated: In every denominator the expression $(\omega /c)^2-k^2$ appears. But, as far as I know, $\omega /c=k$, so $$(\omega /c)^2-k^2=k^2-k^2=0$$ What am I missing here?
Usually, people don't use the symbol $k$ in this context so as to avoid exactly the kind of confusion you are having. In this context, the symbol written as $k$ in Griffiths's equations is often written $\beta$ or $k_z$; it is then called the propagation constant and it depends on the geometry of the wave in the waveguide. It is found through an eigenvalue equation that arises from the waveguide's boundary conditions. The discrete spectrum of the eigenvalue equation defines the bound modes of the waveguide. $\beta$ is always less than the wavenumber for the medium within the waveguide, so that you don't get singularities in Griffiths's equations. For example, in a two dimensional waveguide comprising an air channel (or "slab") between two perfectly conducting walls, the modes are actually the superpositions of two plane waves propagating in pairs at angles $\pm\theta_j$ (one is the other reflected off the walls in accordance with the law of reflexion). So $\beta_j = k\,\cos\theta_j$ and the angles $\theta_j$ are defined by the boundary condition that the longitudinal component of the electric field must vanish at the walls in a perfectly conducting waveguide.
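As an illustration of such an eigenvalue condition, here is a sketch for the two-dimensional parallel-plate example in the last paragraph, where the boundary condition forces the transverse wavenumber to be $j\pi/a$, so $\beta_j=\sqrt{k^2-(j\pi/a)^2}$ and $\theta_j = \arccos(\beta_j/k)$. The plate spacing and drive frequency are arbitrary illustrative values:

```python
import math

c = 3.0e8   # m/s
a = 0.02    # plate separation: 2 cm, illustrative
f = 25e9    # 25 GHz drive, illustrative
k = 2 * math.pi * f / c          # wavenumber of the medium in the channel

# Bound modes of a parallel-plate (perfect-conductor) guide:
# transverse wavenumber must be j*pi/a, so
# beta_j = sqrt(k^2 - (j*pi/a)^2), real only above cutoff (j*pi/a < k).
j = 1
modes = []
while (kt := j * math.pi / a) < k:
    beta = math.sqrt(k**2 - kt**2)           # propagation constant
    theta = math.degrees(math.acos(beta / k))  # plane-wave bounce angle
    modes.append((j, beta, theta))
    j += 1

for j, beta, theta in modes:
    print(f"mode {j}: beta = {beta:.1f} 1/m  (< k = {k:.1f}), "
          f"theta = {theta:.1f} deg")
```

Note that every `beta` printed is below `k`, matching the statement above that $\beta$ is always less than the wavenumber for the medium within the waveguide.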
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How molecules radiate heat as electromagnetic wave? an object of higher temperature radiate infrared rays as a way to decrease the temperature. how a molecule produce a electromagnetic wave? in atoms electromagnetic radiation is caused by electrons. what is responsible in molecules?
As it was correctly noted by others, molecules consist of atoms, and the radiation can be emitted as transitions between the atomic orbitals. Molecules also have other degrees of freedom, related to the rotational and vibrational motion of atoms within a molecule, their frequency being usually in infrared or even radio range. It is worth noting that not every transition between two energy levels may result in emission of electromagnetic waves: the transition should necessarily have a non-zero matrix element of a dipolar or magnetic moment, so that it couples to the EM field (although more complex types of coupling, e.g., quadrupole coupling, are also possible). Thus, vibrational and rotational modes are usually not active themselves in the EM spectrum, but modify the electronic transitions, by adding satellite lines: $$\hbar\omega_{optical} \rightarrow \hbar\omega_{optical}\pm n\hbar\Omega_{vibrational/rotational}.$$ Finally, it is necessary to mention organic dyes - a special class of organic molecules where complex re-arrangement of electronic structure is possible. This makes these molecules fluorescent and widely used in laser technology.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does the equipartition theorem for a diatomic gas apply to the three rotations if the temperature is high enough? In a diatomic gas, there are three degrees of freedom of rotation, of which the frozen mode (the rotation around the bond axis) is ruled out because the energy spacing of frozen rotational energies is about 100 000 times as great as the other two, non-frozen rotations (around the axes perpendicular to the bond axes). But what if the temperature of the gas is very high (just beneath the temperature at which the molecular bonds start breaking up)? The two "normal" rotations possess an energy which grows quadratically with n, the number of angular momentum quanta ($L=\frac{nh}{2\pi}$): $T=\frac{n^2h^2}{8\pi^2I}$ (T is the kinetic energy, I the moment of inertia and h is Planck's constant). So if the molecule possesses 100 quanta (n=100) of angular momentum (for the "normal" rotations), the energy gets bigger 100 000 times as in the case n=1 (of course, the same is true for the frozen rotation). Are the non-frozen rotational energies high enough before the molecular breakup to convey quanta of angular momentum to the two atoms in a "target" molecule, and give this molecule an angular momentum around the bond axis (n=1), in which case the equipartition theorem holds?
Let us consider, e.g., a hydrogen molecule. The bond dissociation energy is about 5 eV (https://en.wikipedia.org/wiki/Bond-dissociation_energy). On the other hand, to initiate rotation of the diatomic hydrogen molecule around its axis you need to drive the electrons in hydrogen atoms from the ground state to higher levels (the moment of inertia of the nucleus is negligible), which requires at least 10 eV (http://astro.unl.edu/naap/hydrogen/levels.html) (I guess this energy is not dramatically different in the molecule compared to the atom). Thus, a temperature increase can only partially "unfreeze" this degree of freedom before the molecule dissociates.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are light rays able to cross each other? See the image first: Why are light rays able to cross each other? Air isn't able to.
Why are the light rays able to cross each other The underlying level of nature is quantum mechanical. Light is an emergent phenomenon from the quantum mechanical level of photons, where an enormous number of photons of energy $h\nu$ build up the classical electromagnetic wave which is light. Photon–photon interactions are very very rare at energies below twice the mass of an electron. The quantum mechanical Feynman diagram of two photons interacting, from which the probability of interaction can be calculated: has four electromagnetic vertices, i.e. a factor of $(1/137)^{1/2}$ per vertex in the amplitude, and when squared as it multiplies the integral for the probability, the number becomes minuscule, so photon–photon interactions are very rare. As the answer by @AccidentalFourierTransform states, using classical electromagnetic waves, some interaction can happen, but it would need very good instrumentation to see it. One can see interference between two light beams, but interference is not interaction; it comes from the superposition of the two beams' collective wavefunctions, which when detected show the interference pattern from the way the photons' wave functions build up the macroscopic light beam. Superposition is not interaction, so the beams can cross and continue on their way, if a detector is not introduced in the overlap. (Note that anyway, to see interference patterns one should have coherent monochromatic beams). For high energy photons other channels open with higher probability, but that is another story.
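The suppression described above can be put into rough numbers: with four vertices each contributing a factor of $(1/137)^{1/2}$ to the amplitude, the interaction probability scales like $\alpha^4$:

```python
alpha = 1 / 137.035999             # fine-structure constant
amplitude_factor = alpha**2        # four vertices, each ~ sqrt(alpha)
probability_scale = amplitude_factor**2   # ~ alpha^4
print(f"{probability_scale:.2e}")  # ~ 1e-9: why the beams just pass through
```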
{ "language": "en", "url": "https://physics.stackexchange.com/questions/305942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 7, "answer_id": 0 }
Is there any advantage in stacking multiple images vs a single long exposure? Suppose I have a source object that is not time varying, to be concrete let's say it's a galaxy. Is there anything additional that can be learned or done with multiple short exposure images of exactly the same field as compared to a single long exposure, given that the total integration time is identical? I'm thinking of things along the lines of noise suppression, background removal, image processing magic... So far the only thing I can think of is that a long exposure could saturate the detector (I'm thinking CCD here). Short exposures could avoid this, allowing for accurate photometry across the entire image. I've tagged this [astronomy] since that's the area of application I'm most familiar with, but perspectives from other fields are welcome.
If your exposures are short enough (a fraction of a second), you can even combat turbulence in the atmosphere. The trick is to do very many short images then pick the ones where a (bright) point source is sharpest and only stack those. The technique is called Lucky Imaging and can deliver images as sharp as the Hubble Space telescope from ground-based instruments. As an aside, your question could be - what should be my criterion for when not to stack images? - because the advantages to doing so, in terms of bad pixel rejection, cosmic ray removal and dynamic range, are so great. For optical CCD images, the break-even point is normally when the readout noise becomes a negligible contributor to the signal to noise of whatever you are trying to measure. Another consideration can be how long it takes to read out the CCD, which results in "dead-time". Lucky Imaging relies on special electron-multiplying CCDs that can be read out very rapidly with modest readout noise, at the expense of a dispersion in the gain (number of output electrons per input photon). Most other astronomical CCDs minimise readout noise at the expense of readout times of tens of seconds, but are highly linear.
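A toy Monte Carlo of the readout-noise break-even point mentioned above (all rates and noise figures are made-up illustrative numbers): stacking $N$ frames pays the readout noise $N$ times over, so the stacked SNR only approaches the single-exposure SNR when the readout noise is negligible against the photon noise:

```python
import numpy as np

rng = np.random.default_rng(1)

signal_rate = 100.0   # photo-electrons per frame interval, illustrative
read_noise = 5.0      # e- RMS per readout, illustrative
N = 100               # number of short frames
trials = 20000

# One long exposure: a single readout
long_exp = rng.poisson(signal_rate * N, trials) + \
           rng.normal(0, read_noise, trials)

# N short exposures, stacked: N readouts
short_sum = rng.poisson(signal_rate, (trials, N)).sum(axis=1) + \
            rng.normal(0, read_noise, (trials, N)).sum(axis=1)

snr_long = long_exp.mean() / long_exp.std()
snr_stack = short_sum.mean() / short_sum.std()
print(f"SNR long:  {snr_long:.1f}")   # var ~ S*N + R^2    = 10000 + 25
print(f"SNR stack: {snr_stack:.1f}")  # var ~ S*N + N*R^2  = 10000 + 2500
```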
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 3 }
Can an accelerating frame of reference be inertial? In physics problems, the earth is usually considered to be an inertial frame. The earth has a gravitational field and the second postulate of the general theory of relativity says: In the vicinity of any point, a gravitational field is equivalent to an accelerated frame of reference in gravity-free space (the principle of equivalence). Does this mean that accelerating frames of reference can be inertial?
No. By definition an accelerating frame of reference cannot be an inertial frame of reference. The Earth is only approximately an inertial frame of reference over sufficiently small distances and times.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Is the magnetic field of a moving electron caused by length contraction in the direction of motion? Consider an electron moving relative to us. Because the space in the electron's rest frame is contracted relative to us in the direction of the electron's velocity, the electric field lines are squeezed in the same direction, so the electric field "density" is bigger perpendicular to the electron's motion (but smaller (zero?) in the direction parallel to its motion). Is this the qualitative source of the magnetic field?
The idea is good, but it is a little more complicated. You have to work with the tensor form of the electromagnetic field, $F^{\mu \nu}$, in the four dimensions of space-time. The Lorentz transformation that takes an electron from rest to constant velocity can be seen as a rotation in the four dimensions of space-time (${\Lambda^{\nu'}}_\nu $). All tensors change, "rotate", according to this rotation, just as 3-dimensional vectors rotate when you rotate a frame. Because the electromagnetic tensor has two indices, you have to apply the rotation to each index. Basically the tensor transforms like this: $F^{\mu'\nu'} = {\Lambda^{\mu'}}_\mu F^{\mu\nu} {\Lambda^{\nu'}}_\nu $, and you read off the new electric and magnetic fields from the new tensor $F^{\mu'\nu'}$. You could also use the electromagnetic potential vector $A^{\mu}$; its transformation is simpler because it has only one index (it is a vector). So basically your idea is good: the contraction (which is in fact a rotation) of space and time will "rotate" the electric and magnetic fields into each other. The math involves tensor transformation.
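For a concrete feel, the transformation $F^{\mu'\nu'} = {\Lambda^{\mu'}}_\mu F^{\mu\nu} {\Lambda^{\nu'}}_\nu$ can be carried out numerically. A minimal sketch for a pure electric field boosted along $z$, in units $c=1$ and one common sign convention (conventions vary between texts):

```python
import numpy as np

# Frame where the charge is at rest: pure E along x, B = 0.
# Convention used here: F[0, i] = E_i and F[i, j] = -eps_{ijk} B_k,
# so B_y sits at F[1, 3].
E = 1.0
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = E, -E

v = 0.6                          # boost along z
g = 1 / np.sqrt(1 - v**2)        # Lorentz gamma = 1.25
boost = np.eye(4)
boost[0, 0] = boost[3, 3] = g
boost[0, 3] = boost[3, 0] = -g * v

Fp = boost @ F @ boost.T         # apply Lambda to each index
print("E'_x =", Fp[0, 1])        # gamma * E     = 1.25
print("B'_y =", Fp[1, 3])        # gamma * v * E = 0.75: a magnetic field appears
```

The transverse electric field is enhanced by $\gamma$ (the "squeezed field lines" of the question), and a magnetic component $\gamma v E$ appears, exactly as the tensor rotation predicts.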
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How are club-style weapons effective? The First Law of Thermodynamics states that I can't swing an object held in one hand with more energy than I can swing my arm, and the Second Law says that the total energy would probably even end up being somewhat less. And yet, a person who might not be afraid of getting punched by me would certainly be more cautious if I had some sort of blunt object, such as a club, baseball bat, or crowbar. If it were a blade, I could understand that: the edge focuses the impact down to a much more concentrated line. But a blunt weapon doesn't do that, so how is it able to be an effective weapon, hitting harder than you can hit with an unarmed blow?
There are several reasons:

* It is hard. That's why even wearing a knuckle duster greatly increases the damage you do, while wearing boxing gloves decreases it.
* Humans are stronger than they are fast. I'm oversimplifying a bit, but the limiting factor in delivering a strong blow is not the energy or force you can exert, but rather how fast your hand can move. You can reach the same blow speed with a brick in your hand and cause much more damage.
* It is long. This is again related to the second point: by using a longer object and more force, you can get an even faster-moving point of contact and more momentum/energy, without having to carry a very heavy object around.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How do scientists know Iron-60 is created during supernovae? I know that the meteoroids contain Ni-60, which is formed after decaying Fe-60, and as per my study, I got to know that Fe-60 is formed during the time of a supernova. But I wonder how scientists know/find that these elements were created during that event?
Don Clayton investigated the production of Fe-60 in his 1971 Nature paper New Prospect for Gamma-Ray-Line Astronomy (paywalled; the abstract also points to Arnett & Clayton 1970, likewise paywalled, though that abstract is unclear as to whether the contents concern Fe-60). This likely would have used supernova nucleosynthesis calculations (see, for example, Clayton's sometimes collaborator Brad Meyer's NucNet tools, though these are probably more advanced than what Clayton had at his disposal in the 1970s). Clayton later wrote a summary, The Role of Radioactivities in Astrophysics, which included a history of gamma-ray lines and discusses Fe-60: The $^{60}$Fe nucleus emits a 59 keV gamma ray upon decay, and its daughter $^{60}$Co emits gamma-ray lines of 1.17 and 1.33 MeV. Reasoning that during its long mean lifetime some 50,000 supernovae occur in the Milky Way, their collective effect should be observable. This reasoning applied equally well thirteen years later to the first interstellar radioactivity to be detected, that of $^{26}$Al. Which mostly confirms the notion that the computations came before the observations. (I'm not sure the Fe-60 $\gamma$-ray line itself has been observed; Binns et al. (2016) indicate that the element has been detected in very small numbers as cosmic rays, but that doesn't seem to say anything about the $\gamma$-ray emission.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Does length of a hosepipe affect pressure/flow I have connected a hosepipe to my shower drainage plumbing (1st floor), which I am running into the garden (ground floor) as a kind of grey-water system. However, the water drains out the shower terribly slowly. The hosepipe is 50 meters long. I'm wondering whether the length is the problem. It probably only needs to be 20m to reach the end of the lawn. I don't want to cut it and ruin the hosepipe without knowing with some certainty that a shorter pipe will increase the flow/draining rate. Any opinions?
Yes, you are correct that the length is the problem. As a matter of fact, in an application like yours the flow rate is pretty much inversely proportional to the length of the pipe: if you cut the pipe length in half, the flow rate will roughly double. For laminar flow, the flow rate $Q$ in $\mbox{m}^3/\mbox{s}$ is given by the Hagen–Poiseuille equation $$Q=\frac{\Delta P}{L} \frac{\pi D^4}{128\mu},$$ where $\Delta P$ is your pressure drop in Pascal ($\Delta P=\gamma\,h$, with $\gamma\approx9{,}810\,\mbox{N/m}^3$ the specific weight of the water, and $h$ the elevation of the inlet over the outlet), $L$ the length of the pipe, $D$ its inner diameter (all lengths are in $\mbox{m}$), and $\mu\approx8.9\times10^{-4}\,\mbox{Pa s}$ the dynamic viscosity of water. Notice that, if you could use a hose with a larger diameter, you could potentially gain a lot more than by shortening the pipe. Double the diameter of the pipe, and your flow rate increases by a factor of 16. That's assuming the flow stays laminar, which may or may not be the case; but you will increase your flow rate significantly even if you get turbulence.
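Plugging hypothetical numbers in makes the scaling concrete (a sketch using the laminar Hagen–Poiseuille relation; the hose bore, elevation drop, and viscosity below are my assumptions, not values from the question):

```python
import math

def laminar_flow(delta_p, length, diameter, mu=8.9e-4):
    """Hagen-Poiseuille volumetric flow rate in m^3/s; laminar flow assumed."""
    return delta_p * math.pi * diameter**4 / (128 * mu * length)

h = 3.0                               # assumed elevation drop, m (1st floor to garden)
dp = 9810 * h                         # gravitational pressure head, Pa
q50 = laminar_flow(dp, 50, 0.012)     # 50 m hose, assumed 12 mm bore
q20 = laminar_flow(dp, 20, 0.012)     # shortened hose
q_wide = laminar_flow(dp, 50, 0.024)  # same length, doubled bore
print(q20 / q50, q_wide / q50)        # 2.5x from shortening, 16x from widening
```

The ratios (2.5x and 16x) are exact consequences of the $1/L$ and $D^4$ scalings; the absolute flow rates from this formula overestimate reality if the flow turns turbulent, as the answer notes.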
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relation between perturbation theory and Taylor expansion in QM So I am looking at non-degenerate perturbation theory. The idea is that the perturbing term in the Hamiltonian is small so you somehow expand the energies and wave functions in this small term and collect orders. Now I did an exercise in which you apply perturbation theory to a system, which is solvable. You then show by Taylor expanding the analytical result of the energies that the first order perturbation term is equal to the first order term in the Taylor expansion. Should this be obvious? I know that the first order perturbation theory was derived based on expanding the energies in the small perturbing term but somehow I cannot see that it is exactly equivalent to simply calculating the first order term in the energy.
I was thinking: suppose we write the Hamiltonian as a 2 by 2 matrix. Then $\lambda H'$ (which was treated as unknown) could be thought of as the linear combination $\lambda(H^1+\lambda H^2 + ...)$. In a sense, $\lambda$ here plays the role of $x$ in a Taylor expansion, and in the sequence $H^1+\lambda H^2 + ...$, each of the four matrix entries is "independent" of the others. Thus you effectively get four independent Taylor series, one for the function sitting in each of the four places in the matrix. The combinations $\psi^j_n$ and $E^j_n$ are the resulting wave functions and energies for each power $\lambda^j$. (Notice that $\psi^j_n$ and $E^j_n$ are the sums of all Taylor-expansion contributions of the solutions of $H^i$ in each state $j$, followed by a double sum ranging from $n=0$ to infinity.) In conclusion, there are two sets of Taylor expansions. One is for $H^i$, representing the combined Taylor expansions of the $m^2$ matrix entries. The other is the Taylor expansion of $\psi^{ij}$ and $E^{ij}$ for each operator $H^i$. The convergence of the Taylor expansions is taken care of by the physical assumption that the system won't blow up (no singularities) or become discontinuous; smoothness means you can always get one. Also, just to be clear, I suspect that $E^j_n$ in Griffiths is actually the sum over all $j$ states of all $E^{ij}_n$, with the base case $E^{0j}_n$ excluded, and likewise for $\psi_n^{j}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/306890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is hermiticity a basis-dependent concept? I have looked in wikipedia: Hermitian matrix and Self-adjoint operator, but I still am confused about this. Is the equation: $$ \langle Ay | x \rangle = \langle y | A x \rangle \text{ for all } x \in \text{Domain of } A.$$ independent of basis?
Symmetric operators are usually employed when working on real vector space, whereas Hermitian operators are usually employed when working on complex vector spaces. In finite dimension, the associated matrix is symmetric in the first case ($a_{ij}=a_{ji}$ for all $i,\,j$), whereas it is equal to its complex conjugate transposed matrix in the second case ($a_{ij}=\overline{a_{ji}}$ for all $i,\,j$). In both cases, the property (of being symmetric / Hermitian) is independent of the choice of basis but dependent on the choice of scalar product for the first case, Hermitian product for the second case.
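A quick numerical check of this basis (in)dependence (a sketch; the operator is fixed throughout, only its matrix representation changes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = A + A.conj().T                            # a Hermitian matrix in some ON basis

# Unitary change of basis (orthonormal -> orthonormal): hermiticity survives
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
A_unitary = Q.conj().T @ A @ Q
print(np.allclose(A_unitary, A_unitary.conj().T))   # True

# Generic invertible (non-unitary) change of basis: the matrix of the very same
# operator need not look Hermitian anymore, although the operator still is one
# with respect to the original inner product.
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_skew = np.linalg.inv(S) @ A @ S
print(np.allclose(A_skew, A_skew.conj().T))
```

This illustrates the point above: hermiticity is independent of the choice of *orthonormal* basis (unitary transformations), but it is tied to the inner product, which a general similarity transformation does not preserve.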
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 1 }
When is the motion oscillatory and when is it not? Sometimes in physics we see examples in which an applied force is balanced by an innate force of the body, as in torsion balances and Cavendish's experiment. In such cases we say that the rotation (of the coil) proceeded up to the point where the applied force balanced the innate restoring force, and there is no oscillation. But in certain cases we see that, instead of rotating and then stopping, the coil oscillates, e.g. in a ballistic galvanometer. So why, in similar situations, is there rotation in one case and oscillation in the other? What causes oscillation, not just in the case of a ballistic galvanometer but in other devices too (please give other examples), and what causes mere rotation with no oscillation in the others (please give other examples)? Consider the example of an ammeter: the coil rotates until the external torque (due to the current) equals the torsion couple. But in a ballistic galvanometer the coil oscillates even though charge (or current) has passed; it doesn't stop. The cases are similar but the results are different.
For the motion to be oscillatory, two conditions should be met:

1. There should be a restoring force that returns the system to equilibrium.
2. The system should have inertia, so that once it reaches the equilibrium position with some velocity it continues to move.

Inertia takes the form of inductance in an LC electric circuit, and the capacitor plays the role of the restoring force. Damping takes energy away from the oscillating system; with too much damping, oscillations are barely possible. A similar correspondence can be made for other systems.
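The role of damping can be checked with a quick numerical integration of a damped oscillator, $m\ddot{x} = -c\dot{x} - kx$ (a sketch with made-up parameters):

```python
def zero_crossings(m, c, k, x0=1.0, v0=0.0, dt=1e-3, t_max=20.0):
    """Integrate m*x'' = -c*x' - k*x (semi-implicit Euler), count sign changes of x."""
    x, v, crossings = x0, v0, 0
    for _ in range(int(t_max / dt)):
        v += (-c * v - k * x) / m * dt
        x_new = x + v * dt
        if x_new * x < 0:
            crossings += 1
        x = x_new
    return crossings

print(zero_crossings(1.0, 0.2, 4.0))   # light damping: swings through zero repeatedly
print(zero_crossings(1.0, 8.0, 4.0))   # heavy damping: creeps back, never crosses
```

Both runs have the same restoring force and the same inertia; only the damping differs. The overdamped case returns to equilibrium without ever overshooting, which is exactly the "rotate and stop" behaviour of the ammeter versus the oscillation of the ballistic galvanometer.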
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does the $j$ mean in this notation? This section of Introduction to Electrodynamics by Griffiths is talking about the Maxwell stress tensor. I don't quite understand what the $j$ means on the left side of the "$=$" sign (for either of the two representations).
The $j$ represents the $j$-th component, as mentioned above. Seeing as your question is really about confusion over index notation rather than the physics itself, let me suggest a book to you. There is an incredible number of books that deal with vector and tensor notation across all different fields and subject types. The one that really cemented index gymnastics and notation for me was Tensors, Relativity and Cosmology by Dalarsson & Dalarsson. This might be because it's a great book, or because it was probably my 4th or 5th time trying to get my head around it. If I were you, I'd build a strong command of the notation and index gymnastics before jumping back into the physics text. Two more are Goldstein, Classical Mechanics, and Landau, Mechanics (Vol. I in the series). Cheers.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Pauli–Villars regularization in the Electron Vertex Function: Evaluation I'm studying the one-loop contribution to the electron vertex function from Peskin and Schroeder's book "An Introduction to Quantum Field Theory", Section 6.3. I have some trouble with the Pauli–Villars regularization and with getting the final results, so any help will be appreciated. Starting from: $I = \delta\Gamma^\mu(p',p)= 2 i e^2 \int \frac{d^4l}{(2\pi)^4} \int^1_0 dx dy dz \delta (x+y+z-1) \frac{2}{D^3} ~ \times \bar{u}(p') \Big[\gamma^\mu . \Big(-\frac{1}{2} l^2 +(1-x)(1-y)q^2+(1-4z+z^2)m^2\Big) + i \frac{\sigma^{\mu\nu}q_\nu}{2m} (2 m^2 z (1-z)) \Big]u(p)~~~~~~~~~(6.47)$ where $D= l^2-\Delta+i\epsilon, ~~~~~~~~\Delta=-xyq^2+(1-z)^2m^2$, while: $ \Delta_\Lambda = -xyq^2+(1-z)^2m^2+ z \Lambda^2 $. After the momentum integration and Pauli–Villars regularization, this equals: $I= \frac{\alpha}{2\pi} \int^1_0 dx dy dz \delta (x+y+z-1) ~ \times \bar{u}(p') \Big(\gamma^\mu . \Big[ \log \frac{z\Lambda^2}{\Delta} + \frac{1}{\Delta}\Big((1-x)(1-y)q^2+(1-4z+z^2)m^2\Big)\Big] + i \frac{\sigma^{\mu\nu}q_\nu}{2m\Delta} \Big[2 m^2 z (1-z)\Big] \Big)u(p)~~~~~~~~~(6.54)$ *Here one would expect to substitute $\log\frac{\Delta_\Lambda}{\Delta} = \log\frac{-xy q^2+(1-z)^2m^2+z\Lambda^2}{\Delta}$, so why is only $\log\frac{z\Lambda^2}{\Delta}$ substituted instead? *Then how can we arrive at: $ F_1(q^2) = 1 + \frac{\alpha}{2\pi} \int^1_0 dx dy dz \delta (x+y+z-1) ~ \times \Big[ \log \Big( \frac{m^2(1-z)^2}{m^2(1-z)^2-q^2xy}\Big) + \frac{m^2(1-4z+z^2)+q^2(1-x)(1-y)}{m^2(1-z)^2-q^2xy+\mu^2z} - \frac{m^2(1-4z+z^2)}{m^2(1-z)^2+\mu^2 z}\Big] ~~~~~~~ (6.56)$ Indeed, I'm a little bit confused: where does the $1$ term come from? Where has $\log z\Lambda^2$ gone? Why does $\log m^2(1-z)^2$, the part of $\Delta_\Lambda$ which wasn't written in the previous equation, now appear in the numerator, and where is the $q^2$ term in that numerator?
I tried to read the book's explanation, but I cannot understand much of it. Has anyone done this exercise before?
I only recently stumbled over the same issue, so this answer might come a bit late. The first term in 6.56, the 1, appears because here $F_1(q^2)$ includes all corrections (to all orders), as indicated by the Bachmann–Landau symbol O($\alpha^2$), which stands for terms of second order and higher in the electric coupling constant. Hence, the 1 represents the contribution of the first order of perturbation theory to the form factor, as shown in Section 6.2, page 184. Now, in order to clear up the confusion around the devious log term, one only has to write out the subtraction $\delta F_1(q^2) - \delta F_1(0)$, as this gives $$ \delta F_1(q^2) - \delta F_1(0) = \log \left( \frac{z \Lambda^2}{\Delta(q^2)}\right) - \log \left( \frac{z \Lambda^2}{\Delta(0)}\right) + \text{rest} = \log \left(\frac{z \Lambda^2}{\Delta(q^2)}\cdot \left( \frac{z \Lambda^2}{\Delta(0)}\right)^{-1}\right) + \text{rest} = \log \left( \frac{\Delta(0)}{\Delta(q^2)}\right) + \text{rest} = \log \left( \frac{m^2(1-z)^2}{m^2(1-z)^2-q^2xy} \right) + \text{rest} \ \ \ \ . $$
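The cancellation of the cutoff in that subtraction is elementary log algebra and can be verified symbolically (a sketch treating $z\Lambda^2$ and the two $\Delta$'s as abstract positive quantities; the finite "rest" terms play no role here):

```python
import sympy as sp

# Abstract positive stand-ins for z*Lambda^2, Delta(q^2) and Delta(0)
zL2, Dq, D0 = sp.symbols('zLambda2 Delta_q Delta_0', positive=True)

# The two divergent logs of the subtraction delta F_1(q^2) - delta F_1(0)
diff = sp.log(zL2 / Dq) - sp.log(zL2 / D0)

# The cutoff drops out, leaving exactly log(Delta(0)/Delta(q^2))
residual = sp.simplify(sp.expand_log(diff - sp.log(D0 / Dq), force=True))
print(residual)   # 0
```

This is why the renormalized $F_1$ contains $\log\big(m^2(1-z)^2/(m^2(1-z)^2 - q^2xy)\big)$ with no trace of $\Lambda$: the cutoff enters both $\delta F_1(q^2)$ and $\delta F_1(0)$ identically and cancels.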
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Wave packet expression Speaking in general about plane waves propagating along $z$ (electro-magnetic waves, for example; not necessarily particles represented as waves), a wave packet can be defined as $$A(z,t) = \int_{\omega_1}^{\omega_2} A ( \omega ) e^{j (\omega t - kz) } d\omega$$ In particular, this expression is used when dealing with group velocity. But a single plane wave is usually expressed as $$B(z, t) = B_0 \cos ( \omega t - kz )$$ So, why is the complex exponential used above? Or is the actual $A(z,t)$ of the wave packet just the real part of the first expression? Observation: I considered the form $B_0 \cos ( \omega t - kz )$ not because I necessarily want real functions, but because this is the standard form in which a plane wave is presented and written.
A real-valued wavepacket solution of the dispersionless 1D wave equation can always be defined as $$A(z,t) = \int_{\omega_1}^{\omega_2} A ( \omega ) e^{j (\omega t - kz) } d\omega, $$ where $\omega_1=-\omega_2$ and the frequency-domain amplitude satisfies $A(-\omega)=A(\omega)^*$; if this is not the case then $A(z,t)$ will have some complex values. This form is consistent with the plane-wave function you wrote, $$B(z, t) = B_0 \cos ( \omega_0 t - k_0z )= \frac{B_0}{2} \left(e^{j (\omega_0 t - k_0z) }+e^{j (-\omega_0 t + k_0z) }\right),$$ with a frequency-domain amplitude $B(\omega)=\frac12 B_0 \left[ \delta(\omega-\omega_0) + \delta(\omega+\omega_0)\right]$. As such, there is no contradiction.
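The reality condition $A(-\omega)=A(\omega)^*$ is easy to verify numerically; a sketch at fixed $z=0$ with a hypothetical Gaussian spectrum:

```python
import numpy as np

t = np.linspace(-10, 10, 2001)               # observation times at fixed z = 0
omegas = np.linspace(0.5, 2.0, 200)          # positive-frequency band [w1, w2]
amps = np.exp(-(omegas - 1.2)**2 / 0.1)      # hypothetical smooth real A(w) > 0

# Add the mirror band explicitly, enforcing A(-w) = A(w)*
packet = sum(a * np.exp(1j * w * t) + np.conj(a) * np.exp(-1j * w * t)
             for a, w in zip(amps, omegas))

print(np.max(np.abs(packet.imag)))           # ~0: the packet is real-valued
```

Each conjugate pair combines into $2|A(\omega)|\cos(\omega t + \varphi)$, so the full integral over a symmetric band is real, which is exactly the statement of the answer.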
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why exactly do we say $L = L(q, \dot{q})$ and $H = H(q, p)$? In classical mechanics, we perform a Legendre transform to switch from $L(q, \dot{q})$ to $H(q, p)$. This has always been confusing to me, because we can always write $L$ in terms of $q$ and $p$ by just taking the expression for $\dot{q}(q, p)$ and stuffing it in. In thermodynamics, we say $U$ is a function of $S$, $V$, and $N$ because $$dU = T dS + p dV + \mu dN,$$ which is exceptionally simple. But for the Lagrangian, we instead generally have $$dL = (\text{horrible expression})\, dq + (\text{horrible expression})\, d\dot{q}$$ In this case, I see no loss in 'naturalness' to switch to $q$ and $p$, so what's the real difference between considering $L(q, \dot{q})$ and $L(q, p)$?
There's nothing stopping you from writing $L$ as a function of $q$ and $p$. In fact, you're required to write $L$ as a function of $q$ and $p$ to get the Hamiltonian! But the Euler-Lagrange equations become very ugly. Consider the normal Euler-Lagrange equation $$ \frac{d}{dt}\frac{\partial L}{\partial \dot q}=\frac{\partial L}{\partial q} $$ Let's try writing this in terms of $q,p$. The left hand side just becomes $\dot p$. But the right hand side is a lot uglier. We'd have $$ \frac{\partial }{\partial q}L(q, p(q,\dot{q}))=\frac{\partial L}{\partial q}+\frac{\partial L}{\partial p}\frac{\partial p}{\partial q} $$ and the Euler-Lagrange equation becomes $$ \dot{p}=\frac{\partial L}{\partial q}+\frac{\partial L}{\partial p}\frac{\partial p}{\partial q} $$ This might not look ugly at first glance, but it is actually terrible. In order to write down the proper Euler-Lagrange equation, we need to know the functional form of $p$ in terms of $q$. Thus, the Lagrangian as a function of $(p,q)$ is not sufficient to generate equations of motion. This is avoided when we go to the Hamiltonian formalism, where Hamilton's equations treat $p$ and $q$ as independent.
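As a concrete illustration of the Legendre-transform step, here is a symbolic sketch for the harmonic oscillator (my choice of example, not one from the question):

```python
import sympy as sp

q, qdot, p, m, k = sp.symbols('q qdot p m k', positive=True)

L = m * qdot**2 / 2 - k * q**2 / 2               # harmonic oscillator Lagrangian
p_def = sp.diff(L, qdot)                         # canonical momentum: m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]   # invert: qdot = p/m

H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))   # Legendre transform
print(H)                                         # kinetic + potential in (q, p)

# Hamilton's equations follow from H(q, p) alone, no p(q, qdot) needed
assert sp.simplify(sp.diff(H, p) - qdot_of_p) == 0   # dq/dt =  dH/dp = p/m
assert sp.simplify(sp.diff(H, q) - k * q) == 0       # dp/dt = -dH/dq = -k*q
```

Once the transform is done, $H(q,p)$ alone generates the dynamics, which is exactly the advantage the answer describes over a Lagrangian awkwardly rewritten in $(q,p)$.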
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Charged plasma and ion grid in interaction in ion thrusters I was just wondering: in this image of an ion thruster, when the positively charged particles pass through the grids, wouldn't a fraction of them just bombard the negatively charged grid? This means that there must be constant adjustment to maintain the potential difference between the grids. Is this the reason for the high energy consumption of these engines (along with ionization)?
A good high power ion thruster uses a lot of energy to accelerate ions to high energies. The image does not include any power source, which is a serious problem if a person looking at the image wants to understand how ion thrusters work. A simple ion thruster works like this: a small amount of energy is used to ionize a bunch of atoms, then a much larger amount of energy is used to move some of the electrons away from the plasma. Now the plasma is positively charged, and positive ions tend to fly off it. If there's some negative object nearby, it accelerates the approaching positive charges and decelerates the positive charges that are moving away, so the negative object does not really do anything to the ions that move past it, but it may prevent ions flying off in the opposite direction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/307964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is an induced electric field? I have read in many books about induced current in a coil (Faraday's law), and also the motional emf across a moving conductor in a magnetic field. But somewhere I read about an induced electric field due to a time-varying magnetic field, and I think that induction of the electric field is the fundamental phenomenon, with induced emf and current as its results. I am just a novice in physics. Could someone explain to me how these phenomena (induction of emf and induction of the electric field) are related to each other?
Talking of "induced" fields is, again, yet more bad terminology and language that conveys a misleading notion - here, an impression of a kind of "causality" of one field upon the other - that is not part of our generally-used physical model. What it means is this: In any case where that the magnetic field is changing in time, there must also be present at the same time an electric field proportional to the rate of change of magnetic field. It violates Maxwell's equations to have a situation with a changing magnetic field only and no associated electric field. The reason for this is that the electric and magnetic field are really one single mathematical entity, and the Maxwell's equations describe how that single entity changes. That's why you talk of an "electromagnetic field". Situations where you have, say, a dancing magnet, produce an electromagnetic field that has both an electric and a magnetic component, while if the magnet is stationary, it has only the magnetic component. This is most naturally expressed in the fully-relativistic formulation using the vector four-potential $^{(4)}\mathbf{A}$ (usually called by its components, $A^\mu$), which elegantly and seamlessly integrates the two fields. This is the truly "fundamental" entity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why do the electrons below the Fermi level not conduct electricity? Physically, why is it that the electrons need to be excited above the Fermi level to conduct electricity? In other words, why is the current zero when the electrons lie below the Fermi level? Does the Pauli exclusion principle play any role here?
Electron bands are symmetric about $k = 0$, so for every electron in a filled band, there exists another electron with opposite momentum which cancels out its current, resulting in zero net current flow. An infinitesimal applied electric field just tilts the bands by an infinitesimal amount, so if the whole band lies below the Fermi level, it will remain filled when an infinitesimal electric field is applied, and this current cancellation remains robust.
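The pairwise cancellation can be demonstrated with a toy 1D tight-binding band (a sketch; the lattice model and fillings are my own choices, with $\hbar$ and the lattice constant set to 1):

```python
import numpy as np

N = 402                                      # k-points across the Brillouin zone
k = -np.pi + 2 * np.pi * np.arange(N) / N    # uniform grid over (-pi, pi)
v = 2 * np.sin(k)                            # group velocity dE/dk for E = -2 cos k

filled = v.sum()                             # full band: +k and -k currents cancel

kF, dk = np.pi / 2, 0.05                     # half filling, small field-induced shift
sym = v[np.abs(k) < kF].sum()                # symmetric (equilibrium) Fermi sea: ~0
shifted = v[(k > -kF + dk) & (k < kF + dk)].sum()   # displaced sea carries current
print(filled, sym, shifted)
```

A completely filled band sums to zero no matter what, whereas a partially filled band whose Fermi sea is displaced by a field has uncompensated states near $+k_F$ and carries a net current; that is the difference between an insulator's full band and a metal's partially filled one.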
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 4 }
Is short circuit technically the same as overloading? Taking the simplest circuit: battery and resistors. If I connect lots of resistors in parallel, wouldn't that increase the current to an extent that it would be technically be very similar to shorting the circuit?
No, a short circuit needn't be an overload. There are circumstances (like current transformers) where no load at all, i.e. an open circuit, is the overload, while even the smallest load resistance is safe. There are ideal signal sources that are voltage sources (i.e. low impedance), sources that are current sources (i.e. high output impedance), and sources of known impedance (50 ohm RF wiring and 110 ohm digital differential wiring depend on that). When something is an overload, it means that it is outside the specified intended load limits. Sometimes that means a HIGH resistance is an overload (a current source will overvoltage and damage the insulation). A low resistance can be an overload if the source is of such low impedance that destructive currents flow. Even that, though, isn't an overload if the intention is to fire an explosive squib.
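On the question's parallel-resistor premise: the total current does grow linearly with the number of branches, and whether that constitutes an overload depends on the source's ratings rather than on the resistance alone. A sketch with made-up values:

```python
V, R = 12.0, 100.0                      # hypothetical battery voltage and branch resistance
branch_counts = (1, 10, 100, 1000)
currents = [V / (R / n) for n in branch_counts]   # n equal resistors in parallel
for n, i in zip(branch_counts, currents):
    print(f"{n:5d} branches -> {i:8.2f} A")
```

With enough branches the equivalent resistance approaches zero, so the configuration does tend toward a short; whether 120 A is an "overload" for a 12 V source depends entirely on what that source is rated to deliver.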
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Why is perturbation theory always implemented around $\alpha=0$? In the perturbative approach to field theory we expand whatever we are computing in a power series in some coupling $$ \sum^nd_n\alpha^n $$ then in principle we can compute all the $d_n$. This series is in general expected not to be convergent, but it is hoped that it is at least an asymptotic expansion of the true thing as $\alpha\to0$. My question is, why is the perturbative approach only implemented around $\alpha=0$? I mean, we could expand around any given $\alpha_0$ and obtain expansions that (one would hope) are asymptotic to the real thing as $\alpha\to\alpha_0$: $$ \sum^nb_n(\alpha-\alpha_0)^n $$ So why is perturbation theory always implemented around $\alpha=0$?
Usually the problem is set up such that $\alpha=0$ simplifies the equations: it eliminates mixing/interaction terms, or allows certain effects to be ignored at first order. In general, we want a problem that we can solve; that is the main point of perturbation theory. If we could solve the problem for some different value of the parameter, then it would be meaningful to expand around that other point. Note that you can always rescale the parameter, defining $\beta = \alpha - \alpha_0$, and have the new series in $\beta$ around zero, so the starting point, as far as the parameter itself is concerned, is meaningless. However, note that the coefficients $b_n$ and $d_n$ of the expansions around the two points are related, so they are effectively equivalent, as long as you can actually solve the problem for both values of the parameter.
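The rescaling $\beta = \alpha - \alpha_0$ can be made concrete symbolically (a sketch using $e^\alpha$ as a stand-in for an exactly known quantity):

```python
import sympy as sp

a, a0, b = sp.symbols('alpha alpha_0 beta')
f = sp.exp(a)                                # stand-in for an exactly known quantity

s_at_0 = sp.series(f, a, 0, 4).removeO()     # expansion around alpha = 0
s_at_a0 = sp.series(f, a, a0, 4).removeO()   # expansion around alpha = alpha_0

# beta = alpha - alpha_0 turns the second series into a plain power series in
# beta around 0: same structure as the first, just shifted coefficients
s_beta = sp.expand(s_at_a0.subs(a, b + a0))
print(s_beta)
```

Both expansions are power series around "zero" of their respective variables; what differs is whether you can actually compute the coefficients, which is why the solvable point $\alpha=0$ is singled out in practice.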
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
"Bending up" $LC$-circuit to a linear antenna mathematically In introductory physics books one often finds a picture series leading from an $LC$-circuit to a simple linear antenna by "bending up" the $LC$-circuit, for example like this: The key difference between the first and last picture is described as being that the fields and the EM energy are very well localized in the $LC$-circuit but expand into space in the last picture, as indicated in the next figure: In my own words, I would say that the $LC$-circuit will already emit an EM wave, but with a power orders of magnitude smaller than that of the linear antenna. *Are there any experimental measurements available which compare (in the far field) the power radiated by the $LC$-circuit with that of the linear antenna, so that I can get a feeling for how many orders of magnitude the difference will be? *Can this deformation be made mathematically more rigorous?
Are there any experimental measurements available which compare (in the far field) the power radiated by the LC circuit? The radiation coming from a discrete inductor or capacitor will depend on the details of their construction. For example, it's possible to buy a "shielded" inductor which has a ferrite material surrounding it as well as in its core, for the purpose of reducing radiation from the inductor. The radiation coming from a tank circuit made of a discrete inductor and capacitor is likely to come more from the wires connecting the parts than from the parts themselves, and it will depend on the details of the construction of the circuit: how long the wires are, and how far apart they are, for example. So there's no way to compare a "generic" discrete-element LC circuit with an equivalent antenna the way you're asking about. Can this deformation be made mathematically more rigorous? You can use finite element analysis to analyze an antenna and find the equivalent L and C elements to model it as a lumped circuit. However, I don't think this is more "rigorous" in the way most physicists use the term.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/308447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }