What does symplecticity imply? Symplectic systems are a common object of study in classical physics and nonlinearity science. At first I assumed it was just another way of saying Hamiltonian, but I also heard it in the context of dissipative systems, so I am no longer confident in my assumption. My question now is, why do authors emphasize symplecticity, and what is the property they typically imply with that? Or in other, more provocative terms: why is it worth mentioning that something is symplectic?
Given a symplectic structure, some awesome results occur. This is seen most obviously in Classical Mechanics, as the Wiki site states. For instance, in talking about particle motion, you are led to phase space, which is the cotangent bundle $T^*\mathbb{R}^3\cong\mathbb{R}^6$ over $\mathbb{R}^3$, and this bundle naturally carries a symplectic structure. Once you have such a structure, then (as Wiki states almost verbatim): any real-valued differentiable function $H$ on a symplectic manifold can serve as an energy function or Hamiltonian. You can now discuss gradient flows (like in fluid dynamics), and some conservation statements such as Liouville's theorem. But yes, the main good thing is that you are now able to get your hands on a differential equation which predicts the future behavior of your system.
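To make that last point concrete, here is a minimal sketch of the standard construction (sign conventions vary between books): the symplectic form $\omega$ converts the differential of $H$ into a vector field $X_H$, whose flow is the time evolution, $$\omega~=~\sum_i dq^i\wedge dp_i,\qquad \iota_{X_H}\omega~=~dH \quad\Longrightarrow\quad \dot q^i~=~\frac{\partial H}{\partial p_i},\qquad \dot p_i~=~-\frac{\partial H}{\partial q^i}.$$ So "symplectic" is the structural ingredient that turns a single function $H$ into Hamilton's equations of motion.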
{ "language": "en", "url": "https://physics.stackexchange.com/questions/32738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 3 }
In electro-optic material, what is happening to the structure of the material for the index of refraction to change? I apologize if electro-optic material is not the correct word. As I understand it, when an electric field is applied to an electro-optic material, the index of refraction changes in proportion to the applied field. What is happening to the structure of the material for this to occur?
Firstly, it is important to realize that the refractive index of a material is not an "inherent" property of the material. Any dielectric material can have permanent or induced electric dipoles. In linear materials, the density of these electric dipoles (called the polarization density) is proportional to the applied electric field. The refractive index is very simply related to this constant of proportionality. In a non-linear material (which you're referring to as an "electro-optic" material), this density will be proportional to higher powers of the electric field (in the simplest cases). Thus, if one takes the ratio of the polarization density to the electric field, one finds it to be proportional to some powers of the electric field itself. Thus, the refractive index now is not a constant of the material and will depend on the field, among other things. The physical origin of this effect, therefore, is the manner in which permanent and induced dipoles in a dielectric material respond to an applied field. For a more mathematical treatment, look up http://en.wikipedia.org/wiki/Polarization_density, or a quantum electronics book by Amnon Yariv.
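As a rough scalar sketch of the lowest-order case (real electro-optic materials are anisotropic, so the coefficients below are really tensors, and the symbols are generic rather than values for any particular material): $$P~=~\epsilon_0\left(\chi^{(1)}E+\chi^{(2)}E^2+\cdots\right),\qquad n^2~=~1+\chi^{(1)}+\chi^{(2)}E+\cdots,$$ so with a static applied field $E_0$ the index seen by a weak optical wave shifts approximately linearly, $n(E_0)\approx n_0+\frac{\chi^{(2)}}{2n_0}E_0$, which is the linear (Pockels-type) electro-optic effect.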
{ "language": "en", "url": "https://physics.stackexchange.com/questions/32788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it theoretically possible to reach $0$ Kelvin? I'm having a discussion with someone. I said that it is - even theoretically - impossible to reach $0$ K, because that would imply that all molecules in the substance would stand perfectly still. He said that this isn't true, because my theory violates the energy-time uncertainty principle. He also told me to look up the Schrödinger equation and solve it for an oscillator approximating a molecule, and see that its lowest energy state is still non-zero. Is he right in saying this, and if so, can you explain to me a bit better what he is talking about?
By the third law of thermodynamics, a quantum system has temperature absolute zero if and only if its entropy is zero, i.e., if it is in a pure state. Because of the unavoidable interaction with the environment this is impossible to achieve. But it has nothing to do with all molecules standing still, which is impossible for a quantum system as the mean square velocity in any normalized state is positive.
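For reference, the oscillator calculation your friend is pointing to gives the standard result $$E_n~=~\hbar\omega\left(n+\tfrac{1}{2}\right),\qquad E_0~=~\tfrac{1}{2}\hbar\omega~>~0,$$ so even the lowest-energy state has nonzero kinetic and potential energy. This is consistent with the answer above: reaching $T=0$ is about the state becoming pure, not about the motion stopping.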
{ "language": "en", "url": "https://physics.stackexchange.com/questions/32830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 2 }
Is there some connection between the Virial theorem and a least action principle? Both involve some 'averaging' over energies (kinetic and potential) and make some prediction about their mean values. As for least action principles, one could think of them as saying that the actual path is one that makes an equipartition between the two kinds of energies.
* There is an interesting Hamiltonian counterpart to BebopButUnsteady's nice Lagrangian answer: An infinitesimal canonical transformation (CT) $$\begin{align} \delta q~=~&\varepsilon q, \cr \delta p~=~&-\varepsilon p,\end{align}\tag{1}$$ [with type-2 generator $F_2 = (1+\varepsilon)q\cdot P$] of the Hamiltonian action $$\begin{align} S_H~=~&\int\! dt~ L_H,\cr L_H~=~&p\cdot \dot{q}-H,\end{align} \tag{2}$$ leads to the Hamiltonian virial theorem for long time-averages: $$\langle q\cdot \frac{\partial H}{\partial q}\rangle~=~\langle p\cdot \frac{\partial H}{\partial p}\rangle,\tag{3} $$ under the usual assumption of bounded motion.
* The virial theorem (3) in Hamiltonian mechanics has the same form as the corresponding virial theorem in classical statistical mechanics, with the understanding that the long time-averages $\langle\cdot \rangle$ are replaced with statistical averages $\langle\cdot \rangle$. The latter follows from the (generalized) equipartition theorem $$ \langle F(z)\frac{\partial H(z)}{\partial z}\rangle ~=~k_BT \langle \frac{\partial F(z)}{\partial z}\rangle, \tag{4}$$ cf. a (currently deleted) answer by Nikolaj-K.
* The (generalized) equipartition theorem (4) in classical statistical mechanics, in turn, is an analogue of the Schwinger-Dyson (SD) equations $$\langle F[\phi]\frac{\delta S[\phi]}{\delta \phi}\rangle~=~i\hbar\langle \frac{\delta F[\phi]}{\delta \phi} \rangle\tag{5}$$ in QFT.
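As a quick sanity check of eq. (3), not part of the original argument: specializing to $H=\frac{p^2}{2m}+V(q)$ gives $\langle p\cdot\partial H/\partial p\rangle=\langle p^2/m\rangle=2\langle T\rangle$ and $\langle q\cdot\partial H/\partial q\rangle=\langle q\cdot\nabla V\rangle$, so (3) reduces to the familiar virial theorem $$2\langle T\rangle~=~\langle q\cdot\nabla V\rangle,$$ e.g. $2\langle T\rangle=n\langle V\rangle$ for a homogeneous potential $V\propto|q|^n$.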
{ "language": "en", "url": "https://physics.stackexchange.com/questions/32896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 0 }
Radioactive decay, why such unintuitive formula? When talking of exponential decay, as with radioactive decay, the formula used (e.g. Wikipedia and my textbook) is: $$ N(t) = N_0e^{-\lambda t} $$ This formula, with the decay constant $\lambda$, makes little intuitive sense. It is the ratio between the decay rate and the amount of radioactive material at any time. It might lead one to believe that after one time unit, the amount of radioactive material has been decreased by a factor $1/\lambda$, but that is not even the case. A much more intuitive form would be like the formula of exponential growth: $$ N_{wrong}(t) = N_0*(1-k)^t, k= 1-e^{-\lambda} $$ One only needs to look at that formula for a second to get an intuitive understanding of the rate of the decay. I got curious about this, and I want to ask why mathematicians or physicists have chosen the first mentioned formula. Did I miss something clever here?
We come to the first formula by considering the differential equation which we can experimentally measure: $$\frac{dN}{dt}=-\lambda N\tag{1}$$ We can solve differential equation $(1)$ by rewriting as follows: $$\frac{dN}{N}=-\lambda\cdot dt$$ We then integrate: $$\int{\frac{dN}{N}}=\int{-\lambda\:dt}\implies\ln{N}=-\lambda t+c_{1}$$ Exponentiating both sides (with base $\rm e$) gives: $$N(t)={\rm e}^{-\lambda t+c_{1}}={\rm e}^{-\lambda t}{\rm e}^{c_{1}}$$ We can rewrite ${\rm e}^{c_{1}}=C$ as it is an arbitrary multiplicative constant. So we have: $$N(t)=C{\rm e}^{-\lambda t}$$ Evaluating at $t=0$ shows that this constant is just the initial amount, $C=N(0)=N_{0}$. This is why we choose to write it in the form: $$N(t)=N_{0}{\rm e}^{-\lambda t}\tag{2}$$ I hope this helps!
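Incidentally, the "intuitive" form in the question is the same curve in disguise: with $k=1-e^{-\lambda}$ one has $N_0(1-k)^t=N_0e^{-\lambda t}$. A quick numerical check (the values below are arbitrary, just for illustration):

```python
import numpy as np

# The two parametrizations agree when k = 1 - exp(-lambda):
#   N0 * exp(-lam * t)   vs   N0 * (1 - k)**t
N0, lam = 1000.0, 0.3            # arbitrary illustrative values
k = 1.0 - np.exp(-lam)           # fraction decayed per unit time

t = np.linspace(0.0, 10.0, 11)
exp_form = N0 * np.exp(-lam * t)
pow_form = N0 * (1.0 - k) ** t

print(np.allclose(exp_form, pow_form))   # True: same curve, different constant
```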
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
How is a cathode ray tube different from beta minus radiation? In beta minus the result is one neutron in the nucleus changing to a proton, plus an electron and an anti-neutrino being sent off. The antineutrino is indifferent to our health. So I guess what makes a beta source dangerous compared to a cathode ray tube must be a difference in the kinetic energy of the emitted electrons?
Modest energy electrons (such as those found in CRT televisions and monitors) range out (i.e. dump all their energy and stop) very quickly in dense materials like glass, so these tubes are not emitting significant numbers of electrons, and those that do penetrate are even lower in energy. In fact they do emit small quantities of soft x-rays due to electron interactions with the material, but again the rate is low and the energy is minimal, so there is little penetration. You probably don't want to sleep on an operating CRT, but watching television subjects you to an infinitesimal dose. (And the allowed dose is regulated throughout the industrialized world.) We can use the online interface to PSTAR to quantify the range. TVs run at a few tens of thousands of volts, meaning the electrons get, say, 30,000 eV = 30 keV = 0.03 MeV, so the penetration is around $10^{-4}~\mathrm{g/cm^2}$, which, given that the density of glass is about 2.5ish, gives a range of $4\times 10^{-5}~\mathrm{cm}$.
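For what it's worth, the closing arithmetic as a tiny script (the inputs are just the rounded numbers quoted above, so this is only an order-of-magnitude statement):

```python
# Convert a range quoted in g/cm^2 into a physical depth in glass.
csda_range = 1e-4     # g/cm^2, rough range for ~30 keV electrons (as quoted above)
rho_glass = 2.5       # g/cm^3, approximate density of glass

range_cm = csda_range / rho_glass
print(f"{range_cm:.1e} cm")   # ~4e-5 cm, i.e. well under a micron of glass
```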
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is a good introductory book on quantum mechanics? I'm really interested in quantum theory and would like to learn all that I can about it. I've followed a few tutorials and read a few books but none satisfied me completely. I'm looking for introductions for beginners which do not depend heavily on linear algebra or calculus, or which provide a soft introduction for the requisite mathematics as they go along. What are good introductory guides to QM along these lines?
If you're new to this, start with University Physics by Young and Freedman. The reason is that this book discusses the concepts without the rigorous math. Study the following chapters: Chapter 38 (Photons: Light Waves Behaving as Particles), Chapter 39 (Particles Behaving as Waves), Chapter 40 (Quantum Mechanics), and Chapter 41 (Atomic Structure). Chapters 38 and 39 give you the background of early quantum theory; Chapters 40 and 41 discuss quantum mechanics. You can also read the Feynman Lectures Volume 3 to grasp the concepts without heavy math. If you want to dig deeper, you have to study linear/matrix algebra and calculus. Afterwards, read Introduction to Quantum Mechanics by David Griffiths or Richard Liboff. Then if you want more, read Modern Quantum Mechanics by J.J. Sakurai. That's how I suggest you do it. Quantum Mechanics is, unfortunately, one of the more difficult physics subjects. You have to build your knowledge from easier texts or else you will get lost. Watching lectures is also an option. Stanford and Oxford uploaded their QM lectures to YouTube. Then again, you have to know calculus and linear algebra to be able to keep up with the lectures. Cheers! Berty
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103", "answer_count": 19, "answer_id": 11 }
Electrical Conductivity of Thin Metal Films What is the best way to find the specific/electrical conductivity when it depends on the thickness of a very thin film?
If the film is thick enough to be more-or-less smooth and contiguous, then $$(\text{sheet resistance}) = (\text{resistivity}) / (\text{thickness}).$$ How thick? It differs from metal to metal. John Rennie's answer says that 40nm is roughly the threshold for silver to be contiguous. I know that gold is very susceptible to dewetting when you deposit too thin a film. It depends on the substrate too. But I think most if not all metals would be "more-or-less smooth and contiguous" if they are as thick as 250nm. Surface scattering or surface disorder can also cause the above equation to be inaccurate. But for metals, I doubt that would be noticeable except for films thinner than maybe 10nm.
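A minimal numerical illustration of the relation above (the resistivity value is the approximate bulk value for gold; real thin films usually have somewhat higher resistivity because of surface and grain-boundary scattering):

```python
# Sheet resistance of a smooth, contiguous metal film: R_sheet = resistivity / thickness
rho_bulk = 2.4e-8       # ohm*m, approximate bulk resistivity of gold
thickness = 50e-9       # m, a 50 nm film

sheet_resistance = rho_bulk / thickness    # ohms per square
print(f"{sheet_resistance:.2f} ohm/sq")    # ~0.5 ohm per square
```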
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How do people get the proton number for each element from experiment? How did people determine the proton number for each element from experiment in each decade of the 20th century?
Moseley, the physicist who 'fixed' the Periodic Table at the start of the 20th Century, did it by measuring X-ray spectra. The energy of the $K_\alpha$ X-ray emission line is proportional to $(Z - 1)^2$, where $Z$ is the atomic number. The results of Moseley's experiment fitted his formula so perfectly that he was able to predict the existence of several as-yet-undiscovered elements by looking at the gaps in his graphs. He also re-ordered the controversial placement of nickel and cobalt. Sadly he was killed in World War One before he was able to become the great scientific figure he surely would have been.
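For reference, in the simplest screened-Bohr picture Moseley's law for the $K_\alpha$ line reads $$E_{K_\alpha}~\approx~\tfrac{3}{4}\,(13.6~\text{eV})\,(Z-1)^2~\approx~10.2~\text{eV}\,(Z-1)^2,$$ so a plot of $\sqrt{E_{K_\alpha}}$ (or $\sqrt{\nu}$) against $Z$ is a straight line, which is exactly how the gaps corresponding to missing elements showed up.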
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Where does the 2 come from in the formula for the Schwarzschild radius? In the general theory of relativity I've seen this factor several times: $$(1-\frac{2GM}{rc^2}),$$ e.g. in the Schwarzschild metric for a black hole, but I still don't know where the 2 in this factor comes from.
A couple of preliminaries: (1) The Schwarzschild metric is not just the metric for a black hole. It's the exterior metric for any spherically symmetric, nonrotating gravitating body. For example, it's a very good approximation to the earth's metric, since the earth is nearly spherical and is not rotating at relativistic speeds. (2) Let's take units with $G=1$ and $c=1$. So the question to be answered is why, in a field such as the earth's, the time-time component of the metric $g_{tt}=1-2M/r$, expressed in Schwarzschild coordinates, has the factor of 2 in it. Because the 2 is present even in the weak-field case, we can appeal to the weak-field case to explain it. In the weak-field case, the Schwarzschild $r$ coordinate just means what we naively expect it to mean. In any static gravitational field, the metric can be written in a form where $g_{tt}=e^{2\Phi}$, where $\Phi$ is the gravitational potential. The interpretation is that for a clock at rest (relative to the preferred frame of the static field), the proper time $s$ can be found from $ds^2=e^{2\Phi} dt^2$. (This is with the +--- metric.) This simply means that there is a gravitational time dilation factor of $e^\Phi$. This time dilation factor can be found from standard arguments about elevators and the equivalence principle. The factor of 2 is present because the metric relates the squares of coordinate changes to the square of the change in proper time. For the weak-field limit of the Schwarzschild case, we have $\Phi=-M/r$, so $g_{tt}=e^{2\Phi}=1-2M/r+\ldots$, where ... represents higher-order terms that are negligible in the weak-field case. This explains why the 2 is present in the weak-field case. The question didn't ask for a complete derivation of the Schwarzschild metric, and it's not necessary to rederive the metric in order to suss out the reason for the 2. An explanation of the 2 in the weak-field case also constitutes an explanation of the 2 in the strong-field case. Given the form of the strong-field case, the 2 has to be there so that the weak-field behavior is recovered at large distances.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Cosmological constant of standard model of cosmology and observational data I am curious whether the current Lambda-CDM model of cosmology matches well with observational data, especially expansion of the universe. How well does Lambda-CDM defend its established status from other models, such as quintessence (quintessence can be said to extend Lambda-CDM, but there are some models against the standard model, I guess.)?
It fits remarkably well. One of the defining features of a cosmological constant is its equation of state. The equation of state parameter $w$ is given by $w = p/\rho$, where $p$ is the pressure it contributes and $\rho$ is the energy density. A cosmological constant has $w=-1$. The WMAP seven-year report recorded the value as $w=-1.1 \pm 0.14$. Within the error margins, the cosmological constant fits very well.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Bound states in QCD: Why only bound states of 2 or 3 quarks and not more? Why when people/textbooks talk about strong interaction, they talk only about bound states of 2 or 3 quarks to form baryons and mesons? Does the strong interaction allow bound states of more than 3 quarks? If so, how is the stability of a bound state of more than 3 quarks studied?
In a sense every nucleus is a bound state of 3N quarks. After all, the nuclear force between nucleons (protons and neutrons) is a result of the leakage of the strong color force outside the "boundary" of the nucleon. So there are undoubtedly gluons and even quark exchanges between the nucleons of a nucleus.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
Problems in the modern semiconductor/electronics technology? From what I have read, the problem with modern semiconductors/electronics seems to be quantum tunnelling and heat. The root of these problems is the size of the devices. The electrons are leaking out, and currents are causing active materials to melt. How far have we become in this regard? Can we make our devices even smaller? What is being done to maintain advancements in computing power? What is the main research, particularly in quantum mechanics and in solid state physics, being done to compute faster using less energy and space?
Well, it seems that Intel can often find some material to get closer to the physical limit. But the limit can't be reached: your transistor needs at least one atom. Another limit is on the clock frequency, which is essentially due to the material's intrinsic properties (mobility, or the speed of electrons). Graphene may have a good chance thanks to its ultrahigh mobility. One practical solution is parallel computing, as the CPUs of our PCs have more and more physical cores. GPU computing is another way out. As for quantum computers, it's very hard to say, as there are plenty of theoretical and technical obstacles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the Principle of Maximum Conformality? I'm trying to understand this article about an advance in the theoretical understanding of QCD which centers on the Principle of Maximum Conformality. What is this Principle? In other words, what is being maximized and what does this tell us about the structure of QCD? Also, is this a new principle or a new application of an old principle? Here's the full paper on the principle's application to top physics.
It is an approach to perturbative QCD which resolves ambiguities regarding the renormalization scale of the theory. It is done by summing terms for which the $\beta$-function is non-zero into the running coupling. In this sense, the remaining terms are now "maximally conformal" due to $\beta=0$. This results in predictions independent of the renormalization scheme. This approach seems to be relatively new, see http://arxiv.org/abs/1107.0338 for a detailed treatment.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
What are the units or dimensions of the Dirac delta function? In three dimensions, the Dirac delta function $\delta^3 (\textbf{r}) = \delta(x) \delta(y) \delta(z)$ is defined by the volume integral: $$\int_{\text{all space}} \delta^3 (\textbf{r}) \, dV = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \delta(x) \delta(y) \delta(z) \, dx \, dy \, dz = 1$$ where $$\delta(x) = 0 \text{ if } x \neq 0$$ and $$\delta(x) = \infty \text{ if } x = 0$$ and similarly for $\delta(y)$ and $\delta(z)$. Does this mean that $\delta^3 (\textbf{r})$ has dimensions of reciprocal volume? As an example, a textbook that I am reading states: For a collection of $N$ point charges we can define a charge density $$\rho(\textbf{r}) = \sum_{i=1}^N q_i \delta(\textbf{r} - \textbf{r}_i)$$ where $\textbf{r}_i$ and $q_i$ are the position and charge of particle $i$, respectively. Typically, I would think of charge density as having units of charge per volume in three dimensions: $(\text{volume})^{-1}$. For example, I would think that units of $\frac{\text{C}}{\text{m}^3}$ might be possible SI units of charge density. If my assumption is true, then $\delta^3 (\textbf{r})$ must have units of $(\text{volume})^{-1}$, like $\text{m}^{-3}$ for example. Is this correct?
Using the property $\delta (ax)=\frac{1}{|a|}\delta (x)$, we see that indeed the dimension of a Dirac delta is the inverse of the dimension of its argument. One recurring example is $\delta(p'-p)$, where $p$ denotes momentum; this delta has dimensions of inverse mass in natural units.
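Spelled out for the 3D case in the question: the defining normalization gives $$\int \delta^3(\mathbf r)\,dV~=~1 \quad\Longrightarrow\quad \left[\delta^3(\mathbf r)\right]~=~\frac{1}{[dV]}~=~(\text{volume})^{-1},$$ so $\rho(\mathbf r)=\sum_i q_i\,\delta^3(\mathbf r-\mathbf r_i)$ indeed carries units of charge per volume, e.g. $\mathrm{C/m^3}$.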
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 0 }
Does the speed of light vary in non-inertial frames? The speed of light is the same in all inertial frames. Does it change from a non-inertial frame to another? Can it be zero? If it is not constant in non-inertial frames, is it still bounded from above?
Light moves at speed c in an accelerating frame of reference as long as you constrain yourself to making local measurements. So, the simple answer is that yes, the speed of light remains constant. However, if you don't take purely local measurements, you can get a different speed depending on your coordinate system. If you use a coordinate system where you, an accelerating observer, are at rest (like Rindler coordinates, where time is measured by accelerating clocks and distance is measured by rulers undergoing Born rigid acceleration), then light may not move at c.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 2, "answer_id": 1 }
Euler angle: space-fixed vs body-fixed axes I am sooo confused!! Between active and passive, intrinsic and extrinsic, vectors and basis .... Stipulate that we stick to active rotations only. Then the standard derivation of $R(\alpha, \beta,\gamma)=R_{z^{\prime\prime}}(\gamma)R_{y^\prime}(\beta)R_{z}(\alpha)$ uses intermediate frame $(x^\prime,y^\prime,z^\prime)$ in transformation from space-fixed axes $(x,y,z)$ to the body-fixed axes $(x^{\prime\prime},y^{\prime\prime},z^{\prime\prime})$ to derive $$ R(\alpha, \beta,\gamma) = \left(\begin{array}{ccc} ~~\cos{\gamma}&-\sin{\gamma} & 0 \\ \sin{\gamma}&\cos{\gamma}& 0 \\ 0 & 0& 1\end{array}\right) \left(\begin{array}{ccc} \cos{\beta} & 0 &\sin{\beta} \\ 0 &1& 0 \\ -\sin{\beta}& 0&~~\cos{\beta} \end{array}\right) \left(\begin{array}{ccc} ~~\cos{\alpha}&-\sin{\alpha} & 0 \\ \sin{\alpha}&\cos{\alpha}& 0 \\ 0 & 0& 1\end{array}\right) $$ But when rewriting in terms of space-fixed axes (Sakurai pg 172, e.g.), fairly straightforward arguments (mathematically, just similarity transformations), take us to $R(\alpha, \beta,\gamma)=R_z(\alpha)R_y(\beta)R_z(\gamma)$. But this does NOT multiply out as the same matrix -- despite the use of = everywhere! So I figured the former applies to the basis, the latter the vector components (since they transform inversely to one another). But the results are not transposes of one another. And even so, what of their purported equality? As you can see, I'm really tied in knots!! Anyone have a sword?
What this refers to is the rotation reversal theorem: rotating first about the $z$ axis by angle $\alpha$, then about the rotated $y$ axis by angle $\beta$, followed by a rotation about the now twice-rotated $z$ axis by angle $\gamma$, is the same as rotating first about the original $z$ axis by $\gamma$, followed by a rotation about the ORIGINAL $y$ axis by $\beta$, and then finally about the ORIGINAL $z$ axis by $\alpha$. This remarkable theorem works for any number of rotations and for other axis sequences than the Euler angles. There is a reference on https://www.researchgate.net/profile/Edward_Barile called Rotation Dyads and Coordinate Transformations for Moving Radar Platforms. It also references some books as well: Shuh, Jung Yang "Advanced Dynamics" and E Neal Moore "Theoretical Mechanics". The rotation sequence theorem is treated in my reference on pg 14, but my notation is not to everyone's liking, so the other references may be better.
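A compact way to see the equality the question is worried about (this is essentially the similarity-transformation argument Sakurai makes, written out): a rotation about an already-rotated axis is the conjugate of the rotation about the original axis, $$\begin{align} R_{y'}(\beta)~=~&R_z(\alpha)\,R_y(\beta)\,R_z(\alpha)^{-1},\cr R_{z''}(\gamma)~=~&\left[R_{y'}(\beta)R_z(\alpha)\right]R_z(\gamma)\left[R_{y'}(\beta)R_z(\alpha)\right]^{-1},\end{align}$$ and substituting these in gives $$R_{z''}(\gamma)\,R_{y'}(\beta)\,R_z(\alpha)~=~R_{y'}(\beta)\,R_z(\alpha)\,R_z(\gamma)~=~R_z(\alpha)\,R_y(\beta)\,R_z(\gamma).$$ So the two sides really are the same operator; the explicit product of fixed-axis matrices in the order $\gamma,\beta,\alpha$ written in the question is a different matrix, because there the primed-axis rotations were not rewritten via these conjugations.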
{ "language": "en", "url": "https://physics.stackexchange.com/questions/33851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is Dalitz decay? I know there are the Dalitz $\pi^0 \to e^+ + e^- + \gamma$ decay, the $\omega \to \pi^0 + e^+ + e^-$ decay, and maybe more. But is there a rule to say which decay is Dalitz and which is not? Is there a rule to say which particle can decay by Dalitz decay and which does not?
After a really brief cursory review of the literature, I think that a Dalitz decay is a meson decay that involves two leptons in the final state, plus a photon. A double Dalitz decay has four leptons in the final state: see this paper and this paper for examples of the usage. The Dalitz decay is when a virtual photon from the two-photon decay of the $\pi^0$ internally converts to a real lepton pair before it gets too far, and the analogous thing happens for other meson or Higgs processes (two electrons from an internal photon conversion, plus a neutral object). I guess that the usage comes from the fact that the kinematic phase space of the decay products is described by a Dalitz plot, hence the name. I don't think it's anything deep.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is it wrong to talk about wave functions of macroscopic bodies? Does a real macroscopic body, like a table, a human or a cup, permit a description as a wave function? When is it possible and when not? For example, in "Statistical Physics, Part I" by Landau & Lifshitz it is argued that such systems must be described via the density matrix (chapter I, about the statistical matrix). As far as I got it, roughly speaking, macroscopic bodies are so sensitive to external interactions that they can never be counted as isolated systems; one has to include everything else to form a system. Is my interpretation right? When is it wrong to talk about wave functions of bodies that surround us?
It's not wrong, but you have to consider configuration spaces with exponentially large dimensionality. For N nonrelativistic particles, it's 3N dimensional not counting spin. This is beyond our ability. So, we take partial traces and "collapse" the wavefunction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 2 }
What law of electro-magnetics explains this? I took my son to a science museum where they had a solenoid oriented vertically with a plastic cylinder passing through the solenoid. An employee dropped an aluminum ring over the top of the cylinder when there was no current going through the solenoid. Then they turned on the current going through the solenoid and the aluminum ring went flying up and off the top of the solenoid. What law of electro-magnetics causes the force on the aluminum ring?
I'll start this with the right-hand grip rule for solenoids: "The coil (solenoid) is held in the right hand so that the fingers point in the direction of current through the windings. Then, the extended thumb points in the direction of the magnetic field" (which would be along the axis of the coil). The higher the current, the stronger the magnetic field produced. For your example, let us treat the aluminium ring as a circular coil. When the magnetic field is switched on, there is a change in magnetic flux (this increase in magnetic field) along the axis of the ring. According to Faraday's law, an induced current flows through the ring, whose direction is given by Lenz's law. This induced current in the ring flows in a direction such that it opposes the change in the magnetic field of the solenoid (the one which actually produces it). (The magnitude of the induced magnetic field is always less than the field in the solenoid.) Anyway, there is a repulsion, and with the repulsive force produced, the ring is thrown off the solenoid. This force always depends on the magnitude of $B$ in the solenoid.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
If the electron is point like, then what is the significance of the classical radius of the electron? What is the physical meaning/significance of the classical radius of the electron if we know from experiments that the electron is point like? Is there similarly a classical radius of the photon? The W and Z bosons?
The classical electron radius is the length scale at which the classical self-energy of the electron completely accounts for its mass. It tells you where the classical theory of a pointlike electron breaks down. The Compton wavelength tells you where quantum mechanics takes over. The ratio of the Compton wavelength to the classical electron radius is the reciprocal of the fine-structure constant, and the fine-structure constant tells you the strength of the successive quantum corrections. So the classical electron radius tells you how small the Compton wavelength of the electron could be, at fixed electron mass (so with the charge changing), before the quantum theory would be as bad as the classical theory. QED is well in the safe region, having a classical electron radius much smaller than the Compton wavelength, and is therefore well described by a quantum field theory. This argument suggests that the theory of a massive electron whose charge is so big that its Compton wavelength is smaller than its classical radius is inconsistent. This is the limit of large fine-structure constant, in which the theory of quantum electrodynamics is believed to be inconsistent, because of the Landau triviality issue.
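For concreteness (Gaussian-unit expressions; in SI just insert factors of $4\pi\epsilon_0$, and $\lambda_C$ here denotes the reduced Compton wavelength): $$r_e~=~\frac{e^2}{m_ec^2}~\approx~2.8\times10^{-15}~\mathrm{m},\qquad \lambda_C~=~\frac{\hbar}{m_ec}~\approx~3.9\times10^{-13}~\mathrm{m},\qquad \frac{r_e}{\lambda_C}~=~\frac{e^2}{\hbar c}~=~\alpha~\approx~\frac{1}{137}.$$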
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
How much does electromagnetic radiation contribute to dark matter? EM radiation has a relativistic mass (see for instance, Does a photon exert a gravitational pull?), and therefore exerts a gravitational pull. Intuitively it makes sense to include EM radiation itself in the galactic mass used to calculate rotation curves, but I've never actually seen that done before... So: if we were to sum up all the electromagnetic radiation present in a galaxy, what fraction of the dark matter would it account for?
The luminosity of the Galaxy is currently estimated to be around $5\times10^{36}$ W, and thus an integrated "mass loss" in the form of radiation of order $10^{-3} M_{\odot}$/yr. But how much radiation is present in the Galaxy? An order of magnitude estimate could be that the Galaxy (including the dark matter) is of order 100,000 light years in radius and so contains about 100,000 years' worth of mass in the form of radiation - i.e. about $100M_{\odot}$. If the CMB has a "mass" density of $5\times10^{-34}$ g/cm$^{3}$, the equivalent mass of CMB photons in the same volume is a few hundred $M_{\odot}$. These numbers are uncannily similar, and of course both are completely negligible in a gravitational sense, of order 1 part in $10^{10}$.
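A rough reproduction of those order-of-magnitude numbers (all inputs approximate; this is a sketch, not a careful calculation):

```python
import math

L_gal = 5e36            # W, Galactic luminosity as quoted above
c     = 3e8             # m/s
M_sun = 2e30            # kg
ly    = 9.46e15         # m
R_gal = 1e5 * ly        # m, ~100,000 light years

# Starlight currently in transit: luminosity * light-crossing time, converted to mass.
t_cross = R_gal / c                       # s, roughly 100,000 years
M_starlight = L_gal * t_cross / c**2      # kg
print(M_starlight / M_sun)                # ~1e2 solar masses

# CMB photons in the same volume.
rho_cmb = 5e-34 * 1e3                     # kg/m^3  (5e-34 g/cm^3)
V_gal = 4.0 / 3.0 * math.pi * R_gal**3    # m^3
M_cmb = rho_cmb * V_gal                   # kg
print(M_cmb / M_sun)                      # several hundred solar masses
```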
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Parity, how many dimensions to switch? Parity is described in Wikipedia as flipping of one dimension, or - in the special case of three dimensional physics - as flipping all of them. Is there any simple rule that generalises both for any dimension? Like: "Flip an odd number of dimensions."?
If you have two coordinate systems with the same origin, you can represent a (linear) transformation of coordinates from one to another as a matrix. This matrix has either positive or negative determinant. This sign of the determinant is what gives the transformation its parity. (All this applies to any number of dimensions, not just 3.) If you compose multiple linear transformations, the matrix of the final transformation is the matrix product of their matrices. And the determinant of the result will be positive if and only if an even number (including 0) of the original matrices have a negative determinant. So, you can categorize linear transformations using the sign of their determinant, using their parity. Some (like rotations, scaling or shearing) preserve parity when composed with another, others (like reflection) flip it. Knowing this, it's easy to see that flipping $n$ of the coordinates (regardless of the number of dimensions) produces a matrix with $-1$ appearing $n$ times on the diagonal, so the transformation has odd parity if and only if $n$ is odd.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is it possible for a black hole to form for an observer at spatial infinity? To my knowledge, if you calculate the coordinate time (time experienced by an observer at spatial infinity) it takes an infinite amount of time for an object to fall past the horizon of a Schwarzschild black hole. Doesn't this imply that it takes an infinite amount of coordinate time for a Schwarzschild black hole to form, since the last bit of in-falling matter won't ever cross the horizon as observed by someone at spatial infinity? If so, is it possible for other types of black holes (Kerr etc.) to form in finite coordinate time?
Pick some maximum visible wavelength of light (say, the radius of the solar system). And let's consider only initial source frequencies below the rate at which, say, one photon per year is emitted. In a finite time, all of the light leaving the matter distribution below this frequency, as observed by a distant observer, will be redshifted beyond your maximum wavelength. It will appear practically indistinguishable from a black hole. The actual plunge phase of this process will happen very, very quickly (think days, not centuries), so the object will go from emitting in the visible to essentially dark in a very short period of time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Why does heterodyne laser Doppler vibrometry require a modulating frequency shift? On the wikipedia article (and other texts such as Optical Inspections of Microsystems) for laser Doppler vibrometry, it states that a modulating frequency must be added such that the detector can measure the interference signal with frequency $f_b + f_d$. Why couldn't you remove the modulating frequency $f_b$ and interfere the two beams with frequencies $f_0$ and $f_0+f_d$ to produce a signal with frequency $f_d$ at the detector? I haven't been able to find any reasoning on the subject. My first idea was that the Doppler frequency might fall inside the laser's spectral linewidth and thus not be resolvable, but for a stabilized low-power CW laser (linewidth on the order of KHz) and a typical $f_d$ in the tens of MHz range I don't see this being an issue.
The modulating frequency shift provides a carrier (central band) frequency at $f_b$. From the Doppler effect, we know that if the object vibrates away from the source, the frequency shift $f_d$ is negative, and if it vibrates toward the source, $f_d$ is positive. Now, with the modulating frequency shift, the detected frequency is $f_b + f_d$, so the detector can discriminate the direction of the motion: toward the detector ($|f_b|+|f_d|$) or away from the detector ($|f_b|-|f_d|$). Without the central band frequency $f_b$, the detector can only see $|f_d|$ with no sign information, since it cannot detect a negative frequency. That arrangement is what is usually used for homodyne detection.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What is the significance of action? What is the physical interpretation of $$ \int_{t_1}^{t_2} (T -V) dt $$ where, $T$ is Kinetic Energy and $V$ is potential energy. How does it give trajectory?
The quantity $$ S= \int_{t_1}^{t_2} (T -V) dt $$ is known as the classical action. There exists a physical law (called the "principle of least action") which says that the true path an object takes is that which minimizes $S$. Check that it's true. I'll throw a ball straight up. When the ball leaves my hand its kinetic energy $T$ is high, and since nature prefers to minimize the integral $S$, the potential energy of the ball $V$ rises quickly to minimize the integrand $T-V$. The principle of least action, then, explains why balls go up when you throw them. So why don't baseballs keep going into the stratosphere to make $T-V$ as small as possible? They would need a lot of kinetic energy to do that! So much that it would outweigh the additional negative contribution from $-V$. It turns out that the true path is somewhere in-between rising high and going fast, which is what we observe. (Balls slow down as they go up.) Beyond this qualitative argument one may use Variational Calculus to derive Newton's laws from the principle of least action.
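To see the principle at work numerically, here is a minimal sketch (my own illustration, not part of the argument above): discretize $S=\int(T-V)\,dt$ for a ball thrown straight up with fixed endpoints, hand it to a generic optimizer, and check that the path of least action is the familiar parabola. All names and values are just for illustration.

```python
import numpy as np
from scipy.optimize import minimize

m, g = 1.0, 9.8
t = np.linspace(0.0, 1.0, 51)
dt = t[1] - t[0]
q0, q1 = 0.0, 0.0          # leaves the hand at t=0, returns to the same height at t=1

def action(q_interior):
    q = np.concatenate(([q0], q_interior, [q1]))
    v = np.diff(q) / dt                        # velocity on each small interval
    T = 0.5 * m * v**2                         # kinetic energy
    V = m * g * 0.5 * (q[:-1] + q[1:])         # potential energy at the interval midpoint
    return np.sum((T - V) * dt)                # discretized action S

res = minimize(action, np.zeros(len(t) - 2))   # vary the interior points only
q_best = np.concatenate(([q0], res.x, [q1]))

# Exact least-action path for these endpoints: q(t) = (g/2) t (1 - t)
print(np.max(np.abs(q_best - 0.5 * g * t * (1.0 - t))))   # small (optimizer tolerance)
```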
{ "language": "en", "url": "https://physics.stackexchange.com/questions/34946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can you count “collapses”? How many collapses in the observable universe? If that’s too hard, how many collapses in 100 cc’s of boiling water in one second? In biology, the very first robin that is scientifically described is preserved and called the “type robin”. The “type robin” for collapse was described by Einstein in 1905, and won him the Nobel prize. It is called the photoelectric effect. (Later collapse was formalized by von Neumann as a mathematical projection.) A very similar effect is the building up of an Airy circle in a telescope. Another is the point by point emergence of an interference pattern in a two-slit experiment, perhaps very slowly. In these examples, “collapses” can be counted. When one photon is absorbed and detected, one collapse happens. When a second photon is absorbed, a second collapse happens. The collapse is caused by the photon hitting the detector. (Or the collapse is caused by the silicon atom absorbing the photon.) Collapses like these are countable and it makes sense to ask and answer how many there are in a given 4-volume of spacetime. So, how many?
No, as what counts as a collapse depends on how you separate your system from the environment. Note that detecting photons is not a collapse of the photon wave function in von Neumann's sense, as the photon is afterwards not in a position eigenstate, but has completely disappeared. However, for certain simple systems, collapses (quantum jumps into eigenstates) can be experimentally observed and then counted. See the references in the section ''Are there quantum jumps?'' of Chapter A1: Fundamental concepts in quantum mechanics of my theoretical physics FAQ at http://arnold-neumaier.at/physfaq/physics-faq.html . See also the section ''Observable collapse'' of Chapter A4: The interpretation of quantum mechanics in this FAQ.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Black holes are in which state of matter? Wikipedia says, "A black hole grows by absorbing everything nearby, during its life-cycle. By absorbing other stars, objects, and by merging with other black holes, they could form supermassive black holes." * When two black holes come to merge, don't they rotate with an increasing angular velocity as they come closer and closer? (How does it differ from a neutron star? I mean, which is more powerful?) And it also says, "Inside of the event horizon, all paths bring the particle closer to the center of the black hole." * What happens to the objects that are absorbed into a black hole? Which state are they really in now? They would've already been plasma during their accretion spin. Would they be on the surface (deposited), or would they still be attracted and moved towards the center? If so, then the surface of a black hole couldn't be a solid.
Wikipedia says it's not a state of matter, but a property of spacetime. The gravitational singularity predicted by general relativity to exist at the center of a black hole is not a phase of matter; it is not a material object at all (although the mass-energy of matter contributed to its creation) but rather a property of spacetime at a location. It could be argued, of course, that all particles are properties of spacetime at a location,[13] leaving a half-note of controversy on the subject. http://en.wikipedia.org/wiki/State_of_matter Given the "half-note of controversy on the subject" and the valid objection to the suggestion that black holes are not states of matter, I propose we call it Singularium.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Why does observation collapse the wave function? In one of the first lectures on QM we are always taught about Young's experiment and how particles behave either as waves or as particles depending on whether or not they are being observed. I want to know what about observation causes this change?
The wavefunction is not a material object. It is not a wavy process in 3-dimensional space (as is seen as soon as you consider the wavefunction of two or more particles in the many-body problem). It is a mathematical object in 3n-dimensional configuration space, where n is the number of interacting particles. It essentially contains all the statistical information about a system that it is possible to have - kind of like a giant list. If you make a measurement, you effectively add a condition that the system obeys, so reducing the possibilities, and you are now considering a subset of the original list. This is what the collapse of the wavefunction is. This is why a measurement can collapse the wavefunction everywhere instantaneously rather than propagating out from the measurement location at the speed of light, as it would if the wavefunction were some sort of material thing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 7, "answer_id": 3 }
The physics behind The Great Flood The book of Genesis floats (pardon the pun) some interesting numbers when discussing the Great Flood. For example, it rained for 40 days and 40 nights, and at the end of that time, the entire planet was covered in water. I think we can deduce how much water that would have had to be, estimating that the highest peaks in the Himalayas were covered with water. (8,848 meters above sea level) My questions are, how fast would the rain have had to come to raise the ocean level that high in 40 days and nights, how much would the mass of the earth have changed for this event, and would that significantly alter the strength of gravity on earth?
A typical tropical storm drops about 40 inches of rain in 24 hours (sorry for the medieval units!). So 40 days/nights at that rate gives 1600 inches, or about 40 metres of water. If you want to cover even reasonable mountains, it has to rain a lot harder than that: roughly 200 times harder.
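Putting rough numbers on the question's other parts (a back-of-envelope sketch; all inputs rounded):

```python
import math

h       = 8848.0      # m, height of Everest above sea level
days    = 40.0
R_earth = 6.371e6     # m
M_earth = 5.97e24     # kg
rho_w   = 1000.0      # kg/m^3

rate = h / days
print(f"{rate:.0f} m of water per day, about {rate/24:.1f} m per hour")   # ~221 m/day, ~9 m/hour

# Mass of a water shell ~9 km deep over the whole globe (thin-shell approximation).
water_mass = rho_w * 4.0 * math.pi * R_earth**2 * h
print(f"{water_mass:.1e} kg = {100 * water_mass / M_earth:.2f}% of Earth's mass")
```

So the added mass is under a tenth of a percent of the Earth's mass, and the corresponding change in surface gravity would be of the same tiny order.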
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Einstein's equation: Black hole solution Let Einstein's equations satisfy $ R_{\mu \nu } = 0 $. Suppose we solve it numerically with the aid of a computer. Can we know from the numerical solution if there is a black hole in the solutions? For example, how can you know when you solve Einstein's equation if your solution will be a black hole or other particular non-smooth solution?
It isn't clear if you're asking how to identify horizons, singularities or both. Singularities are easy because the curvature becomes infinite, but horizons are harder. Usually to find horizons you study the null geodesics i.e. the paths taken by light rays, but you have to be careful about your choice of co-ordinates. As it happens there's a Living Reviews article on just this subject and this would be a good place to start.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why does inverting a song have no influence? I inverted the waveform of a given song and was wondering what will happen. The result is that it sounds the exact same way as before. I used Audacity and doublechecked if the wave-form really is inverted. The second thing I tried was: I removed the right channel, duplicated the left one and set the duplicated layer as right channel. This way I made sure that both channels are exactly the same. Then I inverted the second channel only. I thought that this would create some kind of anti-noise, but it didn't. Why is that?
Re your last question: what you've achieved is essentially the same as if you wire one of your speakers the wrong way round so it moves in antiphase to the other speaker. In principle there will be points equidistant from both speakers where the sound waves cancel and you get a quiet spot. However, as soon as you move closer to one speaker than the other you no longer get perfect cancellation. Plus, unless you're in an anechoic chamber you get sound reflections that mess up the cancellation. In practice it's very hard to get the sounds to cancel. This principle is used in active noise control to reduce noise, but it does require very precise control of the sound phase and volume. In the HiFi world connecting one speaker the wrong way round is something most of us have done at some time. It doesn't cancel the sound, but it does mess up the stereo imaging and make the whole thing sound rather muddy. This will be more pronounced the better the HiFi.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Increased mass from signals traveling close to the speed of light As you travel close to the speed of light, it is to my understanding you gain mass. Does this also apply when the brain sends electrical signals to the muscles? Do the signals (that are traveling at the speed of light) cause the body to weigh more?
John Rennie's answer is correct. However, even if the signals travelled through our nerves at relativistic velocity, transmitted by some particles, this would not increase our mass, because to give those particles the energy to reach that speed, we have to take it from somewhere else (perhaps by burning some calories). So the mass they gain by moving near the speed of light is in fact given to them by your body, and there is no net increase of mass. Let's discuss, instead of a human with signals travelling through nerves, a robot which has optical fiber wires in which information travels at the speed of light. To create photons to be transmitted through the fibers, the robot consumes some energy. This energy is (partially) preserved in the photon. So if the photon increases the robot's mass due to its own motion, it is the mass which was "burned" to emit the photon. So the robot's mass doesn't increase because of the photons travelling through its optical fiber wires. I could even say that it loses mass, because these processes release heat (energy) into the universe. But this variation is too small anyway.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A problem of missing energy when charging a second capacitor A capacitor is charged. It is then connected to an identical uncharged capacitor using superconducting wires. Each capacitor has 1/2 the charge of the original, so 1/4 the energy; so we only have 1/2 the energy we started with. What happened? My first thought was that the difference in energy is due to heat produced in the wire. It may be heat, or it may be that this is needed to keep equilibrium.
This is a Gedankenexperiment. If such an experiment results in a paradox, the experiment is set up the wrong way, and that is the answer here. There is no electrical connection that doesn't show an inductance, so we should construct the setup as simply as possible. There are two sets of metal plates of negligible extent, in parallel at a distance, forming two capacitors. There are two conductors of diameter zero connecting the plates of the two capacitors respectively. No more requisites are needed, as those ideal capacitors don't show inductance and those two wires show inductance but no capacitance. Now we formulate the boundary condition that the current is zero, one capacitor's voltage is zero and the other's is non-zero, and we can simply show that this is an LC oscillator, so we will see a sinusoidal conversion of electrostatic field energy to magnetic field energy and vice versa. We do not introduce a superconductor, nor an electromagnetic field, nor radiation, nor any kind of object that creates losses; none of that is part of said Gedankenexperiment. A problem arises when you introduce a switch to have this boundary condition "realized", i.e. to be able to charge one capacitor. A switch can only be closed by bringing together two contacts, which can only meet when there is an area. So if you bring the switch's contacts into proximity, they form a capacitor and a current will start to flow. As the capacitance of the contacts at their initial distance cannot be zero, and as the distance must reach zero to close the contact, the capacitance of this contact capacitor grows without bound, and the energy stored in it will be dissipated: the charged capacitor stores energy, and a short circuit is not consistent with that condition.
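Complementary to the ideal LC picture above, here is a small numerical sketch of the other standard resolution (my own illustration, with arbitrary component values): join the two capacitors through a series resistance $R$ and no inductance; then exactly half of the initial energy is dissipated, no matter how small $R$ is.

```python
import numpy as np
from scipy.integrate import solve_ivp

C, R, Q0 = 1.0e-6, 0.1, 1.0e-6         # farads, ohms, coulombs (illustrative values)

def dqdt(t, q):
    q1, q2 = q
    i = (q1 / C - q2 / C) / R          # current flowing from capacitor 1 into capacitor 2
    return [-i, i]

sol = solve_ivp(dqdt, [0.0, 50 * R * C], [Q0, 0.0], rtol=1e-10, atol=1e-15)
q1, q2 = sol.y[:, -1]

E0 = Q0**2 / (2 * C)                   # initial stored energy
E_final = (q1**2 + q2**2) / (2 * C)    # final stored energy
print(E_final / E0)                    # -> 0.5, independent of the value of R
```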
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
The difference between the operators $\delta$ and $d$ In classical mechanics, when talking about the principle of virtual work, what is the difference between $\delta r$ and $dr$? e.g. $W=\int \overrightarrow{F} \cdot \delta \overrightarrow{r} $ and $W=\int \overrightarrow{F} \cdot d \overrightarrow{r} $ . Why can one exchange the order of $\delta$ and $d$ in derivative calculations? e.g. $d\delta r=\delta d r$?
In classical mechanics, $\delta$ is equitemporal variation. $\delta$ and $\mathrm{d}$ are practically the same for constant constraint, but when the constraint is time-varying, they are different. For example, if a bead is constrained to a moving string, $\delta r$ will be along the string, while $\mathrm{d}r$ won't be. Conceptually, $\delta$ is variation of a functional, while $\mathrm{d}$ is differential of a function. But in calculations, just change $\mathrm{d}$ to $\delta$, and set $\delta t=0$ and you will get the correct result.
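Written out for a holonomic, possibly time-dependent constraint $\mathbf r=\mathbf r(q_1,\dots,q_n,t)$: $$d\mathbf r~=~\sum_i\frac{\partial\mathbf r}{\partial q_i}\,dq_i+\frac{\partial\mathbf r}{\partial t}\,dt,\qquad \delta\mathbf r~=~\sum_i\frac{\partial\mathbf r}{\partial q_i}\,\delta q_i,$$ which is exactly the "set $\delta t=0$" rule above: for the bead on a moving string, the $\partial\mathbf r/\partial t$ term is the motion of the string itself, so $\delta\mathbf r$ stays along the string while $d\mathbf r$ in general does not.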
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why don't rockets tip over when they launch? Rockets separate from the launch pad and supporting structures very early in flight. It seems like they should tip over once that happens. * *Why don't they tip over ? *Is it due to a well designed center of gravity or do they somehow achieve aerodynamic stabilization ?
Nowadays, rockets use a gimbaled thrust system. The rocket nozzles are gimbaled (an arrangement that allows an object, such as a ship's compass, to remain horizontal even as its support tips) so they can vector the thrust to direct the rocket. In a gimbaled thrust system, the exhaust nozzle of the rocket can be swivelled from side to side. As the nozzle is moved, the direction of the thrust is changed relative to the center of gravity of the rocket. Early rockets had vernier thrusters, which use small rocket engines on either side to control the attitude (vs altitude) of a rocket. Nowadays, they are common in most satellites. In the usual illustration of this, the middle rocket shows the normal flight configuration, in which the direction of thrust is along the center line of the rocket and through the center of gravity of the rocket. On the left one, the nozzle has been deflected to the left and the thrust line is now inclined to the center line at a gimbal angle $a$. As the thrust no longer passes through the center of gravity, a torque is generated about the center of gravity and the nose of the rocket turns to the left. If the nozzle is gimbaled back along the center line, the rocket will move to the left. On the right one, the nozzle has been deflected to the right and the nose is moved to the right. Wikipedia says: "In spacecraft propulsion, rocket engines are generally mounted on a pair of gimbals to allow a single engine to vector thrust about both the pitch and yaw axes; or sometimes just one axis is provided per engine. To control roll, twin engines with differential pitch or yaw control signals are used to provide torque about the vehicle's roll axis." The right and left gimbaling is what steers the rocket back onto its intended path, thereby maintaining its stability. This link gives a good explanation regarding the stability of rockets. This essay is also good, but it's somewhat long...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/35958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 2 }
Problem book in Quantum Mechanics with emphasis on physical(ly relevant) problems I am a second year undergraduate and studying quantum mechanics from Sakurai's 'Modern Quantum Mechanics'. Is it a good idea to solve problems from Sakurai, which are mostly mathematical in nature? I need a textbook that has physically relevant problems, maybe going even into condensed matter or field theory in its exercises. This would probably help me to appreciate and understand QM better. Sorry if this question is too localised, but I just had to post it.
There is no one ideal textbook or source of problems of any particular type, and even if you did find one, if you are at all serious about earning a degree and having a career in physics or engineering, you'll be best off doing all the problems you can find in all the textbooks you can get your hands on. Well, that might be absurd - there are too many books in the library written over the decades. But do keep at it, never resting just because you finished working some set of problems. Especially push yourself to do some problems that aren't the kind you prefer. Physics is not ever going to be easy. Besides, no matter how applied / theoretical / mathematical a text is, they're all relevant to physics. Physics progresses only by the interplay of experiment, applied physics, theoretical physics, and abstract math.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/36019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does the Sun's magnetic field continue to exist at such high temperatures? The temperature at the surface of the Sun is apparently well above 5000 C; I'm assuming the layers beneath the surface may be even hotter. At school, we learned that heating a metal beyond a certain temperature, specific to each metal, would demagnetize the magnet. How does the Sun's magnetic field continue to exist at such high temperatures?
The solar dynamo is responsible for the magnetic field. It has nothing to do with a magnet and thus is not affected by high temperature. The sun is made of plasma which flows at a velocity $V$. This flow creates an electric field $E = V\times B$; this electric field drives a current $j$ through Ohm's law, which in turn creates a magnetic field. The interaction between the current and the magnetic field creates a net force $j\times B$ that feeds back on the plasma velocity $V$, leading to a self-sustained magnetic field. More details can be found here: http://rsta.royalsocietypublishing.org/content/360/1801/2741.full.pdf http://www.scholarpedia.org/article/Solar_dynamo
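In equations, this feedback loop is usually summarized by the MHD induction equation (a standard magnetohydrodynamics result, not specific to any particular solar model): $$\frac{\partial \vec B}{\partial t}=\nabla\times(\vec V\times\vec B)+\eta\,\nabla^{2}\vec B,$$ where $\eta$ is the magnetic diffusivity. Dynamo action is possible when the inductive first term dominates the diffusive second one, i.e. at a large magnetic Reynolds number $R_m = VL/\eta$.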
{ "language": "en", "url": "https://physics.stackexchange.com/questions/36182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Reason for the Gaussian wave packet spreading I have recently read how the Gaussian wave packet spreads while propagating. see: http://en.wikipedia.org/wiki/Wave_packet#Gaussian_wavepackets_in_quantum_mechanics Though I understand the mathematics I don't understand the physical explanation behind it. Can you please explain?
Though I understand the mathematics I don't understand the physical explanation behind it. I'll take a stab at it. For a free particle, momentum eigenstates are also energy eigenstates and thus have a simple time dependence, a time dependent phase with a frequency proportional to the energy of the state. A free particle with a gaussian wave function is then a continuous superposition of momentum, and thus energy, eigenstates. Since the phase of the different momentum eigenstates evolve at a different rate, the way the various components constructively/destructively add evolves in time. When all the phases "line up" just so, we get the minimum uncertainty wave packet. As time evolves, the wave packet spreads since the phases evolve at different rates.
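To attach a formula to this picture (the standard free-particle Gaussian result, with $\sigma_0$ the initial position uncertainty): each momentum component $e^{ikx}$ evolves with the phase $e^{-i\hbar k^{2}t/2m}$, and because this frequency is quadratic in $k$ the components dephase, so that $$\sigma(t)=\sigma_0\sqrt{1+\left(\frac{\hbar t}{2m\sigma_0^{2}}\right)^{2}}.$$ The narrower the initial packet (and hence the broader its momentum spread), the faster it spreads.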
{ "language": "en", "url": "https://physics.stackexchange.com/questions/36430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
What are distinguishable and indistinguishable particles in statistical mechanics? What are distinguishable and indistinguishable particles in statistical mechanics? While learning different distributions in statistical mechanics I came across this doubt; Maxwell-Boltzmann distribution is used for solving distinguishable particle and Fermi-Dirac, Bose-Einstein for indistinguishable particles. What is the significance of these two terms in these distributions?
Suppose you have two distinct particles. If they are distinguishable (like a helium-3 atom and a helium-4 atom), then you can switch their positions and the system changes. If they are indistinguishable (like two protons), switching the two particles' positions makes no physical change, because we cannot tell whether the particles were switched at all. I haven't studied advanced quantum mechanics, so I can't give a better explanation, but Wikipedia can: http://en.wikipedia.org/wiki/Identical_particles#Distinguishing_between_particles. The number of permutations of the distinguishable particles is larger by a factor of $n!$ than that of indistinguishable ones, so quantities like entropy can change depending on whether we can distinguish the particles in our system. All three distributions can be derived from the grand partition function, but the derivations for the Bose-Einstein and Fermi-Dirac distributions use indistinguishability.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/37556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 2 }
$E$ and $H$ fields created by fiber optics? When an EM wave travels down a conductor, it creates an electric and magnetic field around (H) the wire and normal to (E) the wire. My question is, when light travels down an optical material such as fiber optics, is there a similar magnetic and electric field created around the fiber? My gut tells me there should be, but I can't reason it out. I'm looking for some help with this thought experiment.
If we think about it for a moment, and if we know that light is an E and H field, then your gut feeling is right -- there must be one. However, if you look at the end of an optical fiber with light in it (taking appropriate safety precautions!) you see the light in the fiber, not around it. So the field must be inside, not outside the fiber. To see why, we will turn to ray optics. Imagine an optical fiber stretched straight. A ray of light entering the fiber at parallel incidence simply goes down the fiber and emerges from the other end. If a ray enters at an angle, however, then it might escape through the wall of the fiber. But if the angle is small enough, it will bounce off the inside wall of the fiber due to total internal reflection. This is perhaps best illustrated by this image courtesy of Wikipedia. So the light, and therefore the E and H fields, are confined inside the fiber. If you know a little about total internal reflection, you might realize this is not entirely true -- there are evanescent fields on the outside of the fiber. "Evanescent" means they don't propagate away from the fiber, but just decay exponentially, so this is entirely dissimilar from the fields around a current traveling through a conductor. Also, this description doesn't hold for a single-mode fiber, which can't be analyzed with ray optics -- you need to consider modes instead. Nonetheless, this is a good intuitive way of looking at it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/37803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I figure out the probability of finding a particle between two barriers? Given a delta function $\alpha\delta(x+a)$ and an infinite energy potential barrier at $[0,\infty)$, calculate the scattered state, calculate the probability of reflection as a function of $\alpha$, momentum of the packet and energy. Also calculate the probability of finding the particle between the two barriers. I start by setting up the standard equations for the wave function: $$\begin{align}\psi_I &= Ae^{ikx}+Be^{-ikx} &&\text{when } x<-a, \\ \psi_{II} &= Ce^{ikx}+De^{-ikx} &&\text{when } -a<x<0\end{align}$$ The requirement for continuity at $x=-a$ means $$Ae^{-ika}+Be^{ika}=Ce^{-ika}+De^{ika}$$ Then the requirement for specific discontinuity of the derivative at $x=-a$ gives $$ik(-Ce^{-ika}+De^{ika}+Ae^{-ika}-Be^{ika}) = -\frac{2m\alpha}{\hbar^2}(Ae^{-ika}+Be^{ika})$$ At this point I set $A = 1$ (for a single wave packet) and set $D=0$ to calculate reflection and transmission probabilities. After a great deal of algebra I arrive at $$\begin{align}B &= \frac{\gamma e^{-ika}}{-\gamma e^{ika} - 2ike^{ika}} & C &= \frac{2e^{-ika}}{\gamma e^{-ika} - 2ike^{-ika}}\end{align}$$ (where $\gamma = -\frac{2m\alpha}{\hbar^2}$) and so reflection prob. $R=\frac{\gamma^2}{\gamma^2+4}$ and transmission prob. $T=\frac{4}{\gamma^2+4}$. Here's where I run into the trouble of figuring out the probability of finding the particle between the 2 barriers. Since the barrier at $0$ is infinite the only leak could be over the delta function barrier at $-a$. Would I want to use the previous conditions but this time set $A=1$ and $C=D$ due to the total reflection of the barrier at $0$ and then calculate $D^*D$?
The probability of finding a particle in an interval $a<x<b$ is given by the integral $$\int_a^b \psi^* \psi \, dx ,$$ assuming that your wave function is properly normalised. So in your case, you should calculate $$\frac{\int_{-a}^0 \psi_{II}^* \psi_{II} \,dx}{\int_{-\infty}^{-a} \psi_{I}^* \psi_{I} \,dx+\int_{-a}^0 \psi_{II}^* \psi_{II} \,dx} . $$ The numerator is the region you are interested in, the denominator takes care of the normalisation so that the probability will come out between 0 and 1. I'll leave it to you to calculate the integrals.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/37857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Mechanics Energy (Calculus) A particle moves with force $$F(x) = -kx +\frac{kx^3}{A^2}$$ where $k$ and $A$ are positive constants. If the kinetic energy $KE_0$ at $x = 0$ is $T_0$, what is the total energy of the system? $$ \Delta\ KE(x) + \Delta\ U(x) = 0$$ $$F(x) = -\frac{dU}{dx} = m\frac{dv}{dt} = m v\frac{dv}{dx}$$ Integrating to get $U(x)$ and $\frac{1}{2}mv^2$ I get $$\Delta\ U(x) = \frac{kx^2}{2} - \frac{kx^4}{4A^2}$$ $$\Delta\ KE(x) = -\frac{kx^2}{2} + \frac{kx^4}{4A^2}$$ Which makes sense. But how do I find the function $KE(x)$ where $KE(0) = T_0$? Do I even need to? The total energy in the system is $T_0$, correct? Also, a kind of side note: what is really confusing me is when I should add limits of integration and under what circumstances I should just use an indefinite integral.
The question is a little tricky as stated... Because your force is conservative, it can be written as minus the gradient of a scalar potential field. But the potential field, i.e. the potential energy, is defined only up to a constant. That is, your potential energy field is $$U(x) = \frac{kx^2}{2}-\frac{kx^4}{4A^2}+U_0,$$ for any value of $U_0$. This $U_0$ comes up because we have done an indefinite integral of the force field to find it. So in all purity, the total energy of your particle is $T_0+U_0$, so it can be whatever you want, because you can choose $U_0$ freely... For the gravitational potential the convention is to place zero energy at infinity; for potentials such as yours it makes more sense to have zero energy at the origin. This is equivalent to choosing $U_0=0$. Which is a very reasonable thing to do, but in no way mandatory. The full definite-integral calculation to arrive at conservation of energy from $F = ma$ is as follows, $$ma = F(x)$$ $$m\frac{dv}{dx}v=F(x)$$ $$mvdv = F(x)dx$$ $$\int_{v_0}^{v_1}{mvdv}=\int_{x_0}^{x_1}{F(x)dx}$$ $$\left.\frac{1}{2}mv^2\right|_{v_0}^{v_1} = \left. (-\frac{kx^2}{2}+\frac{kx^4}{4A^2})\right|_{x_0}^{x_1}$$ $$\frac{1}{2}mv_1^2 - \frac{1}{2}mv_0^2 = -\frac{kx_1^2}{2}+\frac{kx_1^4}{4A^2} + \frac{kx_0^2}{2}-\frac{kx_0^4}{4A^2}$$ $$\frac{1}{2}mv_1^2 +\frac{kx_1^2}{2}-\frac{kx_1^4}{4A^2} = \frac{1}{2}mv_0^2+ \frac{kx_0^2}{2}-\frac{kx_0^4}{4A^2}$$ $$T_1 +U_1 = T_0+U_0$$
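If you want to convince yourself numerically that $T+U$ stays constant for this force, here is a minimal sketch (Python with NumPy/SciPy; the values of $m$, $k$, $A$ and the initial conditions are made up purely for illustration):

```python
# Numerical sanity check: integrate m x'' = -k x + k x^3 / A^2 and watch T + U.
import numpy as np
from scipy.integrate import solve_ivp

m, k, A = 1.0, 2.0, 3.0      # illustrative values only
x0, v0 = 0.0, 1.5            # start at x = 0, so T0 = 0.5*m*v0**2

def rhs(t, y):
    x, v = y
    return [v, (-k * x + k * x**3 / A**2) / m]   # a = F/m

sol = solve_ivp(rhs, (0.0, 10.0), [x0, v0], rtol=1e-10, atol=1e-12, max_step=0.01)

x, v = sol.y
T = 0.5 * m * v**2
U = 0.5 * k * x**2 - k * x**4 / (4 * A**2)   # choosing U_0 = 0
E = T + U

print("E(0) =", E[0], " max drift =", np.max(np.abs(E - E[0])))  # drift should be tiny
```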
{ "language": "en", "url": "https://physics.stackexchange.com/questions/37961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What happens when we cut objects? What is the role of the molecular bonds in the process of cutting something? What is the role of the Pauli exclusion principle, responsible for the "hardness" of matter? Moreover, is all the energy produced by the break of bonds transformed into heat?
First of all, cutting is the phenomenon of applying increasing or constant (high) pressure over a small area of an object, where the applied stress (both compression and shearing) overcomes the ultimate tensile strength of the object at that particular area. (Friction between solids also plays a major role here.) It's all about elasticity... And for your last part regarding heat emission, only a negligible amount of heat is generated, because most of the heat produced is due to friction rather than the breaking of intermolecular bonds. And, as the forces around a particular area are very small, providing external energy such as heat can break enough intermolecular forces, or other bonds, to assist cutting.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Is Heisenberg's matrix mechanics or Schrödinger's wave mechanics more preferred? Which quantum mechanics formulation is popular: Schrödinger's wave mechanics or Heisenberg's matrix mechanics? I find this extremely confusing: Some post-quantum mechanics textbooks seem to prefer wave mechanics version, while quantum mechanics textbooks themselves seem to prefer matrix mechanics more (as most formulations are given in matrix mechanics formulation.) So, which one is more preferred? Add: also, how is generalized matrix mechanics different from matrix mechanics?
Current standard textbooks teach a mixture of both wave mechanics and matrix mechanics, although the emphasis is put more on the wave formulation because it is much easier for most quantum problems. Matrix mechanics is simpler when dealing with the harmonic oscillator.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
2D Ghost CFT and two-point functions For some reason I am suddenly confused over something which should be quite elementary. In two-dimensional CFTs the two-point functions of quasi-primary fields are fixed by global $SL(2,\mathbb C)/\mathbb Z_2$ invariance to have the form $$\langle \phi_i(z)\phi_j(w)\rangle = \frac{d_{ij}}{(z-w)^{2h_i}}\delta_{h_i,h_j}.$$ So a necessary requirement for a non-vanishing two-point function is $h_i = h_j$. Now consider the Ghost System which contains the two primary fields $b(z)$ and $c(z)$ with the OPEs $$T(z)b(w)\sim \frac{\lambda}{(z-w)^2}b(w) + \frac 1{z-w}\partial b(w),$$ $$T(z)c(w)\sim \frac{1-\lambda}{(z-w)^2}c(w) + \frac 1{z-w}\partial c(w).$$ These primary fields clearly don't have the same conformal weight for generic $\lambda$, $h_b\neq h_c$. However their two-point function is $$\langle c(z)b(w)\rangle = \frac 1{z-w}.$$ Why isn't this forced to be zero? Am I missing something very trivial, or are there any subtleties here?
The answer seems to be that, technically at least, the two point function $\langle b(z) c(w) \rangle$ does vanish on the sphere. In the context of the standard $bc$ ghost system that shows up in bosonic string theory, the simplest nonzero correlation function on the sphere that involves both $b$ and $c$ is $$ \langle c(z_1) c(z_2) c(z_3) c(z_4) b(w) \rangle. $$ This correlation function is a special case of (6.3.5) in Polchinski volume 1 and evaluates to $$ \frac{(z_1-z_2)(z_1-z_3)(z_2-z_3)(z_1-z_4)(z_2-z_4)(z_3-z_4)}{(z_1-w)(z_2-w)(z_3-w)(z_4-w)}. $$ The Kronecker delta function $\delta_{h_i h_j}$ that shows up in the result for $\langle \phi_i(z) \phi_j(w) \rangle$ described in OP's first formula can be argued for based on the transformations of the fields under inversion $z \to 1/z$. While $\langle b(z) c(w) \rangle$ cannot transform properly, as OP has noted, it would be reassuring to see that this five-point function does transform in the right way. Indeed, the conformal factor $$ \frac{z_1^2 z_2^2 z_3^2 z_4^2}{w^4} $$ takes precisely the right form to convert this correlation function into one involving only $1/z_i$ and $1/w$. (Note I am using the $SL(2, R)$ invariant vacuum state to compute these correlation functions. There is also the option of using so-called Q-vacua, some of which are obtained by acting with $c$ operators at the poles of the sphere. In this case, translation invariance is lost, and the correlation function $\langle \phi_i(z) \phi_j(w) \rangle$ can depend on a further dimensionless ratio $z/w$.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Breaking of conformal symmetry I am wondering something about the breaking of conformal symmetry: I know that it can be broken at the quantum level, anomalously, but I never encountered or heard about a model where it is broken "à la Higgs" with a potential whose true minimum would spoil this invariance (e.g. making appear a particular energy scale). I guess we would then get some Goldstone bosons, would there be something special about them?
In the following paper, we found that breaking conformal symmetry gives rise to a massive graviton. Here is the link to the paper: https://doi.org/10.3389/fphy.2022.867766 Thank you.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why is there a factor of 1/2 in the interaction energy of an induced dipole with the field that induces it? In this paper, there's the following sentence: ...and the factor 1/2 takes into account that the dipole moment is an induced, not a permanent one. Without any further explanation. I looked through Griffiths' electrodynamics to see if this was a standard sort of thing, but couldn't find anything. I was thinking it might be because the field of the dipole itself opposes the inducing field, but that doesn't quite seem right for some reason.
Because the black area is half the box below. To explain: move the dipole from an area of no field to an area of field strength E. As you do, there's a force proportional to the dipole moment and to the gradient of E. For a fixed dipole, this force depends only on the gradient (horizontal dashed line). But for an induced dipole, the dipole moment depends on E and grows linearly as you move from zero field to full strength, so on average it is only half as strong during that movement (solid diagonal line).
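To put the picture into a formula (a sketch of the standard argument; here $\alpha$ is the polarizability, so the induced moment is $p=\alpha E$): the work done by the field while the molecule moves from the field-free region to field strength $E$ is $$W=\int p\,\frac{\mathrm{d}E}{\mathrm{d}x}\,\mathrm{d}x=\int_0^{E}\alpha E'\,\mathrm{d}E'=\tfrac12\,\alpha E^{2}=\tfrac12\,pE,$$ compared with $pE$ for a permanent dipole of the same final moment — hence the interaction energy $U=-\tfrac12\,\alpha E^{2}$ carries the factor $1/2$.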
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How to make something charged using electricity? If I had a piece of metal and I wanted it to be negatively charged, how can I do that?
Electrostatic induction is good to use. It's the phenomenon of inducing electric charges without any direct contact with a charge. This principle is used in capacitors. Rubbing materials together also produces static electricity: insulators can be charged by rubbing, but metals are usually charged with the help of insulators! When you bring a charged plastic or glass rod (say, negatively charged) near a metal piece, positive charges, which experience an attractive Coulomb force, move towards the end of the metal nearer to the rod, while negative charges move to the other end due to the repulsive force. If you ground the metal piece, the negative charges flow to ground while the positive charges stay at the near end (held by the attractive force)... After removing the ground, the positive charges are distributed throughout the metal piece. Edit: After charging insulators, charge can be transferred from the insulator to your metal piece by simply touching it! (Thanks to @John) Also, the Van de Graaff generator is based on both electrostatic induction and corona discharge (action of points) to produce high voltages of the order of $10^7\,V$. But it's mostly been used to accelerate ions for nuclear disintegration purposes rather than to charge metal pieces!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How do electrons repel? I understand the basics, protons have a positive charge, neutrons have no charge, and the electron has a negative charge. But looking at the lines of force from a proton, they flow outward and push each other away. But, the electrons flow inward or towards themselves. How does that make them repel? Wouldn't they be more neutral towards themselves? I understand that the lines of force cannot cross as well. I'm sure this is an easy answer for someone.
The lines of force represent the direction a free positive charge would move if one was present. The reason the lines of force are in the outward direction from a proton is because a proton will repel a proton, and thus move outwards. The reason the lines of force are inwards for an electron, is because an electron would attract a proton, thus the proton would move inwards. So to summarize, force lines are defined to be in the direction a POSITIVE charge would move, if one were present. This is just a human convention/definition. An electron has the opposite charge properties to a positive charge, and a free electron will move in the direction that opposes the force lines. So an electron will move away from another negative charge, and towards a positive charge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Single plane Ring system Possible Duplicate: Why are our planets in the solar system all on the same disc/plane/layer? I've noticed this in many pictures: planets are shown with a single ring around them (in some particular plane). Taking an extreme case... As gravity should act in all directions, such planets should be covered with asteroids all around them, not just a single ring in a single plane! So, my question is: why don't planets have many rings instead of just a single ring?
Because of the rotation of the planet. E.g. our galaxy rotates, so it also has a subtle elliptical appearance. It is the same with planets orbiting the Sun, but on a larger scale. By the way, if the planet did not rotate at all (which is in reality impossible in space), the rocks/ring material would be distributed "everywhere equally". However, that is impossible in real space, where everything is rotating in some way because of the forces of other, larger objects, and the material must come to the planet from somewhere, etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does a Faraday cage block magnetic field? I want to block the magnetic field of a very strong magnet, can I put it inside a Faraday cage to block its magnetic field?
If you want to block a magnetic field, a faraday cage made of mesh is a bad choice. You would need a cage made of solid metal sheets. A thick enough sheet would completely block the field on the other side. This site has a calculator that draws the magnetic field across a metal sheet: https://www.kjmagnetics.com/thickness.calculator.asp (it is from a company that sells magnets).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 0 }
How is $ g^2 N$ held fixed in the large N limit? In 't Hooft's original paper: http://igitur-archive.library.uu.nl/phys/2005-0622-152933/14055.pdf he takes $N \rightarrow \infty $ while $ g^2 N$ is held fixed. Is this just a toy model? Or is there some reason to believe that $g^2$ goes like $\frac{1}{N}$ for large coupling? Thanks a lot.
The parameters $g^2$ and $N$ are independent of each other so it is meaningless to ask whether $g^2$ goes like $1/N$ for large or small coupling "in general". In general, it may go but it doesn't have to go. But if it does go, i.e. if $g^2 N$ is kept fixed, then one may say new interesting things. If one introduces a new symbol $\lambda = g^2 N$ for the 't Hooft coupling, the condition simply says that $\lambda$ is kept fixed – which is natural, especially in the dual stringy interpretation of the same physics. I need to dedicate a special paragraph to an error in your question which is probably not just a typo, due to the complicated work you had to go through to write the fractions. What is kept fixed in 't Hooft's limit isn't $g^2/N$; it is $g^2 N$. It's the product, not the ratio! So $g^2$ indeed goes like $1/N$ and not $N$ if $N$ is sent to infinity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Macroscopic quantum gravity phenomena A theory of quantum gravity is said to be needed when quantum and gravitational effects are strong at the same time i.e. at black hole singularities and at the big bang. This also makes it difficult to test quantum gravity. But what about testing macroscopic quantum phenomena in different gravity regimes like flying a superconductor or liquid helium into Earth orbit and back again - would you expect gravitational time dilation or high-g accelerations to alter macroscopic quantum behaviour in a way that could test quantum gravity theories ?
No, quantum gravitational effects operate at too small a length scale to affect the sort of phenomena mentioned in the Wikipedia article you cite. One possible effect is that spacetime may not be even on the Planck scale and this may very slightly change the propagation of light. This is only measurable over enormous distances like the radius of the observable universe, but it's been suggested that it might be measurable in phenomena like gamma ray bursts. See for example this paper, or Google for lots of related articles. However, at the moment there is no unambiguous evidence for such effects, and not all theorists are convinced they exist anyway.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Is "Egyptian Year" the same as a modern sidereal year? Copernicus uses the term "Egyptian Year" throughout his discussions of the movements of the Earth, and of his and other models of the movements of the planets; but is unclear from his text, or from the general definitions I've found, what this corresponds to in modern astronomical terms. What, precisely, is an "Egyptian Year"? Is it identical with a modern sidereal year; if not, what are the correct conversions between the two?
Egyptian year is not the same as sidereal year. Egyptian year is exactly 365 days, whereas sidereal year is approximately 365.256 days.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/38923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating car's acceleration from change in angle of hanging object? The question essentially is based on a situation like this: A car has a small object hung from the ceiling on a string (hanging straight down, i.e. at 0 degrees from the vertical). The car is accelerating and the object is now hanging at a 30 degree angle (from the vertical). How would I figure out how much the car is accelerating? PS - This is homework but I'm stuck and would appreciate any advice. Thanks. Edit: changed angle from 45 to 30.
This problem can be tackled using the equivalence principle. This basically means that the accelerating car can be thought of from the perspective of the hanging object, as a horizontal gravitational field, with an acceleration equal to that of the car. Therefore we effectively have two forces acting on the object. One downwards of $mg$, the other horizontally of $ma$, where a is the acceleration of the car. The angle of the resultant force is given by $\tan \theta = \frac{m a}{m g} = \frac{a}{g}$ Therefore $a = g \tan \theta$
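For the numbers in the question, assuming the $30^{\circ}$ is measured from the vertical (the string's rest direction): $a = g\tan 30^{\circ} \approx 9.8\ \mathrm{m/s^2} \times 0.577 \approx 5.7\ \mathrm{m/s^2}$.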
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Schrödinger and thermodynamics I heard that Schrödinger pointed out that (classical/statistical) thermodynamics is impaired by logical inconsistencies and conceptual ambiguities. I am not sure why he said this and what he is talking about. Can anyone point some direction to study what he said?  
I think you need to be a little more specific as to the question. The inconsistency of classical mechanics with atomic physics is found by the attempted classical analysis of electron orbital behavior. If the electron orbit is modeled classically, then the electron should give off radiation in accordance with the Larmor formula. It is this inconsistency that led Bohr to build his postulates of quantum mechanics in his model of the hydrogen atom. Schrodinger published his improvement in 1926 in four parts: Quantisation as a Problem of Proper Values, Part I, Part II, Part III, Part IV. You can also find his discussion in Physical Review in 1926. An additional good paper is the translation of the "Schrodinger's Cat Paradox".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is predicted to happen for electron beams in the Stern-Gerlach experiment? The Stern–Gerlach experiment has been carried out for silver and hydrogen atoms, with the result that the beams are deflected discretely rather than continuously by an inhomogenous magnetic field. What is theoretically predicted to happen for electron beams?
The splitting of the beam by the Stern-Gerlach apparatus is one of the great myths of modern physics. The original experiment wasn't done with a pencil-shaped beam but with a fan-shaped beam. While it is true that the fan-shaped beam is split in two, the case of the pencil-shaped beam is quite different. I analyze it in this blog posting. The actual result for the pencil beam is a donut shape. I don't believe there is any way to get just two dots on the collection screen the way people always describe it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Why the kilogram is not defined? Possible Duplicate: Why do we still not have an exact definition for a kilogram? I was thinking about SI units. I found the following definitions for the base units: * *Meter: distance covered by light in 1/299792458 seconds. *Second: duration of 9192631770 periods of the radiation corresponding to the transition between two levels of the fundamental state of 133-Cesium. *Kelvin: 1/273.16 of the thermodynamic temperature of the water triple point *Mole: Number of atoms in 0.012 kg of carbon 12 *Candela: [...] *Ampere: [...] I searched for the definition of the kilogram and I found only this one: Mass of the international prototype of the kilogram. Why such a definition? Is it impossible to define the kilogram well? Why? From my point of view the mass of the prototype will change a little with time. Is this effect considered? And what about the definition of the mole: it is based on kilograms, so is the mole definition also "impossible"?
It is possible to define the kilogram, but right now the accuracy would be worse than for the prototype. And yes, the mass of the prototype is changing a bit, so efforts are being made to introduce a new definition of the kilogram.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Would a spin-2 particle necessarily have to be a graviton? I'm reading often that a possible reason to explain why the Nobel committee is coping out from making the physics Nobel related to the higgs could be among other things the fact that the spin of the new particle has not yet been definitively determined, it could still be 0 or 2. This makes me wonder if the spin would (very very surprisingly!) finally be discovered to be 2, this then necessarily would mean that the particle has to be a graviton? Or could there hypothetically be other spin-2 particles? If not, why not and if there indeed exist other possibilities what would they be?
There are theoretical arguments that a massless spin-2 particle has to be a graviton. The basic idea is that massless particles have to couple to conserved currents, and the only available one is the stress-energy tensor, which is the source for gravity. See this answer for more detail. However, the particle discovered at LHC this year has a mass of 125 GeV, so none of these arguments apply. It would be a great surprise if this particle did not have spin 0. But it is theoretically possible. One can get massive spin 2 particles as bound states, or in theories with infinite towers of higher spin particles.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 0 }
Why would Klein-Gordon describe spin-0 scalar field while Dirac describe spin-1/2? The derivation of both the Klein-Gordon equation and the Dirac equation is due to the need of quantum mechanics (or, to say it more correctly, quantum field theory) to adhere to special relativity. However, except that Klein-Gordon has the negative probability issue, I do not see a difference between these two. What makes Klein-Gordon describe a scalar field while Dirac describes a spin-1/2 field? Edit: oops. Klein-Gordon does not have a non-locality issue. Sorry for writing wrongly. Edit: Can anyone tell me in detail why the $\psi$ field is scalar in Klein-Gordon while $\psi$ in Dirac is spin-1/2? I mean, if a solution to Dirac is a solution to Klein-Gordon, how does this make sense?
Spin is a property of the representation of the rotation group $SO(3)$ that describes how a field transforms under a rotation. This can be worked out for each kind of field or field equation. The Klein-Gordon field gives a spin 0 representation, while the Dirac equation gives two spin 1/2 representations (which merge to a single representation if one also accounts for discrete symmetries). The components of every free field satisfy the Klein-Gordon equation, irrespective of their spin. In particular, every component of a solution of the Dirac equation solves the Klein-Gordon equation. Indeed, the Klein-Gordon equation only expresses the mass shell constraint and nothing else. Spin comes in when one looks at what happens to the components. A rotation (and more generally a Lorentz transformation) mixes the components of the Dirac field (or any other field not composed of spin 0 fields only), while on a $k$-component spin 0 field, it will transform each component separately. In general, a Lorentz transformation given as a $4\times 4$ matrix $\Lambda$ changes a $k$-component field $F(x)$ into $F_\Lambda(\Lambda x)$, where $F_\Lambda=D(\Lambda)F$ with a $k\times k$ matrix $D(\Lambda)$ that depends on the representation. The components are spin 0 fields if and only if $D(\Lambda)$ is always the identity.
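To make the contrast concrete (a standard example, written in the Dirac basis of the $\gamma$-matrices, so the exact sign depends on conventions): for a rotation by an angle $\theta$ about the $z$-axis a scalar field has $D(\Lambda)=1$, while a Dirac field has $$D(\Lambda)=\exp\!\left(-\tfrac{i}{2}\theta\,\Sigma^{3}\right),\qquad \Sigma^{3}=\begin{pmatrix}\sigma^{3}&0\\0&\sigma^{3}\end{pmatrix},$$ so the components pick up phases $e^{\mp i\theta/2}$ and the whole field changes sign after a full $2\pi$ turn — exactly the spin-1/2 behaviour, while the scalar is untouched.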
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 0 }
Why do physicists think that the electron is an elementary particle? When we first discovered the proton and neutron, I'm sure scientists didn't think that it was made up of quark arrangements, but then we figured they could be and experiments proved that they were. So, what is it about the electron that leads us to believe that it isn't a composite particle? What evidence do we have to suggest that it it isn't?
Why do physicists think that the electron is an elementary particle? Because: 1) The standard model considers the leptons elementary particles. As it describes very successfully most of the data gathered by particle physics studies, there is no reason to question the hypothesis of elementary leptons. 2) Experiments testing for compositeness of leptons give only lower limits for the scale of the appearance of compositeness. See for example this recent publication from LHC data for electrons and muons. The exclusion region in the compositeness scale $\Lambda$ and excited lepton mass $M$ parameter space is extended beyond previously established limits. For $\Lambda = M$, excited lepton masses are excluded below 1070 GeV/c$^2$ for $e^*$ and 1090 GeV/c$^2$ for $\mu^*$ at the 95% confidence level. Compositeness is completely unpopular with theorists, but a number of experimentalists keep on testing for it when new data are available, which is as it should be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Calculating force required to stop bungee jumper Given that: * *bungee jumper weighs 700N *jumps off from a height of 36m *needs to stop safely at 32m (4m above ground) *unstretched length of bungee cord is 25m What's the force required to stop the jumper (4m above ground)? First, what equation do I use? $F = ma$? But even if $a = 0$, $v$ may not equal 0 (still moving). $W = F \Delta x$? Can I say that if $\Delta x = 0$ the object is not moving? Even then, I don't know the work ... I tried doing: $-32 = \frac{1}{2} (-9.8) t^2$ $t = 2.556s$ Then I'm stuck ... I know $t$ but I can't seem to use any other equations... $v_f, v_i =0 $
Well, I guess start by forgetting that the bungee is a spring, and would apply a non-constant force. But we'll ignore that first and imagine the bungee applies a constant force. You jump off the bridge at 36m, plunge 25m to 11m from the ground, which leaves you 7m to come to a stop at 4m above the ground. So, we can use the equations of constant linear motion to compute how fast you're going the moment the bungee tightens up: $v^2 = v_0^2 + 2a(r-r_0)$ So, in our example you'll be heading downwards at $\sqrt{0+2\times 9.8\frac{m}{s^2}\times(25m)} = 22.14\frac{m}{s}$ Then, assuming the bungee applies a constant force, we again use the initial equation to figure out the rate of deceleration. $0 = (22.14 \frac{m}{s})^2 + 2a(7m) \Rightarrow a= \frac{(22.14 \frac{m}{s})^2}{2\times 7m} = 35 \frac{m}{s^2}$ Which, not so surprisingly, works out to be the same as $g(25m/7m)$, or $g$ times the falling distance divided by the stopping distance. Now that you know your deceleration, multiply that by your mass and you've got your force. However, bungees actually don't apply a constant force, they apply a fairly linear force relative to their displacement for most of their stretchy range. You'll have to use Hooke's Law, the formula for the spring constant $F=-kx$, to more accurately model the system.
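Here is the same constant-force estimate written out numerically (a sketch only — constant deceleration, no air drag; note that the cord itself must also carry the jumper's weight while decelerating him):

```python
# Reproduce the constant-deceleration estimate for the bungee problem.
g = 9.8                          # m/s^2
W = 700.0                        # jumper's weight, N
m = W / g                        # ~71.4 kg

free_fall = 25.0                 # m fallen before the cord goes taut
stop_dist = 36.0 - 25.0 - 4.0    # 7 m left to stop at 4 m above the ground

v = (2 * g * free_fall) ** 0.5   # ~22.1 m/s when the cord tightens
a = v**2 / (2 * stop_dist)       # 35 m/s^2 constant deceleration
F_net = m * a                    # 2500 N net upward force
F_cord = m * (a + g)             # ~3200 N: the cord also has to support the weight

print(v, a, F_net, F_cord)
```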
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are some of the best books on complex systems and emergence? I'm rather interested in getting my feet wet at the interface of complex systems and emergence. Can anybody give me references to some good books on these topics? I'm looking for very introductory technical books.
This answer contains some additional resources that may be useful. Please note that answers which simply list resources but provide no details are strongly discouraged by the site's policy on resource recommendation questions. This answer is left here to contain additional links that do not yet have commentary. * *James P. Sethna is one of the leading figures in this area. You can refer to his book without any doubt. http://www.lassp.cornell.edu/sethna/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/39712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 3 }
How robust is Kramers degeneracy in real material? Kramers' theorem relies on an odd total number of electrons. In reality, the total number of electrons is about $10^{23}$. Can those electrons be so smart as to count the total number precisely and decide whether or not to form Kramers doublets?
Remember that for crystalline materials, we usually assume an infinite number of particles, and that electrons do not interact. This allows us to Fourier transform and see that each pseudo-momentum $k$ is independent --- essentially to consider a single unit cell. In this context, Kramers' theorem states that if there is an odd number of electrons per unit cell (we ignore protons and neutrons if we don't care about hyperfine structure; otherwise we would not), and assuming time-reversal invariance, there is (at least) a two-fold degeneracy of all energy levels. Indeed, this may be seen as the basis of topological insulators.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/40895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Distribution of charge on a hollow metal sphere A hollow metal sphere is electrically neutral (no excess charge). A small amount of negative charge is suddenly placed at one point P on this metal sphere. If we check on this excess negative charge a few seconds later we will find one of the following possibilities: (a) All of the excess charge remains right around P. (b) The excess charge has distributed itself evenly over the outside surface of the sphere. (c) The excess charge is evenly distributed over the inside and outside surface. (d) Most of the charge is still at point P, but some will have spread over the sphere. (e) There will be no excess charge left. Which one is correct and why? I guess it is some kind of electrostatic induction phenomenon going on. Am I right? I understand that excess charge is distributed over a hollow sphere and that negative and positive charges end up on opposite sides, but I don't know which one, positive or negative, goes to the inside surface.
B is correct, but this is due to Coulomb's law, the fact that the force between charges decays as the inverse square of the distance. It is not due to the mere fact that like charges repel, as this doesn't explain why all the charges would end up at the surface. Coulomb's law can be shown to be equivalent to Gauss' law, which says that the total charge contained inside a closed surface divided by $\varepsilon_0$ is equal to the integral of the component of the electric field along the outward normal of the surface over the closed surface. A charge inside the metal will experience a total force proportional to the electric field due to all the other charges. The charge distribution can thus only be in equilibrium if the total electric field inside the metal is zero. Gauss' law then implies that any surface contained within the metal encloses zero charge; therefore there cannot be a net charge anywhere inside the metal when the charge distribution has settled down. We can thus conclude that all of the charge must reside at the surfaces of the sphere. If we now apply Gauss' law by taking a spherical closed surface that runs inside the metal, we find that the total charge contained inside that surface is zero. This means that the charge cannot reside on the inner surface.
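For completeness, the statement used above in symbols (nothing beyond standard electrostatics): $$\oint_S \vec E\cdot \mathrm{d}\vec A=\frac{Q_{\mathrm{enc}}}{\varepsilon_0},$$ so taking $S$ to lie entirely within the metal, where $\vec E=0$ in equilibrium, forces $Q_{\mathrm{enc}}=0$; all the excess charge is therefore pushed to the outer surface.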
{ "language": "en", "url": "https://physics.stackexchange.com/questions/40993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Why are Euler's equations of motion coupled? Physical explanation I have a problem with one of my study questions for an oral exam: Euler’s equation of motion around the $z$ axis in two dimensions is $I_z\dot{\omega}_z = M_z$, whereas it in three dimensions is $I_z\dot{\omega}_z =-(I_y-I_x)\omega_x\omega_y+M_z$, assuming that the $xyz$ coordinate systems is aligned with the principal axis. Why does Euler’s equation of motion for axis $z$ contain the rotational velocities for axes $x$ and $y$? How can one explain this physically? I mean I can derive Euler's equation of motion, but how can I illustrate that the angular velocities are changing in 3 dimensions?
As explained on Wikipedia, the nice tensor form of the equations is $$ \mathbf{I} \cdot \dot{\boldsymbol\omega} + \boldsymbol\omega \times \left( \mathbf{I} \cdot \boldsymbol\omega \right) = \mathbf{M} $$ This reduces to your equations if one diagonalizes the tensor of the moment of inertia $I$ and labels the diagonal entries etc. The three components are mixed with each other because quantities like $\vec\omega$ and $\vec M$ are really associated with rotations in space and rotations around the axis $x,y,z$ don't commute with each other – unlike translations. Translations commute with each other which is why the 3 components in $\vec F=m\vec a$ don't mix with each other. For example, take the Earth, rotate it by 90 degrees around the $x$ axis, then 90 degrees around $y$ axis, then you rotate back by 90 degrees but first around $x$ axis again, so that you aren't undoing the $y$ rotation immediately, but then you undo the $y$ rotation, too. You don't get back where you have been: instead, you end up rotating the Earth around the $z$ axis. We say that rotations form the group $SO(3)$ which is non-abelian, $gh\neq hg$. The moment of force wants to rotate the rigid body around an axis but because it was already rotating around another axis given by $\vec \omega$ and the rotations don't commute with each other, the effect of the moment of force also influences the "remaining third" component. A natural way to write the vectors $\vec \omega, \vec M$ is actually an "antisymmetric tensor" – they're "pseudovectors", not actual vectors. At any rate, when you correctly derive the equations, you should reproduce what Euler got.
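To see the coupling "in action", here is a small sketch (Python/SciPy, torque-free case, with arbitrary illustrative moments of inertia): even with $M_z = 0$ at all times, $\omega_z$ changes whenever $\omega_x$ and $\omega_y$ are both nonzero.

```python
# Torque-free Euler equations: I1 w1' = (I2 - I3) w2 w3, and cyclic permutations.
import numpy as np
from scipy.integrate import solve_ivp

Ix, Iy, Iz = 1.0, 2.0, 3.0           # principal moments, illustrative values

def euler(t, w):
    wx, wy, wz = w
    return [(Iy - Iz) * wy * wz / Ix,
            (Iz - Ix) * wz * wx / Iy,
            (Ix - Iy) * wx * wy / Iz]

sol = solve_ivp(euler, (0.0, 20.0), [1.0, 1.0, 0.1], max_step=0.01)

# omega_z wanders although no moment about z is ever applied:
print("omega_z ranges from", sol.y[2].min(), "to", sol.y[2].max())
```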
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Do spacecraft engines suffer from carbon accumulation the way typical petrol/kerosene engines do? Just wondering whether the spacecraft engines/drives, or their booster rockets accumulate carbon the way car/truck engines do. What about ion/methane drives?
Not typically. In fact, the opposite generally occurs. The high temperatures and velocities in the rocket motor tend to cause erosion (ablation) along the nozzle. There is considerable research into the ablation of the nozzles because it changes the shape and thus the thrust characteristics. See for example this paper, and a simple search will reveal many more. It's also important to note that many spacecraft engines don't use carbon-based fuels. Solid rocket motors typically do, the binder material is usually a carbon-based material. But some liquid rocket engines are hydrogen and oxygen, so no carbon is involved.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why does bad smell follow people (assuming they are not the source)? When you are sitting in a room where there is a source of bad smell, such as somebody smoking or some other source of bad smell, it is often a solution to simply move to another spot where bad smell is not present. Assuming you are not actually the source of the smell, this will work for a while until you notice the smell has somehow migrated to exactly the spot where you are now sitting. Frustrating. This got me thinking about the fluid mechanics of this problem. Treat bad smell as a gas that is (perhaps continuously) emitted at a certain fixed source. One explanation could be that human breathes and perhaps creates a pressure differential that causes the smell to move around. Is there any truth to this? Please provide a reasoned argument with reference to the relevant thermodynamic and/or fluid quantities in answering the question. Theoretical explanation is desired, but extra kudos if you know of an experiment.
Fluid dynamics (more precisely, gas dynamics) plays a role, of course, especially if you also account for diffusion of the smell agents. The latter are usually organic molecules. In most cases it is diffusion that follows you. However, the gas flow certainly plays a role as well, and one cannot say in advance which is more important. In principle, research along the lines of your question is quite reasonable. The question is in the applied area: how best to organize ventilation in order to quickly remove bad smells, or, in a more general statement, to renew the air in a closed room. I have not followed the specific question concerning smells, but people are very active on closely related problems concerning ventilation of rooms and e.g. car cabins. You may find lots of literature on these subjects.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the difference between a spinor and a vector or a tensor? Why do we call a 1/2 spin particle satisfying the Dirac equation a spinor, and not a vector or a tensor?
It can be instructive to see the applications of Clifford algebra to areas outside of quantum mechanics to get a more geometric understanding of what spinors really are. I submit to you I can rotate a vector $a = a^1 \sigma_1 + a^2 \sigma_2 + a^3 \sigma_3$ in the xy plane using an expression of the following form: $$a' = \psi a \psi^{-1}$$ where $\psi = \exp(-\sigma_1 \sigma_2 \theta/2)= \cos \theta/2 - \sigma_1 \sigma_2 \sin \theta/2$. It's typical in QM to assign matrix representations to $\sigma_i$ (and hence, $a$ would be a matrix--a matrix that nonetheless represents a vector), but it is not necessary to do so. There are many such matrix representations that obey the basic requirements of the algebra, and we can talk about the results without choosing a representation. The object $\psi$ is a spinor. If I want to rotate $a'$ to $a''$ by another spinor $\phi$, then it would be $$a'' = \phi a' \phi^{-1} = \phi \psi a \psi^{-1} \phi^{-1}$$ I can equivalently say that $\psi \mapsto \psi' = \phi \psi$. This is the difference between spinors and vectors (and hence other tensors). Spinors transform in this one-sided way, while vectors transform in a two-sided way. This answers the difference between what spinors are and what tensors are; the question of why the solutions to the Dirac equation for the electron are spinors is probably best for someone better versed in QM than I.
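If it helps, the one-sided vs. two-sided rule can be checked numerically with the familiar $2\times 2$ Pauli matrices as a representation of $\sigma_1,\sigma_2,\sigma_3$ (a small sketch, nothing specific to the Dirac equation):

```python
# Rotate the vector a = sigma_1 with the rotor psi = cos(t/2) - sigma1 sigma2 sin(t/2).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

theta = np.pi / 3                                   # 60 degree rotation in the xy plane
psi = np.cos(theta / 2) * I2 - (s1 @ s2) * np.sin(theta / 2)

a = 1.0 * s1                                        # the vector (1, 0, 0)
a_rot = psi @ a @ np.linalg.inv(psi)                # two-sided transformation of a vector

# read the components back out via a_i = (1/2) Tr(a_rot sigma_i)
comps = [0.5 * np.trace(a_rot @ s).real for s in (s1, s2, s3)]
print(comps)   # approximately [cos(theta), sin(theta), 0]
```

The vector components come out rotated by $\theta$, while a further rotation would act on $\psi$ itself only from one side — which is exactly the distinction drawn above.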
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 4, "answer_id": 2 }
Same momentum, different mass The question is: if * *A bowling ball and ping pong ball *are moving with the same momentum *and you exert the same force to stop each one *which will take a longer time? or the same? *which will have a longer stopping distance? So I think I can think of this as: $$F = \frac{dp}{dt} = m \cdot \frac{v_i - 0}{\Delta t} = \frac{p_i}{\Delta t}$$ Since both have the same momentum, given the same force and momentum, the time will be the same? Is this right? Then how do I do the stopping distance one?
You're right about the stopping time: if you continuously apply a constant force, this will indeed be true. The stopping distance will probably not be the same, as the ping-pong ball is moving much faster initially (why?). Can you determine the velocity of the ball as a function of time? How will you use this velocity for determining the stopping distance?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
The earth's magnetic field This might sound like a silly question. Is it possible for the earth's magnetic field to actually destroy or harm earth? (implosion, crushing etc.)
Yes, the Earth's magnetic field can harm things on the Earth. For example, when a large solar flare hits the Earth it causes changes in the Earth's magnetic field, and these changes cause a voltage to be generated in any piece of wire. When those pieces of wire are power cables it can knock out electricity supplies. See for example this article, though bear in mind newspapers are generally sensationalist and solar flares probably won't end civilisation for a while yet. However, if you're asking from the perspective of a creature less reliant on modern technology, for example an ant, it's pretty unlikely that there's enough energy stored in the Earth's magnetic field to inconvenience you. As both Crazy Buddy and Michael Luciuk have pointed out, the Earth's magnetic field does an excellent job of protecting us from solar radiation. Without it, not only would we all die of radiation damage, but the unchecked solar radiation would strip away the Earth's atmosphere. So the worst damage the Earth's magnetic field could do would be due to it not existing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
constraint on scaling dimension How can we show that for any scalar operator $\Delta\geq1$ (where $\Delta$ is the scaling dimension)? Where can I find a reference for reading where it comes from?
This is a consequence of the Lehmann spectral representation for a physical scalar operator. The two point function of this operator (the expected value of the operator with its conjugate) can be written as an integral over propagators: $$ \langle \bar{\phi}(p)\phi(p') \rangle = (2\pi)^d\delta^d(p-p') \int_0^\infty {\rho(s)\over p^2 - s + i\epsilon} ds $$ where each propagator falls off as ${1\over x^2}$ at short distances in 4 dimensions, and $\rho(s)>0$ for all s (because of Hilbert space positivity--- this is the norm of a state, namely $||\phi(p)|0\rangle||$). A superposition of positive propagators falling off as ${1\over p^2}$ with positive coefficients cannot produce a falloff at large p which is faster than ${1\over p^2}$. This means that the asymptotic scale dimension of the scalar operator can't be less than 1 in 4 dimensions, it can't be less than 1/2 in 3 dimensions, and it can't be negative in 2 dimensions. This is not exactly mathematically true, because you can engineer a spectral weight which is growing near s=0 as a power law, to produce faster than 1/p^2 falloff. But it is physically true anyway, because such a growth requires an infinite number of particle species at p=0, which is inconsistent with the usual idea that a quantum field theory has a finite number of elementary fields, with a finite thermal entropy. The way to understand this is that superposing any finite tower of particles with positive spectral weights always leads to 1/p^2 falloff or slower, and a ${1\over x^a}$ propagator with $a\le 2$. The Kallen-Lehmann spectral representation is a standard field theory result; it is found in most standard textbooks. The original paper is reprinted in Schwinger's reprint volume "Quantum Electrodynamics".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Many-worlds: Where does the energy come from? With regard to the theory that each time a wave function collapses the universe splits so that each possible outcome really exists - where does all the energy required to create all the new universes come from?
There is no energy required to do that. Unitary evolution preserves energy precisely. The reason is the way energy is calculated in quantum theory, and if that is applied to MWI then each branch only contributes with its squared modulus branch amplitude to the total energy. This is the only consistent way to count energy in quantum theory.
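A sketch of the bookkeeping behind that statement (my notation, not the answerer's): write the post-measurement state as a superposition of orthonormal branches, $|\psi\rangle=\sum_i c_i|\psi_i\rangle$. If the branches are decohered, so that the cross terms $\langle\psi_i|H|\psi_j\rangle$ with $i\neq j$ are negligible, then $$ \langle\psi|H|\psi\rangle = \sum_i |c_i|^2\,\langle\psi_i|H|\psi_i\rangle , $$ i.e. each branch enters the total energy weighted by its squared-modulus amplitude, and that total is constant in time because the evolution is unitary and conserves $\langle H\rangle$. Nothing extra has to be paid to "create" the branches.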
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Why do green lasers appear brighter and stronger than red and blue lasers? This is mostly for my own personal illumination, and isn't directly related to any school or work projects. I just picked up a trio of laser pointers (red, green, and blue), and I notice that when I project them, the red and the blue appear to be dimmer to my eye than the green one. I had a fleeting suspicion that perhaps this is an effect of blue and red being at the periphery of the visible spectrum, but I honestly have no idea if this is the case or if it's just my eyes playing tricks on me. All three lasers have the same nominal strength, in this case.
Human color vision is based on four types of receptors in the retina: rods, and three types of cones. Their response to different wavelengths is shown in a graph of receptor response versus wavelength (image not reproduced here). It shows clearly how certain wavelengths, mostly around the yellow-green portion of the spectrum, are absorbed more strongly, and by more types of cells, than the rest. So it is normal that, even with equal powers, some colors are seen as brighter than others. Actually, digital cameras often filter their CCD array with a Bayer mask, which has twice as many pixels filtered green as red or blue, to better simulate the eye's color sensitivity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
Why do physicists believe that particles are pointlike? String theory gives physicists reason to believe that particles are 1-dimensional strings because the theory has a purpose - unifying gravity with the gauge theories. So why is it that it's popular belief that particles are 0-dimensional points? Was there ever a proposed theory of them being like this? And why? What reason do physicists have to believe that particles are 0-dimensional points as opposed to 1-dimensional strings?
Canonical particles possess an actual radius of hardness, which is determined by the Compton expression $\lambda_\text{Compton} = \frac{h}{mc}$. One can read more about it here: http://inerton.wikidot.com/canonical-particle Why do particle physicists speculate about point-like particles? It seems to me this is associated with their education; namely, their teachers told them wrong things and implanted an abstract, tunnel-vision approach to reality. It is a pity, but this is the truth.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 7, "answer_id": 6 }
Causal and Global structure of Penrose Diagrams What kind of global and causal structures does a Penrose diagram reveal? How do I see (using a Penrose diagram) that two different spacetimes have a similar global and causal structure? Also, I have the following metric $$ds^2 ~=~ Tdv^2 + 2dTdv,$$ defined for $$(v,T)~\in~ S^1\times \mathbb{R},$$ i.e. $v$ is periodic. This is the corresponding Penrose diagram (not reproduced here): Is the Penrose diagram that I have drawn correct?
A Penrose diagram of a metric $g_{ab}$ is used to represent the conformal structure of $g_{ab}$. Generally light rays move at $\frac{\pi}{4}$ from the upward vertical and the spacetime considered is spherically symmetric. The metric $\overline{g_{ab}}$ on the Penrose diagram satisfies $\overline{g_{ab}}=\Omega^{2} g_{ab}$. This implies that timelike (null, spacelike) vectors remain timelike (null, spacelike). From this, one can see that all concepts given in terms of timelike or null curves, such as the sets $I^{+}$ and $J^{+}$, remain the same, so all the causal structure defined in terms of those sets is preserved. The most common use for Penrose diagrams is to study the behaviour at infinity of the different types of geodesics in maximally extended spacetimes. That is the reason the conformal boundaries $i^{0},i^{+},i^{-}, \cal{I}^{+}$ and $\cal{I}^{-}$ are such an important feature of the diagrams: they basically represent the 'boundary' at infinity of a certain class of geodesics. Two spacetimes have the same diagram if they are conformally related, which implies their behaviour at infinity is the same. Having said that, there is fundamental information about the global structure of the spacetime and its causality that is not represented, such as geodesic incompleteness (in the sense of distinguishing between a singular point at finite distance and infinity) or isometries. The conformal structure preserves all information about angles, but loses information about lengths. For the metric you are asking about, this paper might be useful: Misner C. W., Taub-NUT Space as a Counterexample to Almost Anything, Lectures in Applied Mathematics, Vol. 8, pp. 160-169.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Why do birds sitting on electric wires not get shocked? When we touch electric wires, we get shocked. Why don't birds sitting on electric wires get shocked?
Because both of a bird's feet rest on the same wire, they are at the same electric potential, so there is no voltage across the bird's body and essentially no current flows through it. A person can get shocked because the body is a conductor: touching a wire that sits at a high electric potential while also being connected to the ground (or to anything else at a different potential) puts a large potential difference across the body, and current flows through it. A bird standing on a single wire never has such a potential difference across it, so it is not shocked.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Clarification on a Goldstein formula steps (classical mechanics) At page 20 of Classical Mechanics' Goldstein (Third edition), there are these two steps given between eqs. (1.51) and (1.52): $$\sum_i m_i \ddot {\bf r}_i \cdot \frac{\partial {\bf r_i}}{ \partial q_j}= \sum_i [\frac {d}{dt}(m_i {\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial \dot q_j})-m_i {\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial q_j}]$$ and $$\sum_j \{ \frac{d}{dt}[ \frac{\partial}{\partial \dot q_j}(\sum_i \frac{1}{2}m_i v^2_i)] - \frac{\partial}{\partial q_j}(\sum_i \frac{1}{2}m_i v^2_i)-Q_j \}\delta q_j .$$ Why does "$ \frac {1}{2}$" appear in the second formula?
The $\frac{1}{2}$ is due to the differentiation rule $$\frac{\partial }{\partial \dot q_j}({\bf v}_i \cdot {\bf v}_i ) ~=~2{\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial \dot q_j},$$ and $$\frac{\partial }{\partial q_j}({\bf v}_i \cdot {\bf v}_i ) ~=~2{\bf v}_i \cdot \frac{\partial {\bf v}_i}{\partial q_j}.$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Basic question on experimental plots On the following Higgs $\rightarrow$ Tau Tau plot, since we are plotting the ratio of $\frac{\sigma}{\sigma_{SM}}$ on the y axis, shouldn't the expected for this be 1? i.e., shouldn't the expected 68% and 95% be centered at a dotted line at 1? Anything else seems to imply we are expecting something other than the Standard Model...
In addition to referring you to the previous question and answer that David linked, I will try once more to present my interpretation of these graphs, called "Brazil bands". In my opinion they are the phenomenologist's attempt to extract limits out of very few events. Once there are enough events this type of plot and its yoga positions (catching your right ear behind your back with the left hand) are abandoned, as the Higgs mass plot of CMS shows. The use of these Brazil plots is to concentrate attention on regions which are not excluded even by scarce data, and thus give a hope of finding a desired Higgs there. Now that we have it they are useless. Where the Higgs is, the value should be 1 if it is a Standard Model Higgs. We see in the plot you give above that the measured cross-section over the cross-section calculated for the Standard Model Higgs is 1, within errors, at 125 GeV. Thus it is consistent with the real Higgs seen when the statistics improved. The confusion arises because there are two Monte Carlo simulations entering the "expected" plot. The reason is that one is necessary to get the theoretical value, since it cannot be found analytically, to large enough accuracy so that statistical errors would be irrelevant. The expected curves are curves that in the numerator imitate the data, i.e. if the data has 10 events a Monte Carlo is generated with 10 events and passed through all the limitations of the experimental setup, and the denominator is the pure-theory Monte Carlo. This ratio is distorted: the decreasing statistics as the mass increases, combined with the detector limitations and the deviation of each mass point from the ideal Higgs mass, create the distorted-from-1 ratio seen in your plot. When one has adequate statistics, the one and only Higgs would appear as 1 in the observed ratio and all the rest of the x axis would be depressed below 1, since the computed cross-section for a putative Higgs at that mass would be much larger than what the data show there; only at 125 GeV will the ratio be 1. The expected over observed would be at 1 all the way through, as you observed. As I said, when one has enough statistics this type of plot is useless.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/41990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How do eigenstates of harmonic oscillators with different frequencies compare? Suppose I have a harmonic oscillator with frequency $\Omega_1$ and another one with frequency $\Omega_2$. Is there a simple relationship between the eigenstates of the two? Especially, how would the ground-state of one of them be expressed in terms of eigenstates of the other one? An application of my question would be a harmonic oscillator whose frequency can be controlled. Suppose then I start out in the ground state and then suddenly change the frequency. I'd expect that I'm then not in a ground state of the (new) oscillator any more, and I'd be interested in the time evolution of my state. For that, I need to do a basis transform of my groundstate. The problem seems basic enough to me that there should be previous work done on it. A brute force solution would probably be to perform integrals over the eigenstates in real space, but I have hope that an algebraic solution in terms of creation and destruction operators exists.
They are related by rescaling x linearly, so they are just more compressed/expanded but have the same shape. $$ \psi'_n (x) = \psi_n(\sqrt{\Omega_1\over \Omega_2} x)$$ The reason is dimensional analysis--- the scale of x is determined from the physical length scale, which is the decay-constant of the ground state Gaussian: $$ \Delta X = \sqrt{\hbar \over m \Omega} $$ mass times frequency is momentum over length, and $\hbar$ is momentum times length, so the ratio inside the square root has units of length squared. Don't bother with the formulas you find in books, set m and $\Omega$ to 1, and use the dimensional analysis scaling laws to find the rest of the solutions. There is only one harmonic oscillator, up to choice of units.
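For the sudden-switch application in the question, a quick numerical check is easy to set up. Below is a minimal sketch (my own illustration, not part of the answer above; units $\hbar=m=1$, and the frequencies $\omega_1=1$, $\omega_2=2$ are arbitrary choices): it diagonalizes both oscillators on a grid and expands the old ground state in the new eigenbasis.

```python
# Minimal sketch: expand the ground state of an oscillator with frequency w1
# in the eigenbasis of an oscillator with frequency w2 (hbar = m = 1).
import numpy as np

def oscillator_eigenstates(w, x, n_states):
    """Lowest eigenstates of H = -0.5 d^2/dx^2 + 0.5 w^2 x^2 via finite differences."""
    dx = x[1] - x[0]
    n = len(x)
    kinetic = (np.diag(np.full(n, 2.0))
               - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
    potential = np.diag(0.5 * w**2 * x**2)
    energies, states = np.linalg.eigh(kinetic + potential)
    return energies[:n_states], states[:, :n_states] / np.sqrt(dx)  # so that sum |psi|^2 dx = 1

x = np.linspace(-10, 10, 2000)
w1, w2 = 1.0, 2.0                       # frequency before / after the sudden switch
_, states1 = oscillator_eigenstates(w1, x, 1)
E2, states2 = oscillator_eigenstates(w2, x, 8)

dx = x[1] - x[0]
psi0 = states1[:, 0]                    # old ground state = state right after the quench
c = states2.T @ psi0 * dx               # expansion coefficients c_n = <n_w2 | 0_w1>
print("E_n (w2):", np.round(E2, 3))
print("|c_n|^2 :", np.round(np.abs(c)**2, 4))   # only even n contribute, by parity
```

By parity only the even-$n$ states of the new oscillator pick up weight, and for a modest frequency ratio almost all of it stays in the new ground state; the subsequent time evolution is then just each coefficient acquiring its phase $e^{-iE_nt}$.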
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Intuition for multiple temporal dimensions It’s easy, relatively speaking, to develop an intuition for higher spatial dimensions, usually by induction on familiar lower-dimensional spaces. But I’m having difficulty envisioning a universe with multiple dimensions of time. Even if such a thing may not be real or possible, it seems like a good intellectual exercise. Can anyone offer an illustrative example?
If you are happy to focus on the intuitive without worrying about too much maths, the classic work on multiple time dimensions in physics must surely be J W Dunne's The Serial Universe - the second edition, published in 1942, is shorter and (comparatively) more readable. It was the sequel to his bestseller An Experiment with Time and elaborated on the role of the observer in modern physics. His regress of multiple time dimensions was inhabited by a similar regress of observers, thus (in retrospect) proffering a solution to the Schroedinger's Cat + Wigner's Friend regress of discrete real observers.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 4, "answer_id": 2 }
Why do diamonds shine? I have always wondered why diamonds shine. Can anyone tell me why?
The phenomenon you're looking for is called total internal reflection. You could also have a look at this link for more information. To draw a comparison with glass: in glass (for the most part), when you shine light onto it, it gets refracted at one surface, and gets refracted again at the other surface and leaves the material. This doesn't always happen, there is some total internal reflection happening, but the 'critical angle' for glass is really high so you don't usually see it. But diamond, on the other hand, has a really high refractive index ($\approx 2.4$), and because of that the critical angle for total internal reflection to occur is much smaller. So a greater percentage of the incident light gets internally reflected several times before it emerges from the diamond, making the diamond look really shiny. Edit: As @JohnRennie has also mentioned, it's also the shape that matters to the shininess. An uncut diamond doesn't look as bright, since the angles of incidence aren't arranged to exceed the critical angle.
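To put numbers on that (using the usual textbook values, $n\approx2.42$ for diamond and $n\approx1.5$ for ordinary glass), the critical angle at a material-air interface is $$\theta_c=\arcsin\frac{1}{n},\qquad \theta_c^{\text{diamond}}\approx\arcsin\frac{1}{2.42}\approx24^\circ,\qquad \theta_c^{\text{glass}}\approx\arcsin\frac{1}{1.5}\approx42^\circ,$$ so a much wider range of internal rays is trapped and bounced around inside a diamond than inside glass, which is exactly what the facets of a brilliant cut are arranged to exploit.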
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Wheatstone bridge galvanometer error We had to measure the resistance of $R_x$, we balanced the Wheatstone bridge and did calculations. My question is: we didn't include galvanometer error into calculations. Why is that? I read that it's very precise, but that doesn't seem like a good enough explanation in exact sciences :/ Edit: The precision is not the case as I was told, I need to go into more detail. When I measured, I set a value on the potentiometer, then I adjusted the adjustable resistor, so that the galvanometer would show zero. Is the error somehow compensated? ɛ - electromotive force. K - switch. R - adjustable resistor. C - potentiometer.
The huge advantage of bridge measurements is that you're only using the galvanometer to determine when the current between nodes C and D is 0. For this particular case, it's easy to calibrate the galvanometer exactly (or as good as your eyesight, anyway): before you apply any voltage to the circuit, note the galvanometer reading: that's 0! Compare this technique to a straightforward V/I measurement, where the current meter could read any value, and will in general have some error.
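To see why the galvanometer's calibration drops out (the labels below are generic, not the lettering of your figure): at balance no current flows through the galvanometer, so the two sides of the bridge act as independent voltage dividers and $$\frac{R_1}{R_2}=\frac{R_3}{R_x}\quad\Rightarrow\quad R_x=\frac{R_2 R_3}{R_1},$$ an expression that contains no property of the galvanometer at all. Its only job is null detection, so its imperfections enter the error budget only through how small a residual current you can still distinguish from zero, i.e. through the uncertainty in locating the balance point, not as a systematic scale error.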
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Solving time dependent Schrodinger equation in matrix form If we have a Hilbert space of $\mathbb{C}^3$ so that a wave function is a 3-component column vector $$\psi_t=(\psi_1(t),\psi_2(t),\psi_3(t))$$ With Hamiltonian $H$ given by $$H=\hbar\omega \begin{pmatrix} 1 & 2 & 0 \\ 2 & 0 & 2 \\ 0 & 2 & -1 \end{pmatrix}$$ With $$\psi_t(0)=(1,0,0)^T$$ So I proceeded to find the stationary states of $H$ by finding it's eigenvectors and eigenvalues. $H$ has eigenvalues and eigenvectors: $$3\hbar\omega,0,-3\hbar\omega$$ $$\psi_+=\frac{1}{3}(2,2,1)^T,\psi_0=\frac{1}{3}(2,-1,-2)^T,\psi_-=\frac{1}{3}(1,-2,2)^T$$ Respectively. Could anyone explain to me how to go from this to a general time dependent solution, and compute probabilities of location? I have only ever encountered $\Psi=\Psi(x,y,z,t)$ before, so I am extremely confused by this matrix format. I would be extremely grateful for any help!
The general solution is $$\psi(t)=\sum_k c_k e^{-itE_k/\hbar}\psi_k$$ where the $\psi_k$ form a basis of eigenvectors with corresponding eigenvalues $E_k$, and the $c_k$ are constant. You can match arbitrary initial conditions at $t=0$ by expanding the initial state in the eigenbasis; this will determine the values of the $c_k$. [Edit] To get the statistical interpretation: The expectation of the Hermitian observable $A$ at time $t$ is given in the Schroedinger picture by $$\langle A\rangle_t:=\psi(t)^*A\psi(t).$$ Here it is assumed that $\psi(t)$ has norm 1. As the squared norm is preserved by the dynamics, this gives a well-defined expectation (i.e., the expectation of the identity matrix is 1 at all times).
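As a concrete illustration of those formulas, here is a small sketch (mine, not the answerer's; I set $\hbar=\omega=1$, so energies are in units of $\hbar\omega$ and time in units of $1/\omega$):

```python
# Evolve psi(0) = (1,0,0) under the given 3x3 Hamiltonian (hbar = omega = 1).
import numpy as np

H = np.array([[1, 2, 0],
              [2, 0, 2],
              [0, 2, -1]], dtype=float)

E, V = np.linalg.eigh(H)            # columns of V are the eigenvectors, E = (-3, 0, 3)
psi0 = np.array([1.0, 0.0, 0.0])
c = V.conj().T @ psi0               # expansion coefficients c_k = <psi_k | psi(0)>

def psi(t):
    return V @ (c * np.exp(-1j * E * t))   # psi(t) = sum_k c_k exp(-i E_k t) psi_k

t = 0.7                                    # an arbitrary time
p = np.abs(psi(t))**2                      # probabilities of the three basis components
print(np.round(E, 6))
print(np.round(p, 4), round(p.sum(), 6))   # the probabilities always sum to 1
```

The components $|\psi_i(t)|^2$ oscillate in time but always add up to 1: component $i$ is the probability of finding the system in the $i$-th basis state, which is the analogue of $|\Psi(x,t)|^2$ for this discrete, three-dimensional Hilbert space.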
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Definition of Fine-Tuning I've looked in and out the forum, and found no precise definition of the meaning of fine-tuning in physics. QUESTION Is it possible to give a precise definition of fine-tuning? Of course, I guess most of us understand the empirical meaning of the phrase... but it seem so ethereal, that's the reason behind my question.
John Rennie's answer describes the term "fine tuning" as used in high-energy physics, but the term is often used in a very different way in the study of critical phenomena. In that context, it often has a much sharper definition: a Hamiltonian is "fine-tuned" if it lies on a particular lower-dimensional submanifold of Hamiltonian parameter space (typically a critical hypersurface). In this case we can even say that multicritical points are more fine-tuned than singly critical points, because they lie on a submanifold of higher codimension (and are correspondingly more difficult to achieve in an experiment).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Current in series resistors and voltage drop in parallel resistors When we have resistors in series, the current through all the resistors is same and the voltage drop (or simply voltage) at each resistor is different. Question 1: It is fine that voltage drop (potential drop) across each resistor is different because each resistor offers different resistance (suppose). but how is the current through each resistor same? If we have resistors of different resistance, shouldn't the current be different through each resistor? Similarly, when we have resistors in parallel, the current through each resistor is different but the voltage drop at each resistor is same. Question 2: Current through each resistor is different because resistance of each resistor is different (suppose). but how is the voltage drop across each resistor same here? Shouldn't the voltage drop at each resistor be different because each resistor offers different resistance?
The number of charge carriers in a series circuit is conserved, and current is just the flow of those carriers, so the same current has to pass through every element of the loop. The voltage source supplies energy to the carriers as they flow around the circuit, and part of that energy is dropped in passing through each resistance; a larger resistance takes a larger share of the drop, which is why the voltages differ while the current does not.
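A concrete numerical example (my own, not the answerer's) may make both cases transparent. In series, with a 12 V source and $R_1=2\,\Omega$, $R_2=4\,\Omega$: $$I=\frac{V}{R_1+R_2}=\frac{12}{6}=2\ \text{A},\qquad V_1=IR_1=4\ \text{V},\qquad V_2=IR_2=8\ \text{V},$$ so the single current is fixed by the total resistance while the drops differ. In parallel, both resistors are connected directly across the same two nodes of the source, so each sees the full 12 V, and $$I_1=\frac{12\ \text{V}}{2\ \Omega}=6\ \text{A},\qquad I_2=\frac{12\ \text{V}}{4\ \Omega}=3\ \text{A};$$ here the voltage is fixed by the source while the currents differ.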
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
If wave packets spread, why don't objects disappear? If you have an electron moving in empty space, it will be represented by a wave packet. But packets can spread over time, that is, their width increases, with it's uncertainty in position increasing. Now, if I throw a basketball, why doesn't the basketball's packet spread as well? Wouldn't that cause its uncertainty in position to increase so much to the point it disappears? EDIT: I realize I wasn't clear what I meant by disappear. Basically, suppose the wave packet is spread over the entire Solar System. Your field of vision covers only an extremely tiny part of the Solar System. Therefore, the probability that you will find the basketball that you threw in your field of vision is very small.
AFAIK there is no settled answer to this question; that is why there are a few "theories" that try to address it, like GRW. EDIT: Let me elaborate, because I gave you the most interesting pointer without directly addressing your question. There is no need for you to throw the ball. The atoms of the ball are constrained by their mutual potential, so the waves do not spread in the sense of free particles. @Twistor59's answer is the heuristic usually given in textbooks, but obviously there are no 100 g point particles as such. The main issue is why we don't see the ball 100 meters away, or wherever, since wavefunctions have non-zero probability throughout space. The measurement problem is a close relative of your "proper" question, but a bit different; GRW is more concerned with that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Equivalent spring equations for non-helical coil shapes? The compression spring equations are generally given for helical coil. What are the equivalent equations for alternative coil shapes, like oval?
The mechanics of all spring problems is derived from Hooke's law, which models the properties of springs for small changes in length. Even for helical springs this law is a (very good) approximation. You can consider different spring geometries, different materials, larger changes in length... and this approximation will obviously worsen. Of course one can try to create more complicated models to take these matters into account, but it's generally not worth it and, almost certainly, you are never going to see different equations for spring dynamics. For further reading on the limitations of Hooke's law click here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Relativistic momentum I have been trying to derive why relativistic momentum is defined as $p=\gamma mv$. I set up a collision between 2 same balls ($m_1 = m_2 = m$). Before the collision these two balls travel one towards another in $x$ direction with velocities ${v_1}_x = (-{v_2}_x) = v$. After the collision these two balls travel away from each other with velocity ${v_1}_y = (-{v_2}_y) = v$. Coordinate system travells from left to right with velocity $u=v$ at all times (after and before collision). Please see the pictures below where picture (a) shows situation before collision and picture (b) after collision. Below is a proof that Newtonian momentum $mv$ is not preserved in coordinate system $x'y'$. I used $[\, | \,]$ to split $x$ and $y$ components. $p_z'$ is momentum before collision where $p_k'$ is momentum after collision. $$ \scriptsize \begin{split} p_z' &= \left[ m_1 {v_1}_x' + m_2 {v_2}_x'\, \biggl| \, 0 \right] = \left[ m_1 0 + m_2 \left( \frac{{v_2}_x - u}{1-{v_2}_x\frac{u}{c^2}} \right)\, \biggl| \, 0 \right]= \left[ m \left( \frac{-v - v}{1+ v \frac{v}{c^2}} \right) \, \biggl| \, 0 \right] \\ p_z' &= \left[ - 2mv \left( \frac{1}{1+ \frac{v^2}{c^2}}\right) \, \biggl| \, 0 \right] \end{split} $$ $$ \scriptsize \begin{split} p_k' &= \left[-2mv \, \biggl| \,m_1 {v_1}_y' + m_2 {v_2}_y'\right]=\left[ -2mv \, \biggl| \, m_1 \left( \frac{{v_1}_y}{\gamma \left(1 - {v_1}_y \frac{u}{c^2}\right)} \right) + m_2 \left( \frac{{v_2}_y}{\gamma \left(1 - {v_2}_y \frac{u}{c^2}\right)} \right) \right]\\ p_k' &= \left[ -2mv \, \biggl| \, m \left( \frac{v}{\gamma \left(1 - v \frac{v}{c^2}\right)} \right) - m \left( \frac{v}{\gamma \left(1 - v \frac{v}{c^2}\right)} \right)\right]\\ p_k' &= \left[ -2mv \, \biggl| \, 0 \right] \end{split} $$ It is clear that $x$ components differ by factor $1/\left(1+\frac{v^2}{c^2}\right)$. QUESTION: I want to know why do we multiply Newtonian momentum $p=mv$ by factor $\gamma = 1/ \sqrt{1 - \frac{v^2}{c^2}}$ and where is the connection between $\gamma$ and factor $1/\left(1+\frac{v^2}{c^2}\right)$ which i got?
Assume that the relativistic momentum is the same as the nonrelativistic momentum you used, but multiplied by some unknown function of velocity $\alpha(v)$. $$\mathbf{p} = \alpha(v)\,\, m \mathbf{v}$$ Then in the primed frame, the total momentum before the collision is just what you had, but multiplied by $\alpha(v_i)$, with $v_i$ the speed before collision. The momentum after the collision is again what you had, but multiplied by $\alpha(v_f)$, with $v_f$ the speed after the collision. In order to conserve momentum we must have $$ \alpha(v_i) \frac{-2mv}{1+v^2} = -2mv \,\alpha(v_f)$$ For simplicity, I'm suppressing factors of $c$. After the collision, you have a mistake in your velocity transformations. The vertical speed is just $v/\gamma$. That makes the speed of each ball $v_f = (v^2 + (v/\gamma)^2)^{1/2} = v \left(2-v^2\right)^{1/2}$ Plugging in $v_i$ and $v_f$ into the previous equation and canceling some like terms we have $$ \alpha\left(\frac{2v}{1+v^2}\right) \frac{1}{1 + v^2} = \alpha\left(v[2-v^2]^{1/2}\right)$$ If you let $\alpha(v) = \gamma(v)$ and crunch some algebra you'll see that the identity above is satisfied. As for your original point, a desire to understand why momentum has a factor $\gamma$ in it, analyzing situations like this one is helpful, but ultimately it is probably best to understand momentum as the spatial component of the energy-momentum four-vector. Since it is a four-vector, it must transform like any other four-vector, $\gamma$'s and all.
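If you want to check that last step without grinding through the algebra by hand, here is a small sympy sketch (mine, not the answerer's; $c=1$ throughout):

```python
# Verify that alpha(v) = gamma(v) satisfies gamma(v_i)/(1 + v^2) = gamma(v_f), with c = 1.
import sympy as sp

v = sp.symbols('v', positive=True)          # 0 < v < 1 is the physically relevant range
gamma = lambda u: 1 / sp.sqrt(1 - u**2)

v_i = 2*v / (1 + v**2)                      # speed of ball 2 before the collision (primed frame)
v_f = v * sp.sqrt(2 - v**2)                 # speed of either ball after the collision (primed frame)

lhs = gamma(v_i) / (1 + v**2)
rhs = gamma(v_f)

# Compare the squares (both sides are positive), which avoids absolute-value branches:
print(sp.simplify(lhs**2 - rhs**2))                       # -> 0
print((lhs - rhs).subs(v, sp.Rational(3, 5)).evalf())     # -> 0 numerically as well
```

Both sides reduce to $1/(1-v^2)$, so $\alpha(v)=\gamma(v)$ does make the momentum balance in the primed frame, consistent with the four-vector argument at the end of the answer.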
{ "language": "en", "url": "https://physics.stackexchange.com/questions/43969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Definition of scattered particle? Compare the number of scattered particles: $N_s=Fa\int\sigma(\theta)d\Omega$ With the total number of incident particles: $N_{in}=Fa$ Here, $F$ is the flux of the incoming beam, $a$ the area, $\sigma$ the cross-section and $\Omega$ the solid angle. Why isn't $N_s=N_{in}$? How does one define which particles are scattered and which are not? Aren't they all interacting with the target to some degree? Isn't the number of particles conserved normally? Do almost all the particles either pass right through almost undetected or get scattered significantly, so that what we are really integrating over is a sphere surrounding the target except for a spot of area $a$ where the beam exits?
With the exception of stopping targets, there is going to be a portion of the beam which continues down the beam pipe (i.e. misses even the innermost elements of the detector), which would usually be counted as "unscattered" for that purpose. In that case the integration is often over the solid angle covered by active detector elements.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/44095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to compute the expectation value $\langle x^2 \rangle$ in quantum mechanics? $$\langle x^2 \rangle = \int_{-\infty}^\infty x^2 |\psi(x)|^2 \text d x$$ What is the meaning of $|\psi(x)|^2$? Does that just mean one has to multiply the wave function with itself?
$\psi$ can be thought of as a complex column vector with infinitely many entries indexed by the variable $x$. The entry at the $x$th position is denoted $\psi(x)$. $|\psi(x)|^2$ is then the modulus squared of the entry at the $x$th position. The expression $\displaystyle\int_{-\infty}^{\infty}x^2|\psi(x)|^2dx$ can be heuristically understood as: $[*,*,\overline{\psi(x)},*,*]\left[ \begin{array}{ccccc} * & 0 & 0 & 0 & 0\\ 0 & * & 0 & 0 & 0\\ 0 & 0 & x^2 & 0 & 0\\ 0 & 0 & 0 & * & 0\\ 0 & 0 & 0 & 0 & *\end{array}\right]\left[\begin{array}{c}*\\*\\\psi(x)\\*\\*\end{array}\right]$ Where $[*,*,\overline{\psi(x)},*,*]$ is the infinite dimensional row vector which is the transpose conjugate of the column vector $\psi$, and in the middle we have an infinite dimensional diagonal matrix whose $(x,x)$th entry is $x^2$. This is in general true in QM. Any observable $A$ can be written as a hermitian matrix which acts on the space of column vectors (the state space), and its expectation value for a given column vector $\psi$ is defined as $\langle A\rangle_{\psi}=\psi^{\dagger}A\psi = \displaystyle\sum_{i,j}\overline{\psi_i}A_{ij}\psi_{j}$. In this infinite dimensional case, as above, the sum is replaced by an integral over the continuous indices.
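In the expression above, $|\psi(x)|^2$ is the squared modulus $\psi^*(x)\,\psi(x)$. A tiny numerical sketch of the whole recipe (my own, not from the answer above), using a normalized Gaussian of width parameter $s$, for which the analytic result is $\langle x^2\rangle=s^2/2$:

```python
# <x^2> for a normalized Gaussian psi(x) = (pi*s^2)**(-1/4) * exp(-x^2/(2*s^2)).
import numpy as np

s = 1.3                                    # arbitrary width parameter
x = np.linspace(-20, 20, 200001)
psi = (np.pi * s**2)**(-0.25) * np.exp(-x**2 / (2 * s**2))

prob_density = np.abs(psi)**2              # |psi(x)|^2 = conj(psi(x)) * psi(x)
print(np.trapz(prob_density, x))           # ~1.0   : the state is normalized
print(np.trapz(x**2 * prob_density, x))    # ~0.845 : equals s^2/2 for s = 1.3
```

The same two lines of integration work for any other observable that is a function of $x$; for operators involving derivatives (like momentum) you would differentiate $\psi$ before forming the integrand.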
{ "language": "en", "url": "https://physics.stackexchange.com/questions/44147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Right topology for infinite dimensional "Hilbert" spaces with indefinite or semidefinite norm For positive definite infinite dimensional Hilbert spaces, there is the standard Cauchy norm topology. What if this state space has an indefinite norm or a positive semidefinite one, as in gauge theories or Faddeev-Popov ghosts? Which infinite sums are valid, and which aren't? Similarly, for the algebra of operators, which norm topology do we choose? Not the W*-one? The C* one?
The topology is imposed only on the physical Hilbert space, which has a positive definite metric. If you need a topology outside, you are free to choose any that suits your purposes, but there is no canonical one. As there currently is no mathematically rigorous version of interacting quantum gauge fields, the question of which infinite sums, limits, etc., are valid can currently not be answered. There is significant rigorous work by Strocchi on a C^*-algebraic framework for gauge fields in an indefinite metric setting (probably in his book ''Selected Topics on the General Properties of Quantum Field Theory'', though I don't have it available to check). But it hasn't led so far to substantial results in the interacting case.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/44339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A thought about Quasars If Quasars are "beams" of energy exiting a super-massive black hole, in order for them to get through the black-hole's event horizon, they'd have to be traveling faster than the speed of light. My question is the following: My understanding of particle physics is still sketchy, nevertheless I think that in order for a particle to move faster than the speed of light, it must be mass-less. What particles are involved in the creation and furthermore, the mechanics of quasars?
The beam of energy is thought to originate outside the event horizon and be associated with an accretion disk formed by matter falling into the black hole.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/44571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The requirements for superconductivity Which properties are sufficient evidence for a material to be not superconducting? I am looking for a set of statements like "If the material is semiconducting, it is not superconducting". Edit: I am not looking for a definition of superconductivity, or for introductory literature like the famous W. Buckel. I am looking for properties that would forbid superconductivity. If you have a source for it I would be very glad. As far as I remember, magnetic atoms will forbid superconductivity too, but I could not find a source yet.
Unfortunately, there is no hard and fast set of rules for superconductivity yet. The elemental superconductors and several of the metallic alloys seem to follow one set of rules, whereas the high Tc cuprate compounds follow a different set of rules. The latest family of superconductors, namely the Oxy-Iron-Pnictides/Chalcogenides, meets an altogether different set of conditions. 1) Most metals and metallic alloys - BCS theory, Cooper pairing, electron-phonon interaction, no coexistence of magnetism with superconductivity, obey the Tc < 30 K rule. Mostly s-wave pairing symmetry of the Cooper pair wave function. 2) High Tc cuprates - No to BCS theory, possibly Cooper pairing, may not have electron-phonon coupling, etc. Tc > 30 K up to 165 K; the 30 K barrier is broken. Competing orders of magnetism and superconductivity. d-wave or related pairing symmetry. No theory yet. 3) Iron Pnictides/Chalcogenides - Coexistence of competing orders of magnetism and superconductivity, different pairing symmetry when compared with conventional and cuprate superconductors, possibly p-wave pairing. Tc's span from very low temperatures to around 60 K within 7 years of discovery. Tc's poised to increase. May provide crucial understanding towards the high Tc mechanism of superconductivity. No theory yet. So, there seems to be no hard and fast set of rules yet.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/44862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Velocity vs Time Bounce Could someone please explain the trajectory of the ball that is bouncing in this picture... The vertical component of the velocity of a bouncing ball is shown in the graph below. The positive Y direction is vertically up. The ball deforms slightly when it is in contact with the ground. I'm not sure what the ball is doing and when, what happens at 1s?
Remember $$v=\frac{\text{d}x}{\text{d}t}.$$ Then $$x(t)=x(0)+\int_0^t v(t')\, \text{d}t'.$$ Simple integration of the graph then gives you $x(t)$: between bounces $v(t)$ is a straight line of slope $-g$, so $x(t)$ is a chain of parabolic arcs of decreasing height, and the steep segment where $v$ rapidly reverses sign (presumably the feature at $t=1\,$s that you are asking about) is the short interval during which the ball is in contact with the ground, deforms slightly, and rebounds with reduced speed.
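If you want to reproduce such a plot yourself, here is a small sketch (entirely illustrative: the drop height, the restitution coefficient, and the first impact falling near 1 s are my assumptions, not values read off your graph):

```python
# Illustrative bouncing-ball kinematics: step v(t) forward in time and integrate it to get x(t).
import numpy as np

g, e = 9.8, 0.8                  # gravity (m/s^2) and an assumed coefficient of restitution
t = np.linspace(0.0, 3.0, 3001)
dt = t[1] - t[0]

v = np.zeros_like(t)             # vertical velocity, positive = upward
x = np.zeros_like(t)             # height above the ground
x[0] = 4.9                       # dropped from rest at 4.9 m, so the first impact is near t = 1 s

for i in range(1, len(t)):
    v[i] = v[i - 1] - g * dt             # free fall between contacts
    x[i] = x[i - 1] + v[i] * dt
    if x[i] < 0.0:                       # contact: rebound with 80% of the impact speed
        x[i] = 0.0
        v[i] = -e * v[i]

# v(t) comes out as straight-line segments with sign-reversing jumps at each bounce,
# and x(t), its integral, as parabolic arcs of decreasing height.
print(round(v[990], 2), round(v[1010], 2))   # velocity just before / just after the ~1 s impact
```

Plotting `x` and `v` against `t` (e.g. with matplotlib) reproduces the qualitative shape of the graph in the question together with the corresponding position curve.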
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 4 }
Hamiltonian in position basis Let $ H = \frac{-h^2}{2m}\frac{\partial^2 }{\partial x^2}$. I want to find the matrix elements of $H$ in position basis. It is written like this: $\langle x \mid H \mid x' \rangle = \frac{-h^2}{2m}\frac{\partial^2}{\partial x^2} \delta(x -x')$. How do we get this? are we allowed to do $\langle x | \frac{\partial^2}{\partial x^2} \mid x' \rangle = \frac{\partial^2}{\partial x^2} \langle x \mid x' \rangle$? Why? It seems some thing similar is done above.
$$\langle x|H|x'\rangle=\langle x|-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}|x'\rangle$$ $$=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\langle x|x'\rangle$$ $$=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\delta(x-x').$$ Pulling the derivative out of the bracket is allowed because, for any state $|\psi\rangle$, the kinetic term acts in the position representation as $\langle x|\frac{\hat p^2}{2m}|\psi\rangle=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\langle x|\psi\rangle$; applying this with $|\psi\rangle=|x'\rangle$ and using $\langle x|x'\rangle=\delta(x-x')$ gives the result.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
Why are different frequency bands used in different countries? Why are different frequency bands used in different countries despite ITU's effort for a common frequency band use? There's got to be a reason behind this. For instance, U.S.-based Verizon Wireless uses the 700 MHz frequency band for its LTE service while European TeliaSonera and South Korean SKT uses 1800/2600 MHz frequency bands and 850/1800 MHz frequency band, respectively.
The reason for this is mainly operational, rather than to do with the laws of physics. Radio spectrum is a very scarce resource, and is managed independently by each country's regulatory authority. In order to allocate spectrum to a mobile operator, the national regulatory authority has to make sure that spectrum is not being used by any other services. The spectrum that an operator would like to use for its cellular services might already be in use in that country for other purposes - radar, military communications, RFID (as mentioned in the previous answer), so the regulator has to carefully manage the spectrum to avoid conflict. Another factor is that the spectrum required to build a modern cellular system has to be in reasonably sized contiguous chunks in order for the system to function (e.g. a spare free 50 kHz of bandwidth here and there is no use!). This puts further constraints on the regulators for allocating the spectrum. The result of all this is a rather inhomogeneous, country-dependent allocation of spectrum for mobile services. If you now ask about a particular technology, such as LTE, a further complication is that a mobile operator may already be using some of the LTE spectrum for another radio technology (such as WCDMA), and may not consider it economically justifiable to switch to LTE at this time. So the reasons are regulatory/technological/economic rather than physical. As far as the laws of physics go, there is a relatively small bandwidth of spectrum which has suitable radio propagation properties for cellular communications (reasonable distances, in-building penetration), so there is enormous competition for this spectrum!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Would a gauss rifle based on generated magnetic fields have any kickback? In the case of currently developing Gauss rifles, in which a slug is pulled down a line of electromagnets, facilitated by a micro-controller to achieve great speed in managing the switching of the magnets, does the weapon firing produce any recoil? If so, how would you go about calculating that recoil?
Simple answer when you think about it: You are imparting a force to accelerate the slug, so you're going to get an equal and opposite reaction. In a normal rifle, the explosion accelerates the bullet rapidly and you get recoil. In a gauss rifle, the acceleration will be a bit lower, but for a slightly longer time (the entire length of the barrel), so for the same muzzle velocity you will be able to calculate the recoil in the exact same way.
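To put a rough number on it (the figures are purely illustrative, not from the answer above): by momentum conservation the recoil impulse equals the slug's muzzle momentum, $$p_{\text{recoil}}=m_{\text{slug}}\,v_{\text{muzzle}},\qquad \text{e.g. } 0.05\ \text{kg}\times 800\ \text{m/s}=40\ \text{kg}\cdot\text{m/s},$$ and the average recoil force is that impulse divided by the time over which the slug is accelerated, $\bar F=p_{\text{recoil}}/\Delta t$. For the same muzzle velocity the total impulse is identical to that of a powder rifle; the coilgun merely spreads it over the longer $\Delta t$ of the full barrel traversal, so the peak force on the shooter is lower.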
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
How distorted does the Andromeda Galaxy appear to us due to the speed of light? The Andromeda Galaxy appears to us at an angle to the galactic disk, i.e. we are not in the Andromeda Galaxy disc's plane, nor are we near the direction that the galaxy's pole points. Therefore, due to the geometry and distances involved, it would seem to me that we are seeing the 'far edge' a few thousand years later than we are seeing the 'near edge'. How far can a galaxy spin during that those few thousand years, and therefore how distorted are we seeing Andromeda? Can we infer how Andromeda or other large spiral galaxies seen at an angle would look if light were to travel instantly? Might this phenomenon account for some of the unusual spin properties measured in galaxies, which is attributed to dark matter?
Andromeda is around 70,000 light years across (depending on where you make the edge) so yes the positions of individual stars are shifted. But since it typically takes 250 million years for a galaxy to rotate they are only shifted by 70/250,000 of a circle = 0.1deg. The rotation curve of a galaxy, which tells us the actual mass, depends on the velocity of stars relative to the center of the galaxy so their rotation position doesn't matter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/45255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }