Why does humidity cause a feeling of hotness? Imagine there are two rooms kept at the same temperature but with different humidity levels. A person is asked to stay in each room for 5 minutes. At the end of the experiment, if we ask them which room was hotter, they will point to the room with the higher humidity. Correct, right? How does humidity cause this feeling of hotness?
When the ambient humidity is high, the effectiveness of evaporation over the skin is reduced, so the body's ability to get rid of excess heat decreases. Human beings regulate their body temperature quite effectively by evaporation, even when we are not sweating, thanks to our naked skin. (This, supposedly, is also what made it possible for early hominids to become hunters by virtue of being effective long-distance runners.) Humans are so good at this, we can survive in environments that are significantly hotter than our body temperature (e.g., desert climates with temperatures in the mid-40s Celsius) so long as the humidity remains low and we are adequately hydrated. (Incidentally, this is also why we are more likely to survive being locked in a hot car on a summer day than our furry pets.) In contrast, when the humidity is very high, even temperatures that are still several degrees below normal body temperature can already be deadly.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 4, "answer_id": 0 }
In general, why do smaller guns have more felt recoil? Why is recoil easier to control on a more massive gun compared to a smaller gun with the same bullet? Presumably the bullet leaves both guns with the same momentum, but the larger gun seems easier to control. Since the momentum you have to control is the same in both cases, why do we perceive less recoil on a bigger gun?
Newton's third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body. Momentum is the product of mass and velocity. The heavier gun has more mass, so, for the same momentum, it must have less "backwards" velocity, and so less felt recoil.
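To make this concrete, here is a small numerical sketch (the bullet mass, muzzle speed, and gun masses are illustrative assumptions, not values from the question): for a fixed bullet momentum, the recoil speed scales as $1/M$, and the recoil kinetic energy $p^2/2M$ delivered to the shooter also falls as the gun gets heavier.

```python
# Illustrative sketch (assumed numbers): same bullet momentum, two gun masses.
bullet_mass = 0.010      # kg  (assumed 10 g bullet)
muzzle_speed = 400.0     # m/s (assumed)
p_bullet = bullet_mass * muzzle_speed        # momentum the shooter must absorb

for gun_mass in (1.0, 4.0):                  # kg: light pistol vs heavy rifle (assumed)
    v_recoil = p_bullet / gun_mass           # momentum conservation
    ke_recoil = 0.5 * gun_mass * v_recoil**2 # energy delivered to the hand/shoulder
    print(f"{gun_mass:.1f} kg gun: recoil speed {v_recoil:.2f} m/s, "
          f"recoil energy {ke_recoil:.2f} J")
```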
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
How can the magnetic field surrounding a current-carrying wire ever be uniform? My book says that to find the force a current-carrying wire exerts on a moving charge, one uses $B = (\mu_0 / 2\pi)(I/r)$ to find the magnitude of magnetic field around the wire, and then uses that to find $F_B = q v B \sin\theta$, which is the same formula as that for uniform field between the planes of a permanent magnet. How can the field around the wire ever be uniform since intuitively it weakens as distance increases?
The field around the wire isn't uniform. When you calculate the force on a charge in a magnetic field, you use the value of the field at the point where the particle is. So, $$F_B = qvB\sin\theta$$ is not just for a constant field. If the field varies from position to position in space, then the force the particle feels will also vary. This equation is introduced with a constant field because the math is easier and because the resulting motion is a simple circle. So, when you calculate the force of a current-carrying wire on a particle, the resulting force is only for that moment in time. If the particle moves towards or away from the wire, the force on the particle will be different. $F_B = qvB\sin\theta$ is always true, but that doesn't mean the force is constant. Below is a picture of a proton traveling at $10\ m/s$ next to a wire carrying $1\ A$ of current. The proton starts off at the bottom of the picture traveling upwards parallel to the wire in the same direction as the current. Notice that as the proton gets closer to the wire, the magnetic field increases, so the turn gets tighter. Once the proton turns $180^\circ$, it starts moving away from the wire, and the turns become wider.
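The picture described above is easy to reproduce numerically. Below is a minimal sketch (not the code behind the original figure): it integrates the Lorentz force on a proton using the position-dependent field $B(r)=\mu_0 I/2\pi r$ of the wire, with 1 A of current, a 10 m/s proton moving parallel to the current, and an assumed starting distance of 1 cm from the wire.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.constants import mu_0, e, m_p

I = 1.0                                   # wire current (A), along +y at x = 0

def rhs(t, s):
    x, y, vx, vy = s
    Bz = -mu_0 * I / (2 * np.pi * x)      # field of the wire, evaluated at x > 0
    ax = (e / m_p) * vy * Bz              # F = q v x B, in-plane components
    ay = -(e / m_p) * vx * Bz
    return [vx, vy, ax, ay]

# start 1 cm from the wire, moving at 10 m/s parallel to the current (assumed)
sol = solve_ivp(rhs, (0.0, 0.01), [0.01, 0.0, 0.0, 10.0],
                max_step=1e-6, rtol=1e-8)
x = sol.y[0]
print("closest approach to the wire: %.4f m" % x.min())
print("farthest excursion:           %.4f m" % x.max())
```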
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there some no-go theorem for $D=9$ Kaluza Klein QCD+EM? While QCD is a typical product of AdS/CFT and some other research trends in extra dimensions, I have never found in the literature an example producing the non-chiral part of the standard model, colour plus electromagnetism, or even color alone, from D=9 Kaluza Klein. In principle such a theory could be obtained * *by compactification on the 5-sphere $S^5$ and adding some Higgs to break $SO(6)$, or *by compactification on the product manifold $CP^2 \times S^1$, producing directly the gauge group $SU(3) \times U(1)$. Besides, QCD alone could be obtained from a $D=8$ theory on $CP^2$. The traditional argument about the absence of chiral fermions does not apply here, as both QCD and EM are defined with Dirac fermions. So if there is a no-go theorem at work forbidding the scheme, it must be a different one. Here is the question: Is there one? Or, as a counter-proof of my question, has the example actually been worked out in the literature, and it only happens that I have not searched deeply enough?
With an answer selected (and bounty awarded) it is time to open a community wiki for explicit references on work along the lines of getting QCD + EM, or alternatively QCD alone or QCD + a "4th colour", extracting the group from the extra dimensions. * *An early 1975 work by founding fathers of string theory claims to obtain an O(6) and then an SU(4) group from the compactification of the usual six extra dimensions, but it does not use it for colour but, as would become traditional later, for generations. It is "Dual Field Theory of Quarks and Gluons" by J. Scherk and John H. Schwarz. But it mentions an even earlier, 1965, work by Y. Neumann also pivoting on six extra dimensions and SU(4): http://www.sciencedirect.com/science/article/pii/0031916365902258 http://inspirehep.net/record/44806?ln=es https://inspirehep.net/record/49123?ln=es *A 2012 presentation by S. V. Bolokhov includes an example that claims to derive colour from D=8 via Kaluza Klein on a torus with a non-flat metric. (Thanks Olaf Matyja for this reference.) It refers to previous work by other Russian authors, perhaps justifying the torus+metric methodology.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Conical train wheels I've been reading about how the conical shape of train wheels helps trains round turns without a differential. For those who are unfamiliar with the idea, the conical shape allows the wheels to shift and slide across the tracks, thus effectively varying their radii and allowing them to cover different distances while rotating at the same angular velocity. A cross-sectional view of the tracks and wheels generally looks something like: But what about a configuration like the following? I read in an online article that wheels in the second configuration may more easily slip and derail from the tracks (assuming there are no flanges to prevent them from doing so). But I can't convince myself using physics why that might be. Is one of these two configurations actually more reliable than the other?
The contact with the rail creates a kinematic center of rotation where the reaction forces meet. The rail car will tend to rotate about this center as a result of side loads. * *If the center is above the center of mass, the rail car acts like a hanging pendulum. A small deflection will cause a restoring torque opposing the swing. *If the center is below the center of mass, the rail car acts like an inverted pendulum. A small deflection will cause a positive feedback amplifying the swing. As a side effect the rail car will turn away from the turn instead of into the turn when the cone is the other way around.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 6, "answer_id": 0 }
Why is the index of refraction different for different wavelengths? The index of refraction can be written as $$n=\frac{\lambda_v}{\lambda_m}$$ where $\lambda_v$ is the wavelength in a vacuum and $\lambda_m$ is the wavelength in the medium. I’ve been told that since wavelength appears in the definition of an index of refraction, an index of refraction varies with wavelength. However, why would that be the case? The index of refraction is a ratio; if a wavelength of one wave is different from that of another wave passing through the same medium, the index of refraction should not be different for each wave, since they would have had different wavelengths in a vacuum too. So why is the index of refraction dependent on the wavelength?
I think you will have an easier time viewing the index of refraction from a speed-point of view. Consider the following: The energy of a given photon is determined by its frequency (color): $E = h \nu $ (h being the Planck constant) Assuming the photon does not lose energy when entering the material, its frequency must be conserved. However, as light is an electromagnetic wave, its propagation in a material differs fundamentally between the vacuum and a crystalline material. In the material, the electric field of the photon will be harder to 'produce', since the crystal's electrons will react to it. The higher the frequency, the stronger this effect is - the light gets increasingly slower. The refractive index for photons with a certain wavelength is $n(\nu) = \frac{c_0}{c_m(\nu)}$ with $c_0$ being the speed of light in vacuum (equal for all wavelengths as far as we know) and $c_m(\nu)$ the speed of a photon of a certain frequency $\nu$ in the material. Using $c = \lambda \nu$ (and assuming constant frequency) you will arrive at the expression you were using.
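To make the frequency-is-conserved argument concrete, here is a small numerical sketch. The two refractive indices are illustrative assumed values, roughly typical of a crown glass; the point is only that the frequency stays fixed while the speed and the in-medium wavelength are divided by $n$, and by a slightly different $n$ for each colour.

```python
from scipy.constants import c

# Illustrative (assumed) indices for two vacuum wavelengths, roughly crown-glass-like.
cases = {486e-9: 1.52, 656e-9: 1.51}     # vacuum wavelength (m) -> n

for lam_vac, n in cases.items():
    nu = c / lam_vac                     # frequency: unchanged on entering the glass
    v_medium = c / n                     # phase speed in the glass
    lam_medium = v_medium / nu           # equals lam_vac / n
    print(f"lambda_vac = {lam_vac*1e9:.0f} nm:  n = {n:.2f},  "
          f"v = {v_medium:.3e} m/s,  lambda_medium = {lam_medium*1e9:.1f} nm")
```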
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Can a magnet damage a compass? I've heard the claim before that a magnet can ruin a compass, and was about to repeat it to my son when I realized it sounds like complete nonsense. Googling turned up such unsubstantiated and illogical answers as this one and unanswered questions as this one but nothing that sounded reasonable to me and gave a convincing explanation. Perhaps my Google bubble is at work. Anyway, since SE is generally very reliable, I thought this was the right place to ask, before I pass on untested nonsense to my son. Help me break the chain of untested pseudoscience via oral tradition: does a magnet actually do permanent damage to a compass, or just temporarily prevent it from detecting magnetic north? If it actually does do this, please explain how that is so.
Yes, a magnet can damage a compass. The compass needle is a ferromagnetic material. The degree to which a ferromagnetic material can "withstand an external magnetic field without becoming demagnetized" is referred to as its coercivity. Another magnet near the compass needle imposes a magnetic field upon the needle. Whether or not the magnetic properties of the compass needle are permanently damaged is then a matter of the strength of the imposed field versus the coercivity of the needle material.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/196996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Dirac delta function definition in scattering theory I'm studying scattering theory from Sakurai's book. In the first pages he gets to the following expression: $$\langle n|U_I(t, t_0)|i\rangle=\delta_{ni}-\frac{i}{\hbar}\langle n|V|i\rangle\int_{t_0}^t e^{i\omega_{ni}t'} dt',\tag{1.9}$$ where $U$ is the propagator in Dirac's interaction picture and $V$ is a potential operator. So given that scattered states are only defined asymptotically we want to send $t \to \infty$ and $t_0 \to -\infty$, so I would say that the integral becomes immediately a Dirac's delta because that's just its integral representation! But he says: let's define a $T$ matrix such that: $$\langle n|U_I(t, t_0)|i\rangle=\delta_{ni}-\frac{i}{\hbar}T_{ni}\int_{t_0}^t e^{i\omega_{ni}t'+\varepsilon t'} dt'.\tag{1.10}$$ And then keeps going. I don't get this! Why do we need this small parameter $\varepsilon$? He then says that it's going to be sent to zero and that it makes sure the integral does not diverge. I don't quite get this prescription. Can anyone help me understand this strategy?
If you go to this limit right away and get delta functions, you might later face problems such as the need to evaluate meaningless expressions, e.g. $\delta(x)^2$ or $\delta(0)$. This can often be avoided by adding so-called regulators (in this case that role is played by $\epsilon$). These should be removed only after all manipulations. If you get final results for physical quantities which are regular in the limit $\epsilon \to 0$, you are happy.
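One way to see what the regulator buys you: for $t_0\to-\infty$ the regulated integral produces a factor $1/(\varepsilon + i\omega_{ni})$, whose real part $\varepsilon/(\omega^2+\varepsilon^2)$ is a perfectly finite function for every $\varepsilon>0$ and only becomes $\pi\delta(\omega)$ in the limit. A short numerical sketch (the grid and the $\varepsilon$ values are arbitrary choices for illustration):

```python
import numpy as np

omega = np.linspace(-50, 50, 200001)

for eps in (1.0, 0.1, 0.01):
    lorentz = eps / (omega**2 + eps**2)        # Re of 1/(eps + i*omega)
    area = np.trapz(lorentz, omega) / np.pi    # stays ~1 for every eps
    print(f"eps = {eps:5.2f}:  area/pi = {area:.4f},  peak height = {lorentz.max():.1f}")
# The area is always ~1 while the peak narrows and grows: a regulated delta function.
```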
{ "language": "en", "url": "https://physics.stackexchange.com/questions/197346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Are Hubble Telescope Images in true color? Like many others, I have marveled at the images made available from the Hubble Space Telescope over the years. But, I have always had a curiosity about the color shown in these images. An example is shown below. Are the colors we see, such as the yellows, blues, and so on, the true colors, or are they applied by some kind of colorization method to enhance the image quality for realism?
Sort of. As Space.com writes, The raw Hubble images, as beamed down from the telescope itself, are black and white. But each image is captured using three different filters: red, green and blue. The Hubble imaging team combines those three images into one, in a Technicolor process pioneered in the 1930s. (The same process occurs in digital SLRs, except that in your camera, it's automatic.) Why are the original images in black and white? Because if Hubble's eye saw in color, the light detector would have to have red, green and blue elements crammed into the same area, taking away crucial resolving capability. Without those different elements, Hubble can capture images with much more detail. As an interesting aside, the Wide Field Camera 3 sees in wavelengths other than visible light, as do the Cosmic Origins Spectrograph and the Space Telescope Imaging Spectrograph. NASA goes into a little detail about the process here, as well as some of the rationale behind choosing some colors. Some of the reasons for using artificial colors include showcasing elements whose emission lines are out of the visible spectrum, and showing features that are too dim at visible wavelengths. Remember, CCD detectors usually don't see the same things that humans do, and Hubble can see outside the visible spectrum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/197487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 1 }
How to predict bound states in a 1D triangular well? Assume we have a (single) particle in a potential well of the following shape: For $x \leq 0$, $V = \infty$ (Region I) For $x \geq L$, $V = 0$ (Region III) For $0 < x < L$, $V = -V_0\frac{L-x}{L}$ (Region II). The potential geometry is reminiscent of the potential energy function of a diatomic molecule (with $x$ the intra-nuclear distance). In Region II the potential energy is a field with (positive) gradient $\frac{V_0}{L}$. A few observations: In Region II, $V(x)$ is non-symmetric, so we can expect eigenfunctions without definite parity. In Region II we can expect $\psi(0) = 0$. We can also expect $\psi(\infty) = 0$, so the wave functions should be normalisable. A quick analytic look at the Schrödinger equation in Region II using Wolfram Alpha’s DSolve facility shows the solutions involve the Airy functions $A_i$ and $B_i$. For $\frac{V_0}{L} = 0$, the problem is reduced to an infinite potential wall (not a well). Incoming particles from Region III would simply be reflected by the wall at $x = 0, V = \infty$. There would be no bound states. And this raises an interesting question: for which value of $\frac{V_0}{L}$ is there at least one bound state, and approximately at which value of the Hamiltonian $E$? I have a feeling this can be related to the Uncertainty Principle, because aren’t the confinement energies of bound particles in 1D wells inversely proportional to $L^2$? If so, would calculating a $\sigma_x$ not allow calculating a $\langle p^2 \rangle$ and thus a minimum $E$ for a bound state?
The wavefunction $\psi(x)$ satisfies $$ -\frac{\hbar^2}{2m}\psi'' + V_0\left(\frac{x}{L} - 1\right) \psi = E\psi, \quad 0 \leq x \leq L\\ -\frac{\hbar^2}{2m}\psi'' = E\psi, \quad x > L $$ Since the bound states have $E < 0$ let's introduce $$ k = \frac{\sqrt{-2mE}}{\hbar}\\ \varkappa = \frac{\sqrt{2mV_0}}{\hbar} $$ Then $$ \psi'' - \varkappa^2\frac{x}{L}\psi = (k^2 - \varkappa^2) \psi, \quad 0 \leq x \leq L\\ \psi'' = k^2 \psi, \quad x > L $$ Introducing a new dimensionless coordinate $\xi$ by $$ x = \sqrt[3]{\frac{L}{\varkappa^2}} \xi + L - L \frac{k^2}{\varkappa^2}\\ x = L + \frac{\xi}{\gamma} -\frac{k^2}{\gamma^3}, \quad \gamma \equiv \sqrt[3]{\frac{\varkappa^2}{L}} $$ the equation can be reduced to the Airy equation $$ \psi''(\xi) - \xi \psi(\xi) = 0\\ \psi(\xi) = \cos \alpha \operatorname{Ai}(\xi) + \sin \alpha \operatorname{Bi}(\xi)\\ $$ Since we're solving the Airy equation in a limited domain $x \in [0, L]$ we cannot throw away the $\operatorname{Bi}(\xi)$ part. At $x = L$ the solution should satisfy $$ \psi'(L) = -k \psi(L) $$ since $$ \psi(x) = C_3 e^{-kx}, \quad x > L. $$ We have the following conditions to determine $k$: $$ \text{For } \xi_1 = \frac{k^2}{\gamma^2} - L\gamma\implies \cos \alpha \operatorname{Ai}(\xi_1) + \sin \alpha \operatorname{Bi}(\xi_1) = 0\\ \text{For } \xi_2 = \frac{k^2}{\gamma^2} \implies \frac{\cos \alpha \operatorname{Ai}'(\xi_2) + \sin \alpha \operatorname{Bi}'(\xi_2)}{\cos \alpha \operatorname{Ai}(\xi_2) + \sin \alpha \operatorname{Bi}(\xi_2)} = -\frac{k}{\gamma}. $$ Eliminating $\alpha$ one gets $$ \frac{\operatorname{Bi}(\xi_1) \operatorname{Ai}'(\xi_2) - \operatorname{Ai}(\xi_1) \operatorname{Bi}'(\xi_2)}{\operatorname{Bi}(\xi_1) \operatorname{Ai}(\xi_2) - \operatorname{Ai}(\xi_1) \operatorname{Bi}(\xi_2)} = -\frac{k}{\gamma}. $$ To simplify further let's introduce dimensionless $z = \frac{k}{\gamma}$ and parameter $q = \gamma L = \sqrt[3]{L^2 \varkappa^2} = \sqrt[3]{\frac{2mV_0L^2 }{\hbar^2}}$. Thus we need to study the following equation for $z \geq 0$: $$ \frac{\operatorname{Bi}(\xi_1) \operatorname{Ai}'(\xi_2) - \operatorname{Ai}(\xi_1) \operatorname{Bi}'(\xi_2)}{\operatorname{Bi}(\xi_1) \operatorname{Ai}(\xi_2) - \operatorname{Ai}(\xi_1) \operatorname{Bi}(\xi_2)} = -z, \quad \xi_1 = z^2 - q, \;\xi_2 = z^2. $$ Examining the plot of this function one can see that for $q \leq q_\text{cr}$ there are no solutions and when $q > q_\text{cr}$ there are. While $z = 0$ is not a solution ($\psi(+\infty) \neq 0$), it is useful to determine $q_\text{cr}$. That would be the least solution to the following system (with $z = 0$ plugged in): $$ \operatorname{Bi}(-q_\text{cr}) \operatorname{Ai}'(0) - \operatorname{Ai}(-q_\text{cr}) \operatorname{Bi}'(0) = 0\\ \frac{\operatorname{Ai}(-q_\text{cr})}{\operatorname{Bi}(-q_\text{cr})} = \frac{\operatorname{Ai}'(0)}{\operatorname{Bi}'(0)}\\ q_\text{cr} \approx 1.9863527074304728\\ V_0 = \frac{\hbar^2}{2mL^2} q_\text{cr}^3 \approx 7.837347 \frac{\hbar^2}{2mL^2}. $$
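The quoted critical value is easy to reproduce numerically from the condition $\operatorname{Bi}(-q_\text{cr})\operatorname{Ai}'(0) - \operatorname{Ai}(-q_\text{cr})\operatorname{Bi}'(0) = 0$ derived above; here is a short sketch using SciPy's Airy functions:

```python
import numpy as np
from scipy.special import airy
from scipy.optimize import brentq

Ai0, Aip0, Bi0, Bip0 = airy(0.0)

def f(q):
    # Bi(-q) Ai'(0) - Ai(-q) Bi'(0); its first positive zero is q_cr
    Ai, Aip, Bi, Bip = airy(-q)
    return Bi * Aip0 - Ai * Bip0

q_cr = brentq(f, 1.0, 2.0)       # bracket chosen so f changes sign exactly once
print("q_cr    =", q_cr)         # ~1.98635...
print("q_cr**3 =", q_cr**3)      # ~7.8373, so V0_min ~ 7.84 * hbar^2 / (2 m L^2)
```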
{ "language": "en", "url": "https://physics.stackexchange.com/questions/197676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do I know what variable to use for the chain rule? In my textbook the tangential acceleration is given like this: $$a_t=\frac{dv}{dt}=r\frac{d\omega}{dt}$$ $$a_t=r\alpha$$ I understand that the chain rule is applied here like this: $$a_t=\frac{dv}{dt}=\frac{dv}{d\omega}\frac{d\omega}{dt}=r\alpha$$ What I don't understand is why we have to apply the rule in this specific way. Say I write it like this: $$a_t=\frac{dv}{d\theta}\frac{d\theta}{dt}$$ This way, I end up with an entirely different result. How do I know how the chain rule must be applied?
What I don't understand is why we have to apply the rule in this specific way? How do I know how the chain rule must be applied? We don't have to. You don't know. Somebody just found out that by using that specific method, the result ended up neat and simple. Nothing is wrong with another method. You get the same thing in another expression. Let's try to interpret the terms in your result: $$a_t=\frac{dv}{d\theta}\frac{d\theta}{dt}$$ $\frac{d\theta}{dt}$ clearly equals $\omega$. $\frac{dv}{d\theta}$ is a bit tougher - something like instantaneous speed change per angle change. If you have a way to measure this, then the formula: $$a_t=\frac{dv}{d\theta}\omega$$ is just as usable. Just not as neat. I mean, it is kinda smart that $a_t=r\alpha$ has the same shape as $v=r\omega$ and $s=r\theta$. Makes the overview much better, when we can end with a similar and simple result. Note, if you already have the expression $v=r\omega$ at hand, then you don't even need the chain rule to reach the simple expression: $$a_t=\frac{dv}{dt}=\frac{d(r\omega)}{dt}=r\frac{d\omega}{dt}=r\alpha$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/197783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Difference between $|d{\bf r}|$ and $d|{\bf r}|$ What is the difference between $|d{\bf r}|$ and $d|{\bf r}|$ and why are both of them not always equal to each other? My question might seem stupid to some and will probably get downvoted but I have thought on the question but still can't comprehend any difference between the two. I was reading Irodov's Mechanics as an extra reading, when I came upon this! The book has given an example at the footnote but I still can't understand. :/
If $$ \overrightarrow{r}=r_{x}\widehat{i}+r_{y}\widehat{j} $$ then $$ \left | \overrightarrow{r} \right |=\sqrt{r_{x}^{2}+r_{y}^{2}} $$ and $$ d\left | \overrightarrow{r} \right |=\frac{r_{x}dr_{x}+r_{y}dr_{y}}{\sqrt{r_{x}^{2}+r_{y}^{2}}} $$ on the other hand $$ d\overrightarrow{r}=dr_{x}\widehat{i}+dr_{y}\widehat{j} $$ and $$ \left | d\overrightarrow{r} \right |=\sqrt{dr_{x}^{2}+dr_{y}^{2}} $$
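A quick numerical illustration of the difference (the circular trajectory below is just an assumed example): for a point moving on a circle of constant radius, $d\left | \overrightarrow{r} \right |$ vanishes because the magnitude never changes, while $\left | d\overrightarrow{r} \right |$ adds up to the arc length travelled.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 1001)
x, y = np.cos(t), np.sin(t)                 # assumed trajectory: unit circle

r_mag = np.hypot(x, y)                      # |r(t)|, identically 1 here
d_rmag = np.diff(r_mag)                     # d|r| between successive samples
mag_dr = np.hypot(np.diff(x), np.diff(y))   # |dr| between successive samples

print("total of d|r| :", d_rmag.sum())      # ~0: the radius never changes
print("total of |dr| :", mag_dr.sum())      # ~2*pi: the arc length travelled
```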
{ "language": "en", "url": "https://physics.stackexchange.com/questions/197989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Stability and Laplace's equation Consider four positive charges of magnitude $q$ at four corners of a square and another charge $Q$ placed at the origin. What can we say about the stability at this point? My attempt goes like this. I considered 4 charges placed at $(1,0)$, $(0,1)$, $(-1,0)$, $(0,-1)$ and computed the potential and its derivatives. When using the partial derivative test, the result was that the origin is a stable equilibrium position. $$V(x,y)=k[\frac{1}{\sqrt{(x-1)^2 + y^2}}+\frac{1}{\sqrt{(x+1)^2 + y^2}}+\frac{1}{\sqrt{x^2 + (y-1)^2}}+\frac{1}{\sqrt{x^2 + (y+1)^2}}] $$ $$\partial_x V= -k[\frac{x-1}{((x-1)^2 + y^2)^\frac{3}{2}}+ \frac{x+1}{((x+1)^2 + y^2)^\frac{3}{2}} + \frac{x}{(x^2 + (y-1)^2)^\frac{3}{2}} + \frac{x}{(x^2 + (y+1)^2)^\frac{3}{2}}] $$ $$\partial_{xx} V= k[\frac{2(x-1)^2 -y^2}{((x-1)^2 + y^2)^\frac{5}{2}} + \frac{2(x+1)^2 -y^2}{((x+1)^2 + y^2)^\frac{5}{2}} + \frac{2x^2 -(y-1)^2}{(x^2 + (y-1)^2)^\frac{5}{2}} +\frac{2x^2 -(y+1)^2}{(x^2 + (y+1)^2)^\frac{5}{2}}] $$ $$\partial_{yx} V= 3k[\frac{(x-1)y}{((x-1)^2 + y^2)^\frac{5}{2}} + \frac{(x+1)y}{((x+1)^2 + y^2)^\frac{5}{2}} + \frac{x(y-1)}{(x^2 + (y-1)^2)^\frac{5}{2}} +\frac{x(y+1)}{(x^2 + (y+1)^2)^\frac{5}{2}}] $$ $\partial_{yy}$ is the same as $\partial_{xx}$ with $x$ and $y$ exchanged, by symmetry. At the origin (the equilibrium point), $\partial_{xx}$ and $\partial_{yy}$ are positive while $\partial_{yx}=\partial_{xy}=0$. Hence by the partial derivative test for stability I have a stable equilibrium, a local minimum of the potential. Now my confusion starts. According to what I learnt, Laplace's equation $\Delta V=0$ holds for the potential in a charge-free region (I take the region without charges that contains the origin), and such a potential can never have a local minimum or maximum within the boundary. This contradicts the above conclusion that we have a minimum of the potential. Please help me see the cause of this contradiction.
The Coulomb Potential is a solution to Laplace's equation in 3 dimensions. In 2 dimensions the equivalent solution is a logarithmic potential. You have written down the Coulomb potential for 4 charges but then treat the problem as 2 dimensional, which is causing your problems. To resolve this you need to add a load of $z^2$s to your potential. If you consider placing a fifth charge at the origin it will be repelled by each of the 4 original charges, so it is not surprising that the forces acting on it in the xy plane push it back to the centre. It is, however, clearly unstable in the z direction as it is repelled by the entire existing arrangement of charges.
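This is easy to verify symbolically: write the full three-dimensional potential of the four charges (with the Coulomb constant set to 1 for brevity) and evaluate the second derivatives at the origin. The in-plane curvatures are positive, the out-of-plane one is negative, and they sum to zero, as Laplace's equation requires. A sketch using SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# Four unit charges at (+-1, 0, 0) and (0, +-1, 0); k = 1 for simplicity.
corners = [(1, 0), (-1, 0), (0, 1), (0, -1)]
V = sum(1 / sp.sqrt((x - a)**2 + (y - b)**2 + z**2) for a, b in corners)

origin = {x: 0, y: 0, z: 0}
Vxx = sp.diff(V, x, 2).subs(origin)
Vyy = sp.diff(V, y, 2).subs(origin)
Vzz = sp.diff(V, z, 2).subs(origin)

print("V_xx =", Vxx, " V_yy =", Vyy, " V_zz =", Vzz)          # 2, 2, -4
print("Laplacian at origin:", sp.simplify(Vxx + Vyy + Vzz))   # 0
```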
{ "language": "en", "url": "https://physics.stackexchange.com/questions/198094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why aren't calculation results in error propagation at the center of the range? We have two copper rods, with $L_1$ and $L_2$ as their lengths respectively, and we want to glue the two bars together, with glue that's infinitesimally thin. $$\begin{align} L_1 &= 20 ± 0.2\ \mathrm{cm} \\ L_2 &= 30 ± 0.5\ \mathrm{cm} \end{align}$$ To calculate the length of the composite bar, $L$, as well as its uncertainty, we can do the following (which I admit is a rather crude method, but is done for completeness): $$\begin{align} L_\text{MAX} &= 20.2 + 30.5 = 50.7\ \mathrm{cm} \\ L_\text{MIN} &= 19.8 + 29.5 = 49.3\ \mathrm{cm} \end{align}$$ Therefore, $L = 50 ± 0.7\ \mathrm{cm}$. This, although a long method, is correct, as the length $L$ is just the sum of the values of $L_1$ and $L_2$ with an uncertainty of the range of possible values divided by two. Now, if we want to calculate the area with the following length and width, as well as the uncertainty, we could use a method similar to the one described above: $$\begin{align} W &= 20 ± 0.2\ \mathrm{cm} \\ L &= 10 ± 0.2\ \mathrm{cm} \\ A_\text{MAX} &= (20.2\ \mathrm{cm})(10.2\ \mathrm{cm}) = 206.04\ \mathrm{cm}^2 \\ A_\text{MIN} &= (19.8\ \mathrm{cm})(9.8\ \mathrm{cm}) = 194.04\ \mathrm{cm}^2 \end{align}$$ In this case, the answer without the uncertainty, $10\ \mathrm{cm} \times 20\ \mathrm{cm} = 200\ \mathrm{cm}^2$, is not the center of our range of values. Although 200 isn't the smack center, there does exist a center, which in this case is 200.04. The actual uncertainty of the area is 6, which is in fact half of the range of the maximum and minimum, giving us a final answer of $200 ± 6\ \mathrm{cm}^2$. The way we have defined the propagation of uncertainties in physics is such that the answer is not necessarily the smack center of the minimum and maximum value range, but is instead the product of the two measurements, the two lengths in this case. This approach to the first problem made a lot of intuitive sense; however, I cannot understand why the final answer is 200 (which is not the center of the range) ± 6, and why this answer gives a different range of values than the range calculated using the long, crude method. I am a high school student who has not covered calculus yet, which is what prevented me from understanding the proof of adding fractional or percentage uncertainties when we multiply or divide quantities. Any help will be greatly appreciated, thanks in advance.
When proving the addition of fractional uncertainties, one neglects the product of the uncertainties. In your case the product of fractional uncertainties is (0.01)(0.02) = 0.0002, which is considerably less than their sum, 0.01 + 0.02 = 0.03. This is the discrepancy you see in the multiplication example. Nevertheless, if you keep only significant figures in your example, then 200.04 ± 6 becomes 200 ± 6 and the answers are the same.
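A quick numeric sketch with the question's own numbers makes the dropped term visible: the centre of the min-max range is shifted from the nominal area by exactly $\Delta W\,\Delta L = 0.04\ \mathrm{cm}^2$, while the half-range and the fractional-uncertainty rule both give $6\ \mathrm{cm}^2$.

```python
W, dW = 20.0, 0.2     # cm
L, dL = 10.0, 0.2     # cm

A_nom = W * L                                # 200
A_max = (W + dW) * (L + dL)                  # 206.04
A_min = (W - dW) * (L - dL)                  # 194.04

center     = 0.5 * (A_max + A_min)           # 200.04 -- shifted by dW*dL
half_range = 0.5 * (A_max - A_min)           # 6.0
frac_rule  = (dW / W + dL / L) * A_nom       # 6.0 -- the usual propagation rule

print(center - A_nom)                        # 0.04 = dW*dL, the neglected term
print(half_range, frac_rule)
```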
{ "language": "en", "url": "https://physics.stackexchange.com/questions/198175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is heat conduction impeded at interfaces between dissimilar materials? Sound in air essentially echoes off concrete walls, rather than penetrating them, because of the difference in the material properties of air and concrete. By analogy, are there pairs of solid materials where their interface would be very inefficient at propagating heat? Perhaps one material has heavy atoms and soft bonds and the other has light atoms and stiff bonds, and neither has free electrons. If this phenomenon exists could it be used to create super-insulators, by laminating together large numbers of very thin layers of the two materials?
It turns out that there can be resistance to heat flow at an interface between two different materials, even if there are no gaps. It is discussed on Wikipedia, under the heading "Interfacial thermal resistance". It is associated with mismatches in the frequencies of the thermal vibrations (quantized as phonons) associated with the different materials.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
thin film interference of light In thin film interference (reflective system) I know that the condition for maxima is $$2\mu t\cos(r)=(2n\pm 1)\frac{\lambda}{2}$$ and for minima is $$2\mu t\cos(r)=n\lambda$$ and for the transmissive system it's just the opposite. But what happens if the film is very thin, i.e. in the limit $t \to 0$? My teacher told me that the condition for minima is satisfied, because then $\delta x = \lambda /2$, and hence the film appears dark. How is this possible? And similarly, what happens if the film is too thick? I am guessing interference doesn't happen then, but what would be the explanation for it?
If the film is thinner than half a wavelength then there is no thin-film interference (until you get into complex surface-plasmon effects). Thin-film interference also works with thick films, so long as their thickness is a multiple of 1 or 1/2 of a wavelength. The main practical difficulty is making thick films that have a uniform refractive index and are flat on top. You can demonstrate interference fringes in air over relatively large distances: https://en.wikipedia.org/wiki/Newton%27s_rings
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Eddy Currents – Tubes with slits When a magnet falls down a tube, eddy currents form and flow around the tube, perpendicular to the direction in which the magnet falls. However, when there is a vertical slit in the tube, are either no eddy currents formed (since they cannot complete a rotation), or alternatively do much smaller eddy currents form, as suggested by the following figure? Further to this, a recent HSC (Australian Physics Examination) tested this. The official answer to this question is C. If it is true that small eddy currents are formed, why then is this the case?
Yes, there should be smaller eddy currents formed when the tube has a slit in it. Those same eddy currents will also be formed in the tube without a slit, on top of the current that goes all the way around the circle. The important thing is that the primary current gets cut off by the slit, reducing the amount of energy dissipated compared to the slitless tube. So, the answer I would expect to be correct would be that the falling order would be: plastic, slitted copper, and slitless copper. That the answer comes out as the slitted copper and plastic ring hitting simultaneously isn't too surprising to me because the eddy currents in the slitted ring are quite small, and don't dissipate enough energy to slow the object down significantly. It's kind of like dropping an orange and a bowling ball at the same time - the bowling ball experiences more air resistance, but it should also have a higher terminal velocity because it has more mass. It would be interesting to see this experiment done with a very long drop viewed by both high speed cameras and thermal cameras.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is a bomb's shockwave strong enough to kill? I'm watching a movie, The Hurt Locker, and the first scene shows an IED explosion which kills a soldier. Of course movies don't depict explosions with maximum realism, but I noticed the debris and smoke / flame didn't reach him, and it made me curious about whether invisible aspects of an explosion - heat or concussive blast can be lethal (without carrying shrapnel). How strong are the unseen forces from an explosion such as a road side bomb? Strong enough to be lethal?
The other answers already mention pressure and heat. A bomb sets nearby bodies in motion with a speed depending on the strength of the explosion, the distance to the body, and how much surface area of the body was facing the bomb. While - as explained in the other answers - being set in motion is rarely lethal, being smashed against a wall can easily lead to lethal internal bleeding. It gets even worse if there are pointy objects between the body and the wall. Explosions indoors are much more unpleasant than explosions outdoors. Additionally, there's the chance of objects/debris in the surroundings acting as a random substitute for building shrapnel into the bomb.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 4, "answer_id": 3 }
Thermodynamics about turbines 1. A turbine is rated at 650 hp when the flow of water through it is 0.85 m³/s. Assuming an efficiency of 84%, what is the head acting on the turbine?
This is a nice question on $\color{blue}{\text{Hydraulic Turbines}}\ \color{red}{\text{(FLUID MECHANICS)}}$ Given $$\text{rated power}, P_{\text{rated}}=650 \ \text{hp}=650\times 746=484900 \ W$$ $$\text{discharge}, Q=0.85\ m^3/s$$ $$\text{efficiency}, \eta=84\ \%$$ Let $H$ be the head under which the turbine is working. The rated (shaft) power is the turbine's output, while the hydraulic power $\rho g Q H$ carried by the water is its input, so the efficiency of the turbine is $$\eta=\frac{\text{output (shaft) power}}{\text{hydraulic power of the water}}=\frac{P_{\text{rated}}}{\rho g Q H}$$ where $\rho=\text{density of water}=1000\ kg/m^3$ and $g=9.81\ m/s^2$. Solving for the head, $$H=\frac{P_{\text{rated}}}{\eta\,\rho gQ}$$ Now, setting in the corresponding values, we get $$H=\frac{484900}{0.84\times 1000\times 9.81\times 0.85}$$ $$H\approx 69.23\ m$$
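A one-line numerical check of the rearranged formula (assuming, as the solution does, 1 hp = 746 W):

```python
# Quick arithmetic check of H = P_rated / (eta * rho * g * Q).
P_rated = 650 * 746          # W
Q   = 0.85                   # m^3/s
eta = 0.84
rho, g = 1000.0, 9.81        # kg/m^3, m/s^2

H = P_rated / (eta * rho * g * Q)
print(f"head H = {H:.2f} m")   # ~69.2 m
```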
{ "language": "en", "url": "https://physics.stackexchange.com/questions/199936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does uncertainty/error propagate with differentiation? I have a noisy temperature (T) vs. time (t) measurement and I want to calculate dT/dt. If I approximate $dT/dt = \Delta T/\Delta t$ then the noise in the derivative gets too high and the derivative becomes useless. So I fit a smoothing spline (smoothing parameter say 'p') to the measured data and get $dT/dt$ by piecewise differentiation of the spline. Is there a way to obtain uncertainty in this $dT/dt$ based on uncertainty in T?
Model two consecutive measurements as the real values plus some noise. Call the first measured temperature $T_1$ and the second $T_2$. Call the measured noises $\gamma_1$ and $\gamma_2$, and suppose that they are drawn from a distribution $\Gamma(\gamma)$ and are uncorrelated. The (approximation to the) derivative is $$\text{Derivative} \approx \frac{(T_2 + \gamma_2) - (T_1 + \gamma_1)}{\Delta t} \, .$$ Note that the derivative is itself a random variable because the $\gamma$'s are random variables. What is the probability distribution of this new random variable? Focus first on the numerator. Here we have a deterministic part $T_2 - T_1$ and a stochastic part $\gamma_2 - \gamma_1$. The tricky thing you may not know is how to figure out the probability distribution of the sum or difference of two random variables; in fact the answer is not at all trivial. Given two random variables $x$ and $y$ with distributions $X(x)$ and $Y(y)$, the random variable $z$ defined by $z = x + y$ has distribution $$ Z(z) = (X \otimes Y)(z) \equiv \int_{-\infty}^\infty X(w) Y(z - w) \, dw \, .$$ This integral is called a convolution. Anyway, the point is that the probability distribution $P_{\gamma_2 - \gamma_1}$ of $\gamma_2 - \gamma_1$ is the convolution of the distributions of $\gamma_2$ and $-\gamma_1$, which is $$P_{\gamma_2 - \gamma_1}(\gamma) = \int_{-\infty}^\infty \underbrace{\Gamma(-\gamma')}_{\text{from }-\gamma_1} \underbrace{\Gamma(\gamma - \gamma')}_{\text{from }\gamma_2} \, d \gamma' \, .$$ As an example, suppose the noise is Gaussian distributed with standard deviation $\sigma$, $$\Gamma(\gamma) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(\frac{-\gamma^2}{2 \sigma^2} \right) \, .$$ In this case we can do the integral, and the result is $$P_{\gamma_2 - \gamma_1}(\gamma) = \frac{1}{\sqrt{2\pi} (\sqrt{2} \sigma)} \exp \left( \frac{-\gamma^2}{2 (\sqrt{2}\sigma)^2}\right) \, ,$$ which is just a Gaussian with standard deviation $\sqrt{2} \sigma$. Now remember we also divide by $\Delta t$, and doing this too modifies the distribution. The result is that the probability distribution is still a Gaussian where the standard deviation turns out to be $\sqrt{2}\sigma / \Delta t$. So that's your answer: the error in the derivative is completely described by a Gaussian probability distribution with standard deviation $\sqrt{2} \sigma / \Delta t$.
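The $\sqrt{2}\sigma/\Delta t$ result is easy to confirm with a short Monte Carlo sketch ($\sigma$, $\Delta t$ and the baseline temperature below are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, dt = 0.05, 1.0        # assumed measurement noise (K) and sample spacing (s)
n = 200_000

T1 = 20.0 + rng.normal(0.0, sigma, n)    # two noisy readings of a constant temperature
T2 = 20.0 + rng.normal(0.0, sigma, n)
deriv = (T2 - T1) / dt                   # finite-difference "derivative", true value 0

print("empirical std :", deriv.std())
print("predicted std :", np.sqrt(2) * sigma / dt)
```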
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
Is there experimental verification of the s, p, d, f orbital shapes? Have there been any experiments performed (or proposed) to prove that the shapes of the s,p,d,f orbitals correspond to our spatial reality as opposed to just being a figment of the mathematics that give us something to visualize?
A few years ago the XUV physics group at the AMOLF Institute in Amsterdam were (to my knowledge the first to be) able to directly image the orbitals of excited hydrogen atoms using photoionization microscopy. For more details see the paper, Hydrogen Atoms under Magnification: Direct Observation of the Nodal Structure of Stark States. A.S. Stodolna et al. Phys. Rev. Lett. 110 213001 (2013). This was actually featured as one of Physics World Top 10 Breakthroughs of the year 2013. There is a nice open access Viewpoint on this if you want to read more Viewpoint: A New Look at the Hydrogen Wave Function, C.T.L. Smeenk, Physics 6, 58 (2013) For a more in-depth look, see Taking snapshots of atomic wave functions with a photoionization microscope. A.S. Stodolna. PhD thesis, Radboud Universiteit Nijmegen, 2014.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 2, "answer_id": 0 }
Which power equation to use: $P = I^2 * R$ or $P = V^2 / R$? Given are ideal max voltage $V = 200\;\mathrm{V}$ and max current $I = 5\;\mathrm{A}$. Therefore: * *ideal resistance is $$R = \frac VI = \frac{200 \;\mathrm{V}}{5\;\mathrm{A}} = 40 \;\mathrm{\Omega}$$ *ideal max power is $$P=IV = 5 \;\mathrm{A}* 200\;\mathrm{V} = 1000\;\mathrm{W}$$ *1st power equation: $$P = I^2 * R$$ *2nd power equation: $$P = \frac{V^2}R$$ Say the real resistance is $$R = 20 \;\mathrm{\Omega}.$$ I presume I am to use the first equation since the other one gives a power above the max power and can't be true. $$P = I^2 * R = 25 * 20 \;\mathrm{W}= 500\;\mathrm{W}$$ or $$P = \frac{V^2}R = \frac{40000}{20} \;\mathrm{W}= 2000\;\mathrm{W}$$ What if the real resistance was greater than the ideal, e.g. $R = 60\;\mathrm{\Omega}$. Then I presume I would use the second equation since the first one is above the max power. $$P = I^2 * R = 5^2 * 60 \;\mathrm{W}= 25 * 60 \;\mathrm{W}= 1500\;\mathrm{W}\\ P = \frac{V^2}R = \frac{40000}{60} \;\mathrm{W} = 666\;\mathrm{W}$$ I think I have found out which equation to use, however I would like to know why this is the case.
You have changed the resistance from $40\Omega$ to $20\Omega$ and $60\Omega$ but did not change anything else. You must always allow for $$V=I*R$$ If the resistance halves but the voltage stays the same, then the current doubles, and hence your power quadruples. With $20\Omega$ the current is: $$I=V/R=200/20=10A$$ Power then becomes: $$P=I^2R=10^2*20=2kW$$ $$P=V^2/R=200^2/20=40000/20=2kW$$ The same applies when you change the resistance to $60\Omega$: $$I=200/60=3.33A$$ $$P=3.33^2*60=666.6W$$ $$P=200^2/60=666.6W$$
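The same point in a few lines of arithmetic (a sketch that simply holds the supply voltage fixed at 200 V and lets Ohm's law fix the current for each resistance): once the current is consistent with the voltage and resistance, both power expressions agree for every $R$.

```python
V = 200.0   # supply voltage held fixed (V)

for R in (20.0, 40.0, 60.0):     # ohms
    I = V / R                    # Ohm's law fixes the current for each R
    print(f"R = {R:4.0f} ohm:  I = {I:5.2f} A,  "
          f"I^2*R = {I**2 * R:7.1f} W,  V^2/R = {V**2 / R:7.1f} W")
```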
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Electric field a distance z above the midpoint of a straight line segment In Griffiths there's an example evaluating the electric field a distance z above the midpoint of a straight line segment of length 2L, which carries a uniform line charge $\lambda$. In that calculation, the author paired charge elements dq placed symmetrically on the left and on the right side, and then integrated from 0 to L (in the 3rd edition). Why didn't he just pick a single charge element dq on one side and integrate over the whole segment (i.e., over a length 2L)? $$d\mathbf E=2\frac1{4\pi\epsilon_0}\left(\frac{\lambda\:dx}{\mathcal r^2}\right)\cos\theta\:\hat{\mathbf z}$$
He is making use of a well-chosen coordinate system to create a symmetric system. That greatly simplifies the concept and makes the integral easy. I bet that somewhere, he doubles the result of the integral. He has also made an argument that the $x$-components will add to zero (again, using symmetry). Choosing a coordinate system to create symmetry is a valuable skill. You should study this solution carefully and learn from Griffiths's technique.
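You can check the symmetry argument numerically; in the sketch below, the constants $k$ and $\lambda$ are dropped, and the half-length $L$ and height $z$ are assumed values. The $z$-component integrand over the full segment equals twice the integral over half of it, while the $x$-component integrand integrates to zero.

```python
from scipy.integrate import quad

L, z = 1.0, 0.5     # assumed segment half-length and field-point height

fz = lambda x: z / (x**2 + z**2) ** 1.5    # integrand for the z-component
fx = lambda x: x / (x**2 + z**2) ** 1.5    # integrand for the x-component

full_z, _ = quad(fz, -L, L)
half_z, _ = quad(fz, 0.0, L)
full_x, _ = quad(fx, -L, L)

print("z-component, full segment :", full_z)
print("2 x (half segment)        :", 2 * half_z)   # same as full_z
print("x-component, full segment :", full_x)       # ~0 by symmetry
```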
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the change in orbital of an electron the only way a photon is created I would like to know if there are any other ways in which photons are emitted, other than when an electron's orbital around a nucleus changes.
Yes. There are loads of physical processes in which photons are created. It won't be possible to list them all, but well-known examples are matter-antimatter annihilation (e.g. electron-positron annihilation at lower energies), the acceleration of charged particles, radioactive decay (notably, gamma decay), etc.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How relevant is the Heisenberg Uncertainty Principle? I was originally surprised to see that, $$\Delta x \cdot \Delta p \gt {{\hbar} \over 2}$$ But, then I realized that $\hbar/2=5.27 \cdot 10^{-35}\ \mathrm{J\,s}$. According to this other question, the smallest length ever measured was on the order of $10^{-18}\ \mathrm{m}$. Of course at that point, I bring the Planck length into consideration. Its order of magnitude is $10^{-35}\ \mathrm{m}$. I was quite shocked to see that the uncertainty is so small compared to this unit and our practical probing unit. My question is this. How relevant is the Heisenberg Uncertainty Principle in the lab? Does it really limit what can be probed at a practical level, or is it a theoretical limit still? In addition, if the Planck scale is shown to be the shortest meaningful length, is having a limit on uncertainty only 5 times larger than that fundamental length really that inconvenient?
In a hydrogen atom the kinetic energy of the electron is about $13.6$ eV. From $T = \frac{p^2}{2m}$ we get that the typical momentum is about $3.7$ keV/c ($m = 511$ keV/$c^2$). On the other hand $\hbar/(2a_0)\approx 1.9$ keV/c, where $a_0$ is the Bohr radius, which is about the size of a hydrogen atom. Since these quantities are of similar magnitude, the Heisenberg principle is certainly very relevant.
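These numbers are quick to reproduce (a sketch using SciPy's physical constants; the 13.6 eV kinetic energy is the standard ground-state value from the virial theorem):

```python
import numpy as np
from scipy.constants import hbar, m_e, e, c, physical_constants

a0 = physical_constants['Bohr radius'][0]      # m
T = 13.6 * e                                   # ground-state kinetic energy, J

p_typ = np.sqrt(2 * m_e * T)                   # typical momentum, kg m/s
p_unc = hbar / (2 * a0)                        # Heisenberg bound for dx ~ a0

keV_c = lambda p: p * c / (1e3 * e)            # convert kg m/s -> keV/c
print("typical momentum :", round(keV_c(p_typ), 2), "keV/c")   # ~3.7
print("hbar/(2 a0)      :", round(keV_c(p_unc), 2), "keV/c")   # ~1.9
```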
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can you huddle next to a fridge in sub-zero temperatures and keep warm? There's a saying I've heard in so many places.. "It was so cold that we used to huddle next to our refrigerator to keep warm..." I had heard this phrase uttered some 30 or so years ago, and it's stuck with me ever since... Which gets me thinking... Imagine it's -40 degrees (Fahrenheit or Celsius, it's the same number for both scales). Your fridge is by comparison capable of blasting chilled air at +4 degrees Celsius (39.2 degrees Fahrenheit)... Given the temperature difference between the environment and the refrigerator, could an average human with a body temperature of ~37 deg C potentially warm themselves by an open fridge blasting chilled air at +4 deg C in a surrounding environment of -40 deg C and keep "warm"?
Refrigerators are not designed to warm up air. If the outside temperature is -40 °C and you open the door of a fridge set to 4 °C, the air in the open fridge will quickly cool to -40 °C and the fridge compressor will turn off. Refrigerators are designed to maintain a maximum temperature setting, not a minimum temperature setting. Furthermore, if you leave the fridge door shut, then since the fridge is not perfectly insulated, the air inside will eventually cool below 4 °C, the compressor will stay off, and the air inside will eventually reach equilibrium with the outside -40 °C temperature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
One-point function and vacuum expectation value in $\phi^4$-theory The one-point function (and all other odd correlation functions) in the $\phi^4-$theory, for example, calculated from the generating functional, always gives zero in the absence of an external source, i.e., $J=0$. The proof requires the invariance of the action under $\phi\to -\phi$. However, if there is spontaneous symmetry breaking (SSB), the one-point function simply represents the vacuum expectation value of the field operator $\phi$ and is non-zero. But the symmetry of the action continues to hold even after SSB takes place. How do we reconcile these two apparently contradictory statements?
You should work out the minimum energy state of your system (classically) to find the vacuum expectation value. I assume you're working with the standard $\phi^4$-Lagrangian $$\mathcal L=\frac{1}{2}(\partial \phi)^2-\frac{1}{2}m^2\phi^2-\frac{\lambda}{4}\phi^4 $$ which corresponds to the Hamiltonian $$\mathcal H=\frac{1}{2}\dot\phi^2+\frac{1}{2}(\nabla\phi)^2+\underbrace{\frac{1}{2}m^2\phi^2+\frac{\lambda}{4}\phi^4}_{=: V}$$ It is easy to see that the lowest energy solution for arbitrary $V(\phi)$ is always $\phi=\text{constant}$, and in this case the potential is minimized by $\phi=0$. Thus, the true vacuum of the theory is, indeed, located at $\phi=0$ (this indeed also yields the one-point function $\langle \phi\rangle$). Now, to see the difference with spontaneous symmetry breaking, one really only needs to look at the relevant Lagrangian: It has a different potential. Usually, the potential for something similar to the abelian Higgs model is of the form $$V(\phi)=-\frac{1}{2}m^2\phi^2+\frac{\lambda}{4}\phi^4$$ which we can easily minimize to find that the lowest energy state corresponds to $$\phi^2=\frac{m^2}{\lambda}$$ so that we see that the true vacuum of theory is not located at the "origin", i.e. we find a nonzero vacuum expectation value.
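A tiny symbolic check of the two minimizations (a sketch; $m$ and $\lambda$ are treated as positive parameters, and only the stationary points of the classical potential are computed):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
m, lam = sp.symbols('m lambda', positive=True)

V_unbroken = sp.Rational(1, 2) * m**2 * phi**2 + sp.Rational(1, 4) * lam * phi**4
V_broken   = -sp.Rational(1, 2) * m**2 * phi**2 + sp.Rational(1, 4) * lam * phi**4

# Stationary points of V: dV/dphi = 0
print(sp.solve(sp.diff(V_unbroken, phi), phi))  # only phi = 0: the vev vanishes
print(sp.solve(sp.diff(V_broken, phi), phi))    # phi = 0 and phi = +-m/sqrt(lam):
                                                # the true vacua sit at phi^2 = m^2/lam
```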
{ "language": "en", "url": "https://physics.stackexchange.com/questions/200914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the time period of an oscillator with varying spring constant? It is well known that the time period of a harmonic oscillator when mass $m$ and spring constant $k$ are constant is $T=2\pi\sqrt{m/k}$. However, I would be interested to know what the time period is if $k$ is not constant. I have searched hours after hours for right answers from Google and came up with nothing. I am looking for an analytical solution.
From Newton's second law we have (whether $k$ is constant or not) that: \begin{equation} m\ddot{x}+kx=0 \end{equation} The only difference is whether or not $k$ is a function of $t$ or not. If it is a function of $t$, the only general way to solve this differential equation is by using Taylor expansions. Let us take: \begin{equation} x\left(t\right)=\sum_{n=0}^\infty a_nt^n \end{equation} and: \begin{equation} k\left(t\right)=\sum_{n=0}^\infty b_nt^n \end{equation} Our differential equation then becomes: \begin{equation} \begin{aligned} m\ddot{x}+kx&=0\\ \implies\sum_{n=2}^\infty mn\left(n-1\right)a_nt^{n-2}+\left(\sum_{n=0}^\infty b_nt^n\right)\left(\sum_{n=0}^\infty a_nt^n\right)&=0\\ \implies\sum_{n=0}^\infty\left[m\left(n+2\right)\left(n+1\right)a_{n+2}+\sum_{i=0}^na_ib_{n-i}\right]t^n&=0\\ \implies m\left(n+2\right)\left(n+1\right)a_{n+2}+\sum_{i=0}^na_ib_{n-i}&=0\forall n\\ \implies a_{n+2}&=-\frac{\sum_{i=0}^na_ib_{n-i}}{m\left(n+2\right)\left(n+1\right)}\forall n \end{aligned} \end{equation} As the $k\left(t\right)$ is known all of the $b_n$ are known, and if we know two of our initial conditions two of the $a_n$ are known (let us say $a_0$ and $a_1$). Using this recurrence relation, one can read off all of the $a_n$--that is, one knows all of the coefficients of the Taylor series for $x$. You can't really see too much more analytically in this super general case (to find a period, one would have to find a $k\left(t\right)$ that generated $a_n$ such that $x\left(t\right)$ was periodic, and read off the period from that function), but a good sanity check is to check if we recover our same answer when $k$ is a constant $k_c$; that is, when $b_0=k_c$ and $b_n=0$ for all $n>0$. In this case we find that: \begin{equation} \begin{aligned} a_2&=-\frac{a_0k_c}{2m}\\ a_3&=-\frac{a_1k_c}{6m}\\ a_4&=\frac{a_0k_c^2}{24m^2}\\ &\vdots \end{aligned} \end{equation} Following the pattern, we notice that the $a_n$ for even $n$ give the Taylor series for $a_0\cos\left(\sqrt{\frac{k_c}{m}}t\right)$ and the $a_n$ for odd $n$ give the Taylor series for $a_1\sin\left(\sqrt{\frac{k_c}{m}}t\right)$, yielding an angular frequency of $\sqrt{\frac{k_c}{m}}$ and therefore a period of $2\pi\sqrt{\frac{m}{k_c}}$.
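The recurrence is straightforward to implement. As a sanity check, the sketch below builds the coefficients for constant $k$ (only $b_0$ nonzero) with assumed values $m = k_c = 1$, $a_0 = 1$, $a_1 = 0$, and compares the partial sum of the Taylor series with the exact cosine solution.

```python
import numpy as np

m_mass, N = 1.0, 40                      # mass and number of Taylor coefficients
b = np.zeros(N); b[0] = 1.0              # k(t) = k_c = 1 (only b_0 nonzero)

a = np.zeros(N); a[0], a[1] = 1.0, 0.0   # initial conditions x(0) = 1, x'(0) = 0
for n in range(N - 2):
    # a_{n+2} = - sum_i a_i b_{n-i} / (m (n+2)(n+1))
    a[n + 2] = -sum(a[i] * b[n - i] for i in range(n + 1)) / (m_mass * (n + 2) * (n + 1))

t = np.linspace(0.0, 5.0, 6)
series = sum(a[n] * t**n for n in range(N))
print(series)                # partial sum of the Taylor series
print(np.cos(t))             # exact solution cos(sqrt(k_c/m) t) -- should agree
```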
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
A heavy rope is attached to one end of a lightweight rope If one end of a heavy rope is attached to one end of a lightweight rope, a wave can move from the heavy rope into the lighter one. (a) What happens to the speed of the wave? (b) What happens to the frequency? (c) What happens to the wavelength? My instructor hasn't gone over any of this in class (it's for a reading assignment), so what I've guessed so far just off the equations the book gives. (a) $v=\sqrt{\frac{F}{\mu }}$ So, as the mass per unit length ($\mu$) goes down, the velocity will increase (b) $v=\lambda f$ Now I am unsure. There is no way to tell (from this one equation) whether the wavelength ($\lambda$) will increase, decrease, stay constant. Is it determinable at all? (c) Same problem as with (b).
My intuition is that the frequency should stay the same because the waves in the light rope are caused by the waves in the heavy rope. The point where the ropes attach will oscillate with a common frequency. So, for $(b)$, the frequency would be the same. For $(c)$, use the equation $v= f\lambda$. You already correctly determined that the velocity increases; so, if the frequency stays the same, the wavelength must increase.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why don't both equivalent forms of this delta function give the correct answer? I am a bit confused on a basic problem involving a Dirac delta function being integrated over in a multiple integral. The original problem is to find the probability distribution in position-momentum $(z,p)$ space of a ball bouncing up and down, given that the ball reaches a maximum height of $z=h$. Now, a basic conservation of energy argument gives us that $p(z) = \pm m \sqrt{2g(h-z)}$ where $m$ is mass and $g$ is gravitational acceleration. We also know that the probability density of finding the ball at $z$ is inversely proportional to the velocity of the ball at $z$, namely $p(z)/m$. So, the probability distribution is $$ P(z,p) = \frac{C}{\sqrt{2g(h-z)}}\left[ \delta(p-m\sqrt{2g(h-z)})+\delta(p+m\sqrt{2g(h-z)}) \right] $$ where $C$ is a normalization factor. To find $C$, we can simply integrate $P$ over $z$ and $p$ and set the result equal to $1$, using the delta functions to do the $p$ integral and integrating $z$ from $0$ to $h$, and we get the correct answer. My question is, why does this strategy not work if we write $$ P(z,p) = \frac{D}{|p|} \delta(z-h+p^2/(2m^2g))? $$ In this case, when we attempt to find the normalization $D$, if we try getting rid of the delta function by integrating over $z$ and then we integrate over the appropriate range of $p$, we fail because the $1/|p|$ integral diverges, whereas the $1/\sqrt{2g(h-z)}$ integral converges. What am I missing here?
TL;DR: Substitution inside the delta function yields a Jacobian factor $$ \tag{1} \delta(f(v))~=~ \sum_{v_{(0)},f(v_{(0)})=0 }\frac{1}{| f^{\prime}(v_{(0)})|} \delta(v-v_{(0)}). $$ Here the sum is over the zeroes $v_{(0)}$ of the function $f(v)$. Let us for simplicity consider velocity $v$ rather than momentum $p=mv$. So energy conservation $$\tag{2} \frac{1}{2}v^2+gz ~=~gh, \qquad 0\leq z\leq h, \qquad |v|~\leq~\sqrt{2gh}, $$ yields a parabola in the $(v,z)$ plane. If we define a function $$\tag{3} f(v)~:=~z-h +\frac{v^2}{2g}, $$ then eq. (1) becomes $$ \tag{4} \delta(z-h +\frac{v^2}{2g}) ~=~\frac{g}{|v|}\sum_{\pm} \delta(v\pm \sqrt{2g(h-z)}). $$ Note in particular that the factor $\frac{1}{|v|}$ on the right-hand side of eq. (4) does not appear on the left-hand side.
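The Jacobian factor in eq. (4) can be checked numerically by replacing the delta function with a narrow Gaussian (the values of $g$, $h$, $z$ and the Gaussian width below are assumed for illustration): integrating $\delta\!\left(z-h+\frac{v^2}{2g}\right)$ over $v$ should give $\sum_\pm g/|v_{(0)}| = 2g/\sqrt{2g(h-z)}$.

```python
import numpy as np

g, h, z = 9.8, 10.0, 5.0
v0 = np.sqrt(2 * g * (h - z))

def nascent_delta(x, eps=1e-3):
    # narrow normalized Gaussian standing in for delta(x)
    return np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

v = np.linspace(-30, 30, 2_000_001)
integral = np.trapz(nascent_delta(z - h + v**2 / (2 * g)), v)

print("numerical integral :", integral)      # ~ 2*g/|v0|
print("2 g / |v0|         :", 2 * g / v0)    # ~ 1.98
```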
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do I not observe single/double escape peaks for K-40 A question on gamma spectrometry here. If I'm looking at a background gamma spectrum with a big peak at 1460 keV (approximately 180 counts) and I attribute this peak to the presence of K-40, should I expect to see the single and double escape peaks for K-40? If I should expect to see them and I don't, can I then in fact rule out the possibility of this peak being due to K-40? Thanks
Very hard to tell without knowing how the spectrum was produced (type and size of the detector, resolution, anti-Compton shielding, ...). Anyway, 180 counts do not seem like many. The single escape peak is normally weaker and the double escape peak even weaker, especially if you are just above the pair-production threshold. It sounds reasonable that you may not have significant counts in those bins.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenspaces of angular momentum operator and its square (Casimir operator) The Casimir operator $\textbf{L}^2$ commutes with the components $L_i$ of the angular momentum operator $\textbf{L}$: $$ [\textbf{L}^2, L_i] = 0. $$ However, the $L_i$ do not commute among themselves: $$ [L_i, L_j] = i\hbar\epsilon_{ijk}L_k. $$ This makes sense so far, but it leaves me wondering how their eigenspaces relate to each other. I remember some theorem that diagonalizable, commuting matrices share their eigenspaces. If those operators could be expressed as complex matrices (in the finite-dimensional case), they surely are diagonalizable. So it follows that $\textbf{L}^2$ has the same eigenspaces as the three $L_i$, but that would imply that they commute among themselves, which is not the case. What am I missing? What is the relation between the eigenspaces of these operators?
When I was asking this question, I didn't understand the relation between the commutativity of two operators and their eigenspaces: If an operator $A$ commutes with another operator $B$, then $A$ leaves the eigenspaces of $B$ invariant: $$ B\psi = \epsilon\psi \implies BA\psi = AB\psi = \epsilon A\psi $$ But this does not imply that $\psi$ is an eigenstate of $A$. Maybe I mistook "leave B's eigenspaces invariant" for "B's eigenvectors are A's eigenvectors".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Light Absorption of a glass I have $n$ (the refractive index of the glass sheet) and $t$ (the thickness of the glass sheet). With this information, how can I find the amount of light absorbed by the glass sheet?
To add to Rob Jeffries's answer: the absorption data for glass are separate from the refractive index and are obtained by measuring the attenuation of light through a known thickness of glass, after accounting for the reflected amounts as described in Rob's answer. Theoretically, the refractive index and the absorption data are united in a complex propagation constant for the material, which is an analytic function of a complexified frequency in the right half plane. This means that the refractive index and absorption are actually related by the Hilbert transform, known in this context as the Kramers-Kronig relations. So one can theoretically calculate the imaginary part of the RI: the catch is that one needs to know the refractive index for all wavelengths. This means that practically, one must resort to the measurement described in my first paragraph and effectively treat the refractive index and absorption as separate data for the glass.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/201891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Humans on earth seen from traveling space ship If I were standing on a space ship traveling at a speed of 0.99c, I would be moving 7 times slower from Earth's perspective. But if I looked back at Earth, I would see everything moving 7 times faster than from my perspective. Right?
Absolutely not. Both you and the people on Earth would see each other moving $7$ times slower. To repeat myself: you would see them going $7$ times slower and they would see you going $7$ times slower. This is because of the main principle of special relativity. As long as neither of you is accelerating there is nothing to choose between your frame of reference and the Earth's frame of reference. Just as the people on Earth are free to describe themselves as stationary and you as moving at $0.99c$ you are free to describe yourself as stationary and them moving in the opposite direction at $0.99c$. Neither point of view has a more valid claim to being "stationary" than the other so both must see the other as being slowed down.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/202275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Directional subwoofer? I was thinking. The subwoofers that I've seen are a circular parabolic surface section (or perhaps a circular circlic(?) surface section?) and are considered omnidirectional. I would guess that this is because the longitudinal waves would have to move through the focus of the parabola/circular section, dispersing the wave in all directions in front of the speaker (and behind, depending on the acoustic shielding). However, if a subwoofer were made from a circular triangular surface section, whose height is the same as the radius of the circular section: would this make the subwoofer directional? I.e. could I point it at someone very far away so that it would be heard by them, but not by those outside its path?
A typical subwoofer range might go all the way up to 200 Hz. That would produce a wavelength of over 1.5 m. Lower sounds will have even longer wavelengths. A lot of the energy from the sound is just going to step around objects that are much smaller in size. If that cone is small, the shape doesn't matter much. The sound isn't being reflected inside it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/202504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Dissolving photoconductor (TiOPc) from Laser Printer drum possible? When I was thinking of a lab-on-a-chip application which combines a lensless microscope and optical tweezers, I saw the ODEP concept: (http://pubs.rsc.org/en/content/articlelanding/2013/lc/c3lc50351h#!divAbstract). This works like a laser printer. The sample is sandwiched between two transparent conducting surfaces (e.g. ITO, indium tin oxide), where one of the layers has a photoconductive substance on it. When applying an oscillating voltage (e.g. ±20 V at 1 MHz) to the two electrodes, one can switch the charges on the surfaces on and off. The result is an electric field gradient which is capable of manipulating particles or biological cells. I was thinking of using "standard" materials from old laser printers. The blue printer drum has several layers as seen here. The newer ones use TiOPc, which fits the "lab-on-a-chip" application perfectly. Does anyone think it's possible to get the TiOPc, or the entire layer, off the aluminium drum and put it back onto an ITO-coated glass substrate?
I think the TiOPc on printer drums may be TiOPc nanoparticles embedded in some kind of organic binder, rather than solid TiOPc. See this: http://patents.justia.com/patent/20140054510 for example for the challenges of dissolving TiOPc in anything. The binder on the other hand should be easy to dissolve; try acetone first, if that doesn't work, perhaps brake cleaner or paint stripper (in a fume hood! you don't want to inhale it).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/202584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is configuration space in any way similar to vector spaces? The question may sound silly. If it is, I'm sorry, but I just couldn't find an answer anywhere else. I have just learned about vector spaces and their properties, and on the other hand have also started with Lagrangian mechanics. The author writes: "The configuration of the system of N particles, moving freely in space, may be represented by the position of a single point in 3N dimensional space, which is called the configuration space of the system." My questions here are: how is it possible for us to visualize the position of, say, 700 particles using just a single point in 2100-dimensional space? I cannot make any sense of it. Since there are no constraints, for every particle we are adding 3 dimensions; what is the advantage in doing so? And is this in any way related to vector spaces?
They are not related structurally: Configuration space is a manifold which in general has no vector space structure. For example $\mathbb{R}$, the configuration space of a free particle moving on a line can be viewed as a vector space (you can sensibly "add" two configurations to get a new one and so on), but if you constrain it to move on a circle this structure is lost. There is an interesting link between the two though which appears in quantum mechanics. One can naturally associate a vector space to a classical configuration space: If your system has configuration space $M$, one can take the vector space to be $L^2(M)$, the space of square integrable functions on $M$. This is the natural setting of the quantum version of this classical system. (Thanks to ACuriousMind for pointing out the very fundamental flaw in the first version of this answer!)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/202871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Nucleon-meson interaction Suppose the interaction Lagrangian between the neutron-proton doublet and $\pi$-mesons is: $$ \tag 1 L_{\pi pn} = \bar{\Psi}\pi_{a}\tau_{a}(A\gamma_{5} + B)\Psi , \quad \Psi = \begin{pmatrix} p \\ n\end{pmatrix} $$ Is it possible to derive it from first principles? I realize that the proton and neutron aren't pseudo-Goldstone bosons, like pions, and thus their Lagrangian cannot be derived simply. But, maybe, it is possible to derive $(1)$ directly.
Now I know an answer, so I sketch it here. A direct derivation of the nucleon-meson interaction is possible from chiral perturbation theory, which arises from the spontaneous symmetry breaking of QCD. We look for finite classical field configurations which leave the chiral action finite. Since the homotopy group $\pi_{3}(SU(3)) = \mathbb{Z}$ is nontrivial, such configurations exist. They are called skyrmions. The corresponding Maurer-Cartan invariant, which defines the skyrmion winding number, coincides with the anomalous baryon charge, which can be obtained formally by gauging the baryon number symmetry in the Wess-Zumino term. Also, the spin of the winding-number-one skyrmion is one half. We may therefore try to identify skyrmions with the proton and neutron. The next steps are straightforward. Defining the proton and neutron fields through the skyrmion solution, we may, following the general logic of perturbative field theory, treat the proton-neutron state as the ground state of the theory and calculate perturbations around it. Perturbations in chiral perturbation theory are expressed in terms of Goldstone bosons. Considering the pion sector and calculating the low-energy matrix elements $\langle \psi_{n}| A_{\mu}^{a}|\psi_{m}\rangle$, where $\psi_{n}$ is the proton-neutron doublet and $A_{\mu}^{a}$ is the axial current expressed in terms of the skyrmion field, we may obtain low-energy theorems which contain the axial couplings from my question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/203074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is the electrostatic field really static? Do thermal vibrations not affect it? We know that if a conductor has any net charge, the charges reside on the surface. The electric field immediately outside the surface is perpendicular to the surface. But the charged particles (say the conductor has excess electrons) will be in thermal vibration, and an increase in temperature will increase the vibrations. So the excess electrons vibrating on the surface will lead to changing electric fields outside the surface, not perpendicular all the time. Doesn't that cause a magnetic field, however small?
Yes you are right. You end up having a varying electric field which generates a varying magnetic field which in turn generates an electric field etc... This causes a particular type of radiation called black body radiation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/203276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why can we see through rain? I am gazing through my office window into a heavy rain. I am thinking that raindrops are like small lenses that bend the light. Thus I am surprised, that I can clearly see other buildings through the window. So, why is it that we can see through the rain? Is the density of raindrops simply too low?
Two main reasons. First, the raindrop density is really low. Recall how it may sometimes seem it's pouring rain, but you go out and barely get hit by some 10 droplets per second. It makes sense: when it's raining, it's still mostly air. If rainfall is $10\, {\rm mm/h}$ and the drops fall at $10\,{\rm m/s}$, the volume density of droplets must be the quotient of these fluxes ($\sim 2.8\cdot 10^{-7}$). That is, even in heavy rain, well under a part per million of the air is droplets (comparable to cloud density). An important factor here is also projection: a visual image is a 2D projection of the droplets: raindrops are huge and so this small percentage of volume is mostly concentrated in a few dots at any given time (+blur helps even more). Fog is worse mainly because it covers your field of view more efficiently: a droplet of volume $V$ covers $V^{2/3}$ of your vision. $N$ droplets of volume $V/N$ cover $N^{1/3} V^{2/3}$. Scattering differences also help to obscure even more efficiently, but mostly it's just fragmentation. Another very important part is the motion blur. Droplets are so fast that within the time resolution of a human eye (let's say 20-50 Hz, depending on the light conditions), a droplet travels up to a metre. So the droplet never fully obscures a certain part of your visual field, it only "blocks" your vision for a fraction of the "exposure time". That being said, when you are looking through a sufficient amount of rain, it does lower visibility quite a lot. Curtains of rain on the horizon are a common sight (possibly with a rainbow, which is, again, see-through).
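For the record, the quotient of fluxes is a one-liner (the rainfall rate and fall speed are just the illustrative numbers used above):

    rainfall = 10e-3 / 3600        # 10 mm/h of water, expressed in m/s
    fall_speed = 10.0              # assumed raindrop fall speed, m/s
    print(rainfall / fall_speed)   # ~2.8e-7 volume fraction of droplets in air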
{ "language": "en", "url": "https://physics.stackexchange.com/questions/203576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 6, "answer_id": 5 }
Fleming's right hand and left hand rule Why are there two rules: Fleming's left hand and right hand rules? What is the difference between the two and why can't we use just one rule? Suppose the magnetic field is from right to left and the motion of the wire is downwards then according to the right hand rule the induced current will be in the straight direction. But if we use the left hand rule in this same situation to find the direction of motion of wire then it shows that the direction of wire is upwards. Please help me.
Similarities: in both the rules the thumb gives the direction of force/motion, the index finger gives the direction magnetic field and the middle finger gives the direction of current. Differences: 1) Left hand rule: This rule is used when magnetic field direction and current direction are given and you have to find the direction of force/motion of the conductor. 2) Right hand rule: this rule is used when the magnetic field and force/motion of the conductor is given and you have to find the direction of the current. In both the rules all three should be perpendicular to each other.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/203762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Galvanic Cells and Electric Potential In a battery or a galvanic cell, is the electric potential of the battery due to a difference of charges between the two cells, like in a capacitor? So is it the electric field due to this separation that is driving the electrons? If yes, why do we call it the electromotive force (EMF) of the battery?
The electric field is established only when we connect the +ve and -ve terminals of a battery with some resistance between them. There is no electric field when the battery is idle (not connected). But we need a measure for expressing the power of a battery, so physicists introduced the EMF, because describing it with an electric field doesn't make sense when the battery is idle. How current flows in galvanic cells: the current flows mainly because of the wire connecting them. The wire has free electrons since it is a conductor. When a wire is connected between two chemicals at different potentials, an electron (one of the excess electrons in the ion) transfers from the lower-potential liquid to the wire; the electrons then flow along the wire, and the wire passes an electron to the higher-potential liquid. This link explains how electrons travel in galvanic cells. Extra material: check this to learn how electrons travel in a wire.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/203963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do people wear black in the Middle East? I have read various dubious explanations as to why people often wear black in the heat, from cultural to somehow encouraging the evaporation of sweat (unconvincing). So, does anyone know what, if any benefit there is to black clothing in hot dry conditions? It is certainly counterintuitive.
As explained in one of Halliday's books, the reason is that the black dress heats up the air inside it, thus causing a continuous flow of air between the skin and the dress. The cold air flows in from below, gets heated up, and flows out at the top, providing continuous ventilation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/204012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find constant acceleration with only initial speed and distance Given the problem: "A car moving initially at 50 mi/h begins decelerating at a constant rate 60 ft short of a stoplight. If the car comes to a stop right at the light, what is the magnitude of its acceleration?" While this problem seems simple, I can't seem to find the correct formula to use. Most formulas I am finding require the use of time (t) which is not given in the problem statement. What formula(s) do I use to solve this problem? Am I supposed to use distance as the unit of time somehow? Or should I use some sort of derivation to get the number needed?
Well, let's pick the $(Ox)$ axis as our frame of reference and take the origin of time as the instant the car starts decelerating with a magnitude $a$: Since we are talking about deceleration it is clear that: $$ \frac{dv}{dt}=-a $$ So: $$ v(t)=v_0-at $$ And: $$ \frac{dx}{dt}=v_0-at\\ x(t)=v_0 t -\frac{a}{2}t^2 $$ Now we know that the car will stop at the red light at some instant $t_a$, so: $$ v(t_a)=0\\ v_0-at_a=0\\ t_a=\frac{v_0}{a} $$ At the same instant $t_a$ the car would have rolled $d=60\ \mathrm{ft}$ as you stated, so: $$ x(t_a)=d\\ v_0t_a-\frac{a}{2} {t_a}^{2}=d\\ v_0\frac{v_0}{a}-\frac{a}{2}\left(\frac{v_0}{a}\right)^{2}=d\\ \frac{{v_0}^{2}}{a}-\frac{{v_0}^{2}}{2a}=d\\ \frac{{v_0}^{2}}{2a}=d\\ a=\frac{{v_0}^{2}}{2d} $$ where $v_0=50\ \mathrm{mi/h}$. Plug in the values and pay attention to the units in order to get your answer. Note: more generally, for a uniformly decelerating particle with deceleration $a$, initial velocity $v_i$ and final velocity $v_f$ that travels a distance $d$, we obtain the following relation: $$ 2ad={v_i}^{2}-{v_f}^{2} $$
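The final plug-in step, sketched in Python (the conversion 1 mi/h = 5280/3600 ft/s is standard; the variable names are mine):

    v0 = 50.0 * 5280.0 / 3600.0   # 50 mi/h in ft/s, about 73.3 ft/s
    d = 60.0                      # stopping distance in ft
    a = v0**2 / (2.0 * d)         # a = v0^2 / (2 d)
    print(a)                      # about 44.8 ft/s^2 (roughly 13.7 m/s^2)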
{ "language": "en", "url": "https://physics.stackexchange.com/questions/204103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What's the difference between the work function and ionisation energy? In a particular textbook, the work function of a metal (in the context of the photoelectric effect) is defined as: the minimum amount of energy necessary to remove a free electron from the surface of the metal This sounds similar to ionisation energy, which is: the amount of energy required to remove an electron from an atom or molecule in the gaseous state These two energies are generally different. For instance, Copper has a work function of about 4.7eV but has a higher ionisation energy of about 746kJ mol-1 or 7.7eV. I've sort of figured it's because the work function deals with free electrons whilst ionisation is done with a valence electron still bound within the atom. Is the difference due to the energy required to overcome the attraction of the positive nucleus?
There is definitely a relationship between the work function and ionisation energy of the elements. See the above figure in which I plotted the work functions (blue) and ionisation energies (yellow) of the elements named in the table of the former answer. If you plot them against each other, it shows an definite, though loose, relationship. I bet there really is a relation between the two, but I do not know exactly what it is and why it exists.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/205310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
When does the concept of electric field in classical electrodynamics fail, and QED is needed? It is really hard to find reference to when the traditional concept of electric wave, especially TEM wave, fails, and needs to be replaced by quantum electrodynamics. So when does the concept fail? At high frequencies of electric field?
Your title asks about the electric field. The content is about electromagnetic waves, two different entities. Electromagnetic waves emerge from an innumerable number of single photons. Just as one cannot have water waves with only a few molecules but needs of the order of $10^{23}$ (Avogadro's number), one cannot measure electromagnetic waves if the photons are few. The electric field depends on a large number of charged molecules. If that number is small, it will end up of the order of magnitude of the electron charge, a small number, and special scattering experiments in vacuum will be needed to detect it. As a rule of thumb, the interface between a successful classical description and the need for a quantum mechanical one depends on $\hbar$, a very small number. When the dimensions of the study allow one to take it as zero, the classical regime is fine. If the values measured are commensurate with the various Heisenberg uncertainty relations, then quantum mechanical estimates are necessary.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/205442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Hermitian 2x2 matrix in terms of pauli matrices In my studies, I found the following question: Show that any 2x2 hermitian matrix can be written as $$ M = \frac{1}{2}(a\mathbb{1}+\vec{p}\cdot \vec{\sigma}) $$ with $a=Tr(M)$, $p_i = Tr(M\sigma_i)$ and $\sigma = \sigma_x \hat{i}+\sigma_y \hat{j}+\sigma_z \hat{k}$. I did show that this equation works, but I want to know how to prove it just working with the fact that the Pauli matrices span a basis in 2x2 Hilbert space and that M is hermitian.
I did show that this equation works, but I want to know how to prove it just working with the fact that the Pauli matrices span a basis in 2x2 Hilbert space and that M is hermitian. You can do this if you can specify exactly what you mean by "span a basis in 2x2 Hilbert space," which sounds really convoluted and mathematically wrong to me. For example, if you assume that the Pauli matrices with real coefficients span the 2x2 tracefree Hermitian matrices, then there must be some $\vec p$ which does this, as $M - \tfrac{1}{2}(\operatorname{Tr} M)\, I$ is tracefree and Hermitian. (You'll need the fact that the trace is the sum of the eigenvalues and Hermitian matrices have real eigenvalues, hence the trace is real: then this is simple.) The formula for each of the $p_i$ then can be a simple consequence of the algebraic form you've already found; since $\sigma_a \sigma_b = \delta_{ab} I + i \epsilon_{abc} \sigma_c$ the form $p_0 I + p_1 \sigma_1 + p_2 \sigma_2 + p_3 \sigma_3$ lends itself especially to being multiplied by a Pauli matrix and using the trace to extract out the $p_i.$
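A quick numerical illustration (not a proof) of the decomposition, with a randomly generated Hermitian matrix:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
    M = (A + A.conj().T) / 2            # a random Hermitian matrix

    a = np.trace(M)
    p = [np.trace(M @ s) for s in (sx, sy, sz)]

    M_rec = 0.5 * (a * I2 + p[0] * sx + p[1] * sy + p[2] * sz)
    print(np.allclose(M, M_rec))        # True: M = (a*1 + p.sigma)/2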
{ "language": "en", "url": "https://physics.stackexchange.com/questions/205524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do you determine the symmetry of spatial wave functions? I have been reading about the ways to determine the ground state of an atom. There are three Hund's rules for determining which electronic state is the ground state. And the second rule says you need to maximize the orbital angular momentum while considering the symmetry of the total wave function. I know that you need either the spin or the spatial wave function to be symmetric. For spin, it is either singlet (anti-sym) or triplet (sym). However, when it comes to S, P, D, F spatial wave functions corresponding to different orbital angular momenta, how do you know which one is symmetric and which is not? For example, carbon $1s^2 2s^2 2p^2$: maximize spin: $S=1$ (triplet, symmetric); maximize $L$: $L=2$ or $L=1$. I know that $L=1$ is the correct answer, but I don't know why $L=1$ (P) is antisymmetric while $L=2$ (D) is symmetric. Are there general rules to determine the symmetry properties of spatial wave functions?
Simply put, angular momentum eigenfunctions with total angular momentum quantum number $L$ will have well-defined parity $(-1)^L$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/205771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question Regarding Torricelli's theorem/law I recently studied Bernoulli's equation/principle. After the derivation of the said equation, my book gave some applications of the principle, which include Torricelli's theorem/law. In deriving Torricelli's law from Bernoulli's principle, the pressure at the opening of the tank in which the fluid is contained is said to be equal to the pressure which is applied at the top surface of the fluid, namely the pressure of the atmosphere. But my book also states that the pressure drops (according to Bernoulli's principle) when the fluid passes through a narrow pipe or opening and its velocity increases. So why does the pressure remain the same in this situation? Why doesn't it change? Any help would be much appreciated, THANKS. Could you please answer in simple and easy to learn terms, Thanks AGAIN.
Here is a proof on Wikipedia if anyone else wants to follow along. The proof states that the pressure in the water is zero (I will take atmospheric pressure to be zero) after it has exited the hole. This is because there is no longer fluid on top of it after it goes out the hole. However, the proof does not say that the pressure is zero immediately inside the hole. In fact, the pressure there will be given by $\rho g h$ (where $\rho$ is the fluid mass density, $g$ is the acceleration of gravity, and $h$ is the depth in the water), as you would have expected. So what happens is that as the water moves through the hole it moves from a region of high pressure ($\rho g h$) to a region of zero pressure. This pressure gradient causes the water to accelerate to a speed given by $\dfrac{1}{2} v^2 = gh$.
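As a worked number (the depth here is an arbitrary choice of mine): for a hole 20 cm below the surface,

    import math
    g, h = 9.81, 0.20          # gravity (m/s^2) and depth of the hole (m)
    v = math.sqrt(2 * g * h)   # from (1/2) v^2 = g h
    print(v)                   # about 1.98 m/s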
{ "language": "en", "url": "https://physics.stackexchange.com/questions/205861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Motion of center of mass I was reading about the COM and forces and came upon this in my book. If a projectile explodes in mid-air, with the fragments flying off along different paths, the path of the centre of mass remains unchanged. This is because during the explosion no external force (except gravity) acts on the COM. My question is: even though the author realises that gravity is acting on the particle, he goes on to conclude that the path of the COM remains unchanged. But I learned that the path will change whenever there is an external unbalanced force. Here gravity acts, so why has the author neglected its effect? (Or am I mistaken somewhere?)
The passage means that the center of mass follows the same parabolic trajectory it would have followed had there been no explosion. This includes the effect of gravity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the K shell electron preferred in the photoelectric effect? I have read in many books and on the Internet that the photoelectric effect is only possible when an electron is emitted from the K shell of the metal. Why not other bound electrons?
The term "K-shell" stems from an older, now less used terminology for the 'electron shells' of multi-electronic atoms. In this terminology, electrons with Principal Quantum Number $n$ equal to 1 where said to belong to the K-shell, those with $n=2$ the L-shell, those with $n=3$ the M-shell etc. For an alkali metal like sodium, the electron configuration is $1s^22s^22p^63s^1$, so it has 2 electrons in the K-shell, 8 in the L-shell and 1 in the M-shell. The inner electrons in the K and L-shells are much more tightly bound to the nucleus (due to electrostatic attraction between the positively charged nucleus and the negatively charged electrons) and cannot be 'knocked out' of their orbitals by visible light (which is not energetic enough). In the case of sodium only the unpaired $3s^1$ electron (M-shell) is energetically within reach of visible light photons because it is further away from the nucleus and has been shielded from electrostatic attraction by the K and L-shells. This is generally true for all alkali metals, which have low ionisation energies due to the cited reasons. Alkali metals are therefore ideally suited to demonstrate the photo-electric effect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
When I take a Gaussian surface inside an insulating solid sphere, why does the outer volume have no effect on the electric field? Say I try to find the magnitude of the electric field at any point within an insulating solid sphere. I know that in the case of a conductor, the electric field within it is 0. However, I have not learned anything about an insulator, so I assume that it would not be 0. I used Gauss' Law and calculated the charge of the volume within the Gaussian surface, the radius of which is equal to the distance between the point of interest to the center of the sphere. So I got the right answer, but I want to know the physics behind it. Why does the remaining volume of the insulating sphere, which is just right outside the Gaussian surface, have no effect on the electric field at that point? Even to me, my question sounds flawed as I am pretty much asking why an insulator has no effect on an electric field. However, I just don't think it would be that simple.
This is somewhat similar to why the rest of the Earth doesn't influence the gravitational field inside it. By the same logic, the net electric force of all of the charges on one half of the outer shell cancels against that of the corresponding charges on the other half, resulting in no net field due to the outer shell charges.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Finding action-angle variables Given a 1 d.o.f Hamiltonian $H(q,p)$ what is the general procedure for finding action angle variables $(I, \theta)$? I have read the Wikipedia page on action angle variables and canonical transforms but have difficulty applying the general methods to specific problems. Can someone explain the method to me using a simple general example?
In local coordinates the canonical transformation to action-angle coordinates $(q,p)\rightarrow (Q,P)$ is given by \begin{equation} \boxed{P_i=\frac{1}{2\pi}\oint p_idq^i \ \ \ \ \ \text{and}\ \ \ \ \ Q^i=\frac{\partial }{\partial P_i}\int p_idq^i} \end{equation} For example: Consider the one-dimensional harmonic oscillator with the following Hamiltonian $H=\frac 1{2m}\big[p^2+m^2\omega ^2q^2\big]$. Rearrange this for $p$ and take the hypersurface $H=E$. \begin{equation} p=\pm \sqrt{2mE-m^2\omega ^2q^2} \end{equation} Then use the above equation to compute $P$. \begin{equation} P=\frac{1}{2\pi }\oint \sqrt{2mE-m^2\omega ^2q^2}dq \end{equation} Substituting $q=\sqrt{2E/(m\omega^{2})}\sin Q$, the integral is now over $0$ to $2\pi$, which is easier to handle. This works out as \begin{equation} \frac {1}{2\pi}\oint ^{2\pi}_{0}\cos^2Q\ dQ\cdot \frac {2E}{\omega} =\frac{E}{\omega} \end{equation} Therefore we have used the quoted formula to compute the action variable for the harmonic oscillator.
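The contour integral can also be checked symbolically; this is a sketch with sympy, writing the closed contour as twice the integral between the turning points $\pm\sqrt{2E/(m\omega^2)}$:

    import sympy as sp

    E, m, w, q = sp.symbols('E m omega q', positive=True)

    q_max = sp.sqrt(2 * E / (m * w**2))          # turning points where p = 0
    p = sp.sqrt(2 * m * E - m**2 * w**2 * q**2)

    # closed contour = twice the integral between the turning points
    P = 2 * sp.integrate(p, (q, -q_max, q_max)) / (2 * sp.pi)
    print(sp.simplify(P))                        # E/omega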
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Infinitesimally changing an operator in QM Reading Balian, "From Microphysics to Macrophysics", I've found the following identity: If we change the operator $\hat{{\mathbf{X}}}$ infinitesimally by $\hat{{\delta\mathbf{X}}}$, the trace of an operator function $f(\hat{{\mathbf{X}}})$ can be differentiated as if $\hat{{\mathbf{X}}}$ and $\delta\hat{{\mathbf{X}}}$ commuted: $$\delta\operatorname{Tr}f(\hat{{\mathbf{X}}})=\operatorname{Tr}\left(\delta \hat{{\mathbf{X}}}f'(\hat{{\mathbf{X}}})\right).$$ What does "change an operator by $\delta \hat{{\mathbf{X}}}$" mean mathematically in this context? How can I prove that identity?
Consider a one-parameter family of operators $X + \epsilon Y$, and let $f$ be an analytic function. Then we formally use linearity of the trace to obtain \begin{align} \mathrm{tr}[f(X + \epsilon Y)] = \mathrm{tr}\left[\sum_{n=0}^\infty c_n(X+\epsilon Y)^n\right] = \sum_{n=0}^\infty c_n\mathrm{tr}[(X+\epsilon Y)^n] \end{align} But notice that \begin{align} (X+\epsilon Y)^n = X^n + \epsilon (YX^{n-1} + XYX^{n-2} + \cdots + X^{n-1}Y) + O(\epsilon^2) \end{align} so by the cyclicity and linearity of the trace we have \begin{align} \mathrm{tr}[(X+\epsilon Y)^n] = \mathrm{tr}(X^n) + n\cdot\mathrm{tr}(\epsilon YX^{n-1}) + O(\epsilon^2) \end{align} Plugging this back into the power series for $\mathrm{tr}[f(X+\epsilon Y)]$ gives \begin{align} \mathrm{tr}[f(X+\epsilon Y)] &= \sum_{n=0}^\infty c_n\mathrm{tr}(X^n) + \sum_{n=0}^\infty c_n n\,\mathrm{tr}(\epsilon Y X^{n-1}) + O(\epsilon^2) \\ &= \sum_{n=0}^\infty c_n\mathrm{tr}(X^n) + \epsilon\cdot\mathrm{tr}\left(Y\cdot \sum_{n=0}^\infty c_n n\, X^{n-1}\right) + O(\epsilon^2)\\ &= \sum_{n=0}^\infty c_n\mathrm{tr}(X^n) + \epsilon\cdot\mathrm{tr}\left(Y f'(X)\right) + O(\epsilon^2) \end{align} It follows that \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon = 0}\mathrm{tr}[f(X+\epsilon Y)] = \mathrm{tr}\left(Y f'(X)\right) \end{align} Now simply make the notational identifications $Y = \delta X$ and \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon = 0} \mathrm{tr}[f(X+\epsilon Y)] = \delta \,\mathrm{tr}[f(X)] \end{align} and the desired result is now immediate.
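The identity is also easy to test numerically for a concrete analytic function, say $f=\exp$ (so $f'=\exp$); this sketch uses random Hermitian matrices and a finite difference:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)

    def rand_herm(n):
        A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        return (A + A.conj().T) / 2

    X, Y = rand_herm(4), rand_herm(4)
    eps = 1e-6

    # finite-difference derivative of tr exp(X + eps*Y) at eps = 0
    lhs = (np.trace(expm(X + eps * Y)) - np.trace(expm(X - eps * Y))) / (2 * eps)

    # tr(Y f'(X)) with f = exp
    rhs = np.trace(Y @ expm(X))

    print(lhs.real, rhs.real)   # the two agree to roughly the finite-difference accuracy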
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Fluid speed and fluid density How does fluid density affect fluid speed? Basically I am trying to figure out whether, with all other quantities remaining constant, an increase in fluid density would cause the fluid speed to increase or decrease. For example, would water and honey have different fluid speeds in a pipe, because their densities are very different? I know the continuity equation $$A_1 v_1 = A_2 v_2$$ and Bernoulli's equation $$P + \rho gh + \tfrac{1}{2} \rho v^2 = \text{constant}.$$ But does an increase in density lead to an increase/decrease in fluid speed? How so?
You have to ask yourself -- what is driving the flow? Would the driver change or stay the same as your fluid changed? What are the variables conserved in a flow (ie. speed, energy, density, momentum, temperature, etc. -- I intentionally listed some that are conserved, some that are not). In other words, think about how you are defining the problem. And how does the problem relate to the underlying laws of physics? If the mechanism driving the flow is the same in both cases, you will get a different answer than if the mechanism driving the flow changes. Once you figure out what makes a fluid move, and then decide if that thing has changed or not, you can then decide what quantities are conserved. Once you figure those things out, in that order, you can decide how the speed changes as the density changes.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Coffee Straw Physics When I put my little, cylindrical coffee straw into my coffee, the liquid immediately rises about half a centimeter up the straw without provocation. This is also the amount of coffee that the surface tension of the coffee will allow to stay in the straw when removed from the liquid in the cup. Keep in mind that all the while, the top end of the straw is open. Why does the level of the liquid in the straw insist on being higher than the level of all the liquid in the cup?
You have 3 different materials in your experiment: a liquid (coffee, could be water), a solid (plastic straw) and a gas (air). You have interfaces between all three: liquid-air, liquid-solid and solid-air. In the case of the plastic of your straw, adhesion forces are stronger between plastic and water than plastic and air: so a force will tend to make the water spread on the plastic. This force balances with gravity to set the height of the capillary rise.
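The standard quantitative form of this balance is Jurin's law, $h = 2\gamma\cos\theta/(\rho g r)$, which is not stated in the answer above but follows from equating the vertical component of the surface-tension force around the rim to the weight of the raised column. With assumed (not measured) values for a drinking straw, it lands in the right ballpark:

    import math
    gamma = 0.07               # surface tension of water/coffee, N/m (assumed)
    theta = math.radians(20)   # contact angle on the plastic, assumed
    rho, g = 1000.0, 9.81      # water density (kg/m^3) and gravity (m/s^2)
    r = 2.5e-3                 # straw inner radius, m (assumed)
    h = 2 * gamma * math.cos(theta) / (rho * g * r)
    print(h * 1000)            # about 5 mm, roughly the half centimetre observed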
{ "language": "en", "url": "https://physics.stackexchange.com/questions/206971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Taking the classical limit $v\ll c$ in special relativity I'm trying to understand some of my class notes. My professor reached 2 equations: $$m_0c^2 \frac{d\gamma}{dt}=\textbf{F} \cdot \textbf v\tag{1}$$ and $$m_0\frac{d\gamma \textbf v}{dt}=\textbf F\tag{2}$$ where the bold letters are 4-vectors. He then wrote that when considering the limit case $v\ll c$, $\gamma \approx 1+{v^2}/{2c^2}$, such that eq. (1) reduces to $$\frac{dE_{\text{kinetic}}}{dt}=W$$ That is, the change in classical kinetic energy is equal to the work done on the system. So far so good. But then he wrote that eq.(2) reduces to $$F=m_0 \frac{dv^i}{dt}$$ for $i=1, 2, 3$, which is Newton's 2nd law. As far as I can see, eq.(2) reduces to Newton's law only when considering $\gamma \approx 1$. It wouldn't reduce to Newton's law if we had made the same approximation used in eq.(1), namely that $\gamma \approx 1+ {v^2}/{2c^2}$. Does this mean that the formula that relates the change in kinetic energy to the work done on the system is accurate in a broader range of speeds than Newton's second law? Because that's accurate for low speeds up to 2nd order in the Taylor expansion of $\gamma$, while Newton's second law is only accurate for low speed up to the 1st order in the Taylor expansion.
The small parameter in question ought to be $\beta=v/c$. If you expand $\gamma\approx 1+ \tfrac12 \beta^2 + \tfrac38 \beta^4$, you find $$ \begin{align} \gamma &\approx 1+ \tfrac12 \beta^2 +\mathcal{O}(\beta^4)\\ c^2\gamma &\approx c^2 + \tfrac12 v^2 +\tfrac38 v^2\beta^2 + \mathcal{O}(v^2\beta^4) \end{align} $$ Neglecting $\beta^2$ and higher powers, we find that $$ \begin{align} \gamma &\approx 1\\ c^2\gamma &\approx c^2 + \tfrac12 v^2 \end{align} $$ The approximations in your example turn out to be the same order in $\beta$. One isn't more accurate than the other.
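The expansion itself can be checked with sympy (a quick sketch):

    import sympy as sp
    beta = sp.symbols('beta', positive=True)
    gamma = 1 / sp.sqrt(1 - beta**2)
    print(sp.series(gamma, beta, 0, 6))   # 1 + beta**2/2 + 3*beta**4/8 + O(beta**6)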
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How force is transferred from one body to another If there are 3 coins, namely 1, 2 and 3 as in the figure: when coin $1$ strikes coin $2$, coin $2$ passes the force to coin $3$ and coin $3$ moves away. Case 1: How does this happen? What exactly happens there that passes the force from coin $1$ to coin $3$? How does the force cause movement? I mean that when we push any object, why does it move? Case 2: What will happen if there are just 2 coins and coin $1$ strikes coin $2$ and coin $2$ moves? How and why does coin 2 move in this case? Please don't say that there is no opposite force or that the net force is not equal to 0.
A different example from your coins, but the same idea, is Newton's cradle. Pull one ball away and release it; it hits the first ball in the line and comes to nearly a complete halt. The ball on the opposite side, like your coins, gets most of the initial velocity and almost instantly swings in an arc nearly, but not quite, as high as the ball you dropped at the start. This example demonstrates that the final ball receives most of the energy and momentum that was in the first ball. A wave of compression moves through the intermediate balls. "I mean that if any force is applied on a body how does the body move. This could be a foolish question but I am really stuck on it." As to how a force makes a body move: the second body has no choice but to move, if it can move freely, because of the law of conservation of energy. Possibly, rather than thinking about how the body moves, ask yourself what would happen if bodies did not move when a force is applied to them?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When I boil a kettle, what stops all the water from turning (exploding!) in to steam in one go once it reaches 100°C? While making a cup of tea in the office kitchen, a colleague asked me this question and neither of us could answer with any certainty. We're guessing it has something to do with the pressure of the column of water, or temperature differences between the top and bottom, but does anyone know the real reason?
If you want to see all water in a container immediately turn to steam, you need a transparent container that you can seal. Fill the container 50% with water and tightly seal it. Place the container on an open flame and let it heat up. While it is heating, walk far away and watch the container through binoculars from some distance (e.g., 50-100 m should do it). Assuming that the container is a strong one, the water will heat up to a temperature much higher than 100 C before rupturing, meaning that once it does rupture, the water that is exposed to the atmosphere will be substantially superheated. At that point, very much of the water will immediately flash to steam. The vigorous expansion that results will hurl pieces of the container in all directions at high velocity, which is why you want to be far away. In other words, do not try this in your kitchen.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72", "answer_count": 7, "answer_id": 0 }
Near energy in the null of a Hertzian dipole Since $\mathbf E = -\nabla\Phi - \partial\mathbf A/\partial t$ one expects an oscillating $\mathbf E$ field even in the null of a Hertzian dipole unless the two right hand side terms cancel -- which they do in the far field of the null. However, in the near field of the null, the terms do not completely cancel, leaving a residual oscillating E-field. Since the null has, by definition, no $\nabla \times \mathbf A$ curl in the oscillating $\mathbf A$, there is no $\mathbf B$ thence no $\mathbf H$ field and therefore no $\mathbf E \times \mathbf H$ and since $\mathbf E \times \mathbf H$ is the only accepted definition for the dipole's Poynting vector, there is no accepted way for energy to be locally available at points along the dipole's null. If one places a particle of charge $q$ and mass $m$ along the null, it must experience a force, $\mathbf F=q\mathbf E$ and thence acceleration $\mathbf F=m\mathbf a$. Where does this energy come from, and how is it delivered without violating locality?
I believe this apparent contradiction to stem from a misunderstanding that energy can be transferred only by the one mechanism to which the Poynting vector applies: The Poynting vector is defined as ExH, and applies to a "launched" electromagnetic wave in propagation. In your example, the energy is being transferred by a quasi-static E-field in the absence of a B-field. There is no H, therefore there is no ExH, hence there is no Poynting vector. The Poynting vector does not apply to your example. N.B. the lack of an electromagnetic wave described by a Poynting vector in no way implies that energy cannot be transferred via other means. Energy may be transferred in many ways. E.g. kinetic energy transferred to a test-charge via a static electric field is always transferred via means other than those described by the Poynting vector. Ditto transfer of energy via lone magnetic, weak, strong, or gravitational fields, or mechanical, acoustic, thermodynamic, etc. mechanisms.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Most general Ansatz for cylindrically symmetric metric in GR? What would the most general Ansatz for a cylindrically symmetric metric in GR look like? To make this question more substantial, here is an example of what I have in mind. I ask this question in the spirit of how the Schwarzschild solution can be derived from an Ansatz. For Schwarzschild, as for instance described in Carroll's book, we first start with flat Minkowski space $$ds_{\text{Minkowski}}^2=-dt^2+dr^2+r^2d\Omega^2$$ and generalize it by modifying components. First we assume time independence as well as time reversal symmetry, which means that any term should be independent of $t$ and any cross components $dtdx_i$ must vanish. Then, perfect spherical symmetry demands that the $d\Omega^2$ part of the metric remains unchanged. Finally, we would define the $r$ coordinate such that the most general Ansatz becomes: $$ds^2=-e^{2\alpha(r)}dt^2+e^{2\beta(r)}dr^2+r^2d\Omega^2$$ Now, in the case of cylindrical symmetry I am interested in an Ansatz that does not assume time independence or time reversal symmetry. I am tempted to write $$ds_{\text{cylinder}}^2=-e^{2\alpha(t,r,z)}dt^2+e^{2\beta(t,r,z)}dr^2+r^2d\phi^2+e^{2\gamma(t,r,z)}dz^2$$ But I feel that this expression neglects some cross components between different variables which also would have to appear. What do you guys think? PS: Also, please note that I am including a $z$-dependence in the factors. A perfect cylinder would have a translation invariance in the $z$-direction. But what I am interested in is a situation where there is only a Killing vector $\partial_\phi$, but no Killing vector $\partial_z$.
According to "Exact Solutions of the Einstein Field Equations", the most general cylindrically symmetric metric is \begin{equation} ds^2 = e^{-2U} (\gamma_{MN} dx^M dx^N + W^2 d\phi^2) + e^{2U} (dz + A d\phi)^2 \end{equation} with Killing vectors $\eta = \partial_\phi$ and $\zeta = \partial_z$, and all functions independent of $z$ and $\phi$. The other two coordinates can be chosen such that \begin{equation} \gamma_{MN} dx^M dx^N = e^{2k}(d\rho^2 - dt^2) \end{equation} Additionally, if you also have the reflection symmetry $\phi \rightarrow -\phi$ and $z \rightarrow -z$, $A$ can be made to be 0. Edit: If you only have a Killing vector for $\phi$, it is not cylindrically symmetric, it is an axisymmetric spacetime. For just one Killing vector, what you can generally do is just remove the dependence on that coordinate from the metric and eliminate the cross terms with that coordinate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Mass - Unification of inertial and gravitational definitions As a kinetic definition, mass of a body is a measure of the translational inertia of the body. There is also the gravitational definition of mass. Can these definitions (inertial and gravitational) empirically be proved to be equivalent? Also, are these definitions applicable on a quantum scale? Finally, if the 2 definitions of mass are empirically equivalent, can a single definition be made to encompass the 2?
A priori, they could have been different things. The Equivalence Principle - the hypothesis that they are actually the same - is a core input to the theory of General Relativity. To the extent that General Relativity is empirically validated, we have evidence that these really are the same. There's no complete theory of quantum gravity, so I think we'd have to say that we don't know if the equivalence really holds all the way to the quantum scale.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Method of image charge for cylindrical conductor I am simply puzzled that the method of images is applied only to spherical and planar conducting surfaces. Is it (really) impossible to find an image charge or charge distribution which can simulate the behaviour of the potential in the volume of interest? Is there any method which may be used to find the image charge/charge distribution?
This work is an investigation into the nature of the image of a point charge on the axis of the cylinder: https://www.researchgate.net/publication/338881609_The_image_of_a_point_charge_in_an_infinite_conducting_cylinder?showFulltext=1&linkId=5e30d95f458515072d6aab92 It finds that the image is made up of a disk surface charge extending out to infinity from a radius of 2 times the cylinder radius, together with singular rings of radius 2, 4, 6, ... . The image rings are apparently equivalent to point charges placed in complex space, but the surface charge is somewhat chaotic. For a point charge off axis, the image is even more chaotic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/207918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
How to find kinetic energy given relativistic linear momentum? The relativistic energy of a particle is given by the expression \begin{equation} E^2 = m^2c^4 + p^2c^2 \end{equation} The rest energy is $E_{0}=mc^2$ and the momentum is $p=mc$. In the rest frame, the kinetic energy is $T=E-mc^2$. Ok, now in another frame of reference, we must include the Lorentz factor $\gamma$, where $\gamma=\frac{1}{\sqrt{1-v^2/c^2}}$. In a different reference frame, momentum is $p=\gamma mc$ and the kinetic energy is $T=(\gamma-1)mc^2$. Are these expressions correct? If so, I am confused. I have a question which asks me "The relativistic momentum is $p=mc$. What is the kinetic energy?". Should I conclude this is $T=E-mc^2=mc$? That is, the kinetic energy is also $p=mc$? Or is the correct conclusion that $\gamma=1$ and therefore the kinetic energy is $T=0$?
By definition, these equations are true in any frame. Linear momentum is $p=\gamma mv$, energy is $E=\gamma mc^2$, rest energy is $E_{0}=mc^2$, kinetic energy is $T=(\gamma-1)mc^2$. We've been given that in this frame $p=mc$. From this, we must conclude that $p=\gamma mv = mc$. Correct? $p=\gamma m v$ is true in any frame. This means $\gamma v = c$, i.e. $v/\sqrt{1-v^2/c^2} = c$. This is equivalent to $\sqrt{1-v^2/c^2}=v/c$ or $v=\frac{1}{\sqrt{2}}c$. Now, using the expression for kinetic energy $T=(\gamma-1)mc^2$, we find $\gamma$. Substituting $v=\frac{1}{\sqrt{2}}c$ into $\gamma$, I find $\gamma=\sqrt{2}$, which is approximately 1.41. Therefore, $T$ is around $0.41\,mc^2$.
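The same steps as a few lines of arithmetic (nothing here beyond the algebra above):

    import math
    beta = 1 / math.sqrt(2)              # from gamma*beta = 1  =>  beta = 1/sqrt(2)
    gamma = 1 / math.sqrt(1 - beta**2)   # = sqrt(2), about 1.414
    print(gamma, gamma - 1)              # T/(m c^2) = gamma - 1, about 0.414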
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Minimum hyperbolic distance for Rutherford scattering I saw in a textbook that the minimum distance for a hyperbolic trajectory in Rutherford scattering is given by $$r_{min}= \frac{1}{4\pi\epsilon_0}\frac{Z_1Z_2e^2}{2E}\left(1+\frac{1}{\sin\frac{\theta}{2} }\right) $$ But I'm not sure how to derive it. Could anyone help me fill in the gaps? My idea is to consider energy and angular momentum conservation $$\frac{1}{2}mv_0^2 = \frac{1}{2}mv_{min}^2 + \frac{1}{4\pi\epsilon_0}\frac{Z_1Z_2e^2}{r_{min}} $$ $$mv_0b = mv_{min}r_{min} \implies v_{min}=\frac{v_0b}{r_{min}}$$ which results in a quadratic equation $$r_{min}^2-\frac{Z_1Z_2e^2}{4\pi\epsilon_0E}r_{min} - b^2 = 0 $$ which I tried solving but couldn't get the answer. So I'm wondering if my approach is right, and if not, do let me know how to do it!
You're right on track but don't have enough equations. The equations which give you the solution are: \begin{align} k&=\frac{Z_1 Z_2 e^2}{4 \pi \varepsilon_0} \tag{1}\\ E&=\frac{1}{2}m v_{min}^2+\frac{k}{r_{min}} \tag{2}\\ \frac{1}{2}m v_{min}^2&=E \frac{b^2}{r_{min}^2} \tag{3}\\ b&=\frac{k}{2 E} \cot\left(\frac{\theta}{2}\right)\tag{4} \end{align} Equation 1 is just shorthand. Equation 2 is your second equation, but with $\frac{1}{2} mv_0^2$ replaced with $E$. Equation 3 is your fourth equation ($v_{min}=\frac{v_0 b}{r_{min}}$), but squared, multiplied by $\frac{1}{2}m$, and with $\frac{1}{2} mv_0^2$ replaced by $E$. Equation 4, finally, is the important result of Rutherford scattering relating the impact parameter to the scattering angle. Because I'm lazy I threw it into Mathematica, but if you make these substitutions (most importantly: Getting $v_{min}$, $v_0$, and $b$ OUT of your quadratic equation), it should be easy to do by hand. The mathematica code FullSimplify[ Solve[{eE == 1/2 m vmin^2 + k/rmin, 1/2 m vmin^2 == eE b^2/rmin^2, b == k/(2 eE) Cot[\[Theta]/2]}, {rmin, vmin, b}], Assumptions -> {eE > 0, m > 0}] spits out, as one solution, $$\frac{k \left(\sqrt{\csc ^2\left(\frac{\theta }{2}\right)}+1\right)}{2 \text{eE}}$$ which is exactly what you need. ("eE" is a single variable used in the mathematica code, and is not a product. This is because mathematica hates single capital letters. It represents the energy of the system, $\frac{1}{2} m v_0^2$.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can the unstable particles of the standard model be considered particles in their own right if they immediately decay into stable particles? How can the unstable particles of the standard model be considered particles in their own right if they immediately decay into stable particles? It would appear to a layman such as myself that these heavier unstable particles are just transient interplay of the stable forms.
Take for example an electron and a muon. The muon is unstable because it decays into an electron and two neutrinos in about 2$\mu$s. But a muon is not in some sense an excited electron. Both particles are excitations in a quantum field and they are both as fundamental as each other. The electron is stable only because there is no combination of lighter particles that it could decay into while conserving the total charge of $-e$ and total spin of $\tfrac{1}{2}$. Whether a particle decays or not depends on whether there are any lighter particles for it to decay to. A muon weighs about 105.7 MeV while an electron weighs about 0.511 Mev. So a muon can transform into an electron and have 105.2 Mev left over to go into the two neutrinos and the kinetic energies of all the particles. An electron can't transform in to a muon unless it can find the extra 105.2 MeV from somewhere. If we supply the extra energy, for example in the LEP Collider, then electrons can and indeed do "decay" into muons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 7, "answer_id": 2 }
Single Narrow Sunbeam I saw a single narrow sunbeam this cloudy, post-rain, morning as I was driving. The narrow beam, which seemed only a couple of inches wide, came directly from the sun, arched slightly and ended on the hood of my car. This beam turned with me as I entered a curve, then was gone. I've seen this a time or two before. What causes this unique sight?
'Arched Slightly' is a key comment here. Light does not bend unless going from one medium to another. Therefore, if the narrow beam of light seemed to bend in its path, it had to have been a phenomenon of reflection or refraction. It could be that you were sampling only part of a larger beam that was somehow getting to your eye via an unknown mechanism (I like the English bloke who used the word 'Bonnet'. That might have something to do with it). Either that or it was LGMs ('Little Green Men') probing you.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Temperature of a falling meteor I am reading the "What if?" article https://what-if.xkcd.com/20/ and I'm interested in its scientific background. Mr. Munroe writes: As it [the meteor] falls, it compresses the air in front of it. When the air is compressed, it heats it up. (This is the same thing that heats up spacecraft and meteors—actual air friction has little to do with that.) By the time it reaches the ground, the lower surface will have heated to over 500℃, which is enough to glow visibly. How can one make such an estimate? I wanted to use PV = nRT, but I don't know the volume and the difference in pressure. I tried to sum up the kinetic energy of all the air molecules bumping into the meteor, but the answer is nowhere near. Does anyone have an idea? Such an interesting problem.
It is true that most of the heating comes from compressing the air. The temperature of a falling meteor was in fact a problem on my Aerodynamics II exam, where I had to predict its temperature using shock waves. According to my estimate it was about 10,000 K. You need a proper understanding of compressible air flow in order to answer this question, and I know a little about it.
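To give a flavor of the kind of estimate involved, here is a rough sketch using the isentropic stagnation-temperature relation $T_0 = T\left(1+\tfrac{\gamma-1}{2}M^2\right)$; the Mach-3 impact speed is purely an illustrative assumption on my part, not a number taken from the article or the exam problem:

```python
import math

gamma = 1.4        # ratio of specific heats for air
R = 287.0          # specific gas constant of air, J/(kg K)
T_air = 288.0      # ambient temperature near the ground, K

v = 1000.0                              # assumed impact speed, m/s (illustrative only)
a = math.sqrt(gamma * R * T_air)        # speed of sound, ~340 m/s
M = v / a                               # Mach number, ~2.9
T0 = T_air * (1 + 0.5 * (gamma - 1) * M ** 2)
print(f"M = {M:.1f}, stagnation temperature ~ {T0:.0f} K ({T0 - 273:.0f} C)")
# ~790 K (~515 C), the same order as the ~500 C quoted in the question.
# At true meteoric entry speeds (tens of km/s) this perfect-gas formula breaks down;
# dissociation and ionization must be included, and careful shock analyses then give
# temperatures of order 10^4 K.
```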
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Why does wavelength determine the energy of a photon? The professor for my first-year university chemistry class remarked that the wavelength of a photon determines its energy. Why is it that the case? I've only completed high-school physics so far, so please bear that in mind in answering this question. Thank you.
Well, actually it doesn't. Knowing the wavelength allows you to calculate the energy, but it does not "determine" it in a causal way. Energy (E), wavelength ($\lambda$) and frequency ($\nu$) are related by $$E = h\nu =\frac{hc}{\lambda}$$ so if you know the wavelength or the frequency you can determine the energy. I think his use of "determine" confused you.
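As a quick numerical illustration of $E = h\nu = hc/\lambda$ (an example I'm adding, using CODATA constants):

```python
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

for lam_nm in (400, 550, 700):          # violet, green, red
    E = h * c / (lam_nm * 1e-9)
    print(f"{lam_nm} nm -> {E / eV:.2f} eV")
# shorter wavelength means higher photon energy: ~3.10 eV at 400 nm, ~1.77 eV at 700 nm
```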
{ "language": "en", "url": "https://physics.stackexchange.com/questions/208942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What are skeleton diagrams and what is their use in qft and many-body physics? How does one construct skeleton diagrams from specific Feynman diagrams (e.g. for the electronic Green function in QED and in many-body gases, for the polarization function, for the vertex function, for the photon Green function, for the phonon Green function)? Explanations and references for actual constructions would be greatly appreciated. What is the use of skeleton diagrams in qft and in many-body physics?
Skeleton diagrams are usually used to discuss general properties of the perturbation series in field theory. They help to prove renormalizability of a theory, or to prove properties of correlation functions. However, they are not used in general for explicit calculation. (The main counterexample is diagrammatic Monte Carlo, which tries to compute high orders of the skeleton-diagram series by doing Monte Carlo sampling of the diagrams.) For example, Gavoret and Nozières (1964) used skeleton diagrams to find the exact low-energy behavior of the propagator of bosonic condensates. For a (quick) discussion of these diagrams, see for example Quantum Field Theory by Lewis H. Ryder (around p. 350).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Would a tachyon be able to escape a black hole? Or at least escape from a portion of the hole inside the photon horizon?
Yes, it would. Tachyons are hypothetical particles that can travel faster than light. They would also require infinite energy to slow down to the speed of light, and they grow faster the more energy they lose. A tachyon positioned just right might even get stuck at the center for a Planck time or two (I don't know), but that would only speed it up a lot. Tl;dr: the closer a tachyon is to the center of a black hole, the faster it is.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Superposition of two wave functions of different Hilbert spaces I am trying to think of this problem for quite some time. Let's say, we have two sets of wave functions $\lbrace|\psi\rangle\rbrace$ and $\lbrace|\phi \rangle\rbrace$ and they belong to two different Hilbert spaces. That is, $$\hat{H_1}|\psi\rangle=E_1|\psi\rangle$$ and $$\hat{H_2}|\phi\rangle=E_2|\phi\rangle.$$ In the real space $\bf{R}$, their functional domains are disjoint. That is, if $\psi(x)$ is defined in $x\le0$, $\phi(x)$ is defined in $x>0$. In this case, is it possible to conceive some kind of superposition between the two waves? If so, how? I mean how do we define the superposed wave function and what can be said about the energy? This paper introduces such a concept http://dx.doi.org/10.1119/1.18854
Two different Hilbert spaces correspond to two different physical systems. Superposition of wave functions makes sense for one system (for one Hilbert space), since addition of vectors (quantum states) is defined in a particular vector space (Hilbert space). What you can do is to create a new Hilbert space by forming the tensor product of the two Hilbert spaces. And if you work in the coordinate representation, then you should expand the domain of each wave function to the entire real line, by multiplying $\psi \left( x \right)$ with the characteristic function of $\left( -\infty ,0 \right]$ and $\varphi \left( x \right)$ with the characteristic function of $\left( 0,+\infty \right)$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gamma matrices and trace operator I'm trying to show that the trace of the product of the following three Gamma (Dirac) matrices is zero, i.e. $$\text{tr}(\gamma_{\mu} \gamma_{\nu} \gamma_{5})=0 \text{.}$$ I attempted to use the fact that the trace operator is invariant under cyclic permutations and linear, and that $$\gamma_{\mu} \gamma_{5}= -\gamma_{5} \gamma_{\mu}, \text{ } (\gamma_{5})^{2}= I_4 \text{ (4 $\times$ 4 identity matrix)} \text{,}$$ where $\gamma_{5} \equiv i\gamma_{0} \gamma_{1} \gamma_{2} \gamma_{3}$. But whenever I do that, it seems that I keep going in circles. Any idea on how I should proceed?
Start by noticing that ${(\gamma^{\alpha})}^2 =1\cdot g^{\alpha \alpha}$ and that $$ \textrm{tr} (\gamma^{\mu}\gamma^{\nu}\gamma^5)= \textrm{tr}\left(\frac{1}{g^{\alpha \alpha}}{(\gamma^{\alpha})}^2\gamma^{\mu}\gamma^{\nu}\gamma^5\right)=\frac{1}{g^{\alpha \alpha}}\textrm{tr} (\gamma^{\alpha}\gamma^{\alpha}\gamma^{\mu}\gamma^{\nu}\gamma^5). $$ Now choose $\alpha\neq \mu,\nu$ and anticommute the second $\gamma^{\alpha}$ three times until the end, to obtain three minus signs as $$ \textrm{tr} (\gamma^{\mu}\gamma^{\nu}\gamma^5)= \frac{1}{g^{\alpha \alpha}}\textrm{tr} (\gamma^{\alpha}\gamma^{\alpha}\gamma^{\mu}\gamma^{\nu}\gamma^5) =- \frac{1}{g^{\alpha \alpha}}\textrm{tr} (\gamma^{\alpha}\gamma^{\mu}\gamma^{\nu}\gamma^5\gamma^{\alpha}). $$ At this point use the cyclicity of the trace to bring the last $\gamma^{\alpha}$ back to the beginning, which together with the coefficient in the denominator reproduces the original trace, leaving only the additional minus sign in front; this proves that the equation is satisfied only if both sides vanish.
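If you want a sanity check rather than a proof, the identity is easy to verify numerically in an explicit representation (an illustration I'm adding, using the Dirac representation of the gamma matrices):

```python
import numpy as np

Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z], [Z, -I2]])
gammas = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]

for mu in range(4):
    for nu in range(4):
        assert abs(np.trace(gammas[mu] @ gammas[nu] @ g5)) < 1e-12
print("tr(gamma_mu gamma_nu gamma_5) = 0 for all mu, nu")
```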
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Amplitude of light across material boundaries Does the amplitude of the light ray decrease when it moves from a rarer to a denser medium? I think that since amplitude depends upon the energy of the light ray, it should decrease. This is because the kinetic energy of the light wave decreases (velocity decreases as light travels from rarer to denser medium), hence the energy of the wave falls. This explanation does not seem convincing; could anyone provide some insights?
The amplitude of the electric field in the medium depends on the medium's permittivity, which is not directly related to its density. Kinetic energy of the light wave decreases(velocity decreases as light travels from rarer to denser medium), hence the energy of the wave falls. This is not true -- the energy in an electromagnetic wave does not depend on its velocity. Additionally, the speed depends on the permittivity and permeability, not the density.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What experiments have been done that confirm $E=mc^2$? What experiments have been done that confirm $E=mc^2$? Are there experimental results that contradict $E=mc^2$? Or are experimental results consistently showing this famous formula to be true?
Another set of experiments which supports $E=mc^2$ is Compton scattering. The mass-energy of the electron is an important quantity in analyzing these events, and the results are consistent across a wide range of energies for the primary photon and scattering angles. The energy of the secondary photon is given by $$E_{\gamma '}= \frac{E_{\gamma}}{1+\left(\frac{E_{\gamma}}{m_ec^2} \right)\left(1-\cos\theta\right)} $$ EDIT: Here's a link to the Wikipedia version of the derivation, although any modern physics or nuclear physics text will have it as well.
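For a concrete feel of how $m_ec^2$ enters, here is a small evaluation of that formula for a 0.662 MeV photon (the common Cs-137 gamma line; the choice of source is just my example):

```python
import math

me_c2 = 0.511   # electron rest energy, MeV
E_in = 0.662    # incident photon energy, MeV (Cs-137 gamma line)

for deg in (30, 90, 180):
    th = math.radians(deg)
    E_out = E_in / (1 + (E_in / me_c2) * (1 - math.cos(th)))
    print(f"{deg:3d} deg -> {E_out:.3f} MeV")
# the 180-degree value, ~0.184 MeV, is the familiar backscatter peak seen in
# gamma spectroscopy, and the analysis only works because m_e c^2 = 0.511 MeV.
```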
{ "language": "en", "url": "https://physics.stackexchange.com/questions/209919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why is it easier to drop on to a downslope? On a bicycle, why is it easier to land from a drop or jump on a slope going downwards than landing on a flat surface or on an upslope? I've already heard answers such as "because that's how a bike can best keep going with all the momentum it's carrying from the drop" but I'm asking for a more elaborate answer that can give a good understanding of the physics involved.
When you're landing from a jump, you're moving in a forward and downward direction. Landing on a downward slope simply eases the transition as this is already your direction of momentum. A flat or uphill slope will rapidly change your momentum to match the surface.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to calculate force / torque on non-flat lever, i.e. dolly See attached image. The mass is being rotated on a lever where the pivot point (P) is a certain distance ($L_2$) from the right angle at the bottom. How do I calculate the force necessary to apply horizontally at point U to lift the mass in the worst case (i.e. where the rotation position requires the maximum force), ignoring for now the weight of the lever itself, friction, etc. The structure will never rotate counter-clockwise from its illustrated position, and will rotate up to $60^\circ$ clockwise. Also assume the mass will be distributed evenly across $L_1$, i.e. the center of the mass is in the center of $L_1$. For those interested: this is part of a robotics project. A string attached to a motor/pulley system will be pulling at point U. I'm trying to determine if the motor has sufficient stall torque and if so, how much mass we can reasonably expect to be able to lift.
If the lift angle is $\theta$ (shown at zero in the diagram) then the payload lever arm is $$x_1 = \tfrac{L_1}{2} \cos \theta+L_2 \sin\theta$$ The force lever arm is $$x_3 = L_3 \cos\theta$$ Static balance exists when $$ \left. \vphantom{\int } (M g) x_1 = F x_3 \right\} \\F = \frac{x_1}{x_3} M g = \frac{\tfrac{L_1}{2} \cos \theta+L_2 \sin\theta}{L_3 \cos\theta} M g $$
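To size the motor, it may help to scan this expression over the 0 to 60 degree range and pick the worst case; the mass and dimensions below are placeholders I have made up purely for illustration:

```python
import math

def required_force(theta_deg, M, L1, L2, L3, g=9.81):
    th = math.radians(theta_deg)
    return M * g * (0.5 * L1 * math.cos(th) + L2 * math.sin(th)) / (L3 * math.cos(th))

M, L1, L2, L3 = 5.0, 0.30, 0.05, 0.40        # kg and metres, purely illustrative
worst = max(range(0, 61), key=lambda d: required_force(d, M, L1, L2, L3))
print(worst, round(required_force(worst, M, L1, L2, L3), 1))
# the force grows monotonically with tilt here (it goes as L1/2 + L2*tan(theta)),
# so the worst case for this geometry is at the full 60-degree rotation.
```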
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why isn't the acceleration at the top point of a ball’s journey zero? When I shoot a ball vertically upward, its velocity is decreasing since there is a downward acceleration of about $9.8\,\mathrm{ms}^{-2}$. I have read that at the top most point, when $v = 0$, the acceleration is still $9.8\,\mathrm{ms}^{-2}$ in the downward direction where $v=0$. That is, the acceleration is still the same. But at the highest point, the ball is stationary, so it is not even moving. How can it accelerate?
You throw the ball upwards with velocity $v$ and it returns to your hand with velocity $-v$. Let's draw a graph showing the velocity as a function of time: Acceleration is defined as: $$ a = \frac{dv}{dt} $$ so it is the gradient of the line in this graph. The velocity-time line is straight so the gradient is constant which means the acceleration is constant. The gradient is just the gravitational acceleration $9.81$ m/s$^2$. The point is that the gradient, and hence the acceleration, does not depend on $v$ at all. So it is the same value of $9.81$ m/s$^2$ when $v = 0$ just as it is at all other values of $v$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 9, "answer_id": 0 }
Can tidal forces significantly alter the orbits of satellites? I would assume that there are other larger, more significant, forces acting on artificial satellites, but can tidal forces drastically alter the orbit of a satellite over time? I was thinking this could especially be an issue for a satellite in geostationary orbit, because they have to be extremely precisely positioned. However, I could see this being an issue for satellites in other orbits as well, just not to the same degree.
Satellites in geosync are not "precisely positioned". Instead, they drift around and require station-keeping thrusters. If, by "tidal forces" you mean gravitational forces associated with the sun and the moon, then the answer is yes, and the effects are quite important.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why must the speed of the aether wind be so small compared to the speed of light? I was doing some reading on the Michelson-Morley experiment. One of the principal equations in the analysis is this one: $$\frac{2w}{c} \times \frac{1}{1-\frac{v^2}{c^2}}$$ where $v$ is the speed of the aether wind, $c$ is the speed of light, and $w$ is the distance light travels from point A to point B. The equation is then changed to this one: $$\frac{2w}{c} \left( 1+\frac{v^2}{c^2} \right)$$ The two equations are nearly equal, given the fact that if $x$ is a very small number, $1+x$ is approximately the same as $1/(1-x)$. So the second equation depends on the fact that the speed of the aether wind is very small compared to the speed of light. My question is: why did Michelson think that the speed of the aether wind is very small compared to the speed of light? The text I was reading mentioned something about the timing of the eclipses of Jupiter's satellites, but didn't go into detail.
The speed of the earth in its orbit about the sun is about 30 km/s. Michelson assumed that the speed of the earth through the rest frame of the ether was of this order.
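To see how small the expected effect is, note that it enters only at order $(v/c)^2$:

```python
v = 3.0e4   # Earth's orbital speed, m/s
c = 3.0e8   # speed of light, m/s
print((v / c) ** 2)   # 1e-8, so 1/(1 - v^2/c^2) differs from 1 + v^2/c^2 only at order 1e-16
```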
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are the functions of these coefficients $c_1,c_2,c_3,c_4$ in $ \psi_{sp^3}= c_1\psi_{2s}+ c_2\psi_{2p_{x}} + c_3\psi_{2p_y}+ c_4\psi_{2p_{z}}$? Hybridised orbitals are linear combinations of atomic orbitals of same or nearly-same energies. Atomic orbitals interfere constructively or destructively to give rise to a new orbital which is what we call hybridised orbital. This is the definition I'm quite acquainted with. But I couldn't understand one thing. What are $c_1,c_2,c_3,\ldots?$ For instance, $$\psi_{sp^3}= c_1\psi_{2s}+ c_2\psi_{2p_{x}} + c_3\psi_{2p_y}+ c_4\psi_{2p_{z}}.$$ I've read many books, one of which states that these coefficients determine the directional properties of the hybrid, while other sources say these coefficients are normalizing constants, that is, $$c_1^2 + c_2^2 + c_3^2 + \cdots = 1.$$ But what is the necessity of the sum of the square of the coefficients to be equal to $1?$ Here is the quote: [...] \begin{align} ψ_1 &= c_{1,1} φ_1 + c_{1,2} φ_2 + ... + c_{1,n} φ_n\\ ψ_2 &= c_{2,1} φ_1 + c_{2,2} φ_2 + ... + c_{2,n} φ_n\\ \vdots\\ ψ_n &= c_{n,1} φ_1 + c_{n,2} φ_2 + ... + c_{n,n} φ_n \end{align} Here $n$ atomic orbitals (with their wave functions $φ_1, φ_2, ..., φ_n$) are used to construct n hybrid orbitals ($ψ_1, ψ_2, ..., ψ_n$) through a linear combination, where the coefficients $c_{1,1}, c_{1,2}, ..., c_{n,n}$ are normalization constants that must fulfil some requirements: Hybrid orbitals must be normal: $$ c_{1,1}^2 + c_{1,2}^2 + ... + c_{1,n}^2 = 1$$ I then compared the above with the quantum superposed state $$|\psi\rangle= |1\rangle c_1 + |2\rangle c_2$$ where $|1\rangle,|2\rangle$ are orthogonal states. Here $c_1^2 + c_2^2= 1.$ So, is hybridization a superposition? Can anyone please explain what these coefficients are actually meant for? Why should their squares add to $1?$
This is quantum mechanics, my friend. The statement simply says that one hybridized orbital consists of many "pure" orbitals. In your first equation, one hybrid orbital has four pure orbitals $2s, 2p_x, 2p_y, 2p_z$. The coefficients in front of each term can be thought of as how much of one particular kind of pure orbital can be found in the final hybrid orbital. But the coefficient itself has no physical meaning. Its square does. The square of a coefficient gives the probability of finding your hybrid state in that one particular pure orbital. For a concrete example, consider a simpler statement: $\psi_H = \frac{1}{\sqrt{2}}(\psi_{2s}+\psi_{2p_x})$. If you make a measurement, half of the time (the square of $\frac{1}{\sqrt{2}}$) you will get $2s$ and the other half $2p_x$. Since the sum of all probabilities should be unity, the sum of the squares of all the coefficients must be one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Primitive unit cell of fcc When I consider the primitive unit cell of an fcc lattice (red in the image below) the lattice points are only partially part of the primitive unit cell. All in all, the primitive unit cell contains only one single lattice point. My question is: how much does each point at the corners of the red primitive unit cell contribute? At every corner a point is only partially inside the red primitive unit cell, such that all parts together form a single point. How big are these individual parts? In principle it should be possible to calculate that, but I hope there are known results in the literature. Unfortunately I can't find any such thing...
Referring to your figure: Each corner atom contributes 1/18. The top, bottom, left and right atoms on the faces each contribute 1/9. The closest and furthest atoms on the faces each contribute 2/9. To calculate these numbers one needs to find the relevant angles, which are nothing but 60 or 120 degrees. Here is the method explicitly:
{ "language": "en", "url": "https://physics.stackexchange.com/questions/210963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Conditions on expressing magnetic field in terms of curl of current density Given a current density distribution $\mathbf J(\mathbf x)$ inside a closed bounded region $\Omega$, the magnetic field at any point $\mathbf y$ outside of $\Omega$ can be expressed as $$ \begin{aligned}\mathbf B(\mathbf y)&=\frac{\mu_0}{4\pi}\int_\Omega\mathbf J(\mathbf x)\times\nabla_{\mathbf x}\frac{1}{|\mathbf x-\mathbf y|}d^3\mathbf x\\ &=\frac{\mu_0}{4\pi}\int_\Omega\left[\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)-\nabla_{\mathbf x}\times\left(\frac{\mathbf J(\mathbf x)}{|\mathbf x-\mathbf y|}\right)\right]d^3\mathbf x\\ &=\frac{\mu_0}{4\pi}\int_\Omega\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)d^3\mathbf x-\frac{\mu_0}{4\pi}\int_{\partial\Omega}\mathbf n(\mathbf x)\times\left(\frac{\mathbf J(\mathbf x)}{|\mathbf x-\mathbf y|}\right)d^2 S(\mathbf x) \end{aligned}$$ where $\partial\Omega$ is the boundary of $\Omega$, $n(\mathbf x)$ is the unit normal of $\partial \Omega$ and $S(\mathbf x)$ is the area of the surface element. Now, if the current density $\mathbf J(\mathbf x)$ is zero at the boundary $\partial\Omega$ (this can be achieved by slightly enlarging $\Omega$ if $\mathbf J(\mathbf x)$ is not zero at $\partial\Omega$) we can then drop the second term on the last line. Now we simply have $$ \begin{aligned}\mathbf B(\mathbf y)&=\frac{\mu_0}{4\pi}\int_\Omega\frac{1}{|\mathbf x-\mathbf y|}\nabla_{\mathbf x}\times\mathbf J(\mathbf x)d^3\mathbf x \end{aligned}.$$ If the current density $\mathbf J(\mathbf x)$ is continuous and differentiable, the above conclusion should be correct. However, $\mathbf J(\mathbf x)$ might not be continuous in $\Omega$, e.g., infinite thin coils inside $\Omega$ carrying electrical current. Is the above derivation correct for $\mathbf J(\mathbf x)$ containing delta functions? What kind of singularities in $\mathbf J(\mathbf x)$ is permitted?
Interesting observation. As you have stated, the second equation is only valid when the boundary contains the entire current distribution inside. But is this what you are asking? You should open this question for objections as well.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Where does the force appear when considering object interactions in another reference frame? Imagine I am sitting on an asteroid with my buddy and drinking a beer. When the bottles are empty we throw them simultaneously in opposite directions perpendicular to the asteroid's movement. What will happen? From the logical standpoint and from momentum conservation, our velocity should not change - the total momentum of two bottles is zero in the asteroid's frame of reference. Suppose somebody is watching the asteroid from another reference frame (velocity not equal to zero). According to Newton's second law, the force is equal to the change of momentum over time. The mass of asteroid was changed (remember the bottles). The momentum was changed ($M\times V$). Where is the force?
The caveat here is that the second law is stated that net force is equal to the change in momentum. Assuming you and your buddy are not too wasted and are able to synchronize throwing the bottles off with the exact same force, exactly in opposite directions and through the center of mass, the net force is zero, and therefore there is no change in momentum of the asteroid. The momentum of each bottle changes, but they are equal in magnitude, and opposite in sign, and therefore a net of zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Does a changing magnetic field impart a force on a stationary charged particle? Does a collapsing and re-establishing magnetic field impart a force on a stationary charged particle? Does the charge particle get repelled and or attracted? Does it move or spin?
The relation $\nabla\times\mathbf E = -\partial\mathbf B/\partial t$ basically comes from Faraday's flux law. The flux law does not follow from the Lorentz force in every case. When a loop is moving, the flux law and the Lorentz-force argument both lead to the same result. But when the loop is static and the field is changing, the Lorentz force law by itself does not seem to work, while the flux rule still gives the explicit answer. This situation makes Faraday's flux rule, in the case where the loop is stationary and the field is changing, a fundamental law in its own right. Just imagine a charged particle (with some initial velocity) moving in a time-varying magnetic field: if there is an induced electric field in the region, the particle's kinetic energy must change. Even going to a relativistic treatment, no new equations reveal the presence of the induced electric field. So the point worth musing on here is that Faraday's law is something invariant and fundamental, but it is worth asking why that is so.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
How does color (or reflection in general) work? I'm confused, does the absorption and emission determine the color of something? Or does that only happen when something is emitting energy? When light hits an object, the photons get absorbed, then emitted with a different wavelength right?
Understanding the refractive index of a material helps one understand the colors to expect under given lighting. When light is traveling through a medium its phase is shifted according to the material's optical properties and especially the distance the light travels inside the material. This also applies to the angle of reflection if a surface does not allow the photon to travel through it. Another source of relevant information is the study of spectroscopy, which 'illuminates', pun intended, correlations between specific colors and specific atoms, molecules, meta-materials, etc. Essentially, atoms each have their own signature spectra, which helps identify them in physics, astronomy and various pursuits. Spectroscopy is also used in the development of optical lattices and unique molecular structures.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why does light bend? I read about the dispersion of light by a prism and a block (slab), but I don't understand why light bends at all. I know that red light has the longest wavelength and that energy is inversely proportional to wavelength, hence red light contains the least energy. I also know that it bends the least. But why? Why does red light not bend as much as violet light? Please don't use Snell's law in your answer.
I came to know that red light has the longest wavelength and then I read a formula, Energy is inversely proportional to wavelength. This is a quantum mechanics formula, $E=h\nu,$ where $\nu$ is the frequency. That means that red light contains the least energy. And it bends the least. WHY? Why does it not bend as much as violet? (I know they have more energy, but what makes them bend?) A crystal is a many-body, organized quantum mechanical entity. Even though it is composed of zillions of atoms, it can be treated quantum mechanically as one entity when scattering happens, i.e. a photon hitting the crystal. The quantum mechanical solution will give a probability distribution for the scattering of a single photon getting through the crystal. This probability distribution has a sharp maximum at the dispersion angle of the crystal. This is BECAUSE the classical framework emerges from the underlying quantum mechanical one, has to be consistent with it, and can be shown to be. The difference in the energy of the photon makes a difference in the maximum of the scattering angle because the energy enters the scattering equations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
New particles found using the LHC After finding the Higgs boson in 2012, what has CERN found recently using the Large Hadron Collider?
Here is a (partial?) list of new hadrons discovered at LHC experiments: $\chi_b(3P)$: a $b\overline{b}$ bound state, discovered by ATLAS in 2011; $\Xi_b(5945)^0$: a $bsu$ bound state, discovered by CMS in 2012; $\Xi_b^\prime(5935)^-$ and $\Xi_b^\star(5955)^-$: $bsd$ bound states, discovered by LHCb in 2014; $P_c(4380)$ and $P_c(4450)$: $c\overline{c}uud$ bound states, discovered by LHCb in 2015.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Interpretation of cosmological redshift I was trying to understand why we cannot explain the observed redshift of distant galaxies using special relativity and I came upon this article by Davis and Lineweaver. Unfortunately when I arrive at section 4.2, where the authors explain why we cannot use special relativity to explain the observed redshift, I get stuck. In particular I don't understand this sentence: "We calculate D(z) special relativistically by assuming the velocity in $v = HD$ is related to redshift via Eq. 2, so...". What bothers me is the assumption that velocity is related to distance linearly. I was thinking that in a special relativistic model the basic assumptions were: 1)Relativistic Doppler shift formula $$ 1+z=\sqrt{\frac{1+v/c}{1-v/c}} $$ 2)Observed Hubble law $$ z=\frac{H}{c} d $$ Combining this two i get the following relation between velocity and distance $$ \sqrt{\frac{1+v/c}{1-v/c}}-1=\frac{H}{c} d $$ and not the one proposed in the article.
This is just the approximation that $\beta \equiv v/c \ll 1$. Because $\frac{1}{1-x} \approx 1 + x$ for small $x$, $$\left[ \frac{1+\beta}{1-\beta} \right]^{1/2} \approx \left[ (1 + \beta)^2 \right]^{1/2} = 1 + \beta$$ so $z \approx \beta$. Thus, $\frac{v}{c} \approx \frac{H}{c}d$, and $$v \approx H\cdot d$$
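A quick series expansion (a SymPy check I'm adding) confirms the low-$\beta$ agreement:

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)
doppler = sp.sqrt((1 + beta) / (1 - beta))
print(sp.series(doppler, beta, 0, 3))
# 1 + beta + beta**2/2 + O(beta**3): to first order z ~ beta, so z = (H/c) d gives v ~ H d
```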
{ "language": "en", "url": "https://physics.stackexchange.com/questions/211797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why is torque the cross product of the radius and force vectors? I understand the torque vector to be the cross product of the radius (moment arm) and force vectors, but that means the torque would be perpendicular to the radius and force vectors, which makes no sense to me, e.g. a force applied tangent to the surface of a car tire creates a torque along the line of the axle. I'm pretty sure I am just misunderstanding a simple formula, so I wanted to make sure. And, when you use the formula for torque, is torque defined as a vector or just a scalar? I would think it would be a vector.
A force acts upon a line of action in 3D space. The force vector can be anywhere along this line and it won't change the situation. Torque is the moment of force because it conveys the (perpendicular) distance at which this force acts. Any component of location along the line of action needs to be ignored and this is achieved with the vector cross product. Other common moments in mechanics are: * *Moment of Rotation - Linear velocity is the moment of rotation because the velocity of a point A depends on the perpendicular distance to the axis of rotation $$\mathbf{v}_A = \mathbf{r} \times {\boldsymbol \omega}$$ where $\mathbf{r}$ is the location of the axis relative to A. *Moment of Force - Torque is the moment of force because the equipollent torque at a point A depends on the perpendicular distance to the line of action $${\boldsymbol \tau}_A = \mathbf{r} \times \mathbf{F}$$ where $\mathbf{r}$ is the location of the line of action relative to A. *Moment of Momentum - Angular momentum is the moment of momentum because the angular momentum about a point A depends on the perpendicular distance to the axis of momentum $$\mathbf{L}_A = \mathbf{r} \times \mathbf{p}$$ where $\mathbf{r}$ is the location of the axis of momentum relative to A. All of the above are similar because they are manifestations of the same law. It is the law described by Julius Plücker when he used the moment of a line to describe the location of a 3D line in space. Rotation, Momentum and Forces all contain lines in space. Their moments are a set of (homogeneous) coordinates describing the closest point on the line to the point of measurement.
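As a tiny worked version of the car-tire example from the question (the numbers are mine, just for illustration):

```python
import numpy as np

r = np.array([0.3, 0.0, 0.0])    # contact point 0.3 m from the axle, along x
F = np.array([0.0, 50.0, 0.0])   # 50 N force tangent to the rim, along y
tau = np.cross(r, F)
print(tau)   # prints [0. 0. 15.]: a 15 N m torque directed along z, i.e. along the axle
```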
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
What is a Christoffel symbol? * *What is a Christoffel symbol? *I often see that Christoffel symbols describe gravitational field and at other times that they describe gravitational accelerations. Then, on some blogs and forums, people say this is wrong because Christoffel symbol is NOT a tensor and thus has no physical meaning. Which of these statements is the right one? *What is the significance of a Christoffel symbol in differential geometry and General Relaivity?
The Christoffel symbols occur as soon as you have curvilinear coordinates, even in a flat space (i.e. without any gravity or curvature). Consider a flat space with curvilinear coordinates ($x^1, x^2, ...$). Because of the curvilinear coordinates the tangent vectors ($\vec{e}_1, \vec{e}_2, ...$) vary from place to place. So when advancing from position $x^\beta$ to position $x^\beta+dx^\beta$ the tangent vectors change from $\vec{e}_\alpha$ to $\vec{e}_\alpha+d\vec{e}_\alpha$. You can expand these changes $d\vec{e}_\alpha$ in terms of the coordinate changes $dx^\beta$ $$d\vec{e}_\alpha=\Gamma^\mu{}_{\alpha\beta}\ \vec{e}_\mu \ dx^\beta,$$ or equivalently (using partial derivatives) $$\frac{\partial\vec{e}_\alpha}{\partial x^\beta}=\Gamma^\mu{}_{\alpha\beta}\ \vec{e}_\mu.$$ This expansion makes up a definition of the Christoffel symbols $\Gamma^\mu{}_{\alpha\beta}$ (see also at Christoffel symbols - Definition in Euclidean space). There is only geometry involved so far, no physics.
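To make this concrete, here is a small SymPy sketch (my own illustration) that computes the Christoffel symbols of the flat plane in polar coordinates from the standard formula $\Gamma^{\mu}{}_{\alpha\beta}=\tfrac{1}{2} g^{\mu\nu}\left(\partial_\alpha g_{\nu\beta}+\partial_\beta g_{\nu\alpha}-\partial_\nu g_{\alpha\beta}\right)$; there is no gravity here, just curvilinear coordinates:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.diag(1, r**2)      # metric of the flat plane in polar coordinates
ginv = g.inv()

def christoffel(m, a, b):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[m, n]
        * (sp.diff(g[n, b], coords[a]) + sp.diff(g[n, a], coords[b]) - sp.diff(g[a, b], coords[n]))
        for n in range(2)))

for m in range(2):
    for a in range(2):
        for b in range(2):
            G = christoffel(m, a, b)
            if G != 0:
                print(f"Gamma^{coords[m]}_({coords[a]} {coords[b]}) = {G}")
# prints Gamma^r_(theta theta) = -r and Gamma^theta_(r theta) = Gamma^theta_(theta r) = 1/r,
# exactly the terms that appear because the tangent vectors change from place to place.
```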
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Lowering/raising metric indexes So, I was chatting with a friend and we noticed something that might be very, very, very stupid, but I found it at least intriguing. Consider Minkowski spacetime. The trace of a matrix $A$ can be written in terms of the Minkowski metric as $\eta^{\mu \nu} A_{\mu \nu} = \eta_{\mu \nu} A^{\mu \nu} = A^\mu_\mu$. What about the trace of the metric? Notice that $\eta^\mu_\mu$ cannot be written as $\eta_{\mu \nu} \eta^{\mu \nu}$, because this is equal to $4$, not $-2$. It seemed to us that there is some kind of divine rule that says "You shall not lower nor raise indexes of the metric", because $\eta^{\mu \nu} \eta_{\nu \alpha} = \delta^\mu_\alpha \neq \eta^\mu_\alpha$. Is the metric immune to index manipulations? Is this a notation flaw or am I being ultra-dumb?
The mistake you made is this: $\eta^{\mu}_{\nu} \neq \eta_{\mu\nu} $. When you raise index $\mu$ from downstairs to upstairs, the matrix elements change. $\eta^{0}_{0} = 1$, $\eta_{00} = -1$. That is why if you take the trace of $\eta_{\mu\nu}$, you get 2, but if you take the trace of $\eta^{\mu}_{\nu}$ you get 4.
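The bookkeeping is easy to check with explicit matrices (a quick numerical illustration I'm adding, using the $(-,+,+,+)$ signature of the answer above):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # eta_{mu nu}
eta_inv = np.linalg.inv(eta)                # eta^{mu nu}
mixed = eta_inv @ eta                       # eta^mu_nu, i.e. the Kronecker delta

print(np.trace(eta))                        # 2.0
print(np.trace(mixed))                      # 4.0
print(np.einsum('ab,ab->', eta_inv, eta))   # eta^{mu nu} eta_{mu nu} = 4.0
```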
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Pendulum's motion is simple harmonic motion For a pendulum's motion to be simple harmonic motion (S.H.M.) is it necessary for a pendulum to have small amplitude or S.H.M. can be produced at large amplitudes as well? If it is really necessary for an S.H.M. to have small amplitudes then why is it? because even at large amplitudes there is restoring force pulling the pendulum toward mean position and its acceleration is directly proportional to the displacement.
It's just because at large angular displacements the motion no longer approximates the SHM of, say, a block on a spring with no friction. The restoring torque is proportional to $\sin\theta$ rather than to the angular displacement $\theta$ itself; only for small amplitudes, where $\sin\theta \approx \theta$, is the acceleration proportional to the displacement, so only then does it act like SHM.
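To put a number on "small", compare the exact restoring term $\sin\theta$ with the angle $\theta$ itself (a quick check I'm adding):

```python
import math

for deg in (5, 15, 30, 60):
    th = math.radians(deg)
    err = (th - math.sin(th)) / math.sin(th) * 100
    print(f"{deg:2d} deg: theta exceeds sin(theta) by {err:.1f}%")
# about 0.1% at 5 degrees, ~5% at 30 degrees, ~21% at 60 degrees, so the motion
# is close to simple harmonic only while the amplitude stays small.
```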
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What is the 'area law' in the context of matrix product states? I am trying to get into the topic of matrix product states by reading this: A practical introduction to tensor networks: Matrix product states and projected entangled pair states. R. Orús. Ann. Phys. 349, 117 (2014), arXiv:1306.2164. There, often, the word "area-law" is mentioned, but it's not very well explained what is meant by that... It's somehow that states in a Hilbert space are entangled with the neighbored states. (is that right?) But why? And what is meant by a local Hamiltonian?
The area law says that the entanglement of any part of a system with the rest of of the system scales like the boundary (the "surface area") of the region. E.g., in a one-dimensional chain, the entanglement of a contiguous block with the rest should be bounded by a constant, and in 2D, the entanglement of e.g. a square region with the rest should scale like the linear size of this square. The area law is a property which is proven to be satisfied by ground states of local gapped Hamiltonians in one dimension (see arXiv:0705.2024 and arXiv:1301.1162), and the corresponding statement is believed to be true in two dimensions. However, even for systems without a gap the area law is only mildly violated (in that the entanglement does not grow like the volume). Local Hamiltonian refers to the fact that the Hamiltonian is a sum of terms each of which only acts on a small number of closeby spins, e.g. nearest neighbors on a lattice.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bernoulli principle and particle Bernoulli's principle describes the flow of a fluid for steady, incompressible flow along a streamline. But it is stated for a particle of the fluid along a streamline. My question is: does a particle of fluid refer to a molecule or to a group of molecules?
Bernoulli is a continuum rather than a microscopic description of fluid flow. Where you have used 'particle' it should really be a 'parcel' of fluid, which indicates a group containing a statistically representative number of particles (e.g. molecules) that collectively exhibit macroscopic behavior.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/212881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Where does the energy go in a rocket when no work is done? While playing Kerbal Space Program, I wondered where my chemical energy would go when fired at 90° to the motion. It would do no work on the rocket, but all that energy has to go somewhere, right? Anyway, my question is, where does the energy go?
Very little of the energy from a rocket engine ever goes to the kinetic energy of the rocket. The only way you get perfect conversion to KE of the rocket is when the propellant is directed in the opposite direction of motion and when the ejection velocity is exactly equal to the speed of the rocket. In that case, the propellant winds up containing 0 kinetic energy, thus all the kinetic energy liberated in the process goes into the rocket. If the propellant is fired perpendicular to the direction of motion, then the rocket sees 0 change in its own kinetic energy. For the record, this only applies to one frame of reference (probably that of the nearby planet). In this scenario that you have described, all of the change in kinetic energy liberated by the rocket engine goes into the propellant. Of course, the total energy of the reaction is much more, and a great deal of that goes to heat.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/213279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Gradient, divergence and curl with covariant derivatives I am trying to do exercise 3.2 of Sean Carroll's Spacetime and geometry. I have to calculate the formulas for the gradient, the divergence and the curl of a vector field using covariant derivatives. The covariant derivative is the ordinary derivative for a scalar,so $$D_\mu f = \partial_\mu f$$ Which is different from $${\partial f \over \partial r}\hat{\mathbf r} + {1 \over r}{\partial f \over \partial \theta}\hat{\boldsymbol \theta} + {1 \over r\sin\theta}{\partial f \over \partial \varphi}\hat{\boldsymbol \varphi}$$ Also, for the divergence, I used $$\nabla_\mu V^\mu=\partial_\mu V^\nu + \Gamma^{\mu}_{\mu \lambda}V^\lambda = \partial_r V^r +\partial_\theta V^\theta+ \partial_\phi V^\phi + \frac2r v^r+ \frac{V^\theta}{\tan(\theta)} $$ Which didn't work either. (Wikipedia: ${1 \over r^2}{\partial \left( r^2 A_r \right) \over \partial r} + {1 \over r\sin\theta}{\partial \over \partial \theta} \left( A_\theta\sin\theta \right) + {1 \over r\sin\theta}{\partial A_\varphi \over \partial \varphi}$). I was going to try $$(\nabla \times \vec{V})^\mu= \varepsilon^{\mu \nu \lambda}\nabla_\nu V_\lambda$$ But I think that that will not work. What am I missing? EDIT: The problem is that the ortonormal basis used in vector calculus is different from the coordinate basis.
The gradient is a vector, not a covector, hence : \begin{equation} \vec{\nabla} f = \nabla^\mu f = g^{\mu\nu} \nabla_\nu f = g^{\mu\nu} \partial_\nu f \end{equation}
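A short SymPy sketch (my own addition) makes the EDIT's point explicit: raising the index of $\partial_\mu f$ with the spherical metric and converting to the orthonormal (hatted) basis reproduces the textbook gradient quoted in the question:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
coords = [r, th, ph]
f = sp.Function('f')(r, th, ph)

h = [1, r, r * sp.sin(th)]            # scale factors, i.e. sqrt of the diagonal metric entries
g = sp.diag(*[hi**2 for hi in h])     # ds^2 = dr^2 + r^2 dtheta^2 + r^2 sin^2(theta) dphi^2
ginv = g.inv()

grad_up = [sum(ginv[i, j] * sp.diff(f, coords[j]) for j in range(3)) for i in range(3)]
grad_hat = [sp.simplify(h[i] * grad_up[i]) for i in range(3)]   # orthonormal-basis components
print(grad_hat)
# gives df/dr, (1/r) df/dtheta and (1/(r sin(theta))) df/dphi, i.e. the textbook formula
```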
{ "language": "en", "url": "https://physics.stackexchange.com/questions/213466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 0 }
Maintaining symmetry? Minkowski metric is found to be $$ds^2=-dt^2+dr^2+r^2d\Omega^2$$ where $d\Omega^2$ is the metric on a unit two-sphere. Why should we keep track of the $d\Omega^2$ so that spherical symmetry holds well?
What we mean by spherical symmetry is that if we take our geometry and consider the surface at constant $r$ it will have the same geometry as a spherical shell, that is the metric will be: $$ ds^2 = R^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \tag{1} $$ where $R$ is some arbitrary constant. If we refer back to your previous question we find a proposal for writing the metric as: $$ ds^2 = -e^{2\alpha(r)}dt^2 + e^{2\beta(r)}dr^2 + e^{2\gamma(r)}r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \tag{2} $$ with $\alpha(r)$, $\beta(r)$ and $\gamma(r)$ being arbitrary functions of $r$. Taking a spherical shell means considering constant $r$ and $t$, so $dt = dr = 0$, and equation (2) becomes: $$\begin{align} ds^2 &= e^{2\gamma(r)}r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \\ &= R^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \end{align}$$ where the constant $R = e^{2\gamma(r)}r$. Since this is the same as equation (1) we know that it is a spherically symmetric metric. Given the above it should be obvious that if we mess with the form of $d\Omega^2$ we won't get a spherically symmetric metric. For example we could extend our metric (2) to: $$ ds^2 = -e^{2\alpha(r)}dt^2 + e^{2\beta(r)}dr^2 + e^{2\gamma(r)}r^2 d\theta^2 + e^{2\delta(r)}r^2 \sin^2\theta \, d\phi^2 \tag{3} $$ But at constant $t$ and $r$ we get: $$ ds^2 = e^{2\gamma(r)}r^2 d\theta^2 + e^{2\delta(r)}r^2 \sin^2\theta \, d\phi^2 $$ and this cannot be written in the form of equation (1) so it does not have spherical symmetry.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/213641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Seemingly a paradox on the eigenstate thermalization hypothesis (ETH) In the research field of Many-body Localization (MBL), people are always talking about the eigenstate thermalization hypothesis (ETH). ETH asserts that for an isolated quantum system, all many-body eigenstates of the Hamiltonian are thermal, which means all sub-systems can evolve to thermalization in the end. ETH is not always true, and violation of it means MBL for an interacting quantum many-body system. Well, my puzzle is as follows: Consider an isolated quantum system $A$ and a space-specified sub-system $B\subset{A}$. It is assumed the initial state of $A$ is one of the eigenstates $|\psi(t=0)\rangle_{A}$ of its Hamiltonian $H$. Of course it is a pure state. Note that the initial state $|\psi(t=0)\rangle_{B}$ of $B$ is not a pure state unless $|\psi(t=0)\rangle_{A}$ is the direct product state of $|\psi(t=0)\rangle_{B}$ and the state of $A/B$, which means that $B$ is disentangled from the rest part $A/B$. Since $B$ is chosen arbitrarily, a mixed initial state of $B$ is the most general case, and its state cannot be described by a single state vector but by a density matrix $\rho_{B}(t=0)$. Now let the system $A$ evolve in time. There are two ways to check $\rho_{B}$ at arbitrary time $t$. 1) I can partially trace $\rho_{A}$ by $\rho_{B}=\text{tr}_{A/B}\rho_{A}$. However, $\rho_{A}=|\psi\rangle_{A}\langle\psi|_{A}$ will not change, because $|\psi\rangle_{A}$ is an eigenstate and only picks up a phase under the time evolution operator; thus $\rho_{B}$ will never change. 2) The mixed state $\rho_{B}(t=0)$ evolves in time and may thermalize to the Gibbs density matrix $\tilde{\rho}_{B}=\frac{1}{Z}e^{-\beta{H}}$ where $Z$ is its statistical partition function. This is indeed the statement of ETH. What's wrong with these seemingly paradoxical results, obtained from two different perspectives on the same thing?
The initial state does not need to be one of the eigenstates of the Hamiltonian; it could be a superposition. Therefore time evolution will change it. I don't think your first assumption is correct.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/213733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Can two 2500 K light bulbs replace one 5000 K bulb for growing plants indoors? In an effort to assist an old Greek woman I find myself in need of greater minds. A 5000 Kelvin light bulb is required for her indoor fig plant. Can I get away with substituting two bulbs each in separate fixtures emitting 2500 Kelvin each? All answers are greatly appreciated and I'm looking forward to the education.
Probably not. The 5000 K figure is to do with the spectrum of light emitted: it will be bluer than the 2500 K light. Both 2500 K lights will have a 'redder' spectrum. To be honest it is not even that straightforward, as the temperature is an indication of the overall temperature of the body that the light appears to be emitted from. In reality the spectrum will not be broad and continuous, but rather is likely to contain various peaks from atomic lines. The temperature is an indication of what the light will look like to our eyes. So I am guessing that the advice for a 5000 K light is so that there is a good bit of light in the blue (and maybe a bit in the UV) for the fig plant. The 2500 K lights will have some blue, but less of it. In principle you will be able to use lots of 2500 K lights to have the same amount of blue light as one 5000 K light, but it will not necessarily be two lights; it might be 5 or 10. We can't really give a precise answer except to suggest that if you can't get a 5000 K light you find the next closest, e.g. 4000 K, maybe put in 2 or 3 instead of 1, and see how the fig plant grows.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/214057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }