Can work hardening of a metal be avoided? My left earbud recently broke mid-wire: the bit that I like to fiddle with and bend. I fixed it, but I was wondering whether there are metals that don't work harden, or are resistant to it? Is there a way that you can treat normal metal to prevent it from, or at least reduce the effects of, work hardening? Can these be used in headphones commercially? (i.e. are they economically viable?)
Work hardening does not cause things to break; in fact it causes them to resist further plastic deformation, increasing their strength. Wires bent back and forth may eventually break due to fatigue: the material at the edge of the bend is alternately compressed and stretched. The amount of cyclic stress determines how many cycles the material can last, i.e. its fatigue life. From that article: If the stress on the wire at the edge is below the endurance limit then the wire could be bent back and forth indefinitely. One way to reduce stress and allow for more flexible wires is to use braided wires. This works by reducing the cross section of each strand and thus reducing the strain in each strand at a given bend radius. Of course, if by fiddle with and bend you mean deform past the elastic limit so that there is a kink, then you are inherently exceeding the yield stress every cycle and micro cracks will propagate very quickly. If this is what you'd like to do to your cords and would like a cord to survive this treatment, I would design such a cord with thin braided wires surrounded by a self-healing polymer. This would allow you to plastically deform the cord and have the cord heal itself. As for metals that can recover from being yielded, there are shape-memory alloys, though I think they would be cost prohibitive.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How does QFT interpret the Negative probability problem of the real scalar fields' Klein-Gordon equation? I am a total beginner in QFT; here's the problem I have: for real scalar fields, are there any elementary particles described by them? If so, how should one understand the negative probability problem?
Quantum field theory solves the problem by giving a different interpretation to the "probability". In the case of complex fields, quantum field theory also introduces antiparticles. In the first-quantized Klein-Gordon equation, the time component $j^0$ of the probability current vector $j^\mu$ may indeed be both positive and negative, and negative probabilities are bad, as you point out. However, $j^\mu$ is a bilinear expression constructed from the field $\phi$ and its derivatives, roughly $\phi\cdot \partial^\mu\phi$, and when $\phi$ becomes a quantum field (instead of a wave function), which is an operator (or operator-distribution), $j^\mu$ becomes an operator, too. There is nothing wrong with $j^0$ being positive or negative (indefinite) because it defines the charge density for a complex field $\phi$. In quantum field theory, the wave functions that could have had both positive and negative probabilities are used as prefactors in formulae for quantum fields, and the positive-energy (and positive-probability) and negative-energy (and negative-probability) solutions for the wave function are treated asymmetrically: the former are paired with the creation operators and the latter with the annihilation operators. In effect, it means that in quantum field theory, we may "create" an arbitrary number of particles in wave functions that are allowed by the first-quantized (one-particle) quantum mechanical theory, but we are only allowed to use the positive-energy (positive-probability) wave functions to excite the vacuum. The negative-energy ones are multiplied by annihilation operators which annihilate the vacuum, so we get no state. Those things may only be properly understood along with the full mathematical apparatus of QFT, and this is covered in every course or textbook on quantum field theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Evidence that stationary masses in space actually attract each other I'm finding it rather difficult to find experimental evidence that two stationary masses in space (unaffected by external massive bodies or gravities) actually attract one another. For moving masses, this is abundantly clear (planets, asteroids, etc.), but who has actually tried to measure forces of attraction between objects stationary in space with respect to the Sun, and has found through experimentation that the hypothesis of gravity being proportional to motionless masses is true? I'm aware of the Cavendish experiment, however, this experiment is not what I'm looking for because the two balls are moving with the Earth, so they are not completely without motion with respect to the Sun. Relative to each other, the balls are stationary, but I am looking for an experiment conducted where there is no motion in the massive objects relative to the Sun.
The first measurement of the gravitational constant was done by Henry Cavendish in a lab, in which the gravitational force between two lead balls was measured. They weren't moving. http://en.wikipedia.org/wiki/Cavendish_experiment
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Is the moment-curvature relation for an elastic beam general? The relationship between the moment and the curvature for an elastic beam is $$M = -EI\kappa$$ Previously, I have only used this with small deflections in static calculations. I am currently working on a dynamic cable model with bending stiffness for a physics simulation. Does this relationship hold for large deflections and dynamic behavior? If not, why?
Not sure you care anymore, but for large deflections this does not hold. As the pretty pictures at Wikipedia show, a cross-sectional element does not remain perpendicular to the neutral axis when bending deep beams, so that assumption goes out the window. The second effect is that the neutral axis in general shifts towards the compression side, because there is less material on that side of the centroid, so $I$ becomes a function of the curvature. Both of these effects mean different approaches need to be used.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding turbulent velocity Fourier mode amplitudes from kinetic energy spectrum A random vector field, such as a turbulent flow, can be decomposed into Fourier modes. Taking a snapshot in time (say an initial condition) we have that the randomly fluctuating component of the flow can be described by the sum of Fourier modes as follows: \begin{equation} \mathbf{u}(\mathbf{x}) = \sum_\mathbf{k} \hat{\mathbf{u}}(\mathbf{k})e^{i\mathbf{k}\cdot \mathbf{x} } \end{equation} where $\mathbf{k}$ is the wavenumber vector and $\mathbf{u} = (u_1,u_2,u_3)$ and $\hat{\mathbf{u}}$ is the amplitude of one Fourier mode. I now want to use a simple model of a turbulent kinetic energy spectrum that can be found in reality (for example the turbulence that develops in the boundary layer of a duct), and from this turbulent spectrum deduce the amplitudes of the 3D Fourier modes. As an example: One model spectrum I have encountered is that in Pope, Turbulent Flows, pg. 232: \begin{equation} E(\kappa) = C\epsilon^{2/3}\kappa^{-5/3}f_L(\kappa L)f_{\eta}(\kappa \eta) \end{equation} where \begin{equation} f_L(\kappa L) = \left ( \frac{\kappa L}{((\kappa L)^2+c_L)^{1/2}} \right )^{5/3+p_0} \end{equation} \begin{equation} f_{\eta}(\kappa \eta) = \exp(-\beta((\kappa \eta)^4+c_\eta^4)^{1/4}-c_{\eta}) \end{equation} Where $\kappa = |\mathbf{k}|$ and there are a bunch of constants I have not defined, but are available in the literature. Now I know that $\frac{1}{2}\langle u_iu_i\rangle = \int_0^{\infty} E(\kappa) d\kappa$ i.e. the area under the graph is equal to the turbulent kinetic energy and the area of a bin centered at a $\kappa$ is the energy at that wavenumber, but I am not sure how I go about determining the amplitudes of the Fourier modes from this.
Under the definition you use for a Fourier transform, which is the discrete Fourier transform, then we have: $$ \mathbf{u}(\mathbf{x}) = \sum_{i} \hat{\mathbf{u}}(\mathbf{k}_{i}) e^{i \ \mathbf{k}_{i} \cdot \mathbf{x}} $$ which has the inverse given by: $$ \hat{\mathbf{u}}(\mathbf{k}) = \sum_{j} \mathbf{u}(\mathbf{x}_{j}) e^{- i \ \mathbf{k} \cdot \mathbf{x}_{j}} $$ Therefore, you can see, in one dimension, that: $$ E(\kappa) = \sum_{j} E(x_{j}) e^{- i \ \kappa \ x_{j}} $$ or, in the form you are looking for: $$ E(x) = \sum_{i} E(\kappa_{i}) e^{i \ \kappa_{i} \ x} $$ Thus, the values $E(\kappa_{i})$ represent the Fourier amplitudes at wavenumbers $\kappa_{i}$ (within a factor of $2 \ \pi$, depending on choice). This is explicitly discussed in the introduction section of the Wikipedia page for Fourier transforms.
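As a concrete sanity check of the discrete Fourier pair written above, here is a minimal numpy sketch (an illustration added here, not part of the original answer) that builds a field from two known modes and recovers their amplitudes; the division by $N$ is the normalization "factor depending on choice" mentioned at the end.

```python
import numpy as np

# Sample a periodic 1-D "field" built from two known Fourier modes,
# then recover the mode amplitudes with the discrete Fourier transform.
N = 64
L = 2 * np.pi
x = np.arange(N) * L / N

# u(x) = 0.5 exp(i*2x) + 0.25 exp(-i*3x)  ->  amplitudes at k = 2 and k = -3
u = 0.5 * np.exp(2j * x) + 0.25 * np.exp(-3j * x)

# np.fft.fft computes sum_j u(x_j) e^{-i k x_j}; dividing by N gives the
# amplitudes u_hat(k) in the convention u(x) = sum_k u_hat(k) e^{i k x}.
u_hat = np.fft.fft(u) / N

k = np.fft.fftfreq(N, d=1.0 / N).astype(int)  # integer wavenumbers
amp_k2 = u_hat[k == 2][0]
amp_km3 = u_hat[k == -3][0]
```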
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How much weight would I need to put on the end of a tube to break it? Say I have a tube with a circular cross-section made from some material (for an example, I'd like to use acrylic). I support it horizontally from one end and hang a weight from the other end. How heavy does the weight have to be to break the tube? What if I support the tube from both ends and hang the weight from the middle? For the example, please use acrylic with inner diameter 2 inches, outer diameter 2.25 inches, length 60 inches. However, I'd like to know the formulae and theories that are used to make these calculations so that I can do them myself in the future. You might find the following useful: Properties of acrylic
The equation you would apply is: $\sigma = \frac{M*Y}{I}$ Where M is the bending moment or torque, $Y$ is the distance from the center of the cross section to the top or bottom most fiber, and $I$ is the moment of inertia of the cross section about its x-axis. $\sigma$ is the stress. So, Maximum moment = $M= F * 60$ inches where $F$ = your downward force. $Y= 1.125$ inches. $I$ for this particular cross sectional shape equals $\frac{\pi(D_O^4 -D_I^4)}{64}$ where $D_O = 2.25$ inches and $D_I = 2.00$ inches. I kept all the units in inches. If you know what the maximum tolerable $\sigma$ is in PSI (pounds per square inch), then you plug that into the equation and solve for $F$ in pounds. This is an EXTREMELY basic structural engineering problem. If you apply a force in this fashion to this particular structural configuration, you end up creating a bending moment at the opposite end that causes tension in the uppermost fiber and compression in the bottommost fiber. In structural analysis, the loading possibilities and connection possibilities are innumerable and range from simple to complex. There have been cases in history where even simple structures have collapsed resulting in death because the designers simply neglected basic concepts. Every single weld has to be properly designed. Every single bolt has to be properly sized. Every single element must be correctly designed. Otherwise...possible disaster.
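For completeness, the cantilever calculation from the answer can be carried out numerically. The flexural strength used for acrylic below is an assumed ballpark figure (roughly $10^4$ psi); it is not given in the answer, so check a datasheet for the actual material:

```python
import math

# Cantilevered acrylic tube from the question: load F at the free end.
D_o, D_i = 2.25, 2.00      # outer / inner diameter, inches
L_arm = 60.0               # moment arm, inches
Y = D_o / 2                # distance from centroid to outermost fiber, inches

# Second moment of area of a hollow circular section
I = math.pi * (D_o**4 - D_i**4) / 64    # in^4, about 0.47 in^4 here

# Assumed flexural strength for acrylic (ballpark, NOT from the answer)
sigma_max = 10_000.0                    # psi

# sigma = M*Y/I with M = F*L_arm  =>  F = sigma*I/(Y*L_arm)
F_break = sigma_max * I / (Y * L_arm)   # pounds, roughly 70 lb
```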
{ "language": "en", "url": "https://physics.stackexchange.com/questions/160728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing $\frac{\delta V_{out}}{V_{out}}=\frac{\delta R_2}{R_2} \frac{R_1}{R_1+R_2}$ Consider a voltage divider with $V_{out}=V_{in} \frac{R_2}{R_1+R_2}$. Show that for a small change in $R_2$, the voltage divider equation is: $\frac{\delta V_{out}}{V_{out}}=\frac{\delta R_2}{R_2} \frac{R_1}{R_1+R_2}$. I've been trying to get to that for a while, but I'm a little confused on how exactly to set UP the problem, along with solving it. I tried both of the following, and ended up with an unsimplifiable expression in both cases that wasn't the one I was looking for. Attempt One: $ \delta V_{out}+{V_{out}}=\frac{R_2+\delta R_2}{R_1+R_2+\delta R_2} $ and Attempt Two: $ \delta V_{out}+{V_{out}}=\frac{R_2}{R_1+R_2}+\frac{\delta R_2}{R_1+\delta R_2} $ Neither worked. My question seems to be mainly HOW to set up the equation to manipulate and then find the solution. I don't believe my algebra had an error, but it could have. So, any suggestions, without solving it for me, on how to set this up? Thanks. If the answer is in my math, not method, I will post the work once someone helps me understand/confirms how to set up the problem correctly. Attempting the comment below: $ \dfrac{dV_{out}}{dR_2} = \left( \frac{dR_2}{dR_2}\frac{1}{R_1+R_2} + R_2\frac{d(R_1+R_2)^{-1}}{dR_2}\right) V_{in} $ which appears to be correct, but how do I get the $R_2$ into the numerator?
First, let us write $V_{in}$ in this way: $$ V_{in}= \frac{R_1+R_2}{R_2}V_{out} $$ Now, differentiate the equation $V_{out}=V_{in} \frac{R_2}{R_1+R_2}$ : $$ \delta V_{out}=V_{in}\delta \frac{R_2}{R_1+R_2}=V_{in}\left ( \frac{1}{R_1+R_2}- \frac{R_2}{(R_1+R_2)^2} \right )\delta R_2= V_{in}\left ( \frac{R_1}{(R_1+R_2)^2} \right )\delta R_2 = V_{out} \frac{R_1+R_2}{R_2}\left ( \frac{R_1}{(R_1+R_2)^2} \right )\delta R_2 = V_{out}\left ( \frac{R_1}{(R_1+R_2)} \right )\frac{\delta R_2}{R_2} $$ Now we have: $$\frac{\delta V_{out}}{V_{out}}=\frac{\delta R_2}{R_2} \frac{R_1}{R_1+R_2}$$
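A quick numerical check of this first-order result (an added sketch, with arbitrarily chosen component values): compare the exact fractional change in $V_{out}$ against the formula for a small $\delta R_2$.

```python
# Arbitrary example values for the divider (not from the question)
R1, R2, V_in = 1000.0, 2000.0, 5.0
dR2 = 1.0  # small perturbation of R2, in ohms

V_out = V_in * R2 / (R1 + R2)
V_out2 = V_in * (R2 + dR2) / (R1 + R2 + dR2)

exact = (V_out2 - V_out) / V_out                # exact fractional change
first_order = (dR2 / R2) * (R1 / (R1 + R2))     # the derived formula
```

The two agree to first order in $\delta R_2/R_2$; the residual discrepancy shrinks as $\delta R_2$ is made smaller.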
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do we prove that two conjugate operators $X$ and $Y$ induce $\sigma_x$ and $\sigma_y$ driving terms when restricted to a two level subspace? Suppose I have a Hamiltonian for a particle moving in a one dimensional potential $$H = H(X,Y) \qquad [X,Y] = i$$ where $X$ is the dimensionless position, $Y$ is the dimensionless momentum, and $\epsilon_0$ is an energy scale. Both $X$ and $Y$ are Hermitian. Now suppose we add a driving Hamiltonian $$H_x = \epsilon_x \, f_x(t) \, X$$ where $f_x(t)$ is a dimensionless function of time and $\epsilon_x$ is another energy scale. If $f_x(t)$ is on resonance with a transition between two energy eigenstates of $H$, we expect it to produce transitions between those two states. Restricting considerations to just those two levels, we can write $H_x$ as a 2x2 matrix: $$H_x \sim \epsilon_x f_x(t)\left( \begin{array}{cc} 0 & \langle 0 | X | 1 \rangle \\ \langle 1 | X | 0 \rangle & 0 \end{array} \right) = \epsilon_x \langle 1 | X | 0 \rangle f_x(t) \sigma_x \, . $$ where $\sim$ means "is represented by" and we've assumed $\langle 0 | X | 0 \rangle$ and $\langle 1 | X | 1 \rangle$ are both zero.$^{[a]}$ Now suppose we have a different driving Hamiltonian $$H_y = \epsilon_y f_y(t) Y \, .$$ By similar reasoning to what we did above, we could reason that $$H_y \sim \epsilon_y f_y(t) \left( \begin{array}{cc} 0 & \langle 0 | Y | 1 \rangle \\ \langle 1 | Y | 0 \rangle & 0 \end{array} \right) = \epsilon_y \langle 1 | Y | 0 \rangle f_y(t) \sigma_x \, .$$ This, however, is not correct. By looking at examples such as the harmonic oscillator we find that the driving Hamiltonian which couples to $Y$ is proportional to $\sigma_y$, not $\sigma_x$. How, in general, can we argue that two conjugate operators, when restricted to a certain two state subspace, must be proportional to $\sigma_x$ and $\sigma_y$? $[a]$: Or at least, we can ignore the diagonal elements of $H_x$. 
In general the diagonal part can be written as a sum of $\sigma_z$ and the identity. The identity part can be completely ignored. The $\sigma_z$ part can be dropped if $f(t)$ is oscillatory because it just induces a net-zero fluctuation in the level splitting between $|0\rangle$ and $|1\rangle$. I think.
Consider a Hermitian operator $X$, and denote by $x$ its projection to the two-dimensional subspace. Then, $x$ is Hermitian as well. If you assume that the diagonal is zero (e.g. because you shift the energy + choose a rotating frame accordingly), then $x$ is of the form $x=\alpha_x\sigma_x+\beta_x\sigma_y$ with real $\alpha$ and $\beta$. The same is true for the projection of $Y$, $y=\alpha_y\sigma_x+\beta_y\sigma_y$. Note that we cannot say that $x=\sigma_x$, as can be seen by changing to a different canonical basis such as $(X\pm Y)/\sqrt{2}$ (which gives $x,y\propto\sigma_x\pm\sigma_y$). In particular, note that $[x,y]$ is a sum of commutators of $\sigma_x$ and $\sigma_y$, i.e., it is proportional to $i\sigma_z$ (with a real prefactor). Note that we have not used that $X$ and $Y$ are a conjugate pair. An open question is whether this would imply that $x$ and $y$ are orthogonal (i.e., $\alpha_x\alpha_y+\beta_x\beta_y=0$).
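This can be checked concretely for the harmonic oscillator mentioned in the question. The following numpy sketch (added for illustration) builds truncated matrices for the dimensionless $X$ and $Y$ and projects them onto the $\{|0\rangle,|1\rangle\}$ subspace; in this basis one finds $x=\sigma_x/\sqrt{2}$ and $y=\sigma_y/\sqrt{2}$, so in this particular case the orthogonality $\alpha_x\alpha_y+\beta_x\beta_y=0$ does hold.

```python
import numpy as np

N = 10  # truncated oscillator basis; only the lowest levels matter here
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
X = (a + a.T.conj()) / np.sqrt(2)            # dimensionless position
Y = 1j * (a.T.conj() - a) / np.sqrt(2)       # dimensionless momentum, [X, Y] = i

# Project onto the {|0>, |1>} subspace (upper-left 2x2 block)
x = X[:2, :2]
y = Y[:2, :2]

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
# x equals sx/sqrt(2) and y equals sy/sqrt(2) for this choice of basis
```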
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If the solar system is a non-inertial frame, why can Newton's Laws predict motion? Since there is no object in the universe that doesn't move, and the solar system likely accelerates through space, how did Newton's Laws work so well? Didn't he assume that the sun is the acceleration-less center of the universe? Shouldn't there be many pseudo-forces to account for planetary motion?
When Newton wrote the "Principia" in 1686, the inertial frame concept did not yet exist. However, we can find in it Corollary IV (introducing the center of mass, CM, concept for any set of interacting bodies), Corollary V (Galileo's Principle of Relativity, applied to any limited set of bodies whose CM moves at any uniform velocity), and the today almost forgotten Corollary VI (a generalization of the fifth from zero CM acceleration to any variable one). Applying Corollary VI to the Solar System shows that everything inside it occurs in the same way (as if it were Galileo's ship), obeying the same laws Newton stated in 1686 and the other natural ones, no matter what its CM acceleration is, known or not. Rafael A. Valls Hidalgo-Gato; Institute of Cybernetics, Mathematics and Physics; Havana, Cuba.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 5 }
Textbook recommendation for computational physics Can anyone recommend some good textbooks for an undergraduate-level computational physics course? I am using Numerical Recipes but find it not a very good textbook.
I am using this (freely available): http://farside.ph.utexas.edu/teaching/329/329.html A complete set of lecture notes for an upper-division undergraduate computational physics course. Topics covered include scientific programming in C, the numerical solution of ordinary and partial differential equations, particle-in-cell codes, and Monte Carlo methods.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Geodesic deviation In S. Carroll Lecture Notes on General Relativity, chapter 6, pages 152-153 we have equation (6.62) $$\tag{6.62} \frac{\partial^2}{\partial t^2} S^\mu=\frac{1}{2} S^\sigma \frac{\partial^2}{\partial t^2} h^\mu_{\; \sigma}.$$ While trying to deduce that equation from the previous one $$\tag{6.58} \frac{D^2}{d \tau^2} S^\mu= R^\mu_{\; \nu\rho\sigma} U^\nu U^\rho S^\sigma,$$ where $U^\mu$ is the velocity vector I noticed that he neglected the term $$\tag{*}\frac{\partial}{\partial \tau}(U^\rho \Gamma^\mu_{\; \rho \sigma} S^\sigma)$$ on the left hand side. Is this approximation justified in his setting? Or am I mistaken somewhere?
No, the $*$ term is incomplete. The two total $D$-derivatives on the lhs of eq. (6.58) in Ref. 1 are what cause the curvature on the rhs in the first place; see e.g. Ref. 2, p. 146. On the lhs of eq. (6.62), Carroll changes notation for the two total $D$-derivatives to two $\partial$-derivatives, but they are still total derivatives. In fluid-dynamical language, one may say that Carroll is going from an Eulerian to a Lagrangian picture. He is considering linearized gravity, so the Riemann curvature tensor is proportional to $\epsilon$, and we can (to the order we are calculating, namely to first order in $\epsilon$) interpret $S^{\sigma}$ on the rhs as following the flow. Note that Ref. 2 contains slightly more details than Ref. 1. References: 1. Sean Carroll, Lecture Notes on General Relativity, Chapter 6. The pdf file is available here. 2. Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity, 2003; Chapter 7.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving inhomogeneous differential equation with Green function I'm not sure if this question belongs on a physics forum, but my book's title is "Green's Functions in Quantum Physics", so I ask here. The book says that the Green's function defined as $$ (z-L( \mathbf{r}))G(\mathbf{r},\mathbf{r'};z)=\delta(\mathbf{r-r'}) $$ solves the inhomogeneous differential equation $$ (z-L( \mathbf{r}))u(\mathbf{r})=f(\mathbf{r}) $$ with the "same boundary condition", and it gives the solution $$ u(\mathbf{r})=\int d\mathbf{r'}G(\mathbf{r},\mathbf{r'};z)f(\mathbf{r'}). $$ I think that the third equation is obtained just by multiplying the first equation by $f(\mathbf{r'})$ and integrating over $\mathbf{r'}$. But then, why do we need the statement "with the same boundary condition"? I don't see where that condition is used. My book does not prove this, and the Wikipedia article on Green's functions also states it without proof. Is there any simple explanation or proof for this?
Suppose for simplicity that $u(\boldsymbol{r})$ satisfies the boundary condition $u(\boldsymbol{r}_0)=0$ for $\boldsymbol{r}_0$ on the boundary. Then the integral on the right-hand side of your last equation should satisfy \begin{equation} \int{d\boldsymbol{r}'G(\boldsymbol{r}_0,\boldsymbol{r}';z)f(\boldsymbol{r}')}=0 \end{equation} which holds if $G(\boldsymbol{r}_0,\boldsymbol{r}';z)=0$, that is, if $G$ satisfies the same boundary condition as $u$. The same is true for Neumann boundary conditions.
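A concrete 1-D illustration (an added sketch, not from the original answer): take $L=d^2/dx^2$ and $z=0$ on $[0,1]$ with $u(0)=u(1)=0$, so the equation is $-u''=f$. The Green's function satisfying the same Dirichlet conditions is known in closed form, and the integral formula then reproduces the exact solution:

```python
import numpy as np

# -u'' = f on [0,1] with u(0) = u(1) = 0.  The Green's function with the
# SAME boundary conditions is G(x,x') = x(1-x') for x <= x',
#                                       x'(1-x) for x >= x'.
n = 2001
xp = np.linspace(0.0, 1.0, n)
h = xp[1] - xp[0]
f = np.pi**2 * np.sin(np.pi * xp)   # chosen so the exact solution is sin(pi x)

def u(x):
    G = np.where(xp >= x, x * (1 - xp), xp * (1 - x))
    # f vanishes at both endpoints, so a plain sum equals the trapezoid rule
    return (G * f).sum() * h

err = max(abs(u(x) - np.sin(np.pi * x)) for x in (0.25, 0.5, 0.75))
```

Evaluating $u(x)=\int G(x,x')f(x')\,dx'$ this way recovers $\sin(\pi x)$ to within quadrature error, and the boundary values are exactly zero because $G$ vanishes there.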
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem on bending plates in Newtonian Mechanics? I am reading a book on interesting physics problems and demonstrations. One of the problems in the section on buildings, structures and equilibrium talks about a plate with one side attached to the wall. The plate will hang, and the question deals with the amount of work per kilogram (or kinetic energy per kilogram as is written in the problem) on the plate. Now, the information on the dimensions of the plate are given, but I am having difficulty understanding how they would affect the amount of kinetic energy per kilogram in the plate, if the dimensions even affect the plate hanging. So my question is do the dimensions of a plate with one side attached to the wall affect the amount of kinetic energy (or work) per kilogram?
Yes - if the plate is stiffer, then the deflection will be smaller and so the amount of work done (which is force times distance) will be less. For a typical linear elastic situation, the work done will be $\frac12 F x$ where $F$ is the final force and $x$ is the displacement. The factor $\frac12$ comes about from the fact that the initial force needed for deflection is small and it grows with displacement. If you just applied the full force at once, the plate would accelerate and oscillate (it would overshoot the point of equilibrium). Because of this, the work done will be smaller if the plate is wider or thicker, and greater when the plate is longer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Would the blue glow of Cherenkov radiation be visible when diffused across ice, such as in the IceCube neutrino experiment? The blue glow characteristic of Cherenkov radiation is visible emanating from underwater reactors. Is it also visible through ice, at the IceCube neutrino experiment (not that anyone is physically standing there looking)? Would the radiation be around for such a short time, produced discontinuously, that it wouldn't be visible?
The flash of Cherenkov light from a single neutrino interaction is probably not sufficient for the human eye to detect; this is why an array of PMTs is used to pick up the signal. It's not a question of light being transmitted, but whether the intensity is sufficient to be "visible". In a reactor there are very large numbers of particles traveling faster than the speed of light in the surrounding medium, and interacting strongly with it. That's why there is a blue "glow". Neutrinos have a lot of energy, but not a high probability of interacting. According to the IceCube wiki page, they expect one neutrino event every 20 minutes. Blink and you would miss it. And while there may be a lot of photons produced in one flash, very few (if any) of these would reach your eye unless you were very close - the light spreads over an area $4\pi r^2$ and of that, your extended pupil (say 8 mm diameter) only occupies $\pi \cdot \mathrm{0.004^2 m^2}$. So at 1 m distance only 4 in $\mathrm{10^6}$ photons would reach your eye - and if you were 20 m away, that number would drop to 1 in $\mathrm{10^8}$ - ignoring any scatter in the ice, etc.
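The solid-angle estimate at the end can be reproduced in a couple of lines (an added sketch; the 8 mm pupil is the answer's own assumption):

```python
import math

# Fraction of isotropically emitted photons entering a pupil of radius
# r_pupil at distance d: (pupil area) / (sphere area) = r_pupil^2 / (4 d^2)
def pupil_fraction(d, r_pupil=0.004):   # metres; 8 mm dilated pupil
    return math.pi * r_pupil**2 / (4 * math.pi * d**2)

frac_1m = pupil_fraction(1.0)     # "4 in 10^6" at 1 m
frac_20m = pupil_fraction(20.0)   # "1 in 10^8" at 20 m
```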
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What exactly is an image? When we say several rays meet to form an image, what is that which is formed? Is it an arrangement of unknown entities? What exactly am I looking at when I see my image in a plane mirror?
In short, an image is information. The act of seeing is a correlation made by a physical system, the human brain. In optics, this correlation is carried purely by light rays (a lot of photons). The image in the mirror follows this logic, but human eyes can only resolve (make an effective correlation from) the part of the light that is reflected by the mirror. We draw geometric diagrams under the assumption that light rays travel in straight lines through a homogeneous medium. The interesting problem is to calculate how light rays travel from some source to some point (which could be a human eye) in a non-homogeneous medium, and to work out how that point "feels" the geometry around it while still assuming that light rays always travel in straight lines. An image may therefore represent the true geometry around you or not, depending on whether the medium is homogeneous.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $∣1 \rangle$ an abuse of notation? In introductory quantum mechanics it is always said that $∣ \rangle$ is nothing but a notation. For example, we can denote the state $\vec \psi$ as $∣\psi \rangle$. In other words, the little arrow has transformed into a ket. But when you look up material online, it seems that the usage of the bra-ket is much more free. Example of this usage: http://physics.gu.se/~klavs/FYP310/braket.pdf pg 17 A harmonic oscillator with precisely three quanta of vibrations is described as $|3\rangle$, where it is understood that in this case we are looking at a harmonic oscillator with some given frequency $\omega$, say. Because the state is specified with respect to the energy we can easily find the energy by application of the Hamiltonian operator on this state, $H|3\rangle = (3 + 1/2)\,\omega h/2\pi \,|3 \rangle$. What is the meaning of 3 in this case? Is 3 a vector? A scalar? If we treat the ket symbol as a vector, then $\vec 3$ is something that does not make sense. Can someone clarify what it means for a scalar to be in a ket?
What is the meaning of 3 in this case? In this case, the character "3" is a convenient, descriptive label for the state with three quanta present. It is often the case that an eigenstate is labelled with its associated eigenvalue. In the harmonic oscillator case, the number operator commutes with the energy operator (Hamiltonian) so a number eigenstate is also an energy eigenstate. Thus, the state with three quanta present satisfies $$\hat N |3\rangle = 3\,|3\rangle$$ But, it also satisfies $$\hat H |3\rangle = (3 + \frac{1}{2})\hbar \omega\, |3\rangle = \frac{7}{2} \hbar \omega\,|3\rangle$$ So we would be justified in labelling this state as $$|\frac{7}{2} \hbar \omega\rangle $$ though that's not typical.
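The labelling convention can be made concrete with truncated matrices (an added numpy sketch, in units where $\hbar\omega=1$): the state labelled $|3\rangle$ is simply the basis vector with a 1 in the $n=3$ slot, and it is a simultaneous eigenvector of $\hat N$ and $\hat H$:

```python
import numpy as np

N_dim = 8
a = np.diag(np.sqrt(np.arange(1, N_dim)), k=1)   # annihilation operator
N_op = a.T.conj() @ a                            # number operator a†a

ket3 = np.zeros(N_dim)
ket3[3] = 1.0                                    # the state |3>

# N|3> = 3|3>  and  H|3> = (3 + 1/2)|3>  in units of ħω
H = N_op + 0.5 * np.eye(N_dim)
n_eig = ket3 @ N_op @ ket3
```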
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Proof that all primitive cells have the same size A primitive cell of a crystal lattice is a set $A$ such that two copies of $A$ which are translated by a lattice vector do not overlap and such that $A$ tiles the entire crystal. I have read (for example in the German "Festkörperphysik" by Gross, Marx) that all primitive cells have the same size/volume. Intuitively, this seems plausible, but is there a proof? My precise, measure-theoretic interpretation of this statement is: If $a_1, \ldots, a_n$ is a basis of $\mathbf{R}^n$ and $A, B \subset \mathbf{R}^n$ are sets such that $\left(\bigcup (A+\alpha_1 a_1+\ldots+\alpha_n a_n)\right)^C$ and $\left(\bigcup (B+\alpha_1 a_1+\ldots+\alpha_n a_n)\right)^C$ (where the union is over all $\alpha_1, \ldots,\alpha_n \in \mathbf{Z}$) are Lebesgue null sets and such that for all $\alpha_1,\ldots,\alpha_n\in\mathbf{Z}$ not all zero: $(A+\alpha_1 a_1 + \ldots + \alpha_n a_n) \cap A$ and $(B+\alpha_1 a_1 + \ldots + \alpha_n a_n) \cap B$ are Lebesgue null sets, then $A$ and $B$ have the same Lebesgue measure.
The quotient forming map $\Bbb R^n\to\Bbb R^n/\Lambda$ is a local isometry (as translations by elements of the lattice $\Lambda$ are isometries without fixed points) onto a torus, whose volume is equal to the (absolute value of the) determinant of the basis of $\Lambda$. The preimage of each point of the torus has exactly one point in the primitive cell, by definition of the primitive cell, so (as soon as the primitive cell has a well-defined volume) its volume must be equal to that of the torus.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/161981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What is negative energy? From what I've read negative energy is based on the Dirac sea concept of virtual particles. Negative energy is referenced by Casimir effects of virtual particle concentration differences between the space outside the experiment and inside the experiment i.e. the two uncharged metal plates. So is negative energy simply negative virtual particle flux?
In terms of the Casimir Effect the vacuum state between the plates is at a lower energy state than that outside them. Taking the normal vacuum as baseline, the area between the plates is negative energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the speed of sound in space? Given that space is not a perfect vacuum, what is the speed of sound therein? Google was not very helpful in this regard, as the only answer I found was $300\,{\rm km}\,{\rm s}^{-1}$, from Astronomy Cafe, which is not a source I'd be willing to cite.
From the ideal gas law, we know: $$ v_\textrm{sound} = \sqrt{\frac{\gamma k_\textrm{B} T}{m}} $$ Assuming that interstellar space is heated uniformly by the CMB, it will have a temperature of $2.73\ \mathrm{K}$. We know that most of this medium comprises protons and neutral hydrogen atoms at a density of about $1$ atom per $\mathrm{cm^3}$. This means that $\gamma = 5/3$, and $m = 1.66\times 10^{-27}\ \mathrm{kg}$, giving a value for $v_\textrm{sound}$ of $192\ \mathrm{m\ s^{-1}}$. However, sound is not propagated efficiently in a near-vacuum. In the extremely high vacuum of outer space, the mean free path is millions of kilometres, so any particle lucky enough* to be in contact with the sound-producing object would have to travel light-seconds before being able to impart that information in a secondary collision. *Which for the density given, would only be about 50 hydrogen atoms if you clapped your hands – very low sound power! -Edit- As has quite rightly been pointed out in the comments, the interstellar medium is not that cold. At the moment, our solar system is moving through a cloud of gas at approximately 6000 K. At this temperature, the speed of sound would be approximately $9000\ \mathrm{m\ s^{-1}}$. See Kyle's answer for a table of values for $v_\textrm{sound}$ that can be found in different environments in space, or pela's for information on how early universe sound waves became responsible for modern-day large scale structure.
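A quick numeric check of the formula above, using the mass and temperatures quoted in the answer:

```python
import math

k_B   = 1.380649e-23      # J/K, Boltzmann constant
gamma = 5.0 / 3.0         # monatomic gas
m_H   = 1.66e-27          # kg, hydrogen atom mass (value used in the answer)

def v_sound(T):
    """Ideal-gas sound speed sqrt(gamma * k_B * T / m)."""
    return math.sqrt(gamma * k_B * T / m_H)

print(v_sound(2.73))      # ~190 m/s  (CMB temperature)
print(v_sound(6000.0))    # ~9000 m/s (local interstellar cloud)
```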
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 6, "answer_id": 3 }
Different kinds of trace for statistical ensembles In chapter 7 of the book "A Modern Course in Statistical Physics" by L. Reichl, we find $Tr[\hat{\rho}]=1$ for microcanonical ensembles and $Tr_N[\hat{\rho}]=1$ for canonical and grandcanonical ones. I looked for the meaning of $Tr_N$ in the book but I didn't find it. It seems to be related to the number of states with a given energy $E$, but I don't know what this relation looks like. Thus: What does $Tr_N$ mean? Or: What is the difference between $Tr_N$ and $Tr$? A consequence of this difference in the book is $Tr\left[e^{\left(\frac{\alpha_0}{k_B}-1\right)\hat{I}}\right]=1$, (1a) $e^{\left(\frac{\alpha_0}{k_B}-1\right)}N=1$, (1b) for the microcanonical ensemble, and $Tr_N\left[e^{\left(\frac{\alpha_0}{k_B}-1\right)\hat{I}+\frac{\alpha_E}{k_B}\hat{H}}\right]=1=e^{\frac{\alpha_0}{k_B}-1}Tr_N\left[e^{\frac{\alpha_E}{k_B}\hat{H}}\right]$, (2) for the canonical or grandcanonical ones, where $\hat{H}$ is the Hamiltonian operator and $\alpha_0$, $\alpha_E$ and $k_B$ are constants; I don't understand these results either. About Eq. (2): it seems that if $Tr_N$ had the property $Tr_N\left[\hat{A}\hat{B}\right]\equiv\frac{1}{N}Tr[\hat{A}]Tr[\hat{B}]$, (3) where $\hat{A}$ and $\hat{B}$ are diagonal matrices and $N$ is the dimension of $\hat{A}$ and $\hat{B}$, I could understand (2), but still not Eqs. (1).
Using the comment of @MarkMitchison, since $e^\hat{C}=\sum_{k=0}^\infty\frac{\hat{C}^k}{k!}$ and $e^x=\sum_{k=0}^\infty\frac{x^k}{k!}$ (as can be seen here), so $e^{\alpha\hat{I}}=e^\alpha\hat{I}$, and I "can take the first term outside the trace in Eq. (2). You don't need to assume any special properties of the trace". Again, thanks @MarkMitchison.
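A numeric check of the identity being used, $e^{\alpha\hat I + \hat M} = e^\alpha e^{\hat M}$ (valid because $\alpha\hat I$ commutes with everything), with a hand-rolled Taylor-series matrix exponential on an arbitrary $2\times 2$ matrix standing in for $\frac{\alpha_E}{k_B}\hat H$:

```python
import math

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mscale(s, A):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=40):
    # Taylor series exp(A) = sum_k A^k / k!  (fine for small matrices)
    result = [[1.0, 0.0], [0.0, 1.0]]
    power  = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mmul(power, A)
        result = madd(result, mscale(1.0 / math.factorial(k), power))
    return result

def trace(A):
    return A[0][0] + A[1][1]

alpha = 0.7                              # stands in for alpha_0/k_B - 1
H = [[0.3, 0.1], [0.1, -0.2]]            # stands in for (alpha_E/k_B) * H

shifted = madd(mscale(alpha, [[1.0, 0.0], [0.0, 1.0]]), H)  # alpha*I + H
lhs = trace(expm(shifted))
rhs = math.exp(alpha) * trace(expm(H))
print(lhs, rhs)   # equal: alpha*I commutes with everything
```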
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does the statement "the laws of physics are invariant" mean? In the first paragraph of Wikipedia's article on special relativity, it states one of the assumptions of special relativity is the laws of physics are invariant (i.e., identical) in all inertial systems (non-accelerating frames of reference) What does this mean? I have seen this phrase several times, but it seems very vague. Unlike saying the speed of light is constant, this phrase doesn't specify what laws are invariant or even what it means to be invariant/identical. My Question Can someone clarify the meaning of this statement? (I obviously know what an inertial frame is)
In layman's terms, it just means that the laws of physics are the same everywhere. This means that we are talking about one common set of laws. The fun part is figuring out how one common set of laws can behave the same while they are taking place within different frames of reference. Thus we have one set of laws that is shared by many frames. How can this be, when each frame of reference is different? Of course, once you fully understand both the cause and structure of Special Relativity, the answer becomes obvious.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 10, "answer_id": 8 }
Is $ds^2$ just a number or is it actually a quantity squared? I originally thought $ds^2$ was the square of some number we call the spacetime interval. I thought this because Taylor and Wheeler treat it like the square of a quantity in their book Spacetime Physics. But I have also heard that $ds^2$ is just a notational device of some sort and doesn't actually represent the square of anything; that it is just a number and the square sign is simply conventional. Which is true?
It is a square of a proper time interval or a square of proper distance (modulo an inessential sign).
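A concrete sketch (my own example; I assume the $(+,-,-,-)$ signature and units with $c=1$): for a timelike separation, $ds^2$ really is the square of the proper time between the events.

```python
import math

c = 1.0  # units with c = 1 (assumption for this sketch)

def interval2(dt, dx, dy, dz):
    # signature (+,-,-,-): ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2
    return (c * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2

ds2 = interval2(5.0, 3.0, 0.0, 0.0)   # timelike separation
proper_time = math.sqrt(ds2) / c      # ds^2 is (c * proper time)^2
print(ds2, proper_time)               # 16.0 4.0
```

For a spacelike separation the sign flips and $\sqrt{-ds^2}$ is instead a proper distance, which is the "modulo an inessential sign" in the answer.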
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 3 }
selection of p substrate as wafer in typical cmos process flow Why is a p-substrate typically used as the wafer in the typical CMOS process flow? Why not an n-substrate? With respect to memories, has it got anything to do with alpha-particle radiation induced errors (soft errors)? Please explain.
Up until the mid-1990's, you could get silicon wafers grown by float-zone (FZ) or Czochralski (CZ), in (100), (110), and (111) orientations, doped with B, P, and the occasional exotic (As, Sb). By the introduction of 200mm (8") wafers, you became limited to (100) CZ B-doped wafers. Why? The float-zone process could not scale up to the larger wafer diameter with good throughput and dopant uniformity. CZ became the only way to go at 200mm, and by now it really really is the only way to go. The dopant uniformity was also a big factor - you can't have the background doping levels change not just across an individual wafer but also from wafer to wafer along the boule. As transistor features decrease, the required doping levels need to be (a) less than before, and (b) more tightly controlled. With a varying background doping that became harder and harder. Boron became the only available bulk dopant because it behaves much better in the crystal growing process. It diffuses faster in the liquid, so you reduce the center-to-edge doping variation. It also segregates less during solidification, so the distribution along the growth direction stays more constant. This is good crystal growth thermodynamics and kinetics (materials science) driving this selection. As for direction, (110) and (111) were always niche uses, and nobody wanted to deal with them in the high volume market (since they weren't high volume). (100) won for CMOS.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the criterion for a change to be adiabatic? I'm trying to understand whether the change of a parameter $\lambda$ of a Hamiltonian $H$ is adiabatic. Reading Landau and Lifshitz "Mechanics", I see ... let us suppose that $\lambda$ varies slowly (adiabatically) with time as the result of some external action; by a "slow" variation we mean one in which $\lambda$ varies only slightly during the period $T$ of the motion: $$T\frac{d\lambda}{dt}\ll\lambda.$$ But we can choose $\lambda$ in such a way that it has an arbitrarily large value by taking the Hamiltonian as $H'(\lambda)=H(\lambda-C)$, where $C=\mathrm{const}$ is a large compensating constant. Thus the condition of adiabaticity would be automatically fulfilled for an arbitrarily fast change. Reading Wikipedia, I see In mechanics, an adiabatic change is a slow deformation of the Hamiltonian, where the fractional rate of change of the energy is much slower than the orbital frequency. But again, we can shift the energy by an arbitrarily large constant without affecting the equations of motion, and then any change of energy will have a very small logarithmic derivative. Thus the criteria given above are too ambiguous to be usable. So, what is the true unambiguous criterion for the change to be adiabatic? Or, if the criteria cited above are unambiguous, then what is my mistake?
Look to the more fundamental, classical definition. Adiabatic just means 'without heat transfer'. But more specifically it requires defining a system or control volume where the heat is not transferred. Your model or equations of motion must exist over some defined space and time interval. If heat is not transferred in/out of the system, the process is adiabatic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What physical evidence is there that subatomic particles pop in and out of existence? What physical evidence shows that subatomic particles pop in and out of existence?
This answer is basically an argument about why you should treat the terms of a perturbation series as interesting objects under the right circumstances. It doesn't really change the fact that these are just mathematical terms, but it shows that they have explanatory value in addition to simply being part of the sum, because each term can be the leading term in the sum of another physical process. Meson Production A significant contribution to forward production of pions and other mesons is the knock-on of quark pairs from the nucleon sea. Reactions like $$ e^- + p \to e^- + \pi^+ + \text{undetected hadronic junk} \,.$$ For one of many more technical discussions, see the $f_\pi$ collaboration's papers:1 * *http://inspirehep.net/record/535171 *http://inspirehep.net/record/1290558 *http://inspirehep.net/record/1334567 Drell-Yan The "Drell-Yan" process is $$ q + \bar{q} \to l + \bar{l} \,,$$ with $q$ representing a quark and $l$ representing a charged lepton (experimentally one is generally interested in muons because the signature is easy to find). It is obtainable in collisions between two protons. Protons have a valence quark content of $uud$. So where does the anti-quark in the initial state come from? From the nucleon sea. Experiments using this technique include NuSea and SeaQuest 1 Chosen because I know which ones they are on account of having been part of the collaboration way back at the dawn of my career.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 5, "answer_id": 3 }
Relation between area elements in finite deformation theory (continuum mechanics) There are relations for the line and volume elements in continuum mechanics. For example: \begin{align} \ \ \ \ \ \ \ \ \ \ \ \ \frac{V}{V_0}&={\rm det}(F)\tag{1}\\ \lambda^2&=(F^TFe_1\cdot e_1)\tag{2} \end{align} with $F$ being the deformation gradient, $$\lambda=\delta x/\delta X \tag{3}$$ is the stretch and $e_1$ is the unit vector in direction where stretch is to be found. Is there a similar relation between infinitesimal areas (for ratio of deformed and undeformed areas)?
Your first two equations can be written in other forms, so how about this? In 1-D: $$\lambda=\frac{L}{^0L}$$ (I use a pre-superscript for initial values and post-subscripts for spatial directions). In 2-D: $$\lambda_1\lambda_2=\frac{L_1}{^0L_1}\frac{L_2}{^0L_2}=\frac{A}{^0A}$$ In 3-D: $$\lambda_1\lambda_2\lambda_3=\frac{L_1}{^0L_1}\frac{L_2}{^0L_2}\frac{L_3}{^0L_3}=\frac{V}{^0V}$$ Can't recall ever using the 2-D case but it might arise in a plane stress or plane strain calculation.
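A quick numeric check of these relations (my own example values), using a deformation gradient aligned with its principal directions, so the diagonal entries are the principal stretches:

```python
def det3(F):
    return (F[0][0] * (F[1][1] * F[2][2] - F[1][2] * F[2][1])
            - F[0][1] * (F[1][0] * F[2][2] - F[1][2] * F[2][0])
            + F[0][2] * (F[1][0] * F[2][1] - F[1][1] * F[2][0]))

# Principal stretches (assumed illustrative values):
l1, l2, l3 = 1.2, 0.9, 1.5
F = [[l1, 0.0, 0.0],
     [0.0, l2, 0.0],
     [0.0, 0.0, l3]]

volume_ratio  = det3(F)     # V / 0V = det(F) = l1*l2*l3
area_ratio_12 = l1 * l2     # A / 0A for an area element in the 1-2 plane
print(volume_ratio, area_ratio_12)   # 1.62 1.08
```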
{ "language": "en", "url": "https://physics.stackexchange.com/questions/162984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Could dark energy just be particles with negative mass? The title speaks for itself. Dark matter: We see extra attractive force, and we posit that there are particles which create such a force, and use the measure of that force to guess their locations. Dark Energy: We see extra repulsive force. Only thing is, dark energy is uniform. So I suppose the stuff would have to be (at least somewhat) uniformly distributed throughout the universe. How uniform do we know it to be? Could the "stuff" be somehow a part of empty space itself?
If dark energy would consist of particles, it would dilute with the growing radius of the universe to the third power, since the total number of particles would stay the same while the volume increases. What observations found was that dark energy rather behaves like a constant which does not thin out, that's why it is also known as the cosmological constant. That means even if the universe expands, the amount of dark energy per cubic meter stays (at least approximately) the same.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 2 }
Why do you not get burned when you move your finger (quickly) over a candle flame? When we move a finger quickly over/through a candle flame, why doesn't it get burned? http://en.wikipedia.org/wiki/Fire
What happens when you place your finger in a flame is that energy is transferred from a hot gas to the mass of your finger. This transfer of energy takes time. I guess this transfer is primarily through thermal conduction but the arguments probably apply for radiated energy too. The time it takes to transfer a given amount of energy into a given volume of matter is, I think, a property we call the thermal conductivity of the material. But I suspect that's just a recognition that a transfer of energy doesn't happen instantaneously. The rate at which energy is transferred also depends on a property we describe as the initial temperature difference between, in this case, the hot gas and the finger. The effect of this energy on the temperature of that volume of matter is what we call the thermal capacity of the substance. For example, you have to add 4200 Joules of energy to a kilogram of water to raise its temperature by 1 Kelvin (1 °C). Once the finger's temperature rises above a certain limit, it will cause a chemical process that changes the chemical structure of the matter in your finger and breaks down its structure - i.e. burning. This is also why your kettle takes several minutes to boil a litre of water. The water has to be held over the heating element for sufficient time for enough energy to be conducted from the heating element into the water to raise its temperature enough that there begins a phase change from liquid to gas. To char 1 cc of finger takes a certain amount of energy; a fast-moving finger doesn't stay inside the hot gas long enough for that amount of energy to be transferred.
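A very rough lumped estimate of the timescale involved; every number here is an assumed, order-of-magnitude input (convective coefficient, flame temperature, heated surface layer), not a measured value, and the point is only that the heating time is a sizeable fraction of a second, much longer than a quick pass of the finger:

```python
# Rough lumped-capacitance estimate: time for a thin surface layer of
# skin to heat by dT_burn in a flame. ALL inputs below are assumptions.

h       = 100.0    # W/(m^2 K), assumed convective coefficient in the flame
T_gas   = 1300.0   # K, assumed flame temperature
T_skin  = 310.0    # K, body temperature
area    = 1e-4     # m^2, exposed patch (~1 cm^2)

mass    = 1e-4     # kg, thin surface layer being heated (assumed)
c_p     = 3500.0   # J/(kg K), approximate soft-tissue heat capacity
dT_burn = 20.0     # K rise before damage begins (approximate)

power         = h * area * (T_gas - T_skin)   # W flowing into the layer
energy_needed = mass * c_p * dT_burn          # J to reach burn temperature
t_burn        = energy_needed / power         # s

print(power, t_burn)   # ~10 W in, well under a second to burn if held still
```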
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Silicone tube with three holes, flow rate, pressure I have a silicone tube -- a saline solution flows in from one end, and then flows out of three holes of equal diameter and equal distance from each other that are along the side of the tube. What can I do to make the pressure at the three holes equal?
If injected from a single end, and assuming a constant-diameter pipe, the pressure decreases linearly along the length of the pipe due to friction. Therefore equal-diameter holes will not yield equal flow rates or pressures. For equal flow rates: To control the flow rates, you must control the injection pressure (e.g. a reservoir with a regulator valve). Specify a desired flow rate (per hole or total) where $Q_\text{total}=3Q_\text{hole}$. For each hole location along the length of the pipe, $\ell$, calculate the pressure. $$ P = P_\text{injection}-\frac{8\mu\ell Q_\text{total}}{\pi r_\text{pipe}^4} $$ Next, apply a loss coefficient depending on the cross-sectional geometry of the holes (sharp-edged, rounded, etc.) to obtain the pressure at the holes. $$P_\text{hole} = K_L\,P$$ Next, for the desired flow rate, $Q_\text{hole}$, and known exit pressure, solve for the radius required at each hole: $$P_\text{hole}-P_\text{exit} = \frac{8\mu\ell Q_\text{hole}}{\pi r_\text{hole}^4} \rightarrow r_\text{hole} = \left(\frac{8\mu\ell Q_\text{hole}}{\pi(P_\text{hole}-P_\text{exit})} \right)^{1/4}$$ For equal pressures: The only practical way I can think of to accomplish this is adding regulator valves to each hole, with an injection pressure high enough to overcome frictional losses to the furthest hole.
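A sketch of the equal-flow-rate sizing above (all geometry and pressure values are my own assumed inputs; as a simplification it carries the full $Q_\text{total}$ over the whole run and omits the hole loss coefficient, i.e. $K_L = 1$):

```python
import math

# Assumed inputs (not from the question):
mu      = 1.0e-3      # Pa s, saline viscosity (~water)
r_pipe  = 2.0e-3      # m, tube inner radius
P_inj   = 5000.0      # Pa gauge, injection pressure
P_exit  = 0.0         # Pa gauge, ambient outside the holes
wall    = 0.5e-3      # m, wall thickness = length of each hole channel
Q_hole  = 1.0e-6      # m^3/s, desired flow per hole
Q_total = 3 * Q_hole

positions = [0.05, 0.10, 0.15]   # m, hole locations along the tube

radii = []
for ell in positions:
    # Pressure inside the tube at this hole (Hagen-Poiseuille drop):
    P = P_inj - 8 * mu * ell * Q_total / (math.pi * r_pipe ** 4)
    # Hole radius so this pressure pushes Q_hole through the wall:
    r = (8 * mu * wall * Q_hole / (math.pi * (P - P_exit))) ** 0.25
    radii.append(r)

print(radii)   # downstream holes come out slightly larger
```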
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Why does a metal boat float? I was in class learning about density and stuff. Our teacher told us that things that are denser than water sink in water, and less dense things float. Then, our teacher asked us why metal boats float in water, even though they are denser than water. Is it because of the surface tension of water? Some other thing? Any help would be appreciated.
This is because the whole boat, along with the air in the boat, is lighter than the water it displaces. For example, if a small boat displaces 1 cubic meter of water, that cubic meter of water has to be heavier than the boat for it to float. This is explained in this post by What If here. For the same reason that bowling balls float (because salt water the size of a bowling ball weighs more), boats float (because the overall weight of a boat is less than the overall weight of salt water the size of a boat).
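A tiny numeric version of this argument (hull and boat volumes are my own illustrative numbers): even with a steel hull, the average density of hull plus enclosed air is far below that of water, and by Archimedes the boat floats with only a small fraction submerged.

```python
# Average density of (steel hull + enclosed air) decides whether it floats.
rho_water = 1000.0    # kg/m^3
rho_steel = 7850.0    # kg/m^3

hull_volume = 0.02    # m^3 of steel in the hull (assumed)
boat_volume = 1.0     # m^3 total enclosed volume (assumed)

boat_mass   = rho_steel * hull_volume        # mass of air neglected
rho_average = boat_mass / boat_volume        # 157 kg/m^3, well below water

fraction_submerged = rho_average / rho_water # Archimedes' principle
print(rho_average, fraction_submerged)       # 157.0 0.157
```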
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Dependence of streamlined flow on viscosity My teacher told the class the following statement: Streamlined flow is more likely for a liquid with more viscosity, as steady flow can only be achieved at slow speed. It's true that viscosity causes a slow speed of the liquid, but viscosity is due to the relative motion of layers of fluid past a fixed point. Then shouldn't viscosity cause a non-steady flow due to the relative motion of the layers of fluid, even though the speed is slow, causing disruption to the streamlined flow?
What your teacher told the class is true, though it's more complicated than that. For any flowing fluid there are two types of force. The viscous force is due to the viscosity of the fluid, and that's the obvious force that everyone thinks about. However a fluid has a mass, and therefore when it's moving it has a momentum, and therefore whenever the fluid changes direction the momentum changes and this produces a force. These forces are known as inertial forces. As a general rule, when viscous forces dominate the flow is laminar, and this tends to happen at high viscosities and low flow rates. When inertial forces dominate the flow is turbulent, and this happens at low viscosities and high flow rates. We define a parameter called the Reynolds number, which is basically the ratio of the inertial forces to viscous forces. So a high Reynolds number means turbulent flow and a low Reynolds number means laminar flow. The expression for the Reynolds number varies depending on the system, but for example the equation for the Reynolds number of a fluid flowing in a pipe is: $$ Re = \frac{\rho v D_H}{\mu} $$ where $\rho$ is the density, $v$ is the flow velocity, $D_H$ is the hydraulic diameter of the pipe and $\mu$ is the viscosity. So the dependence on viscosity is: $$ Re \propto \frac{1}{\mu} $$ and as your teacher says, high viscosity means a low Reynolds number and therefore the flow is more likely to be laminar.
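A quick numeric sketch of this formula (the fluid properties and the laminar threshold of ~2300 for pipe flow are standard textbook values; the pipe size and speed are my own illustrative choices):

```python
def reynolds(rho, v, D, mu):
    """Pipe-flow Reynolds number: rho * v * D / mu."""
    return rho * v * D / mu

# Water in a 5 cm pipe at 1 m/s:
Re_water = reynolds(1000.0, 1.0, 0.05, 1.0e-3)    # 50000 -> turbulent

# A far more viscous, honey-like fluid (mu ~ 10 Pa s), same pipe and speed:
Re_viscous = reynolds(1400.0, 1.0, 0.05, 10.0)    # 7 -> laminar

# Common rule of thumb for pipe flow: laminar below Re ~ 2300.
print(Re_water, Re_viscous)
```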
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is this way of calculating the diffraction pattern valid? I've seen that in some books (Fowles) the intensity of the diffraction pattern is calculated in the following way. We place the source $S$ and the point at the screen $P$ in the line perpendicular to the aperture (second diagram). We calculate the intensity at $P$. Now, the rest of the pattern is obtained by displacing the aperture keeping $S$ and $P$ fixed (this would mean that we change the integration limits). It is supposed that this method should be equivalent to keeping $S$ and the aperture fixed and moving $P$ (first diagram), which is what we really want to calculate. But how can they both be equivalent? Are we making any approximations? I've read in my note that this is done to simplify calculations, but no justification is given.
In Fresnel diffraction, you are evaluating the contribution of every possible ray from source to screen by computing the relative phase shift for each ray. The method you show is only valid if the distance from source S to aperture Q is much larger than the distance from Q to P: that is the only condition in which small lateral displacements of S relative to Q don't affect the result (it will result in a small phase gradient across the aperture; that has to be negligible compared to the shift due to P). So I disagree with @Nordik's answer: I don't think it's the distance between S and P that matters; it's the difference in distance such that SQ >> QP.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Can we measure the electron spin independently of its magnetic moment? What experimental evidence do we have for the intrinsic angular momentum of the electron (its spin)? I am specifically interested in whether we have a value for this that is independent of the intrinsic magnetic moment, and hopefully a value for the bare electron alone (i.e. not in some system such as an atom).
In quantum mechanics, the magnetic moment operator is related to the spin operator by: $\vec\mu = -\left(\frac{e}{mc}\right)\vec{S}$ In other words, they are directly proportional up to some known physical constants. This means that measuring the spin of an electron is exactly equivalent to measuring its magnetic moment: if you obtain either quantity, you just multiply or divide by a constant to get the other. In other words, your question is equivalent to "can we measure two times the momentum of an object independently of its momentum?" An example of an experiment which can measure the spin of electrons outside of atoms is the Stern-Gerlach experiment.
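As a numeric sanity check of this proportionality (note the answer's formula is in Gaussian units with the g-factor of 2 absorbed; the SI equivalent is $|\mu| = (e/m)|S|$): plugging in $|S_z| = \hbar/2$ recovers the Bohr magneton.

```python
e    = 1.602176634e-19   # C, elementary charge
m_e  = 9.1093837015e-31  # kg, electron mass
hbar = 1.054571817e-34   # J s

# SI form of the answer's relation (g = 2 absorbed), with |S_z| = hbar/2:
mu = (e / m_e) * (hbar / 2)
print(mu)   # ~9.274e-24 J/T, the Bohr magneton
```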
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Find the Lagrangian of the theory" I've heard a few of my professors throw around the term "finding the Lagrangian of a theory". What exactly is this referring to. From what I understand it seems that you determine invariances (symmetries) and they give you a hint for what your Lagrangian is. Furthermore there is more to the story because I know: $L=T-U$ is only one of the forms the Lagrangian can take in classical mechanics. So far I only learned about the Lagrangian in classical mechanics and might be building up to a limited knowledge of Feynman's path integral in my QM course. What other theories have Lagrangians and how you can tell? Are all Lagrangians of a given theory equivalent?
Usually the terms "Lagrangian" and "theory" can be considered the same. For a new theory, you have a new Lagrangian. For example, when we say "QED is different from QCD", we mean their Lagrangians are different. Each theory has its own Lagrangian. Although observable quantities (and especially the equations of motion) are more important than the Lagrangian itself: when the Lagrangian is left invariant by a set of transformations, we say those transformations are the symmetries of the theory. We have many theories in physics. For example Quantum electrodynamics (QED), Quantum chromodynamics (QCD), General relativity, New massive gravity (NMG), Topologically massive gravity (TMG), and so on! Finding the Lagrangian is not always simple and we have to consider many things. Symmetries and conservation laws are our hints to get the Lagrangian.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to raise indices on the electromagnetic tensor How do you transform between the electromagnetic tensors $F_{\mu\nu}$ and $F^{\mu\nu}$? $$ F_{\mu \nu}= \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0 \end{pmatrix},\\ \ F^{\mu \nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix} $$ In other words, what do you do to $F_{\mu\nu}$ to get $F^{\mu\nu}$?
Index raising and lowering is defined through the metric, in this case the flat-space (Minkowski) metric $$ g^{\mu\nu} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$ We raise an index by applying the metric to a tensor, like this: $A^\mu=g^{\mu\nu}A_\nu$. Now, if you want to raise two indices you need to operate with the metric twice. $$F^{\mu\nu}=g^{\mu\alpha}g^{\beta\nu}F_{\alpha\beta}$$ In a more formal language, lowering and raising indices is a way to construct isomorphisms between covariant and contravariant tensor spaces. We use the metric tensor because it helps us to map basis vectors $e_i$ to dual basis vectors $\beta^i$.
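A quick numeric check of this double contraction with arbitrary field values (since $g$ is diagonal and symmetric, the contraction is just the matrix product $g F g$): the time-space entries (the $E$ components) flip sign while the space-space entries (the $B$ components) are unchanged, exactly as in the matrices in the question.

```python
def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Arbitrary illustrative field values:
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

F_lower = [[0.0,  Ex,  Ey,  Ez],
           [-Ex, 0.0, -Bz,  By],
           [-Ey,  Bz, 0.0, -Bx],
           [-Ez, -By,  Bx, 0.0]]

g = [[1.0,  0.0,  0.0,  0.0],    # Minkowski metric, signature (+,-,-,-)
     [0.0, -1.0,  0.0,  0.0],
     [0.0,  0.0, -1.0,  0.0],
     [0.0,  0.0,  0.0, -1.0]]

# F^{mu nu} = g^{mu alpha} F_{alpha beta} g^{beta nu}
F_upper = mmul(mmul(g, F_lower), g)

print(F_upper[0][1], F_lower[0][1])   # -1.0 1.0  : E components flip sign
print(F_upper[1][2], F_lower[1][2])   # -6.0 -6.0 : B components unchanged
```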
{ "language": "en", "url": "https://physics.stackexchange.com/questions/163981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why doesn't the speed of the wind have an effect on the apparent frequency? A boy is standing in front of stationary train. The train blows a horn of $400Hz$ frequency . If the wind is blowing from train to boy at speed at $30m/s$, the apparent frequency of sound heard by the boy will be? The answer: The frequency remains the same at $400Hz$ MY QUESTION: Why doesn't the speed of the wind have an effect on the apparent frequency?
The formula for apparent frequency as heard by observer when velocity of sound and wind are in same direction is given by $$n^\prime=\frac{v+v_w-v_o}{v+v_w-v_s}n$$ Where $n$=original frequency $n^\prime$=apparent frequency $v$=velocity of sound $v_w$=velocity of wind $v_s$=velocity of source of sound $v_o$=velocity of observer Since $v_o=0$ & $v_s=0$ $$n'=\frac{v+v_w-0}{v+v_w-0}n=n$$ So it is the same as the original frequency
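The cancellation is easy to see numerically; for contrast, the same formula with a moving source (and no wind) does shift the frequency. The 340 m/s speed of sound is a standard textbook value, not from the question:

```python
def apparent_frequency(n, v, v_w, v_s, v_o):
    """Doppler formula with wind and sound in the same direction
    (source towards observer)."""
    return n * (v + v_w - v_o) / (v + v_w - v_s)

v = 340.0   # m/s, speed of sound in still air (typical value)

# Stationary train and boy, 30 m/s wind blowing from train to boy:
print(apparent_frequency(400.0, v, 30.0, 0.0, 0.0))   # 400.0: wind cancels

# Contrast: source moving toward the observer at 30 m/s, no wind:
print(apparent_frequency(400.0, v, 0.0, 30.0, 0.0))   # ~438.7: shifted
```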
{ "language": "en", "url": "https://physics.stackexchange.com/questions/164486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Why is work done on the system considered when calculating the work output? The thermodynamic efficiency $\eta$ is calculated by $\eta= \frac{W_{out}}{Q_{in}}$ Using the first law of thermodynamics we usually say that $W_{out}$ is $Q_c+Q_h$, where $Q_c$ is the heat dissipated into a cold reservoir, and $Q_h$ is the heat absorbed from a hot reservoir. Both are measured within the system, such that $Q_c<0$ and $Q_h>0$. However I object to that. $W_{out}$ is not $Q_c+Q_h$. That calculation is simply the magnitude of net energy in the process due to work. Namely, it considers $W_{input}$, done by the surroundings on the gas, to calculate the $W_{output}$. Consider a Carnot engine. I would say that the work output is the area underneath the expansion isotherm and the expansion adiabat. All other works are done on the system and are not "outputs". You might say that the other works are "negative outputs". But although this makes sense in a mathematical sense, it doesn't make practical sense that this should be a $W_{output}$. I'm obviously wrong about this, so I'd like someone to clear things up for me. Thanks.
The Carnot cycle is ... well ... a cycle. In each turn of the cycle, the system returns to exactly the same state over and over again. Pick a point, call it the beginning of the cycle. After one turn of the cycle, the system is where it started, with exactly the same energy it had when it started. During the cycle, it has done some work. Where has that energy come from? It can't have come from the system, the system has exactly the same energy it had when it started. The energy can only have come from the heat reservoirs. $Q_h$ is removed from the hot side, $Q_c$ is deposited the cold side. The net energy, $Q_h + Q_c$ has been converted to work. The Carnot engine provides the mechanism for the transfer of the energy from heat to work.
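A numeric sketch of this bookkeeping for an ideal monatomic gas Carnot cycle (all state values are my own illustrative choices): the two adiabatic works cancel in pairs over a full turn, the net work equals $Q_h + Q_c$, and the efficiency comes out as $1 - T_c/T_h$.

```python
import math

# One turn of an ideal-gas Carnot cycle (illustrative state values):
n, R = 1.0, 8.314          # mol, J/(mol K)
T_h, T_c = 500.0, 300.0    # K
V1, V2 = 1.0e-3, 2.0e-3    # m^3, isothermal expansion at T_h
gamma = 5.0 / 3.0          # monatomic ideal gas

# Adiabats: T * V^(gamma-1) = const links the two isotherms,
# so V3/V4 = V2/V1 automatically.
ratio = (T_h / T_c) ** (1.0 / (gamma - 1.0))
V3, V4 = V2 * ratio, V1 * ratio

Q_h = n * R * T_h * math.log(V2 / V1)    # heat in from hot reservoir (> 0)
Q_c = -n * R * T_c * math.log(V3 / V4)   # heat into cold reservoir (< 0)

W_net = Q_h + Q_c                        # adiabatic works cancel in pairs
efficiency = W_net / Q_h
print(W_net, efficiency, 1.0 - T_c / T_h)   # efficiency matches 1 - Tc/Th
```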
{ "language": "en", "url": "https://physics.stackexchange.com/questions/164583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Bounded operator - definition? As mentioned also in Bounded and Unbounded Operator, an operator $A$ is said to be bounded if $$\|Af\|\leq k \|f\|,$$ where the constant $k$ does not depend on the choice of $f$ (let us consider a map to the same Banach space). However, in a mathematical physics text I came across a definition: a symmetric operator $B$ is said to be bounded from below if there $\exists$ a constant $c$ such that $$\langle\psi,B\psi\rangle\geq c\|\psi\|^2$$ for all $\psi$ in the domain of $B$. Both definitions are logical (in the second one we can imagine $B$ being the Hamiltonian; then the system energy is bounded from below and hence the system is stable). The only thing that bothers me is that when we rewrite the first definition into a form similar to the second one (we assume the norm comes from an inner product), namely: $$\langle Af, Af\rangle \leq k\|f\|^2,$$ we get something quite different on the left-hand side, so the same words (bounded operator) refer to different things. Any hints on how I can clarify this to myself?
When one says that an operator is bounded, you can think of it as being bounded from above. This is different from being bounded from below. An operator can be bounded (from above) and bounded from below, or perhaps just bounded, or just bounded from below. Observe that $(Af,Af)$ and $(\psi,B\psi)$ are slightly different: the former is always non-negative for any operator $A$, not necessarily symmetric, while the latter can be negative. If $c$ turns out to be non-negative then the operator $B$ is positive and, if it is also bounded (from above), its spectrum is contained in $[c,\Vert B\Vert]$.
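A finite-dimensional toy version of this (my own example): for a symmetric matrix, the quadratic form $\langle\psi,B\psi\rangle$ is squeezed between $\lambda_{\min}\|\psi\|^2$ and $\lambda_{\max}\|\psi\|^2$, so the smallest eigenvalue plays the role of the lower bound $c$.

```python
import math

# 2x2 symmetric matrix as a toy "operator":
a, b, d = 2.0, 1.0, -1.0
B = [[a, b], [b, d]]

# Eigenvalues of [[a, b], [b, d]]:
mean = (a + d) / 2.0
rad = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
lam_min, lam_max = mean - rad, mean + rad

def quad_form(psi):
    x, y = psi
    Bx = a * x + b * y
    By = b * x + d * y
    return x * Bx + y * By          # <psi, B psi> (can be negative here)

def norm2(psi):
    return psi[0] ** 2 + psi[1] ** 2

for psi in [(1.0, 0.0), (0.3, -0.7), (2.0, 5.0), (-1.0, 1.0)]:
    q, n2 = quad_form(psi), norm2(psi)
    assert lam_min * n2 - 1e-12 <= q <= lam_max * n2 + 1e-12

print(lam_min, lam_max)   # c = lam_min bounds B from below
```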
{ "language": "en", "url": "https://physics.stackexchange.com/questions/164690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Derivation of group velocity? In the standard simplified derivation of group velocity (which can be found here) we use two waves $$y_1=A\sin(k_1x-\omega_1 t)$$ $$y_2=A\sin(k_2x-\omega_2 t)$$ In the proof we then get $$V_g=\frac{\Delta \omega}{\Delta k}$$ But I do not understand the step where this is then turned into $$V_g=\frac{\mathrm{d} \omega}{\mathrm{d} k}$$ Why do we assume that $\Delta \omega$ and $\Delta k$ are small? The derivation is valid in the case where they are not small, which means that $$V_g= \frac{\mathrm{d} \omega}{\mathrm{d} k}$$ does not hold in this case and therefore does not hold in general. Consider this example: let $k_1=3$ and $k_2=1$, and say we have the relationship $\omega=k^3$. Using my first formula we get $V_g=13$, but using the second (with $\bar k=2$) we get $V_g=12$; these are different.
The basic answer to your question is that the derivation is only valid for small $\Delta k$ (and correspondingly small $\Delta \omega$), so that the beats have a much smaller wavenumber and frequency than the two waves that you are adding. The group velocity only makes sense if you have a well defined envelope. In this Desmos demo: https://www.desmos.com/calculator/ffaybnenut , you only have a clear envelope for about $a \geq 6$, which means $\Delta k$ must be a sixth as big as $k$ or smaller for the derivation to apply. Mathematically there's nothing invalid about taking $\Delta k$ large, but the point is to capture the physics.
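The questioner's own example $\omega=k^3$ shows this numerically: the two-wave estimate $\Delta\omega/\Delta k$ approaches $d\omega/dk = 3\bar k^2$ only as $\Delta k$ shrinks (a quick sketch; the function name is mine):

```python
def finite_difference_vg(k1, k2, omega=lambda k: k**3):
    """Two-wave 'beat' velocity (Delta omega)/(Delta k)."""
    return (omega(k2) - omega(k1)) / (k2 - k1)

kbar = 2.0
for dk in (2.0, 0.2, 0.02):          # waves at kbar -/+ dk/2
    print(dk, finite_difference_vg(kbar - dk / 2, kbar + dk / 2))
# dk = 2.0  gives 13.0 (the question's example, k1 = 1 and k2 = 3)
# dk = 0.2  gives ~12.01
# dk = 0.02 gives ~12.0001, approaching d(omega)/dk = 3 * kbar**2 = 12
```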
{ "language": "en", "url": "https://physics.stackexchange.com/questions/164829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Conservation and forces/energy Are there really non-conservative forces in actuality? Feynman states in his book that, in fact, all forces are conservative (originating from conservative vector fields), provided we look close enough (at the microscopic level). The reasoning is that we can't allow non-conservative forces in order for conservation of energy to follow. But at the same time, physicists who seem to really know the subject at an advanced level assert that most forces refuse to be conservative. For instance, see the accepted answer of Locally every force admits a potential?. So, are all forces conservative and conservation of energy is not violated, or are there non-conservative forces and conservation of energy is violated, or, finally, can the law of conservation of energy coexist with non-conservative forces?
All the known forces conserve energy, but they don't necessarily conserve energy in macroscopic modes. For instance friction takes some of the energy of macroscopic motion and converts it into an increase in temperature (i.e. energy in microscopic modes). Total energy is conserved but energy that is useful at the human scale is not. Feynman is talking about all energy, while introductory textbooks (and those that concern themselves with thermodynamics) use a more restrictive definition. Just make sure you know which definition you care about.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/164916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How could a cord withstand a force greater than its breaking strength? How could a 100 N object be lowered from a roof using a cord with a breaking strength of 80 N without breaking the cord? My attempt to answer this question is that we could use a counterweight. But I don't really understand the concept behind counterweights, so I hope someone can clear that up for me, and if there is a better answer I'd love to know it.
Breaking strength refers to the maximum tension in the cord. Now, from the sounds of this problem, you've probably been doing force diagrams involving cords. What happens when you attach two cords to a single 100N object (and keep it stationary)? Is the tension in both of those cords 100N? Or is the combined force 100N, so that each just has 50N? Put another way, most ropes you see will be made of many individual little threads. Each one of them is much weaker than the whole rope. See what I'm getting at?
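A minimal sketch of the force balance being hinted at (assuming identical vertical cords that share the static load equally; the function name is mine):

```python
def tension_per_cord(weight, n_cords):
    """Tension in each of n identical vertical cords sharing a static load."""
    return weight / n_cords

breaking_strength = 80.0   # N
weight = 100.0             # N

# One cord carries the full 100 N and snaps; two cords carry 50 N each.
assert tension_per_cord(weight, 1) > breaking_strength
assert tension_per_cord(weight, 2) <= breaking_strength
```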
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Sound waves during day and night A man stands on the ground at a fixed distance from a siren which emits sound of fixed amplitude. The man hears the sound to be louder on a clear night than on a clear day. Why?
The speed of sound depends on the square root of temperature, so the refractive index is proportional to $T^{-1/2}$. Let's assume that the sound is emitted isotropically. During the day, the usual situation is that the temperature decreases with height. Thus the refractive index increases with height. This will tend to make sound waves emitted in the direction of the listener bend upwards into the atmosphere, reducing the amplitude/loudness that they hear. At night it is quite possible to get a temperature inversion (especially on a clear night), such that air near the ground is colder than higher up. As the refractive index decreases with height it means that sound waves propagating upwards at some angle to the horizontal will be bent back towards the ground. The sound waves at some distance from the source will be more intense than you might expect if the waves propagated isotropically. So refraction makes the siren sound louder on a clear night, exactly the situation you describe, and indeed that has been my empirical experience.
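A rough numerical sketch of the refraction argument, using the ideal-gas speed of sound and a crude two-layer Snell's-law model (the temperatures and angle are illustrative choices of mine, not from the answer):

```python
import math

def sound_speed(T_kelvin, gamma=1.4, R=287.05):
    """Speed of sound in dry air (ideal-gas approximation), in m/s."""
    return math.sqrt(gamma * R * T_kelvin)

# Night-time inversion: a cold, slow layer near the ground, warm fast air aloft.
c_ground = sound_speed(280.0)
c_above = sound_speed(290.0)

# Snell's law for sound (n ~ 1/c): sin(theta)/c is conserved across the layers,
# with theta measured from the vertical.
theta_ground = math.radians(60.0)
theta_above = math.degrees(math.asin(math.sin(theta_ground) * c_above / c_ground))
print(round(c_ground, 1), round(c_above, 1), round(theta_above, 1))
# The upward ray tilts further from the vertical in the faster layer; rays
# launched shallowly enough reach 90 degrees and bend back toward the ground.
```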
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is there a maximum frames per second (FPS)? Take a video camera and crank up the frames per second rate. Disregarding current technological advancements, could a camera's FPS go so fast that any two captured images be identical? Would accomplishing this defy "time"?
I've worked on a camera that has as one of its core features the ability to increase the FPS until you are counting single photons. Here is one of the pdfs about it. You will see from the figures that there is an intrinsic tradeoff between the noise/image-quality and the FPS, which is simply due to the statistics of photon counting noise as you get fewer photons. See for example figure 7, where there is an examination of photon statistics under various temporal and spatial binning. This sort of analysis is very relevant in low light limits where we are naturally photon starved, but it should be clear to see that the same concept would extend to a high light limit, just with the ability to go to shorter frame times than we are currently able. Once one got to a VERY short time period, Heisenberg's Uncertainty Principle would start to severely disrupt the image. Since $\Delta E \Delta t \ge \frac{\hbar}{2} \approx 5.3\times10^{-35}~\textrm{Js}$, and a blue photon is around $5.0\times10^{-19}~\textrm{J}$, that leaves us at about $1.1\times10^{-16}~\textrm{s}$ as the frame time at which the measurement is so short that the energy uncertainty would be equivalent to the energy of the photons we were trying to observe. Everything would be a noisy blur at that point, and therefore this is probably a fundamental limit for visible light.
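The closing estimate can be reproduced directly (a back-of-the-envelope sketch using the same round photon energy as the answer):

```python
h_bar = 1.054_571_8e-34        # J s
E_blue = 5.0e-19               # J, roughly a ~400 nm photon (the answer's figure)

# Frame time at which the energy uncertainty Delta E ~ h_bar / (2 Delta t)
# equals the energy of the photons being observed:
dt_limit = (h_bar / 2) / E_blue
print(dt_limit)                # ~1.05e-16 s, matching the ~1.1e-16 s estimate
```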
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 7, "answer_id": 0 }
Definition of a normal mode? What is the formal definition of a normal mode for a string? And how does this relate to the definition from e.g. wiki that seems to be applied to discrete systems of particles only? Also, on a string, what makes $$y=A\cos(kx)\sin(\omega t)$$ a normal mode, and $$y=A\sin(\omega t+kx)$$ not? (I know why the first is a stationary wave and the second is not, but that is not what I am asking here; I am specifically concerned with the definition of normal modes.)
Normal modes are the separable solutions to the string's (linear) partial differential equation, $$y(x,t) = X(x)T(t),$$ that arise from applying the solution method of separation of variables. These solutions form an orthogonal (normal) basis for any solution. Due to the form, a function of space only multiplied by a function of time only, the shape of the mode does not change with time, only the amplitude.
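One way to see the separability concretely: sample $y(x,t)$ on a grid; the sample matrix of a separable solution $X(x)T(t)$ is an outer product and so has rank 1, while the travelling wave does not (a numerical sketch; function names are mine):

```python
import numpy as np

def is_separable(y, xs, ts, tol=1e-10):
    """y(x,t) is separable, y = X(x)T(t), iff the matrix of samples
    y[i, j] = y(x_i, t_j) has (numerical) rank 1."""
    samples = np.array([[y(x, t) for t in ts] for x in xs])
    s = np.linalg.svd(samples, compute_uv=False)
    return s[1] < tol * s[0]

k, w, A = 2.0, 3.0, 1.0
xs = np.linspace(0.1, 1.0, 8)
ts = np.linspace(0.1, 1.0, 8)

standing = lambda x, t: A * np.cos(k * x) * np.sin(w * t)   # normal mode
travelling = lambda x, t: A * np.sin(w * t + k * x)         # not separable

print(is_separable(standing, xs, ts))    # True
print(is_separable(travelling, xs, ts))  # False: rank 2 after expanding the sine
```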
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Fractional exponent in a scalar quantum field: Are energy and momentum conserved in this case? Assume that I have the following term in the Lagrangian for a scalar boson field, $$L=\int d^4x\, g\, (\phi^{2-p} \phi^{\dagger\, 2+p}+\phi^{\dagger\, 2-p} \phi^{2+p}),$$ with a fractional number $p$. Now I insert the Fourier expansion for the scalar boson: $$\phi=(2 \pi)^{-4}\int d^4k\,A_{\vec{k}}\,e^{i \vec{k} \cdot \vec{x}}$$ Because of the fractional exponents I cannot use the relation $$\left(\int f(x)\, dx\right)^q=\int \prod_{i=1}^q f(x_i)\,dx_i$$ How can I express the above term in the Lagrangian in terms of multiple integrals over $k$? Does it happen that I do not get the factors $e^{i(\vec{k_1}+...+\vec{k_n}) \cdot \vec{x}}$ (which lead to energy/momentum-conservation delta distributions $\delta(\vec{k_1}+...+\vec{k_n})$ after integration over spacetime)? Will an energy/momentum-nonconserving term occur?
The $e^{i\vec{x}\cdot(\sum \vec{k})}$ factor, which leads to momentum conservation at each vertex when we go to momentum-space Feynman diagrams, is a perturbative result at heart. Your model as it stands is not cast in a way conducive to perturbation theory. Let's try this instead, if we want to have something amenable to perturbation (assuming $p<1$): do a field redefinition $\phi\equiv \rho e^{i\theta}$; then we will have $L = |\partial (\rho e^{i\theta})|^2 + 2g\rho^4 \cos(2p\theta)$. This can only be seen as a perturbative model if we are to expand in $p$, maybe, which would lead to an infinite number of vertices between $\rho^4$ and $n$ $\theta$s after expanding the cosine. Only then can you say that momentum is conserved at each vertex. Otherwise you'd have to solve the model nonperturbatively, with the guaranteed result that momentum will be conserved in the final answer, but without insight into what happens at each vertex, because the picture of vertices and such is not valid to begin with!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Symplectic structure and isomorphisms In his book Mathematical Methods of Classical Mechanics, V.I. Arnold writes To each vector $\xi$, tangent to a symplectic manifold $(M^{2n},\omega^2)$ at the point $\mathbf{x}$, we associate a 1-form $\omega^1_\xi$ on $TM_\mathbf{x}$ by the formula $$\omega^1_\xi(\boldsymbol{\eta})=\omega^2(\boldsymbol{\eta},\xi)\quad\forall\boldsymbol{\eta}\in TM_\mathbf{x}$$ I see how $\omega^2$ furnishes an isomorphism $\xi\rightarrow \omega^1_\xi$. But then Arnold has the example In $\mathbb{R}^{2n}=\{(\mathbf{p},\mathbf{q})\}$ we will identify vectors and 1-forms using the Euclidean structure $(\mathbf{x},\mathbf{x})=\mathbf{p}^2+\mathbf{q}^2$. Then the correspondence $\xi\rightarrow\omega^1_\xi$ determines a transformation $\mathbb{R}^{2n}\rightarrow \mathbb{R}^{2n}$. By "Euclidean structure" I presume he is talking about the Euclidean metric. But I don't see how this isomorphism induces the transformation $\mathbb{R}^{2n}\rightarrow \mathbb{R}^{2n}$ or furthermore how to determine the matrix of this transformation. Any help would be greatly appreciated.
Same as with the symplectic form: the metric defines an isomorphism between 1-forms and vector fields via $\omega(v) = (u_\omega,v)$. When the metric is Euclidean, the dual basis to an orthonormal basis corresponds to the basis itself, so composing the two identifications, $\xi \mapsto \omega^1_\xi \mapsto u_{\omega^1_\xi}$, gives a linear transformation $\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}$.
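A numerical sketch for the standard form $\omega^2=\sum_i dp_i\wedge dq_i$ on $\mathbb{R}^{2n}$ (sign conventions vary between texts; the coordinate ordering and signs here are my assumptions): reading $\omega^1_\xi(\eta)=\omega^2(\eta,\xi)=(\eta, W\xi)$ through the Euclidean inner product, the transformation $\xi\mapsto W\xi$ is a rotation by $-90^\circ$ in each $(p_i,q_i)$ plane, and $W^2=-\mathbb{1}$.

```python
import numpy as np

n = 2                                    # R^{2n}, coordinates ordered (p_1..p_n, q_1..q_n)
I, Z = np.eye(n), np.zeros((n, n))

# Matrix of omega^2 = sum_i dp_i ^ dq_i in this ordering:
# omega2(eta, xi) = eta @ W @ xi
W = np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(1)
for _ in range(100):
    eta = rng.standard_normal(2 * n)
    xi = rng.standard_normal(2 * n)
    # componentwise: sum_i (eta_pi * xi_qi - eta_qi * xi_pi)
    explicit = eta[:n] @ xi[n:] - eta[n:] @ xi[:n]
    assert abs(explicit - eta @ W @ xi) < 1e-12

print(np.allclose(W @ W, -np.eye(2 * n)))   # True: W is a complex structure
```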
{ "language": "en", "url": "https://physics.stackexchange.com/questions/165753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Kinetic energy of a variable mass particle If a particle's mass is a continuous differentiable function of time, $m(t)$, and its position is also a continuous differentiable function of time, $x(t)$, what is the expression of its kinetic energy? Does $\frac{1}{2}mv^2$ still hold?
Yes; if you integrate the correct form of Newton's 2nd law (see for example http://en.wikipedia.org/wiki/Variable-mass_system) you find the final kinetic energy is $\frac{1}{2}mv^{2}$. Just for fun I worked out what happens if you naively use the normal form of the 2nd law: $$\vec{F}_{net\,ext}= \frac{d\vec{p}}{dt}= \frac{d}{dt}[m(t)\vec{v}(t)]=m(t) \frac{d\vec{v}(t)}{dt}+ \vec{v}(t)\frac{dm(t)}{dt}$$ A particle cannot undergo internal displacement and hence cannot have an associated potential energy. Thus the work done on the particle by external forces goes into kinetic energy: $$\Delta K = m(t)\int\frac{d\vec{v}}{dt}\cdot d\vec{s}+ \vec{v}(t)\cdot \int\frac{dm}{dt} d\vec{s}$$ By changing the variable of integration the first integral becomes $$m(t)\int\vec{v}\cdot d\vec{v} = \frac{1}{2}[m(t_{f})v^{2}(t_{f})-m(t_{i})v^{2}(t_{i})]$$ Similarly the second integral becomes $$\vec{v}(t)\cdot \int\vec{v}(t)\,dm=m(t_{f})v^{2}(t_{f})-m(t_{i})v^{2}(t_{i})$$ Thus the kinetic energy of the particle should be $\frac{3}{2}m\,v^{2}$. However the second integral doesn't contribute to the kinetic energy of the particle, but rather to the extra mass the particle lost or gained during the time interval, which is exactly what the added term in Wikipedia's definition accounts for.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Mutually Commutative Operators What is the definition of a mutually commutative set of operators? I've found articles describing a complete set of mutually commutative operators, but I can't actually find what mutually commutative means. I ask because I'm asked to prove that a particular set of operators is mutually commutative.
Mutually commutative means that every operator in the set commutes with every other one. This implies that, if the operators in question are observables, they can all be measured simultaneously. A complete set of mutually commuting observables is a set of observable, Hermitian operators that commute, so their eigenvalues can be used to label a state. "Complete" refers to the state being fully determined without degeneracies. As an example: the most famous set is the quantum numbers labeling the Hydrogen orbitals, corresponding to the set of observables $$ \{\mathcal{H}, \vec{J}^2, J_z, \vec{L}^2, \vec{S}^2\}$$ With the five eigenvalues of these operators the state of the electron in the hydrogen atom can be uniquely determined, and all these values can be measured simultaneously because every operator in the set commutes with all the others.
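A small sketch of the definition with spin-1/2 matrices, in units where $\hbar=1$ (the helper name is mine): every pairwise commutator $[A,B]=AB-BA$ must vanish.

```python
import numpy as np

def mutually_commutative(ops, tol=1e-12):
    """Check [A, B] = AB - BA = 0 for every pair of operators in the set."""
    return all(np.allclose(A @ B, B @ A, atol=tol)
               for i, A in enumerate(ops) for B in ops[i + 1:])

# Spin-1/2 operators (hbar = 1): J^2 commutes with Jz, but Jx does not.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
J2 = sx @ sx + sy @ sy + sz @ sz        # = (3/4) * Identity

print(mutually_commutative([J2, sz]))       # True
print(mutually_commutative([J2, sz, sx]))   # False: [sx, sz] != 0
```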
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Do transferring energy and applying force to a body mean the same? Do transferring energy and applying force to a body mean the same thing? When we say "I throw a ball using my pushing force", can I equally say that I transferred kinetic energy to the ball, and that is why it started moving?
Usually, yes, but not always. When you apply a net force to a mass (please note the word "net"), the object becomes accelerated. This acceleration means the body changes velocity, and a change in speed means a change in the energy, because of the energy formula $E=\frac{1}{2}mv^2$. The case of motion in a circle is particularly interesting, because there is a force applied but the energy does not change. This is the result of applying force to decrease the velocity of the mass in one direction (say the x axis, if the mass initially moves along the x axis) while at the same time you apply force to accelerate the same mass in the perpendicular direction (the y axis). This keeps the speed, and hence the energy, constant while you apply a centripetal force. You can check this by yourself by decomposing the centripetal force into its x-y components and calculating the work done (work done equals the change in energy). Or, you can look up the demonstration.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can mass be uncertain? If you can have uncertainty in momentum, then wouldn't you have uncertainty in mass and velocity? Why can't mass be uncertain?
Based on the simple statement of the Heisenberg uncertainty principle, there is an inverse trade-off between position and momentum: if you're willing to know very little about location, then you can measure momentum with accuracy. Now, on the quantum level, particles can do strange things like borrow energy from the future, so there's probably a small and perhaps temporary level of uncertainty always; but beyond that, the uncertainty principle allows you to measure mass, velocity or momentum precisely so long as you don't care about location.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Does the speed of electrons depend on energy? I would like to know whether the speed of an electron depends on energy. If yes, then in a circuit, when electrons flow out of a resistor, the energy decreases by a considerable amount, leading the energy per electron to decrease and eventually the current in a series circuit to decrease. How is that possible?
If you have current flowing one way through a resistor, then the electrons flow through it the other way. Since current flows from the high voltage end of a resistor to the low voltage end, the electrons come in at the low voltage end and come out at the high voltage end. When electrons (which are negatively charged) go from low voltage to high voltage, they gain energy from the electric field driving them. However, in a resistor, they lose an equal and opposite amount of energy by repeatedly crashing into other parts of the resistor, thus heating up the resistor. The power loss in a resistor is $IV$, which is exactly what the electrons would have gained in energy had they not lost it due to collisions within the resistor. So two things happened: the electrons were given energy from the electric field, and they gave energy to the resistor by heating it up. Easy come, easy go. Equal numbers of electrons flowed in and out (from no charge buildup) and they leave with the same energy, hence the same speed, so the same current.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does the thought experiment of a photon bouncing of a mirror imply anything for other objects? Okay, so I am reading a book, "The Elegant Universe" by Brian Greene, which talks about motion and its effect on time. Greene makes the point that time changes with motion by saying that if you have two mirrors and bounce a photon off of them it will have bounced off them very very often during a second. But, if you have the mirrors moving you must shoot the photon at an angle so it will be able to hit the mirrors and not miss and fly off. Since you are shooting diagonally it takes the photon more time to hit each mirror. So it's basically saying that time changes the closer you come to light speed. Why? Why does this hold true for all matter? How does bouncing a photon around prove anything?
This is an effect known as time dilation. In this post, I will be taking material from the excellent book, Einstein Gravity in a Nutshell, by A. Zee. The argument is based on the standard light-clock picture: a photon bouncing between two mirrors separated by a distance $L$. We bounce a photon around to create a clock. It is postulated that the speed of light is the same in all frames. In the rest frame, the time it takes for light to make a round trip is $$\Delta t=\frac{2L}{c}$$ In the moving frame we have a (relative) velocity $u$. Using a little geometry, we find that $$c\Delta t'=2\sqrt{(\tfrac{1}{2}u\Delta t')^2+L^2}$$ You can easily solve this to obtain $$\Delta t'=\gamma\Delta t,\quad \gamma=(1-u^2/c^2)^{-1/2}$$ which is the basic equation for time dilation.
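Plugging numbers into the result (a quick sketch; the mirror separation is an arbitrary choice):

```python
import math

c = 299_792_458.0            # m/s

def gamma(u):
    """Lorentz factor for relative speed u."""
    return 1.0 / math.sqrt(1.0 - (u / c) ** 2)

L = 1.0                      # mirror separation in metres
dt_rest = 2 * L / c          # one round trip in the clock's rest frame

for u in (0.1 * c, 0.5 * c, 0.9 * c):
    print(u / c, gamma(u), gamma(u) * dt_rest)
# gamma is roughly 1.005, 1.155, 2.294: the moving light clock ticks slower
```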
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can an object appropriately isolated from its surroundings become colder than its surroundings? Consider a sealed box, well-insulated on all sides, except for the lid which is transparent to infrared. An object is placed inside the box and the box is evacuated (purpose being to thermally isolate the contents of the box from its surroundings). The box is placed outdoors (in an everyday atmosphere) on a clear night. Let's assume that at the start of this experiment, the box and its contents are in thermal equilibrium with its surroundings. The object inside the box will radiate infrared according to its temperature, which should escape through the lid of the box. With nothing but clear dark sky above, I assume there is nothing to radiate appreciable heat back into the box and maintain the object's temperature. Question: will the object cool below the ambient temperature outside the box?
There is a well known sinkhole in Utah that displays similar characteristics to what is described in the [nice] accepted answer. There is relatively little standing between that location and the upper atmosphere. Peter Sinks, Utah: During calm cloudless nights, this high elevation basin dissipates daytime heat rapidly into the atmosphere. Cool dense air can then slide downwards towards the basin floor in a process known as cold air pooling. Consequently, extremely low temperatures can occur, particularly in the wake of arctic fronts in winter. The second coldest temperature recorded in the continental United States happened in that location in February 1985. The conditions of a still, clear sky and the overall extreme cold front led to a reading of -69 °F. In this case we are describing the air temperature and not surface temperature, but similar considerations apply. I remember hearing about that measurement when at Purdue U. in a relatively balmy -20 °F.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Predicting Polarity of capacitor in the given diagram I have to determine the polarity of the Capacitor in the given diagram. I Approached the problem as follows: When Magnet 1 moves with its North pole towards the coil, emf is induced in the coil as the magnetic flux through the coil changes. So, when seeing from the left hand side (i.e. from magnet 1) the direction of induced current appears to be Anticlockwise. Though, on seeing from the left hand side, the South pole of magnet is coming towards, according to Lenz's Law the coil will behave like a South pole, thus the direction of current is Clockwise. I am stuck at this point. How shall I proceed? My textbook explained it this way which I did not understand: The direction of induced current when seen from the left hand side is Anticlockwise, and its direction is Clockwise when seen from right hand side.Thus, direction of induced current is in Clockwise sense (why?) . This implies Plate A is positive plate and point B is negative one. Please Help.
The two magnets are mirrored, so from different sides the same current can appear either clockwise or anticlockwise. Think if the coil was flat in the page and the north magnet was dropping from above. This creates an increasing $\vec{B}$ field into the page, so the emf must induce a current out of the page, and the current will be anticlockwise. Now imagine the same north magnet approaching the coil from under the page. The north magnet creates an increasing $\vec{B}$, this time out of the page. The emf must counter this magnetic field, and so from this perspective we see a clockwise current. But both systems are equal, and the only thing that changed was our perspective. If you keep the capacitor with labeled sides A and B in your diagrams of both perspectives, you'll see that in both cases current flows into side A. When you do the analysis of the current for the south magnet you should find the same thing, that current flows into side A of the capacitor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Why does light travel as waves? Why does light travel as waves instead of say just a straight line? What are the forces that make a light photon travel in a wavelike pattern?
Your wording suggests a few misconceptions: * *It seems you are thinking of light as having a corpuscular nature (nothing wrong with that, you are in good company). Well it turns out that things just do not work that way. Phenomena like diffraction (to name one) tell us that we cannot describe the behaviour of light thinking of it as composed of (classical) particles.$^\dagger$ *When we say that "light is composed of oscillating electromagnetic waves", we do not mean that some physical entity that we call "light" is literally going up and down in space. What oscillate are the electric and magnetic fields composing the electromagnetic field. And when saying that an electric/magnetic field oscillates we mean that the intensity of that field is going up and down in some complicated way at every point in space. This has nothing to do with actual motion induced by some force. $^\dagger$ It is probably mandatory at this point to point out that light can't be described as "just" electromagnetic waves either. It has in fact a quantum mechanical nature, which means that it can be both wave-like and particle-like, depending on what you want to measure. The double slit experiment is the canonical example of this. If you still wonder as to why does nature work this way, I suggest this Phys.SE question regarding the Why vs How issue in physics, and of course the classic Feynman video.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/166740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is most probable speed not equal to rms speed for an ideal gas? The rms speed of ideal gas is $$\mathit{v_{rms}} = \sqrt{\dfrac{3RT}{M}}.$$ The most probable speed is the speed where $\dfrac{dP(\mathit {v})}{dv} =0$ where $P(\mathit{v})$ is the probability distibution. Solving for $\mathit{v}$, we get $$ \mathit{v_p} = \sqrt{\dfrac{2RT}{M}}.$$ Now, $$\mathit{v_p} \neq \mathit{v_{rms}}.$$ Why? Why is it so?
In any probability distribution, there are many ways to find some kind of "average" value, that is, ways to define the "centrality" of the distribution. In discrete distributions you have almost certainly come across mean, median and mode, and perhaps also the different "flavours" of means: arithmetic, geometric, harmonic etc. For continuous distributions we have yet more ways of finding the centrality, e.g. the RMS (normally used for distributions where the random variable can be positive or negative in equal measure) as well as the most probable value. Generally these numbers, which are single-number representatives of the whole distribution, will be different, although in special cases they can be equal. Here we have a distribution that is certainly not one of these special cases, so it is likely that any two of the chosen measures will be different.
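For a concrete gas the two measures from the question differ by the fixed factor $\sqrt{3/2}\approx 1.22$, independent of $T$ and $M$ (a quick numerical check; the nitrogen values are my choice of example):

```python
import math

R = 8.314462618          # J / (mol K)

def v_rms(T, M):
    """Root-mean-square speed, sqrt(3RT/M)."""
    return math.sqrt(3 * R * T / M)

def v_p(T, M):
    """Most probable speed, sqrt(2RT/M)."""
    return math.sqrt(2 * R * T / M)

T, M = 300.0, 0.028      # nitrogen at room temperature, molar mass in kg/mol
print(v_p(T, M), v_rms(T, M))      # roughly 422 m/s vs 517 m/s
print(v_rms(T, M) / v_p(T, M))     # sqrt(3/2) ~ 1.2247 for any T and M
```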
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding incompressibility (of rubber or viscoelastic material) Literature gives a lot of explanations of why rubber is incompressible. However, I still need some help to understand the physical behavior of rubber or any such material. Often, incompressibility is tied to Poisson's ratio ($\nu$) -> 0.5. As Poisson's ratio -> 0.5, the ratio of bulk modulus ($K$) to shear modulus ($G$), given below, tends to infinity. $$ \lim_{\nu \to 0.5} \frac{K}{G}=\lim_{\nu \to 0.5}\frac{2(1+\nu)}{3(1-2\nu)} \rightarrow \infty $$ However, I feel this is only math. I think of the following: if we take water in a cylinder and compress it with a piston (assuming no gap for water to 'leak'), will the water be compressed? I read that water is incompressible. Now, if I take rubber instead, will the rubber be compressed? Or, with the magical property of incompressibility, will rubber become so stiff that even if I apply tons of force to the piston, the rubber in the cylinder will not change its volume? I would appreciate insights into this. Thanks.
Nothing is incompressible, but most liquids and solids have a very low compressibility, i.e. a very high bulk modulus. The reason for this is that in liquids and solids the atoms/molecules are in contact with each other. To squeeze them closer together you need to deform the bonds in molecules and/or the electron distribution around atoms. Both processes take a lot of energy so the force required is high. I'm using a rather vague definition of "in contact" here. Atoms and molecules don't have sharp edges; they are fuzzy objects where the electron density falls off continuously with distance. However there will be an equilibrium distance where the attraction due to Van der Waals or dipolar forces is balanced out by the repulsion due to overlap of the electron clouds. It's trying to push atoms closer together than this equilibrium distance that takes a lot of energy and therefore requires a lot of force.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Why are solar panels blue, rather than black, when black absorbs more light? This is an image of a solar panel array, courtesy of Wikipedia. Some of these look rather black, but most of them are blue. As far as I know, solar panels work by absorbing "light energy", and then converting this to "electrical energy". Some of the energy is also converted into "heat energy", as is natural; things put into sunlight will warm up. Lastly, some of the "light energy" will get reflected as "light energy". But only on specific wavelengths. That's how we can see colors... Now, black objects reflect less light than blue objects. So, given a certain amount of light denoted by $x$, it should hold true that blue.reflected(x) > black.reflected(x). Inversed, it should hold that (black.heat(x) + black.electrical(x)) > (blue.heat(x) + blue.electrical(x)). Basically, because there's less light reflected, more energy is absorbed. So if a black object (say, a black solar panel) absorbs more energy than a blue object (like a blue solar panel), why are blue solar panels still in use? Why aren't solar panels black, as to absorb the maximum amount of energy from the light?
You're looking at solar cells for terrestrial operation. The main efficiency number is not Power_electric/Power_solar, but Power_electric/investment. Capturing the last few bits of blue light just isn't worth it. In space applications, the investment is dominated by the launch costs. Using a more exotic material to capture 1% more energy might shave a kilogram of weight from the solar panel array, and that's worth thousands of dollars. In those situations, you're likely to see darker panels. The other extreme is cheap solar-powered toys, which need very little power at all and just care about total cost. Such cells can even be beige. If it saves a cent and the toy still works, who cares?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why aren't all black holes the same "size"? The center of a black hole is a singularity. By definition, a singularity has infinite density. So how can a black hole with a different mass or density be described?
What matters is the mass of the black hole. All black holes have a singularity that has no size; space and time themselves break down and become meaningless there. Since space is meaningless at the singularity, so is density: it only has mass. The amount of matter that has fallen into the black hole determines its mass, and the more mass a singularity has, the larger its event horizon. So the greater the black hole's mass, the larger it is.
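The mass-to-size scaling described here can be made concrete. A minimal sketch, assuming the standard Schwarzschild formula $r_s = 2GM/c^2$ for a non-rotating black hole (a textbook result, not stated explicitly in the answer above):

```python
# Hedged sketch: for a non-rotating black hole the event-horizon radius
# grows linearly with mass via the Schwarzschild formula r_s = 2*G*M/c^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a given mass."""
    return 2.0 * G * mass_kg / c**2

r_1 = schwarzschild_radius(M_sun)         # about 3 km for one solar mass
r_10 = schwarzschild_radius(10 * M_sun)   # exactly ten times larger
```

So doubling the mass doubles the horizon radius; the singularity itself stays point-like regardless.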
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 5 }
On the definition of elastic restoring force in a spring How is the elastic restoring force defined exactly for a spring? We know by Hooke's law that $$F_\text{restoring} = -kx$$ but what does $F_\text{restoring}$ really mean? I thought up till now that it was the force the spring pulled with at both ends if you stretched it by a distance $x$. This definition worked pretty well until I encountered some problems when I was doing problems a little above my usual level. I have stripped down the problem I encountered to its core (where I think my confusion arises from): Consider a spring attached to a wall (massless, ideal) in its relaxed state. If we pull it with a force $F$, clearly the spring pulls back with a force $F$. However, initially the spring is unstretched, so the definition fails in this case. What is the precise definition of a restoring force in a spring in the most general case?
The word 'restoring' is synonymous with 'opposing' in that it matches the applied force, but in the opposite direction. But more so, 'restoring' implies that energy is being stored - potential energy - which can subsequently be retrieved. The potential energy is the integral of force over the path of deflection: $$E_p=\tfrac{1}{2}kx^2$$ The energy imparted by the pulling force is stored in the spring, which is then able to do work. In its relaxed state (position) one can arbitrarily assign 'zero' potential energy by defining 'x' as zero at that position. Any deflection relative to zero stores energy. Another interpretation is the fact that springs tend to 'restore' their position to the relaxed state once the net external forces are removed.
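The integral mentioned above can be checked symbolically; a minimal sympy sketch:

```python
import sympy as sp

# Sketch: the stored energy is the work done against the restoring force,
# integrated from the relaxed position (deflection 0) to deflection x.
k, x, xp = sp.symbols("k x xp", positive=True)
E_p = sp.integrate(k * xp, (xp, 0, x))
# E_p comes out as k*x**2/2, matching the formula above
```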
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is it possible to use hot cloudy water as a cloud chamber? This morning I ran some warm water from the shower head into a dark plastic basin to wash some sensitive clothes. During the process lots of tiny bubbles got into the water, so it had a cloudy appearance. Since the water was quite warm, the air in the bathroom was humid. The water was steaming. Having seen some cloud chamber videos before, I suddenly recognized horizontal thread-like patches of cloud rising from the surface. But once all the bubbles floated up and popped and the water cleared up, the steaming stopped. My hypothesis is that the tiny bubbles released a lot of water vapor at the surface when they popped, making a supersaturated region for some seconds that allowed condensation along trails. But I'm not sure. Did I really see particle ionization trails? Or did the turbulent air flow just cause the thread-like patches of cloud?
Have you heard of the "bubble chamber"? It is like a cloud chamber, but uses liquid hydrogen (usually). When you take a liquid to a temperature/pressure where bubbles could form if there is a nucleation site, you can indeed observe traces. Now whether you observed something like this in your bathroom is hard to estimate. Supersaturated liquids (like hot water straight from a tap) can indeed produce small bubbles, and if the conditions are just right it's conceivable that a fast particle would produce a streak. But such a streak would more likely be inside the liquid (bubbles), rather than in the vapor above it. It is possible though - if the air was sufficiently still, and there were no other nucleation sites. I have never seen it myself.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How does one experience single gravitons and/or Planck-force? Moving sufficiently far away from a light source one would not be able to measure a steady stream of light, but only single photons every now and then. The experience would be a very faint blinking. Can this behavior be translated to gravity? From what I understand gravity is similarly quantized and transmitted via gravitons. How would one experience/measure gravity, sufficiently far away from every other body of mass? As a blinking of gravity? A tug of Planck-force every now and then? What is the mental picture to paint here? Edit: As an interested layman I deduced the necessity to quantize gravity from the necessity to quantize the attractive force it causes between objects (space and time are quantized, therefore acceleration must be, therefore force must be as well). I then further assumed that these "quantum force packets" are equivalent to a graviton. The first couple of answers indicate that this assumption is wrong. So, my rephrased question is: How would one experience/measure the force induced by gravity, sufficiently far away from every other body of mass?
From what I understand gravity is similarly quantized and transmitted via gravitons. Well, we don't know that. There is no accepted quantum theory of gravity, only approximations like semiclassical approaches. We cannot give you a "mental picture" at the moment because we don't have one. We can speculate all day, and extrapolate from all the other forces and such, but we cannot, with the certainty usually required of scientific theories, proclaim anything definite about the way gravity works at the quantum scale.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding the appropriate coordinate transformation given two metrics Given the two-dimensional metric $$ds^2=-r^2dt^2+dr^2$$ How can I find a coordinate transformation such that this metric reduces to the two-dimensional Minkowski metric? I know that $g_{\mu\nu}=\begin{pmatrix}-r^2&0\\0&1\end{pmatrix}$ (this metric) and $\eta_{\mu\nu}=\begin{pmatrix}-1&0\\0&1\end{pmatrix}$ (Minkowski). Obviously, the matrix transformation is $\begin{pmatrix}1/r^2&0\\0&1\end{pmatrix}g_{\mu\nu}=\eta_{\mu\nu}$, but how is that related to the coordinate transformation itself? EDIT: would the following transformation be acceptable? $$r'=r\cosh t$$ $$t'=r\sinh t$$ Such that: $dr'=\cosh t\ dr+r\sinh t\ dt,\quad dt'=\sinh t\ dr+r\cosh t\ dt$ And: $ds'^2=-dt'^2+dr'^2=-r^2dt^2+dr^2=ds^2$ Where we have: $ds'^2=\eta_{\mu\nu}dx^{\mu}dx^{\nu}$ as requested. Is that correct? Also, is there a formal way of "deriving" the proper change of coordinates (since mine is more of an educated guess)?
If you were to Wick rotate $t \rightarrow i \theta$, the metric would be $ds^2 = dr^2 + r^2 d\theta^2$, which is just flat space in polar coordinates. The standard cartesian coordinates can be obtained by $x=r\cos\theta$, $y=r\sin\theta$. The same procedure works in the original Lorentzian signature metric, but with hyperbolic trig functions instead of sines and cosines. By the way, this is two-dimensional Rindler space, which is just a patch of two-dimensional Minkowski space: http://en.wikipedia.org/wiki/Rindler_coordinates.
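The educated guess in the question can also be checked mechanically. A minimal sympy sketch (variable names are my own) that substitutes $t' = r\sinh t$, $r' = r\cosh t$ into the Minkowski line element and recovers the Rindler metric:

```python
import sympy as sp

t, r, dt, dr = sp.symbols("t r dt dr", positive=True)

# primed Minkowski coordinates from the proposed transformation
tp = r * sp.sinh(t)
rp = r * sp.cosh(t)

# total differentials dt' and dr' in terms of dt and dr
dtp = sp.diff(tp, t) * dt + sp.diff(tp, r) * dr
drp = sp.diff(rp, t) * dt + sp.diff(rp, r) * dr

# Minkowski line element ds'^2 = -dt'^2 + dr'^2 in the unprimed coordinates
ds2 = sp.simplify(sp.expand(-dtp**2 + drp**2))
# ds2 reduces to -r**2*dt**2 + dr**2, the original metric
```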
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
White, is it a colour or the absence of colours? Our chemistry teacher and we had an argument today at the lab. He says that white is actually not a colour but the absence of colour, while we say that it is a colour, and we gave the following point to substantiate our claim: when we see an object as red, it actually reflects red light and absorbs all the other colours; from this point of view, a white object reflects all colours which fall on it, so it is a colour. We do not know who is correct; I am posting this question in the hope that I will get the correct answer.
Colors have been defined by the International Commission on Illumination. They have defined the CIE XYZ color space where white is a color defined by the point x = y = z = 1/3.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/167935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
What happens to photons that get trapped in a black holes event horizon? So, I know that photons do not travel fast enough to escape a black hole once it passes the event horizon. Also, I know that the photons themselves aren't affected by the gravity, but rather their path instead. My question is, if the photons are stuck in-between the singularity and the event horizon, where do they go? Do they build up around the singularity, and they just haven't built up enough to pass the event horizon, or do they somehow escape and just don't emit light?
In a classical Schwarzschild black hole, inside the event horizon all things, whether they be massless photons or bodies with mass, will travel towards smaller radial coordinate. This applies even to light that is emitted outwards from inside the event horizon. That is, both light and mass are inevitably compelled to move inwards and will ultimately encounter the singularity, so there is no build up of anything between the event horizon and the singularity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Frequency dependence of the speed of light in air According to this link, the speed of light of different colors in a medium should be different. But if the refractive index of light in air is 1 then this means that the speed of light in air and vacuum should be the same. Could anyone help me out here? Thanks
It is true in general that the speed of light in a medium will depend to some extent on the wavelength/frequency of the light itself, but in most (not all) everyday situations this dependence is not apparent or important, and neglecting it makes the theory of optics much easier mathematically. As for the refractive index of air, it is not quite 1 but slightly larger, about 1.0003 IIRC, so although we can approximate it as 1, light actually travels slightly slower in air than it does in a vacuum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Tensor product in quantum mechanics In Cohen-Tannoudji's Quantum Mechanics book the tensor product of two two Hilbert spaces $(\mathcal H = \mathcal H_1 \otimes \mathcal H_2)$ was introduced in (2.312) by saying that to every pair of vectors $$|\phi(1)\rangle \in \mathcal H_1, |\chi(2)\rangle \in \mathcal H_2$$ there belongs a vector $$|\phi(1)\rangle \otimes |\chi(2)\rangle \in \mathcal H$$ In a footnote it stated that the order doesn't matter and that we could also call it $$|\chi(2)\rangle \otimes |\phi(1)\rangle$$ I'm a bit confused, since I though that the order of the tensor product generally matters. What would that expression look like if we picked a basis, say: $$|\phi(1)\rangle = a_1|u_1\rangle + a_2|u_2\rangle + \dotsc$$ $$|\chi(2)\rangle = b_1|v_1\rangle + b_2|v_2\rangle + \dotsc$$ Any help will be appreciated!
$|\phi(1)\rangle \otimes |\chi(2)\rangle $ is a cumbersome notation for the ket corresponding to the $\psi$ function $\phi(\mathbf r_1)\chi(\mathbf r_2)$, where $\mathbf r_i$ refers to the coordinates of the $i$-th subsystem. That's why the order of the factors in the $\otimes$ product does not matter; the resulting ket corresponds to the same $\psi$ function and is thus the same ket. On the other hand, $|\phi\rangle \otimes |\chi\rangle $ (without labels) is meant to be read according to a different convention; here it is commonly understood that the order of the factors signifies the sub-system each refers to. So $|\phi\rangle \otimes |\chi\rangle $ denotes the ket corresponding to $\phi(\mathbf r_1)\chi(\mathbf r_2)$ just as $|\phi(1)\rangle \otimes |\chi(2)\rangle $ does, but $|\chi\rangle \otimes |\phi\rangle $ denotes the ket corresponding to $\chi(\mathbf r_1)\phi(\mathbf r_2)$, which is not the same. This is because a different convention for the $\otimes$ notation is used.
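The two conventions can be illustrated numerically. A minimal sketch (two-dimensional subsystems and example amplitudes are my assumptions): the Kronecker products in the two orders differ as flat arrays, but one is mapped onto the other by swapping the subsystem indices, so they encode the same physical state.

```python
import numpy as np

# Example amplitudes (arbitrary assumptions) for |phi(1)> and |chi(2)>
phi = np.array([1.0, 2.0])   # a1, a2 in basis |u1>, |u2>
chi = np.array([3.0, 4.0])   # b1, b2 in basis |v1>, |v2>

ket_12 = np.kron(phi, chi)   # components a_i * b_j, subsystem-1 index first
ket_21 = np.kron(chi, phi)   # components b_j * a_i, subsystem-2 index first

# As flat arrays the two orderings differ, but swapping the two subsystem
# indices (reshape, transpose, flatten) maps one onto the other: they
# describe the same physical state, just listed in a different convention.
swapped = ket_21.reshape(2, 2).T.reshape(4)
```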
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
If we can see a galaxy can that galaxy see us? This is a question about the properties of the expansion of the universe. I can't say it any better than: If we observe a primordial galaxy that existed soon after the Big Bang, does it follow that the same galaxy, at roughly the same number of years after the Big Bang as we are today, can observe our galaxy as it was soon after the Big Bang? I only mean theoretically. Please ignore the galaxy merger issue and the issue of new stars since that time, if the question can be constructed logically.
Yes. The Milky Way is a very old galaxy, having formed roughly half a billion years after the Big Bang. So if we observe a galaxy that has a redshift of ~10, we are looking back in time to approximately this epoch, so an alien astronomer in that galaxy observing the Milky Way today would see it redshifted by the same factor, and would observe it in the process of forming. If we observe a galaxy at redshift ~0.5, we are looking 5 billion years back in time, so an alien astronomer in that galaxy would see the Milky Way as it looked when it was roughly 8 billion years old, and with a magically powerful telescope it would be able to see our Sun in the process of forming. This does not really have anything to do with time symmetry. The light that leaves the distant galaxy and the light that leaves the Milky Way at the same time in each other's directions simply travel through space, meet each other halfway (without interacting) and reach the other galaxy at the same time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Mass, energy, and entropy. I have a seemingly simple question about the relation between these three that for some reason doesn't make sense to me. If entropy is the disorder of a system, then a low entropy state is one of higher energy. As we know, mass is energy. From here we must say that the more mass something has, the lower its entropy because the mass can be converted to energy. Why then are black holes, the most massive things known, considered to be of such high entropy?
Calling entropy "disorder" is somewhat misleading; it can also be described as the amount of information or energy contained in a given region of space. A black hole packs a significant amount of quantum data into an incredibly small space, making it an object with high entropy. However this is a fixed amount of entropy, and to continue the propagation of energy as required by the laws of thermodynamics, some of the particles carrying this information will escape (look up Hawking radiation). Backing away from quantum mechanics and black holes and going to simpler terms of entropy (because simple to more complex is a boring way to teach): imagine a drop of ink and a bowl of water. If we were to drop the ink into the water, it would spread until it was equally distributed throughout the water (maximum entropy). We would still have the same amount of ink throughout the entire bowl, but it would have mixed with the water. Entropy increases in the sense that the ink will only spread through the water; it will not slowly un-mix back into the drop that first entered the water.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Distinguishing density operators with the same diagonal elements If I have two sources of qubits and one source produces the density matrix: $$\rho_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}$$ and the other source produces: $$\rho_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1\end{pmatrix}$$ Is it possible to perform a measurement to determine which source the qubit is coming from? I understand that the diagonal elements tell us the probability of finding the qubit in that state - and so just measuring the state of the qubit in this case will not be enough to distinguish them. I also understand that the non-diagonal elements tell us the extent to which the state is a mixed state or a pure state - so in the first case we have a statistical mixture and in the second we have a pure state, but I'm unsure how we could use this fact to distinguish them.
The second density matrix is actually a rank-1 projection (once normalised), hence a dyadic product and therefore a pure state, namely the vector $(1/\sqrt 2,1/\sqrt 2)$. It is enough then to measure against a state perpendicular to this vector, i.e. $(1/\sqrt 2,-1/\sqrt 2)$: a positive outcome shows that the qubit is not coming from the second source.
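This measurement is easy to check numerically. A minimal sketch, normalising both matrices as noted above and projecting onto the orthogonal state $(1/\sqrt2, -1/\sqrt2)$:

```python
import numpy as np

rho1 = np.eye(2) / 2                          # first source, normalised: I/2
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho2 = np.outer(plus, plus)                   # second source, normalised: |+><+|

minus = np.array([1.0, -1.0]) / np.sqrt(2.0)  # state orthogonal to |+>
P = np.outer(minus, minus)                    # projector onto that state

p1 = float(np.trace(rho1 @ P))   # outcome probability 1/2 for the mixed source
p2 = float(np.trace(rho2 @ P))   # outcome probability 0 for the pure source
```

A click in this measurement therefore certifies the qubit did not come from the second source; for the mixed source the click happens only half the time, so a single shot can never certify the converse.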
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Faraday's Law and Lenz's Law: Is there any theoretical explanation of why a changing magnetic field induces an electric field? This is a more specific extension to a question I came across today. One certain aspect of Faraday's Law has always stumped me (other than that it is an experimental observation from the 19th century). The Maxwell-Faraday equation reads: $$\nabla \times \mathbf{E}=-\frac{\partial \mathbf{B}}{\partial t}$$ I am also briefly aware that in special relativity, magnetic fields in one frame are basically electric fields in another, but Q1: How exactly does a changing magnetic field induce an electric field? Is there any theoretical explanation in the literature, using more fundamental theories such as QED and relativity, that explains how it happens? Q2: Is there a theoretical reason why the electric field is produced in a way that opposes the change in the magnetic field?
Indeed, this observation remains mysterious from a 19th century viewpoint. Since we know special relativity, though, it is natural in the covariant formulation of electromagnetism that spatial and temporal changes of fields are interrelated. More specifically, we need to express the three-vectors $\vec E$ and $\vec B$ in a covariant way, which is done by defining the field strength tensor $F$ component-wise as $$ F^{i0} := E^i \quad \text{and} \quad F^{ij} = \epsilon^{ijk}B_k$$ This object now behaves properly (as a 2-tensor) under Lorentz transformations, in contrast to the three-vectors $\vec E$ and $\vec B$ whose components mix. Now, Maxwell's equations of course must also be written covariantly, $$ \partial_\mu F^{\mu\nu} = j^\nu \quad \text{and} \quad \partial_\mu\left(\frac{1}{2}\epsilon^{\mu\nu\sigma\rho}F_{\sigma\rho}\right) = 0$$ and if you go and write this out with $\partial_t$ and $\vec \nabla$ and so on again, you get back, among others, the Maxwell-Faraday equation. So, essentially, the mixture of electric and magnetic fields, and their spatial and temporal changes, is a direct consequence of the fact that the world is not Galilean, but relativistic.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Does the Earth revolve around the Sun? I am aware of this Phys.SE question: Why do we say that the earth moves around the sun? but I don't think this is a duplicate. In a binary star system, where the masses of the 2 stars are not so different from each other, can we say that each star revolves around the other? If yes, then couldn't the Sun-Earth system be an extreme case of such a system? Therefore, strictly speaking, can we argue that the Sun revolves a tiny bit around the Earth as well?
There are several more planets in the solar system besides the Earth (and more bodies have since been discovered). When Copernicus decided to place the Sun at the center of the solar system instead of the Earth, it was mainly because this arrangement drastically simplified the form of the orbits of the other planets. With the geocentric model of the solar system those orbits appeared very complicated. Later on, the Kant-Laplace theory of the solar system strengthened this heliocentric configuration.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Charged black hole It is known that Einstein's equations admit solutions for charged black holes. The Reissner–Nordström metric in case of a non-rotating charged black hole and for rotating charged black holes there is the Kerr–Newman metric. In the Reissner–Nordström metric I can calculate the electric field; it has the following form $$ F_{0r}=-F_{r0}=\frac{Q}{r^2} $$ But I cannot understand the following. From my point of view, when two charged particles interact, they exchange photons with each other. If I apply this argument to the interaction between a charged black hole and a probe charge, I get zero force, because the black hole cannot emit photons and absorbs all photons. Can we explain how this works from a quantum point of view? (I mean QED and not quantum gravity!!)
The issue is with your picture of the electromagnetic (and generally any) interaction as arising because of the exchange of real particles. However, the electrostatic interactions $\sim 1/r$ arise thanks to the "exchange of virtual particles". Virtual particles are not particles, they are called "particles" because they appear in a similar context as actual quantum particles in the mathematical expressions. However, "virtual particles" can never be observed, they are quantum-field excitations that are invariably attached to their source and cannot be particle-like. One defining property of "virtual particles" is that they can "travel" at a speed larger than light. More specifically, their momentum can be space-like and their mass formally imaginary. In other words, Coulombic-type interactions $\sim Q_1 Q_2/r$ are "mediated by photons traveling faster than the speed of light". This means that such a virtual particle can without any issue "tunnel" outside of a black hole and mediate interactions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What happens to the photon if the frequency is lower than the threshold frequency? An electron is ejected only if the frequency of light is greater than the threshold frequency. What happens to the photon if the frequency is lower than the threshold frequency?
What happens to the photon if the frequency is lower than the threshold frequency? The idea is that there is a certain threshold energy required to remove an electron from an atom to far away from the atom (the work function). The absorption of the photon supplies this energy. This is an electronic process that depends on the electromagnetic interaction between the electron and the material. If the photon has less energy, then it may not get absorbed by an electronic process. However, in real life, it can still get absorbed by, e.g., processes involving phonons.
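The threshold condition behind this answer, $hf \ge \phi$ with work function $\phi$, can be sketched as follows; the work-function value here is an assumed illustrative number (roughly 2.3 eV, of the order of an alkali metal), not taken from the answer:

```python
# Hedged sketch of the photoelectric threshold condition E_photon = h*f >= phi.
h = 6.626e-34          # Planck constant, J*s
phi = 3.7e-19          # assumed work function, J (illustrative value)

f_threshold = phi / h  # minimum photon frequency for photoemission

def ejects_electron(f_hz):
    """A photon of frequency f_hz can free an electron only above threshold."""
    return f_hz >= f_threshold
```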
{ "language": "en", "url": "https://physics.stackexchange.com/questions/168988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does Li+ ion move to cathode in Li-ion battery? When Li-ion battery is discharged or being used, the positive lithium (Li+) ions move from anode to cathode through the electrolyte. Meanwhile the electrons move in the same direction through the external circuit. Why does this happen? I mean, why does a Li+ ion get attracted to the positive electrode (cathode)?
In electrolytic cells the negative electrode is called the cathode and the positive electrode the anode: positive ions move towards the cathode because it is the negative electrode, and negative ions move towards the anode. In electrochemical (galvanic) cells, by contrast, the cathode is the positive electrode and the anode is the negative one, owing to the high electron density at the anode. Li-ion batteries are electrochemical cells, so when an external load is connected in the fully charged state, electrons start moving from the anode towards the cathode; the electron density decreases at the anode and increases at the cathode. At the same time, the Li+ ions move towards the cathode, attracted by the high electron density there.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Approximations of the kind $x \ll y$ I have an expression for a force due to a charged particle given as $$F=\frac{kQq}{2L}\left(\frac{1}{\sqrt{R^2+(H+L)^2}}-\frac{1}{\sqrt{R^2+(H-L)^2}}\right) \tag{1}$$ where $R$, $L$ and $H$ are distance quantities. Now I want to check what happens (1) when $H\gg R,L$, and (2) when $R,H\ll L$. How can I work out the approximation of this force? Do I have to write it slightly differently, as in form (2), to get it right? $$ F=\frac{kqQ}{2LR}\left(\frac{1}{\sqrt{1+\left(\dfrac{H+L}{R}\right)^2}}-\frac{1}{\sqrt{1+\left(\dfrac{H-L}{R}\right)^2}}\right) \tag{2}$$ (which is the same expression, just written out differently). Any explanation of this subject would be very helpful.
When considering these things, at least as a warm up to a more rigorous answer, it is worth thinking about what it means to say that $H>>R$. I take this to mean that, roughly, if I add $H$ to $R$ I'm going to get something close to $H$, as it is much larger. For example $1000000>>1$ so $1000000+1\simeq1000000$. When the variables are squared, as you have, this difference is even more marked. So, if $H>>R,L$, $(H+L)^2+R^2 \simeq (H+L)^2$, so the first term in your brackets becomes ${1 \over H+L}$ (or ${1 \over |H+L|}$). Similarly the second term, and then add the two to get something non-zero. You can use a similar process for the second question. A more rigorous way to do it would be to do something similar to what you and other answerers have done, which is to take out a factor from the front of your square roots which corresponds to the large variable in each case, so that you're left with dimensionless terms inside like $1, L/H$, and $R/H$. These last two can be neglected compared to 1 and you should get the same results.
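The reasoning above can be checked numerically. A minimal sketch (all parameter values are arbitrary assumptions) comparing the exact bracket from equation (1) against the simplified forms in the two regimes:

```python
import math

def bracket(R, H, L):
    """The parenthesised term from equation (1)."""
    return 1.0 / math.sqrt(R**2 + (H + L)**2) - 1.0 / math.sqrt(R**2 + (H - L)**2)

# Regime 1: H >> R, L, so R^2 is negligible next to (H +- L)^2 and the
# bracket tends to 1/(H+L) - 1/(H-L).
R1, L1, H1 = 1.0, 2.0, 1.0e4
approx_1 = 1.0 / (H1 + L1) - 1.0 / (H1 - L1)
rel_err_1 = abs(bracket(R1, H1, L1) - approx_1) / abs(approx_1)

# Regime 2: R, H << L, so the bracket tends to 1/(L+H) - 1/(L-H).
R2, H2, L2 = 1.0, 2.0, 1.0e4
approx_2 = 1.0 / (L2 + H2) - 1.0 / (L2 - H2)
rel_err_2 = abs(bracket(R2, H2, L2) - approx_2) / abs(approx_2)
```

In both regimes the relative error of the simplified form is tiny once the large variable dominates by a few orders of magnitude.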
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Free fall of mass-spring-mass system I am a little bit confused about the implication from my computation. Must have done something wrong in the computation. Assume we hold the top end of the mass-spring-mass system in the air, and the system is at equilibrium. Now release the top end mass to let the system have a free fall, what would be the tension in the spring (assume massless) ? I solve a simple ODE and it turns out the tension is given by a sinusoid with magnitude the initial tension. However, this seems against intuition. I must have solved the wrong ODE...I could have done better with the drawing but I'm lazy. I'm confused because some people told me accelerometer is based on this principle...what would the accelerometer read in this case...
Of course, the solution is correct. Once the system is in free fall, gravity effectively no longer acts on it (it is uniform and cancels out in the freely falling frame), and you have a mass-spring-mass system oscillating in the absence of external forces.
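This can be checked numerically. A minimal sketch (all parameter values are assumptions): two equal masses $m$ joined by a spring of stiffness $k$ hang in equilibrium and are then released; in the freely falling frame the relative coordinate obeys $\mu\ddot s = -k(s - L_0)$ with reduced mass $\mu = m/2$, so the tension oscillates sinusoidally with amplitude equal to the initial tension $mg$:

```python
import numpy as np

# Assumed parameters (not from the question)
m, k, g = 1.0, 50.0, 9.81
L0 = 0.5                 # natural length, arbitrary
x0 = m * g / k           # initial extension: weight of the lower mass / k

# Integrate the relative coordinate in the freely falling frame,
# where only the spring force remains: s'' = -(2k/m)*(s - L0).
dt = 1.0e-4
s, v = L0 + x0, 0.0
tension = []
for _ in range(20000):                  # about three oscillation periods
    a = -(2.0 * k / m) * (s - L0)
    v += a * dt                         # semi-implicit Euler, stable here
    s += v * dt
    tension.append(k * (s - L0))
tension = np.array(tension)
# tension swings sinusoidally between +mg and -mg, amplitude = initial tension
```

This is exactly the sinusoid with the initial tension as its magnitude that the question's ODE predicted.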
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can gravitational waves resonate? Can gravitational waves resonate? - Perhaps by creating standing wave interference in a cavity? Could that feasibly happen either in nature or by engineering?
It is theoretically predicted that superconducting layers might be able to act as reflectors through the so called Heisenberg-Coulomb effect. Out of these, you could of course form a cavity able to contain a gravitational wave in principle. This effect has, to my knowledge, not yet been experimentally tested, although several tests have been proposed, see, for example, Do mirrors for gravitational waves exist?(ArXiv link).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Why are $SU(N)$ gauge theories easier to handle for $N\rightarrow \infty$? I was wondering if there is an intuitive/heuristic argument to understand why generalizing the QCD gauge group $SU(3)$ to $SU(N)$ and taking $N\rightarrow \infty$ simplifies the analysis of the theory, since in this limit only planar diagrams survive, the others being suppressed. At first I would expect things to get a lot nastier by introducing such a large number $N$ of colors...
The intuitive idea is based on the Central Limit Theorem. Suppose (as is usually the case) that your matter fields are in the fundamental representation of $SU(N)$; then the multiplet contains $N$ independent fields. Now the central limit theorem tells us that the arithmetic average of $N$ independent random variables self-averages to a normal distribution, i.e. has small fluctuations. So in QCD, for example, hadrons are always color singlets; a pion is $\pi = \sum_{c = 1}^N q_c\bar{q}_c$, so it has to be an average over all quark colors, and for $N\rightarrow \infty$ the fluctuations of $\pi$ (and hadrons in general) are much smaller than those of the quarks $q$, because they self-average according to the CLT. And in particular $$\left\langle \pi(x) \pi(y) \right\rangle \xrightarrow{N\rightarrow\infty}\left\langle \pi(x)\right\rangle\left\langle \pi(y)\right\rangle$$
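The self-averaging can be illustrated outside of QCD. A minimal toy sketch (plain random variables, not a gauge theory): the average of $N$ independent $\pm 1$ "charges" has fluctuations shrinking like $1/\sqrt N$.

```python
import numpy as np

# Toy illustration of CLT self-averaging: the "color average" of N
# independent +-1 variables fluctuates less and less as N grows.
rng = np.random.default_rng(0)
trials = 4000
stds = {}
for N in (10, 1000):
    samples = rng.choice([-1.0, 1.0], size=(trials, N)).mean(axis=1)
    stds[N] = samples.std()
# expect stds[10] ~ 1/sqrt(10) ~ 0.32 and stds[1000] ~ 1/sqrt(1000) ~ 0.032
```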
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Parameterization of an arbitrary element of $U(2)_L \times U(2)_R$ (Chiral symmetry with two quarks) When you write down the Lagrangian for two quarks : \begin{equation} \mathcal{L}_\text{QCD}^0 = -\frac{1}{4} G_{\mu\nu}^a G^{a\mu\nu}+ \bar\Psi i \gamma^\mu D_\mu \Psi \end{equation} you find an $U(2)_L \times U(2)_R$ global symmetry because you can rewrite it : \begin{equation} \mathcal{L}_\text{QCD}^0 = -\frac{1}{4} G_{\mu\nu}^a G^{a\mu\nu}+ \mathcal{L}_\text{QCD}^L+\mathcal{L}_\text{QCD}^R \end{equation} with $\mathcal{L}_\text{QCD}^{L,R} = \bar\Psi_{L,R} i \gamma^\mu D_\mu \Psi_{L,R}$ An arbitrary element of $U(2)_L \times U(2)_R$ can be written : \begin{equation} (g_L, g_R) = \left(e^{ i \gamma +i \gamma^i \frac{\sigma_i}{2}},e^{ i \delta +i \delta^i \frac{\sigma_i}{2} }\right) \end{equation} where the $\sigma_i$s are the Pauli matrices. But you could, in principle, rewrite this element as : \begin{equation} (g_L, g_R) = \left(e^{ i \alpha} e^{ i \beta } e^ {i \alpha^i \frac{\sigma_i}{2}} e^{i \beta^i \frac{\sigma_i}{2}},e^{ i \alpha} e^{ - i \beta } e^{i \alpha^i \frac{\sigma_i}{2}} e^{-i \beta^i \frac{\sigma_i}{2}}\right) \end{equation} That expression shows that one can factor two $U(1)$s and obtain : \begin{equation} U(2)_L \times U(2)_R = SU(2)_L \times SU(2)_R \times U(1)_V \times U(1)_A \end{equation} What I don't understand is how to obtain explicitly the second expression of $(g_L, g_R)$ starting from the first one.
Let's see what relation we can find between $\alpha, \beta, \alpha^i, \beta^i$ and $\gamma, \delta, \gamma^i, \delta^i$. First, using the Baker-Campbell-Hausdorff formula we deduce two things: $$\alpha + \beta = \gamma \text{ and } \alpha - \beta = \delta$$ because $\mathbb{1}$ commutes with $\sigma$. And $$e^{i\vec{\alpha}\cdot \vec{\sigma}} = \mathbb{1}\cos\alpha + i\,\sigma\cdot \hat{\alpha}\,\sin \alpha$$ The latter gives us (writing $a = |\vec{a}|$ and $b = |\vec{b}|$) that $$ e^{i\vec{a}\cdot \vec{\sigma}}e^{\pm i\vec{b}\cdot \vec{\sigma}} =\left( \mathbb{1}\cos a + i\, \sigma\cdot\hat{a}\,\sin a\right)\left(\mathbb{1}\cos b \pm i\, \sigma\cdot\hat{b}\,\sin b\right) $$ $$ = \left(\cos a \cos b \mp (\sigma\cdot \hat{a})(\sigma\cdot\hat{b})\sin a \sin b \right) + i \left(\sigma\cdot\hat{a}\,\sin a\cos b \pm \sigma \cdot\hat{b}\,\sin b\cos a \right) $$ $$ = e^{i\vec\gamma\cdot\vec\sigma} \text{ or } e^{i\vec\delta\cdot\vec\sigma} $$ Then you verify the ansatz suggested by user40085: $\vec{a} = (\vec{\gamma}+ \vec{\delta})/2$ and $\vec{b} = (\vec{\gamma}- \vec{\delta})/2$.
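The closed-form identity for $e^{i\vec{\alpha}\cdot\vec{\sigma}}$ used above is easy to check numerically. A minimal sketch (the test vector is arbitrary): compare the matrix exponential, computed by diagonalizing the Hermitian matrix $\alpha\,\sigma\cdot\hat{\alpha}$, against $\mathbb{1}\cos\alpha + i\,\sigma\cdot\hat{\alpha}\,\sin\alpha$.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

alpha_vec = np.array([0.3, -0.7, 0.5])   # arbitrary test vector
a = np.linalg.norm(alpha_vec)            # alpha = |alpha_vec|
n_dot_sigma = (alpha_vec[0] * sx + alpha_vec[1] * sy + alpha_vec[2] * sz) / a

# Left side: matrix exponential via eigendecomposition (n.sigma is Hermitian)
w, V = np.linalg.eigh(a * n_dot_sigma)
lhs = V @ np.diag(np.exp(1j * w)) @ V.conj().T

# Right side: the closed form used in the answer
rhs = I2 * np.cos(a) + 1j * n_dot_sigma * np.sin(a)

max_err = np.max(np.abs(lhs - rhs))
```

The two agree to machine precision, which rests on $(\sigma\cdot\hat{\alpha})^2 = \mathbb{1}$, the same fact used in the multiplication step above.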
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Electric field of a full disk - when $R \to 0$ - it's not equal to coulomb law An MIT document states that the electric field of a full disk, when $R \to 0$, is similar to Coulomb's law $$\mathbf E_{disk}=2\pi k_e\sigma\left[1-\frac{x}{\left(x^2+R^2\right)^{1/2}}\right]\hat{i}=\frac{\sigma}{2\varepsilon_0}\left[1-\frac{x}{\left(x^2+R^2\right)^{1/2}}\right]\hat{i}$$ Either version is fine, its just a different way of writing the constant. You should also check the limits: for $R\to0$ (but keep $Q$ constant!) it should go to a point charge. For $R\to\infty$ (infinite plane) it should be a constant. Though, I don't think that it works that way, it is easily seen that when $R \to 0$, then $\mathbf E_{disk} = 0$. Can somebody help me figure out how to arrive at the stated result - what am I missing to get the field of a point charge when the disk size goes to zero?
Remember that we keep leading-order terms. So for the second part of the expression in parentheses, as $R \rightarrow 0$, we don't just get 1. Using the Taylor expansion, we get $$ \frac{1}{\sqrt{1+\frac{R^2}{x^2}}}\Rightarrow 1 - \frac{1}{2}\frac{R^2}{x^2}+....$$ Plugging this into the original equation while remembering $\sigma= \frac{Q}{\pi R^2}$ gives $$ \vec{E}_{disc}= \frac{\sigma}{2 \epsilon_0}\left[\frac{R^2}{2x^2}\right] = \frac{Q}{4 \pi \epsilon_0 x^2}$$ which is exactly the field of a point charge, as we wanted.
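A quick numerical sanity check of this limit (with arbitrarily chosen values of $Q$ and $x$): evaluate the exact disk expression at fixed $Q$ while shrinking $R$, and compare with the point-charge field.

```python
import math

eps0 = 8.8541878128e-12   # F/m
Q = 1e-9                  # 1 nC, held fixed as the disk shrinks
x = 0.1                   # field point 10 cm on axis

def E_disk(R):
    """Exact on-axis field of a uniformly charged disk of radius R."""
    sigma = Q / (math.pi * R**2)
    return sigma / (2 * eps0) * (1 - x / math.sqrt(x**2 + R**2))

E_point = Q / (4 * math.pi * eps0 * x**2)   # Coulomb field of a point charge

# Shrink the disk: the ratio to the point-charge field tends to 1
ratios = [E_disk(R) / E_point for R in (1e-2, 1e-3, 1e-4)]
```

The ratio approaches 1 as $R\to 0$ (the residual deviation is of order $R^2/x^2$, consistent with the next term of the expansion).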
{ "language": "en", "url": "https://physics.stackexchange.com/questions/169976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why the CMB has not been dispersed so far? Imagine you have a box of black body radiation. What happens if you open the box for a long time? It becomes dispersed and no radiation remains in the box. Now, apply this example to the Cosmic Microwave Background radiation. The CMB has been produced about 380,000 years after the Big Bang. Giving that the space is flat as many observations suggest, that radiation has been produced in a universe with no boundaries. Now, my question is this: in these conditions, why that radiation has not been dispersed completely so far? It is true that the radiation has been produced everywhere in the space but giving that the space is infinite, why has not it been dispersed do far? Calculations in the standard textbooks are done in such a way as if the CMB has been within a physical box, however expanding with the universal expansion. But, this is not the actual situation. The CMB has not been and is not enclosed by walls of a box, so it must have completely dispersed so far. What has prevented this to happen?
The CMB was emitted from everywhere, in all directions. The CMB emitted at the point where you are standing right now has by now been dispersed to a distance $d_\mathrm{CMB}$ equal to the distance that light can travel in the almost 13.8 billion years that have passed since it was emitted. (Note that $d_\mathrm{CMB}$ is much larger than 13.8 billion lightyears, since the Universe has expanded since the emission; in fact it's roughly 46.5 billion lightyears.) On the other hand, the CMB emitted from a distance that is now $d_\mathrm{CMB}$ is what we observe today. That means that the CMB we observe today all comes from a thin shell of the Universe on which we are centered, and which has a radius of $d_\mathrm{CMB}$. The drawing below may help you understand; a while from now, the picture would look exactly the same, except that $d_\mathrm{CMB}$ will have increased so that the sphere will be larger, since by that time photons originating from farther away will have had time to reach us.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to approximate the force on a magnet below a coil o x o x Coil with 4 turns o x o x _ | | | | Neodymium magnet |_| I need to know the force acting on a neodymium magnet which is placed below a coil. This simple looking problem is actually very complicated, and no data is known about the system, so it can be simplified as necessary. If the coil has a diameter of about 1 meter, height 150 mm and has 4 turns, and the magnet is 150mm by 50mm, and is placed bellow the coil, close to the edge (see ASCII drawing above). What is the force on the magnet as a function of current?
The field from a bar magnet is approximately a dipole. The field from a coil is approximately a dipole. The force between two dipoles will contain both a torque term and an attraction/repulsion term, both of which are proportional to current. Equations for this can be found at http://en.wikipedia.org/wiki/Magnetic_dipole#Forces_between_two_magnetic_dipoles $$F = \nabla (m_2 \cdot B_1)\\ \Gamma = m_2 \times B_1$$ Note that when you get close to the coil, its field is no longer strictly a dipole; instead you might want to think about the field due to just the current closest to the magnet (since the field due to the conductor on the other side will have minimal effect on the force). Thus to estimate this I would probably use the magnetic moment of the bar magnet, and the current due to four linear wire segments (length equal to the diameter) closest to the magnet. The result will be close (within a factor of 2 or so). If you are on axis, the problem is much simplified. Now you can get $B_1$ from an expression for the field of a circular coil; since you have 4 turns, you evaluate this for 4 distances from the center of the coil (z = 0, 50, 100, 150 mm): $$B_z = \frac{\mu_0}{4\pi}\frac{2\pi R^2 I}{(z^2 + R^2)^{3/2}}$$ The value of $m_2$ is something you have to get from the parameters of the magnet.
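As a rough on-axis sketch of these formulas in code (the magnet's dipole moment and the current are made-up placeholder values, since neither is given in the question, and the real geometry here is off-axis, so treat this as an order-of-magnitude estimate only):

```python
import math

mu0 = 4e-7 * math.pi                     # T*m/A
R = 0.5                                  # coil radius (1 m diameter)
turn_heights = [0.0, 0.05, 0.10, 0.15]   # 4 turns spread over 150 mm

def B_axis(z, I):
    """On-axis field (T) a distance z below the lowest turn,
    summing the single-loop formula over the 4 turns."""
    return sum(mu0 / (4 * math.pi) * 2 * math.pi * R**2 * I
               / ((z + h)**2 + R**2) ** 1.5
               for h in turn_heights)

m_magnet = 10.0   # A*m^2 -- assumed dipole moment, not from the question

def force_axis(z, I, dz=1e-4):
    """F_z = m * dB/dz, with the gradient taken by central difference."""
    return m_magnet * (B_axis(z + dz, I) - B_axis(z - dz, I)) / (2 * dz)

I = 100.0                          # A, example current
B1 = B_axis(0.05, I)               # field 5 cm below the coil
F1 = force_axis(0.05, I)           # negative: magnet pulled toward the coil
F2 = force_axis(0.05, 2 * I)       # doubling the current doubles the force
```

As expected from $F = \nabla(m_2 \cdot B_1)$ with $B_1$ linear in $I$, the force scales linearly with the current.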
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Has anyone tried Michelson-Morley in an accelerated frame? After doing much more digging than I thought I had to do, I found out that the speed of light is NOT invariant in an accelerated reference frame. Has anyone done any experiments to confirm this? In particular a Michelson-Morley experiment in an accelerated reference frame? I figured light being invariant in any constant speed frame would automatically imply being invariant in any frame whatsoever. I have to credit Richard Mould's Basic Relativity with informing me about this fact.
Even though John puts it quite nicely, I don't think that was the answer you sought. Yes, the Michelson-Morley experiment has, to my knowledge, only ever been done in accelerating reference frames, because of the rotation and gravity of the Earth; some versions have had precision high enough to detect either gravity or the rotation velocity. Unfortunately I can't name who, but experiments with rotating Michelson interferometers have also been conducted.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
What would happen to a diamond if it was exposed to near vacuum at room temperature? I know that coal (graphite) turns into diamond when a high enough pressure is applied, but I guessed it would stay as a diamond unless it was incinerated by a high enough temperature. However, I recently took a closer look at the phase diagram of carbon, and it seems that diamond should turn back into graphite on low enough pressures even at room temperature: I guess this is a relatively easy experiment to make, I just don't have the necessary funds to sacrifice a diamond, and I couldn't find such an experiment with a quick Google search. What would really happen when a piece of diamond is put into a vacuum chamber and air is pumped out, as the pressure drops enough so that the subject leaves the "metastable diamond" phase? Does it remain the same? Does it just peacefully turn into a similarly shaped piece of graphite, which remains a normal piece of graphite after putting it back into atmospheric pressure? Or would something more spectacular happen?
Nothing happens. We have put many diamonds into vacuum chambers to do ion implantation. They remain diamond.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why doesn't $x$ reach a constant for a block experiencing $v^n$ resistive force? I am stuck on the Exercise 3.5 of Newtonian Dynamics by R. Fitzpatrick: A block of mass $m$ slides along a horizontal surface which is lubricated with heavy oil such that the block suffers a viscous retarding force of the form $$F = - c\,v^n,$$ where $c>0$ is a constant, and $v$ is the block's instantaneous velocity. If the initial speed is $v_0 $ at time $t=0$, find $v$ and the displacement $x$ as functions of time $t$. Also find $v$ as a function of $x$. Show that for $n=1/2$ the block does not travel further than $2\,m\,v_0^{3/2}/(3\,c)$. The last part of the question asks to show that for $n=1/2$ the block does not travel further than $2mv_0^{3/2}/(3c)$. We start from Newton's second law $$ m \frac{d^2x}{dt^2} = m \frac{dv}{dt} = m v \frac{dv}{dx}= -cv^n. $$ Separating variables gives $$ \int_{v_0}^{v} \frac{dv'}{(v')^{n-1}} = -\frac{c}{m} \int_0^x dx', $$ $$ v^{-n+2} = v_0^{-n+2} - \frac{(-n+2)cx}{m}. $$ Plugging $n=1/2$, $$ v^{3/2} = v_0^{3/2} - \frac{3cx}{2m}. $$ Setting the velocity to zero (this must be the case if the block stops moving), $$ x =\frac{2m v_0^{3/2}}{3c}, $$ which is the desired result. The problem arises when I try to solve for $x$ in terms of $t$. Now, $$ m \frac{dv}{dt} = -cv^n, $$ $$ \int_{v_0}^{v} \frac{dv'}{(v')^n} = -\int_0^t \frac{c}{m} dt', $$ $$ \frac{1}{v^{n-1}} = \frac{1}{v_0^{n-1}} - \frac{(-n+1)c}{m} t. $$ Rising everything to $1/(1-n)$ power (of course, assuming that $n \ne 1$), $$ v = \left( \frac{1}{v_0^{n-1}} - \frac{(-n+1)c}{m} t \right)^\frac{1}{1-n}.$$ Plugging $n=1/2$ gives: $$ \frac{dx}{dt} = \left( v_0^{1/2} -\frac{c}{2m} t \right)^2. $$ Let's separate the variables and try to integrate, $$ \int_0^x dx = \int_0^t \left( v_0^{1/2} - \frac{c}{2m} t' \right)^2 dt', $$ $$ x_{\mathrm{f}} = \int_0^{\infty} \left( v_0^{1/2} - \frac{c}{2m} t' \right)^2 dt'. 
$$ I've plugged in $t = \infty$ because it seems to me that the block must have stopped by this time if it's going to stop at all. The problem is that the integral on the right-hand side doesn't converge! So $x$ is unbounded, which contradicts the first part of the solution. What's going on here?
From $$\dfrac{dx}{dt}=\left(v_0^{1/2}-\dfrac{c}{2m}{t}\right)^2$$ and $$v(t_f)=\left.\dfrac{dx}{dt}\right|_{t=t_f}=0$$ you should be able to get a finite bound on your last integral. EDIT: one possible reason for which your final integral doesn't properly converge comes from an earlier step. Indeed, you moved from: $$m\dfrac{dv}{dt}=-cv^n$$ to: $$\dfrac{dv}{v^n}=-\dfrac{c}{m}dt$$ The big caveat here is of course that this is only valid for $v\neq 0$. And in fact, the physical solution tells us that $v=0$ forever when $t_f$ is reached!
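A quick numerical integration (in arbitrary units with $m = c = v_0 = 1$, a simple forward-Euler step) confirms the physical picture: the block reaches $v = 0$ at the finite time $t_f = 2m\sqrt{v_0}/c$, and the total distance matches $2mv_0^{3/2}/(3c)$.

```python
import math

m, c, v0 = 1.0, 1.0, 1.0   # arbitrary unit values
dt = 1e-4                  # time step

v, x, t = v0, 0.0, 0.0
while v > 0:
    x += v * dt                          # advance the position
    v -= (c / m) * math.sqrt(v) * dt     # dv/dt = -(c/m) v^(1/2)
    t += dt

x_analytic = 2 * m * v0**1.5 / (3 * c)   # = 2/3 in these units
t_stop = 2 * m * math.sqrt(v0) / c       # analytic stopping time = 2
```

Past $t_f$ the solution is just $v = 0$, so extending the upper limit of the integral beyond $t_f$ (let alone to infinity) is what produced the contradiction.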
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can only one electron or photon produce interference pattern? If we shoot one electron or photon at a time to a double slit for a long time, interference pattern will build up on the other side. If the gap between each electron or photon is long enough that they don't interfere it appears that a single electron or photon is interfering with itself. So, is the interference pattern obtained by shooting only one electron or photon its just that we can't see the pattern because its too dim and so we have to shoot many electrons or photons one after the other to make the pattern brighter?
You can't predict where the electron will hit, but you can measure that it will hit at some discrete point. The probability distribution of final positions on the detector corresponds to the interference pattern. You will see the pattern only after shooting many electrons: https://physicsforme.files.wordpress.com/2012/04/slit.jpg
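A toy simulation makes this concrete. Assuming an idealized $\cos^2$ two-slit pattern (the slit geometry and screen units here are arbitrary), each "electron" is sampled independently; with a few hits you see only scattered dots, while a histogram of many hits reproduces the fringes.

```python
import numpy as np

# Each "electron" hits the screen independently, with probability density
# proportional to an idealized two-slit pattern cos^2(pi x / 2)
# (fringe spacing 2 in these arbitrary screen units).
rng = np.random.default_rng(42)

x = np.linspace(-4.0, 4.0, 81)
p = np.cos(np.pi * x / 2) ** 2
p /= p.sum()

def detect(n_particles):
    """Accumulate n independent single-particle detections per screen bin."""
    hits = rng.choice(len(x), size=n_particles, p=p)
    return np.bincount(hits, minlength=len(x))

few = detect(20)        # a handful of dots: no pattern visible yet
many = detect(20000)    # the fringes emerge in the histogram

bright = many[np.argmin(np.abs(x - 0.0))]   # count at a bright fringe
dark = many[np.argmin(np.abs(x - 1.0))]     # count at a dark fringe
```

Each sample is one discrete detection, yet the accumulated counts trace out the interference pattern, exactly as in the linked photograph.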
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Do photons with a frequency of less than 1 Hz exist? A photon with a frequency of less than 1 Hz would have an energy below $$ E = h\nu < 6.626×10^{−34} \;\rm J $$ which would be less than the value of Planck's constant. Do photons with such a low energy exist and how could they be detected? Or does Planck's constant give a limit on the amount of energy that is necessary to create a single photon?
Yes. Essentially any frequency $> 0$ is theoretically possible. You may have confused this with the fact that it's not possible for an electromagnetic wave with a given frequency $f$ to have an energy less than $E = hf$ without eliminating the entire wave.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/170828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Bending along an axis for strength? I read about this law / property a couple of months back, but I've forgotten what it's name was and I can't seem to find it by Googling. I was hoping someone could give me the name for this property. If I recall correctly, it was named after same famous mathematician like Gauss or something... More detail: This site was basically describing how you can make a long piece of metal, paper, etc. stronger by bending it along its long axis. This way, it is less likely to collapse along its length when upright. An example of this property was grass blades, which are able to stay upright due to the fold / bend along their long axis. If someone knows the exact name of this property, please do tell me!!
What you are looking for is the famous Theorema Egregium by Gauss, which asserts that the Gaussian curvature of a surface is invariant under local isometry. At the same time, the Gaussian curvature of a surface is the product of the principal curvatures. Regarding a slight bend along the middle as a local isometry (of course, this conceptualization breaks down when one really bends the object too far, e.g. to the point of permanently deforming it), this physically implies that the blade of grass (or slice of pizza, which is my personal favorite as far as applications go---yum!) will resist bending along the axis orthogonal to the one you're bending it along. I think it's also important to note that this has been discussed several times over at math.SE.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can one motivate the relativistic momentum? Motivating the non-relativistic momentum $$\mathbf{p} = m\mathbf{v}$$ is quite easy: it is meant to represent the quantity of motion of the particle, and since the mass is one measure of quantity of matter it should be proportional to mass (how much thing is moving) and should be proportional to velocity (how fast and to where it is moving). Now, in Special Relativity the momentum changes. The new quantity of motion becomes $$\mathbf{p} = \dfrac{m\mathbf{v}}{\sqrt{1-\dfrac{v^2}{c^2}}}$$ Or, using $\gamma$ the Lorentz factor $$\mathbf{p} = \gamma(v) m\mathbf{v}$$ where I write $\gamma(v)$ to indicate that the velocity is that of the particle relative to the frame in which the movement is being observed. The need for this new momentum is because the old one fails to be conserved and because using the old one in Newton's second law leads to a law which is not invariant under Lorentz transformations. So the need for a new momentum is perfectly well motivated. What I would like to know is how can one motivate that the correct choice for $\mathbf{p}$ is the $\gamma(v)m\mathbf{v}$. There are some arguments using the mass: considering a colision, requiring momentum to be conserved, transform the velocity and then find how mass should transform. Although this work, it doesn't seem natural, and it is derived in one particular example. On my book there's even something that Einstein wote saying that he didn't think it was a good idea to try transforming the mass from $m$ to $M = \gamma(v)m$, that it was better to simply keep $\gamma$ on the new momentum without trying to combine it with the mass. So I would like to know: without resorting to arguments based on transformation of the mass, how can one motivate the new form of momentum that works for special relativity?
Special relativity is about Minkowski spacetime. A line element is given by $$ ds^2 = c^2dt^2 - dx^2 - dy^2 - dz^2 $$ A free particle will move on a straight line, that is, it will extremize the path length $$ L = \int ds = \int \sqrt{c^2 \left(\frac{dt}{d\lambda}\right)^2 - \left(\frac{dx}{d\lambda}\right)^2 - \left(\frac{dy}{d\lambda}\right)^2 - \left(\frac{dz}{d\lambda}\right)^2} \ d\lambda$$ where $\lambda$ is an arbitrary parametrisation of the path. We set $$ I(\lambda) := \sqrt{c^2 \left(\frac{dt}{d\lambda}\right)^2 - \left(\frac{dx}{d\lambda}\right)^2 - \left(\frac{dy}{d\lambda}\right)^2 - \left(\frac{dz}{d\lambda}\right)^2} $$ The Euler-Lagrange equations give: $$ \frac{d}{d\lambda} \left( \frac{\delta I}{\delta \left( \frac{d(ct)}{d\lambda} \right)} \right) - \frac{\delta I}{\delta (ct)} = 0 $$ $$ \frac{d}{d\lambda} \left( \frac{\delta I}{\delta \left( \frac{dx}{d\lambda} \right)} \right) - \frac{\delta I}{\delta x} = 0 $$ etc. Therefore, if we evaluate the differentials and multiply by $I$: $$ c \frac{d^2t}{d\lambda^2} = 0 $$ $$ - \frac{d^2x}{d\lambda^2} = 0 $$ $$ - \frac{d^2y}{d\lambda^2} = 0 $$ $$ - \frac{d^2z}{d\lambda^2} = 0 $$ Now we parametrise by proper time $d\lambda = d\tau = \frac{1}{c} ds$, introduce $x_\mu = (ct,-x,-y,-z)^T$ and multiply by $m$. This leaves us with $$ m \frac{d^2x_\mu}{d\tau^2} = 0 $$ the covariant equation of motion of a free particle, combining all 4 equations.
Using $$ d\tau = \frac{1}{c} ds = \frac{1}{c} \sqrt{c^2 dt^2 - dx^2 - dy^2 - dz^2} \\ = \frac{1}{c} dt \sqrt{c^2 -\left(\frac{dx}{dt}\right)^2 - \left(\frac{dy}{dt}\right)^2 - \left(\frac{dz}{dt}\right)^2} = dt \frac{1}{\gamma(v)} $$ to express things in terms of coordinate time $t$, this is equal to: $$ \frac{d}{dt} \left( m \cdot \gamma(v) \cdot \frac{dx_\mu}{dt} \right) \hat{=} \frac{d}{dt} \left( \matrix{\gamma(v) \cdot m c \\ - m \cdot \gamma(v) \cdot \frac{dx}{dt} \\ - m \cdot \gamma(v) \cdot \frac{dy}{dt} \\ - m \cdot \gamma(v) \cdot \frac{dz}{dt}} \right) \hat{=} \frac{d}{dt} \left( \matrix{ \gamma(v) \cdot m c \\ - m \cdot \gamma(v) \cdot \vec{v} } \right) = \left( \matrix{0 \\ \vec{0}} \right) $$ The new dynamical quantities are $ \vec{p} = m \cdot \gamma(v) \cdot \vec{v}$, which we may call momentum, and $\frac{E}{c} = \gamma(v) \cdot m c $, where $E$ is energy. One can now try to add forces on the right side of the equation of motion. In short: if we start from the assumption that a free particle moves on a straight line in Minkowski space, we are led to new dynamical quantities $\vec{p}$ and $E$ that describe properties of motion much as they did in Newtonian mechanics. If one tries to describe nature on the basis of tensors, the quantity $\gamma(v) \cdot m$ is not a "good" quantity, as it does not transform like a tensor (e.g. a scalar). However, the quantities $m$ and $(\frac{E}{c}, \vec{p})^T$ are tensors (a scalar and a contravariant tensor of first rank). So these are the "better" quantities according to that criterion.
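As a sanity check that this $\vec{p}$ reduces to the familiar Newtonian momentum at low speed, one can evaluate $\gamma(v)mv$ numerically (unit mass assumed) and compare it with $mv$ and with the first relativistic correction $mv + mv^3/(2c^2)$.

```python
import math

c = 299792458.0   # speed of light, m/s
m = 1.0           # unit mass, assumed for illustration
v = 0.1 * c       # a "low" speed: beta = 0.1

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
p_rel = gamma * m * v                           # relativistic momentum
p_newton = m * v                                # Newtonian momentum
p_first_order = m * v + m * v**3 / (2 * c**2)   # expansion of gamma*m*v

err_newton = abs(p_rel - p_newton)
err_first_order = abs(p_rel - p_first_order)
```

The first-order correction already accounts for most of the discrepancy at $\beta = 0.1$, consistent with the Taylor expansion $\gamma(v) = 1 + v^2/(2c^2) + \dots$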
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
The meaning of the temperature in the Shockley Equation In the Shockley equation, which is the model of the $I$-$V$ curve of a p-n junction, what does the Temperature refer to, carrier temperature or lattice temperature? When a p-n junction subjected on a forward current, is the carrier temperature higher than lattice temperature as it does in the laser exciting case?
The Shockley diode equation doesn't distinguish between the carrier temperature ($T_{eh}$) and the lattice temperature $T$; it assumes that they are in equilibrium, $T_{eh} = T$. Just a word of caution: we can't really say we have a single hot carrier, because temperature is a property of a large number of particles. You can say that you have a hot electron gas which has a range of velocities given by the Fermi-Dirac distribution with $T_{eh} > T$. Regarding hot-carrier effects at forward bias: it very much depends on the device structure you are considering (more on this later). But in general the best way to think about this is as an energy balance. A forward bias accelerates carriers, so they gain energy from the field at a rate $R_{field}$; however, carriers can lose energy by emitting phonons at a rate $R_{phonons}$. Usually $R_{field} \ll R_{phonons}$, so the carrier temperature remains in equilibrium with the lattice. At extreme forward-bias conditions it's possible that $R_{field} > R_{phonons}$, allowing $T_{eh} > T$, until velocity saturation occurs. To generate hot carriers at modest forward voltages you need to design your semiconductor heterostructure with a potential cliff. For example, see the diagram below. Here electrons are injected into the low-bandgap region, where they join a hot distribution. You have to go to quite extreme lengths to generate hot carriers in semiconductors because they lose energy by emitting phonons very quickly. For example, in GaAs the LO-phonon energy is 36 meV and $R_{phonons} = 1\,\mathrm{ps}^{-1}$. Therefore, if an electron has 1 eV of excess energy (above the band edges) it can cool to the band edges within around 30 ps (after emitting around 30 phonons)! If you are interested in hot-carrier effects in semiconductors you should read about hydrodynamic transport equations.
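For reference, the Shockley equation itself, with its single temperature $T$ (the saturation current $I_s$ and ideality factor $n$ below are assumed example values, not from the question):

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def diode_current(V, T=300.0, I_s=1e-12, n=1.0):
    """Shockley equation I = I_s (exp(V / (n V_T)) - 1), with V_T = kT/q.
    I_s and n are assumed example values, not from the question."""
    V_T = k * T / q   # thermal voltage, about 25.9 mV at 300 K
    return I_s * (math.exp(V / (n * V_T)) - 1.0)

V_T_300 = k * 300.0 / q
I_low = diode_current(0.5)    # A
I_high = diode_current(0.6)   # A
```

The only temperature entering the formula is the equilibrium one, via the thermal voltage $kT/q$; nothing in the equation models a separate carrier temperature.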
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Shadow of a ripple? Suppose a stone is thrown into a pool of crystal clear water and its a bright sunshiny morning. You can observe a shadow of the wave in the bottom of the pool. Why does this happen? Is it due to superposition of light or some other thing?
The dark part of the shadow is the umbra, and the part that is a little lighter is the penumbra. They can be experienced on Earth, but more readily in space, such as during a solar eclipse, when the Moon moves in front of the Sun and leaves a shadow on Earth.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Why does the human body feel loud music? I was sitting close to a speaker and I could feel the sound coming from it all over my body, especially in my heart, and it pounded with the loud beats of the music. Was my heart pounding because of the excitement at listening to the music or was I really feeling the sound in my heart and all over my body? I have some understanding that it is all about sound waves & acoustics (bass/low & high pitch/low and high notes etc.) but it is not clear to me. I hope I have correctly framed my question.
was I really feeling the sound in my heart and all over my body? It is definitely possible to feel sound. This occurs when the pressure is high enough and the frequency is low enough for the sense of touch. The heart can definitely produce a sensation of pain, perhaps also that of external pressure albeit with a rather low sensitivity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
When a pn junction is formed, why is a positive region of charge formed on the n side of the junction? I understand that when electrons diffuse from n-side to p-side, negative charge is developed on the p-side. But the mere absence of electrons on the n-side doesn't make that positively charged. The n-side must be neutral as it has no charge now. Where am I getting wrong?
Note also that dopant atoms are themselves neutral, so an n-type semiconductor as a whole is neutral but contains free electrons. If, for example, you apply an external voltage, "only" the free electrons will move away, and you are left with a positively charged n-type semiconductor: the ionized donor atoms stay behind, fixed in the lattice. Similarly, in p-type material under an electric field, "only" the holes have the chance to move away. (Late, though)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How much mass is typically ejected from a supernova? How much mass is released from a supernova of a 15 solar-mass star? 20? 25? What is the relation between star mass and mass ejected?
I like to explain this using a figure from a talk by Marco Limongi some years ago. Based on a given set of models, the $x$-axis shows the initial mass of the models and the $y$-axis the final mass. The different coloured layers show the composition of the star at the moment of collapse. The mass ejected in the supernova is the difference between the curve marked remnant mass, which specifies (for these models) how much matter became part of the remnant, and the final mass, which was the mass of the star at collapse, after it had already lost a lot during its life. The interesting point in this prediction is the change between the supernovae that leave neutron stars versus those that leave black holes. At the boundary, there's a large drop in the supernova-ejecta mass, because the black hole doesn't have a surface off of which inward falling material can bounce. But, though the broad trends are probably right, note that this is the result for a particular set of model assumptions (e.g. mass loss on the main sequence, supernova energy and dynamics). The amount of ejecta for the supernova of a given progenitor is an open question, and still subject to intense research.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Is Magnus effect a corollary of Bernoulli principle? Magnus effect is commonly explained using Bernoulli principle. However, taking the lift on a rotating cylinder as an example, the velocity difference is caused by the extra work done by the rotating cylinder but not by the pressure difference, the Bernoulli principle is basically energy conservation along a streamline. However here the energy is not conserved due to external work done. So is Bernoulli principle abused in explaining Magnus effect?
As shown in the figure, rotating water in a bucket becomes concave, and the faster the rotation, the more concave the surface. This shows that at a given height H, the closer the water is to the center of rotation, the lower its pressure; and the faster the water rotates, the lower the pressure at the center of rotation. As shown in the figure, the air flow on the right side of the ball moves opposite to the ball's rotation, so it is slowed down; the air flow on the left side moves in the same direction as the ball's rotation, so it moves fast. By the conclusion drawn from the bucket, the pressure on the left side of the ball must be lower than on the right side. So the ball experiences a right-to-left force F. That is why I don't use Bernoulli's principle to explain the Magnus effect.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 1 }
How does an Inductor "store" energy? It seems to me that an electromagnetic field is nothing more than a collection of photons, which as I've heard, extends through space infinitely. Why is it, then, that an inductor such as simple copper wire loop, can "store" energy in it as an electromagnetic field? Wouldn't the photons or waves of EMF just fly away into space and be lost (the energy would be lost, not stored), how is it that this energy is stored as if the photons would fall back down and hit the wire to create current when the field collapses?
Your argument that the energy should radiate away would be true if your inductor were a good antenna, in which case it would be a bad inductor! The problem is an impedance mismatch: The inductor produces a magnetic field (which stores the energy you inquire about), but little electric field. That is the wrong ratio, or impedance, to couple to the vacuum where photons travel at the speed of light. You obviously are correct in arguing that this is nevertheless electromagnetic energy that must be quantized as photons. But these photons are localized, essentially trapped inside or in the neighborhood of the inductor.
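Quantitatively, the energy trapped in that magnetic field is the standard circuit result $E = \tfrac{1}{2} L I^2$. A minimal sketch (the component values below are assumptions, not from the question):

```python
# Sketch: energy stored in an inductor's magnetic field, E = (1/2) L I^2.
# Component values are illustrative assumptions.
L = 10e-3   # inductance, henries (assumed 10 mH)
I = 2.0     # steady current, amperes (assumed)

E = 0.5 * L * I**2   # joules
print(E)
```

When the circuit is broken, this is the energy the collapsing field delivers back to the wire (or, if interrupted abruptly, into an arc).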
{ "language": "en", "url": "https://physics.stackexchange.com/questions/171955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Polyakov equation in string theory According to the Polyakov equation, our universe would have not 10 or 11 dimensions but 26, because it refers to the bosonic theory. Are there any connections between this equation and Lorentz invariance? Can you explain where, ideally in the form of formulas?
There are a few different ways to see that the bosonic string lives in $D=26$. This, by the way, is known as the critical dimension of the theory. I'll give a brief sketch of the answer; a more complete one can be found in any textbook, in particular Polchinski's. Classically, the Polyakov action has 3 main symmetries. These are: 1) Lorentz invariance of the target space (the $D$-dim. space), 2) diffeomorphism invariance of the worldsheet theory, and 3) worldsheet Weyl invariance. Quantum mechanically, there is a potential for any of these symmetries to become anomalous (an anomaly is when a symmetry of a classical theory is not a symmetry of the quantized theory). Upon quantization, it is found that these symmetries will be anomalous unless $D=26$. The easiest way to see this is to quantize the open string in light-cone gauge, and find that the photon will be massive unless $D=26$. (A massive photon is inconsistent with Lorentz invariance.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/172040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When to use $h = c_p\Delta T$ or $u = c_v\Delta T$ I'm getting myself confused on when to use $h = c_p \Delta T$ or $u=c_v \Delta T$, where $c_p$ is the specific heat at constant pressure and $c_v$ is the specific heat at constant volume. It's in relation to thermodynamic processes such as expanding volumes with pistons and the like. Here's what I know (in relation to this): First law for a closed system (per unit mass) $$q-w = \Delta u$$ First law for an open system (per unit mass) $$q-w_s = \Delta (h+\frac12c^2 +gz)$$ Example Say I've got a piston expanding - causing an ideal gas to expand at constant pressure. I can say that $\mathrm{d} w = p\mathrm dv$ as well as $\mathrm du = c_v\mathrm dT$ ─ is this correct? Subbing this in I get $$\mathrm dq = p\mathrm dv + c_v \mathrm dT,$$ whereas if I decide I want to use \begin{align} h & = u+ pv \\ \mathrm dh & = \mathrm du + p \mathrm dv + v \mathrm dp \\ \mathrm du & = \mathrm dh - p \mathrm dv - v \mathrm dp \end{align} giving \begin{align} \mathrm dq & = p\mathrm dv + \mathrm dh - p \mathrm dv - v \mathrm dp \\ \mathrm dq & = c_p \mathrm dT - v \mathrm dp . \end{align} Which (if any) expression for $\mathrm dq$ is correct? I feel like there are some flaws in my fundamental understanding of what's happening here. Is it to do with open/closed systems?
Fundamentally there's a simple difference. For an ideal gas at constant volume, no boundary work is done, so the heat absorbed equals the change in internal energy: $q = \Delta u = c_v\Delta T$. In this case you must use $c_v$, obviously. At constant pressure, on the other hand, that equality no longer holds because the gas does boundary work; there the heat absorbed equals the change in enthalpy, $q = \Delta h = c_p\Delta T$, which is exactly what your (correct) derivation with $c_p$ gives when $\mathrm dp = 0$. Keep in mind the values of $c_p$ and $c_v$, and that $c_p - c_v = R$ for an ideal gas.
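The distinction can be checked numerically. A minimal sketch for an ideal monatomic gas, per mole (the values $c_v = \tfrac{3}{2}R$, $c_p = \tfrac{5}{2}R$ and the temperature rise are standard/assumed, not from the question):

```python
# Sketch: heat absorbed per mole in a constant-volume vs constant-pressure
# process for an ideal monatomic gas (cv = 3R/2, cp = 5R/2 assumed).
R = 8.314        # gas constant, J/(mol K)
cv = 1.5 * R     # molar heat at constant volume
cp = 2.5 * R     # molar heat at constant pressure
dT = 100.0       # temperature rise, K (assumed)

q_const_volume = cv * dT     # rigid container: q = du = cv*dT
q_const_pressure = cp * dT   # piston at constant p: q = dh = cp*dT

# The difference is exactly the boundary work p*dv = R*dT (ideal gas).
assert abs((q_const_pressure - q_const_volume) - R * dT) < 1e-9
print(round(q_const_volume, 1), round(q_const_pressure, 1))
```

The extra heat needed at constant pressure is precisely the work $p\,\mathrm dv = R\,\mathrm dT$ pushed out through the piston, which is why $c_p > c_v$.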
{ "language": "en", "url": "https://physics.stackexchange.com/questions/172146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About Lorentz Group In the definition of orthogonal matrices we say that a matrix $A$ is orthogonal if $A^TA = I$, while for the Lorentz group it is written as $\Lambda^Tg\Lambda = g$, and we say that the Lorentz transformations form an orthogonal group. My question is: why do we insert the $g$ in the above definition?
On a vector space $V$ with metric $g$ - be that euclidean, lorentzian or whatever - the Orthogonal group $O(V,g)\subset GL(V)$ is defined to be the group of (linear) isometries on $V$. More precisely, for an element $\Lambda\in O(V,g)$, $$ g(\Lambda v,\Lambda u)=g(u,v)$$ holds for all $u,v\in V$. Orthogonal transformations preserve lengths and angles. Expanding the above equation in an orthonormal basis $g_{ij}\equiv g(e_i,e_j)$ one finds \begin{align} g(\Lambda u,\Lambda v)&=(\Lambda u)^i\,g_{ij}\,(\Lambda v)^j \\ &=\Lambda^i_{\;k}u^k\,g_{ij}\,\Lambda^j_{\;l}v^l\\ &=(\Lambda^T)_k^{\;i}\,g_{ij}\,\Lambda^j_{\;l}\;u^k v^l\\ &=(\Lambda^Tg\Lambda)_{kl}u^kv^l\\ &\stackrel{!}{=} g(u,v) = g_{kl}\,u^kv^l \end{align} Since this must be true for any $u,v$, the statement follows. Observe that nowhere in this discussion did we use the particulars of the metric. The discussion is valid for general $g$. In the euclidean case one has $g_{ij}=\delta_{ij}$, so the relation simplifies to $\Lambda^T\Lambda=I$.
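The condition $\Lambda^T g \Lambda = g$ can be verified numerically for a concrete Lorentz transformation. A small illustration (not part of the answer above), using a boost along $x$ with an arbitrary rapidity and the sign convention $g = \mathrm{diag}(-1,1,1,1)$:

```python
# Numerical check: a Lorentz boost along x with rapidity phi satisfies
# Lambda^T g Lambda = g for the metric g = diag(-1, 1, 1, 1).
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
phi = 0.7                                # rapidity (arbitrary choice)
ch, sh = np.cosh(phi), np.sinh(phi)
Lam = np.array([[ch, sh, 0.0, 0.0],
                [sh, ch, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

assert np.allclose(Lam.T @ g @ Lam, g)   # isometry condition holds
print("Lambda^T g Lambda == g")
```

With the euclidean metric $g = I$ the same check collapses to the familiar $\Lambda^T\Lambda = I$, which is exactly the point of the answer.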
{ "language": "en", "url": "https://physics.stackexchange.com/questions/172247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }