Why do high-current conductors heat up a lot more than high-voltage conductors? 120 volts × 20 amps = 2,400 watts. However, if I increase the voltage and lower the current, I can use a smaller (cheaper) wire size, produce less heat, and still deliver the same power: 1,000 volts × 2.4 amps = 2,400 watts. Why doesn't the high-voltage case heat up the way the high-current case does? To me this approach seems more efficient and less costly because you don't use as much material, so why isn't this common?
Amps travel in a straight line and so must travel inside the wire. Volts travel around the amps, largely outside the wire. So amps will generate heat - because the atoms and their valence electrons create a degree of resistance - while volts, generally, will not. But if you use thick enough wire you will not notice the heat increase.
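A minimal numerical sketch of the resistive-loss comparison behind the question's numbers (the 0.5 Ω wire resistance is a hypothetical assumption, not from the question or answer): for a fixed delivered power, the heat generated in the wire scales as $I^2 R$, so raising the voltage and lowering the current reduces the wire heating even though the delivered power is the same.

```python
# Compare resistive heating in the same wire for two ways of delivering 2,400 W.
# Assumed wire resistance of 0.5 ohm is purely illustrative.
R_wire = 0.5  # ohms (hypothetical)

cases = [
    ("120 V / 20 A", 20.0),
    ("1000 V / 2.4 A", 2.4),
]

for label, current in cases:
    p_loss = current**2 * R_wire  # power dissipated as heat in the wire (W)
    print(f"{label}: wire loss = {p_loss:.1f} W")

# 120 V / 20 A: wire loss = 200.0 W
# 1000 V / 2.4 A: wire loss = 2.9 W
```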
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Non-locality and quanta Quantum mechanics is non-local in that long distance correlations are present, though there is no signalling possible. But QFT is Lorentz invariant and contains quantum mechanics as a special case. I assume this is not a paradox as paradoxes do not exist but I do not understand the details. Can anyone supply a reference or satisfactory explanation?
Apparent (pseudo-)"long-distance" correlations exist in classical statistical physics too. Consider a system in which, at some time and for some reason, two subsystems are locally correlated, and suppose that these two subsystems are later spatially separated. The correlations between the subsystems continue to exist (they are completely independent of the distance between the subsystems), while no information or energy can be sent instantaneously from one subsystem to the other. So correlations by themselves have nothing to do with non-locality - in classical statistical physics, in quantum mechanics, or in quantum field theory. Quantum mechanics and QFT respect causality/locality.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Applying multiple forces to one object and calculate net movement and rotation? I'm working on a small game as a hobby project, and I've run into a problem that would seem simple, to me, but that I can't find any information on or solution to. How would one go about figuring out what happens to this object, in terms of movement and rotation? I have a lot of (bad) ideas, but I think I'll hold them back and just leave it at that. This is my first post here, sorry if it's inappropriate etc. (Please let me know.) (PS. Yes, it's a space game with small ships with many engines : )
You may remember Newton's second law from high school physics: $$ \sum{\vec{F}}=m\vec{a} $$ where $\vec{a}$ is the acceleration of the center of mass. There is a similar relation for angular acceleration: $$ \sum{\tau}=I\alpha $$ where $\tau$ is torque and $I$ is the moment of inertia about the center of mass. This assumes that your game is 2D. In 3D, the torque and angular acceleration become vectors, and the moment of inertia becomes a tensor.
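A minimal 2D sketch of applying these two relations, assuming a set of engine forces applied at known points of the ship (all names and numbers are illustrative, not from the question):

```python
# Sum forces and torques for a 2D rigid body (e.g. a small ship with several engines).
m = 10.0      # mass (kg), hypothetical
I = 4.0       # moment of inertia about the center of mass (kg m^2), hypothetical

# Each engine: (rx, ry) application point relative to the center of mass, (Fx, Fy) force
engines = [
    ((-1.0, 0.0), (0.0, 50.0)),   # left engine pushing "up"
    (( 1.0, 0.0), (0.0, 40.0)),   # right engine, slightly weaker -> net torque
]

Fx = sum(f[0] for _, f in engines)
Fy = sum(f[1] for _, f in engines)
# 2D cross product r x F gives the scalar torque about the center of mass
tau = sum(r[0] * f[1] - r[1] * f[0] for r, f in engines)

ax, ay = Fx / m, Fy / m          # linear acceleration of the center of mass
alpha = tau / I                  # angular acceleration

print(f"a = ({ax:.2f}, {ay:.2f}) m/s^2, alpha = {alpha:.2f} rad/s^2")
```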
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I find the tension in the cable from this problem? I am trying to find the tension of the cable but I don't know what to do.
You can approach this problem in two ways: * *Balance the forces: Choose a coordinate system, draw a free-body diagram, and do some bookkeeping as to which forces lie completely or partly along the principal axes of your coordinate system (if partly, use your trig identities to split up any forces that have components along both). We can balance the forces here because there is no net force (no acceleration) on the beam if it is in equilibrium: $ \sum \vec{F} = 0 $ * *Balance the torques: Choose a sign convention (usually positive torques are CCW, but it makes no difference), choose a point about which to analyze the torques, and do some bookkeeping. Again, if the beam is in equilibrium there is no net torque (no rotational acceleration). Note that the torque is $ \vec{\tau} = \vec{r} \times \vec{F} $, with magnitude $ rF \sin\theta $, where $r$ is the distance from the pivot point to where the force acts, $F$ is the magnitude of the force, and $\theta$ is the angle between the force and the radial vector. A small worked sketch with hypothetical numbers is given below.
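Since the original figure and numbers are not included here, this is only a hypothetical illustration of the torque-balance step: a uniform horizontal beam hinged at a wall, supported by a cable attached at its far end at an angle to the beam. All values below are assumptions.

```python
import math

# Hypothetical setup: uniform beam hinged at the wall, cable attached at the far end.
L = 4.0                       # beam length (m), assumed
m_beam = 20.0                 # beam mass (kg), assumed
theta = math.radians(30.0)    # angle between cable and beam, assumed
g = 9.81

# Torque balance about the hinge (so the unknown hinge reaction drops out):
#   T * L * sin(theta) = m_beam * g * (L / 2)
T = m_beam * g * (L / 2) / (L * math.sin(theta))
print(f"Cable tension T = {T:.1f} N")   # ~196 N for these made-up numbers
```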
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Synchronisation of clocks How can two clocks be synchronised with each other at some instant without being at the same place at the same time, considering that simultaneity is a relative concept?
Here's the standard way in flat spacetime. Let's say you want to produce a synchronized pair of clocks that are a spatial distance $d$ away from one another, then perform the following steps: * *Construct two identical clocks such that they start ticking when they receive a special light signal. Call the clocks clock $1$ and clock $2$. *Before you engage either clock with the light pulse, set clock $1$ to time $0$ and clock $2$ to time $d/c$ where $c$ is the speed of light in vacuum. *Still before you engage either clock, move clock $2$ a spatial distance $d$ away from clock $1$. *Send out the special light signal from right next to clock $1$ toward clock $2$ so that it immediately starts clock $1$. The signal will reach clock $2$ at precisely the time $d/c$ later, so once clock $2$ is engaged, it will be synchronized with clock $1$ for all later times.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Do centripetal and reactive centrifugal forces cancel each other out? In order for a body to move with uniform velocity in a circular path, there must exist some force towards the centre of curvature of the circular path. This is centripetal force. By Newton's Third Law, there must exist a reactive force that is equal in magnitude and opposite in direction. This is the reactive centrifugal force. My question is simple, and it is probably the result of lack of common sense but here it goes: In uniform circular motion, why don't these forces simply cancel each other out? If they did, how would we know they exist in that situation? When I swing a rock tied to a rope, I feel the centrifugal force, but not the centripetal force. In this situation how can the reactive force be greater than the force itself?
The centripetal force is the accelerating force acting on the stone toward the centre of the orbit; the reactive centrifugal force can be considered its Newton's 3rd-law pair. If you swing a stone attached to a string in a circle, the centripetal force is the pull of the string on the stone, while the pull of the stone on the string - which is not physically very relevant - is the reactive centrifugal force. They do not cancel because they act on different bodies: one acts on the stone, the other on the string.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 1 }
Will the density of a metal increase during forging? This question is about metallurgical engineering, but I had a similar doubt regarding the density of liquids and what causes it. Forging refines defects; dislocations are moved, strengthening the metal. But will the density of the forged metal change? My earlier question was: what causes liquids to have different densities?
The density of the metal, steel in this case, is a function of atomic weight, atomic spacing, and the volume of steel being measured. Since atoms in the metal crystal (no, there are no molecules in ordinary steel) are very small, it would take a huge change in the inter-atomic spacing to affect density to a measurable degree. The change would have to affect the strength of the bonds that hold the atoms together (and apart) and this is not easily changed by conventional physical means. The simpler answer is: No, forging does not change the density of the metal.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What happens to a conducting ring when exposed to an electric field? It might be a silly question, but one of my friends just got asked this at an oral exam, and he could not answer it and didn't receive the answer either (or at least he forgot it). I've been thinking about it for a while, and I'm not sure what would really happen. If I take, let's say, a wedding ring made from a conducting material, place it on a table, and turn on an external electric field, what happens? The inside of the ring is field-free, right? So does anything even happen, and if so, is it only on the surface of the ring?
The answer depends on some factors: * *Orientation of the field. A. If your field is horizontal and passes through the wedding ring, charges will be induced on the ring and it may or may not move, depending on the field strength. B. If your field passes vertically through the wedding ring, then if the thickness of the ring is negligible, nothing will happen. C. If your field is at an angle, then the ring may rotate and move, just rotate, just move, or do nothing, again depending on the field strength. *Field strength. A. If you apply a high electric field in the horizontal direction and somehow keep the ring fixed, the induced charges may jump through the air. B. If you keep increasing the strength of the electric field, there will come a point when the ring is ionised, but that would not be practical. *If you apply a time-varying field of sufficient strength, the ring may start vibrating.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does a photon instantaneously gain speed $c$ when emitted from an electron? An excited electron loses energy in the form of radiation. The radiation consists of photons, which move at speed $c$. But is the process of converting the energy of the electron into the energy of the photon instantaneous? Is there a simple way to visualize this process rather than math?
Quantum Mechanics tells us that electrons only lose or gain energy equal to the energy of an incoming or outgoing photon. And by default, all photons travel at speed c in vacuum. As I understand it, there is no "conversion" time for energy. Photons are energy and energy comes in photons. What we choose to call them is more a reflection of the state of the system than how much or what wavelength of photons are emitted. Hope this helps!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 5 }
Minimum separation between two Airy disks as a function of the distance between two point sources of coherent light passing through the same objective I have two coherent point sources of light, $A$ and $B$, separated by a distance $L$, which I focus down to the diffraction limit using a high-powered objective (e.g. a $\approx 100x$ objective). If I turn on $A$ and turn off $B$, I have an Airy disk at position $c_1$, and if I turn off $A$ and turn on $B$, I have an Airy disk at position $c_2$. Given that both light sources are sent through the same objective, what is the minimum distance between $c_1$ and $c_2$? Is it simply $L$ scaled down by the objective (i.e. $\frac{L}{100}$)? Or does something odd happen because of e.g. curvature of the lens in the objective? EDIT: A restatement of this question would be the following - assuming all of the optics are perfect, if I shine a laser at a point $(x,y)$ on an objective, and then shine the laser at a point $(x_2,y_2)$, will the peak of the Airy disk move the same distance?
Look up the angular resolution of an optical telescope. It is given approximately by the equation $$ \theta=1.22\frac{\lambda}{D}. $$ From this you can convert to the linear separation between the two objects.
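A minimal numerical sketch of that conversion (the wavelength, aperture diameter, and focal length below are assumptions for illustration; the image-plane separation is the angle multiplied by the effective focal length):

```python
import math

lam = 550e-9      # wavelength (m), assumed green light
D = 5e-3          # objective aperture diameter (m), hypothetical
f = 2e-3          # effective focal length (m), hypothetical

theta = 1.22 * lam / D          # Rayleigh angular resolution (rad)
dx = theta * f                  # corresponding linear separation in the focal plane
print(f"theta = {theta:.2e} rad, minimum separation = {dx*1e9:.0f} nm")
```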
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Efflux speed of ideal fluid dependent on diameter? I have a cylinder full of water with diameter $D$ with a round opening on the bottom with diameter $d$. The water is friction-free and incompressible. Now I need a relationship for the efflux speed $v$ with which water exits the cylinder and I shouldn't use the approximation $d \ll D$, but formulate a general relationship. Ok. So what I thought is to equate the Bernoulli law on the top of the cylinder with that on the bottom of the cylinder which gives me $v=\sqrt{2gh}$. Solved. How does the speed of efflux depend on the diameter of the efflux hole? I googled quite a lot and all I could find was the above relationship...
If you are concerned only with the efflux speed, you need not worry much about the diameter of the efflux hole. However, if you want the volume flow rate of the fluid, you do need the diameter of the hole. In that case $dV/dt = A v = \frac{\pi d^2}{4} v$, where $d$ is the diameter of the hole and $v$ is the efflux speed.
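A side note on the question's request for a relation without the $d \ll D$ approximation: combining continuity ($v_{\text{top}}\, \pi D^2/4 = v\, \pi d^2/4$) with Bernoulli gives $v = \sqrt{2gh/\left(1 - (d/D)^4\right)}$, which reduces to $\sqrt{2gh}$ when $d \ll D$. A minimal numerical sketch (the tank dimensions are assumptions):

```python
import math

g = 9.81
h = 1.0      # water height (m), hypothetical
D = 0.30     # cylinder diameter (m), hypothetical
d = 0.05     # hole diameter (m), hypothetical

v_approx = math.sqrt(2 * g * h)                          # d << D approximation
v_exact = math.sqrt(2 * g * h / (1 - (d / D) ** 4))      # general relation
Q = math.pi * d**2 / 4 * v_exact                         # volume flow rate (m^3/s)

print(f"v (d<<D) = {v_approx:.4f} m/s, v (general) = {v_exact:.4f} m/s, Q = {Q*1000:.2f} L/s")
```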
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
During reflection does the emitted photon have the same properties? When light (a photon) is reflected, the original photon is absorbed by an electron and then emitted again. Does this "new" photon have the same wavelength, frequency, etc. as the original?
There is no absorption and re-emission involved in reflection. None whatsoever. This can be proved with a laser; any colored laser. Using a collimated, expanded laser beam incident on the reflecting surface at an angle, you will get interference between the incident wave and the reflected wave. That is only possible if the two are coherent, which means the incident wave is reflected, not absorbed and re-emitted. The reflecting surface is not a lasing medium, so there couldn't be any coherent stimulated emission, and moreover it would have to work that way for any incident wavelength. The photons are not absorbed during reflection.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
How do I explain to a six year old why people on the other side of the Earth don't fall off? Today a friend's six year old sister asked me the question "why don't people on the other side of the earth fall off?". I tried to explain that the Earth is a huge sphere and there's a special force called "gravity" that tries to attract everything to the center of the Earth, but she doesn't seem to understand it. I also made some attempts using a globe, saying that "Up" and "Down" are all local perspective and people on the other side of the Earth feel they're on the top, but she still doesn't get it. How can I explain the concept of gravity to a six year old in a simple and meaningful way?
Explain that since the earth is round, and people do not fall down (as there is no absolute "down"), they fall towards the center of the earth.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "154", "answer_count": 15, "answer_id": 12 }
Perpetual motion in spaces of different gravity? Imagine two locations with different amounts of gravity. I carry a weight up in low gravity, move it at this height over to the other place, and let it fall down there in higher gravity. Wouldn't falling down release more energy than lifting up has cost? If so, is it theoretically possible to create such a transition between different levels of gravity near each other?
The main point is that Newtonian gravity fields are conservative. What that means is that it is impossible to have a configuration like the one you drew without there being gravitational fields pointing to the left and to the right in the regions where you want to do the 'horizontal' transfer. For example, you might try to achieve this on Earth by taking the usual uniform gravitational field and locating a very heavy mass just under the foot of the conveyor belt on the left. This will mean, though, that as you move your mass from the foot of that conveyor belt you will be fighting against the attraction of that very same mass, as shown with the red arrows: The net result is that doing both of those horizontal transfers takes work, and in fact it must take exactly the same amount of work as what you've gained from lifting the object in the weaker field. There are, of course, many possible ways to achieve the fields you want, apart from the one in my image, but because all gravitational fields are the sum of attractive forces to a bunch of point masses, and the field of each point mass is conservative, you will always, necessarily, have cross-pointing fields like the one I pictured that will do away with any perpetual motion engine.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Dielectric constant of water I need the dielectric constant of water from $10^{-2}$ Hz to $10^4$ Hz. As stupid as it may seem, I cannot find much info. I've googled for days. All I can find is close to GHz. And the only info close to Hz, ($100$ Hz) shows a great variation. A relative dielectric constant at $100$ Hz of about $4000$. So, I cannot interpolate back in frequency (I put a link to the paper at the end). Does anyone have any info about where I could find this data? I know that for constant current and about $20$ C the constant is $80.1$. What about at $50$ Hz? I need the complex dielectric constant to test a program. Any lead would be really appreciated. http://arxiv.org/abs/1010.4089
EDIT #2: * *I'm now made aware that you need wavelengths much larger than those presented here (a bit of an oops from reading this question quickly). This approach is still valid, but what you need cannot be obtained from these data. I'm going to leave this here, however, to collect downvotes and in case anyone needs $\epsilon_r$ over the range $10^{7}$ to $10^{16}$ Hz. You could find the imaginary (absorption) and real parts of the complex refractive index of water $$\bar{n} = n + i \kappa$$ and relate them to the relative permittivity, where $$\epsilon_r = n^2 - \kappa^2$$ at a given frequency. Some information on the frequency-dependent absorption and refractive index is certainly available. EDIT: See www.philiplaven.com/p20.html -- Figure 6: Complex refractive index of water at different wavelengths -- so it is clear the data exist. This is from D. Segelstein, "The Complex Refractive Index of Water", M.S. Thesis, University of Missouri, Kansas City (1981). You can download the data at this page. From these data, you can construct the relative dielectric permittivity using the formula above.
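A minimal sketch of that conversion, assuming you have tabulated $(n, \kappa)$ values (the numbers below are placeholders, not real water data): the full complex relative permittivity is $\epsilon_r = \bar{n}^2 = (n^2 - \kappa^2) + 2in\kappa$, whose real part is the formula quoted above.

```python
# Convert complex refractive index data to complex relative permittivity.
# The (frequency, n, kappa) triples below are placeholders, not real water data.
data = [
    (1.0e14, 1.33, 1.0e-7),
    (3.0e14, 1.33, 5.0e-8),
]

for freq, n, kappa in data:
    n_bar = complex(n, kappa)
    eps_r = n_bar ** 2          # eps' = n^2 - kappa^2, eps'' = 2*n*kappa
    print(f"f = {freq:.1e} Hz: eps_r = {eps_r.real:.4f} + {eps_r.imag:.2e}j")
```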
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Gravitational collapse and free fall time (spherical, pressure-free) A very large number of small particles forms a spherical cloud. Initially they are at rest, have uniform mass density per unit volume $\rho_0$, and occupy a region of radius $r_0$. The cloud collapses due to gravitation; the particles do not interact with each other in any other way. How much time passes until the cloud collapses fully? (This was originally from a multiple-choice exam - I solved the problem via dimensional analysis on the options then. I'm wondering how it might be solved directly now). The answer is $$t = \sqrt{\frac{3\pi}{32G\rho_0}}. $$
For a spherically symmetric distribution of mass, the acceleration felt by a test particle at radius $r$ is $-G M /r^2$ (negative because pointing in toward the center), regardless of the radial distribution of mass. This is a key part of the question; make sure you are comfortable with it. It is a concept that is related to Gauss' law of electromagnetism, if you've encountered that. The total mass of the collapsing cloud is given by the initial uniform density times the volume, or $M = (4 \pi /3) r_0^3 \rho_0$. From Newton's second law, the equation of motion for a test particle at the edge of the cloud is then $$ \frac{d^2r}{dt^2} = - \frac{4 \pi G r_0^3 \rho_0}{3r^2}$$ Now for some chain rule trickery (this is a nice trick, so it's good to remember it for similar differential equations): $$ \frac{d}{dt} = \frac{dr}{dt} \frac{d}{dr}$$ Keeping in mind that $v \equiv \frac{dr}{dt}$, and using the chain rule substitution just mentioned, the equation of motion is now $$ v \frac{dv}{dr} = - \frac{4 \pi G r_0^3 \rho_0}{3r^2}$$ The point of doing all this is that the differential equation is now more clearly separable. You can solve it by integrating as follows $$ \int v \, dv = - \frac{4 \pi G r_0^3 \rho_0}{3}\int\frac{dr}{r^2}$$ $$\frac{1}{2} v^2 = \frac{4 \pi G r_0^3 \rho_0}{3r} + C$$ (You also could have gotten to this point by relating gravitational potential energy to kinetic energy, and being careful about where you set the zero of the gravitational potential). When $r = r_0$, $v = 0$, so $C = - \frac{4 \pi G r_0^2 \rho_0}{3}$ and $$\frac{1}{2} v^2 = \frac{4 \pi G r_0^2 \rho_0}{3}\left(\frac{r_0}{r} - 1 \right)$$ $$ |v| = \sqrt{\frac{8 \pi G r_0^2 \rho_0}{3}\left(\frac{r_0}{r} - 1 \right)}$$ The total time can be found by integrating $$t_{\rm collapse} = \int dt = \int \frac{dr}{|v|} = \sqrt{\frac{3}{8 \pi G r_0^2 \rho_0}}\int_0^{r_0}{\frac{dr}{\sqrt{\left(\frac{r_0}{r} - 1 \right)}}}$$ This is going to be a tricky integral, so let's non-dimensionalize it. Make a change of variable $u \equiv r/r_0$. Then we have $$ t_{\rm collapse} = \sqrt{\frac{3}{8 \pi G \rho_0}}\int_0^{1}{\frac{du}{\sqrt{\frac{1}{u} - 1}}}$$ If you are really adept at trigonometric substitutions in integrals, here's your chance to shine. Otherwise, just use Wolfram Alpha or something similar to tell you that the integral evaluates to $\pi/2$. That gives, finally, $$ t_{\rm collapse} = \sqrt{\frac{3 \pi}{32 G \rho_0}}$$
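A quick numerical cross-check of the final formula, assuming an illustrative initial density (the value chosen is arbitrary): integrate the non-dimensionalized $dt$ numerically and compare with $\sqrt{3\pi/(32 G \rho_0)}$.

```python
import math
from scipy.integrate import quad

G = 6.674e-11
rho0 = 1.0e3          # initial density (kg/m^3), arbitrary illustrative value

# Analytic result
t_formula = math.sqrt(3 * math.pi / (32 * G * rho0))

# Direct numerical integration of dt = du / sqrt( (8*pi*G*rho0/3) * (1/u - 1) )
integrand = lambda u: 1.0 / math.sqrt((8 * math.pi * G * rho0 / 3) * (1.0 / u - 1.0))
t_numeric, _ = quad(integrand, 0.0, 1.0)

print(f"formula: {t_formula:.6e} s, numeric: {t_numeric:.6e} s")   # both ~2.1e3 s
```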
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Why is the angle of a triangular prism equal to the result of the following 2 calculations? (Experiment with optical goniometer) I know there are two ways of measuring the angle of a prism with a goniometer: let the collimator shine (monochromatic) light on 2 sides of the prism and measure the angle between the 2 reflected light beams, or let the light beam hit one side of the prism, note down the position of the goniometer table that the prism sits on (=$\beta_1$), check the location of the reflected light beam, turn the table counter-clockwise until the reflected light beam is in the same place, and note down that position (=$\beta_2$). Now, my problem is that, while I understand completely how to perform these actions, I don't understand why: in the first case the measured angle is the angle of the prism times two, and in the second case, subtracting $|\beta_1-\beta_2|$ from $180°$ gives the angle of the prism.
I believe you only need one rule: "The angle of reflection equals the angle of incidence." Draw a triangle on a piece of paper. (For this situation, the third dimension of the prism is irrelevant.) Draw two parallel lines originating from roughly the direction the apex of the prism is pointing. It's not necessary to be exactly in that direction, as you'll see. Those lines hit the two sides of the prism and reflect, relative to the normal to the surface, according to the rule I quoted. Since you know (I hope!) about complementary angles and the various theorems from trigonometry about the interior and exterior angles of a triangle, you'll soon see why the formula works. The second formulation is similar, although you have added the possibility of an error term due to inexact reading of the goniometer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/94965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is a simple mathematical model of a star? I had a discussion at work regarding a recent fusion experiment in China that resulted in temperatures five times hotter than the Sun. Someone mentioned that no one can know the temperature of the Sun. I replied that we have mathematical models of the Sun, but I didn't have any on hand, and I haven't been able to find much on the internet. So, where can I find a system of equations for modelling a star? I found this site, which is about the level of complexity I was looking for, but it looks spotty. For instance, $\partial T / \partial t$ and $\partial P / \partial t$ terms seem to come out of nowhere--most everything else is written in terms of $\partial / \partial M$. I can code numerical solutions to PDEs, but I haven't had much physics or anything, so I didn't know where or what to look for. EDIT: I found this really great post by Dr. Brian Koberlein describing a very (very) simple model of a star. He goes on to build upon that simple model here.
The variable $M$ on that page is used instead of the radial $r$ coordinate; $M$ denotes the total mass inside the ball of radius $r$, the cumulative mass. These explanations are clear e.g. from this alternative presentation of the equations of the stellar structure: https://en.wikipedia.org/wiki/Stellar_structure#Equations_of_stellar_structure That page also more or less describes where the equations come from and what they mean. On that page you found, $t$ is time and it isn't surprising that the time derivative of the pressure $P$ and absolute temperature $T$ appear in the dynamical equations as well. There are lots of books on stellar structure http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=stellar+structure&tag=lubosmotlsref-20 If you had some additional, more particular questions about the equations, their origin, or their independence etc., you may ask again.
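As an illustration of how simple a usable stellar model can be (this polytrope example is an addition, not from the linked pages): assuming a polytropic relation $P = K\rho^{1+1/n}$ between pressure and density, the hydrostatic-equilibrium part of the structure equations reduces to the Lane-Emden equation $\frac{1}{\xi^2}\frac{d}{d\xi}\left(\xi^2 \frac{d\theta}{d\xi}\right) = -\theta^n$ with $\rho = \rho_c\,\theta^n$. A minimal numerical sketch:

```python
import math

def lane_emden(n, d_xi=1e-4):
    """Crudely integrate the Lane-Emden equation outward until theta reaches zero."""
    # Series expansion near the centre: theta ~ 1 - xi^2/6, theta' ~ -xi/3
    xi = d_xi
    theta = 1.0 - d_xi**2 / 6.0
    dtheta = -d_xi / 3.0
    while theta > 0.0:
        d2theta = -theta**n - (2.0 / xi) * dtheta   # theta'' from the equation
        dtheta += d2theta * d_xi
        theta += dtheta * d_xi
        xi += d_xi
    return xi   # dimensionless "surface" radius xi_1

# For n = 1 the exact solution is sin(xi)/xi, whose first zero is at pi.
print(lane_emden(1.0), math.pi)
```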
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Tricky operator identity: $[L^2,[L^2,\vec{r}]]=2 \hbar ^2 \{ L^2, \vec{r}\}$? This operator identity showed up in a course I was taking, and it was given without proof. $$[L^2,[L^2,\vec{r}]]=2 \hbar ^2 \{ L^2, \vec{r}\}$$ The curly brackets denote the anticommutator, $AB+BA$. The $\vec{r}$ operator is the position operator. The $L^2$ operator is given by: $$L^2 = -\hbar ^2 \left( \frac{1}{\sin \theta} {\partial\over\partial\theta} (\sin \theta {\partial\over\partial\theta}) + \frac{1}{\sin^2 \theta} {\partial^2\over\partial\phi^2}\right)$$ Is there a way of proving this identity without tediously expanding all the commutators? I've been trying to find one but was unable to.
The symbol $r$ in the identity represents (and will represent in the text below) the whole three-component vector of operators $\hat{\vec r} = (\hat x, \hat y, \hat z)$. The simple way I found to prove the identity is to verify that all matrix elements of both sides match. Let's calculate the matrix elements of the operators $LHS,RHS$ between $$\langle j,m,a| LHS| k,n,b\rangle$$ and similarly for the right hand side. Here, $j,m$ and $k,n$ are the usual total angular momenta (which I will assume to be integers, just the orbital angular momentum case) and the $z$-component, and $a,b$ represent the other quantum numbers that won't matter. The advantage is that $\vec L$ combine to $L^2$ almost everywhere. The left hand side operator is $$ L^2 L^2 r - 2 L^2 r L^2 + r L^2 L^2 $$ so the matrix element (because $L^2$ acts either on the bra or ket vector in a simple way) is the same as the matrix element of $$ \hbar^4 r[ j(j+1)j(j+1) - 2j(j+1)k(k+1) + k(k+1)k(k+1)] $$ The coefficient in the parenthesis is equal to a complete square, $$ \hbar^4 r [j(j+1)-k(k+1)]^2 $$ Note that $\hbar^4 r$ is in all terms. The right hand side has the same matrix elements as the operator $$ 2\hbar^4 r [j(j+1) + k(k+1)] $$ They don't look "obviously" equal: one is quartic, one is quadratic. But we must realize that the operators on both sides are $j=1$ vector operators, from the $\vec r$ factor, so they only change the angular momentum by zero or $\pm 1$. So it is enough to compare the expressions for these three choices; for higher changes of $j$, the matrix elements on both sides clearly vanish (and are therefore equal). For $j=k$, the matrix element vanishes because of parity: $r$ carries the negative parity while the parities $(-1)^l$ are $(-1)^j$ or $(-1)^k$ for the bra/ket vectors. For $j=k+1$, the LHS is $$\hbar^4 r (k+1)^2 (k+2 - k)^2 = 4\hbar^4 r (k+1)^2 $$ while the RHS is $$2\hbar^4 r[(k+1)(k+2)+k(k+1)]= 4\hbar^4 r(k+1)^2$$ so it works. The same verification applies to the case $k=j+1$, too, just $j,k$ are interchanged. There are many other ways to calculate or verify the identity but I found this one easiest. Note that I am not assuming any coordinates; the abstract calculation above works in any coordinates.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can we calculate the frame dragging force of the Earth? Although clearly this force would be significantly greater with a rotating black hole, is it still possible to calculate this drag for say a satellite orbiting the Earth?
Yes, as mentioned in the comments, the frame-dragging of a satellite orbiting the Earth was measured by the Gravity Probe B mission. The gyroscopes on the Gravity Probe B measured a frame-dragging drift rate of $−37.2 \pm 7.2$ mas/yr , where the theoretical prediction was $−39.2$ mas/yr (mas = milliarcsec). The results can be found in this paper. The theoretical frame-dragging value follows from the Schiff equation $$ \boldsymbol{\Omega} = \frac{GI}{c^2r^3}\left( \frac{3(\boldsymbol{\omega}\cdot \boldsymbol{r})\boldsymbol{r}}{r^2}-\boldsymbol{\omega} \right), $$ where $\boldsymbol{r}$ is the position vector of the satellite, $I$ is the moment of inertia of the Earth, and $\boldsymbol{\omega}$ is the angular velocity of the Earth. You can see this equation in the figure below (source): This equation can be derived from gravitomagnetism; see this article, this article, or Weinberg's Gravitation And Cosmology, section 9.6 (Precession of Orbiting Gyroscopes). In order to find the average frame-dragging, we have to integrate the equation over an orbital revolution. Fortunately, Gravity Probe B has a polar orbit, for which the average value becomes $$ \boldsymbol{\Omega}_\text{av} = \frac{GI\boldsymbol{\omega}}{c^2r^3}\frac{\int_0^{2\pi}(3\cos^2\theta - 1)\text{d}\theta}{\int_0^{2\pi}\text{d}\theta} = \frac{GI\boldsymbol{\omega}}{2c^2r^3}, $$ where $\theta$ is the angle between $\boldsymbol{r}$ and $\boldsymbol{\omega}$. We have, using this source, $$ \begin{align} I &\approx 8.02 \times 10^{37}\,\text{kg}\,\text{m}^2,\\ \omega &= \frac{2\pi}{86164\,\text{s}} = 7.29 \times 10^{−5}\,\text{rad}\,\text{s}^{-1},\\ r &= 6371 + 642 = 7013\,\text{km}. \end{align} $$ Combining this, I find $\Omega_\text{av} = 40.8$ mas/y, close to the value cited in the Gravity Probe B paper.
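A quick numerical check of the final average using only the values already quoted in the answer:

```python
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
c = 2.998e8                    # m/s
I = 8.02e37                    # Earth's moment of inertia (kg m^2), as quoted
omega = 2 * math.pi / 86164    # Earth's angular velocity (rad/s)
r = 7013e3                     # orbital radius (m), as quoted

Omega_av = G * I * omega / (2 * c**2 * r**3)   # rad/s

# Convert rad/s to milliarcseconds per year
mas_per_rad = math.degrees(1) * 3600 * 1000
sec_per_year = 365.25 * 86400
print(f"{Omega_av * mas_per_rad * sec_per_year:.1f} mas/yr")   # ~41 mas/yr
```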
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Centre of instantaneous rotation problem Is there a point of Centre of Instantaneous Rotation (CIR) for every type of motion or only for cases of rolling?
For a 3D rigid body there is always an instantaneous screw axis. This consists of a 3D line (with direction) and a pitch. The pitch describes how much parallel translation occurs for each rotation of the rigid body. A pure rotation has zero pitch, whereas a pure translation has an infinite pitch. ( 3D Kinematics Ref. html, University of Pennsylvania Presentation ppt, Screw Theory wiki) Screw Properties * *Given a moving rigid body, a point A located at $\vec{r}_A$ at some instant has linear velocity vector at the same point $\vec{v}_A$ and angular velocity $\vec{\omega}$. *The screw motion axis has direction $$\vec{e} = \frac{\vec{\omega}}{|\vec{\omega}|}$$ *The screw motion axis location closest to A is $$\vec{r}_S = \vec{r}_A + \frac{\vec{\omega}\times\vec{v}_A}{|\vec{\omega}|^2}$$ *The screw motion pitch is $$h = \frac{\vec{\omega} \cdot \vec{v}_A}{|\vec{\omega}|^2}$$ where $\times$ is the cross product, and $\cdot$ is the dot (scalar) product. Proof Imagine a point S having a linear velocity $\vec{v}_S$ not necessarily parallel to the rotation axis $\vec{\omega}$. Working backwards (from S to A), the linear velocity of any point A on the rigid body is $$ \vec{v}_A = \vec{v}_S + \vec\omega \times ( \vec{r}_A-\vec{r}_S) $$ This is used in the screw axis position equation $|\vec{\omega}|^2 (\vec{r}_S-\vec{r}_A) = \vec{\omega} \times \vec{v}_A$ (from above) as $$ |\vec{\omega}|^2 (\vec{r}_S-\vec{r}_A) = \vec{\omega} \times \vec{v}_S - \vec{\omega} \times \vec\omega \times ( \vec{r}_S-\vec{r}_A)$$ which is expanded using the vector triple product as $$ |\vec{\omega}|^2 (\vec{r}_S-\vec{r}_A) = \vec{\omega} \times \vec{v}_S - \vec{\omega} (\vec{\omega}\cdot (\vec{r}_S-\vec{r}_A))+ |\vec{\omega}|^2 (\vec{r}_S-\vec{r}_A)$$ $$ \vec{\omega} \times \vec{v}_S = \vec{\omega} (\vec{\omega}\cdot (\vec{r}_S-\vec{r}_A)) =0 $$ since the right hand side is always parallel to $\vec{\omega}$ and the left hand side is always perpendicular to $\vec{\omega}$. The only solution to the above is for the velocity at the screw axis S to be parallel to the rotation $$ \vec{v}_S = h \vec{\omega} $$ and the velocity at A becomes $$ \vec{v}_A = h \vec{\omega} + \vec{\omega} \times (\vec{r}_A-\vec{r}_S) $$
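A minimal sketch of those formulas, assuming you know the angular velocity and the linear velocity of one body point (the numerical values are illustrative):

```python
import numpy as np

def screw_axis(r_A, v_A, omega):
    """Return the direction, a point on the instantaneous screw axis, and its pitch."""
    w2 = np.dot(omega, omega)
    e = omega / np.sqrt(w2)                      # axis direction
    r_S = r_A + np.cross(omega, v_A) / w2        # axis point closest to A
    h = np.dot(omega, v_A) / w2                  # pitch (translation per radian)
    return e, r_S, h

# Illustrative values
omega = np.array([0.0, 0.0, 2.0])        # rad/s
r_A = np.array([1.0, 0.0, 0.0])          # m
v_A = np.array([0.0, 3.0, 0.5])          # m/s

e, r_S, h = screw_axis(r_A, v_A, omega)
print("direction:", e, "point on axis:", r_S, "pitch:", h)
```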
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Can a mass matrix be asymmetric? I am developing a mathematical model of a mechanical device consisting basically of coupled harmonic oscillators. It turns out that the system mass matrix is asymmetric. I seem to read somewhere that a mass matrix has to be symmetric, but I am not sure. So I would like to know whether it is possible for a mass matrix in this case to be asymmetric. If it can't, what are the physical implications of an asymmetric mass matrix in this case?
In the world of robotics and dynamical systems the mass matrix is always symmetric. It is also positive definite, a result of kinetic energy $$ K=\frac{1}{2} \dot{q}^\top M \dot{q} $$ being always positive.
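A small numerical illustration of a related fact (not stated in the answer, but relevant to the question about physical implications): only the symmetric part of $M$ contributes to the kinetic energy $\frac{1}{2}\dot{q}^\top M \dot{q}$, so an antisymmetric piece in a derived mass matrix is not observable through the energy and can always be symmetrized away. The values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))          # an arbitrary (possibly asymmetric) matrix
M_sym = 0.5 * (M + M.T)              # its symmetric part

qdot = rng.normal(size=3)
K1 = 0.5 * qdot @ M @ qdot
K2 = 0.5 * qdot @ M_sym @ qdot
print(np.isclose(K1, K2))            # True: the antisymmetric part drops out
```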
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Why does ice form on bridges even if the temperature is above freezing? So with this "arctic blast" continuing, I've noticed that for my area, the temperature drops below freezing just long enough to cause freezing rain, but then the sun comes out and the temperature rises immediately. However, on bridges, ice continues to form. How can ice form even if the temperature is above freezing?
Simply, it is radiative cooling: losing heat from a surface on Earth to outer space via thermal radiation. You can have an ambient temperature of 3 °C while the bridge temperature is -2 °C due to this mechanism of cooling. Radiative cooling happens for materials that are good thermal emitters over the range of wavelengths where the atmosphere is almost transparent to radiation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 7 }
light beams of the sun We receive sunlight on the Earth's surface. What type of light beams are these: random, parallel, converging, or diverging? I think they should be diverging, as the Sun radiates these beams outward. But in one book the answer is given as random, and in another as parallel.
Jan L's answer is correct. Consider as well: when there's a solar eclipse, there is a penumbra because, as he said, the sun is not a point source. However, when dealing with a focussing system, the angle of divergence is close enough to zero that setting the lens to "infinity" is quite sufficient to focus an image of the sun.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Do solar neutrinos actually oscillate between electron, mu and tau? I was reading up on the history of the solar neutrino problem, and as far as I can understand it, neutrinos supposedly oscillate from one form to another, thus explaining why there were only one-third the number of neutrinos detected than were expected, when they began neutrino observations in the 1960's. The Wikipedia article on the topic ends with this statement: The convincing evidence for solar neutrino oscillation came in 2001 from the Sudbury Neutrino Observatory (SNO) in Canada. It detected all types of neutrinos coming from the Sun, and was able to distinguish between electron-neutrinos and the other two flavors (but could not distinguish the muon and tau flavours), by uniquely using heavy water as the detection medium. After extensive statistical analysis, it was found that about 35% of the arriving solar neutrinos are electron-neutrinos, with the others being muon- or tau-neutrinos. The total number of detected neutrinos agrees quite well with the earlier predictions from nuclear physics, based on the fusion reactions inside the Sun. But as far as I can see, none of this or anything else I've read seems to give any proof that solar neutrinos change type while en route to the Earth. It seems that the sun just emits about 1/3 of each of the three types. Or is it that at the temperature of the solar core only electron neutrinos are emitted, and then they oscillate (randomly?) from that type to the others and back again? I'd welcome a little clarity about this.
http://en.wikipedia.org/wiki/CERN_Neutrinos_to_Gran_Sasso Neutrino flavor oscillation is facilitated by passage through matter. Neutrinos travel very close to light speed, but not faster. Solar core fusion emits two electron neutrinos per helium nucleus produced. They scramble flavors during passage to the surface, through our atmosphere (equivalent in mass per area to roughly a yard of lead at sea level), and through rock. The absence of observed neutrinoless double beta decay is consistent with neutrinos and anti-neutrinos being distinct Dirac fermions rather than identical Majorana fermions. Here's a poser: is an electron neutrino an electron without its charge?
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How do photons 'connect' during wireless connection? So a wireless router broadcasts a signal and then your device searches. What actually happens when the photons 'meet'? It's kind of like saying, 'ah, you're one of us, so we will follow you, show us the way.' It's so bizarre - how do photons connect during a wireless connection?
In principle the wireless router is sending out radio waves (photons) in all directions. Some of those are picked up by the antenna in your phone or laptop, which turns it into an electrical signal. It's similar to broadcasting in public radio: the broadcasting station is sending out radio waves in all directions. Your kitchen radio doesn't transmit anything, but instead it picks up all waves that hit its antenna, and then tries to filter out the frequency/channel you want to listen to. The broadcasting station doesn't know where your radio is - it just transmits waves in all directions. And your radio also doesn't know where the broadcasting station is - it receives all sorts of waves from all directions. Then it's just a matter of electrically/electronically filtering out and amplifying the desired wave. Same goes for routers: they receive all sorts of waves from nearby routers (and other sources), but filter out only the signal that has the correct SSID, for example. It's important to see that photons don't interact - two waves can perfectly go through each other. The signal is induced in the antenna because the electric and magnetic components of the radio wave make the electrons in the antenna move. And moving electrons are a current - something electronic components can process and the device can "understand".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the average life in radioactivity and what is its significance? By definition, the average life of a radioactive sample is the amount of time required for it to decay to 36.8% of its original amount. But what is the significance of 36.8%, and why has that value been chosen?
I endorse Kyle's answer. Just two short comments. The number 36.8% is literally $$ 36.8 \approx 100 \exp(-1) =\frac{100}{2.71828\dots} $$ Moreover, it is right to call this quantity "average lifetime" or just "lifetime" because it is literally the average value of the time for which a nucleus (or something else) from the ensemble lives. If the initial number is $N_0$, they decrease to $$ N(t) = N_0 \cdot \exp (-t/t_0)$$ at time $t$ where $t_0$ is what we want to call the (average) lifetime. How many nuclei $dN\lt 0$ die (decay) in the short interval $(t,t+dt)$? Well, it's given by the derivative $$ dN = dt\cdot \frac{dN(t)}{dt} = N_0\cdot dt\cdot \exp(-t/t_0)\cdot \left(-\frac{1}{t_0}\right)$$ To calculate the average "age at death" (a statistical expectation value), we must integrate $$ \langle t \rangle = \int_0^\infty dt\cdot t\cdot P({\rm lifetime}=t) =\\ = -\int_0^\infty t\cdot \frac{1}{N_0} \cdot dN/dt \cdot dt = \int_0^\infty dt\cdot t\cdot \exp(-t/t_0)\frac{1}{t_0} = t_0$$ where $P$ refers to the probability density that the lifetime was $t$ which can be calculated by integration by parts. So the average "age at death" for a large ensemble of nuclei will really be equal to the $t_0$ that appears in the exponent of $\exp(-t/t_0)$.
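A quick numerical sanity check of the claim that the mean decay time equals the $t_0$ in the exponent (the sample size and $t_0$ value are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
t0 = 5.0                         # mean lifetime (arbitrary units)
lifetimes = rng.exponential(scale=t0, size=1_000_000)

print(lifetimes.mean())          # ~5.0, the average "age at death" equals t0
print(np.mean(lifetimes > t0))   # ~0.368, the fraction still surviving at t = t0
```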
{ "language": "en", "url": "https://physics.stackexchange.com/questions/95998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Wavefunction of a Baryon How do I write the total wavefunction of a baryon, including the space part, spin part, isospin part and color part, such that the net wavefunction is antisymmetric? And what is the difference between the wavefunctions of two baryons with the same quark content, say the proton $p$ and the $\Delta^+$ baryon?
To write the wavefunction of a baryon, you write it as a direct product of the different parts of the wavefunction (just as you would for any other particle): \begin{equation} \left| \psi \right\rangle = \left| \mbox{spatial} \right\rangle \otimes \left| \mbox{spin} \right\rangle \otimes \left| \mbox{Isospin} \right\rangle \otimes \left| \mbox{color} \right\rangle \end{equation} Furthermore, the difference between a proton and $ \Delta ^+ $ is that they have different spins and total isospin. The proton is a spin $ 1/2 $ and total isospin $ 1/2 $ object while the $ \Delta ^+ $ is a spin $ 3/2 $ and total isospin $ 3/2 $ object.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the constant $K_1$ in these orbital equations? I want to compute the derivatives of argument of periapsis and longitude of the ascending node of the orbit of a GPS satellite from the following formula. $$\frac{d\Omega}{dt} = -K \cos{i} \\ \frac{d\omega}{dt} = K ( 2 - 2.5 \sin^2{i}) \\ K = \frac{nK_1}{a^2(1-e^2)^2}$$ But what is $K_1$?
I found the solution myself. $K_1$ is a constant describing the flattening of the Earth: $$K_1 = 66063.1704 \times 10^6 \ \mathrm{m^2}$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why are temperatures generally hotter in the Middle East than in Europe? How come the average temperature in the Middle East (Israel, Saudi Arabia, Sudan or lower) is always so significantly higher than in Europe (say Germany, England, etc.)? I know that the sun's rays travel a greater distance to reach Europe than the Middle East, but is that the only influencing factor? And the distance isn't that much greater, so how come the sun is weakened so significantly in Europe in comparison to the Middle East?
As a visual demonstration of Luboš Motl's answer, this: Image obtained using Climate Reanalyzer (http://cci-reanalyzer.org), Climate Change Institute, University of Maine, USA. is the average surface temperature on earth for 2013. This: is the solar flux by Luboš' formula. And here: I've tried to replicate their crazy color scheme. The point is, gross temperature phenomenon on the earth is dominated by the radiant solar flux. Regions near the poles are colder because the incoming light flux is spread over a larger area. Imagine trying to hold a surfboard perpendicular against the flow of a river, its hard, but if you tip it so that it comes at an angle to the incoming water, it becomes easier.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Isotropic systems and homogeneity In isotropic systems, the atomic arrangement is homogeneous in all directions. In the case of glass, which has the atomic structure of a liquid and, therefore, a random atomic structure that is definitely not homogeneous, is it that the atomic arrangements in each direction are equally disordered and therefore homogeneous in that sense?
First, an isotropic system need not be homogeneous. We say the electric field from a point charge is isotropic although it is inhomogeneous. However, an isotropic, translation-invariant system must be homogeneous. While you are correct that a glass is truly inhomogeneous, as it is described by discrete atomic positions, people say that a glass is homogeneous when it is understood that they are only concerned with properties of the glass which are defined on a length scale much larger than the typical atomic separation. On these length scales, the inhomogeneities wash out (since the atomic positions are assumed to have a very short correlation length). An example of such a quantity would be an average density. Suppose you chipped off a centimeter cubed piece of glass and you wanted to know how much it weighed. If the atomic positions are truly uncorrelated, then you would do just fine to take the mass of a full meter cubed of glass and use that to find the density. In fact, this approach would give a pretty good estimate of density down to length scales of tens of nanometers.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
CPT and event horizon Is the example of a neutrino entering the event horizon of a black hole, quoted from this article, a valid possible example of CPT violation due to the presence of the event horizon? Please note that there is a very similar previous question here. I don't think this is a duplicate, since I have specified a concrete counterexample of possible CPT violation in a black hole, as presented in the article I quoted.
OK, dear user: you are probably confused by the long, correct but superficial and naive discussion of the blogger you quote above, who disagrees with Hawking. Of course, in most cases I agree with the blogger, or, better said, I would agree with him if I happened to have the same biases and beliefs about nature... fortunately this is not the case. First, the blogger is right: time-reversal symmetry is a symmetry of the dynamics and it is NOT the entropic idea of time. However, time reversal of the dynamics in the presence of strong space-time alterations is not independent of the entropic definition of time. Things may appear different when analyzed from a classical point of view, but entropy can be generalized in a quantum-mechanical and quantum-informational sense that escapes the blogger up there... This being said, a correct formulation of quantum gravity appears to be necessary even for aspects considered "effective" before, and maybe Hawking is right after all... the problem lies again in how exactly one quantizes a theory, be it 0, 1 or 2 dimensional... I have learned to be cautious about calling Hawking an idiot... this happened with... time...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Temperature: Why a Fundamental Quantity? Temperature is just an indication of a combined property of the masses of the molecules and their random motion. In principle, we can explain "no effective energy transfer between two conducting solid bodies in contact" via a condition in terms of the masses of the molecules and their speeds such that due to the collisions of molecules of two bodies, net energy transfer between two bodies is zero. But it would be a complex calculative work to derive this condition analytically so we use the temperature scale just as a phenomenological parameter to easily determine the condition of "no net energy transfer between conducting solids" for practical purposes. But it does not denote any fundamentally new property of the body separate from the already known mechanical properties of the same. Then why do we call it a fundamental quantity, e.g. in the SI list of fundamental quantities?
A quantity is called a fundamental quantity if it can't be explained in terms of other fundamental quantities: * *we know temperature corresponds to the vibrations and collisions of the constituent atoms and molecules, and *those vibrations can be explained by other known fundamental quantities. Hence temperature is not a fundamental quantity. But wait... the kelvin is a fundamental unit! In the past, temperature was used for the measurement of "hotness". For that we (humans) devised different temperature scales and laws like the Zeroth Law of Thermodynamics. Then, as we encountered more physical phenomena such as thermodynamic equilibrium, we found that this quantity, the temperature, is the same for two systems in equilibrium. At that time we usually dealt in macroscopic domains, but once we started to research microscopic domains, we could explain temperature as the vibrations and collisions of molecules and atoms. It's a lot easier to measure temperature than to measure the motion of the component particles. Hence, we accept it as a fundamental quantity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 5 }
Can stress be observed directly? Strain can be directly observed using e.g. a ruler. Can (internal) stress be directly observed?
Strain is difficult to observe with the naked eye. Many materials either plastically deform or break instead of showing visible strain. With the help of polarization filters, it is no problem to visualize strain in transparent materials. This is an image of a plastic ruler under strain viewed through polarization filters: a high number of bands corresponds to a large strain; areas that show only a single color are less strained. Further details are nicely explained in the Wikipedia article on photoelasticity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate work done on an inclined plane How can you calculate the work done by a force (of unknown magnitude) exerted on a 10 kg block on an inclined plane? The force points upward, parallel to the incline (which is inclined 30 degrees with respect to the horizontal). a. frictionless plane b. coefficient of friction = 0.12 So the forces acting on the block are the normal force, its weight, the friction force (for part (b)), and the force exerted up the incline. All are given or can be solved almost instantly except for the upward force, denoted by F. How do I solve this problem? I am not sure what value of acceleration to use along the axis of the incline for F=ma. Sorry I could not provide a diagram for this.
OK, I'll help you this far. Here's the diagram you should be able to make, and figure out everything else from that. I purposely put in ?? so you can't just hand it in and pretend you did it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/96620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does a picture of a person seem to be looking in the same direction irrespective of the angle of observation? A picture of a person hanging on a wall who seems to be looking directly towards you always seems to be looking at you, even if you change your angle of observation to the extremes. The same can be observed with a television: if a television is watched by many people from different angles, all of them observe that a person on the screen is looking at them. Why does that happen? Update: I first thought that it may be because some data is lost in the conversion from 3D to 2D, but the same is observed in a theater while watching a 3D movie.
Seeing an object is seeing the light reflected from its surface. When we change the lighting on the object, we observe a different image. When we observe an object from different angles we see different images, because the reflected light goes in different directions and we see a particular ray only from one direction. For example, when a ray is reflected from the side of an object, we can see the ray only when we are in its path; if we are not in the path, we cannot see that side of the object. Coming to the case of a television: the image displayed on a television is a 2-D image. Even if the angle of observation changes, what we see is the same 2-D image. In the case of 3-D pictures, as seen in 3-D theaters, we observe a similar effect as with 2-D televisions: a 3-D image is formed by superimposing multiple 2-D images of different polarization.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/97694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$1.7\cdot 10^{-24}$ mole apples a day As the title suggests, I was wondering why the International Bureau of Weights and Measures decided the mole should be a standard (SI) unit. After some research I found I was not alone with this problem. The core of my question is: * *How is the unit “mole” necessary as a standard unit? *If the mole is the standard unit, why wouldn't I have to give all numbers in moles? Of course a mole is a convenient unit, but I can't see how it is as fundamental as, e.g., a meter, as it is clearly based on the concept of counting atoms. EDIT: Due to the first answer I got, I realized my question is probably misleading: by fundamental I do not mean dictated by nature but considered a base unit. The need for units is obvious in the case of meters and seconds: no matter how you want to measure time or length, you will necessarily have to compare it to some standard (in this case the meter and the second, independently of how they are defined). In the case of "number of particles" this is not needed; instead one could say they are compared to "1", which is normally not considered a unit. Is this difference only a personal opinion?
The kilogram is the only kg-m-sec physical standard: a 35 mm film can-sized cylinder of Pt-Ir alloy whose mass measurably drifts (possibly trace atmospheric hydrogen chemistry) at different rates for the primary standard and its secondary standards. A silicon-28 single crystal solid sphere machined precise to a couple of atoms thickness is a superior standard kilogram - by exactly defining the mole. Unless you can provide a purely theoretical relationship for the kilogram that can be reduced to practice at will, the mole is fundamentally important to the engineering of civilization.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/97760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Which electron gets which energy level? Electrons sit in different energy levels of an atom; the farther out, the higher the energy. Every electron has the same structure: they can gain energy from the environment, and electrons that gain energy can jump to a higher energy level and will finally fall back again. I'm wondering why some electrons have the "right" to "store" that high energy since every electron is the same. Why can those electrons have more energy and sit in a higher energy level than other electrons?
It is not the electron that "gains" the energy. The increase in energy belongs to the electron-nucleus system. The "incoming" energy is stored in the system by increasing the distance from the nucleus to the electron. The atom is always "looking for" the configuration with the lowest energy for the system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/97908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Do black holes exert an infinite amount of energy at the event horizon? An interesting thought crossed my mind when reading about Hawking's apparent horizon theory. If we assume that event horizons do actually exist, it would seem that black holes violate basic laws of physics. My (limited) understanding is that in traditional black holes, the event horizon is the place at which nothing, not even light, can escape falling into the singularity. I also believe that I have learned that for something other than light to travel at the speed of light, an infinite amount of energy would be required. Therefore, since a traditional event horizon is defined as the so-called "point of no return," wouldn't an infinite amount of energy be exerted at the event horizon? If light itself cannot escape, that would seem to mean that the gravitational force would be at least equal to the speed of light, and wouldn't that require infinite energy?
Look into the sky through all $4\pi$ steradians. Every direction is equidistant from the same Big Bang. There is no visible direction, no path to escape this universe. There is no energy expended in enforcing this. It is a consequence of geometry. Exactly how black holes are structured, "surface" and center, remains hotly debated. The only incontestable observation is that they externally gravitate in a way indistinguishable from general relativity. An interior photon attempting to exit a black hole's gravitational potential would redshift without limit. It must pay back its easily calculated binding energy. One fails to see an infinity in that other than Zeno's paradox. One might be naughty and declare that all the gravitation of a black hole resides on its "surface," its interior then being wholly unremarkable. There is no added gravitation internal to a thin spherical shell. That removes the naked singularity problem, and the problem of assigning large external angular momenta (Kerr black holes) given all interior mass resides at a point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Admixtures of longitudinal and timelike photons! In the quantization of the electromagnetic field the physical states $|\psi\rangle$ are found to obey the following relation: $[a^{(0)}(k)-a^{(3)}(k)]|\psi\rangle=0$ This is explained by saying that the physical states are admixtures of longitudinal and timelike photons. What do longitudinal and timelike photons physically mean? Why are the polarizations $\epsilon^{(0)}$ and $\epsilon^{(3)}$, i.e. the timelike and longitudinal photons, called unphysical?
When you change the free field $A_\mu$ by means of a gauge transformation, you can easily see that it affects the longitudinal and timelike degrees of freedom. Since observables are gauge invariant, those degrees of freedom cannot be physical.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What are the correct initial conditions for the moon (in a simulation)? So I've modeled the interactions between the sun and all the planets (and the interactions between the planets) using Verlet integration. I've used data from Wikipedia for masses, distance from the sun etc. I initialized the initial velocities of the planets via the critical velocity equation. This produces nice stable velocities. I'm unsure of how to calculate the initial velocity of the moon so that it stays in orbit around the earth.
You may have noticed that if you start with the sun at rest, and put Jupiter into the system with an initial velocity to (say) the left, then over time the whole system moves left. (If you haven't noticed this, it is worth setting the system up that way and letting it run long enough that you do notice it.) The trick is to recall that both bodies orbit their combined barycenter, and to put them in with linear velocities found in the CoM frame. For the Earth-Moon system you have to do the same thing, and then set the CoM system's velocity relative to the Sun as if the pair were a planet.
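A minimal Python sketch of this recipe (not from the original post; the masses, distances, circular-orbit approximation and coordinate choices below are illustrative assumptions):

```python
import math

G = 6.674e-11          # gravitational constant, SI
M_sun   = 1.989e30
M_earth = 5.972e24
M_moon  = 7.342e22
r_em    = 3.844e8      # Earth-Moon distance, m
r_se    = 1.496e11     # Sun to Earth-Moon barycenter distance, m

# Circular-orbit speed of the Earth-Moon barycenter about the Sun
v_cm = math.sqrt(G * M_sun / r_se)

# Distances of Earth and Moon from their common barycenter
M_pair  = M_earth + M_moon
r_earth = r_em * M_moon / M_pair
r_moon  = r_em * M_earth / M_pair

# Relative orbital speed of the pair, split in inverse proportion to the masses
v_rel   = math.sqrt(G * M_pair / r_em)
v_earth = v_rel * M_moon / M_pair
v_moon  = v_rel * M_earth / M_pair

# Put the Moon on the far side of the Earth from the Sun (+x), barycenter moving along +y
pos_earth = (r_se - r_earth, 0.0)
pos_moon  = (r_se + r_moon, 0.0)
vel_earth = (0.0, v_cm - v_earth)
vel_moon  = (0.0, v_cm + v_moon)
print(vel_earth, vel_moon)   # roughly (0, 2.98e4) and (0, 3.08e4) m/s
```

The same bookkeeping can be repeated for the Sun itself if you want the total momentum of the whole system to vanish exactly.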
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does there exist a single plate capacitor (conductor)? Does there exist a single plate capacitor (conductor)? If yes, how will you define the capacitance and potential (difference) of such a conductor?
A simple example is that of a sphere. One way to find its capacitance is to take the limit of a nested sphere capacitor with radii $a,b$: $$C = \lim_{b\to\infty}\frac{4\pi\epsilon_0}{\frac{1}{a}-\frac{1}{b}} = 4\pi a\epsilon_0\text{.}$$ A van de Graaff generator is commonly discussed in physics classes, and involves this type of setup. For a parallel-plate capacitor, however, doing the same gives zero capacitance.
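As a quick numeric check of the limit above (a sketch; the radii chosen are arbitrary assumptions):

```python
import math

eps0 = 8.854e-12  # vacuum permittivity, F/m

def isolated_sphere_capacitance(a):
    """Capacitance of an isolated conducting sphere of radius a (the b -> infinity limit)."""
    return 4 * math.pi * eps0 * a

# A 1 m radius sphere holds only ~111 pF; a 0.15 m Van de Graaff dome ~17 pF
for a in (1.0, 0.15):
    print(a, isolated_sphere_capacitance(a) * 1e12, "pF")
```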
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Twin Paradox, calculating spacetime intervals from both perspectives I've very recently started to try to understand special relativity. I want to get a decent understanding of the twin paradox. I'll post what I've done so far and highlight what's gone wrong for me. The situation is that Alice and Bob both begin at point $x_1$ in Alice's coordinate system $(x,t)$ (we have orientated the axes so that y and z will not matter). Bob then instantaneously moves off with velocity $v$ in the positive $x$ direction. Bob's coordinate system is now $(x',t')=(\gamma (x-vt), \gamma (t-v\frac{x}{c^2}))$. In Alice's coordinates, Bob reaches point $x_2$ after having moved distance $d$, then instantly turns around and travels back to Alice at $-v$. I want to show that Alice's space time interval $\Delta s_A^2$ is greater than Bob's $\Delta s_B^2$, as each path's space time interval is proportional to the proper time passed along that path. According to Alice: $\Delta s_A^2=-c^2\Delta t^2 + \Delta x_A^2$ where $\Delta t=\dfrac{2d}{v}$ and $\Delta x_A=0$ as she didn't move, but the time elapsed is the time it took Bob to travel the distance and then come back. Thus $\Delta s_A^2=-\dfrac{4c^2d^2}{v^2}$ Still according to Alice, Bob's $\Delta s_B^2=-c^2\Delta t^2+\Delta x_B^2$, where the change in time is the same, but now Bob has moved distance $d$ twice, thus $\Delta x_B=2d$ ($\Delta x$ refers to total distance traveled rather than displacement which is 0 in this case). Now we have $\Delta s_B^2=-\dfrac{4c^2d^2}{v^2}+4d^2$. The |size| of Bob's spacetime interval is now definitely smaller than Alice's, and this would be all OK, except that when I do the calculations in Bob's frame, they don't agree. This is contradictory to the fact that $\Delta s^2$ is conserved under Lorentz transformations. According to Bob: $\Delta {s'}_A^2=-c^2\Delta t'^2 + \Delta {x'}_A^2$ where $\Delta t'=\dfrac{2d'}{v}$. I'm not sure if I'm right in saying that $v$, the relative velocity of the two frames, is the only velocity upon which both Alice and Bob will agree, other than the speed of light. Anyway, $d=\gamma d'$ as lengths contract by $\gamma$ i.e. $d'$ is smaller than $d$ by a factor of $\gamma$. Also, $\Delta {x'}_A=2d'$, so we have $\Delta {s'}_A^2=-\dfrac{4c^2d'^2}{v^2}+4d'^2=-\dfrac{4c^2d^2}{\gamma ^2 v^2}+\dfrac{4d^2}{\gamma ^2}=4d^2\dfrac{v^2-c^2}{\gamma^2 v^2}\neq \Delta s_A^2$ Although I haven't included them here, my calculations for $\Delta {s'}_B^2$ and $\Delta s_B^2$ agree. Sorry for the ultra-long post, but any help would be well appreciated!
There are just 3 events that need be considered here: (a) the initial event that Alice and Bob are co-located, (b) the event that Bob turns around and (c) the final event that Alice and Bob are again co-located. If, according to Alice, Bob's speed on both legs is $v$, and the distance to the turnaround point is $r$, then: $$\Delta s^2_{ac} = (c2r/v)^2$$ $$\Delta s^2_{ab} = \Delta s^2_{bc} = (cr/v)^2 - r^2 = r^2[(c/v)^2 - 1)]$$ The proper time for Alice is then: $$\tau_A = 2r/v $$ and the proper time for Bob is: $$\tau_B = 2r/v\sqrt{1 - \frac{v^2}{c^2}} = \frac{\tau_A}{\gamma} $$ So, clearly, Bob's proper time along his path from (a) to (c) is less than Alice's proper time on her path.
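A small numeric check of these expressions (the speed $v = 0.6$ and distance $r$ are assumed values, in units where $c = 1$):

```python
import math

v = 0.6          # Bob's speed, assumed for illustration (c = 1)
r = 1.0          # distance to the turnaround point in Alice's frame

# Proper time along Alice's worldline: one straight segment (a) -> (c)
tau_A = 2 * r / v

# Proper time along Bob's worldline: two segments (a)->(b) and (b)->(c)
tau_leg = math.sqrt((r / v) ** 2 - r ** 2)   # sqrt(dt^2 - dx^2) per leg
tau_B = 2 * tau_leg

gamma = 1 / math.sqrt(1 - v ** 2)
print(tau_A, tau_B, tau_A / gamma)   # tau_B equals tau_A / gamma, i.e. Bob ages less
```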
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Would a considerably big asteroid be disintegrated by the Earth's Roche limit? If there were a big asteroid with a diameter of, say, 50 km+ on a collision course with the Earth (not orbiting), would it disintegrate into smaller chunks due to the Earth's Roche limit, or would the time it spends inside the Roche radius not be enough for the tidal forces to have an effect? My simple calculation, assuming an asteroid with the same density as the Moon, gives a Roche radius of ~9500 km for the Earth, so an asteroid with a velocity of 20 km/s will have about 8 minutes from the moment it enters the Roche radius until it collides with the surface of the Earth. My question here: is this time enough to disintegrate the asteroid?
When an object comes within the Roche limit, it breaks up because of tidal stresses - the part closest to the earth feels a stronger gravitational attraction than the furthest part. Hence, the closest part will fall a little faster than the trailing parts. As a result, "disintegration" does not mean that the body will fly apart like a bomb. Instead, it breaks up and the pieces slowly move apart. This will definitely not happen within 8 minutes, so as far as an observer on earth is concerned, the impact is the same as from a solid body. Even if the asteroid were disintegrated into dust, the effect on earth would still be the same, as all the dust particles would hit at essentially the same instant.
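For a rough feel of the numbers behind the question, here is a sketch using the rigid-body Roche-limit formula with assumed densities (none of this is from the original answer):

```python
R_earth   = 6371e3      # m
rho_earth = 5514.0      # kg/m^3, mean density of the Earth
rho_moon  = 3344.0      # kg/m^3, assumed asteroid density (same as the Moon)
v         = 20e3        # m/s, assumed approach speed

# Rigid-body Roche limit: d = R_primary * (2 * rho_primary / rho_satellite)^(1/3)
d_roche = R_earth * (2 * rho_earth / rho_moon) ** (1.0 / 3.0)
print("Roche radius: %.0f km" % (d_roche / 1e3))          # ~9500 km, as in the question

# Time spent inside the Roche radius before hitting the surface (radial plunge)
t_inside = (d_roche - R_earth) / v
print("Time from Roche radius to the surface: %.0f s" % t_inside)   # only a few minutes
```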
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What's the dimensionality of a solid angle? I haven't seen this explained clearly anywhere. Solid angles are usually described as a fraction of the surface area of a unit sphere, similar to how angles are the fraction of the circumference of a unit circle. However, I don't know how solid angles are actually quantified. Are solid angles just a single number that describes this fraction of the area? It's confusing to me since I've often seen integrals that integrate over a sphere using solid angles, which seems to imply that solid angles are multi-dimensional quantities (e.g. when integrating using spherical coordinates, the solid angle would have to consist of the azimuthal and polar angles covered by the differential solid angle). Following from this, how would you write down a solid angle that covers the entire surface of a unit sphere?
John Rennie's answer seems fine to me (+1). I'll only add the relevant pieces of the BIPM brochure (PDF, p. 118). BIPM rules.
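Since the question also asks how to write down the solid angle covering the entire sphere, here is a quick numeric check (my own sketch) that $\Omega = \int \sin\theta \, d\theta \, d\varphi$ over the full ranges gives $4\pi$ steradians:

```python
import numpy as np

# Omega = (integral of dphi from 0 to 2*pi) * (integral of sin(theta) dtheta from 0 to pi)
theta = np.linspace(0.0, np.pi, 2001)
omega = 2.0 * np.pi * np.trapz(np.sin(theta), theta)
print(omega, 4 * np.pi)    # both ~12.566 steradians
```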
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Is crystal momentum an operator? My teacher uses the notation $\langle \vec{r}|\vec{k} \rangle = e^{i\vec{k}\cdot \vec{r}}u_{\vec{k}}(\vec{r})$ for Bloch waves, and uses it consistently. However, does this not assume that there is an operator that has eigenstates $|\vec{k} \rangle$? If so, how would such an operator be defined?
It turns out the Bloch states are eigenstates of the translational operator, $T(\vec{R}_{j})$, namely, $T(\vec{R}_{j})\left\vert\vec{k}\right\rangle=e^{i\vec{R}_{j}\cdot\vec{k}}\left\vert \vec{k}\right\rangle$, where the $\vec{R}_{j}$'s are lattice vectors. The translation group element $T(\vec{R}_{j})$ has a unitary representation, say, $T(\vec{R}_{j})=e^{i\hat{\vec{K}}\cdot\vec{R}_{j}}$ with $\hat{\vec{K}}$ being hermitian. If we have $\hat{\vec{K}}\left\vert\vec{k}\right\rangle=\vec{k}\left\vert\vec{k}\right\rangle$, then this leads to $e^{i\hat{\vec{K}}\cdot\vec{R}_{j}}\left\vert\vec{k}\right\rangle=e^{i\vec{R}_{j}\cdot\vec{k}}\left\vert \vec{k}\right\rangle$ consistent with $T(\vec{R}_{j})$, namely, $T(\vec{R}_{j})\left\vert\vec{k}\right\rangle=e^{i\vec{R}_{j}\cdot\vec{k}}\left\vert \vec{k}\right\rangle$. Therefore, it seems that the crystal momentum operator is the generator $\hat{\vec{K}}$ of the translational group. Unfortunately, I don't know how to write $\hat{\vec{K}}$ in terms of more familiar expressions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Ferromagnets and magnets tend to align in the center. Why is that? When you bring a large iron plate and a magnet together, the magnet attracts the iron plate and tends to slide itself to the center. When I place it on the edge, it always aligns at the center. Why is that?
The magnet is attracted to the whole of the plate, so it will be at equilibrium when all the attractive forces from the plate to the magnet are balanced. In the case of a symmetric plate, the point where the magnetic forces between the magnet and the plate are in equilibrium would be the geometric center of the plate. An analogy would be - an object in space gravitationally attracted to the center of gravity of a system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/98986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Double slit experiment and representation of light waves Consider the following image from Wikipedia; based on it I have a doubt. I do not understand why the light waves are represented like waves in water. Shouldn't the waves be like sine waves? Why is the slit the starting point of a new wave? Secondly, why does this lead to destructive interference rather than constructive, given that the waves are emitted at the same time? And can anyone tell me how the double slit experiment is possible even when there is a single photon?
Shouldn't the waves be like sine waves? That depends on what you want to show. In this particular case, what is shown is the propagation of a particular ray of light. Why is the slit the starting point of a new wave? A slit acts as a source of spherical waves. This is the Huygens-Fresnel principle. This is done to produce a new single source of light. Why does this lead to destructive interference rather than constructive, given that the waves are emitted at the same time? It produces both constructive and destructive interference. The black spots are maximum destructive interference while the bright spots are maximum constructive interference. Both phenomena always come together. How is the double slit experiment possible even when there is a single photon? The photon is not only a particle but also a wave that can interfere with itself. That's what the experiment shows; that's its nature. This is the "weirdness" of quantum mechanics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does gravity decrease as we go down into the Earth? We all know that gravity decreases as the distance between the two increases. Hence $$ F = G \frac{Mm}{r^2}. $$ Hence the acceleration due to gravity $$ g =\frac{F}{m}= G \frac{M}{r^2} $$ increases as $r$ decreases. Then why does it decrease as we go deep into the earth?
That equation applies for point sources, which the Earth technically is not. We can, however, treat the Earth as a point source as long as its internal structure is irrelevant (i.e. as long as we are outside of it). Once we enter the surface of the Earth, we can no longer simplify it by pretending it's a point and we have to perform a full analysis of the system. It's important to note, as joshphysics pointed out, that because the density of material in the Earth isn't a constant, moving deeper underground will actually put us closer to a region that is denser (and therefore pulls on us more strongly) while putting a region that is less dense farther away, which would increase the force of gravity. For a first approximation, however, we can assume that the density of the Earth is constant. So, given this assumption, we can show that only the amount of mass that is still underneath us actually exerts a net pull on us in a variety of ways. For instance, it's a known result that at any point inside a spherical shell of mass, there is no net gravitational force due to symmetry. We therefore know that once we are inside the Earth, any mass that isn't as deep as we are has no net effect on us. One can also use Gauss's Law to demonstrate the same thing. In short, the only mass that exerts a net force on us is the mass that is below us, and the deeper we travel underground, the less mass is beneath us. Therefore, there is less gravitational pull as we travel deeper beneath the surface.
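A small sketch of the resulting profile under the constant-density assumption (the numerical values are standard Earth parameters, inserted here only for illustration): inside, $g(r) = GMr/R^3$ grows linearly; outside, it falls off as $1/r^2$.

```python
G, M, R = 6.674e-11, 5.972e24, 6.371e6   # SI values for the Earth, assumed uniform density

def g(r):
    """Gravitational acceleration at radius r for a uniform-density Earth."""
    if r < R:
        return G * M * r / R**3     # only the mass below radius r pulls inward
    return G * M / r**2             # outside, the Earth acts like a point mass

for r in (0.25 * R, 0.5 * R, R, 2 * R):
    print(f"r = {r/R:.2f} R_earth  ->  g = {g(r):.2f} m/s^2")
```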
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Definition of quantum anharmonicity I have been reading research papers in mathematical physics for some months now, and I've seen the the term "anharmonic oscillator" quite frequently. At first I assumed that given a Schrodinger equation $$\frac{d^2u}{dx^2}+(E-V(x))u=0$$ where $E$ is the energy, and $V(x)$ is the potential function. If $V(x) = x^2 +$ higher order polynomial terms, then this gives rise to the anharmonic oscillator since the higher order terms ensure that the potential will deviate from the "harmonic path". However, I've recently seen potentials of the form $$V(x) = \frac{1}{x^3}+\frac{1}{x^4}+\frac{1}{x^5}$$ described anharmonic oscillator as well. I just wish to know what is a good definition of anharmonic oscillators and anharmonicity?
One has to be careful with the given potential. To start with, it must be shown that $$h=-(d/dx)^2+V(x)$$ defines a unique self-adjoint operator $H$, i.e., is essentially self-adjoint. In the case $$V(x)=ax^2+bx^3+cx^4$$ with $c>0$ this is indeed so. In fact the resolvent of $H$ is compact (these matters are discussed in the books by Reed and Simon), so $H$ has a discrete spectrum. In the case $c=0$ and $b\neq 0$, $h$ is not bounded from below, which, apart from the self-adjointness issue, makes no physical sense. As to $$V(x)=\frac{1}{x^3}+\frac{1}{x^4}+\frac{1}{x^5},$$ this potential is far too singular at $x=0$ to lead to a correct Hamiltonian. Maybe you can indicate in which context you encountered it? In summary, a polynomial potential whose highest-order term is even with a positive coefficient, $V_{2n}=a_{2n}x^{2n}$ with $a_{2n}>0$, is acceptable. The analogue of the quartic potential plays an important role in quantum field theory.
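As an illustration of the discrete-spectrum statement for the quartic case, here is a rough finite-difference sketch (entirely my own, with an arbitrary grid and box size as assumptions) that diagonalizes $h=-(d/dx)^2+x^2+x^4$ numerically:

```python
import numpy as np

# Discretize h = -d^2/dx^2 + x^2 + x^4 on a finite grid with Dirichlet boundaries,
# a standard way to approximate its (discrete) spectrum numerically.
N, L = 1000, 10.0                 # grid points and half-width, assumed large enough
x  = np.linspace(-L, L, N)
dx = x[1] - x[0]

V = x**2 + x**4
main = 2.0 / dx**2 + V            # diagonal of the finite-difference Hamiltonian
off  = -1.0 / dx**2 * np.ones(N - 1)

H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E = np.linalg.eigvalsh(H)
print(E[:4])                      # a handful of well-separated, discrete eigenvalues
```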
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Definition of Ohm in SI basic units in words One way Wikipedia defines Ohm is (this is also taught in school): $$1\Omega =1{\dfrac {{\mbox{V}}}{{\mbox{A}}}}$$ They describe this definition in words, too: The ohm is defined as a resistance between two points of a conductor when a constant potential difference of 1.0 volt, applied to these points, produces in the conductor a current of 1.0 ampere, the conductor not being the seat of any electromotive force. The definition of Ohm in SI basic units is: $$1\Omega = 1{\dfrac {{\mbox{kg}}\cdot {\mbox{m}}^{2}}{{\mbox{s}}^{3}\cdot {\mbox{A}}^{2}}}$$ It's really hard for me to get that this definition is correct. It's clear that mathematical calculations confirm this definition. But how do you describe the SI definition in words, like that paragraph on Wikipedia? Edit: How would you describe it? Although it is not common to do it that way, I think describing it that way could be very interesting.
I would describe it as (for example): 120 joules per coulomb (120 volts) divided by 60 coulombs per second (60 amps) equals 2 (ohms) of resistance, "which means you have half as many amperes as volts". So maybe an ohm can be n of VpA (a number of volts [SI] per amp [SI], or in this case, a number of N kg per charge for every charge per second). But that's still essentially giving the formulae.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
The physics of breaking eggs I have to write a 4000 word research paper for my IB diploma in high school. It is called the extended essay. I was thinking about writing on the physics of breaking eggs. I came up with the idea that there might be some experiments I could do to find the best way to crack an egg. I'm having trouble finding sources. I think I need help with some directions I could take this topic, as I find it very interesting.
I think jinawee's list (among the comments) is a great starting point. There was one interesting series of experiments that I didn't find in that list (at least not in the first 5 pages of the list that I looked through), which was reported in this PRL paper. (If you don't have access to PRL, check out the arXiv version.) What might make this series of experiments especially useful as a starting point for an IB extended essay is that the experiment was very easy, with well-recorded results: they drilled small holes in the top and bottom of the eggs, blew their contents out, and dried them; then placed each egg inside a large, sealed plastic bag and catapulted it against the ground and then recorded the sizes of the fragments. You might then compare your results with theirs. There was also great media coverage of this series of experiments: see a great description and videos among the APS focus stories (and here are shorter descriptions in Nature News and in the Guardian).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Formation of meniscus If molecules at the surface of a liquid have higher energy and want to minimise the surface area, then why is a meniscus formed, which of course increases the surface area?
The reason is that the gas-liquid surface area is not the only surface area that is minimized. The total energy of the system (including only surface energies) is given by: $$E=\gamma_{lg} A_{lg}+\gamma_{sg} A_{sg} + \gamma_{sl} A_{sl} $$ Formation of a meniscus as opposed to a flat surface indeed increases $A_{lg}$, but, due to volume conservation, it lowers $A_{sg}$ or $A_{sl}$ depending on whether the meniscus is curved upward or downward respectively. If for example $\gamma_{sg}$ is very large then it can be energetically favourable to increase $A_{lg}$ in order to decrease $A_{sg}$ and lower the total energy. In this way you can in fact derive the contact angle that will result in the minimum energy state.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
While holding an object, no work done but costs energy (in response to a similar question) I read the answer to Why does holding something up cost energy while no work is being done? and wanting to know more, I asked my teacher about it without telling him what I read here. Instead of referring to muscle cells and biophysics, he answered my question in terms of entropy. He told me that while my arm muscles are stretched when I hold the object, they are more ordered. When my arm is at rest and muscles are not contracted, the muscles are less ordered (more entropy). So his conclusion was that the energy is required to keep the system (my arm muscles) from going to a state of higher entropy. However, the answer in terms of muscle cells doing work on each other (i.e the answer to the hyper-linked question) made more sense to me. Could someone please give me some intuitive sense to my teacher's answer or explain the link between the two answers if there are any...
Your teacher's explanation is incorrect. A simple counterexample can be constructed to illustrate this by considering what happens when the role of your arm is replaced by that of a rubber band. When a weight is suspended from the ceiling by a rubber band, the band stretches and its polymer chains become more ordered, in exact analogy to your teacher's claim for an arm holding a weight. However, the rubber band can suspend the weight indefinitely for as long as you leave it there, and it's obvious that no energy is expended during that time. The correct answer, as you alluded to, is in biophysics, and the fact that keeping muscle cells contracted requires a continual supply of energy; but this is a matter of biology, not physics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Expansion of ideal gas Consider an ideal gas in a chamber (A), separated from another chamber (B) by a diaphragm, in the following two situations: (1) Instantaneously burst the diaphragm (2) Plug in an isentropic nozzle so that the gas escapes gradually Are the two cases identical? I believe there should be some work done in the second case, because the opposing pressure in chamber B increases gradually, so the work done by the gas would increase as it reaches steady state. Is this reasoning correct, or should both cases have zero work done?
Assuming the walls of the container are perfect insulators the final steady states must be identical, as they are determined only by the gas's volume, internal energy and particle number, all of which are the same in both cases. (Internal energy is the same as no energy is transferred to the gas from outside.) I think the confusion arises because one part of the gas is doing work on another part of the gas during the expansion in the second case, but the net work done by the gas as a whole is still zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The question about Lorentz invariance of the helicity quantum number for massless particles I need to show that helicity is Lorentz invariant (under proper Lorentz transformations) for massless particles. I have heard the most frequently used argument, which is based on the idea that it is impossible to "outrun" a massless particle, so the sign of the helicity is Lorentz invariant. But what about the absolute value of the helicity (when we don't divide this operator by the spin operator norm)? I want to ask about the method of proof of Lorentz invariance of the helicity value $h$, which is determined for massless particles by $$ W_{\mu} = hp_{\mu}, \qquad (1) $$ or in the spinor language $$ W_{c \dot {c}}\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}} = hp_{c \dot {c}}\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}}, \qquad (2) $$ where $h = \frac{n - k}{2}$ and $\psi_{a_{1}...a_{n}\dot {b}_{1}...\dot {b}_{k}}$ has only one independent component. Do expressions $(1), (2)$ automatically give us a proof of the invariance of the helicity value? For example, the left side of $(2)$ transforms under the spinor representation of the Lorentz group as the product of $n + 1$ undotted spinors and $k + 1$ dotted spinors, so the right side must transform in the same way, so does this mean that $h$ is a Lorentz scalar? Analogous reasoning could be applied to $(1)$.
Rotational invariance follows easily from the Poincare algebra. The non-trivial part is the invariance under boosts. Using the Poincare algebra one finds: $[\frac{J\cdot P}{H},K_i]=i\bigg(\frac{\epsilon_{ijk}K_jP_k}{H}+J_i-\frac{P_i}{H^2}J\cdot P\bigg)\qquad(1)$ So it does not follow from the Poincare algebra that $\frac{J\cdot P}{H}$ is boost invariant. Instead only for those states for which $(\sigma P^{\mu}-W^{\mu})|\sigma\rangle=0$ do we have that helicity is boost invariant. One can show this by using that this condition requires $\sigma\vec{P}=H\vec{J}-\vec{P}\times\vec{K}\qquad (2)$ Plugging (2) into (1) and acting on such a state gives: $[\frac{J\cdot P}{H},K_i]|\sigma\rangle=0$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/99900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to get a function for the voltage across a capacitor connected to an AC voltage source? I am looking for a way of obtaining a solution for $V_{c}$, as a function of $t$ and depending on $\omega$, of the following differential equation related to an electrical circuit involving a low-pass filter: $ \frac{ d V_{c}(t)}{d{t}} + \frac{V_{c}(t)}{\tau} = \frac{V_{s}(t)}{\tau} $, where $V_{c}$ is the voltage across the capacitor, $V_{s}$ is the voltage given by the AC voltage source, and $\tau$ is the time constant, considering that $\tau = R C$, $V_{s} = V_{in} \sin{( \omega t )}$, and $I_{R} = I_{C} = \frac{V_{s}(t) - V_{c}(t)}{R} = C \frac{dV_{c}(t)}{dt}$. I approached the problem by first solving the homogeneous part ($ \frac{ d V_{c}(t)}{d{t}} + \frac{V_{c}(t)}{\tau} = 0$), for which I get the following solution: $V_{c}(t) = K e^{\frac{-t}{\tau}}$ (where $K$ is a constant). I now need to find the particular solution (to get the general solution: $S_{general} = S_{homogeneous} + S_{particular}$). I think the particular solution might be of the type: $V_{c}(t) = A \cos{(\omega t + \phi)}$. Edit: Once I substitute the particular solution into the equation, I come to something depending on $\omega$, $\phi$ and $\frac{A}{V_{in}}$, but I don't see how to continue using the trigonometric sum formulas.
If you just plug in your suggested solution, you get $$\frac d{dt} A\cos(\omega t + \phi)+\frac 1{\tau}A\cos(\omega t + \phi)=\frac{V_{in}}\tau\sin(\omega t)\\ -A\omega \sin(\omega t + \phi)+\frac 1{\tau}A\cos(\omega t + \phi)=\frac{V_{in}}\tau\sin(\omega t)$$ Now you should be able to use the function sum formulas to solve for $\phi$ and $\frac A{V_{in}} $
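If you want a sanity check of the end result, here is a numerical sketch (my own, with arbitrary assumed values) verifying that the standard low-pass steady-state solution, written as a sine with a phase lag (equivalent to the $A\cos(\omega t+\phi)$ ansatz up to a phase shift), really does satisfy the differential equation:

```python
import numpy as np

# Candidate steady state: Vc(t) = Vin/sqrt(1+(w*tau)^2) * sin(w*t - arctan(w*tau))
Vin, w, tau = 1.0, 3.0, 0.5          # arbitrary assumed values
t = np.linspace(0, 10, 20001)

amp = Vin / np.sqrt(1 + (w * tau)**2)
Vc  = amp * np.sin(w * t - np.arctan(w * tau))
dVc = np.gradient(Vc, t)             # numerical derivative of Vc

residual = dVc + Vc / tau - Vin * np.sin(w * t) / tau
print(np.max(np.abs(residual[1:-1])))   # ~1e-6: zero up to finite-difference error
```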
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The relationship between the energy and amplitude of a wave? Derivation? From multiple online sources I read that $$E \propto A^2$$ but when I mentioned this in class, my teacher told me I was wrong and that it was directly proportional to amplitude instead. As far as I know, every website I stumbled upon concerning this said that is the case. My teacher has a Ph.D and seems pretty experienced, so I don't see why he would make a mistake, are there cases where $E \propto A$? I also saw this derivation: $$\int_0^A {F(x)dx} = \int_0^A {kx dx} = \frac{1}{2} kA^2$$ located here, does anyone mind explaining it in a bit more detail? I have a basic understanding of what an integral is but I'm not sure what the poster in the link was saying. I know there is a pretty good explanation here, but it seems way too advanced for me (gave up once I saw partial derivatives, but I see that they're basically the same later on). The first one I linked seems like something I could understand.
The poster from that link is saying that the work done by the spring (that's Hooke's law there: $F=-kx$) is equal to the potential energy (PE) at maximum displacement, $A$; this PE comes from the kinetic energy (KE) and is equal to the integral of Hooke's law over the range 0 (minimum displacement) to $A$ (maximum displacement). Anyway, your professor is wrong. The total energy in a wave comes from the sum of the changes in potential energy, $$\Delta U=\frac12\left(\Delta m\right)\omega^2y^2,\tag{PE}$$ and in kinetic energy, $$\Delta K=\frac12\left(\Delta m\right)v^2\tag{KE}$$ where $\Delta m$ is the mass of a small element of the medium. If we assume that the density of the medium is uniform, then $\Delta m=\mu\Delta x$ where $\mu$ is the linear density. Thus the total energy is $$E=\Delta U+\Delta K=\frac12\omega^2y^2\,\mu\Delta x+\frac12v^2\,\mu \Delta x$$ As $y=A\sin\left(kx-\omega t\right)$ and $v=A\omega\cos(kx-\omega t)$, then the energy is proportional to the square of the amplitude: $$E\propto\omega^2 A^2$$
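A quick numeric check of the quadratic scaling (a sketch; $\mu$, $\omega$, $k$ and the snapshot time are arbitrary assumed values):

```python
import numpy as np

mu, omega, k = 1.0, 2.0, 3.0               # assumed linear density, angular frequency, wavenumber
x = np.linspace(0, 2 * np.pi / k, 10001)   # one wavelength
t = 0.0                                    # any snapshot time works

def wave_energy(A):
    y = A * np.sin(k * x - omega * t)
    v = A * omega * np.cos(k * x - omega * t)      # transverse velocity dy/dt
    dU = 0.5 * mu * omega**2 * y**2                # potential energy per unit length
    dK = 0.5 * mu * v**2                           # kinetic energy per unit length
    return np.trapz(dU + dK, x)

print(wave_energy(2.0) / wave_energy(1.0))   # ~4: doubling A quadruples the energy
```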
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
The physics of sound boards As a kid I was bemused at why soundboards worked. A small sound could be demonstrably amplified simply by attaching the source to a surface that is rigid and not too thick. How could the volume increase so much given that there was no extra energy added? As an adult I kind-of-think I know, but there are still many nagging questions. I assume it has to do with the waves propagating from a vibrating object actually being a compression on one side of the object just as they are a decompression on the other side, and something about that lack of coherence limits the volume. Exactly why remains a mystery to me. Is separating the pocket of compression and decompression so that the boundary along which they meet is quite small part of the issue? My question is what are the physics that make a soundboard work? Interesting specifics that would be nice-to-knows would be why does a hollow one (like a violin) work better than a solid one (imagine a filled-in violin)? How important are the harmonics of the solid? But the real question is what are the physics that make a soundboard work? P.S. I am a mathematician, so feel free to wax very mathematical if it is necessary to give a good explanation.
The soundboard resonates with the same frequencies as the source. It takes its energy from the vibrating source. As the soundboard distributes this energy over a larger volume of air, the sound is louder, but the energy is depleted more quickly, limiting the time you hear the sound. Try this with a tuning fork. Hold it by your ear and time the duration of the sound. Repeat the measurement with the tuning fork held against a glass plate; a window will suffice. Even without a stopwatch you will find the duration of the amplified sound shorter. Furthermore, the shape and material of the (hollow) soundboard determine the relative amplification of the produced frequencies. That's why certain violins sound better than others (and are worth more).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Hydrogen atom: potential well and orbit radii I happened to open up an old solid-state electronics book by Sah, and in it he says: "it is evident that the electron orbit radius is half the well radius at the energy level $E_n$" The orbit radius is $r_n=\frac{4\pi\epsilon_0 \hbar^2 n^2}{mq^2}$ and the potential well $V(r_n)=\frac{-q^4m}{(4\pi\epsilon_0)^2\hbar^2n^2}$ Of course the orbit has to be confined in the well, but it's not obvious to me why it should be exactly half the well radius? This isn't something I recall seeing before either in any other text. Thanks
Of course the orbit has to be confined in the well, but it's not obvious to me why it should be exactly half the well radius? This isn't something I recall seeing before either in any other text. Please keep in mind that when people are speaking of orbits in the microcosm of particles and nuclei, they are speaking about average numbers over the orbitals . The quantum mechanical solution does not give orbits. We can only calculate wave functions whose magnitude squared gives us orbitals, probability distributions, not classical orbits. Cross-section of computed hydrogen atom orbital (ψ(r, θ, φ)2) for the 6s (n = 6, ℓ = 0, m = 0) orbital. Note that s orbitals, though spherically symmetrical, have radially placed wave-nodes for n > 1. However, only s orbitals invariably have a center anti-node; the other types never do. So of course the electron is not confined into a specific radius except in the Bohr model. It is averages that hold in the quantum mechanical calculations, which describe reality.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Radiation and absorption Can thermal radiation from a cooler object (B, which emits longer wavelength radiation) ever ADD to the overall thermal energy level of a warmer object (A, which emits shorter wavelength radiation)? Subsidiary question: What exactly happens - at the molecular level - to the longer wavelength radiation from B as it arrives at A?
The thermal radiation from B does indeed heat object A. The trouble is that A loses energy by thermal radiation faster than the thermal radiation from B can heat it, so the end result is that A cools down. You can show this very easily. The Stefan-Boltzmann law tells us that the energy flux per unit area is proportional to $T^4$ so the rate of heat loss from A is: $$ j_A = \sigma T_A^4 $$ If we take the extreme case where B is right next to A so there is no attenuation of the thermal radiation from it, the rate of heat loss by B and therefore the heat gain by A is: $$ j_B = \sigma T_B^4 $$ So the net rate of heat loss by A is: $$ j_{net} = \sigma \left(T_A^4 - T_B^4\right) $$ and since $T_A > T_B$ the net rate of heat loss is positive, i.e. A is cooling down, so B cannot raise the temperature of A. You ask what happens at the molecular level. The answer is that heat absorption (and radiation) is a messy business and many different interactions can occur. The dominant one is usually that EM radiation makes electrons in the solid oscillate, and the electrons transfer this energy to the lattice by colliding with it. So the EM energy ends up as lattice vibrations i.e. heat.
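A short numeric illustration of the net-flux formula (the temperatures are assumed values, and ideal black surfaces are assumed):

```python
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_flux(T_A, T_B, emissivity=1.0):
    """Net radiative loss per unit area of A facing B (ideal black surfaces assumed)."""
    return emissivity * sigma * (T_A**4 - T_B**4)

# A at 400 K facing B at 300 K: A still cools, just more slowly than facing empty space
print(net_flux(400.0, 300.0))   # ~992 W/m^2, positive -> A loses heat on balance
print(net_flux(400.0, 0.0))     # ~1452 W/m^2, faster loss with no back-radiation
```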
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does an electron in wave form have mass? I heard from my lecturer that the electron has a dual nature. For instance, in Young's double slit experiment the electron behaves as a particle at the two ends but acts as a wave in between the ends. It undergoes diffraction and bends. But we don't see a rise in energy. It should produce 500 keV of energy (please correct me if my approximation is wrong) according to the mass-energy equivalence relation. But a wave is a form of pure energy and doesn't show properties of having mass, as seen in the diffraction experiment. So where has the mass gone?
The so-called Copenhagen Interpretation avoids the question of whether the electron is a particle or a wave. This question is simply not allowed. In fact, the wave function is an instrument of the theory with no physical meaning. According to the CI, the goal of the theory is only to make predictions about the results of a specific experiment. In the case of the double slit experiment, we can ask: what is the probability that a given detector at the screen will be activated? The theory has a precise answer to this question. But the question "how does an electron behave?" does not relate to a specific experimental situation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 6 }
Non-symmetric Lorentz Matrix I was working out a relatively simple problem, where one has three inertial systems $S_1$, $S_2$ and $S_3$. $S_2$ moves with a velocity $v$ relative to $S_1$ along its $x$-axis, while $S_3$ moves with a velocity $v'$ along $S_2$'s $y$-axis. So I constructed the Lorentz transformation by multiplying the transformation from 1 to 2 with the transformation from 2 to 3, and I obtained the combined transformation. Now, this is all nice and well, and it is also where the question for which I'm doing this stops. However, it made me think: the resulting Lorentz matrix is not symmetric! Somehow I always thought that they always were (simply because I hadn't encountered one that was not), which I suppose is naive. Is there any information inherent to the fact that the transformation is not symmetric? Does this in some way mean that there is a rotation happening? This is what seemed most plausible to me, as if you boost 'harder' in one direction than in the other, your axis is essentially rotating. Or am I going about this the wrong way?
Yes, your intuition is correct: two different boosts do contain one rotation, and precisely two boosts along two orthogonal axes contain one rotation around the third orthogonal axis --- the most direct way to see that is by considering that the commutator of two different boosts is one rotation, and more completely the Lorentz algebra of rotations $R_{a}$ and boosts $B_{a}$ is given by the commutation relationships $$[R_{a},R_{b}]=\varepsilon_{abc}R^{c}$$ $$[B_{a},B_{b}]=-\varepsilon_{abc}R^{c}$$ $$[R_{a},B_{b}]=\varepsilon_{abc}B^{c}$$ where $\varepsilon_{abc}$ is the Levi-Civita symbol and indices are moved by employing the Minkowski matrix. So to summarize, yes, you were right.
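A quick numerical check of this statement (my own sketch; the two boost speeds are arbitrary assumptions): a single boost is represented by a symmetric matrix, the product of two boosts along orthogonal axes is not, yet the product still satisfies the defining Lorentz condition $\Lambda^T\eta\Lambda=\eta$.

```python
import numpy as np

def boost_x(beta):
    g = 1.0 / np.sqrt(1 - beta**2)
    return np.array([[ g,      -g*beta, 0, 0],
                     [-g*beta,  g,      0, 0],
                     [ 0,       0,      1, 0],
                     [ 0,       0,      0, 1]])

def boost_y(beta):
    g = 1.0 / np.sqrt(1 - beta**2)
    return np.array([[ g,      0, -g*beta, 0],
                     [ 0,      1,  0,      0],
                     [-g*beta, 0,  g,      0],
                     [ 0,      0,  0,      1]])

L = boost_y(0.6) @ boost_x(0.5)                     # S1 -> S2 -> S3, assumed speeds
print(np.allclose(boost_x(0.5), boost_x(0.5).T))    # True: a single boost is symmetric
print(np.allclose(L, L.T))                          # False: the composition is not

eta = np.diag([1.0, -1.0, -1.0, -1.0])
print(np.allclose(L.T @ eta @ L, eta))              # True: still a Lorentz transformation
```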
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What are functions of a complex variable used for in physics? Whenever someone asks "Why are complex numbers important?" the answer, at least in the context of physics, usually includes things like quantum mechanics, oscillators and AC circuits. This is all very fine, but I've never seen anyone talk about functions of a complex variable. Complex functions of real variables are used often enough, but I do not yet see (with one minor exception; see below) why my university would decide to dedicate half a semester to the theory of holomorphic functions if there are no physics applications. Don't get me wrong; I don't regret learning about complex functions. I think it is one of the most beautiful subjects within math, but my question still stands. Are there any applications of functions $f: \mathbb{C} \to \mathbb{C}$ within physics? About the exception: If a function $f$ is holomorphic, then its components $u,v$ are automatically harmonic. This is a quick way to find solutions to Laplace's equation $\nabla^2 u = 0$, but surely this minor trick doesn't justify having to learn about the whole theory.
This is all very fine, but I've never seen anyone talk about functions of a complex variable. Laplace transform: The Laplace transform is a widely used integral transform in mathematics with many applications in physics and engineering. It is a linear operator of a function f(t) with a real argument t (t ≥ 0) that transforms f(t) to a function F(s) with complex argument s, given by the integral $$F(s) = \mathcal L\{f(t)\}= \int_0^\infty f(t)e^{-st}dt, \, s = \sigma +i\omega $$ Since, by the above $$\mathcal L\{\frac{d}{dt}f(t)\} = sF(s) - f(0^-)$$ and $$\mathcal L\{\int_0^tf(\tau)d\tau\} = \frac{1}{s}F(s)$$ the Laplace transform of a differential or integral equation is an algebraic equation in the complex variable $s$. An elementary application is to the Tautochrone problem.
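A small sympy sketch (my own, with a simple assumed signal $f(t)=e^{-at}$) that evaluates the defining integral and checks the differentiation property quoted above:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, a = sp.symbols('s a', positive=True)

f = sp.exp(-a * t)                      # a simple causal signal
F = sp.integrate(f * sp.exp(-s * t), (t, 0, sp.oo))
print(sp.simplify(F))                   # 1/(a + s)

# Differentiation property: L{f'(t)} = s F(s) - f(0)
lhs = sp.integrate(sp.diff(f, t) * sp.exp(-s * t), (t, 0, sp.oo))
print(sp.simplify(lhs - (s * F - f.subs(t, 0))))   # 0
```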
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 5, "answer_id": 0 }
What determines how much electrical charge an object can hold? What determines how much electrical charge an object can hold? Does increasing the voltage force more electrical charge to be stored in an object (as in a Van de Graaff generator), since the electric field increases as the voltage increases? I don't think it is about the relative permittivity of a dielectric material, because that just creates dipoles.
It should be associated with the work function, which is the minimum thermodynamic work (i.e. energy) needed to remove an electron from a solid to a point in the vacuum immediately outside the solid surface; different materials have different work functions. Consider a very simple case: a charged spherical object in vacuum. If we move an electron from the object to an infinite distance and the energy of the system decreases, then the charged object is unstable. Hence, the maximum electric charge that an object can hold should leave the energy unchanged in the process of removing an electron. If its radius is $R$, the work function is $W$, and the maximum electric charge that it can hold is $Q$, then $$\frac{1}{4 \pi \epsilon_0} \frac{Qe}{R}=W$$ where the left side is the change in electrical energy. For an object with another shape, you have to change the form of the expression on the left side.
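A rough numeric estimate based on this criterion (the work function and radius below are assumed values, and the criterion itself ignores finer corrections):

```python
import math

eps0 = 8.854e-12      # F/m
e    = 1.602e-19      # C
W    = 4.5 * e        # assumed work function of ~4.5 eV, in joules
R    = 0.01           # assumed radius of 1 cm

# Condition from the answer: (1/(4*pi*eps0)) * Q*e/R = W  ->  Q = 4*pi*eps0*R*W/e
Q = 4 * math.pi * eps0 * R * W / e
print(Q, "C")                       # ~5e-12 C for these numbers
print(Q / e, "elementary charges")  # ~3e7 electrons' worth
```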
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Enthalpy Change in Reversible, Isothermal Expansion of Ideal Gas For the reversible isothermal expansion of an ideal gas: $${∆H}={∆U}=0 \tag1$$ This is obvious for the case of internal energy because $${∆U} = \frac {3}{2} n R {∆T} = 0 \tag2$$ and $${∆U} = -C_P n {∆T} = 0 \tag3$$ For the case of enthalpy it is easy to see that $${∆H} = -C_v n {∆T} = 0 \tag4$$ I've also seen $${∆H} = ∆U + ∆(PV) = ∆U + nR{∆T} = 0 \tag5$$ Now for the part I don't understand. $$dH = dU + PdV \tag6$$ $$dH = dU + nRT \frac {dV}{V} \tag7$$ $${∆H} = {∆U} + nRT \ln\frac {V_2}{V_1} \tag8$$ $${∆H} = 0 + nRT \ln\frac {V_2}{V_1} = nRT \ln\frac {V_2}{V_1} ≠ 0\tag9$$ Clearly, it is incorrect to make the substitution $ P = nRT/V$ in going from $(6)$ to $(7)$. Why is that? I thought equation $(6)$ was always valid, and integrating such a substitution should account for any change in the variables throughout the process. Why does this not yield the same answer as $(4)$ and $(5)$?
$$PV =nRT$$ So, for constant temperature, $dU=0$. Since $H=U+PV$, we have $dH = dU + d(PV)$, not $dU + P\,dV$, which is where your equation $(6)$ goes wrong. The term $d(PV) = 0$ because $PV$ is constant along the isotherm. So, $dH=0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Bulk flow of air in a long tube between Antarctica and Australia? I have a 5km diameter clear plastic tube which is open at each end and runs from the center of Antarctica to Lake Eyre in Australia. The tube is on the ground where it can be and at sea level on the ocean. Will there be bulk flow of the air in the tube? If so, which way will the air flow?
I would think the air in the tube will generally flow from Australia to Antarctica: what we have is actually a "smoke stack": the air is hot in Australia, cold in Antarctica, and the altitude in the center of Antarctica is a couple of kilometers higher than the altitude of Lake Eyre (which is below the sea level).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Physical Interpretation of the Bloch vector In the expression for the density matrix of an (electron-spin) qubit $$ \rho=\frac{1}{2}(I + x \sigma_x + y \sigma_y + z \sigma_z) $$ where $\tau=(x,y,z)$ is a unit vector on the Bloch sphere, what is the physical interpretation of $\tau$? Can it be interpreted as the direction of the spin axis of the electron?
Performing explicit but trivial computations, it turns out that (assuming $\hbar=1$): $$\tau/2 = (\langle S_x\rangle_\rho, \langle S_y\rangle_\rho, \langle S_z\rangle_\rho)$$ So $\tau/2$ describes the expectation values of the three components of the spin when the system is in the, generally, mixed, state $\rho$. Indeed: $$\langle S_k \rangle_\rho := tr\left(\frac{1}{2} \sigma_k \frac{1}{2}(I + \sum_{i=1}^3x_i \sigma_i)\right) = \frac{1}{4}\left( tr \sigma_k + \sum_{k=1}^3 x_itr(\sigma_k \sigma_i)\right)= \frac{1}{4}\left(0 + \sum_{i=1}^3 2 x_i \delta_{ik} \right) = \frac{1}{2} x_k\:.$$
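A short numpy check of this computation for an arbitrary (assumed) Bloch vector, with $\hbar=1$ so that $S_k=\sigma_k/2$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

tau = np.array([0.3, -0.5, 0.7])          # an arbitrary Bloch vector with |tau| <= 1
rho = 0.5 * (I2 + tau[0] * sx + tau[1] * sy + tau[2] * sz)

# Expectation values <S_k> = tr(rho * sigma_k / 2); should equal tau / 2
spin_expect = [np.trace(rho @ (0.5 * s)).real for s in (sx, sy, sz)]
print(spin_expect)        # [0.15, -0.25, 0.35], i.e. tau / 2
```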
{ "language": "en", "url": "https://physics.stackexchange.com/questions/100960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Taking pivot about an accelerating point Given this question: A small ball of mass $m$ and radius $r$ rolls without slipping on the inside surface of a fixed hemispherical bowl of radius $R>r$. What is the frequency of small oscillations? The standard solution is to write Newton's second law for the ball and then take the centre of mass of the ball to be the pivot and write $$\tau = I \alpha.$$ Only the frictional force contributes to the torque in this case. From Newton's second law, I can express the frictional force in terms of the gravitational force and therefore the frictional force can be eliminated in the equation for torque. I then make the small-angle approximation and get the equation to be of the form $$k\theta=-I\ddot{\theta}$$ from which I can find the frequency. Another approach uses the point of contact of the ball with the sphere as the pivot. It has the advantage that the frictional force adds no torque. Both approaches give the same result. My question is: since both pivots that we have chosen are accelerating, why are fictitious forces not considered? In the first place, can the pivots that we choose when writing $$\tau = I \alpha$$ be accelerating?
My question is: since both pivots that we have chosen are accelerating, why are fictitious forces not considered? The pivot is accelerating if it is a geometric point defined as the point of contact. The pivot is not accelerating if it is the material point of the small ball; it stands still and has zero acceleration. The description resulting in the equation of motion is done in the frame of the bowl, which is considered an inertial frame, hence there are no inertial forces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Confused about Impulse Encountered a problem that involves impulse while studying for my exam and I'm not sure how to even approach it. I know that momentum is conserved, but I'm not sure how to relate that to the average force. Maybe someone can help point me in the right direction? I know that it's in quadrant III, through intuition, but I can't come up with a provable explanation. Relevant equation: $J=F_{avg}\Delta T$
The total impulse is the change in momentum (note that this is a vector equation): $$ \vec{I} = \vec{p}_{final} - \vec{p}_{initial} $$ You know the momentum before and after the collision so you can calculate the total impulse, both magnitude and direction. Impulse is force times time, so the direction of the force will be the same as the direction of the impulse.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Fermi-Dirac distribution derivation? I am trying to derive the Fermi-Dirac statistics using the density matrix formalism. I know that $$<A>= Tr \rho A.$$ So I started from $$<n(\epsilon_i)>= Tr \rho n(\epsilon_i)=\frac {1}{Z} \sum e^{-\beta \epsilon_i n_i}n_i=\frac {1}{Z} e^{-\beta \epsilon_i}. $$ In the last step I used the Pauli principle ($n_i=0,1$). Now, to derive the correct Fermi-Dirac distribution I have to use $Z=1 +e^{-\beta \epsilon_i}$. Why do I not have to use the general form $$Z=\prod_i (1 +e^{-\beta \epsilon_i})~?$$ Can anybody give me a good explanation?
The derivation of the Fermi-Dirac distribution using the density matrix formalism proceeds as follows: The setup. We assume that the single-particle hamiltonian has a discrete spectrum, so the single-particle energy eigenstates are labeled by an index $i$ which runs over some finite or countably infinite index set $I$. A basis for the Hilbert space of the system is the occupation number basis \begin{align} |\mathbf n\rangle = |n_0, n_1, \dots\rangle \end{align} where $n_i$ denotes the number of particles occupying the single-particle energy eigenstate $i$. For a system of non-interacting identical fermions, the set $\mathscr N_-$ of admissible occupation sequences $\mathbf n$ consists of those sequences with each $n_i$ equal to either $0$ or $1$. Let $H$ be the hamiltonian for such a system, and let $N$ be the number operator, then we have \begin{align} H|\mathbf n\rangle = \left(\sum_{i\in I}n_i\epsilon_i\right)|\mathbf n\rangle, \qquad N|\mathbf n\rangle = \left(\sum_{i\in I} n_i\right) |\mathbf n\rangle \end{align} where $\epsilon_i$ is the energy of eigenstate $i$. We can also define an observable $N_i$ which tells us the occupation number of the $i^\mathrm{th}$ single-particle energy state; \begin{align} N_i|\mathbf n\rangle = n_i|\mathbf n\rangle \end{align} Note that we are attempting to determine the ensemble average occupation number of the $j^\mathrm{th}$ energy eigenstate. In the density matrix formalism, this is given by \begin{align} \langle n_j\rangle =\mathrm{tr}(\rho N_i) \end{align} where \begin{align} \rho = \frac{e^{-\beta(H-\mu N)}}{Z}, \qquad Z = \mathrm {tr}\big(e^{-\beta(H-\mu N)}\big) \end{align} The proof. * *Show that \begin{align} Z = \sum_{\mathbf n\in \mathscr N_-}\prod_{i\in I}x_i^{n_i} \end{align} where $x_j = e^{-\beta(\epsilon_j-\mu)}$, the sum is over admissible sequences $\mathbf n$ of occupation numbers of single-particle energy states, and the product is over indices $i$ labeling an orthonormal basis of single particle energy eigenstates. *Show that the ensemble average occupation number of the $j^\mathrm{th}$ state can be computed as follows: \begin{align} \langle n_j\rangle = x_j\frac{\partial}{\partial x_j}\ln Z \end{align} *Show that the product and the sum in the partition function can be "exchanged" to give \begin{align} Z = \prod_{i\in I}\sum_{n=0}^1 x_i^n \end{align} where the product is now over single-particle energy eigenstates, and the sum is over admissible occupation numbers of a single-particle state. *Combine the results of steps 2 and 3 to show that \begin{align} \langle n_j\rangle = \frac{1}{e^{\beta(\epsilon_j-\mu)}+1} \end{align} which is the desired result.
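For a sanity check of the final formula, one can brute-force the grand-canonical sum over occupation sequences for a handful of levels and compare with $1/(e^{\beta(\epsilon_j-\mu)}+1)$; the energies, $\beta$ and $\mu$ below are arbitrary assumed values:

```python
import numpy as np
from itertools import product

eps  = np.array([0.0, 0.3, 0.7, 1.1, 1.6])   # assumed single-particle energies
beta = 2.0
mu   = 0.8

# Grand-canonical sum over all admissible occupation sequences n in {0,1}^5
configs = np.array(list(product([0, 1], repeat=len(eps))))
weights = np.exp(-beta * configs @ (eps - mu))
Z = weights.sum()
n_avg = (configs * weights[:, None]).sum(axis=0) / Z

fermi = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)
print(np.allclose(n_avg, fermi))   # True: the explicit sum reproduces the FD formula
```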
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does the equation of continuity hold for turbulent flows? My textbook mainly deals with laminar flows. The book derives the equation of continuity, which states that the cross-sectional area times the velocity of a flow is always constant. But nowhere in the derivation does the textbook explicitly assume that the flow is laminar. So, does the equation hold for turbulent flows too?
Continuity is just the principle of conservation of mass in differential form. The full continuity equation is (in index notation): $$\frac{\partial \rho}{\partial t} = -\frac{\partial }{\partial x_i}(\rho u_i)$$ For example, consider an infinitesimal control volume (CV). The equation says that the local $\rho$ (inside the CV) will decrease in time if the flux divergence term $\frac{\partial }{\partial x_i}(\rho u_i)$ is positive. The flux divergence term only moves mass from one region to another: if it is integrated over the whole domain and there are no sources at the boundaries, its net contribution is zero. This is a very fundamental conservation law. If we further assume incompressible flow, so that the density of each individual fluid particle does not change (even though different particles may have different densities), we arrive at the form in your book. Since it is derived from first principles, any flow within these assumptions, including turbulent flows, no matter how disorganized, follows the same mass conservation law.
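As a rough numerical illustration of the two statements above (my own sketch with made-up numbers, not part of the original answer): the differential form says an incompressible velocity field is divergence-free, and the textbook form $A v = \text{const}$ follows for flow through a duct, whether the flow is laminar or turbulent.

```python
import numpy as np

# Differential form: check that a chosen incompressible 2D field has zero divergence.
# Velocity field u = (sin x cos y, -cos x sin y)  =>  du/dx + dv/dy = 0 analytically.
x = np.linspace(0, 2 * np.pi, 201)
y = np.linspace(0, 2 * np.pi, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)
div = np.gradient(u, x, axis=0) + np.gradient(v, y, axis=1)   # finite-difference divergence
print("max |div u| ~", np.abs(div).max())                      # small, limited by grid resolution

# Integral form for a duct: A1*v1 = A2*v2 for incompressible flow, laminar or turbulent alike.
A1, v1, A2 = 0.02, 1.5, 0.005        # m^2, m/s, m^2 (made-up values)
print("v2 =", A1 * v1 / A2, "m/s")
```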
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Does a mirror help a near-sighted person see more clearly at a distance? A near-sighted person without eyeglasses cannot clearly see things at a distance. If he takes a photo of the things at a distance, he can see the things from the photo much more clearly, because he can place the photo much closer to his eyes. If he turns his back on the things at a distance, and holds a mirror close to his eyes in a position so that the mirror reflects the things at a distance behind him, will he see the things much more clearly than if he looked at the things at a distance directly?
Does a flat transparent glass make near-sighted people see farther? The answer to this question is the answer to yours.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 8, "answer_id": 3 }
Wrongly positioned ampere-meter and voltmeter I'm dealing with a problem here, and even though I'm trying to solve it, I can't. It says: In which figures are the voltmeter and ampere-meter wrongly positioned? I think that all the others are correct except the second one. Can anyone help me?
It's quite the opposite. All are correct except the second one. The ampere-meter should be connected in series and the voltmeter in parallel with the measured element. Notice that voltmeters should have very high resistance, so that most of the current flows through the measured element. The bigger the resistance of the voltmeter, the better its accuracy. Similarly, ampere-meters should have as low a resistance as possible, to limit their influence on the circuit.
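To see quantitatively why high voltmeter resistance and low ammeter resistance matter, here is a minimal sketch with made-up component values (not taken from the original problem): it shows how the meters' internal resistances perturb the reading in a simple series circuit.

```python
# Ideal circuit: a 10 V source driving two 1 kohm resistors in series.
# True voltage across R2 is 5 V, true current is 5 mA.
EMF, R1, R2 = 10.0, 1000.0, 1000.0

def voltmeter_reading(R_volt):
    """Voltmeter of internal resistance R_volt placed in parallel with R2."""
    R2_loaded = 1.0 / (1.0 / R2 + 1.0 / R_volt)
    return EMF * R2_loaded / (R1 + R2_loaded)

def ammeter_reading(R_amm):
    """Ammeter of internal resistance R_amm placed in series with the loop."""
    return EMF / (R1 + R2 + R_amm)

for R_volt in (1e4, 1e6, 1e9):          # higher internal resistance -> closer to the true 5 V
    print(f"voltmeter {R_volt:>8.0e} ohm reads {voltmeter_reading(R_volt):.3f} V")
for R_amm in (100.0, 1.0, 0.001):       # lower internal resistance -> closer to the true 5 mA
    print(f"ammeter {R_amm:>8} ohm reads {1e3 * ammeter_reading(R_amm):.3f} mA")
```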
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
X-rays in the oil droplet experiment In the oil drop experiment, X-rays make the air molecules negatively charged. How does that work? X-rays carry high energy and ionize the air; doesn't that make the air positively charged?
I recently did this lab and the answer I got to this question was: The x-rays ionize the air molecules or gas in between the capacitor plates. The free electrons then fly off and attach themselves to the oil drops. Alternatively, you could use an electron beam (same effect).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does putting color filters make the sources incoherent? In Young's double slit experiment a single source is used to illuminate two slits, which then act as two coherent sources to produce an interference pattern. But what if I put color filters on the two slits? Will it make the slits incoherent? I myself think yes, because in one of my previous questions here I got to know that two different sources cannot be coherent. Putting filters will make the slits act like two different sources, which are not coherent since the light is coming through two different filters. Is my thinking correct?
This will not happen because color filters don't work like this. A red color filter does not convert blue photons to red photons. It absorbs photons that are not red (most of them) and lets red photons pass unaffected. If you use a red filter for one slit then a blue photon will not go through that slit at all, so you will effectively have a single-slit experiment. You may choose the filters such that a photon has a certain probability to pass through both slits. Such photons will interfere the same as they would without a filter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The best way to cool the teapot My professor today in class asked us a question: "Let's say we have a teapot with water in it. The water is hot. Now we want to cool the water. Will it cool faster if we put an ice cube above the teapot or under the teapot?" My answer was that it will cool faster if we put the ice cube above it, because the warm air stays up and the ice cube will melt faster. He didn't tell me if I was right or not. Can anyone help me?
Open or not open, touching or not touching, ice cube above is better than ice cube below where cooling of the pot is concerned. Your choice is correct but it is more accurate to say that air cooled by the ice cube sinks onto the teapot thereby cooling it (note that this mechanism cannot occur if the cube was under the teapot). This isn't a question about which is the best position to melt an ice cube (the answer to that would still be the same though) so it is not relevant to talk about what happens to the ice cube. If the ice cube was allowed to touch the pot, then the correct answer is still on top of the teapot. The same idea of convection applies but now to liquid in the pot rather than air around it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why doesn't diamond glow when hot? In an answer to this SE question, the respondent explains that heating a perfect diamond will not cause it to glow with thermal blackbody radiation. I don't quite follow his explanation. I think it comes down to: there is no mechanism for diamond to generate light in the visible region of the spectrum. He mentions that interband transitions are well out of the visual range, so there will be no contribution from that. He mentions that the Debye temperature for diamond is > 2000 K. I presume that the argument here is that optical phonons will be frozen out, too. (But diamond doesn't have infrared-active phonons, does it?) So is that why hot diamond doesn't glow? I suppose that if one considers real (not ideal) crystals, imperfections, impurities, and the existence of surfaces lead to the possibility of emission mechanisms, and thus glow. In fact it might be the case that a finite but otherwise perfect crystal might have an extremely faint glow. Is this basically the reason that hot diamond does not glow? Further elucidation welcome.
I bet that diamond will glow. You may not have heard of quenching before, but Soviet opticians love to tell the story of Vavilov. He studied luminescence, and before photomultipliers were invented, he used to sit in the dark room for hours. The eye would get used to darkness and become very sensitive. The observer would be able to "detect" a single photon. OK, I'm exaggerating, not a single photon but a few of them at once :) It's a fascinating story; they developed quantitative techniques using their eyes. Anyhow, I bet that if you heat up the diamond, you'll see it glow in the dark.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/101960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Unusual observation in photoelectric effect simulation I was studying a photoelectric simulation (http://phet.colorado.edu/en/simulation/photoelectric) and I observed a really unusual thing. When I held the intensity and potential at a constant value and then varied the frequency, I observed that there was a peak in the photocurrent. That is, it first increased when moving towards ultraviolet and then decreased. Please try it yourself and explain.
You are increasing the energy of each photon, but holding the intensity (the power of the beam) constant. If you do this, fewer and fewer photons leave the lamp per second, so fewer and fewer hit the metal per second. This is probably the effect you are seeing.
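A rough back-of-the-envelope sketch of that statement (my own numbers, not taken from the simulation): at a fixed beam power $P$, the photon arrival rate is $P/(hf)$, which drops as the frequency rises.

```python
h = 6.626e-34          # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
P = 1e-3               # assumed constant beam power, 1 mW

for wavelength_nm in (500, 400, 300, 200):          # moving toward the ultraviolet
    f = c / (wavelength_nm * 1e-9)                  # frequency in Hz
    photon_rate = P / (h * f)                       # photons per second at fixed power
    print(f"{wavelength_nm} nm: {photon_rate:.2e} photons/s")
```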
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can Information Travel Faster Than The Speed Of Light? Many believe that nothing can travel faster than the speed of light, not even information. Personally, I think that theoretically information can. Consider the following imaginary experiment: Imagine we are living on a planet that is big enough for a, let's say, 10-light-second-tall tower to be erected. We hang a pendulum near the planet's surface using a long thin wire attached at the top of the tower. If someone at the top of the tower cuts the wire, then the pendulum will instantly start falling to the ground. In this case we can say that the information "someone cut the wire" travels a distance of 10 light-seconds in no time. Since someone on the surface can only see the act of cutting the wire 10 seconds later, can we infer that the information travels faster than light?
The idea that the pendulum would drop instantly isn't even true of short, Earth-bound pendula: cf. various Internet videos about dropping slinkies (toy springs). The reason why slinkies drop in this way is essentially the same reason why an idealised pendulum (strong enough to hold itself together, albeit maybe not as stretchy as a slinky) would not immediately drop: there is still a speed at which the tension in the pendulum is communicated, relating to the speed of sound in the material. Until each part of the pendulum/slinky "finds out" that the part above it isn't holding it up any more, it stays essentially at rest. This speed of sound is limited by intra-molecular information transfer, which is itself limited by the speed of light; ergo the bottom of the pendulum in your thought experiment will not fall any sooner than ten seconds after the top is released.
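To put a rough number on this (an order-of-magnitude sketch; the ~5 km/s speed of sound in steel is an assumed value, not something from the discussion above): the "let go" signal crawling down a 10-light-second wire at the speed of sound takes on the order of a week, vastly longer than the 10 s light-travel time.

```python
c = 3.0e8                 # speed of light, m/s
v_sound = 5.0e3           # rough speed of sound in a steel wire, m/s (assumed value)

wire_length = 10 * c      # a 10-light-second wire, in metres
t_light = wire_length / c
t_signal = wire_length / v_sound

print(f"light-travel time        : {t_light:.0f} s")
print(f"tension-release signal   : {t_signal:.2e} s (~{t_signal / 86400:.1f} days)")
```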
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Energy in a Solenoid? Consider a circuit consisting of a battery, a resistor and a solenoid inductor. Then, the emf $\mathcal{E}$, is defined as: $$\mathcal{E} = L\frac{di}{dt} + iR$$ Multiplying both sides by $i$ gives: $$\mathcal{E}i = Li\frac{di}{dt} + i^2R$$ The term on the left side gives the rate at which the battery does work. Since the second term on the right side gives the rate at which energy appears as thermal energy in the resistor, the second term gives the rate at which magnetic potential energy is stored in the magnetic field. Therefore $$\frac{dU_B}{dt} = Li\frac{di}{dt}$$ $$\int^{U_B}_{0} dU_B = \int^i_0 Li\text{ }di$$ $$U_B = \frac{1}{2}Li^2$$ Q1) I'm assuming there finding the energy in the steady state. I thought the current was constant in the steady state so shouldn't $\frac{di}{dt}$ be zero? Q2) Why isn't the emf: $$\mathcal{E} = -L\frac{di}{dt} + iR$$ Since the self-induced emf generated by an inductor tries to oppose the flow of current, shouldn't the emf be the opposite way? Q3)The bounds of the integral: $U_B$ and $i$. How are they related? Are they the energy and current at the same point in time $t$? Or is $U_B$ the energy at any point in time and $i$ the current at some other point in time (not necessarily the same times)?
1) With a constant DC power source, eventually the solenoid will become fully 'charged'. At that point its 'resistance' term vanishes because it no longer produces an emf against the battery, and the $\frac{di}{dt}$ term is zero because the current isn't changing. 2) When you cut power, the magnetic flux is no longer maintained by the current. However, the flux will try to stay constant, so that means the current will continue as it did before, powered by the magnetic field of the solenoid. 3) $U_B$ is simply all the $dU_B$ added up over time. You'd have to integrate until $t =\infty$, i.e. until the current behaves as if there's no solenoid at all (that is what the integral on the right-hand side does).
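Here is a small numerical sketch of the energy bookkeeping in the equations above (my own illustration, with made-up values of the EMF, $R$ and $L$): integrating the circuit equation in time shows the battery's work splitting into resistor heat plus a stored energy that approaches $\frac{1}{2}Li^2$ as $\frac{di}{dt}$ dies away.

```python
EMF, R, L = 10.0, 5.0, 2.0        # volts, ohms, henries (made-up values)
dt, T = 1e-4, 10.0                # time step and total time, seconds (T >> L/R)

i = 0.0
W_battery = W_resistor = 0.0
for _ in range(int(T / dt)):
    di_dt = (EMF - i * R) / L     # from EMF = L di/dt + i R
    W_battery += EMF * i * dt     # battery power EMF*i, integrated over time
    W_resistor += i**2 * R * dt   # Joule heating i^2 R, integrated over time
    i += di_dt * dt

U_stored = W_battery - W_resistor
print(f"final current      : {i:.4f} A (steady state EMF/R = {EMF / R:.4f} A)")
print(f"energy bookkeeping : U_B = {U_stored:.4f} J vs 0.5*L*i^2 = {0.5 * L * i**2:.4f} J")
```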
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does a half-life work? Carbon-14 has a half-life of 5,730 years. That means that after 5,730 years, half of that sample decays. After another 5,730 years, a quarter of the original sample decays (and the cycle goes on and on, and one could use virtually any radioactive isotope). Why is this so? Logically, shouldn't it take 2,865 years for the quarter to decay, rather than 5,730?
I think you're confused simply by the language. Remember that it's a quarter of the original sample. So it's like compounding interest in the bank. You start with an initial principal; once the interest is compounded, a percentage of that principal is ADDED TO the "principal", and then a percentage of THAT new total is calculated and added to it, and so on. Similarly with nuclear decay, except you're subtracting: you lose half of whatever is left over each half-life, instead of adding something like 0.05% every month (or whatever number banks use). Half of that second sample is a quarter of the original. So you could express the remaining fraction of the original as $\frac{1}{2^n}$, where $n$ is the number of elapsed half-lives (here one half-life is 5,730 years). So after each successive half-life you have $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}$, etc.
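Here is a tiny sketch of that rule applied to carbon-14 (my own illustration): the surviving fraction after a time $t$ is $(1/2)^{t/t_{1/2}}$, so each 5,730-year step halves what is left, while 2,865 years only brings you down to about 71%.

```python
half_life = 5730.0   # years, carbon-14

def surviving_fraction(t_years):
    """Fraction of the original sample left after t_years."""
    return 0.5 ** (t_years / half_life)

for t in (2865, 5730, 11460, 17190, 22920):
    print(f"after {t:>6} years: {surviving_fraction(t):.3f} of the original remains")
```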
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 13, "answer_id": 8 }
Prove identity of partial derivatives I cannot do the following problem: Prove the identity: $$\left( \frac{\partial x}{\partial y} \right)_{z}\left( \frac{\partial y}{\partial z} \right)_{x}\left( \frac{\partial z}{\partial x} \right)_{y}=-1$$ State the properties that $x=x(y,z)$, $y=y(x,z)$, $z=z(x,y)$ must satisfy. The truth is I do not know how to start, and I do not know how to interpret the functions $x$, $y$, $z$. Any help or explanation would be most appreciated.
$x=x(y,z)$, $y=y(x,z)$, $z=z(x,y)$ $$dx= (\frac{\partial x}{\partial y})_z dy + (\frac{\partial x}{\partial z})_y dz$$ $$dy= (\frac{\partial y}{\partial x})_z dx + (\frac{\partial y}{\partial z})_x dz$$ $$\therefore dx= (\frac{\partial x}{\partial y})_z [(\frac{\partial y}{\partial x})_z dx + (\frac{\partial y}{\partial z})_x dz] + (\frac{\partial x}{\partial z})_y dz$$ $$dx= (\frac{\partial x}{\partial y})_z (\frac{\partial y}{\partial x})_z dx + [(\frac{\partial x}{\partial y})_z(\frac{\partial y}{\partial z})_x + (\frac{\partial x}{\partial z})_y] dz$$ $$(\frac{\partial x}{\partial y})_z (\frac{\partial y}{\partial x})_z dx + [(\frac{\partial x}{\partial y})_z(\frac{\partial y}{\partial z})_x + (\frac{\partial x}{\partial z})_y] dz=1dx+0dz$$ $$ (\frac{\partial x}{\partial y})_z(\frac{\partial y}{\partial z})_x + (\frac{\partial x}{\partial z})_y=0 $$ Using the reciprocal relation: $$ (\frac{\partial x}{\partial z})_y = \frac{1}{(\frac{\partial z}{\partial x})_y} $$ $$\left( \frac{\partial x}{\partial y} \right)_{z}\left( \frac{\partial y}{\partial z} \right)_{x}\left( \frac{\partial z}{\partial x} \right)_{y}=-1$$
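As a sanity check of the identity (my own example, not part of the derivation above), one can verify it symbolically for a concrete constraint such as the ideal gas law, which ties $P$, $V$, $T$ together exactly the way $x$, $y$, $z$ are tied together here:

```python
import sympy as sp

P, V, T, n, R = sp.symbols('P V T n R', positive=True)

# Ideal gas law P*V = n*R*T plays the role of the constraint relating x, y, z.
P_of = n * R * T / V          # P(V, T)
V_of = n * R * T / P          # V(T, P)
T_of = P * V / (n * R)        # T(P, V)

product = (sp.diff(P_of, V)                       # (dP/dV)_T
           * sp.diff(V_of, T)                     # (dV/dT)_P
           * sp.diff(T_of, P))                    # (dT/dP)_V

print(sp.simplify(product.subs(P, P_of)))         # prints -1
```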
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why are free electrons free? This is what I understand so far: in a conductor, the ions have a weak pull on the valence electrons. So when an electric field is applied, the free electrons are able to easily move about. Makes sense. In a neutral conductor with no electric field, the free electrons aren't bound to any ions. Why? I understand that the ions have a weak pull on the electrons, but what makes electrons leave the ion and stay free?
In a single free atom, electrons have well defined energy levels and are bound to the atom; the usual quantum mechanical model of the atom gives an idea of what an isolated atom looks like. When all these isolated atoms come together to form a crystal, the atoms no longer have sharply defined energy levels; molecular orbitals form instead. When the atoms get even closer, energy bands are formed, which are essentially continuous sets of energy levels. Continuous energy levels mean electrons are somewhat free to move. The energy bands split into the valence band and the conduction band, separated by a certain energy gap. Depending on this energy gap, materials are classified as conductors, insulators, or semiconductors. In conductors, the energy required to reach the conduction band can be supplied by even a small applied electric field. So it is the formation of this continuum of energy levels that makes the electrons free.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
A paradox to Lenz's law I have read that in simple words, Lenz's law states that: The direction of the current induced in a conductor is such that it opposes its cause. This upholds the law of conservation of mass-energy. I arranged the following thought experiment: Let there be a pendulum with its bob being a small bar magnet. The pendulum is oscillating in a direction parallel to the horizontal axis of the bar magnet on which the North and South poles lie. Also, the pendulum is in a complete vacuum. (But gravity is there to make the pendulum oscillate.) At one of the extreme positions of the pendulum, we keep a solenoid, the ends of which are connected to a load resistance. As the North pole of the bar magnet approaches the solenoid, current is induced in the solenoid in such a fashion that a North pole is formed at the end near to the bar magnet's North pole, and the bar magnet gets repelled towards the other side. The bar magnet then goes to the other end and then comes back (as a pendulum does) and again the same process is repeated. This should go on forever, and current should keep appearing across the load resistance. How does the law of conservation of energy hold here?
Actually, the pendulum won't oscillate forever. Its energy gradually turns into heat in the resistor. In other words, the amplitude of its swing decays over time, roughly like a damped oscillation.
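As a toy illustration of that decay (entirely my own model: it assumes a fixed fraction of the energy is dumped into the resistor on each pass by the coil, which is a crude simplification of the real electromagnetic damping):

```python
import numpy as np

# Toy model (assumption): each pass by the coil dissipates a fixed fraction
# of the pendulum's mechanical energy in the load resistor.
loss_fraction = 0.05
E = 1.0                              # initial mechanical energy, arbitrary units
amplitudes = []
for swing in range(60):
    amplitudes.append(np.sqrt(E))    # amplitude scales like sqrt(energy)
    E *= (1.0 - loss_fraction)       # energy left after the induced-current loss

print([round(a, 3) for a in amplitudes[::10]])   # a steadily shrinking envelope
```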
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
Some questions about Dirac spinor transformation law I have perhaps meaningless question about Dirac spinors, but I'm at a loss. The transformation laws for for left-handed and right-handed 2-spinors are $$ \tag 1 \psi_{a} \to \psi_{a}' = N_{a}^{\quad b} \psi_{b} = \left(e^{\frac{1}{2}\omega^{\mu \nu}\sigma_{\mu \nu}}\right)_{a}^{\quad b}\psi_{b}, \quad \psi^{b}{'} = \psi^{a}(N^{-1})_{a}^{\quad b}, $$ $$ \tag 2 \psi_{\dot {a}} \to \psi_{\dot {a}}' = (N^{*})_{\dot {a}}^{\quad \dot {b}} \psi_{\dot {b}} = \left(e^{\frac{1}{2}\omega^{\mu \nu}\tilde {\sigma}_{\mu \nu}}\right)_{\dot {a}}^{\quad \dot {b}}\psi_{\dot {b}}, \quad \psi^{\dot {b}}{'} = \psi^{\dot {a}}(N^{*^{-1}})_{\dot {a}}^{\quad \dot {b}}, $$ where $$ (\sigma_{\mu \nu})_{a}^{\quad b} = -\frac{1}{4}\left(\sigma_{\mu}\tilde {\sigma}_{\nu}-\sigma_{\nu}\tilde {\sigma}_{\mu}\right), \quad (\tilde {\sigma}_{\mu \nu})_{\quad \dot {a}}^{\dot {b}} = -\frac{1}{4}\left(\tilde {\sigma}_{\mu} \sigma_{\nu}- \tilde {\sigma}_{\nu}\sigma_{\mu}\right), $$ $$ (\sigma_{\mu})_{b\dot {b}} = (\hat {E}, \sigma_{i}), \quad (\tilde {\sigma}_{\nu})^{\dot {a} a} = -\varepsilon^{\dot {a}\dot {b}}\varepsilon^{b a} \sigma_{\dot {b} b} = (\hat {E}, -\sigma_{i}). $$ Why do we always take the Dirac spinor as $$ \Psi = \begin{pmatrix} \varphi_{a} \\ \kappa^{\dot {b}} \end{pmatrix}, $$ not as $$ \Psi = \begin{pmatrix} \varphi_{a} \\ \kappa_{\dot {b}} \end{pmatrix}? $$ According to $(1), (2)$ first one transforms as $$ \delta \Psi ' = \frac{1}{2}\omega^{\mu \nu}\begin{pmatrix}\sigma_{\mu \nu} & 0 \\ 0 & -\tilde {\sigma}_{\mu \nu} \end{pmatrix}\Psi , $$ while the second one - as $$ \delta \Psi ' = \frac{1}{2}\omega^{\mu \nu}\begin{pmatrix}\sigma_{\mu \nu} & 0 \\ 0 & \tilde {\sigma}_{\mu \nu} \end{pmatrix}\Psi , $$ so it is more natural than first, because the first one has both covariant and contravariant components, while the second has only covariant (contravariant components).
I think it is convention to write the conjugate Weyl fermion in, \begin{equation} \left( \begin{array}{c} \phi _\alpha \\ \bar{\kappa} ^{\dot{\beta }} \end{array} \right) \end{equation} (it is common to put a bar over the conjugate representation), with a raised index in order to comply with the ${} _{ \dot{\alpha} } ^{ \,\, \dot{\alpha} } $ contraction of spinor indicies. Recall that we write, \begin{equation} \phi \chi \equiv \phi ^\alpha \chi _\alpha , \quad \psi \bar{\chi} \equiv \phi _{\dot{\alpha}} \bar{\chi} ^{\dot{\alpha}} \end{equation} Thus having the particular index structure for the Dirac spinor gives, \begin{align} \bar{ \Psi } \gamma ^\mu \Psi & = \left( \begin{array}{cc} \kappa ^{\beta } &\bar{ \phi} _{\dot{\alpha}}\end{array} \right) \left( \begin{array}{cc} 0 & ( \sigma ^\mu ) _{ \beta \dot{\beta} } \\ ( \bar{\sigma} ^\mu ) ^{ \dot{\alpha } \alpha } & 0 \end{array} \right) \left( \begin{array}{c} \phi _\alpha \\ \bar{ \kappa} ^{\dot{\beta}} \end{array} \right) \\ & = \kappa \sigma ^\mu \bar{\kappa} + \bar{\phi} \bar{\sigma} ^\mu \phi \end{align} where all the dotted indices contract with an "upwards staircase", ${}_{ \dot{\alpha} } ^{ \,\, \dot{\alpha} } $, and undotted with a "downwards staircase", $ {} ^\alpha _{ \,\, \alpha } $.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Understanding fields and their correlation to forces I seem to be confused between the concept of a "force" and that of a field. Now let's assume there is a magnetic field of $1$ $\mathrm{Tesla}$: what does that mean in relation to force? Finally, if the field is $1$ $\mathrm{Tesla}$, does that always mean the force in that field is the same? For example, a magnetic field source (a solenoid) of $1$ $\mathrm{Tesla}$ can apply a force of $10,000$ $\mathrm{Newtons}$; if another source (a permanent magnet) generates the same field strength under the same conditions, does it produce the same force?
A force is experienced by a charge when you put it in the presence of a field. The strength of the force is a function not only of the strength of the field, but also of the strength of the charge. So, in a given electric field, a larger charge will experience a larger force. The classical concept of a field is more useful than that of force in the sense that it is more general, because you can calculate the strength of a field regardless of the particles or stuff that is going to experience it. You can compute the field once, and it allows you to quickly compute the force experienced by a variety of test particles placed in it. I could go much deeper into how, historically, fields rather than forces became a fundamental concept in modern physics. But I believe that is beyond your question.
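A small sketch to make the field-versus-force distinction concrete (my own made-up numbers): the same fields exert different forces depending on the charge and velocity placed in them, via the Lorentz force $F = q(E + v\times B)$, and for a straight wire the force $F = BIL$ scales with the current and length, not just with the 1 T field.

```python
import numpy as np

B = np.array([0.0, 0.0, 1.0])          # 1 tesla magnetic field along z
E = np.array([100.0, 0.0, 0.0])        # 100 V/m electric field along x

def lorentz_force(q, v):
    """Force on a point charge q moving with velocity v in the fields above."""
    return q * (E + np.cross(v, B))

# Same fields, different test charges -> different forces.
print(lorentz_force(1.6e-19, np.array([1e5, 0.0, 0.0])))   # a proton-like charge
print(lorentz_force(3.2e-19, np.array([0.0, 2e5, 0.0])))   # twice the charge, different velocity

# For a straight current-carrying wire perpendicular to B: F = B * I * L.
for I, L in [(10.0, 0.5), (100.0, 2.0)]:
    print(f"I = {I} A, L = {L} m  ->  F = {B[2] * I * L} N")
```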
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Schrödinger's Equation and its complex conjugate I would like to know why there is a minus sign on the right-hand side of the Schrödinger's complex conjugate equation, whereas in the Schrödinger's equation there isn't. I know it is a simple question, but I don't know where this comes from. $$ -\frac{\hbar^2 }{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi = i \hbar \frac{\partial \psi}{\partial t} $$ $$ -\frac{\hbar^2 }{2m}\frac{\partial^2\psi^*}{\partial x^2} + V(x)\psi^* = -i \hbar \frac{\partial \psi^*}{\partial t} $$
It comes from the definition of the complex conjugate. Let's say $z=x+iy\quad \Rightarrow z^*=x-iy$ $z=x-iy\quad \Rightarrow z^*=x+iy$ In simple words, you just have to change the sign of the imaginary part. The point is that $\psi(x)$ is a complex-valued function, so its conjugate is just $\psi^*(x)$. To conjugate the whole equation you simply change $i\to -i$ everywhere (and vice versa), which is why the $i\hbar\,\partial_t$ term picks up a minus sign.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Bogoliubov transformation with a slight twist Given a Hamiltonian of the form $$H=\sum_k \begin{pmatrix}a_k^\dagger & b_k^\dagger \end{pmatrix} \begin{pmatrix}\omega_0 & \Omega f_k \\ \Omega f_k^* & \omega_0\end{pmatrix} \begin{pmatrix}a_k \\ b_k\end{pmatrix}, $$ where $a_k$ and $b_k$ are bosonic annihilation operators, $\omega_0$ and $\Omega$ are real constants and $f_k$ is a complex constant. How does one diagonalise this with a Bogoliubov transformation? I've seen an excellent answer to a similar Phys.SE question here, but I'm not quite sure how it translates to this example. Any hints or pointers much appreciated.
I would just like to point out that the given Hamiltonian does not require a Bogoliubov transformation to be diagonalized, since it is of the form of a single-particle operator (nevertheless in second quantization), i.e. it does not contain 'off-diagonal' terms of the form $a a$,... You can simply diagonalize it by diagonalizing the coupling matrix. @leongz: although this matrix is also Hermitian for the true Bogoliubov case, you will generally get the wrong answer for the eigenenergies and modes if you diagonalize it. The resulting modes would not be bosonic, i.e. it would not be a canonical transformation. You can obtain the right answer (which is much more powerful than the typical ansatz for the Bogoliubov operators) by diagonalizing $\Sigma H$, where $\Sigma$ is the pseudonorm on the symplectic space you're working on. Note, however, that this matrix is not always Hermitian (and not always diagonalizable, but this is physical: one bosonic mode is missing for each Goldstone mode).
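Since the given Hamiltonian conserves particle number, an ordinary unitary rotation of $(a_k, b_k)$ is enough; here is a small numerical sketch (my own illustration, with made-up values of $\omega_0$, $\Omega$ and $f_k$) showing that the $2\times 2$ coupling matrix has eigenvalues $\omega_0 \pm \Omega |f_k|$, which are the new mode frequencies.

```python
import numpy as np

omega0, Omega = 2.0, 0.7
f_k = 0.6 + 0.8j                     # some complex f_k, here with |f_k| = 1

# Coupling matrix of the quadratic form (a_k^dag, b_k^dag) M (a_k, b_k)^T
M = np.array([[omega0,               Omega * f_k],
              [Omega * np.conj(f_k), omega0     ]])

evals, U = np.linalg.eigh(M)         # M is Hermitian, so a unitary U diagonalizes it
print("eigenvalues:", evals)                           # omega0 -/+ Omega*|f_k|
print("expected   :", omega0 - Omega * abs(f_k), omega0 + Omega * abs(f_k))

# The new modes are c_k = U^dagger (a_k, b_k)^T; since U is unitary, the bosonic
# commutation relations are preserved (no anomalous a a terms to worry about).
```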
{ "language": "en", "url": "https://physics.stackexchange.com/questions/102967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Charge distribution on conductors? You have seen that the excess charge on an isolated conductor moves entirely to the conductor’s surface. However, unless the conductor is spherical, the charge does not distribute itself uniformly. Put another way, the surface charge density s (charge per unit area) varies over the surface of any nonspherical conductor. Why wouldn't the charge always distribute uniformly? I thought the charges would always want to maximize distance between themselves and so would spread out all over the conductor uniformly.
The fact that the static charge does not spread uniformly is the basis for things like lightning rods. Sharp edges are places that static charges, particularly higher voltage ones, like to reside. This design also aids in dissipating dangerous voltages via the coronal (ionized air) discharge mechanism. Sometimes, the uneven charge distribution is because those points are the most distant ones available, and electrons being of like charge, tend to repel each other. However, the topology of the conductor is also important because charges prefer to reside on the outside surface(s) of a conductor as opposed to inside ones. An interesting followup question might be whether charge actually is evenly distributed over the surface of a typical capacitor, such as an electrolytic type. We were taught that it was, but it probably wouldn't make very much difference to the functioning of such components if it were not. NPO (non-polarized) capacitors are available, but these are made by connecting two ordinary capacitors with a polarized dielectric in series such that the polarities of the pair are reversed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Prove EM Waves Are Transverse In Nature Why do we say that EM waves are transverse in nature? I have seen some proofs regarding my question, but they all calculate the flux through an imaginary cube. Here is my real problem: I can't picture the infinitesimal area used for calculating the flux, because a line of force will intersect the surface (perpendicularly or not) at only one point, so $\vec{E}\cdot d\vec{s}$ would seem to be zero, and hence even the flux through one surface of the cube would always be zero. I am a bit confused. I don't know vector calculus, but I do know calculus.
An EM wave is generated by the oscillation of an electron. Near the electron we have the near field, and here all the wave components are non-zero. Far away from the source we have the far field, which takes the form of a spherical surface wave advancing along the radius of a sphere centred at the source. If we take a small section of this advancing spherical surface we have a plane wave. Due to symmetry, all variations in the directions normal to the propagation direction cancel each other and become zero. This leaves a wave whose components vary only with time and with distance z along the propagation direction, hence the equation given in the other answers. Note that this applies to all waves from a single point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Can geodesics in a Lorentzian manifold change their character? From a physics perspective, it's pretty easy to see why a a massive particle will be restricted to timelike paths, etc. but does the math guarantee that on its own or do we have to impose it? More specifically, given an arbitrary smooth Lorentzian manifold, can there be geodesics which change character from spacelike to null to timelike, or timelike to null, etc., and how do we rule these out/why are they naturally ruled out?
There is a conserved quantity for geodesics which comes from the fact that the metric $g_{ab}$ is (trivially) a Killing tensor, i.e. $$\nabla_{(c}g_{ab)} = 0.$$ Any tensor $\xi_{ab}$ that satisfies $\nabla_{(c}\xi_{ab)}=0$ gives rise to the conserved quantity $\epsilon = u^a u^b\xi_{ab}$, which is preserved along geodesics for which $u^a$ is the tangent vector. To see this, we write $$\frac{d}{d\lambda}\epsilon = u^a\nabla_a\epsilon=u^a\nabla_a(u^bu^c\xi_{bc}) = u^au^bu^c\nabla_a\xi_{bc}+u^c\xi_{bc}u^a\nabla_a u^b+u^b\xi_{bc}u^a\nabla_au^c$$ The three terms on the RHS all vanish. The first term is symmetric in the lower indices, so it is zero because we assumed $\xi_{ab}$ is a Killing tensor. The second two are proportional to the acceleration of $u^a$, which vanishes since we assumed that $u^a$ is tangent to geodesics. So we find $$\frac{d}{d\lambda}\epsilon=0$$ Now taking $\xi_{ab} = g_{ab}$, since we have a Killing tensor, the quantity $\epsilon = g_{ab}u^au^b$ has to be constant. But for our tangent vector normalized to $0$ or $\pm1$, $\epsilon$ is just telling us whether $u^a$ is spacelike, timelike or null. So, mathematically, the tangent vector of a geodesic cannot change it's normalization (and hence cannot switch between spacelike, timelike or null) because it is a conserved quantity along geodesics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Effect of linear terms on a QFT I was told when first learning QFT that linear terms in the Lagrangian are harmless and we can essentially just ignore them. However, I've recently seen in the linear sigma model, \begin{equation} {\cal L} = \frac{1}{2} \partial _\mu \phi _i \partial ^\mu \phi _i - \frac{m ^2 }{2} \phi _i \phi _i - \frac{ \lambda }{ 4} ( \phi _i \phi _i ) ^2 \end{equation} with $m ^2 =-\mu^2 > 0$, adding a linear term in one of the fields $\phi_N$, does change the final results as you no longer have Goldstone bosons (since the $O(N)$ symmetry is broken to begin with). Are there any other effects of linear terms that we should keep in mind or is this the only exception of the "forget about linear terms" rule?
Adam's answer from a slightly different perspective. Linear terms are source terms, which are essentially equivalent to boundary conditions. Allowing nontrivial boundary conditions considerably enriches the mathematical behavior these models exhibit. In particular, you shouldn't be surprised that boundary conditions can lead to interesting phase structures. Even in the Ising model, something has to choose +1 or -1 magnetization.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 1 }
what determines the wavelength of waves on the open ocean? Looking at the picture below, you can totally see that these are tiny boats. The water is practically washing over the hull of these boat models. But the water has boundaries that are very far away, and even if the water is very deep, it would (at least in my mind) not make a big difference to the size of the waves after just a few inches of depth. What dictates the size of waves in the open water? Is it the molecular properties of the water?
This is really just a footnote to Ross' answer as Ross is quite correct and the link he provided contains the information you asked for. In the open sea waves are normally produced by the wind. When the wind hits the sea surface it creates essentially random patterns of pressure variation and these lift some parts of the surface up and press others down. Over time this creates waves. The important variable is the wind speed. There isn't a precise wind-speed to wavelength relationship because the random nature of wave generation means you get a spread of wavelengths. However it's generally true that low wind speeds produce short waves and high wind speeds produce long waves. I did a quick graph of the average wavelength against speed from the data in the Wikipedia article; the data doesn't include low speeds, but the trend is obvious. I don't think there's a simple explanation for exactly why the wavelength depends on wind speed. According to the Wikipedia article the pressure variations created by the wind create small waves, but the wind then interacts with these waves and increases their wavelength. This is known as the Miles mechanism, and if you're sufficiently interested there is a paper about it here. However you'll need to pay to see more than the first two pages.
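For completeness, here is a small sketch (my own, not from the data above) of the deep-water dispersion relation $\lambda = gT^2/(2\pi)$, which converts wave periods into wavelengths; the link between wind speed and the periods it produces is empirical, so the periods below are just assumed illustrative values.

```python
import math

g = 9.81  # m/s^2

def deep_water_wavelength(period_s):
    """Deep-water dispersion relation: wavelength = g * T^2 / (2*pi)."""
    return g * period_s**2 / (2 * math.pi)

# Assumed illustrative periods, from short chop up to long ocean swell.
for T in (2, 5, 8, 12):
    print(f"T = {T:>2} s  ->  wavelength ~ {deep_water_wavelength(T):6.1f} m")
```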
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How does one exert greater force on the ground by jumping? When one jumps, how does he/she manage to exert a greater force on the ground than their weight? Also, what are the normal force and the reaction force (are they the same thing?), and by Newton's third law, shouldn't the reaction to our weight that the ground exerts on us when we are standing send us flying above the ground? Why doesn't the law apply here? Finally, when we drop a hard stone on the ground, why doesn't it bounce? Plus, why is the force exerted by the stone on the ground greater than its weight?
There's still something missing from all the answers so far. When you drop something on the ground, say, a rock of mass $m$, by the time it makes contact with the ground it's traveling at a velocity $v$ and thus has momentum $p = mv$. To be stopped completely, its momentum has to equal $0$ at the end. So you have a total change in momentum of $\Delta p$. According to (the most literal, I think) Newton's 2nd law, you have $\Delta p = F \Delta t$, where $F$ is the force slowing down the object over the timespan $\Delta t$ (in reality time is continuous and $F$ is probably changing continuously, but this is enough to illustrate the point). So, if the $m = 1\ kg$ rock goes from falling at $v = 10\ m/s$ to $0$ in a millisecond or so, you might have $F = \Delta p/\Delta t = 10\ kg\ m/s /(.001s)=10000\ N$, which is obviously much bigger than just the gravitational force of $F_g \approx 1\ kg \times 10\ m/s^2 = 10\ N$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Why don't the leaves of an electrometer repel each other in water? The leaves of a normal electrometer filled with air will repel each other as they should in an electrostatic demonstration, but what happens if it is filled with water, or even oil, instead? My guess is that the water is charged too, making the net repelling force equal to zero. But what will happen if it is filled with oil or another liquid?
If the electrometer leaves are wetted by the liquid, capillary forces (wicking) will pull them together and not allow them to easily separate. DOI: 10.1021/la902779g DOI: 10.1109/84.232594 DOI: 10.1021/ja983882z http://web.mst.edu/~numbere/cp/chapter%203.htm 3.1.4 Application to Parallel Plates Take two clean microscope slides, immerse them in clean water, press them together. Normal separation forces exceed the strength of the glass. Parallel separation by shear requires considerable force. Electrometer leaf electrostatic forces will be irrelevant up to considerable charge loadings.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the exact relation between $\mathrm{SU(3)}$ flavour symmetry and the Gell-Mann–Nishijima relation I'm trying to understand how the Gell-Mann–Nishijima relation has been derived: \begin{equation} Q = I_3 + \frac{Y}{2} \end{equation} where $Q$ is the electric charge of the quarks, $I_3$ is the isospin quantum number and $Y$ is the hypercharge given by: \begin{equation} Y = B + S \end{equation} where $B$ is the baryon number and $S$ is the strangeness number. Most books (I have looked at) discuss the Gell-Mann–Nishijima in relation to the approximate global $\mathrm{SU(3)}$ flavour symmetry that is associated with the up-,down- and strange-quark at high enough energies. But I have yet to fully understand the connection between the Gell-Mann–Nishijima and the $\mathrm{SU(3)}$ flavour symmetry. Can the Gell-Mann–Nishijima relation somehow been derived or has it simply been postulated by noticing the relation between $Q$, $I_3$ and $Y$? If it can be derived, then I would be very grateful if someone can give a brief outline of how it is derived.
Indeed, the formula only appeared empirically in 1956, before the eightfold way, for hadrons, long before quarks; and was seen to be such a basic fact that it informed the way flavor SU(3) was put together; and was subsequently spatchcocked into the gauge sector of the EW theory a decade after that--hence the alarming asymmetry of the hyper charge values. Its basic point is that isomultiplets entail laddering of charge, $I_3$ being traceless, but in the early days of flavor physics, with just the strange quark, an isosinglet required its charge to be read by something, and so was incorporated into the 3rd component of Gell-Mann's diagonal $\lambda_8$, providing the needed 2nd element of its Cartan sub algebra. Note that, in left-right flavor physics, say after the introduction of the charmed quark, C came as an addition to the strangeness, additively in the hypercharge, so (S+C+B)/2, whereas in the left-handed sector of the EW theory charm and strangeness (and T and B-ness) are in weak isodoublets, having escaped the hypercharge pen!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Should theory be the appropriate term? Should theory be the appropriate term? I mean, for example, because of quantum field theory we have been able to find the subatomic particles that it predicted and build the Standard Model. Why then is it labeled as a theory? Also, wave-particle duality is a widely accepted fact, yet it is labeled as a theory. What is up with that; why call it a theory? Maybe because it promotes the notion of idealism?
From http://en.wikipedia.org/wiki/Scientific_theory: "Scientific theories are testable and make falsifiable predictions. They describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry[...]. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing disease. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge. This is significantly different from the common usage of the word "theory", which implies that something is a guess."
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does water cool faster if we insert a metal canister with ice inside, or if we mix it with the ice directly? Let's say that we have two canisters: a bigger one (a 2 l metal canister) with 1 l of water at 100°C, and a smaller one (a 1 l metal canister) with 1 l of ice. We want to cool the boiled water down to 50°C. One option is to put the ice (only the ice, not the whole canister) into the boiled water and, after some time, get the desired 50°C water. But what if we instead put the whole small canister (with the ice still inside it) into the canister of boiled water: would the boiled water cool down faster? (Since the smaller canister's metal surface has a higher thermal conductivity than pure water.)
Ice absorbs heat from the boiling water by melting. That means that if you put the ice directly in the water, it increases the volume of the liquid and thereby its mass. If you are considering only the temperature (not the volume) initially and finally, dropping the ice straight into the canister will be faster, because even after the ice melts, the cool melt water plus the dropped ice (which has more surface area in contact with the hot water) keeps absorbing heat, helped by convection, until the system reaches equilibrium (here 50°C). Ice inside the small canister, on the other hand, starts to melt and creates a layer of water between the canister wall and the ice, thereby lowering the heat transfer, and this cannot be avoided by any means. So your first option is quicker.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Creating electricity from mains water pressure. Could someone cleverer than me help me out? I had a crazy thought going through my head the other day and I can't lay my mind to rest until I get an answer. Q. How much energy could be produced by using mains water pressure to turn a generator? And would it be feasible to install a system to feed whatever is produced back to the grid? Assuming that the system would be installed in a building where a constant water supply is needed so the generator would be turning continuously, and a rough water pressure of around 3-4 bar. Thanks in advance for any help
You could certainly make electricity this way, it just wouldn't be cost effective. 3-4 bar is the same pressure as a 30-40 meter head of water behind a hydroelectric dam. The energy per unit time depends upon the flow rate (which, for a given pressure drop, depends upon the 4th power of the pipe diameter). Potential energy = pressure × volume, so power = pressure × volume flow rate. I wouldn't want to see your water bill!
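To put rough numbers on it (a hedged sketch with assumed household figures, not from the answer above): hydraulic power is pressure times volume flow rate, and a typical tap flow at mains pressure yields only tens of watts even before turbine and generator losses.

```python
pressure = 3.5e5          # Pa, roughly 3.5 bar mains pressure
flow_lpm = 12.0           # assumed tap flow of 12 litres per minute
flow_m3s = flow_lpm / 1000.0 / 60.0

power_w = pressure * flow_m3s          # ideal hydraulic power, before turbine/generator losses
print(f"ideal power: {power_w:.1f} W")

# Energy content of each cubic metre of water used at this pressure:
energy_per_m3_kwh = pressure * 1.0 / 3.6e6
print(f"energy per m^3 of water: {energy_per_m3_kwh:.3f} kWh")
```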
{ "language": "en", "url": "https://physics.stackexchange.com/questions/103949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }