What is the physical meaning of the third invariant of the strain deviator? In continuum mechanics of materials with zero volumetric change, the state of the material can be expressed by the deviatoric strain tensor instead of the strain tensor itself. To express the plasticity of the material, the plasticity surface is constructed from the second and third strain invariants, i.e., $I_2 = \sqrt{-\frac{1}{2}\text{tr}(\varepsilon_{dev}^2) }$, $I_3 = \det(\varepsilon_{dev})$. It is obvious that the second invariant is not able to describe the tension-compression asymmetry of the material. Therefore, the third invariant is also included in the plasticity surface. Now the question is why the third invariant can express the tension-compression asymmetry. I mean, how does the determinant of the strain deviator determine the tensile or compressive state of the material? Thanks in advance.
If $E\equiv \varepsilon_{dev}$ describes a tension state then $-E$ describes a compression state. Now, $I_2(E)=I_2(-E)$ meaning, as you say, that $I_2$ cannot distinguish traction from compression. However, in 3D, $\det(-E)=-\det(E)$. Thus, $E$ and $-E$ have opposite third invariants. This difference can be exploited to build yield functions which do distinguish traction from compression. I do not know whether $I_3$ has a direct "physical meaning" or not. However, this is the only available choice for isotropic materials. Recall that for an isotropic material, the yield function $f$ must be an isotropic function of $E$; that is $$ f(E) = f(RER') $$ for any rotation $R$ and its inverse $R'$. In that case, $f$ can be written as a function of the invariants: $f(E)=f(I_1,I_2,I_3)$. But $I_1=0$ and $I_2$ is even, meaning that the only way to distinguish traction from compression in isotropic materials is by writing an $I_3$-dependent yield criterion.
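A quick numerical illustration of that parity argument (my addition, not part of the original answer; the quadratic invariant below is the usual $J_2$-type combination $\tfrac{1}{2}\mathrm{tr}(E^2)$ rather than the questioner's exact definition):

```python
# For a random traceless symmetric tensor E, the quadratic invariant is unchanged
# under E -> -E while the determinant flips sign -- which is exactly what lets an
# I_3-dependent criterion see tension vs. compression.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
E = 0.5 * (A + A.T)                  # symmetrize
E -= np.trace(E) / 3 * np.eye(3)     # remove the volumetric part -> deviator

I2 = lambda e: 0.5 * np.trace(e @ e)  # magnitude-type (J2-like) invariant
I3 = lambda e: np.linalg.det(e)

print(I2(E), I2(-E))   # equal: even in E
print(I3(E), I3(-E))   # opposite signs: odd in E
```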
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does increasing tension on a string reduce or increase the harmonic wavelength for a standing wave? I had thought that increasing tension on a string increases the frequency and thus decreases the wavelength. My book says otherwise. Which is correct?
The question is really about how the harmonics in a string change when its tension is increased. Because nothing is said about the length of the string, I guess you need to assume that the length is constant. The frequencies of the harmonics for an initial string tension are $1F_0, 2F_0, 3F_0, \dots, nF_0$. The text specifies that an unknown vibrator is exciting the third harmonic, states that the vibrator frequency does not change, and asks what harmonic will be excited next as the string tension is increased. The only harmonics that initially have a lower frequency than the initial frequency of the third harmonic are the first (fundamental) and the second. As the string tension is increased, all the harmonic frequencies increase. The first one that can reach the initial frequency of the third harmonic as the tension is increased is the second harmonic. The new fundamental frequency will then be $(3F_0)/2$ -- half of the new second-harmonic frequency, which equals the original third-harmonic frequency $3F_0$.
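A short numerical sketch of the same argument (my addition; $F_0$ is an assumed example value). With the length fixed, $f_n \propto n\sqrt{T}$, so one can solve for the tension that brings the second harmonic up to the vibrator frequency $3F_0$:

```python
# Find the tension increase needed for the second harmonic to reach the fixed
# vibrator frequency 3*F0 that originally excited the third harmonic.
import numpy as np

F0 = 100.0          # assumed fundamental frequency at the initial tension, Hz
f_drive = 3 * F0    # fixed vibrator frequency

# f_2(T) = 2 * F0 * sqrt(T/T0) = f_drive  =>  T/T0 = (f_drive / (2*F0))**2
tension_ratio = (f_drive / (2 * F0)) ** 2
print(tension_ratio)                       # 2.25: tension must rise by a factor 9/4
print(F0 * np.sqrt(tension_ratio))         # new fundamental = 150 Hz = (3*F0)/2
```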
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Polyakov Loop and Chemical Potential I have read in a paper (http://arxiv.org/abs/1203.3556) that in a thermal field theory, the chemical potential is $\mu=T \ln P$ where $$T^{-1}=\int_{0}^{\beta} \sqrt{-\xi^2}dt,$$ $\xi$ is $\partial_t$, and $P$ is the Polyakov Loop: $$P=e^{\int_{0}^{\beta} A_a \xi^a dt}.$$ How is the chemical potential related to the Polyakov loop? I did not find anything related on the web, which is why I am asking.
I think the second formula is also not very profound. This is just based on the fact that the chemical potential $$ \Delta S = \int dt \, \mu Q $$ enters the action like the zeroth component of an (imaginary) $U(1)$ gauge field $A_\beta=(i\mu,\vec{0})$. Note that this is the Polyakov line for a $U(1)$ background field, not the Polyakov line of (for example) the dynamical gauge field in QCD.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused with heat as a form of energy I have quite a simple question. Energy can be defined as the capacity to do work. But I have read: "When energy is exchanged between thermodynamic systems by thermal interaction, the transfer of energy is called heat." I can't understand what the work done is. For example, what is the work done in heating water from 20 to 40 °C (1 atm)? So, is heat a form of energy if energy is the capacity to do work?
Heat is energy, you are correct. Here is an analogy. Think of temperature as a kind of measure of an atom's velocity. The faster an atom jiggles the higher its temperature. So far so good? Now let's take the analogy one step further - if something has a velocity, you can calculate its kinetic energy by computing $\frac{1}{2}mv^2$, right? In the same way, using our analogy, one can calculate heat from temperature. The equation is $H = cT$ (where $c$ is a constant, I forget what it is called, heat capacity I think). So, finally, to answer your actual question - as heat "flows" what is really happening is molecules of higher velocity (and therefore higher temperature) are ramming into molecules of lower velocity. Since work is a force times a distance, one can imagine in our analogy that work is being done on the slower atom by the faster atom as they smash (that's the force) over some - very short I'm guessing - distance. By doing work on the slower atom its velocity increases, meaning it has a higher temperature and more energy (just as, say, a car with a higher velocity has more energy than the same car at a slower velocity). Pardon the long answer.
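To put one concrete number on the question's example (my addition, assuming 1 kg of water and the standard specific heat of liquid water), the energy transferred as heat when warming water from 20 °C to 40 °C is $Q = mc\,\Delta T$:

```python
# Heat needed to warm water from 20 C to 40 C at roughly constant pressure.
m = 1.0           # kg of water (assumed)
c = 4186.0        # J/(kg·K), specific heat capacity of liquid water
dT = 40.0 - 20.0  # K
Q = m * c * dT
print(f"{Q/1000:.1f} kJ")   # about 83.7 kJ per kilogram of water
```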
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can I change a solid into a liquid, gas or solute by cutting it again and again until it is a single molecule? If I have a small piece of solid, for example pure iron powder or fine sand (at room temperature), and I slowly cut it in half again and again, waiting each time for it to return to room temperature: Case 1: If I do this in vacuum, does it change into a liquid or gas? I think it might be impossible because intermolecular forces will make it form a solid again. If it is impossible, what is the smallest size (approximately) that I can get, and does it look like a viscous liquid? Case 2: If I do this in water or another polar liquid, does it change into a solute? Case 3: If I do this in carbon tetrachloride or another nonpolar liquid, does it change into a solute?
Solid, liquid and gas are properties that belong only to multiple particles, not a single particle. Those words describe the relationship between different particles, not a single particle. If you end up with a molecule, all you have is a molecule. Now if you end up with zillions of molecules all whizzing around more or less at random, you have a gas. If you end up with a dense bunch (zillions again) of molecules which are very close together but not actually joined by molecular bonds, then it's a liquid. And if you end up with molecules that are clumped together by strong molecular bonds in a more or less rigid structure, it's a solid. Whether a substance is soluble in water is, thankfully, the problem of chemists. A substance can react in many ways to contact with water. Wikipedia's page on Carbon Tetrachloride also lists its solubility in water.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What would qualify as a deceleration rather than an acceleration if speed is unchanged? The instantaneous acceleration $\textbf{a}(t)$ of a particle is defined as the rate of change of its instantaneous velocity $\textbf{v}(t)$: $$\textbf{a}(t)=\frac{\mathrm{d}}{\mathrm{d}t}\textbf{v}(t).\tag{1}$$ If the speed is constant, then $$\textbf{a}(t)=v\frac{\mathrm{d}}{\mathrm{d}t}\hat{\textbf{n}}(t)\tag{2}$$ where $\hat{\textbf{n}}(t)$ is the instantaneous direction of velocity which changes with time. Questions: (i) According to the definition (1), what is a deceleration? (ii) In case (2), when will $\textbf{a}(t)$ represent a deceleration? For example, in uniform circular motion, why is it called the centripetal acceleration and not centripetal deceleration?
Acceleration is the general term for a changing velocity. Deceleration is a kind of acceleration in which the magnitude of the velocity is decreasing. The reason this might be confusing is because the word 'acceleration' is sometimes used to mean that the magnitude of the velocity is increasing, to contrast it with deceleration. One cannot go wrong, however, if one always takes acceleration to mean simply 'changing velocity'. In that case, circular motion corresponds to acceleration (because the velocity is changing) but not deceleration (because its magnitude is not decreasing).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/403864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
How are the apparently unbound degrees of freedom in Einstein field equations filled? Something that has always bothered me in general relativity is the annoying fact that there seems to be too little information in the Einstein field equations themselves. In order to solve a system we need to calculate both the stress energy tensor $T^{\mu \nu}$ and the metric $g^{\mu \nu}$ for all points in spacetime. These tensors have 10 independent components each, so 20 in total. Now the available laws of physics provide us with two bunches of equations. Einstein field equations $$R^{\mu \nu} - \frac{1}{2} Rg^{\mu \nu} = \frac{8 \pi G}{c^4} T^{\mu \nu}$$ Conservation laws $$T^{\mu \nu}_{; \mu} = 0$$ These amount to 14 equations, falling short by 6. In concrete calculations in textbooks this is always countered by strong assumptions on the metric or stress energy tensor, but these seem like ad hoc solutions. What am I not seeing here? Why is this not a fundamental problem?
The energy-momentum tensor has to be generated by some matter content of the theory. Suppose the energy momentum tensor is generated by a (free, massless) scalar field $\phi$ satisfying $$ g^{\mu\nu} \nabla_\mu \nabla_\nu \phi = 0.$$ This will generate an energy-momentum tensor through $$ T_{\mu\nu} = \nabla_\mu \phi\, \nabla_\nu \phi - \tfrac{1}{2} g_{\mu\nu}\, \nabla^\alpha \phi\, \nabla_\alpha \phi. $$ All degrees of freedom are now fixed by the equations.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/404347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the difference between electrons and holes in silicon? Electrons and holes behave differently in a silicon semiconductor (e.g. mobility of holes is one order of magnitude smaller than that of electrons, the collection time of holes at the same electric field is larger than for electrons... ). I was wondering, if holes are simply "a lack of electrons", they should behave in a mirrored way as electrons (if the latter move from $V_a$ to $V_b$ in a given time, the corresponding holes created when these electrons move should move in the opposite direction at the same speed). My question is: what is the origin of a different behavior between electrons and holes?
For effective hole movement many valence electrons must move, each one hopping into the vacancy left by its neighbour. For electron movement only a single conduction electron moves. This is why holes respond more sluggishly to an applied field than conduction electrons do (lower mobility, longer collection time).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/404636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to tune the frequency of a coil? I'm doing a science fair project. I want to do it about wireless energy transmission. In practice it consists of two coils NOT physically connected. One is the transmitter of energy, which is connected to a source of energy. The other(s) is the receiver, which receives the energy through magnetic fields. The main characteristic is that both coils are designed to have the same frequency, since this way the amount of energy transferred is maximized. So my question is, what can I do to equalize their frequencies? Keep in mind that I am a high school student, so please explain it to me in practical terms so I can carry out the project. P.S.: I investigated, and according to this, I must obtain "the maximum output" to pass the point of resonance, but I have no idea what this means.
Normally, the resonant frequency of the antenna can be tuned with a variable capacitor (varactor) using a bias voltage. In direct inductive coupling of coils there is no resonance involved, although capacitors may still be used to match the driver amplifier or receiver detector to the coil. The word antenna in this context is probably a misnomer, for there is no radiation involved; rather the coils act as a loosely coupled transformer (a pair of coupled inductors). You can find the details and design formulas in Umar Azad, Crystal Jing, Ethan Wang: "Link Budget and Capacity Performance of Inductively Coupled Resonant Loops", IEEE Transactions on Antennas and Propagation, Vol. 60, No. 5, May 2012.
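A minimal numerical sketch of the tuning step (my addition; the inductance and capacitance are made-up example values, not measurements of any particular coil). Each coil plus its capacitor forms an LC circuit with resonant frequency $f = 1/(2\pi\sqrt{LC})$, and you match the two sides by adjusting one capacitor:

```python
import math

L = 47e-6   # henry, example coil inductance (measure or estimate yours)
C = 100e-9  # farad, example capacitor

f_res = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency of this coil+capacitor ≈ {f_res/1e3:.1f} kHz")

# To hit a chosen target frequency with a given coil, solve for the capacitor instead:
f_target = 100e3
C_needed = 1.0 / ((2 * math.pi * f_target) ** 2 * L)
print(f"capacitor needed for {f_target/1e3:.0f} kHz ≈ {C_needed*1e9:.1f} nF")
```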
{ "language": "en", "url": "https://physics.stackexchange.com/questions/404743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Experiment - does mass of a moving body really increase or is it invariant? Suppose we have a mechanical balance, with two identical particles kept in the two sides. Now the balance does not show any deflection. Now, one of the particles is given some constant horizontal velocity. Will the balance show the moving particle to be heavier (that side will move downward )or not? (There is no friction between the balance and the moving particle)
Yes, it would -- the elementary pre-general relativity answer is "because gravity (which is what is measured by your balance) depends on the energy (mass plus kinetic energy) of an object, not the rest mass". So although the mass remains the same, a special relativistic correction to Newtonian gravity would be to consider the total energy instead of invariant mass. This answer is unsatisfactory, however, because it doesn't seem to make sense to consider energy individually when calculating gravity, when it's just the time-like component of the four-momentum $(E, p_x,p_y,p_z)$ -- it would actually seem more natural to use mass (the norm of this vector) in your calculation than to use energy. This is actually the theoretical motivation for general relativity, which explains that this force which depends on energy, $\Gamma_{00}^i$, is just one of the components (the "time-time component") of the sixteen components of the gravitational field tensor, albeit the most significant component we see at weak gravitational fields and low speeds. This is seen, e.g. in the Einstein field equation, where each of the sixteen components of the Einstein tensor depends on a corresponding component of the energy-momentum tensor, such as energy, momentum, pressure and shear stress.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/404855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can a horizontally fired bullet reach the ground the same time a dropped bullet does? I studied projectile motion and now I know that we can treat each component of motion independently. Since gravitational acceleration acts on both a horizontally launched bullet and a vertically dropped bullet in free fall, they both will reach the ground at the same time as their vertical initial velocity is zero. This is what I studied in high school. But I found it against a real observation that a horizontally fired bullet will travel for much longer time compared to a simply dropped bullet before hitting the ground. Could you please elaborate on how to connect the physics of the situation and real life observations?
It happens in real life just as physics says. It was tested on the TV show Mythbusters: https://www.youtube.com/watch?v=tF_zv3TCT1U
{ "language": "en", "url": "https://physics.stackexchange.com/questions/405005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Using complex exponential to represent waves in EM Ever since we've been using exponentials to work with electromagnetic waves, I've been confused about the imaginary portion and want to confirm my thinking. What does the imaginary portion represent? Nothing, right? It's just a side effect of using complex exponentials because they are very easy to deal with algebraically. So, in reality we can completely restructure all the math to be written in terms of cos/sin instead and never let a single imaginary number appear, right?
The exponential function is easier to manipulate than the real trigonometric functions, in particular when it comes to derivatives and integrals: the manipulations can be done completely algebraically using complex numbers. In practice, it's easier to manipulate $e^{i\alpha}e^{i\beta}=e^{i(\alpha+\beta)}$ than $$ \cos(\alpha)\cos(\beta)=\frac{1}{2}\left(\cos(\alpha-\beta)+\cos(\alpha+\beta)\right) $$ etc. Of course the physical signal is real, which means that one must eventually return to either cosine or sine form. For this, a convention is chosen, and the most common one is to take the real part of the exponentials and ignore the imaginary part. One can then include various effects by using complex propagation constants, complex permittivity etc.
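A small check of that bookkeeping convention (my addition): represent each real cosine as the real part of a complex amplitude times $e^{i\omega t}$, do the addition with the complex amplitudes, and take the real part only at the end; the result matches working with cosines throughout.

```python
# Add two real sinusoids via their complex amplitudes (phasors) and compare with
# adding the real cosines directly.
import numpy as np

omega = 2 * np.pi * 50.0
t = np.linspace(0.0, 0.1, 1000)

A1, phi1 = 2.0, 0.3
A2, phi2 = 1.5, -1.1

# complex route: one complex amplitude per signal, add, take Re at the very end
P = A1 * np.exp(1j * phi1) + A2 * np.exp(1j * phi2)
signal_complex = np.real(P * np.exp(1j * omega * t))

# purely real (cosine) route
signal_real = A1 * np.cos(omega * t + phi1) + A2 * np.cos(omega * t + phi2)

print(np.allclose(signal_complex, signal_real))   # True
```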
{ "language": "en", "url": "https://physics.stackexchange.com/questions/405143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why do really cold objects evaporate really quick? So I saw a video of LNG (liquid natural gas) and when it got in contact with water, which was room temp the LNG evaporated instantly...why? Ice takes a while to evaporate like a sec even when hot water is dumped...why do really “cold” liquids evaporate super quick. Is there a name to this phenomenon? If you can dumb down your answer that would be great...I am only in 9th grade.
The “boiling point” of LNG is -162 °C (-259 °F), so putting it in touch with even cool water (10 °C) is about a 170 °C temperature difference. That’s like putting water on a 270 °C (520 °F) griddle. The water boils quickly. To add to the effect, water requires an unusually large amount of energy to boil a cc. LNG needs a lot less to boil a cc, so heat transfer can boil it fast. But the biggest effect is the huge temperature difference. The world looks HOT to cryogenic liquids like LNG.
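Rough numbers behind "water needs much more energy to boil than LNG" (my addition; the latent heats and the LNG density are approximate textbook values, treating LNG as mostly methane):

```python
# Energy needed to vaporize one cubic centimetre of each liquid (very rough figures).
rho_water, L_water = 1.00, 2260.0   # g/cm^3 and J/g (latent heat of vaporization)
rho_lng,   L_lng   = 0.45,  510.0   # g/cm^3 and J/g, approximating LNG as methane

print("water:", rho_water * L_water, "J per cm^3")   # ~2260 J
print("LNG:  ", rho_lng * L_lng,   "J per cm^3")     # ~230 J, roughly ten times less
```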
{ "language": "en", "url": "https://physics.stackexchange.com/questions/405554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Kirchhoff's radiation law So, I have some problems understanding Kirchhoff's radiation law. My textbook, Transport Processes and Separation Process Principles, by Geankoplis, states that at the same temperature T1 the emissivity and absorptivity of a surface are equal, which holds for any black or non-black solid surface. In a problem from my professor it is given that: The Sun irradiates a flat surface with 1000 W/m². The absorptivity of the plate is 0.9 and the emissivity is 0.1. The air temperature is 20 °C and the heat transfer coefficient is 15 W/(K·m²). Calculate the surface temperature at equilibrium if the bottom is insulated. My question is: how is it possible that the emissivity and absorptivity in this case are not equal, which contradicts Kirchhoff's law?
Emissivity and absorptivity are both functions of wavelength. The plate may absorb 90 % of sunlight ($\lambda \approx 0.5\,\mu$m) and have an emissivity and absorptivity of 0.1 in the thermal infrared ($\lambda \approx 10\,\mu$m). These numbers are a bit unlikely; it is usually the other way around.
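For completeness, here is a back-of-the-envelope solve of the professor's problem using exactly those wavelength-dependent values (my addition; I assume the radiative surroundings are also at the 20 °C air temperature). The steady state balances absorbed solar flux against convection plus re-emitted infrared:

```python
# Solve alpha*G = h*(T - T_air) + eps*sigma*(T^4 - T_air^4) for the plate temperature.
from scipy.optimize import brentq

alpha, eps = 0.9, 0.1          # solar absorptivity, infrared emissivity
G, h = 1000.0, 15.0            # W/m^2 solar flux, W/(m^2 K) convection coefficient
T_air = 293.15                 # K (20 C)
sigma = 5.670e-8               # W/(m^2 K^4), Stefan-Boltzmann constant

def balance(T):
    return alpha * G - h * (T - T_air) - eps * sigma * (T**4 - T_air**4)

T_eq = brentq(balance, T_air, T_air + 200.0)
print(f"equilibrium surface temperature ≈ {T_eq - 273.15:.0f} °C")   # roughly 75-80 °C
```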
{ "language": "en", "url": "https://physics.stackexchange.com/questions/405702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is Work done = Total Energy in System? I have recently learned about three types of energy: kinetic, elastic and gravitational potential energy. I have also learned about work done on a particle. I would like to know if the work done on a system is equivalent to the total energy in a system? I ask this because when we determine the work done by a force compressing or extending a spring, we do this by finding $$W = \int \frac{\lambda x}{l} \thinspace dx= \frac{\lambda x^2}{2l}$$ and then we define this result as the elastic potential energy. Does this mean that total energy is equivalent to work?
The short answer is "no", although it will depend on what you call "total energy". The point is that work done equals the variation of kinetic energy: $$W=\Delta E_k$$ (I'm not considering heating, just mechanics). But, there are two types of forces. We can divide the work in two parts: work done by conservative forces and work done by non-conservative forces. The first one can be written as $W_c=-\Delta E_p$. So then you have $$ \Delta E_k = W_c + W_{nc} = -\Delta E_p +W_{nc} $$ so, if you rearrange it, you have $$\Delta E_k + \Delta E_p = W_{nc}$$ So total mechanical energy variation equals the work done by non-conservative forces. $\Delta E_m=W_{nc}$. * *If there isn't any non-conservative force on the system, the energy will be conserved ($E_m$ won't vary$. *When you calculate that integra, you are calculating the work done by a conservative forces. Potential energy is "minus the work done by a conservative force". *But you must be careful to account all forces in the system. There can be many conservative forces (ellastic, gravitational, electrostatic...), and there can also be non-conservatives too!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/405868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Einstein field equations to the Alcubierre metric I was wondering how Alcubierre derived the metric for the warp drive? Sources have said it's based on Einstein's field equations, but how did he go from this to the metric?
Alcubierre started with the metric and used the Einstein equation to calculate what stress energy tensor was required. The Einstein equation tells us: $$ R_{\mu\nu} - \tfrac{1}{2}R g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} $$ Normally we start with a known stress-energy tensor $T_{\mu\nu}$ and we're trying to solve the equation to find the metric. This is in general exceedingly hard. However if you start with a metric it's easy to calculate the Ricci tensor and scalar so the left hand side of the equation is easy to calculate, and therefore the matching stress-energy tensor is easy to calculate. The only trouble is that doing things this way round will usually produce an unphysical stress-energy tensor e.g. one that involves exotic matter. And indeed this is exactly what happens for the Alcubierre metric - it requires a ring of exotic matter.
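To illustrate the "start from the metric" direction (my addition, and only a sketch: this is not Alcubierre's actual computation, which uses the full warp-bubble metric), one can hand a metric to a symbolic computation of the Einstein tensor and read off the stress-energy it would require. Feeding in the Schwarzschild metric is a convenient sanity check, since it is a vacuum solution and the Einstein tensor should come out identically zero:

```python
# Compute the Einstein tensor of a given metric with sympy (geometric units G = c = 1).
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # Schwarzschild metric g_{mu nu}
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d])) for d in range(n))/2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bd} = d_a Gamma^a_{bd} - d_d Gamma^a_{ba}
#                       + Gamma^a_{ae} Gamma^e_{bd} - Gamma^a_{de} Gamma^e_{ba}
def ricci(b, d):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][d], x[a]) - sp.diff(Gamma[a][b][a], x[d])
        + sum(Gamma[a][a][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][a]
              for e in range(n))
        for a in range(n)))

Ric = sp.Matrix(n, n, lambda b, d: ricci(b, d))
Rscal = sp.simplify(sum(ginv[b, d]*Ric[b, d] for b in range(n) for d in range(n)))
G_einstein = sp.simplify(Ric - Rscal*g/2)   # Einstein tensor G_{mu nu}
print(G_einstein)                           # expect the zero matrix: vacuum solution
# For a non-vacuum metric (e.g. Alcubierre's), read off T_{mu nu} = c^4/(8 pi G) G_{mu nu}.
```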
{ "language": "en", "url": "https://physics.stackexchange.com/questions/406012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Horizontal Beam Bending Due to Gravity I just thought about this problem recently but am not sure where to find a solution. I am not certain which parameters are important in this problem, so please bear with me. Assume that a beam of length $L$ with rectangular cross section of width $W$ and height $H$ is pinned on the wall horizontally so that the width of the beam's cross section is parallel to the ground and the height of the beam's cross section is almost parallel to the normal to the ground (almost parallel because the beam bends, causing the height at each point to tilt a little). Suppose that the beam has mass $M$ which is distributed uniformly. Under gravity, the beam bends (see the picture below). What is the shape of this bent beam? (It suffices to determine the shape of the central line that runs the length of the beam.) I assume that Young's modulus $E$ and the shear modulus $G$ of the beam are needed. Furthermore, the material that makes up the beam is assumed to be isotropic. With the parameters $L$, $W$, $H$, $M$, $E$, $G$, and the gravitational acceleration $g$, is this problem complete? Do you know how to solve it or any reference towards a solution? I would like the exact shape, rather than an approximated one. I have seen an approximation by a circular arc. Thank you very much in advance.
This is a typical cantilever beam of length $L$ under a uniform load $q$ (its own weight per unit length, $q = Mg/L$). Assuming the beam is slender ($L/H > 20$), the shear deflection will be of tertiary order and will not have a meaningful impact on the bending. The tip deflection is then $$\delta_{max} = \frac{qL^4}{8EI}, \quad \text{with} \quad I = \frac{WH^3}{12},$$ and the tip slope is $$\theta_b = \frac{qL^3}{6EI}.$$ The full deflection curve (measured from the fixed end) is the standard small-deflection result $$w(x) = \frac{q\,x^2}{24EI}\left(6L^2 - 4Lx + x^2\right),$$ which reduces to $\delta_{max}$ at $x=L$. The bending moment at the support is $qL^2/2$ and the shear at the support is $V = qL$.
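Evaluating that deflection curve numerically (my addition; the beam dimensions and steel properties are illustrative assumptions, a 1 m bar of 40 mm × 10 mm cross-section):

```python
# Self-weight deflection of a slender steel cantilever, small-deflection beam theory.
import numpy as np

E = 200e9                     # Pa, Young's modulus (steel, assumed)
L, W, H = 1.0, 0.04, 0.01     # m: length, width, height
rho, g = 7850.0, 9.81         # kg/m^3, m/s^2
I = W * H**3 / 12             # second moment of area
q = rho * W * H * g           # self-weight per unit length, N/m

x = np.linspace(0.0, L, 11)
w = q * x**2 * (6*L**2 - 4*L*x + x**2) / (24 * E * I)   # downward deflection
print(f"tip deflection = {w[-1]*1000:.1f} mm")           # equals q L^4 / (8 E I)
```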
{ "language": "en", "url": "https://physics.stackexchange.com/questions/406268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Inertia on a rotating disc? If I toss a ball upwards in a train moving with uniform velocity, the ball will land right back in my hand. This is because the ball has inertia and it continues to move forward at the speed of train even after leaving my hand. Now consider I'm standing on the outer edge of a rotating disc (merry-go-round). If I toss a ball upwards, it doesn't fall back in my hand. Why? Doesn't it have a rotational inertia (is that even a term?) to continue rotating even after I let go of it? Is the ball going to land on a new location on the disc? Or is it going to fall away from the disc? At least the ball should have inertia of tangential velocity at which I tossed the ball upwards, right? So the ball should fall away from the disc? Can someone describe what happens in this situation?
On the merry-go-round, you are constantly accelerating inwards with acceleration $a=\frac{v^2}{R}=R\omega^2$. When you release the ball, it is no longer accelerating (horizontally) and will move in a straight line (horizontally). So whereas you accelerate inwards to maintain your circular motion, the ball follows a straight line and lands away from the disc. By the way, what you call rotational inertia is called moment of inertia and for a particle of mass $m$ at radius $R$ it is $I=mR^2$. When you release the ball, it is no longer affected by any force $F$ and therefore there is no torque $\tau=RF$ from your hand and consequently it would have constant angular momentum $L=\omega I$.
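A small numerical sketch of the toss (my addition, with assumed values for the radius, spin rate and toss speed): the ball keeps the tangential velocity it had at release and flies in a straight line horizontally, while the thrower continues around the circle.

```python
# Compare where the ball lands with where the thrower has moved to on the disc.
import numpy as np

R, omega = 2.0, 1.5        # m, rad/s (assumed)
v_up, g = 3.0, 9.81        # m/s upward toss speed, m/s^2

t_flight = 2 * v_up / g    # time until the ball returns to the release height

# ball: starts at (R, 0), moves with constant horizontal velocity (0, R*omega)
ball = np.array([R, R * omega * t_flight])

# thrower: stays on the circle of radius R
thrower = R * np.array([np.cos(omega * t_flight), np.sin(omega * t_flight)])

print("ball lands at     ", ball)
print("thrower is now at ", thrower)
print("miss distance =", np.linalg.norm(ball - thrower), "m")
print("ball's distance from axis =", np.linalg.norm(ball), "m (> R: it drifts outward)")
```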
{ "language": "en", "url": "https://physics.stackexchange.com/questions/406384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 1 }
What is the difference between relative time dilation and absolute time dilation I know special relativity says that traveling at high speeds (or really any speed) causes time dilation; and General relativity says that gravity also causes time dilation. I was wondering if relative time dilation (where two observers each measure the other's time to be slow) was caused not by time dilation, but instead because with the relative velocity difference between them, if they became increasingly far from each other, light would take longer and longer to reach them from the other. This would result in them both observing each other to have a slower time, though neither would necessarily experience the time dilation.
It's a sensible thought but no. "A sees B's clock running slow." , which you meet in introductory relativity explanations, is shorthand for "A sees the ticks of B's clock arrive at a certain rate. A knows that with every tick, the clock is getting further away (or, in some cases, nearer) so each light signal has further (or less far) to travel, and A compensates for that in working out the rate at which B's clock had to be ticking in order to arrive at the rate they perceive. This calculated rate is slow."
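Some numbers that make the distinction concrete (my addition), for a clock receding at $0.6c$: the rate at which the ticks are literally seen to arrive (relativistic Doppler) versus the rate A calculates for B's clock after correcting for the growing light-travel time (time dilation):

```python
# Doppler-shifted arrival rate vs. the time-dilation factor for a receding clock.
import math

beta = 0.6                                    # B recedes from A at 0.6 c
doppler = math.sqrt((1 - beta) / (1 + beta))  # arrival rate / emission rate
gamma = 1 / math.sqrt(1 - beta**2)

print(f"ticks arrive at {doppler:.2f} of the emitted rate (what A literally sees)")
print(f"after correcting for light travel time, B's clock runs at 1/gamma = {1/gamma:.2f}")
```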
{ "language": "en", "url": "https://physics.stackexchange.com/questions/406840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Line element in Kruskal coordinates I try to calculate the line element in Kruskal coordinates, these coordinates use the Schwarzschild coordinates but replace $t$ and $r$ by two new variables. $$ T = \sqrt{\frac{r}{2GM} - 1} \ e^{r/4GM} \sinh \left( \frac{t}{4GM} \right) \\ X = \sqrt{\frac{r}{2GM} - 1} \ e^{r/4GM} \cosh \left( \frac{t}{4GM} \right) $$ Wikipedia shows the result of the line element. $$ ds^2 = \frac{32 G^3M^3}{r} e^{-r/2GM} (-dT^2 + dX^2) + r^2d\Omega^2 $$ I tried to calculate the metric tensor using $ds^2 = g_{ij} \ dx^i dx^j$. As $T$ and $X$ show no dependence in $\theta$ and $\phi$, the $d\Omega$ seems to make sense, but the calculation of the first component of $g$ was not working. $$ g_{tt} = J^TJ = \frac{\partial T}{\partial t} \frac{\partial T}{\partial t} + \frac{\partial X}{\partial t} \frac{\partial X}{\partial t}\\ = \frac{1}{32} \left( \frac{r}{GM} - 2 \right) \frac{ e^{\frac{1}{2} \frac{r}{GM}}}{G^2M^2} \left( \cosh^2 \left( \frac{t}{4GM} \right) + \sinh^2 \left( \frac{t}{4GM} \right) \right) $$ Is this the right way to compute the line elements? What would be better way to calculate the line elements (maybe starting with the Schwarzschild-coordinates)?
I don't think you can derive the line element with the Jacobian $J$ that way. The Kruskal-Szekeres line element: beginning with the Schwarzschild line element: \begin{align*} &\boxed{ds^2 =\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2-r^2\,d\Omega^2}\\\\ r_s &:=\frac{2\,G\,M}{c^2} \,,\quad \text{keeping only the $t,r$ part:}\\ ds^2 & =\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2 \end{align*} Step I) \begin{align*} &\text{for} \quad ds^2=0\\ 0&=\left(1-\frac{r_s}{r}\right)\,dt^2-\left(1-\frac{r_s}{r}\right)^{-1}\,dr^2\,,\Rightarrow\\ \left(\frac{dt}{dr}\right)^2&=\left(1-\frac{r_s}{r}\right)^{-2}\,,\Rightarrow \quad t(r)=\pm\underbrace{\left[r+r_s\ln\left(\frac{r}{r_s}-1\right)\right]}_{r^*}\\ &\Rightarrow\\ \frac{dr^*}{dr}&=\left(1-\frac{r_s}{r}\right)^{-1}\,,\quad \frac{dr}{dr^*}=\left(1-\frac{r_s}{r}\right)\,,&(1) \end{align*} Step II) \begin{align*} &\text{New coordinates}\\ u & =t+r^* \\ v & =t-r^*\\ &\Rightarrow\\ t&=\frac{1}{2}(u+v)\,,\quad dt=\frac{1}{2}(du+dv)\\ r^*&=\frac{1}{2}(u-v)\,,\quad dr^*=\frac{1}{2}(du-dv)\\ dr&=\left(1-\frac{r_s}{r}\right)\,dr^*=\frac{1}{2}\,\left(1-\frac{r_s}{r}\right) (du-dv) \quad\quad(\text{With equation (1)})\\ \Rightarrow \end{align*} \begin{align*} ds^2 &=\left(1-\frac{r_s}{r}\right)\,du\,dv \end{align*} Step III) \begin{align*} r^* & =\left[r+r_s\ln\left(\frac{r}{r_s}-1\right)\right]= \frac{1}{2}(u-v)\,\Rightarrow\\ \left(\frac{r}{r_s}-1\right)&=\exp\left(-\frac{r}{r_s}\right) \,\exp\left(\frac{1}{2\,r_s}(u-v)\right)\\ \left(1-\frac{r_s}{r}\right)&=\frac{r_s}{r}\left(\frac{r}{r_s}-1\right)\\ \,\Rightarrow\\\\ ds^2&=\frac{r_s}{r}\,\exp\left(-\frac{r}{r_s}\right) \,\exp\left(\frac{1}{2\,r_s}(u-v)\right)\,du\,dv \end{align*} Step IV) \begin{align*} &\text{New coordinates}\\ U= & -\exp\left(\frac{u}{2\,r_s}\right) \,,\quad \frac{dU}{du}=-\frac{1}{2\,r_s}\,\exp\left(\frac{u}{2\,r_s}\right)\\ V= & \exp\left(-\frac{v}{2\,r_s}\right) \,,\quad \frac{dV}{dv}=-\frac{1}{2\,r_s}\,\exp\left(-\frac{v}{2\,r_s}\right)\\ \,\Rightarrow\\\\ ds^2&=\frac{4\,r_s^3}{r}\exp\left(-\frac{r}{r_s}\right) \,dU\,dV \end{align*} Step V) \begin{align*} &\text{New coordinates}\\ U & =T-X\,,\quad dU=dT-dX \\ V & =T+X\,,\quad dV=dT+dX\\ \,\Rightarrow\\\\ &\boxed{ds^2=\frac{4\,r_s^3}{r}\exp\left(-\frac{r}{r_s}\right) \left(dT^2-dX^2\right)} \end{align*} With matrices and vectors, the same Kruskal-Szekeres line element (in this section the symbols $u,v$ are re-used for the current pair of coordinates at each step). Beginning with: \begin{align*} ds^2 & =a\,du\,dv\\ &\Rightarrow\\ g&=\frac{1}{2}\begin{bmatrix} 0 & a \\ a & 0 \\ \end{bmatrix}\\\\ q'&=\begin{bmatrix} du \\ dv \\ \end{bmatrix}\,,\quad q=\begin{bmatrix} u \\ v \\ \end{bmatrix} \,,\quad a=\left(1-\frac{r_s}{r}\right) \end{align*} Step I) \begin{align*} R&= \begin{bmatrix} \frac{1}{2}(u+v) \\ \frac{1}{2}(u-v) \\ \end{bmatrix} \,\Rightarrow\quad J_1=\frac{dR}{dq}= \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \\ \end{bmatrix}\\\\ ds^2=&a\,q'^T\,J_1^T\,\eta\,J_1\,q'=a\,du\,dv \end{align*} where $\eta= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}$ Step II) \begin{align*} a&\mapsto {\it r_s}\,{{\rm e}^{-{\frac {r}{{\it r_s}}}}}{{\rm e}^{1/2\,{\frac {u-v }{{\it r_s}}}}}{r}^{-1} \\\\ ds^2&=a\,du\,dv={{\it du}}^{2}{\it r_s}\,{{\rm e}^{-1/2\,{\frac {2\,r-u+v}{{\it r_s}}}}} {r}^{-1}-{{\it dv}}^{2}{{\rm e}^{1/2\,{\frac {2\,r-u+v}{{\it r_s}}}}}r{ {\it r_s}}^{-1} \end{align*} Step III) \begin{align*} R & = \begin{bmatrix} -\exp\left(\frac{u}{2\,r_s}\right) \\ \exp\left(-\frac{v}{2\,r_s}\right) \\ \end{bmatrix}\,,\Rightarrow\quad J_2=\frac{dR}{dq}=\begin{bmatrix} -\frac{2\,r_s}{\exp\left(\frac{u}{2\,r_s}\right)} & 0 \\ 0 & -\frac{2\,r_s}{\exp\left(-\frac{v}{2\,r_s}\right)} \\ \end{bmatrix}\\\\ ds^2=&q'^T\,J_2^T\,J_1^T\,g\,J_1\,J_2\,q'= \frac{4\,r_s^3\,\exp\left(-\frac{r}{r_s}\right)}{r}\,du\,dv \end{align*} Step IV) \begin{align*} R & = \begin{bmatrix} u-v \\ u+v \\ \end{bmatrix}\,,\Rightarrow\quad J_3=\frac{dR}{dq}=\begin{bmatrix} 1 & -1 \\ 1 & 1 \\ \end{bmatrix}\\\\ ds^2=&q'^T\,J_3^T\,J_2^T\,J_1^T\,g\,J_1\,J_2\,J_3\,q' = \frac{4\,r_s^3\,\exp\left(-\frac{r}{r_s}\right)}{r}\left( du^2-dv^2 \right) \end{align*}
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Approximating sums as integrals and divergent terms I have the following sum (notice that the sum starts from 2, i.e. there's no divergence): $$\sum_{i=2}^{N}C_i\dfrac{\exp{\left(-k| \mathbf{R}_i-\mathbf{R}_1| \right) }}{| \mathbf{R}_i-\mathbf{R}_1|}$$ Where $\mathbf{R}_i$ are vectors belonging to $\mathbb{R}^3$ and are enclosed in some volume $V$ (They represent the positions of some atoms). $C_i$ is some well behaved function (we might aswell take it to be 1). Now suppose I want to approximate this sum as an integral, in the limit where $N \rightarrow \infty$ and the atoms at position $\mathbf{R}_i$ are densely close to each other. My tentative answer would be to write: $$\lim_{N \rightarrow \infty} \sum_{i=2}^{N}C_i \dfrac{\exp{\left(-k| \mathbf{R}_i-\mathbf{R}_1| \right) }}{| \mathbf{R}_i-\mathbf{R}_1|} = \int_V d^3\mathbf{R} \dfrac{\exp{\left(-k| \mathbf{R}-\mathbf{R_1}| \right) }}{| \mathbf{R}-\mathbf{R_1}|} \rho(\mathbf{R}) C(\mathbf{R}) $$ Where in this limit: $\mathbf{R}:=\mathbf{R}_i$, and $\rho(\mathbf{R})=\dfrac{N}{V}$ Is this in some way rigorous? I think it makes sense as I often saw a similar procedure in Statistical Mechanics. Now, what about the term $\mathbf{R}_i=\mathbf{R}_1$? In the sum that term is divergent and is not included. But in the integral it is somewhat impossible to exclude it, and it doesn't give any problem as it's divergence seems to be cancelled by the integration in 3 variables. Is there a way to convince myself that the error I'm making is negligible?
Make a substitution $\mathbf R' = \mathbf R - \mathbf R_1$ $$\int_V d^3\mathbf{R} \dfrac{\exp{\left(-k| \mathbf{R}-\mathbf{R_1}| \right) }}{| \mathbf{R}-\mathbf{R_1}|} \rho(\mathbf{R}) C(\mathbf{R}) = \int_{V'} d^3\mathbf{R'} \dfrac{e^{-k| \mathbf{R}'| }}{| \mathbf{R}'|} \rho(\mathbf{R'}+\mathbf R_1) C(\mathbf{R'}+\mathbf R_1)$$ Let's now turn to polar coordinates centered in $\mathbf R_1$, with $r' = |\mathbf R'|$. It might now be quite hard to convert $\rho$ and $C$ to polar coordinates in this frame of reference, depending on the symmetries of your problem. If, as I suspect, $\rho$ is unknown and will be found using this integral, then you shouldn't have a problem. But I don't know, and I hope this helps anyway. $$ ... = \int r'e^{-kr'}\rho(r',\theta,\phi)\, C(r', \theta, \phi) \,\sin\theta\, d\theta\, d\phi\, dr'.$$ Notice that changing coordinates introduced an $r'^2$ factor (from the volume element $r'^2\sin\theta\,dr'\,d\theta\,d\phi$), which cancels the $1/r'$. This shows (unless I'm missing something!) that your integral doesn't diverge, if $\rho$ and $C$ are well-behaved.
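A quick numerical check that the would-be singular point is harmless (my addition, taking $\rho C = 1$ and a spherical volume of radius $R$ for simplicity): after the change of variables the radial integrand is $r e^{-kr}$, which is finite at $r=0$, and the numerical integral matches the closed form $4\pi\left[1 - e^{-kR}(1+kR)\right]/k^2$:

```python
# Verify that the 1/|R - R_1| singularity is integrable for the Yukawa-type kernel.
import numpy as np
from scipy.integrate import quad

k, Rmax = 2.0, 5.0
radial, _ = quad(lambda r: r * np.exp(-k * r), 0.0, Rmax)
numeric = 4 * np.pi * radial
analytic = 4 * np.pi * (1 - np.exp(-k * Rmax) * (1 + k * Rmax)) / k**2
print(numeric, analytic)   # agree: the integral is finite despite the 1/r factor
```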
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
In which direction does the normal force point if a rod can swivel? A rod is attached to a wall in such a way it can swivel. In this case: In which direction does the force (of the wall on the rod) point to? I drew the blue force as I would make a force diagram. Am I wrong? Here is an example in which the rod can swivel, but now the normal force is perpendicular to the wall. The direction of the force here is different. Why? Is maybe one of the pictures wrong? Also: What is the recipe here? How do we determine the direction?
The pin forces can point in any direction, since all directions are constrained for motion. You just can't have a normal force in the same direction as sliding is allowed because that would mean the joint can do/consume work. Now for any example the actual direction is such that all forces converge to a single point. Slide the force vectors along their line of action such that they meet at a single point. At this location the forces must balance, and that is how the magnitude and direction of the pin force (pink) is found, as well as the magnitude of the tension (black).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Connecting a charged capacitor to an uncharged capacitor I was attending a lecture about capacitors and something confused me. If I charge a capacitor using a DC supply, the capacitor will gain charge $Q_0$. Now, if I discharge it through an uncharged capacitor in this arrangement, according to the lecture notes, the capacitors share the total charge $Q_0$. Now, I had a question. Aren't there already electrons on the uncharged capacitor, which flow between the two capacitors to produce an equal p.d. across both capacitors, making the total charge in this circuit greater than $Q_0$?
When we say that a capacitor is uncharged it means that the net charge on each plate of the capacitor is zero ie equal numbers of positively charged ions and negatively charged electrons. The charged capacitor also has a net zero charge it just so happens that there is a net surplus of electrons on one plate and an equal net deficit of electrons on the other plate. The magnitude of the surplus/deficit you have called $Q_0$. Overall the net charge on the system of the two capacitors before connection is zero and stays zero after connection. If you start with a surplus/deficit of $Q_0$ then it will stay as such because there is no way that the charges can neutralise one another as they reside on two different parts of the circuit separated by insulators between the plates.
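The redistribution in numbers (my addition, with assumed example capacitances): the surplus $Q_0$ is shared so both capacitors end at the same potential; no new charge appears, and some of the initial field energy is dissipated in the connecting resistance along the way:

```python
# Charge sharing between a charged and an initially uncharged capacitor in parallel.
C1, C2 = 10e-6, 22e-6        # farads (assumed values)
Q0 = 100e-6                  # coulombs initially on C1

V = Q0 / (C1 + C2)           # common final voltage
Q1, Q2 = C1 * V, C2 * V      # final charges; Q1 + Q2 == Q0 (no new charge)

E_before = Q0**2 / (2 * C1)
E_after = 0.5 * (C1 + C2) * V**2
print(f"V = {V:.2f} V, Q1 = {Q1*1e6:.1f} uC, Q2 = {Q2*1e6:.1f} uC")
print(f"energy before = {E_before*1e3:.2f} mJ, after = {E_after*1e3:.2f} mJ")
```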
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to check if a thing I have is exactly of 1 kg or not? The definition of a kilogram, as far as I know, is: Kilogram: the mass of a cylinder made of platinum-iridium alloy kept at the International Bureau of Weights and Measures is defined as 1 kg. But it is not possible to go and check against that reference all the time. So is there any other way to check whether a mass is exactly 1 kg?
You measure it on a calibrated scale. But because local gravity varies by 0.25% around the Earth, you generally have a set of calibrated masses with your scale and (at least on digital scales) a software option to perform a calibration. These masses are checked against a mass at the maker of the scale, and those masses are checked by a calibration service company, whose masses are checked by some national laboratory, and so on until the original Paris Kg. This process is a bit annoying, prone to error and ultimately involves carrying lumps of metal to Paris - so there is a plan to redefine the Kg in a way that any laboratory can make their own measurement
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are there only four fundamental interactions of nature? Is there an answer to the question why there are only four fundamental interactions of nature?
Because we don't need more. Well, we haven't found any evidence of any others. And until then, there's no need. Granted, some experiments might show indication of something else going on that pushes revision of the Standard Model. On the mathematical side, this can be explained from symmetry: the Standard Model Lagrangian obeys a certain set of symmetry operations, which physicists assume to be valid. From this Lagrangian, using the Quantum Field Theory formalism, the separate "fundamental interactions" can be derived. From Wikipedia (emphasis mine): The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3)×SU(2)×U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). The three interactions mentioned here are slightly different categorizations of your four fundamental interactions, but essentially the same (except maybe the Brout-Englert-Higgs part). The fourth, gravitation, is assumed to be somewhat related to the Higgs part of the Standard Model, and doesn't yet quite fit in the rest of the Standard Model.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 10, "answer_id": 1 }
Angle in pair production Assuming a very high energy photon (energy $E$) crosses the atmosphere and produces an electron-positron pair, I would like to know what is the angle between these to leptons produced. I was trying to calculate it by applying the energy-momentum conservation and realized that in this case the angle could be 0 if the momentum $p$ does not need to be conserved. Question: Does $p$ need to be conserved in the interaction or is it enough that the following relation applies:$$ E^2=2\left(p_\text{e}c\right)^2+2\left(m_\text{e}c^2\right)^2 \,,$$where $p_\text{e}$ is the momentum of the resulting electron/positron and $m_\text{e}$ its mass?
The center of momentum frame of the resulting electron-positron would have to be a $0$ momentum frame of the starting photon in order for momentum to be conserved, but photons can't have $0$ momentum. This is why pair production must occur near a nucleus or such to receive some recoil. The usual way to derive the angle I think is to consider the recoil, but in the limit where the recoil momentum is small relative to the other momenta (i.e., it's approximately $0$). (So, to answer the question, momentum is of course conserved, but you can take this approximation if you want) I believe the (special relativity kinematic) angle you get in this case should indeed be $0$, but the angle in reality can be a little bit larger than $0$, depending on how much recoil momentum there was.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/407890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is this cloud blue? I saw these clouds on the horizon, behind a ridge (apologies I couldn't get more pixels): Why is the front cloud darker than the cloud behind? There were no other clouds that I saw which could've been casting a shadow on the front cloud. What would cause a cloud to reflect less light?
A possible reason the cloud reflects less light is that it has a lower density of microscopic water droplets, i.e. more air in between. Water droplets have almost no wavelength preference in their scattering: they scatter essentially all visible wavelengths equally, which is why clouds are ordinarily white. Air, however, scatters light near the blue end of the spectrum more strongly, so more air spaces mean relatively more blue scattering. Hence, lacking the dense water droplets that would otherwise scatter all wavelengths and give a white appearance, that part of the cloud appears bluish.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/408100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does a neutral object attract a positively charged object or a negatively charged object? Consider an electrically neutral object: (1) Is it going to attract a positively charged object or a negatively charged object? (2) What is the type of attraction? (3) How does it attract, or why does it not? (4) Why do the positive protons of the atom attract the neutral neutrons while the negative electrons do not?
Let's look at 2 cases: (i) When the neutral body is a conductor: if a charged body is brought near a neutral body, the like charge in the neutral body gets repelled and moves to the far side, so the opposite charge accumulates on the near side. By Coulomb's inverse-square law the attraction from the nearer side outweighs the repulsion from the farther side. Therefore a neutral body is attracted by a charged body. (ii) When the body is a dielectric: since in dielectrics the electrons are bound to the atoms, the atoms get polarized under the influence of the electric field created by the charged body. Therefore, in the same way as case (i), it gets attracted to the charged body. Of course, the attraction is due to electrostatic forces. Coming to the next part of the question: the neutron and proton are essentially point particles, so there is no question of charges getting polarized, and the electrostatic attraction between a proton and a neutral neutron is negligible. Since the protons and neutrons are in the nucleus, nuclear forces operate (at very short distances nuclear forces are immensely strong). The electrons, being in orbits around the nucleus, feel the nuclear force only very weakly.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/408335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
When is it appropriate to solve the time-independent Schrödinger equation? I am currently going through Griffiths over the summer but I am a bit confused by one point and I don't have any instructor to ask, so I was wondering if you could help clarify. In Section 2.3, the harmonic oscillator, he writes: "it suffices to solve the time-independent Schrödinger equation." Clearly, this is not sufficient in every case. I was wondering how we know a priori that (1) it is sufficient and (2) we are not missing some information by only solving the time independent case.
The time-independent Schrödinger equation is just separation of variables acting on the “true” Schrödinger equation. The eigenvalues (the separation constants) of such equation just so happen to represent the energy of our quantum system. As such, if our interest is solely in the available and accessible states of our system, the time-independent version does just fine. If, however, we seek to model the system’s time evolution, then we need to invoke the time part of the Schrödinger equation. If our time-independent equation has normalized solutions $\psi_1(x), \psi_2(x),\dots$, with $\int_{-\infty}^\infty\psi^*_m(x)\psi_n(x)\,dx=\delta_{mn},$ then we write $$H\psi_n(x)=E_n\psi_n(x),$$ where $H$ is the Hamiltonian and $E_n$ are the corresponding energies. The time-dependent equation is $$i\hbar\frac{\partial}{\partial t}\Psi(t,x)=H\Psi(t,x).$$ As such, we can write our time dependent quantum state in terms of a superposition of the independent states: $$\Psi(t,x)=\sum\limits_n^{}A_n\psi_n(x)e^{-iE_nt/\hbar},$$ where the $A_n$ are a normalized set of constants, $\sum_n|A_n|^2=1$.
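A minimal numerical sketch of "it suffices to solve the time-independent equation" (my addition): diagonalize a finite-difference Hamiltonian for the harmonic oscillator (units $\hbar = m = \omega = 1$); the stationary states and energies obtained this way are exactly the ingredients of the superposition above, and the eigenvalues should approach $E_n = n + \tfrac{1}{2}$:

```python
# Finite-difference solution of the time-independent Schrodinger equation
# for V(x) = x^2/2, by diagonalizing H = -1/2 d^2/dx^2 + V(x).
import numpy as np

N, L = 1500, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2
main = 1.0 / dx**2 + V                     # diagonal: kinetic + potential
off = -0.5 / dx**2 * np.ones(N - 1)        # nearest-neighbour coupling
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
print(E[:4])   # ≈ [0.5, 1.5, 2.5, 3.5]
```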
{ "language": "en", "url": "https://physics.stackexchange.com/questions/408460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Hydrogen atom- Eigenvalue/function relation I have been given the following Question: The energy eigenstates of the atomic electron are usually described by wave functions $ψ_{nℓm}(r)$. Relate each of $n, ℓ,$ and $m$ to the eigenvalue of a specific operator by giving the eigenvalue equation for this operator acting on $ψ_{nℓm}(r)$. I understand the following eigenvalue/function relations: $$\hat{\vec{L}^2} Y_{ℓm}=\hbar^2 ℓ(ℓ+1) Y_{ℓm}.$$ and; $$\hat{L_z} Y_{ℓm}=\hbar m Y_{ℓm}.$$ But I don't understand where the principle quantum number, $n$ comes into things. If someone could explain, that'd be great. Thanks.
$n$ labels the energy of the electron, the eigenvalue of the Hamiltonian: $\hat{H}\,\psi_{n\ell m}=E_n\,\psi_{n\ell m}$. It is called the principal quantum number because it is the basic one, the one related to the energy. For hydrogen, $E_n=-13.6\ \text{eV}/n^2$, and the degeneracy of each level is $$\sum_{\ell=0}^{\ell=n-1}(2\ell + 1)= n^2.$$ The symbol $n$ was first used in the Bohr model of the H atom, where Bohr used it for the quantization of angular momentum, labelling the allowed orbit: $L = n{h \over 2\pi} = n\hbar$. But the $n$ you are talking about comes from solving the Schrödinger equation for H; it labels the allowed energy states. Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states $\{|n\rangle\}$, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete and orthonormal, i.e., $\langle n'|n\rangle=\delta_{nn'}$. Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time. The instantaneous state of the system at time $t$, $|\psi(t)\rangle$, can be expanded in terms of these basis states: $$|\psi(t)\rangle=\sum_n a_n(t)\,|n\rangle, \qquad a_n(t)=\langle n|\psi(t)\rangle.$$ Please look at Hamilton's equations in https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/408756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Under what conditions can molecules exist? I am curious to know the conditions required for any two or more atoms to bond together and form a stable molecule. Is there a set of rules that should be satisfied?
In computational chemistry, we approach this question from the Born-Oppenheimer approximation perspective, in a very pragmatic way. Consider we have an ensemble of electrons and nuclei. First, we assume that the nuclear and electronic wavefunctions can be separated. Then, we solve the Schrödinger equation for the electrons in the field of fixed nuclei. The eigenvalues are the potential electronic energies $E$. And to answer the question, we need to focus only on the lowest eigenvalue, the ground state electronic energy. If we repeat this procedure for all possible nuclear configurations R, we form a multidimensional potential energy surface $E\left(\mathbf{R}\right)$. (For $N$ nuclei, this surface has $3N-6$ dimensions.) A necessary (but not sufficient) condition for the molecule to exist is there is at least one region $\mathbf{R}_0$ of this surface with energy lower than the energy of the nuclei separated by an infinite distance. $$E\left(\mathbf{R_0}\right) < E\left(\mathbf{R_{\infty}}\right),$$ where $\mathbf{R_{\infty}}$ means a nuclear geometry in which at least one of the dimentions of $\mathbf{R}$ tends to infinite. We determine such bound regions using geometry optimization methods, which search for minima of $E\left(\mathbf{R}\right)$. Now that we solved the electrons, we go back to the nuclei. Their potential energy is given by $E\left(\mathbf{R}\right)$. We check what is their quantum zero-point energy, $\varepsilon_{ZP}$. Usually, we use a harmonic approximation around $E\left(\mathbf{R_0}\right)$ to do that. Finally, we estimate thermal effects (entropy, enthalpy, and Gibbs corrections) coming from finite temperatures. These corrections give an additional energy term $\varepsilon_T$. The sum of all those contributions is an approximation for the Gibbs free energy $$G\left(\mathbf{R}\right) = E\left(\mathbf{R}\right) + \varepsilon_{ZP} + \varepsilon_T.$$ The molecule will exist if (and only if) $$G\left(\mathbf{R_0}\right) < G\left(\mathbf{R_{\infty}}\right).$$ If you don't work with computational chemistry, all these calculations may sound abstract. However, we have many efficient approximations to perform them, even for molecules with a few hundred nuclei. A good entry point to know more is the ChemCompute platform, which provides tutorials, software, and computer time for running these calculations.
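A toy version of the bound-versus-unbound test (my addition; the Morse parameters below only roughly mimic H$_2$ in atomic units and are not from an actual electronic-structure calculation): locate the minimum of a model potential curve, estimate the harmonic zero-point energy from its curvature, and check $E(R_0) + \varepsilon_{ZP} < E(R_\infty)$:

```python
# Morse-potential stand-in for the ground-state potential energy curve of a diatomic.
import numpy as np
from scipy.optimize import minimize_scalar

De, a, R_e = 0.17, 1.0, 1.4      # well depth (Ha), range parameter (1/bohr), equilibrium distance (bohr)
mu = 918.0                       # reduced mass of H2 in electron-mass units

E = lambda R: De * (1 - np.exp(-a * (R - R_e)))**2 - De   # chosen so that E(infinity) = 0

res = minimize_scalar(E, bracket=(1.0, 1.4, 3.0))
R0 = res.x

# harmonic zero-point energy from the curvature at the minimum: k = E''(R0) = 2*De*a^2
k = 2 * De * a**2
zpe = 0.5 * np.sqrt(k / mu)

print(f"R0 = {R0:.2f} bohr, E(R0) = {E(R0):.3f} Ha, ZPE = {zpe:.4f} Ha")
print("molecule is bound:", E(R0) + zpe < 0.0)
```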
{ "language": "en", "url": "https://physics.stackexchange.com/questions/408921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the cause of wave impedance? As in electrical impedance, Causes: Resistance - collision of electrons with atoms and other electrons, Reactance - Capacitive and inductive effects. Likewise, what offers opposition to a wave traveling in a medium?
Assuming we are discussing longitudinal waves (as opposed to gravity or capillary waves), the two factors influencing the movement of waves through a medium are its density and its compliance. On the other hand, if you are talking about electromagnetic waves, the determinants of their speed ($c$) in a vacuum are the electric constant (vacuum permittivity) and the magnetic permeability of free space.
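Two concrete impedances to attach numbers to this (my addition; the air values are approximate room-condition figures): for sound the specific acoustic impedance of the medium is $Z = \rho c$, fixed by its density and compliance, while for an electromagnetic wave in vacuum the wave impedance is $Z_0 = \sqrt{\mu_0/\varepsilon_0}$:

```python
# Specific acoustic impedance of air and the impedance of free space.
from scipy.constants import mu_0, epsilon_0

rho_air, c_air = 1.2, 343.0            # kg/m^3 and m/s at room conditions (approx.)
print("acoustic impedance of air ≈", rho_air * c_air, "Pa·s/m")        # ~ 410

print("impedance of free space ≈", (mu_0 / epsilon_0) ** 0.5, "ohm")   # ~ 376.7
```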
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Potential difference across a zero resistance wire So I started off with electrostatics and everything seemed nice and mathematical and justified, and then "DC circuits" happened! I just cannot understand the model of electron flow in electrical circuits. Here are my specific doubts: 1) If the potential difference across a tiny cross section of conducting wire is zero, then why on earth do electrons flow across that cross section at all? Never mind the potential difference across the whole circuit. 2) Is there a constant electric field across a wire connected to a battery? If yes, then how is the potential difference across a zero resistance wire constant? Shouldn't it be increasing? Doesn't it violate Ohm's law? If no, then why do electrons flow at all? Please take time to consider these doubts and relieve me of my frustration. I have searched through the net for this but every answer seems like beating around the bush. All of the 4 books I have consulted do not address these facts to my satisfaction. Frankly I think nobody understands this.
For a current to flow in a conventional wire (not a superconductor, vacuum, etc.), the potential difference across any segment of the wire and the electric field in it have to be greater than zero. In most cases, the potential difference in the wires could be approximated as zero, because the resistance of the wires is much smaller than the resistance of other elements in a circuit, including the battery, and, therefore, most of the voltage drops on those other elements.
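An order-of-magnitude example of that approximation (my addition, with assumed wire and load values): the field and potential drop along a real copper wire are nonzero, just tiny compared with the drop across the rest of the circuit:

```python
# Voltage drop along a copper wire in series with a much larger load resistance.
rho_cu = 1.7e-8          # ohm·m, resistivity of copper
L, A = 1.0, 1.0e-6       # 1 m of wire with 1 mm^2 cross-section
R_wire = rho_cu * L / A
R_load = 100.0           # ohm resistor in series (assumed)
V_batt = 9.0             # volt battery (assumed)

I = V_batt / (R_wire + R_load)
print(f"wire resistance = {R_wire*1e3:.1f} mOhm")
print(f"drop across wire = {I*R_wire*1e3:.2f} mV, across load = {I*R_load:.2f} V")
print(f"field in wire ≈ {I*R_wire/L*1000:.2f} mV/m")   # small, but not zero
```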
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Non-zero electric field inside a conductor, when applying a large external field I'm probably missing something, or do not understand conductors well enough, but I have a question related to the title of this message. In many places you read that there can be no electric field inside a conductor. The arguments typically go something like this: since there is an electric field, charges inside the conductor will rearrange themselves so as to cancel the field. Very simply stated. What I don't understand is that this seems to assume that there is always "enough" charge to redistribute. To clarify my confusion, let's say we have a conducting solid sphere with some charge. If we apply a "large" external static field to this sphere, charges inside it will tend to cancel it out. But what if the total charge inside it is not enough? The total charge in the sphere can only generate a limited field, but the external one can be arbitrarily large. What if the field outside is so large that the potential it generates, from one side of the sphere to the other, is larger than what the internal charge can generate? As I said, I'm probably missing something essential, but can someone please point out the mistake in the above argument?
Let's look at the numbers. Atoms are typically an angstrom ($10^{-10}\ \mathrm{m}$) across. A metal conductor will typically have one conduction electron per atom. A Coulomb is a big amount of charge: a macroscopic 1 A current for a second. It's also $6 \times 10^{18}$ electrons. Combining those, one Coulomb corresponds to the charge in just the first layer of atoms in a patch of conductor surface of roughly $600\ \mathrm{cm}^2$. The first nanometer holds a factor of 10 more; the first micron, ten thousand times more. Unless you're doing an experiment aimed at truly extreme conditions, you don't run out of charges in metallic conductors.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is it possible for a lightning strike to hit the ground if there are high rise buildings nearby? Say we have pointed conductors connected to the top of the high rise buildings. Will the strikes hit the nearby ground in such a case?
It might, if the path of the lightning does not get close enough to the rod or its grounding structure in comparison to alternative targets on the ground. The lightning strokes originate at the clouds, because the clouds have much higher charge concentration and much stronger local electric field. In comparison, the density of charges, induced by the clouds on the surface of the earth, is relatively low and the field is relatively weak, even around sharp objects like a lightning rod. So, under typical conditions (no mountains or skyscrapers), the origin and the path of a lightning leader is primarily defined by the location of the cloud and not the location of buildings with lightning rods. If the distance between a descending leader and a nearby grounded structure is similar to the remaining distance to the earth, chances are the lightning will hit the structure. Otherwise, it is likely to hit the earth. Although the sharp tip of a lightning rod provides the easiest path to the ground, its action is mostly local and is unlikely to change the course of a leader at distances comparable with its height.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Does photon absorption annihilate the associated EM wave instantly? My Understanding A single photon has an associated electromagnetic wave. The wave is spread out in space, but the photon is considered a point particle. If the photon is absorbed, the entire wave disappears. Photon absorption is instantaneous, so the wave disappears instantly. In other words, the wave can no longer be detected anywhere in the universe; despite that the interaction happened at a single point. My Question Is my understanding correct, and if not, what am I missing?
You are not correct and you are not incorrect. This is the realm of quantum interpretation, and this particular conundrum is called the Einstein bubble paradox. What exactly is happening down there is an unresolved question. All we really know is this: light propagates according to classical electromagnetism, but its energy and momentum can only be emitted/absorbed in quanta. Make of that, philosophically, what you will.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Density function in phase space What does the density function in phase space physically mean? How does it relate to the more familiar density that we are accustomed to (an analogy might help)?
If you integrate out the momentum variables, then you get the usual density as a function of just position. Let's say there are $N$ particles, each with mass $m$, so the total mass is $Nm$. $$ \int d^3p\, d^3x \; \; \rho_{phase} (x,p) = Nm\\ \int d^3p \; \; \rho_{phase} (x,p) = \rho (x) $$ So the phase space density carries more refined information. Be careful that the units are different: $\rho$ has units of mass per volume, but $\rho_{phase}$ has units of mass per phase-space volume, i.e. $\mathrm{kg}/(\mathrm{m}\cdot\mathrm{kg\,m/s})^{3}$.
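A small one-dimensional sketch (toy numbers, not a physical system) shows the marginalization explicitly: integrating a Gaussian momentum profile out of a toy phase-space density returns the ordinary mass density:

```python
# 1D sketch: integrate a toy phase-space density over momentum and recover rho(x).
import numpy as np

m, kT = 1.0, 1.0
x = np.linspace(-5, 5, 201)
p = np.linspace(-10, 10, 2001)
X, P = np.meshgrid(x, p, indexing="ij")

n_x = np.exp(-X**2 / 2.0)                                          # number density profile in x
f_p = np.exp(-P**2 / (2 * m * kT)) / np.sqrt(2 * np.pi * m * kT)   # normalized over p
rho_phase = m * n_x * f_p                                          # mass per (dx dp)

rho_x = np.trapz(rho_phase, p, axis=1)                             # integrate out momentum
print(np.max(np.abs(rho_x - m * np.exp(-x**2 / 2.0))))             # ~0: back to rho(x)
```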
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does sound travel faster in steel than in water? I understand that sound travels faster in water than in air. Water is a liquid, and air is a gas. Water still has the ability to roll its molecules over each other (so water can flow); it has some flexibility. But I do not understand how a solid that is inflexible can make sound waves travel faster than in a flexible liquid. In fact, sound waves travel over 17 times faster through steel than through air. Sound waves travel over four times faster in water than they would in air. Question: Why does sound travel faster in steel than in water? I am interested in the quantum mechanical level.
The speed of sound (a compression wave) in steel is given by $$v_{steel}=\sqrt{\frac{K+\tfrac{4}{3}G}{\rho}},$$ where $K$ is the bulk modulus, $G$ is the shear modulus and $\rho$ is the density of the material. Since steel is very stiff, the numerator is very big, and even though the density of steel is significant, the ratio remains big and so does its square root - so the speed of a compression wave in steel is high. For water, the expression is similar: $$v_{water}=\sqrt{\frac{K}{\rho}}.$$ In this case, the result is smaller because water is far less stiff than steel, and that loss of stiffness outweighs its lower density. You say you want this explained at the quantum level. To do so requires a quantum treatment of the physics of interatomic bonds, intermolecular bonds, and electron orbital shapes and sizes, so that the resistance of the materials to compressive stresses and their bulk densities can be accounted for on a quantum level. This is a huge job for which I am not qualified, and I invite the experts to weigh in on these matters.
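A rough numerical sketch with typical handbook values (approximate, for illustration only) shows the size of the effect:

```python
import math

# Typical handbook values (approximate)
K_steel, G_steel, rho_steel = 160e9, 80e9, 7850.0   # Pa, Pa, kg/m^3
K_water, rho_water          = 2.2e9, 1000.0         # Pa, kg/m^3

v_steel = math.sqrt((K_steel + 4.0/3.0 * G_steel) / rho_steel)   # ~5.8 km/s
v_water = math.sqrt(K_water / rho_water)                         # ~1.5 km/s

print(v_steel, v_water, v_steel / v_water)
```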
{ "language": "en", "url": "https://physics.stackexchange.com/questions/409928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If the light source is bigger than the object, is it possible that the shadow of the object is bigger than the object? When sunlight falls on my bathtub I noticed that the shadow of any small particle floating on the water surface is bigger than the particle, and it is also quite circular, i.e. deformed from its actual shape. Generally it tends to become a circle.
In this case, the likely explanation is that the surface of the water near the "floater" is not flat due to surface tension. This causes the light rays entering the water near the particle to be bent, causing a "shadow" under the particle. You might have also noticed that the swirling water when the drain is opened casts a "shadow" (without the presence of any particles) because the water surface is not flat. For example, you can see the bending of the water surface in this photo. And here is an example of the cast "shadows".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Energy conservation in an expanding universe Due to the expansion of the universe, the photons emitted by stars suffer redshift, which means that their energy is lowered a little. Does this mean that the energy is lost? Does the expansion of the universe violate some conservation principle according to Noether's theorem?
While photons appear red-shifted to a remote observer who is receding due to the expansion of the universe, they still retain the same wavelength and energy relative to the frame they originate from; thus no energy has been lost.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Massless charged particles in an electric field According to this question, theoretically, there can be massless charged particles. What will happen if we put them in an electric field? How will they respond to the increase in momentum/energy? In case of photon the frequency of the associated electromagnetic wave increases in this case.
That is exactly what happens. Their energy and their momentum increase, although their spatial velocity would always be equal to c. You could observe this increase by scattering them with other particles to measure their energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Torricelli's Law and Variable Density I wish to explore a slight modification of a well-known result found in several physics texts - Torricelli's Law of Efflux. The most common problem on the above result has the following setup: The container is filled with fluid of a certain density, up to height H, and has a hole at a distance h from the water surface. Usually, we assume the density of the fluid to be uniform while applying Bernoulli's Principle to figure out the velocity of efflux. What effect would variable density have on the velocity of efflux? P.S. Of course, the variation of density with depth $y$ from the fluid surface is known. P.P.S. Though a qualitative idea would suffice, it is always better to do a quantitative analysis of such situations. For the sake of simplicity, let's assume a linear and increasing variation of density with depth from the fluid surface.
Let the base of the tank be the datum (z = 0) for zero potential energy. Then the form of the Bernoulli equation that would be valid for this problem would involve an integral of the density variation. Taking the two locations for applying the Bernoulli equation as 1.the upper fluid surface in the tank (assuming it is open to the atmosphere) and 2. the exit hole, we have: $$p_{atm}+\int_0^H{\rho g dz}+0=p_{atm}+\int_0^{(H-h)}{\rho g dz}+\frac{1}{2}\rho_{(H-h)} v^2$$ This reduces to $$\frac{1}{2}\rho_{(H-h)} v^2=\int_{(H-h)}^H{\rho g dz}$$
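For the linear density profile suggested in the question, $\rho(y)=\rho_s+ky$ with $y$ the depth below the free surface, the integral on the right can be done in closed form; a small numerical sketch (all values invented for illustration) compares the result with the uniform-density Torricelli speed:

```python
import math

g = 9.81
h = 2.0            # depth of the hole below the free surface (m)
rho_s = 1000.0     # density at the surface (kg/m^3)
k = 50.0           # hypothetical linear density gradient (kg/m^4)

# Right-hand side of the reduced Bernoulli relation, written as an integral over depth 0..h
integral = g * (rho_s * h + 0.5 * k * h**2)

rho_hole = rho_s + k * h                    # density at the level of the hole
v = math.sqrt(2.0 * integral / rho_hole)    # efflux speed for the stratified fluid

v_uniform = math.sqrt(2.0 * g * h)          # Torricelli's result for constant density
print(v, v_uniform)                         # slightly smaller than the uniform-density value
```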
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What would be the charge distribution of a conducting sphere in front of a positive point charge? What would be the charge distribution of a conducting sphere in front of a positive point charge? I mean, if it's a positive charge then it should induce negative charge on the near side and positive on the other side. But as it's conducting, it should distribute the charge all over the sphere, so it should make the sphere neutral. Or something extraordinary might happen. Assume the sphere is isolated.
if it's a positive charge then it should induce negative charge in the near side and positive on the other side. That's correct. But as it's conducting then it should distribute the charge all over the sphere. So it should make the sphere neutral. Since the sphere is isolated, it remains neutral at all times. The electrons moving toward the external positive point charge will each leave behind one positive ion. As the electrons move closer to the side where the point charge is, they will start experiencing the increasing repulsion from each other and the attraction from the ions left behind and, at some point, when these forces balance the attraction force from the point charge, the electrons will stop moving. At that point, there won't be any field or force inside the sphere or along the surface of the sphere - otherwise the electrons would continue moving. When there is no field, there is no potential difference, so we say that the sphere has reached the equipotential state. This does not mean that the potential of the sphere will be zero: it will be positive due to the presence of the positive point charge. There will be an electric field around the sphere, but all the field lines will be normal to the surface of the sphere, so they will not cause electrons to move along the surface, but rather try to pull electrons or ions away from the sphere, which won't happen unless the field is very strong. The charges would be evenly distributed over the surface of the sphere only if (a) there were a net charge on the sphere and (b) there were no external field to bias it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
As the universe expands, do we have any reason to suspect further separation of the fundamental forces/interactions? At some point, all four forces were one force. (another question: what exactly does that mean?). At some point gravity and the strong force separated out leaving the electroweak force. Then the electroweak force separated out to become the electromagnetic force and the weak force. I assume we are not done with phase transitions. So are there any theoretical reasons to believe that there won't be any further separations? For example, the electromagnetic force separates into two forces. How do we know that a force is "fundamental" and not separable? A related question: What does it mean to say that "the fundamental forces of nature were unified"?
Electroweak unification is broken when the temperature is low enough for the Higgs field to settle into its ground state. The ground state of the Higgs field is charged under "weak isospin" and "hypercharge", causing the weak force particles to acquire mass and leaving only the photon massless. In most theories, grand unification, which would unite the strong force with the electroweak force, is also broken by some kind of Higgs field, but one which enters its ground state at much higher temperatures. So as the universe passes from hot to cool, first the superheavy Higgs field relaxes to its ground state and breaks grand unification, and then later the standard Higgs field relaxes to its ground state and breaks electroweak unification. There is no real reason to expect further symmetry breaking, but this odd little paper does explore the possibilities.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/410968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How do bulk regions of clouds conduct charges into a lightning bolt? It's easy to find explanations of the theories of charge accumulation in clouds during storms, as well as ones describing suspected processes leading to lightning channel formation. What I have yet to encounter is any theory describing how a large region of a cloud can be conductive enough (for any duration, long or short) to allow charges to be conducted into a lightning bolt channel. In other words, how would the ice or water particles that supposedly collect charge move these charges toward a strike channel without some kind of conductive plasma between them? Is there plasma that reaches into large cloud regions? If so, how does it form and what might be the lifetime?
The strike does not happen instantaneously - it may take milliseconds for the stepped leader to reach the ground - so there is some time for the charges, spread over a large area of a cloud, to reach the discharge path. At very high field intensity levels, common in charged clouds, air molecules are easily ionized and serve as carriers of the discharge current. Once the discharge has started (due to a particularly strong field in some location), negative ions start flowing downwards (assuming a cloud-to-ground lightning), while positive ions flow back to the cloud. This intensifies the field at the root of the discharge channel and, as a result, the ionization spreads to the adjacent areas of the cloud, intensifying the field there, etc. So, presumably, we have an avalanche ionization process that allows charges from a significant area of the cloud to flow into the lightning channel.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A ball attached on a moving string If there is a ball attached on a string and the string's point of hanging is accelerating horizontally at $\vec{a}$, what will be the forces exerted on the ball that is hanging? It is obvious that there will be a gravitational force downwards and a tension force, and there should be another horizontal force on the ball in the opposite direction of the acceleration of the string, but where does that force come from? It should be from the ball's inertia, but how can that be a force?
The ball rises until the vertical component of tension equals gravity. In this stable state the horizontal component of tension accelerates the ball at the same rate as the vehicle, as seen from an external (inertial) frame of reference. To an observer inside the accelerating frame (the car), all objects appear to experience an apparent, or pseudo, force toward the rear. That is what your "force from inertia" is: not a real interaction, but an artifact of describing the motion from a non-inertial frame. The ball experiences this pseudo-force, which is balanced by the horizontal component of the string tension.
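To make the steady state quantitative (a standard result, included here only for concreteness): if the string settles at an angle $\theta$ from the vertical, the tension components give $$T\cos\theta = mg, \qquad T\sin\theta = ma \quad\Rightarrow\quad \tan\theta = \frac{a}{g}.$$ For example, an acceleration of about $3\ \mathrm{m/s^2}$ tilts the string by roughly $17^\circ$.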
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do we know the maximum speed of conventional matter is the same as the speed of light? Is there an argument, apart from experiments, by which we know this is true? And if we only know it by experiment, how do we know the experiments are precise enough to conclude this? Stated differently, what argument is there that prevents us from replacing $c$ in the Lorentz transformations by $c' = c + \epsilon$?
We know from Maxwell's equations that the speed of light is constant: $$c=\frac{1}{\sqrt{\epsilon_{0}\mu_{0}}}$$ We know from the principle of relativity (the same principle Galileo stated for mechanics) that the laws of physics are the same in all inertial frames. So for each observer to measure the same value of the speed of light, it must be the same no matter how fast you're moving relative to its source. If you work out the math, you will get the Lorentz transformations, which predict that particles with mass are constrained to move slower than the speed of light. Photons are allowed to travel at the speed of light because they have no mass - meaning that they cannot ever be at rest. Having mass means that you can always find a frame relative to which you are at rest.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Plane wave approximation Consider a proton in harmonic motion along the vertical direction. http://physics.weber.edu/schroeder/mrr/MRRtalk.html Near a point source, the direction of the electric field is along the curve. https://en.wikipedia.org/wiki/Plane_wave But at a greater distance from the point source, in the plane wave approximation, the electric field is not along the sinusoid but is perpendicular to the axis. How is the direction of the electric field determined in the plane wave approximation?
The electric field of an oscillating point charge consists of a Coulombic component and a radiative component: * *the Coulombic part is generally directed away from the charge, and it goes down as $1/r^2$ with the distance from the center of the oscillations. *the radiative part is generally transversely polarized, and it goes down as $1/r$ with the distance from the charge. When we're considering radiation, we keep the $1/r$ component, because it dominates completely over the $1/r^2$ Coulombic near-field when you're far away from the charge. This explains the discrepancy you observe. As to how the direction of the electric field is determined - that obviously depends on the situation. Plane waves are a model, and nothing more, and they are generally a terrible model for the field radiated by a point charge. (Instead, you normally use spherical EM waves.) Different situations call for different models, and different characteristics of the radiation within those models.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating Canonical State Sum with fermions? My question regards the fact that we say that $n=0,1$ for fermions/electrons; but why not $n=0,1,2$ if a spin-up and a spin-down electron can simultaneously occupy the same state? Thanks for the replies!
Because we count the occupation number for a state in the full single particle Hilbert space (not for orbital states). The full state of an electron is specified by its orbital state and its spin state. That is, there are two states (one for each spin projection) for each orbital state, since the total single particle Hilbert space is the tensor product of the spin states with the orbital states. This becomes especially important if the spatial and the spin parts of the Hamiltonian do not separate (for example, when there is a space dependent magnetic field, or if you include spin-orbit coupling in your calculations). Then you can no longer say that there is a spin up and a spin down state per orbital state, since the eigenstates of the Hamiltonian are no longer tensor products of spin states and orbital states. Also, the description with $n = 0,1$ holds for any spin-$m/2$ fermion (for which, when the spin and orbital parts of the Hamiltonian decouple, there are $m+1$ spin states per orbital state), and even for the artificial spinless fermions we use in our theoretical toy models (artificial in the sense that no such thing exists in nature).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What would happen to a 10 meter sphere of room temperature water if released into space? Imagine that we had a space station with a relatively large hangar, and we allowed a ball of water to accumulate that had a 10 meter diameter and a water temperature of 20C. While the hangar is pressurized, someone decides to use a (closed loop) rebreather tank to sit in the middle of the sphere and breathe, so they're not dying and they're not exhaling any air into the water (just to keep things simpler). Someone cycles the airlock and the sphere is now floating in the middle of the hangar in a hard vacuum. What would happen to the water, and what would happen to the person inside? Would the sphere of water maintain enough pressure on the person that they would be fine, would the water boil off so quickly that it wouldn't be useful for long, or would the water freeze? I see several options, and this is a question I've wondered about for awhile, but I haven't been able to solve it.
It is a perennial but pernicious myth that liquid water would flash into vapor in space if the pressure were suddenly released. Even though the free energy difference (between water and ultra-tenuous vapor) would favor vaporization, evaporation is very endothermic. The water must acquire the heat of vaporization (over 500 cal/g) from the environment, and/or it must cool off. Heat delivery in space is especially slow because the only sources are sunlight and IR radiation from the Earth below, or in your scenario, the space station. As for the unhappy fate of Astronaut Aqualung in the middle of your ball of water, the depressurization of the hangar would almost immediately result in depressurization of his environment as well. Since the rebreather is not designed to maintain pressure, he would die of anoxia long before he was encased in ice, which would ultimately sublime, leaving his cadaver to freeze dry. (The very thought makes my blood boil, but only figuratively.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
2D ${\cal N}=(2,2)$ Super Yang-Mills with Superspace I'm reading this famous paper by Witten. There is the expression of field strength for the abelian vector multiplet (eq. (2.16)): $$\Sigma = \frac{1}{\sqrt{2}}\bar{D}_+D_- V\;.\tag{2.16}$$ I'm wondering what is the expression for a non-abelian vector multiplet, written explicitly. Eq. (2.15) in principle give what I want: $$\Sigma=\frac{1}{2\sqrt{2}}\{\bar{\mathcal{D}}_+,\mathcal{D}_-\}\;,\tag{2.15}$$ however I cannot see definition of $\mathcal{D}$ and $\bar{\mathcal{D}}$. Moreover eq. (2.8) is $$ \{\mathcal{D}_\alpha,\bar{\mathcal{D}}_{\dot{\alpha}} \} = -2i\sigma^m_{\alpha\dot{\alpha}}\mathcal{D}_m\;, \tag{2.8} $$ which, if plugged in $(2.15)$ seem to give not the right result. Also in Mirror Symmetry Book only deals with the abelian case. Do you know where can I find the general case? Or how can I extract by myself the field strength? Addendum I tried some obvious generalization such as $$ \Sigma = \frac{1}{\sqrt{2}}\bar{D}_+e^{-V}D_-e^V\;, $$ which transforms correctly as $$ \Sigma \mapsto e^{-\Lambda}\Sigma e^{\Lambda}\;, $$ however, in this case $$ \bar{\Sigma}\Sigma\;, $$ does not transform correctly.
Witten defines it in equation 4.5 of https://arxiv.org/pdf/hep-th/9312104.pdf
{ "language": "en", "url": "https://physics.stackexchange.com/questions/411991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Does the fact that $j^\mu$ is a 4-vector imply $A^\mu$ is, as argued by Feynman? Let \begin{equation} \boldsymbol{\Phi}=\Bigl(\dfrac{\phi}{c},\mathbf{A}\Bigr) \tag{01} \end{equation} the electromagnetic 4-potential. We know that if its 4-divergence is zero \begin{equation} \dfrac{1}{c^{2}}\dfrac{\partial \phi}{\partial t}\boldsymbol{+}\boldsymbol{\nabla}\boldsymbol{\cdot}\mathbf{A}=0 \quad \text{(the Lorenz condition)} \tag{02} \end{equation} then Maxwell's equations take the elegant form \begin{equation} \Box\boldsymbol{\Phi}=\mu_{0}\mathbf{J} \tag{03} \end{equation} where the so called d'Alembertian \begin{equation} \Box\equiv \dfrac{1}{c^{2}}\dfrac{\partial^{2} \hphantom{t}}{\partial t^{2}}\boldsymbol{-}\nabla^{2} \tag{04} \end{equation} and the 4-current \begin{equation} \mathbf{J}=(c\rho,\mathbf{j}) \tag{05} \end{equation} which has also its 4-divergence equal to zero \begin{equation} \dfrac{\partial \rho}{\partial t}\boldsymbol{+}\boldsymbol{\nabla}\boldsymbol{\cdot}\mathbf{j}=0 \quad \text{(the continuity equation)} \tag{06} \end{equation} and is a 4-vector. The question is : under these conditions is the 4-potential a 4-vector ??? I ask for a proof or a reference (link,paper,textbook etc) with a proof. EDIT $^\prime$Mainly Electromagnetism and Matter$^\prime$, The Feynman Lectures on Physics, Vol.II, The New Millenium Edition 2010.
Yes the four potential $A^{\mu}=(\phi(\vec{x},t),\textbf{A}(\vec{x},t))$ is a four vector and it can be seen from the equation that it satisfies: \begin{align} \partial^{2}A^{\mu}=\frac{1}{c}J^{\mu} \end{align} the $\partial^{2}$ operator is a scalar and $J^{\mu}$ is a Lorentz vector leading to $A^{\mu}$ being necessarily a four-vector itself.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/412110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
No sense in the expression $\hat{x}| 1\rangle=\sqrt{\frac{2}{a}}\int_{-\frac{a}{2}}^{\frac{a}{2}}x\cos\left(\frac{\pi}{a}x\right)dx=0$ I am considering a particle of mass m in a symmetric infinite square well of width a in the fundamental state. $$V(x)= \begin{cases} 0 & \mbox{$|x|<\frac{a}{2}$} \\ \infty & \mbox{otherwise} \end{cases}$$ I want to know what values are obtained from the measurement of energy $E$, position $x$ and impulse $p$ and the corresponding probabilities. So I did: $$\psi_{n=1}(x)=\sqrt{\frac{2}{a}}\cos\left(\frac{\pi}{a}x\right)$$ $$E_{1}=\frac{\hbar^2\pi^2}{2ma^2} \,\,\,\,\,\,\,\,\,\,\ P(E_1)=100\%$$ I can not calculate the eigenvalues of the operator position that I imagine is a continuous set of values in the interval $\left[ -\frac{a}{2},\frac{a}{2} \right]$. Those that I have considered up until now are the eigenfunctions of the Hamiltonian and not of the position operator so I do not think it makes sense: $$\hat{x}| 1\rangle=\sqrt{\frac{2}{a}}\int_{-\frac{a}{2}}^{\frac{a}{2}}x\cos\left(\frac{\pi}{a}x\right)dx=0$$ However I do not know how to do it or even for the momentum. I also have a suggestion that to calculate the probability of the momentum it is sufficient to calculate the wave function in the space of the impulses, but I honestly can not understand it
@knzhou already indicated the perfect appositeness of your title. You apply the definitions of your text, as the chthonian pundit suggests, $$\hat{x}| 1\rangle= \hat{x}\int dx ~|x\rangle\langle x| 1\rangle= \sqrt{\frac{2}{a}}\int_{-\frac{a}{2}}^{\frac{a}{2}}dx~~\left( x\cos\left(\frac{\pi}{a} x\right)\right ) ~~~|x\rangle . $$ The position operator just multiplies the wavefunction by x for every position x, but, of course, the state $|1\rangle$ is an x -integral of eigenstates of this operator with x -dependent coefficients, the very heart of Dirac's Bra-Ket formalism. So, then, $$\langle 1|\hat{x}| 1\rangle= \sqrt{\frac{2}{a}}\int_{-\frac{a}{2}}^{\frac{a}{2}}dx~~\left( x\cos\left(\frac{\pi}{a} x\right)\right ) ~\langle 1|x\rangle = \frac{2}{a} \int_{-\frac{a}{2}}^{\frac{a}{2}}dx~~\left( x\cos^2\left(\frac{\pi}{a} x\right)\right ) =0 , $$ $\langle 1|\hat{x}^2| 1\rangle= a^2(1/12- 1/2\pi^2) $, etc. You clearly naively calculated the eigenvalue of $\hat {p}^2$, since you have the eigenvalue of the energy; however, a subtlety prevents $|1\rangle$ from being an eigenstate of of $\hat p$, as the symmetric wave packet $\langle p|1\rangle$ is not infinitely sharp. In any case, you may avoid all this; confirm directly that $\langle 1| \hat{p}|1\rangle =0$, which might not be surprising; and, of course, $\langle 1| \hat{p}^2|1\rangle =\hbar^2\pi^2/a^2$. This is just 10% off saturating the uncertainty principle bound! * *Small footnote to be avoided until the fourth reading. The moot self-adjointness of this $\hat p$ is fully discussed in here. * * *Terms of use agreement, so in the smallest print possible, to only read with mental lawsuits in mind. In his book, Dirac defines $|x\rangle$ via the ''standard ket'' which, up to a normalization, is but the translationally invariant momentum eigenstate $|\varpi\rangle=\lim_{p\to 0} |p\rangle$ in the momentum representation, i.e., $\hat{p}|\varpi\rangle=0$. Consequently, the corresponding wavefunction is a constant, $\langle x|\varpi\rangle \sqrt{2\pi \hbar}=1$. The definition is then $~~~|x\rangle= \delta(\hat{x}-x) |\varpi\rangle \sqrt{2\pi \hbar}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/412236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dissipation of photon energy For an incident photon to be absorbed by a material, must its energy exactly equal a difference in the electron energy levels, or does it just have to be more than one such difference? If more is okay, what happens to the remaining photon energy? Does it continue on as a lower-energy photon? If so, the remnant photon might not have enough energy to be absorbed. Would it just continue to travel onward through the material indefinitely? Can one or more electrons be knocked off by a single photon? The concept of exact energy levels sounds a bit unnatural, like a perfect sine wave. Are there uncertainty bands in these electron energy levels?
Yes, gamma rays and other high-energy light can knock electrons out of their orbitals. https://www.youtube.com/watch?v=NT6foiglgow This video explains the Bohr atomic model in more detail and should answer your question.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/412465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to derive that $pV^k$ is constant in a polytropic process? This is what we did on the lecture: $$\delta Q=nC dT$$ $$dU=nCdT-pdV$$ $$dU=\bigg(\frac{\partial U}{\partial V}\bigg)_TdV+\bigg(\frac{\partial U}{\partial T}\bigg)_VdT$$ $$dU=\bigg(\frac{\partial U}{\partial V}\bigg)_TdV+nC_VdT$$ $$n(C_V-C)dT=-\bigg(\bigg(\frac{\partial U}{\partial V}\bigg)_T+p\bigg)dV$$ And $\bigg(\frac{\partial U}{\partial V}\bigg)_T=0$ in the case of ideal gas, so: $$n(C_V-C)dT=-pdV$$ $$n(C_V-C)dT=-\frac{nRT}{V}dV$$ $$\color{blue}{(C_V-C)dT=-\frac{RT}{V}dV}$$ $$\color{red}{pV^k=constant}$$ where $k=\frac{C-C_p}{C-C_V}$ My first question is, how did we get the red one from the blue, or do you know an alternative derivation? And my second question is, how does this work? For example, what should I do, if I want the $p^V-V^p$ to be constant? How can I get the $k$? It's just a weird example, but I hope you get what I mean.
We did the same derivation too. But I like it this way: $$\mathrm{d}U=\mathrm{d}\left(\frac{f}{2}pV\right)=\frac{f}{2}p\mathrm{d}V+\frac{f}{2}V\mathrm{d}p$$ Because $\delta Q=nC\mathrm{d}T$, we have that $\mathrm{d}U=nC\mathrm{d}T-p\mathrm{d}V$, so: $$nC\mathrm{d}T-p\mathrm{d}V=\frac{f}{2}p\mathrm{d}V+\frac{f}{2}V\mathrm{d}p$$ But $pV=nRT$, so $T=\frac{1}{nR}pV$ and $\mathrm{d}T=\frac{1}{nR}\left(p\mathrm{d}V+V\mathrm{d}p\right)$: $$nC\frac{1}{nR}\left(p\mathrm{d}V+V\mathrm{d}p\right)-p\mathrm{d}V=\frac{f}{2}p\mathrm{d}V+\frac{f}{2}V\mathrm{d}p$$ $$C\left(p\mathrm{d}V+V\mathrm{d}p\right)-pR\mathrm{d}V=\frac{f}{2}Rp\mathrm{d}V+\frac{f}{2}RV\mathrm{d}p$$ Collecting the $\mathrm{d}V$s and $\mathrm{d}p$s to the same side: $$\left(\frac{f}{2}RV-CV\right)\mathrm{d}p=\left(Cp-pR-\frac{f}{2}Rp\right)\mathrm{d}V$$ $$\left(\frac{f}{2}R-C\right)V\mathrm{d}p=\left(C-R-\frac{f}{2}R\right)p\mathrm{d}V$$ $$\left(\frac{f}{2}R-C\right)V\mathrm{d}p=\left(C-\frac{f+2}{2}R\right)p\mathrm{d}V$$ $$\left(C_V-C\right)\frac{1}{p}\mathrm{d}p=\left(C-C_p\right)\frac{1}{V}\mathrm{d}V$$ Integrating both sides: $$\left(C_V-C\right)\log\left(\frac{p_2}{p_1}\right)=\left(C-C_p\right)\log\left(\frac{V_2}{V_1}\right)$$ $$\left(\frac{p_2}{p_1}\right)^{C_V-C}=\left(\frac{V_2}{V_1}\right)^{C-C_p}$$ From this, we have that: $$\frac{p_1^{C_V-C}}{V_1^{C-C_p}}=\text{const}$$ $$p_1^{C_V-C}V_1^{-C+C_p}=\text{const}$$ $$p_1V_1^{\frac{C_p-C}{C_V-C}}=\text{const}$$ $$p_1V_1^{\frac{C-C_p}{C-C_V}}=\text{const}$$ And we wanted to get this. Note: $\frac{f}{2}R=C_V$, because $$C_V=\frac{1}{n}\left.\frac{\partial U}{\partial T}\right|_V=\left.\frac{\partial \left(\frac{f}{2}nRT\right)}{\partial T}\right|_V=\frac{f}{2}R$$ and $\frac{f+2}{2}R=C_p$, because: $$C_p=\frac{1}{n}\left.\frac{\partial U}{\partial T}\right|_p+\frac{1}{n}p\left.\frac{\partial V}{\partial T}\right|_p=\frac{f}{2}R+\frac{1}{n}\left.\frac{\partial \left(pV\right)}{\partial T}\right|_p=$$ $$\frac{f}{2}R+\frac{1}{n}\left.\frac{\partial \left(nRT\right)}{\partial T}\right|_p=\frac{f}{2}R+R=\frac{f+2}{2}R$$
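As a quick sanity check of the exponent $k=\frac{C-C_p}{C-C_V}$, here is a small sketch (values for a monatomic ideal gas; purely illustrative) that recovers the familiar special cases - isobaric, adiabatic, and isothermal:

```python
R = 8.314
Cv = 1.5 * R            # monatomic ideal gas
Cp = Cv + R

def k(C):
    return (C - Cp) / (C - Cv)

print(k(Cp))            # 0          -> p*V^0 = const : isobaric
print(k(0.0))           # Cp/Cv      -> adiabatic, k = gamma ~ 5/3
print(k(1e12))          # -> 1       -> isothermal, p*V = const (limit C -> infinity)
# C -> Cv makes k blow up, i.e. the isochoric (constant V) limit
```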
{ "language": "en", "url": "https://physics.stackexchange.com/questions/412727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Conformal field theory does not have... conformal symmetry? This post is about 1+1d. It is often said that conformal field theory has an infinite-dimensional symmetry generated by the Virasoro algebra: $$ [L_n,L_m] = (n-m) L_{n+m} + \frac{c}{12} n (n^2-1) \delta_{n+m,0}. $$ (Similarly for the anti-holomorphic branch with generators $\bar L_n$.) But (at least in radial quantization) the Hamiltonian is $H = L_0 + \bar L_0$. This obviously does not commute with the above generators, since $[L_n,L_0] = nL_n$. In other words, it seems the Virasoro algebra functions as a 'spectrum-generating algebra' (since $L_n$ maps eigenspaces of $H$ to eigenspaces of $H$), rather than as a symmetry? Am I misunderstanding something?
The Virasoro algebra is a true symmetry of the theory, in the sense that the action of a conformal field theory is conformally invariant if it exists, and in the sense that the algebra elements map solutions to the equations of motion (quantumly: eigenstates of the Hamiltonian) to solutions of the equations of motion. However, the generators indeed do not commute with the Hamiltonian because they correspond to time-dependent transformations. $[Q,H] = 0$ is only the condition for a symmetry if the symmetry does not transform the time coordinate - the statement for a time-dependent classical symmetry generator is $[Q,H] + \partial_t Q = 0$. Note that the classical infinitesimal symmetry the $L_n$ correspond to is $z\mapsto z + \epsilon z^{n+1}$, and since $z$ is a mixture of time and space coordinates, the generator $L_n = z^{n+1}\partial_z$ is explicitly time-dependent and you cannot expect the quantum generators to commute with the Hamiltonian. Exactly the same is true in a much less confusing theory: The Lorentz boost generators, whose classical expression $t\partial_{x^i} - x^i \partial_t$ is also explicitly time-dependent, do not commute with the zeroth component of momentum - the Hamiltonian - either!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/412975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Difference between real and virtual objects (optics) I do know the difference between real and virtual images, cf. e.g. this Phys.SE post. I would like to know the difference between the real and virtual objects. I need a real life example.
Your diagrams say it all. Real objects are points from which light diverges. A normal eye can take these divergent rays and converge them to points on its retina. Virtual objects are points towards which light converges. If there were no eye or optical instrument in the way, there would be real images at these points. But suppose you place the pupil of your eye at S2 (where that blue thing is in the top right hand diagram). A normal eye wouldn't be able to accommodate (focus) these converging rays, because in everyday life you simply don't have rays converging to a point. However, the action of certain optical instruments can sometimes be analysed using the notion of a virtual object. The concept of a virtual object is quite a sophisticated idea and I would expect students to have met the thin lens equation, $\frac{1}{u}+\frac{1}{v}=\frac{1}{f} $, before meeting virtual objects. So here's a simple exercise that should help with the idea… An illuminated object is placed at the 0.0 cm mark on a metre rule, a converging lens of focal length 10.0 cm at the 15.0 cm mark, and another such lens at the 25.0 cm mark. All three are co-axial. (a) Show that the first lens produces a virtual object for the second lens. (b) Determine the position along the ruler of the real image produced by the second lens.
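If you want to check your own working on that exercise numerically, here is a short sketch using the $\frac{1}{u}+\frac{1}{v}=\frac{1}{f}$ form quoted above, with the "real is positive" sign convention (so a virtual object enters with a negative $u$); other textbooks use different conventions:

```python
# Thin-lens check of the exercise above, using 1/u + 1/v = 1/f
# ("real is positive": a virtual object gets a negative u).
def image_dist(u, f):
    return 1.0 / (1.0 / f - 1.0 / u)

f = 10.0

# Lens 1 at the 15.0 cm mark, object at the 0.0 cm mark -> u1 = 15.0 cm
v1 = image_dist(15.0, f)          # +30 cm, i.e. rays would focus at the 45.0 cm mark
# Lens 2 at the 25.0 cm mark intercepts the converging rays 20 cm before they focus,
# so it sees a virtual object: u2 = -20 cm
v2 = image_dist(-20.0, f)         # +20/3 cm, about 6.7 cm to the right of lens 2

print(v1, v2, 25.0 + v2)          # real image near the 31.7 cm mark
```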
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sign of work done by friction In Goldstein's classical mechanics (3rd ed.) we read: "The independence of W12 on the particular path implies that the work done around such a closed circuit is zero,i.e. $$\oint \textbf{F}.d\textbf{s}$$ Physically it is clear that a system cannot be conservative if friction or other dissipation forces are present, because $F . d\textbf{s}$ due to friction is always positive and the integral cannot vanish." My question is: why should the work due to friction be "always positive"? Shouldn't it be nonzero instead? Also, $F . d\mathbf{s}$ is a typo and should be $\mathbf{F} . d\textbf{s}$ (please let me know if I'm wrong)
Of course, I also had the same question. Technically, the work done by friction on the system is negative, and what that means is a decrease in the energy of the system. But there is a similar convention in thermodynamics: a decrease in the energy of the system corresponds to positive work done by the system. So if we consider a system in which frictional forces act, then the energy of the system must decrease, and that decrease in energy is reflected as if positive work had been done by the frictional forces in that system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Is photon antibunching a requirement for quantum key distribution? While reading up on single photon sources I often came across photon antibunching being a requirement that had to be demonstrated for a specific single photon source. I understand that antibunching behaviour arises from the fact that the radiative excited states in single photon sources have a specific lifetime and therefore cannot emit while there are either no excited states that decay radiatively or while those states simply do not decay due to their mean lifetime. So the essence of my question is: Are antibunching experiments merely to prove that a source is in fact a single photon source or is antibunching a requirement/beneficial for certain applications (like QKD).
Photon antibunching demonstrates a specific property (indistinguishability) of a specific quantum information carrier (photons). Most quantum protocols, such as QKD, are not tied to any specific implementation, so photon antibunching is not directly related to them: one could in principle implement a QKD protocol using something different than photons as information carriers. Even restricting to a quantum optical context though, the answer is no. The reason is that antibunching witnesses the indistinguishability of a pair (or more) of photons, which is not a fundamental property in many protocols. If, for example, a protocol requires the use of a Bell state $|00\rangle+|11\rangle$, and the qubits are encoded into the polarization of a pair of photons, then the indistinguishability of such photons is not required.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do contact transformations differ from canonical transformations? From Goldstein, 3rd edition, section 9.6, page 399 after equation 9.101: [...] The motion of a system in a time interval $dt$ can be described by an infinitesimal contact transformation generated by the Hamiltonian. The system motion in a finite time interval from $t_0$ to $t$ is represented by a succession of infinitesimal contact transformations which is equivalent to a single finite canonical transformation. [...] How does the contact transformation differ from the canonical transformation?
Contact transformations were discovered by Sophus Lie in the 19th century. Within this context an infinitesimal homogeneous (time independent) contact transformation: $$ \delta q^i = \frac{\partial H}{\partial p_i}\delta t,\qquad \delta p_i = - \frac{\partial H}{\partial q^i}\delta t $$ is a coordinate transformation that leaves the system of equations: $$ \Delta = \begin{vmatrix} dp_1 ,\dots,dp_n\\ p_1,\dots,p_n\\ dq^1 ,\dots,dq^n \end{vmatrix} =0,\qquad \sum_ip_idq^i =0 $$ invariant [1]. In this context we can interchange contact with canonical according to Qmechanic's answer. In the context of differential geometry, we make a distinction between symplectic transformations on $dim(2n)$ symplectic manifolds and contact transformations on $dim(2n+1)$ contact manifolds. This extends the time independent formulation into an extended phase space (time dependent). [2] We must now take care on how we use the phrase contact. In both symplectic and contact frameworks, we can define a canonical structure, $$ \theta = pdq, \qquad \Theta = pdq-Hdt $$ respectively, that becomes invariant under their respective transformations. [1] The infinitesimal contact transformations of mechanics. Sophus Lie. 1889. Translated by D. H. Delphenich. [2] https://arxiv.org/pdf/1604.08266.pdf, Contact Hamiltonian Mechanics, Alessandro Bravettia, Hans Cruzb, Diego Tapias, 2016
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Relativistic mass of components gives system rest mass? To put it briefly, in the classic thought experiment of a massless box with mirrored insides containing photons, does the relativistic mass of the photons imbue the box with rest mass? I take it that's the case, because I think that's how baryons are supposed to get their mass, but I'm not really getting how this is happening exactly.
The rest mass arises from a difference in the photon pressure against different walls of the box. For example, when the box is stationary in a gravitational field, the photon pressure on the bottom of the box is higher than on the top, because the photons are blueshifted as they travel downward and redshifted as they travel upward in the field. For another example, when the box is being accelerated by an external force, inertia arises from the fact that the photon pressure on the front wall is less than the pressure on the back wall.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Misunderstanding of the functioning of the reflective diffraction grating Suppose we have a sawtooth diffraction grating, as depicted below: where the angle $\beta$ is the angle of inclination of the 'teeth' of the grating with respect to the plane of the grating and incident plane monochromatic waves normal to the plane of the grating. I am supposed to determine the angle $\theta$ for which the interference pattern for one 'saw-tooth' has a maximum. The diagram in the mark-scheme is as follows: where the points $A,B$ both belong to the same 'saw-tooth' and the distance between $A$ and $B$ is $d$. The path difference between the two waves, according to the mark-scheme is given by $\Delta = BF - AE = d \sin \beta - d \sin \theta$. My question might seem trivial, but why are the two (BF and AE) not equal? In other words, shouldn't the two parallel incident waves (incident at an angle $\beta$) be simply reflected from the face of the 'saw-tooth' at exactly the same angle, in accordance with the law of reflection? Why even bother defining $\theta$? What am I missing here?
For the light to interfere constructively when propagating at an angle $\theta$, the rays from point $A$ and point $B$ must be in phase. Therefore $$BF-AE=d\sin\beta-d\sin\theta=n\lambda . $$ This leads to the grating equation $$\sin\beta-\sin\theta=\frac{n\lambda}{d} . $$ Here $n$ denotes the diffraction order. If $n=0$, then you have the zeroth diffraction order, for which $\beta=\theta$. Hence, basic reflection, no diffraction. For $n=1$, you have the first diffraction order, for which $\beta\neq\theta$. The grooves are designed to optimize the diffraction efficiency into a specific order, such as the first diffraction order.
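As a purely numerical illustration of this grating equation (the groove density, wavelength, and angle below are made-up values, not taken from the original problem):

```python
import math

d    = 1.0e-3 / 600.0     # groove spacing for a 600 lines/mm grating (m)
lam  = 500e-9             # wavelength (m)
beta = math.radians(20.0) # facet angle used in the equation above (illustrative)

for n in range(0, 3):
    s = math.sin(beta) - n * lam / d
    if abs(s) <= 1.0:
        print(n, math.degrees(math.asin(s)))
# n = 0 gives theta = beta (plain reflection); higher orders leave at different angles.
```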
{ "language": "en", "url": "https://physics.stackexchange.com/questions/413776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Electric potential concept Imagine having two charged plates, one positive and one negative, and a negative point charge is placed at the negative plate. Let's set the negative plate to zero potential. The distance between the negative point charge and the negative plate is zero, so $V$ is zero in the equation $V=Ed$. However, since $U=qV$, potential energy is zero at this point. This should not be correct because the negative point charge of course has potential energy at this location. What is the conceptual error in this thought process?
Potential is defined as $V(r) = -\int_{ref}^{r} \vec{E} \cdot d\vec{r}$. By convention, ref is set to $\infty$; with this reference point, of course, the charge has potential energy. By setting the value of the potential to zero at the location of the plate, you change "ref" to be something other than infinity. Does this actually matter? No. The value of the potential doesn't matter, since for all choices of "ref" the change in potential is the same. The choice of ref just adds a constant onto the standard definition (when ref is set to $\infty$), which cancels out when taking the difference in potential: $(V_{0}(b)+c) - (V_{0}(a)+c) = V_{0}(b) - V_{0}(a)$. This addition of a constant by changing the reference point from which the potential is measured also doesn't affect its relation to the electric field, since $\vec{E} = - \nabla(V_{0}+c) = - \nabla V_{0}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Work and Voltage Question 1: Is voltage measured as the amount of work a charge can do as it flows between two points because work is proportional to force? Thus by measuring the amount of work a certain amount of charge is doing as it flows between two points, we are able to indirectly measure the electromotive force acting on that charge? Question 2: Do we measure electromotive force this way because it is somehow easier to measure the loss of energy of the charge than the force acting on it? Question 3: Work is equal to Force x Distance. Does this then mean that we could increase voltage by increasing the distance between the two points we are measuring? Obviously not, since this isn't true in practice. So how does distance affect electromotive force, if at all? It seems as if it shouldn't affect it at all if voltage is truly a force. Please note: I've seen some explanations involving calculus, which I don't know. My math knowledge is only up to pre-calc. Thank you for your help.
In answer to questions 1 and 2, voltage is defined this way, not measured. Voltage is usually measured by allowing a small current to flow across a resistor, and measuring the current. Look up "voltmeter" on the internet. In answer to question 3, no, you cannot increase the voltage between two points by increasing the distance between them. Imagine you attach long wires to the ends of a battery and then move the wire's ends away from each other. The voltage between the ends of the wires will not suddenly increase as the ends move. You could try the experiment using a multimeter, a battery, and some pieces of wire if you don't believe that. The key point is that although the voltage stays constant, the distance changes, so the force exerted on a charged particle between the wire ends must change. The fact that the voltage remains constant while the field strength varies in this and other common situations also explains why we often measure voltages rather than field strengths.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Virial expansion in terms of pressure I'm studying thermodynamics and I found two forms for the virial expansion: $$pV~=~RT[1+\frac{A_2}{V}+\frac{A_3}{V^2}+\ldots] \tag{1}$$ and $$pV~=~RT[1+B_2p+B_3p^2+\ldots]\tag{2}$$ my problem is that I can not find the correct procedure to express the coefficients $B_k$ in terms of $A_k$. I just find the answer, that is $$B_2=\frac{A_2}{RT}, \qquad B_3=\frac{A_3-(A_2)^2}{(RT)^2}, \qquad \ldots \tag{3}$$ but I can not find the procedure to obtain those relations (Actually I found a very strange procedure that I didn't understand at all) and I have been trying but I can't solve this problem. Does anybody could help me please?.
$$pV=RT\left [1+\frac{A_2}{V}+\frac{A_3}{V^2}+...\right ]\\\Rightarrow p=RT\left [\frac 1V+\frac{A_2}{V^2}+\frac{A_3}{V^3}+...\right] \\\Rightarrow p^2 = R^2T^2\left [ \frac {1 }{V^2}+ \frac{2A_2}{V^3}+...\right]$$ Now substitute for $p$ and $p^2$ into the equation $$pV=RT[1+B_2p+B_3p^2+...]$$ and gather up terms is $\dfrac 1V$ and $\dfrac{1}{V^2}$ and then compare them with the first equation.
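If you prefer to let a computer do the coefficient matching, here is a short symbolic sketch (using sympy; the variable names are mine) that reproduces relations (3):

```python
import sympy as sp

R, T, V, A2, A3, B2, B3 = sp.symbols('R T V A2 A3 B2 B3', positive=True)
x = sp.symbols('x', positive=True)        # x = 1/V, to collect powers easily

p = R*T*(1/V + A2/V**2 + A3/V**3)         # expansion (1), truncated
diff = sp.expand(p*V - R*T*(1 + B2*p + B3*p**2))
diff_x = sp.expand(diff.subs(V, 1/x))

# Match the coefficients of 1/V and 1/V^2 (i.e. x and x^2) to zero
sol = sp.solve([diff_x.coeff(x, 1), diff_x.coeff(x, 2)], [B2, B3], dict=True)[0]
print(sp.simplify(sol[B2]))               # expected: A2/(R*T)
print(sp.simplify(sol[B3]))               # expected: (A3 - A2**2)/(R*T)**2
```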
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What would happen to the Moon if the Earth stopped providing it with a centripetal force owing to its force of gravity? Would the Moon really only travel in a straight line then? What about the other planets and their forces of gravity? Wouldn't they prevent this rectilinear and undisturbed motion of the Moon?
If the Earth just disappeared, the Moon would continue around the Sun on pretty much the same path it has now. To see this, consider the velocity of the Earth/Moon system around the Sun: $2\pi \times 150 \times 10^6 \text{ km} / 365 \text{ days} = 2.6 \times 10^6 \text{ km}/\text{day}$ versus the Moon's speed around the Earth: $2\pi \times 385 \times 10^3 \text{ km} / 29 \text{ days} = 83 \times 10^3 \text{ km}/\text{day}$ The motion around the Sun is a factor of $30$ faster; the Moon's motion around the Earth is just a small perturbation on that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the general form of the solutions of the 2-electron system? According to Sakurai the solutions of the two-electron system are of the form $\psi=\phi({\bf x_1},{\bf x_2})\chi(m_{s1},m_{s2})$ Since it's a fermionic system, $\psi$ must be a linear combination of antisymmetric states. If $\phi$ is symmetric and $\chi$ is antisymmetric (or the other way around), then $\phi\chi$ is antisymmetric, and so is a linear combination. With no spin dependence, the Hamiltonian is $\mathcal{H}=({\bf p_1}^2 + {\bf p_2}^2)/2m$, and the spatial solutions are of the form $\omega_A({\bf x_1})\omega_B({\bf x_2})$, so $\phi$ can be written as a symmetrical and antisymmetrical combination \begin{equation} \phi_{\pm}({\bf x_1},{\bf x_2}) = \frac{1}{\sqrt{2}} \left[ \omega_A({\bf x_1})\omega_B({\bf x_2}) \pm \omega_A({\bf x_2})\omega_B({\bf x_1}) \right] \end{equation} In the same way, $\chi$ can be a triplet or a singlet state. But, is every possible solution a linear combination of antisymmetric terms $\phi\chi$? I don't think so, because I found the following state \begin{equation} \psi = \omega_A({\bf x_1})\omega_B({\bf x_2})\chi_{+-} - \omega_A({\bf x_2})\omega_B({\bf x_1})\chi_{-+} \end{equation} And I couldn't write it as a linear combination of the following: \begin{equation} \left\lbrace \begin{array}[l] &\phi_+({\bf x_1},{\bf x_2})\frac{1}{\sqrt{2}}\left( \chi_{+-}-\chi_{-+} \right)\\ \phi_-({\bf x_1},{\bf x_2}) \left\lbrace \begin{array}[l] &\chi_{++}\\ \frac{1}{\sqrt{2}}\left( \chi_{+-}+\chi_{-+} \right)\\ \chi_{--} \end{array} \right. \end{array} \right. \end{equation} The state $\psi$ is antisymmetric, and it is a valid state for the 2-electron system. But it isn't a combination of antisymmetric states of the form $\phi({\bf x_1},{\bf x_2})\chi(m_{s1},m_{s2})$, so these states do not form a complete basis of solutions. I would like to know a complete basis for the system.
With: $$ S \equiv \frac 1 {\sqrt 2}[\chi_{+-}-\chi_{-+}]$$ and $$ T \equiv \frac 1 {\sqrt 2}[\chi_{+-}+\chi_{-+}]$$ and subbing in: $$ \omega_A(x_1)\omega_B(x_2) = \frac 1 {\sqrt 2}[\phi^++\phi^-] $$ and likewise for the other $\omega$: $$ \psi = \frac 1 2 [(\phi^+ + \phi^-)(T+S) - (\phi^+-\phi^-)(T-S)] $$ $$ \psi = \frac 1 2 [(\phi^+T + \phi^+S + \phi^-T +\phi^-S) - (\phi^+T - \phi^+S - \phi^-T +\phi^-S) ]$$ $$ \psi = \frac 1 2 [\phi^+T + \phi^+S + \phi^-T +\phi^-S - \phi^+T + \phi^+S + \phi^-T -\phi^-S ]$$ $$ \psi = \phi^+S + \phi^-T$$ which is the sum of both antisymmetric terms.
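If it helps, the algebra can also be checked mechanically (a SymPy sketch; the spatial and spin factors are treated as ordinary commuting symbols, since only tensor-product bookkeeping is involved):

```python
import sympy as sp

# omega_A(x1), omega_A(x2), omega_B(x1), omega_B(x2) and the two spin states
wA1, wA2, wB1, wB2, chi_pm, chi_mp = sp.symbols('wA1 wA2 wB1 wB2 chi_pm chi_mp')

S         = (chi_pm - chi_mp) / sp.sqrt(2)      # singlet combination
T         = (chi_pm + chi_mp) / sp.sqrt(2)      # m = 0 triplet combination
phi_plus  = (wA1*wB2 + wA2*wB1) / sp.sqrt(2)
phi_minus = (wA1*wB2 - wA2*wB1) / sp.sqrt(2)

psi = wA1*wB2*chi_pm - wA2*wB1*chi_mp           # the state from the question
print(sp.simplify(psi - (phi_plus*S + phi_minus*T)))   # -> 0
```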
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Heisenberg's uncertainty principle and shape of universe On a TV program with some well-known astrophysicists they said that the effect of Heisenberg's uncertainty principle shortly after the Big-Bang made matter (normal & dark) expand in a non-homogeneous way. Now, here's my question: if I understand it correctly, Heisenberg's principle does state that we cannot know - at the same time - the position and momentum of a particle; this is due to the wave-like nature of fundamental particles. What is then the connection between us not being able to know both quantities at the same time and the early universe expanding in a non-homogeneous way? (i.e. forming agglomerations of matter that would become the seeds of stars and galaxies). Any ideas what these experts were talking about?
A quantum fluctuation is a temporary change in the amount of energy at a point in space. This is a consequence of the Heisenberg uncertainty principle, and it allows the creation of particle-antiparticle pairs of virtual particles. Quantum fluctuations were very important in the early stages of the universe: according to the inflationary model, the fluctuations that existed when inflation began were amplified and formed the seeds of the current observable universe. In those early stages, tiny fluctuations in the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually stars and galaxies, with voids in between.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can X-rays emitted due to bremsstrahlung radiation have frequency matching with other EM waves like visible ones? The continuous X-ray spectrum has x-rays of widely varying frequencies. Since an E-M wave is characterized by its frequency, is it possible for the X-rays coming out of heavy metals due to bremsstrahlung radiation to have the frequency matching with other light waves like visible ones, radio waves, or others? In short, while producing X-rays, can we produce other types of EM radiation?
The continuous X-ray spectrum comes from Bremsstrahlung radiation, which is the radiation emitted whenever an electric charge is accelerated or decelerated. In this case electrons striking the metal are decelerated by collisions with metal atoms and emit EM radiation as a result. The spectrum is continuous because the electrons experience a range of different decelerations - some electrons will be strongly decelerated and emit high energy X-rays while some will be weakly decelerated and emit lower energy X-rays. Light and radio waves are just electromagnetic waves, like X-rays, and in principle they too will be emitted. However in practice the intensity of the radiation emitted at optical or radio wavelengths is vanishingly small. But light and radio waves are indeed emitted by accelerating charges in other contexts. For example a filament light bulb emits light because electrons are accelerated by thermal vibrations. This too produces a continuous range of frequencies, which is called black body radiation. Radio waves are emitted when we accelerate electrons using an oscillating voltage. A radio transmitter applies an oscillating voltage to its aerial and that accelerates electrons in the aerial and produces radio waves as a result.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/414877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do fusion reactors deal with blackbody radiation? The plasma of the ITER reactor is planned to be at 150 million K. Using the Stefan-Boltzmann law, setting the surface area as $1000\,\mathrm{m}^2$ (the plasma volume is $840\,\mathrm{m}^{3}$ so this is being generous), and the emissivity as $0.00001$ (emissivity is empirical so I just plugged in an extremely low value) yields a power of $2.87\times 10^{23}\,\mathrm{W}$. It would require somewhere on the order of $10^{35}$ fusion reactions per second just to break even, which clearly is not happening. How can fusion researchers confine plasmas for several minutes if the blackbody radiation is this extreme? It seems like that with this level of heat, the plasma would just cool down within a few nanoseconds, and everyone in the vicinity would be torn to shreds by gamma rays, but evidently this does not happen. How?
Fully stripped atoms can't radiate by having electrons jump between energy levels anymore, because there are no bound electrons. So, that removes the biggest radiation channel unless impurities with high atomic numbers are introduced (such as tungsten). Heavy elements don't get fully stripped of electrons and then the radiation losses are just as bad as you're suggesting and can cause complete disruption and termination of the plasma discharge. Fusion plasmas with mostly hydrogen isotopes and only lightweight impurities like carbon will typically only radiate strongly from the cold edge layers, where radiation actually has the beneficial (as long as it STAYS at the edge) effect of distributing heat exhaust.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
How does the electric field produced by a simple circuit look? I have not seen anywhere a description of how the electric field looks inside and around a simple circuit. For example let's say we have the circuit shown below. One DC voltage source, two resistors, and a constant current flowing around. We know that the electric field inside the battery will point from positive to negative, we also know that the field inside the wires is very small and in the direction of the current. Through the resistors there will be a strong field pointing from positive to negative. But in order to maintain the relationship that a closed loop integral of the E field is zero everywhere we must also have a field outside of those circuit elements. I have no idea how this field will look but I have made a crude attempt at sketching it below. Is this a realistic picture of how the field will look?
An electrical circuit is a lumped element model that does not carry any geometrical information with it. Given a circuit you do not know the shape of the resistors, the dimensions of the battery, the cross section of the wires, the positions of the elements with respect to each other, and so on. Unless you know that information by other means, there is no way to even sketch the electric field. Moreover, the idealized wires that connect the various components of the circuit are mere topological connections; they do not have any electric field inside. "a closed loop integral of the E field is zero everywhere" That is not what Kirchhoff's second law says. It can easily be derived from Faraday's law under the assumption that the magnetic field does not change: $$\oint_{c} E \cdot \text{d}l=-\int_S \frac{\partial B}{\partial t}\cdot \text{d}S$$ Since $B$ does not change with time, its derivative is zero, thus $$\oint_{c} E \cdot \text{d}l=0 \;\;\;\;\;\;\;\;\;\;\;\; \forall c$$ and this must hold for all possible closed loops $c$. This equation does not state anything about the field away from the loop. It simply says that the line integral of the electric field around any closed loop is zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Help with recursion relation for 3D Conformal Blocks I'm trying to calculate the four point function for the 3D Ising model. To do so I need to calculate the 3D Conformal Blocks. I found a paper which has a recursive relation for calculating the Conformal Blocks: equation (3.10). The authors are Filip Kos, David Poland and David Simmons-Duffin, and the title is Bootstrapping the O(N) vector models. $$h_{\Delta,\ell}(r,\eta)=h_\ell^{(\infty)}(r,\eta)+\sum_i\frac{c_i r^{n_i}}{\Delta-\Delta_i}h_{\Delta_i+n_i,\ell_i}(r,\eta)\tag{3.10}$$ However, I am unsure how to actually implement it. I want to compute (3.10) up to $r^{12}$. I imagine my expression is going to look like a sum of the $h^{(\infty)}$ times coefficients with powers of $r$. However I'm not so sure that is the correct interpretation of this recursion relation. If anyone can point me in the right direction that would be much appreciated. Also I do not know how many times I need to plug the right hand side into the left hand side. If I want order 12 I imagine I do it 12 times. As an example of what I am doing, I'll sum up to $k=1$ and pretend I'm looking for order up to $r^2$ only: $$h(V,L)=\sum _{k=1}^{\frac{L}{2}} \frac{c(k)_3 r^{2 k} h(L+2,L-2 k)}{2 k-L+V-2}+\frac{c(1)_1 r^2 h(1-L,L+2)}{L+V+1}+\frac{c(1)_2 r^2 h\left(\frac{5}{2},L\right)}{V-\frac{1}{2}}+H(\infty ,L)$$ after iterating this once so I only get $r^2$ terms I get $h(1.4,0)=1.11111 c(1)_2 r^2 \left(\frac{2}{7} c(1)_1 r^2 h(1,2)+\frac{1}{2} c(1)_2 r^2 h\left(\frac{5}{2},0\right)+H(\infty ,0)\right)+0.416667 c(1)_1 r^2 \left(\frac{1}{4} c(1)_1 r^2 h(-1,4)+2 c(1)_2 r^2 h\left(\frac{5}{2},2\right)-c(1)_3 r^2 h(4,0)+H(\infty ,2)\right)+H(\infty ,0)$ and now expanding up to $r^2$ yields $h(1.4,0)=r^2 \left(0.416667 c(1)_1 H(\infty ,2)+1.11111 c(1)_2 H(\infty ,0)\right)+H(\infty ,0)$ all of the $h$'s from the rhs are gone and I have something in terms of $H(\infty ,2)$ which I know how to evaluate. Is this the correct approach? I'm asking because even though I'm doing it this way I'm getting utter nonsense for my answer. To get the Conformal Block I then multiply this by $r^V$ which gives $.08$ instead of the $.6707$ I should be getting.
To the order $r^2$, the block is given by the contributions of three poles with $k=1$, $$ h_{\Delta,\ell} = h^{(\infty)}_\ell + \frac{c_1(1)}{\Delta+\ell+1} r^2 h^{(\infty)}_{\ell+2} +\frac{c_2(1)}{\Delta-\nu} r^2 h^{(\infty)}_{\ell} + \frac{c_3(1)}{\Delta-\ell-2\nu+1} r^2 h^{(\infty)}_{\ell-2} + O(r^4) $$ To the order $r^4$, let us assume for simplicity $c_2=c_3=0$, we find $$ h_{\Delta,\ell} = h^{(\infty)}_\ell + \frac{c_1(1,\ell)}{\Delta+\ell+1} r^2 h^{(\infty)}_{\ell+2} + \left(\frac{c_1(1,\ell)}{\Delta+\ell+1}\frac{c_1(1,\ell+2)}{\Delta+\ell+3} + \frac{c_1(2,\ell)}{\Delta+\ell+3}\right) r^4 h^{(\infty)}_{\ell+4} + O(r^6,c_2,c_3) $$ To do such calculations you need good and consistent notations. (The original paper does not help, for example $c_1(k)$ actually also depends on $\ell$, see eq. (3.13).)
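Regarding the implementation question ("how many times do I need to plug the right-hand side into the left-hand side"): every pole term carries at least one explicit power of $r$, so you only need to recurse until the accumulated power exceeds the target order. A structural sketch in Python (the pole data $(n_i, c_i(\ell), \Delta_i, \ell_i)$ are placeholders that must be filled in from the coefficient formulas in the paper, cf. eq. (3.13)):

```python
def block_coeffs(Delta, ell, order, poles):
    """Expand h_{Delta, ell} as {power of r: {spin ell': coefficient}},
    where each coefficient multiplies h^(infinity)_{ell'}(r, eta)."""
    result = {0: {ell: 1.0}}                        # leading term h^(inf)_ell
    for n_i, c_i, Delta_i, ell_i in poles(ell):     # pole data from the paper
        if n_i > order:
            continue
        sub = block_coeffs(Delta_i + n_i, ell_i, order - n_i, poles)
        for power, spins in sub.items():
            bucket = result.setdefault(power + n_i, {})
            for ellp, coeff in spins.items():
                bucket[ellp] = bucket.get(ellp, 0.0) + c_i / (Delta - Delta_i) * coeff
    return result
```

Note that the explicit powers of $r$ generated here multiply $h^{(\infty)}_{\ell'}(r,\eta)$ factors that themselves depend on $r$, so "order" refers only to the prefactors produced by the recursion, which is the expansion described in the question.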
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Weird Reflection Pattern in Reading Glasses While fidgeting with a pair of reading glasses, I noticed a strange reflection pattern (shown in video and photo). I would appreciate it if anyone that knows more about this could help me figure out why there were eight dots in the reflection instead of four, and why there were different colors when all the light sources were an orange/yellow/warm color. There were eight dots in each frame, four orange, four green. They made a cube shape, each one of the dots being a corner of the cube. The same image could be seen in each frame, and if I focused my eyes it looked 3d. The picture below shows the light source that the reflection came from, just four ordinary ceiling lights. I know it couldn't have been anything else because it was night, and to test this I turned off all other lights and it was still there. In fact it was then even brighter. I looked into the glasses again in the morning, the effect was still there. Thanks, Ella
I think that it's just the reflection of the four lights off your glasses, with one set of four reflections being due to the lights directly reflecting off the glass surfaces. The other four reflections are probably due to the remaining light passing into the glasses and then reflecting off the rear glass-air surface that it encounters when it tries to exit the glass. The fact that the reflections have different color tints may be due to an anti-reflection coating on the glasses, which causes some colors to be reflected slightly more strongly than others, or due to a slight tint in the glass itself. See diagram below. Note that according to the diagram there should be more reflections than the eight lights that you noted in each glass lens because the light continues to bounce back and forth inside the glasses. If you look closely, you may be able to detect another set of four reflected lights due to the "3rd reflection" shown in the diagram. The 4th reflection and higher may be too faint to see, though.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Berry Phase for Bloch electrons I am new to the topic of Berry phase. The definition says that the Berry phase depends only on the path in the parameter space of $R$, where the Hamiltonian is $H(R)$, but in all the problems I have seen, the parameter itself has a time dependence. Even for the case of Bloch electrons, we can calculate the Berry phase for a cyclic excursion in the parameter space $k$ of the lattice. The real space of the lattice is absolutely time independent; my question is: will there be a Berry phase if we perform a cyclic excursion of an electron in the real space of the lattice?
In real space, the adiabatic Berry phase of a closed orbit just measures the magnetic flux through the orbit's area. Explanation (please see Sundaram and Niu): The semiclassical equations of motion of a Bloch electron in phase space are given by (Sundaram and Niu equation 3.8) $$\mathbf{\dot{x}_c} = \frac{\partial \mathcal {E}_M}{\partial \mathbf{k_c} }-\mathbf{\dot{k}_c} \times\mathbf{ \Omega}$$ $$\mathbf{\dot{k}_c} = -e \mathbf{E} - \mathbf{\dot{x}_c} \times\mathbf{ B}$$ where $\mathbf{x_c}$, $\mathbf{k_c}$ are the electron wavepacket center of mass position and momentum respectively, $\mathcal{E}_M$ is the magnetic Bloch band energy, $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields respectively and $\mathbf{\Omega}$ is the Berry curvature. These equations of motion can be obtained from the Lagrangian (Sundaram and Niu equation 3.7): $$L = \mathcal {E}_M(\mathbf{k_c}) + e \phi(\mathbf{x_c})+ \mathbf{\dot{x}_c}\cdot\mathbf{k_c} - e \mathbf{\dot{x}_c} \cdot \mathbf{A} + \mathbf{\dot{k}_c} \cdot \mathbf{A}_B $$ where $\phi$ is the electromagnetic scalar potential, $\mathbf{A}$ is the electromagnetic vector potential and $\mathbf{A}_B$ is the Berry potential. Please observe that the above formulation is symmetric between the configuration space and the momentum space. The Berry (geometric) phase emerges from the vector potential terms in the Lagrangian. Thus just as the adiabatic Berry phase in the momentum space integrates the Berry potential over the orbit, the Berry phase in the configuration space integrates the electromagnetic potential over the orbit, giving the magnetic flux. Namely, the full Berry phase is given by: $$\phi_B = e^{i \oint e \mathbf{A} \cdot d\mathbf{x} + i \oint e \mathbf{A}_B \cdot d\mathbf{k}} = e^{i \int_{\Sigma} e \mathbf{B} \cdot d\hat{\mathbf{x}} + i \int_{\Sigma_k} \mathbf{\Omega} \cdot d\hat{\mathbf{k}}} $$ where the line integrals are over an orbit in phase space. The second equality is a consequence of Stokes' theorem; the second surface integral is the usual Berry phase evaluated in momentum space while the first is the magnetic flux.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Physical intuition behind Poincaré–Bendixson theorem The Poincaré–Bendixson theorem states that: In continuous systems, chaotic behaviour can only arise in systems that have 3 or more dimensions. What is the best way to understand this criterion physically? Namely, what is it about a space of dimension 1 or 2 that cannot admit a strange attractor? Why does this only apply to continuous systems and not discrete ones?
At each point along a chaotic trajectory, the following three directions must exist: * *A direction of time, along which the trajectory is going. *A direction of expansion, along which the phase-space flow is diverging, so you can have sensitivity to initial conditions. *A direction of contraction, along which the phase-space flow is converging, so the entire dynamics remains bounded and recurring. Strictly speaking, this only holds on average, e.g., the expanding dimension may be locally converging due to the way phase-space is deformed. Since the phase-space flow is, well, a flow being locally linearisable, these directions need to be linearly independent. Thus each direction needs its own dimension. For discrete-time systems, this does not hold anymore: The state does not change in a flow, but jumps all over the place between time steps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/415971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Is everything a resistor? Resistance is due to collision with protons, and pretty much everything contains protons. So technically is everything a resistor? (Or at least, can anything be a resistor?)
The definition of what a resistor is is not always clear. As an EE, I would recommend phrasing it "Everything has a resistance. Not everything is a resistor." Through every object, if there is a voltage difference from one side to the other, current will flow through it, however minuscule. I would not call them resistors because it is more useful to reserve the term "resistor" for a component which I use in a way which is generally consistent with Ohm's Law. For example, a capacitor has resistance. Electrons will eventually move from one side of a capacitor to another, given a sufficient voltage across the capacitor. I can calculate its resistance. However, the behavior of a capacitor is generally very far from that of a resistor, so thinking of that capacitor as a resistor would only confuse me unless I am specifically looking at the leakage currents through a circuit. Likewise, any high voltage electrician will tell you that everything conducts: air, rubber, plastic, glass, sulfur hexafluoride. Everything conducts. Not everything is considered to be a conductor. Those insulators holding up the power lines above our heads have one job: to not be a conductor. That being said, they do indeed conduct some current. They are just designed to do it so minimally that they can be used as an insulator as well.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/416085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 1 }
Pauli exclusion responsible for "solidity"? I have heard Frank Close say that the reason you can't put your hand through a solid object is the Pauli exclusion principle. However Richard Feynman in his "Fun to Imagine" series attributes it to electrostatic forces. I have two questions: Firstly, who is correct here (or maybe both)? Secondly, on a classical scale can the Pauli exclusion principle be interpreted as a force? The reason you can't put your hand through a solid object is because of a normal reaction force. So if the PEP is responsible for this it must be creating a force. I have sometimes seen the singularity at 0 in the Lennard-Jones potential interpreted as due to the PEP. EDIT: I understand that the PEP is not a "fundamental force" carried by force-carrying particles. But it seems to clearly manifest as a force on a classical scale.
Close and Feynman are both right. If you try to overlap one electron state with another then, thanks to the Pauli Exclusion Principle, this is impossible. One of the electrons must move to a higher state, which requires energy, which is why the Lennard-Jones potential shoots up, and there is a resistive force. The reason this state has a higher energy is that the electron involved is further from the nucleus, so its electrostatic potential energy is higher.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/417626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is a fermion field complex? The Lagrangian of a fermion field is \begin{equation} \mathcal{L} = \overline{\psi} (i\gamma_{\mu} \partial^{\mu} - m)\psi \end{equation} It is said that the fermion field $\psi$ is necessarily complex because of the Dirac structure. I don't quite understand this. Why is the fermion field complex from a physical point of view? A complex field has two components, i.e., the real and imaginary components. Does this imply that all fermions are composite particles? For example, an electron is assumed to be a point particle that does not have structure. How can it have two components if it is structureless?
A charged particle requires a complex-valued field. For a neutral particle it is believed that a real-valued field suffices. For example, the Schrödinger and Klein-Gordon currents are zero for a real wave function.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/417886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Why does this paper use 1/cm for units of frequency? Reading this paper from 1963 $^*$, they use units of cm$^{-1}$ for frequency. Here is an excerpt: It doesn't seem like wave number, as they clearly call it frequency. What's going on here? $^*$ Sievers III, A. J., and M. Tinkham. "Far infrared antiferromagnetic resonance in MnO and NiO." Physical Review 129.4 (1963): 1566.
Sometimes physicists use (length)$^{-1}$ to indicate frequencies, especially in spectroscopy... As you well know, $\omega = 2\pi \nu$ and $c = \lambda \nu$, so that $\omega = 2\pi \frac{c}{\lambda}$, which is proportional to $\lambda^{-1}$ since $c$ is a constant. So when one writes $\omega_1 = 29\ \mathrm{cm}^{-1}$, one is actually saying $\omega_1 = c_{cgs}\cdot 29\ \mathrm{s}^{-1}$ (up to the conventional factor of $2\pi$ between frequency and angular frequency), where $c_{cgs}$ is the speed of light in the cgs system.
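To make the conversion concrete, here is the arithmetic for the example value above (a small Python sketch; I use the usual spectroscopy convention that the quoted number is the wavenumber $1/\lambda$, so the ordinary frequency is $\nu = c/\lambda$ and the angular frequency carries an extra factor of $2\pi$):

```python
import math

c = 2.998e10                  # speed of light in cm/s (cgs)
wavenumber = 29.0             # the value quoted as "29 cm^-1"

nu    = c * wavenumber        # ordinary frequency in Hz
omega = 2 * math.pi * nu      # angular frequency in rad/s

print(f"nu = {nu:.2e} Hz, omega = {omega:.2e} rad/s")
# nu ~ 8.7e+11 Hz, i.e. roughly 0.9 THz: far infrared, consistent with the paper's title
```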
{ "language": "en", "url": "https://physics.stackexchange.com/questions/418033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Terminal velocity to calculate velocity as a function of time? Using the equation for drag force, $F = c_d \times \rho \times v^2 \times A \times \frac{1}{2}$, where $c_d$ is coefficient of drag, $\rho$ is air density, $v$ is terminal velocity, and $A$ is reference area for the object, and accounting for acceleration due to gravity $F = mg$, am I allowed to divide both sides by $m$ (mass) to obtain $\frac{dv}{dt} = -9.81 + \frac{c_d \times \rho \times v^2 \times A \times \frac{1}{2}}{m}$? Therefore, $dt = \frac{dv}{(-9.81 + c_d \times \rho \times v^2 \times A \times \frac{1}{2})/m}$. Integrating both sides means that I can obtain a velocity function related to time. However, something doesn't seem right. Isn't the force of drag in itself the terminal force of drag, a variable which does not change depending on time assuming fixed conditions? But, I solved this equation, and the velocity obtained as a result was a velocity that became constant after a particular time - in other words, I obtained the terminal velocity. It started from $v=0$ and $t=0$, and eventually reached one peak value which no longer changed. This suggests that it is not incorrect to do this. Although, for one of my methods of solving it, due to the approximations made, I did obtain a hyperbola which attained the same peak value but did not pass through $v = 0$, $t = 0$. Am I correct in thinking that I am allowed to manipulate this equation in this way? If I'm not, why does my solution match up with the concept of terminal velocity?
The EOM's are: \begin{align*} &\textbf{For the vehicle }\\ & m{\frac {d}{dt}}v \left( t \right) =F_{{d}} \left( {v}^{2} \right) +{ \it Fc}&(1)\\ &\textbf{For the wheel }\\ & \theta\,{\frac {d}{dt}}\omega_{{w}} \left( t \right) =M_{{E}} \left( t \right) i_{{g}}-{\it Fc}\,\,r&(2)\\ &\textbf{and the condition for a rolling wheel }\\ &\left( {\frac {d}{dt}}\omega_{{w}} \left( t \right) \right) r={ \frac {d}{dt}}v \left( t \right)&(3)\\\\ &\text{$F_d$ Drag force}\\ &\text{$M_E$ Engine torque }\\ &\text{$i_g$ Transmission between Engine and wheel}\\ &\text{$Fc$ constraint force}\\\\ &\text{We have three equations (1),(2),(3) for three unknowns $\frac {d}{dt}v(t)$,$Fc$ and ${\frac {d}{dt}}\omega(t)_{{w}}$ }\\\\ &\Rightarrow\\ &\frac{d\,v(t)}{dt}=\frac {r\,i_{{g}}M_{{E}} \left( t \right) }{\theta+m{r}^{2}}+\frac {F_{{d}} \left( {v(t)}^{2} \right) {r}^{2}}{\theta+m{r}^{2}} \end{align*}
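The answer above treats a driven wheel; for the falling body with quadratic drag that the question actually integrates, a quick numerical sketch (with made-up parameter values) shows the speed saturating at the terminal value $v_t=\sqrt{2mg/(c_d\rho A)}$, which is why the analytic solution also flattens out:

```python
import math

g, m = 9.81, 1.0                    # made-up mass in kg
cd, rho, A = 1.0, 1.225, 0.1        # made-up drag coefficient, air density, area
v_t = math.sqrt(2 * m * g / (cd * rho * A))   # terminal speed

v, t, dt = 0.0, 0.0, 1e-3           # start from rest; speed measured downward
while t < 10.0:
    v += (g - cd * rho * A * v * v / (2 * m)) * dt
    t += dt

print(f"v(10 s) = {v:.2f} m/s, terminal v_t = {v_t:.2f} m/s")   # both ~12.7 m/s
```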
{ "language": "en", "url": "https://physics.stackexchange.com/questions/418374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Understanding what it means that gravitation is proportional to the product of the masses Newton's gravitational law states that $F = G*\frac{M*m}{d^2}$ Intuitively it means that the greater the masses, the stronger the force, but it is more precise than that, it is proportional to the product of the masses not, for example, their sum. I am not sure how Newton derived that, but I can guess he deduced that if $M_1 = 2*M$, then the force on $m$ should be twice as much. However, the question is: what if I was able to move mass away from $M$ and add it to $m$? The total mass of the two bodies did not change, but now the force with which they are attracted to each other has changed. Is there an intuitive explanation of what is going on?
Perhaps it will help to look at the gravitational field strength, which here is measured in units of acceleration ($m/s^2$). Of course we know that force, mass and acceleration are related by $F=m\cdot a$. And we have Einstein's principle that an acceleration and a gravitational field are indistinguishable. So, the gravitational field of the bigger body will be $g = G\cdot \frac{M}{r^2}$. The smaller mass ($m$) then experiences the force $F=m\cdot g$. Since $M$ and $m$ are the only two objects here, they both experience the same strength force, just in opposite directions. That is the reason why the force between two gravitating Newtonian objects is proportional to the product of their masses. You can swap the two masses $m$ and $M$ and arrive at the same result. Also, consider a limiting case in which $m\rightarrow 0$. The force between the two bodies vanishes. That's not surprising. If there is no second object, there won't be a force on the first.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/418628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why is it not possible to dope semiconductors with elements with 6 or 2 valence electrons? As far as I have learnt, semiconductors are often made using elements from the 4th group, and their properties are often enhanced by doping with either pentavalent or trivalent elements. Take the case of pentavalent doping, which results in a n-type semiconductor. Well this would raise the Fermi energy since there are more electron donors and this enhances the properties of the semiconductor. Wouldn’t it be more effective if doped with an element like sulphur? There would be more electrons donated then. In addition, doping with group 2 elements would result in more holes too! Why is this currently not done? I would like an explanation that explains how these kinds of doping may affect the Fermi energy and why it is hence, not being done.
Silicon is doped with group III (B, hole doping) and group V (P, electron doping) elements because these form shallow impurities when residing at a silicon lattice position. Shallow impurities in silicon have binding energies of 40-50 meV and are ionised at room temperature so that their electrons reside in the conduction band. The cause of this low binding energy is that the excess nuclear potential is shielded by the high dielectric constant of 11.4. It is also possible to dope silicon with group VI elements (S, Se, Te). These elements indeed produce double donors but with much larger binding energies of 300 meV for the first electron and 600 meV for the second. Such donors are therefore not ionised at room temperature and do not contribute to the conductivity. I am not aware of successful group II (double acceptor) doping in silicon. Alkaline earth atoms are unlikely to favour the silicon lattice position. Zinc appears to be a double acceptor, also with deep levels.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/418738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can an Observer Distinguish between Unruh Effect and Hawking Radiation by Measuring Temperature? In Unruh effect, the temperature of background appears to be proportional to acceleration. On the other hand, the temperature of a black hole is inversely proportional to its mass. If the two effects have the same origin as mentioned in https://physics.stackexchange.com/a/259342/85274 temperature in both must include the same proportionality to acceleration and to gravitational acceleration, respectively. According to equivalence principle, an observer hovering above the horizon of a black hole is equivalent to an observer who accelerates somewhere far from any massive body. But he can measure the thermal radiation, use the formulas for the two effects and infer in which situation he is. Does it imply that there is distinction between the situations, from an observer's point of view, or is something wrong with my argument?
You are probably assuming a uniformly accelerating observer; in that case, no, because the Unruh temperature has the same form as the Hawking temperature $T_H = \hbar g / (2\pi c k_B)$ of a black hole. Assuming an inertial observer makes a difference: an inertial observer doesn't see Unruh radiation, but an inertial observer far away from a black hole will still see Hawking radiation.
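To put a number on the formula quoted above (a rough Python evaluation, constants in SI units):

```python
import math

hbar = 1.055e-34      # J s
c    = 2.998e8        # m/s
kB   = 1.381e-23      # J/K

def unruh_temperature(a):
    """T = hbar * a / (2 * pi * c * kB); same form as the Hawking temperature with a -> g."""
    return hbar * a / (2 * math.pi * c * kB)

print(unruh_temperature(9.81))     # ~4e-20 K for an acceleration of 1 g
```

which illustrates why such temperatures are far too small to measure for everyday accelerations.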
{ "language": "en", "url": "https://physics.stackexchange.com/questions/418898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Questions about shapes and the reason behind the drag coefficient differences * *What is the reason that a streamlined body has a drag coefficient that is lesser than the drag coefficient of a streamlined half-body, when the latter has a completely flat bottom, while the former has bulges on either side of the centerline? *Why do long cylinders have a lesser drag coefficient than short cylinders? Length means more surface for the flow to move against, but it seems that a shorter object with an otherwise same shape, has more drag to it. https://img.bhs4.com/26/7/2675e0e869aa5afcf4ea44bd4908acb8248a8a76_large.jpg
The first thing you need to do when comparing drag coefficients is to check the reference areas. Even though it is not clear from the picture you link to, it is pretty obvious that the frontal area was used. This means for your first question that the half body creates roughly as much drag force as the full body (which has twice the frontal area). Next, you need to know the Reynolds number, surface roughness and the level of turbulence to have a better understanding of how the drag coefficient had been measured. Again, all those details are missing. Now consider that the half-body is mounted on a flat surface. Most likely, the flat plate alone was tested first and then the half-body added, and the drag of the whole plate plus half-body measured. I suspect that the surface area covered by the half-body was subtracted when the additional drag of the plate was eliminated from the measurement, but I have no way of knowing. In all, giving a single number without more details on how the result was obtained is unsound and potentially misleading. Your second question is easier to answer. Essentially, a longer body has a lower drag coefficient because it creates a smaller frontal area of separated flow. When flowing around the forward edge of the cylinder, the flow will separate and follow a curved path that is dictated by the pressure difference between outer flow and the separated region right after the forward edge. The long cylinder will prevent air from behind the cylinder to flow forward and fill up the separated region, so the pressure difference is stronger, leading to a higher curvature of the flow past the edge and eventual reattachment before the end of the long cylinder is reached. On the other hand, the separation field past the edge of the short cylinder joins with the separated flow behind the cylinder and the flow looks like that around a flat disk (see this answer for an extensive list of possible drag coefficients - and yes, Mr. Hoerner gives the valid Reynolds number range and an extensive list of citations for his single-figure numbers).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Decomposition of the symmetric part of a tensor The rate of strain tensor is given as $$e_{ij} = \frac{1}{2}\Big[\frac{\partial v_i}{\partial x_j}+ \frac{\partial v_j}{\partial x_i}\Big]$$ where $v_i$ is the $i$th component of the velocity field and $x_i$ is the $i$th component of the position vector. From what I read, I understand that $e_{ij}$ is the rate of strain tensor or the symmetric part of the deformation tensor, i.e. $\nabla \bf{v}$. The rate of strain tensor can be decomposed in the following form: $$e_{ij} = [e_{ij} - \frac{1}{3}e_{kk}\delta_{ij}] + \frac{1}{3}e_{kk}\delta_{ij} $$ From what I could gather, $e_{kk}$ can be written as $\nabla \cdot \bf{v}$, which represents the pure volumetric expansion of a fluid element, and the first term is some kind of strain which does not encompass volumetric change. Is this correct, or is there more to it? What is the correct physical interpretation for it, and why is it useful? Furthermore, I read that any such symmetric tensor can be decomposed into an “isotropic” part and an “anisotropic” part. I am unable to understand why we can do this and what it represents physically. I would like to have a mathematical as well as a physical understanding for this sort of decomposition. I am very new to tensors and fluid mechanics and would like to have a complete understanding of this. Thank you for the answers.
In principal component form, $$D_{11}=\frac{1}{3}\left[\frac{\partial v_1}{\partial x_1}+\frac{\partial v_2}{\partial x_2}+\frac{\partial v_3}{\partial x_3}\right]+\left[\frac{1}{3}\left(\frac{\partial v_1}{\partial x_1}-\frac{\partial v_2}{\partial x_2}\right)+\frac{1}{3}\left(\frac{\partial v_1}{\partial x_1}-\frac{\partial v_3}{\partial x_3}\right)\right]$$ The first term in brackets represents the isotropic expansion/compression contribution to the rate of deformation tensor. The two terms in the second brackets can be interpreted as non-isotropic "pure shear" deformation contributions to the rate of deformation tensor. This same type of pure shear kinematics is encountered in the interpretation of solid mechanics deformations. Google "pure shear" in solid mechanics. https://www.google.com/search?q=pure+shear
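As a quick sanity check of that principal-component identity (a trivial SymPy sketch; the symbols stand for the diagonal velocity gradients):

```python
import sympy as sp

d1v1, d2v2, d3v3 = sp.symbols('d1v1 d2v2 d3v3')   # dv1/dx1, dv2/dx2, dv3/dx3

isotropic  = sp.Rational(1, 3) * (d1v1 + d2v2 + d3v3)
pure_shear = sp.Rational(1, 3) * (d1v1 - d2v2) + sp.Rational(1, 3) * (d1v1 - d3v3)

print(sp.simplify(d1v1 - (isotropic + pure_shear)))   # 0: the two pieces rebuild D_11
```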
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
What does it mean if the dot product of two vectors is negative? If the dot product gives only magnitude, then how can it be negative? For example, in this calculation: $$W = \vec{F}\cdot\vec{r} = Fr\cos\theta = (12\ \mathrm{N})(2.0\ \mathrm{m})(\cos 180^\circ) = -24\ \mathrm{N\,m} = -24\ \mathrm{J}$$ Why is there a negative sign? What does it tell us?
A dot product between two vectors is their parallel components multiplied. So, * *if both parallel components point the same way, then they have the same sign and give a positive dot product, while *if one of those parallel components points opposite to the other, then their signs are different and the dot product becomes negative. In your specific example of work, a negative $W$ thus means that the force and the displacement are opposite. Clearly this means that that force is not helping but rather counteracting this displacement, for example by slowing down the motion. So this force takes energy out of the system rather than adding energy to the system. Therefore the negative sign makes sense.
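For the numbers in the question, a two-line check (Python/NumPy) makes the sign explicit:

```python
import numpy as np

F = np.array([-12.0, 0.0])   # 12 N force pointing along -x
r = np.array([  2.0, 0.0])   # 2 m displacement along +x

print(np.dot(F, r))          # -24.0 J: the force opposes the displacement
```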
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Is the Bose-Einstein condensation a single particle phenomenon? BEC occurs for noninteracting Bosons. Can we conclude that it can be described with a single particle? What is the significance of the number of the particles?
What @Árpád Szendrei said is correct. I will add some miscellaneous points. * * "BEC occurs for non-interacting bosons": BEC occurs for interacting bosons as well, and non-interacting BEC is actually a pathological example. It has an infinite compressibility. The speed of sound is zero, and any infinitesimal drag will create excitations. A weakly interacting BEC has a non-zero speed of sound, and acts like a superfluid. It IS possible to make a non-interacting BEC, by modifying the scattering length between atoms to zero, using external fields (see Feshbach resonance). *The "wavefunction" that people usually discuss ($\psi(r) = \sqrt{n(r)}e^{i\phi(r)}$) is technically not the actual many-body wavefunction, but an order parameter of the condensate. This "wavefunction" obeys a non-linear Schrodinger-like equation called the Gross-Pitaevskii equation. * "What is the significance of the number of the particles?" It would help if the question were more precise, but a common question is whether the form of the order parameter mentioned above conserves the number of particles. The fact is, it doesn't, because it has a well-defined phase. It has a well-defined average particle number, though. There is fluctuation in the number of particles, but (fluctuation)/(average) quickly goes to zero in the thermodynamic limit. [To find the fluctuation in numbers, you need to look at the full Hamiltonian in second quantization form to get answers quickly, so what I said is not really rigorous but just a sketch.]
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Can Bohr-Kramers-Slater (BKS) theory examples be generalized to prove that classical gravity is futile? In the comment in What are the reasons to expect that gravity should be quantized? by Ron Maimon, it is mentioned that, drawing an analogy from classical electromagnetic waves to classical gravitational waves, one can notice that conservation of energy is violated. However, general relativity does not really have conservation of momentum as a fundamental concept, and thus it is unclear what this would mean. Can anyone explain this? Can Bohr-Kramers-Slater (BKS) theory really serve as an example refuting the possible validity of classical gravity?
However, general relativity does not really have conservation of momentum as fundamental concept, and thus it is unclear what this would mean. Can anyone explain this? GR does have conservation of momentum as a fundamental concept. Specifically, the structure of GR requires that the stress-energy tensor have zero divergence, which is a statement of local conservation of the energy-momentum four-vector. What GR doesn't have is a generic global conservation law for energy-momentum, but I don't think that has any logical consequences for the argument you refer to, because we do have such conservation laws for special cases like asymptotically flat spacetimes, and one can in principle detect gravitons, and falsify a classical theory of gravity, in an asymptotically flat spacetime. In any case, the argument about nonconservation of energy in BKS is really more about nonconservation of probability, i.e., it's about unitarity. It's just that in 1927, people described it in terms of having only statistical conservation of energy and momentum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is it necessary to irreversibly erase a memory? I know that the most accepted resolution of the Maxwell's demon paradox was proposed by Landauer and revolves around the fact that the demon's memory is finite and will have to be erased at some point. This is an irreversible process that will generate entropy and preserves the second law. My question is this: why is it necessary that the demon erases its memory instead of just writing over it in a reversible way? Couldn't the part of the machine responsible for writing a new state in memory be made to depend on the previous state in memory? Or is memory necessarily linked to an irreversible process?
Empty computer memory can be viewed as a thermal reservoir at zero temperature. In a very real sense it is possible to convert information (or rather, in this case, absence of entropy) into energy, something that has been experimentally demonstrated. So the initially empty memory can be viewed as a finite reservoir that allows the demon to "produce" energy by interacting with the gas, moving a bit of gas entropy into the memory. But once it is used up, the demon cannot reversibly restore it since that would need it to dump the stored entropy somewhere colder, and it does not have access to any such thermal reservoir.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/419992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Eigenvalues of a quantum field In the book "Quantum Field Theory for the Gifted Amateur", the following is stated, cf. 9.3: "A quantum field $\hat{\phi}(x)$ takes a position in spacetime and returns an operator whose eigenvalues can be a scalar, a vector (the $W^{\pm}$ and $Z^0$ particles are described by vector fields), a spinor (the object that describes a spin-$\frac{1}{2}$ particle such as an electron), or a tensor." My question: is this statement correct, or should the word "eigenvalue" be replaced with "eigenvector"? If I naively think of a quantum field as being a function valued in some Hilbert space, then it seems to me that the eigenvalue of the operator $\hat{\phi}(x)$ should be a scalar quantity.
I believe the statement is correct, although one can argue that it expands the definition of "eigenvalue" beyond what one may be used to. The quantum field has more than one component in the case of spinors, vectors, or tensors, so components of spinors etc. are eigenvalues of the components of the quantum fields. As for eigenvectors of a quantum field, remember that they are vectors in the Fock space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/420184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How does a satellite take pictures when the surface seems to be always cloudy/white? I've just opened an ISS video on YouTube for the first time, and I must say I'm underwhelmed. There are no oceans/landforms. It's all white everywhere. I'm a bit confused how the satellites can take pictures when the view from space looks like this? Google Maps has clear satellite pictures. Do they use some other frequency of light that goes straight through clouds?
It is not always cloudy. Google Earth and various other map and remote sensing image databases work on the basis of multiple images "pasted" together. When a particular area is clear on a particular pass, the satellite takes an image. This image gets added to the database. The collection of images you see on the web site is constructed from these images. When new images of an area are collected, there is a process to decide if the new images are preferable to the old images. https://www.techwalla.com/articles/how-often-does-google-maps-update-satellite-images
{ "language": "en", "url": "https://physics.stackexchange.com/questions/420532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
does tension in the string affect its equilibrium? In my textbook (Sears and Zemansky's University Physics), it is written that the vector sum of the forces on the rope is zero, however the tension is 50 N. Then is tension different from the force? And if not, then why is the force zero while the tension is not? A body that has pulling forces applied at its ends, such as the rope in Fig 4.27, is said to be in tension. The tension at any point is the magnitude of force acting at that point (see Fig 4.2c). In Fig 4.27b, the tension at the right end of the rope is the magnitude of $\vec{\mathbf{F}}_{M\ on\ R}$ (or of $\vec{\mathbf{F}}_{R\ on\ B}$). If the rope is in equilibrium and if no forces act except at its ends, the tension is the same at both ends and throughout the rope. Thus if the magnitudes of $\vec{\mathbf{F}}_{B\ on\ R}$ and $\vec{\mathbf{F}}_{M\ on\ R}$ are $50\ \rm N$ each, the tension in the rope is $50\ \rm N$ (not $100\ \rm N$). The total force vector $\vec{\mathbf{F}}_{B\ on\ R}+\vec{\mathbf{F}}_{M\ on\ R}$ acting on the rope in this case is zero!
If you pull the ends of a rope with equal and opposite forces (F on the right hand end and –F on the left hand end), the resultant force on the rope is zero. But the rope will be in a different state from the state it would be in if no forces were being exerted on it. We say that the rope is under tension. Tension is quantifiable. Consider any cross-section of the rope at any distance along it. The section, R, of rope to the right of the cross-section will be pulling the section, L, of rope to the left of the cross-section with a force F, and the section, L, of rope will be pulling the section, R, of rope with a force –F. We say that the tension in the rope is of size F. Both L and R will individually be in equilibrium, assuming that the rope is not accelerating; indeed that's how we could deduce the "internal forces" I've just described. So tension isn't strictly a force, but a rope under tension F will require forces ±F at either end to keep it in equilibrium, and by Newton's third law, will exert forces –F and +F on whatever it's attached to at either end.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/420708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Hamiltonian operating on a function of time I've seen a few people claiming: $$\hat{H(t)}[\psi(x)T(t)] = \hat{H(t)}[\psi(x)]T(t)\tag{1}$$ i.e. an explicit function of t is not acted upon by H, even if H itself may be dependent on t. A more specific example, Griffiths between equation 9.7 and 9.8 (implicitly): $$\hat{H(t)}[\psi e^{iEt/\hbar}] = \hat{H(t)}[\psi] e^{iEt/\hbar}$$ Is this because t is within an exponential, or is the general statement (1) true? And why? I feel like it has something to do with time being a parameter not a variable (although I don't fully get this concept either)
The Hamiltonian operator only contains derivatives with respect to space, and those are partial derivatives, so they don't affect any function that depends only on time. Hence your exponential acts as a constant for the Hamiltonian operator.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/420937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If entropy is a measure of disorder, how come mixing water and oil finishes in a well ordered, separate state? By the second law of thermodynamics, entropy tends to increase when the system is left to itself. And if entropy is a measure of disorder, how come mixing oil in water and letting the system reach equilibrium ends up with the oil and water well separated? I see no disorder whatsoever, while in reality the entropy increased compared to the initial state (oil and water seemingly well mixed by e.g. shaking the container).
This is because entropy has almost nothing to do with the apparent order or disorder you can see with your naked eye. That's just a pop science simplification. Compare the entropy of a dictionary and an identically sized book full of random gibberish. You might think the latter has a higher entropy, because the content is disordered. But the entropy of the words in the book is not even a million millionth of the total entropy, which overwhelmingly comes from thermal motion of the molecules in the paper. (Compare the number of characters in the book to the number of molecules, on the order of $10^{23}$.) If you hold the dictionary for even a second, the heat from your hand will make the entropy of the dictionary higher. In the case of oil and water, it is energetically favorable for the oil to be separate from the water, because these molecules bind more strongly to themselves than to each other. The extra energy released is now available to thermal motion of the oil and water molecules, or to the surrounding air molecules, increasing the entropy of the universe. This overwhelms the decrease of entropy associated with clumping the oil together.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/421056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Why is Copper(I) Oxide Red? This may appear to be a chemistry problem. But, after reading the Wikipedia article on copper(I) oxide, it seems to have more to do with semiconductor-physics. For example: … light travels almost as slowly as sound in this medium. Is that true? What have Kramers–Kronig relations got to do with it? To a chemist, who was never brilliant at maths, it takes a bit of understanding. I know that copper(II) oxide (Mott–Hubbard insulator [semiconductor]) is black because of intervalence charge transfer, giving rise to the generation of a highly polarising Cu(III) species. Similarly, the non-stoichiometric form of nickel(II) oxide (Mott insulator) is black because of a Ni(III) species. Again, silver(I) oxide is black … Ag(III) species. This model does not appear to work for copper(I) oxide because the non-stoichiometry, causing the oxidation required for the balancing of charges with the oxide ions, would give Cu(II); which, by definition, is not sufficiently polarising to produce the deep, intense colour observed. Further, the reduced Cu(I) becomes Cu(0), the pure metal. So, why is copper(I) oxide red?
No, it is not true that light moves very slowly in Cu$_2$O. Maybe some polariton extremely close to its resonant energy (I did not look into that), but not generally. It is red for the same reason why vermillion is red. It has a bandgap of about 2 eV, so that blue and green light are absorbed. And the absorption (the imaginary part of the complex refractive index) is related by Kramers-Kronig to the real part of the refractive index, which gives high reflectivity in the red.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/421213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
In a Bell scenario, why can correlations be nonlocal only if there are at least two measurement settings to choose from? In (Brunner et al. 2013), the authors mention (end of page 6) that a set of correlations $p(ab|xy)$ can be nonlocal only if $\Delta\ge2$ and $m\ge2$, where $\Delta$ is the number of measurement outcomes for the two parties (that is, the number of different values that $a$ and $b$ can assume), and $m$ the number of measurement settings that one can choose from (that is, the number of values of $x$ and $y$). A probability distribution $p(ab|xy)$ is here said to be local if it can be written as $$p(ab|xy)=\sum_\lambda q(\lambda) p(a|x,\lambda)p(b|y,\lambda).\tag1$$ This means that if either there is only one possible measurement outcome, or only one possible measurement setting, then all probability distributions can be written as in (1). The $\Delta=1$ (only one measurement outcome) case is trivial: if this is the case, denoting with $1$ the only possible measurement outcome, we have $p(11|xy)=1$ for all $x,y$. Without even needing the hidden variable, we can thus just take $p(1|x)=p(1|y)=1$ and we get the desired decomposition (1). The $m=1$ case, however, is less trivial. In this case the question seems equivalent to asking whether an arbitrary probability distribution $p(a,b)$ can be written as $$p(a,b)=\sum_\lambda q(\lambda)p(a|\lambda)p(b|\lambda).$$ The paper does not mention any reference to support this fact. How can this be proven?
Make a $\lambda_{a,b}$ for every pair $(a,b)$. Then make $q(\lambda_{a,b}) = p(a,b)\,$, and $p(a|\lambda_{a,b}) = p(b|\lambda_{a,b}) = 1.$
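A small numerical illustration of that construction (a Python sketch with an arbitrary joint distribution and three outcomes per side; the one-hot vectors are the deterministic response functions $p(a|\lambda_{a,b})$ and $p(b|\lambda_{a,b})$ from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((3, 3))
p /= p.sum()                                  # an arbitrary p(a, b), single setting m = 1

reconstructed = np.zeros_like(p)
for a0 in range(3):
    for b0 in range(3):
        q  = p[a0, b0]                        # q(lambda_{a0,b0}) = p(a0, b0)
        pa = np.eye(3)[a0]                    # p(a|lambda_{a0,b0}) = 1 if a == a0 else 0
        pb = np.eye(3)[b0]
        reconstructed += q * np.outer(pa, pb)

print(np.allclose(reconstructed, p))          # True: every p(a, b) admits the form (1)
```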
{ "language": "en", "url": "https://physics.stackexchange.com/questions/421512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can Newton’s law of gravitation be derived from Coulomb’s law? I’m casually learning physics and have noticed that Newton’s law of gravitation and the electrostatic force formulas look similar. I’ve asked this question before but would really appreciate another response. Is it possible that the two laws are related? Can the law of gravitation be seen as the macroscopic averaging of Coulomb’s law? So atoms on average have negative charge (positive mass) and thus on a macroscopic scale we observe that two large bodies (eg planets) attract rather than repel. Would it help if we assume that masses can be positive as well as negative? Apologies as I’m not a physicist (rather a data analyst) and these are probably dumb questions.
To the best of our knowledge they are not deeply related, although there is a theory called Kaluza–Klein theory that tried to interpret electromagnetism as curvature of space-time, much like gravity. There are, however, no real indications that this is correct. To get back to the original question, the relation is that the force equations have identical functional form with just different constants. This can be interpreted as a coincidence, but it is useful in mechanics since you can reuse many results for gravity in the case of charges that interact.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/421650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Do photons violate the uncertainty principle, given that they have a constant speed $c$ with no uncertainty? I have a very basic understanding of quantum physics, but as I understand it the uncertainty principle says that the more precisely you know a particle's momentum, the less you know the particle's position. But I wonder about the photon: given that the velocity is a constant $c$, so there is no uncertainty at all in the speed (and so in the momentum), does that mean for a photon that the uncertainty of the position is "infinite"?
As explained in If photons have no mass, how can they have momentum?, it is impossible to assign photons a classical momentum $p=mv$, because their mass is zero. Instead, the photon momentum is determined by its wavelength $\lambda$ via $$ p = \frac h\lambda, $$ where $h$ is Planck's constant. This means that the only way to have a completely determined momentum (i.e. $\Delta p=0$) is to have a completely determined wavelength, and that can only be done if the wavepacket has infinite extent (because, if it doesn't, what is the wavelength at the edge of the wavepacket?). Thus, the photon momentum is fully compatible with the Heisenberg uncertainty principle.
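As a rough numerical illustration of that last point (a sketch, not part of the original answer; the Gaussian packet and its width are arbitrary choices), one can build a wavepacket of finite extent and check via a Fourier transform that its spread of wavenumbers $\Delta k$ — and hence of momenta $\Delta p = \hbar\,\Delta k$ — satisfies $\Delta x\,\Delta k \ge 1/2$:

```python
import numpy as np

# A Gaussian wavepacket of finite spatial extent: psi(x) ~ exp(-x^2/(4 sigma^2)) e^{i k0 x}
N = 2**14
x = np.linspace(-200.0, 200.0, N)
dx = x[1] - x[0]
sigma, k0 = 2.0, 5.0

psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize

# Spread in position
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
spread_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Spread in wavenumber, from the Fourier transform
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = k[1] - k[0]
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= np.sum(prob_k) * dk                        # normalize
mean_k = np.sum(k * prob_k) * dk
spread_k = np.sqrt(np.sum((k - mean_k)**2 * prob_k) * dk)

# A Gaussian saturates the bound: Delta x * Delta k ~ 0.5
print(spread_x * spread_k)
```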
{ "language": "en", "url": "https://physics.stackexchange.com/questions/421863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Check where I am wrong: here is how information can be sent faster than light. Suppose I have a very long rigid rod of rectangular cross section and small mass. Let its length be 1 light year and its total mass 1 kg. Now, if I rotate that rod at one end, then, since the rod is rigid, it should rotate as a single body about the axis parallel to its length and normal to its cross-sectional area. If this happens instantaneously, does it mean that the information of rotating the rigid rod has been sent instantaneously (faster than the speed of light)? If the assumption is false, what actually happens?
The solution to this "paradox" is that such a rod simply cannot exist: relativity prohibits the existence of perfect rigidity. If you rotated one end, the rotation would propagate from one end to the other at approximately the speed of sound in the rod. Since the speed of sound in any material is far slower than the speed of light in vacuum, $c$, the paradox is averted.
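To put a rough number on this (an order-of-magnitude estimate, not from the original answer; the ~5 km/s value for the speed of sound in steel is an assumed typical figure, and of course no real material could form such a rod):

```python
# How long a twist takes to propagate down a 1-light-year rod
# if the disturbance travels at roughly the speed of sound in steel.
c = 2.998e8            # speed of light in vacuum, m/s
year = 3.156e7         # seconds per year
rod_length = c * year  # one light year in metres, ~9.46e15 m

v_sound_steel = 5.0e3  # ~5 km/s, assumed order-of-magnitude value

travel_time_years = rod_length / v_sound_steel / year
print(f"~{travel_time_years:,.0f} years")   # roughly 60,000 years, far from instantaneous
```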
{ "language": "en", "url": "https://physics.stackexchange.com/questions/422109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is nesting / what is a nesting vector in energy contour plots? I am making different plots for a 2-d non-interacting tight-binding Hamiltonian $$ H = - t \sum_{<ij>, \sigma} c_{i \sigma}^{\dagger} c_{j \sigma} + \text{h.c.}$$ I get the dispersion relation $$\epsilon (k) = -2 t ( \cos(k_{x} a) + \cos (k_{y} a)).$$ Plotting the contours of this, I get many $k$ values giving me the same energy; the $\epsilon = 0$ contour looks like a rhombus. I know that this has something to do with nesting, but I don't understand exactly what it is. I would also appreciate references to good sources on this; I can't seem to find any that explain this clearly without referring to other things I am not familiar with.
"Nesting" refers to a Fermi surface where two points on the Fermi surface are connected by half a reciprocal lattice vector. When this occurres, it usually indicates the system is critical or unstable with respect to an interaction. If your think about adding an interaction term to the Hamilton via perturbation theory, you'll find that any translationally invariant term only couples states with the same momentum, up to a reciprocal lattice vector. If there's no nesting, the states will have very different energies, and thus be suppressed in perturbation theory. However, if the Fermi surface has nesting, there exists states with the same crystal momentum and same energy, which diverge in perturbation theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/422301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }