Generalized method for dealing with circuit involving symmetry? I'm a physics tutor for high school students. I'm finding it difficult to teach students how to exploit symmetry while finding equivalent resistance of a given network of resistors. I was Googling but couldn't find any generalized method that is applicable for any circuit with symmetry. The answers to How to find points with same potential while solving an equivalent resistance problem? also say that there isn't a generalized method.
But I found this fantastic course handout of a Microwave Engineering course which tries to define symmetry mathematically using group theory (I don't know group theory, but the handout explained it in such a simple manner that even a layperson could understand).
An example from the slide:
I understood it till the 8th slide. Then it goes on to say:
The importance of this can be seen when considering the scattering
matrix, impedance matrix, or admittance matrix of these networks.
I don't know what these matrices are. They seem to be specific to Microwave Engineering.
Question:
How can the concepts developed in this presentation (up to slide 8) be used for simple resistor circuits involving symmetry to find potentials, currents and equivalent resistances?
To give a sense of the kind of symmetric circuits I'm dealing with, here are a few circuits:
| The common approach is to first deduce several equalities using the symmetry of the configuration of the circuit (namely its geometry), which is fairly easy and can be figured out by following your instinct. Then, they will ask you to reverse the applied potential difference and figure out several other equalities. These can usually be combined to give you more equalities. It is then often possible to disconnect the circuit at some places without changing the flow of the current. After that, the circuit can be reduced to resistors in series and in parallel.
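Once symmetry has suggested which nodes share a potential, the claim can always be checked numerically by solving Kirchhoff's laws as a linear system. Below is a minimal sketch (the node labelling, the `equivalent_resistance` helper and the all-1-ohm bridge values are mine, purely for illustration): it builds the conductance Laplacian of a balanced Wheatstone bridge, injects a unit current, and reads off the equivalent resistance.

```python
import numpy as np

def equivalent_resistance(n_nodes, edges, a, b):
    """Inject 1 A at node a, extract it at node b, and solve Kirchhoff's laws.

    edges is a list of (i, j, R) resistors; the returned V_a - V_b
    numerically equals the equivalent resistance between a and b.
    """
    L = np.zeros((n_nodes, n_nodes))              # conductance Laplacian
    for i, j, R in edges:
        g = 1.0 / R
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    rhs = np.zeros(n_nodes)
    rhs[a] = 1.0                                  # 1 A in at node a ...
    L[b, :] = 0.0; L[b, b] = 1.0; rhs[b] = 0.0    # ... and ground node b
    V = np.linalg.solve(L, rhs)
    return V[a] - V[b]

# Balanced Wheatstone bridge: nodes 0 (input), 1, 2, 3 (output),
# all five resistors equal to 1 ohm (1-2 is the bridge arm).
bridge = [(0, 1, 1.0), (0, 2, 1.0), (1, 3, 1.0), (2, 3, 1.0), (1, 2, 1.0)]
R_eq = equivalent_resistance(4, bridge, 0, 3)
```

By symmetry $V_1 = V_2$, so the bridge resistor carries no current and the network reduces to two series pairs in parallel: $R_{eq} = 1\,\Omega$, which is what the solver returns.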
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/356645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Where does the magnetic field energy of a charged particle moving with a uniform velocity come from? Consider a charged particle initially at rest with respect to an inertial frame.
Let a force act on it so that it gains a velocity 'v'.
It now produces a magnetic field that has some energy associated with it.
My question is: where does this energy come from? If it comes from the work done by the force acting on the particle, does it mean that $W = \Delta KE$ is not valid in this case?
| I quote from the 1938 edition of the Admiralty's handbook of wireless and telegraphy, anonymously written but clearly from the highest level of specialist knowledge: "Magnetic field energy is clearly inertial in character, just as electrical field energy is clearly kinetic in nature, due no doubt to the motion of electrons in time and space". The writer goes on to draw attention to analogies between electro-magnetic theories and mechanical action concepts. This
handbook refers to the aether as though it were an ongoing idea, and even refers to the "jar" as a unit of capacitance. Nevertheless, common sense always prevails in every line, for instance making it clear that "lines" of flux are only an invented aid to the grasp of concepts. I'm a great fan of this book. Geoff
Harding BSc. (National Service, REME-trained as an anti-aircraft computer electrician, then spent my life in polymer science research. Aged 87 now.) Cheers! Geoff
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/356800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Derive the Lagrangian that yields the free Schrödinger equation from Galilean Invariance The Lagrangian density $$L(\Psi, \Psi^*)=i \hbar \dot{\Psi} \Psi^* + \frac{\hbar^2}{2m} \Psi \Delta \Psi^*$$ will yield the Schrödinger equations for $\Psi$ and $\Psi^*$. Can we derive this Lagrangian density if we impose only quadratic terms and Galilean invariance as conditions on the density?
Of course the derivation can be up to total derivative terms, which do not change the physics.
| Galilean invariance only imposes that the action be built from the combination $i\partial_t-\frac{\triangle}{2m}$, to any power. This means that the action
$$
\int_{t,x} \sum_n \psi^*\left(i\partial_t-\frac{\triangle}{2m}\right)^n\psi,
$$
is invariant. You need additional constraints (like "simplicity", whatever that means, or comparison to experimental data) to obtain the OP's Lagrangian density.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Derivation of Newton's second law of motion from the principle of conservation of energy Is Newton's second law a consequence of the principle of conservation of energy? How can we arrive at
net force = rate of change of momentum
using only the law of conservation of energy?
| It is true that Newton's 2nd law with conservative force $$m_i{\bf a}_i ~=~-\frac{\partial V}{\partial {\bf r}_i}, \qquad i~\in~\{1, \ldots, N\},\tag{1}$$
implies conservation of the mechanical energy
$$E=\sum_{i=1}^N \frac{m_i}{2}{\bf v}_i^2 + V({\bf r}_1, \ldots,{\bf r}_N).\tag{2}$$
But the opposite $(2)\Rightarrow (1)$ is not true in general. However, if there is only one particle $N=1$ and only one spatial dimension $d=1$, and the speed is non-zero, then the opposite $(2)\Rightarrow (1)$ is true.
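The $N=1$, $d=1$ case can be made explicit by differentiating (2) along a trajectory:

```latex
\frac{dE}{dt} \;=\; m\,v\,\dot v + V'(x)\,\dot x \;=\; v\,\bigl(m\,a + V'(x)\bigr) \;=\; 0 ,
```

so wherever $v \neq 0$ one may divide by $v$ and recover (1). In higher dimensions the same computation only yields $\mathbf v\cdot(m\mathbf a+\nabla V)=0$, i.e. energy conservation constrains just the component of (1) along the velocity, which is why $(2)\Rightarrow(1)$ fails in general.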
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Does the Conventional Current flow consist of anything? (virtual photons) I've read that virtual photons are a way of interpreting the electromagnetic force between charged particles. Is conventional current an electromagnetic field or force? Or is it a movement of positive charge carriers?
I just read some things about it that look contradictory, so maybe here I can get some clarification.
|
I wonder if conventional current flow is an electric field.
Conventional current flow is not physical. It literally is nothing but a math trick. It's how electronics engineers understand and analyze circuits. It is based on an old, invalid understanding of what electricity is, but it still works in the context in which engineers use it.
Back when electricity was first studied, nobody knew about electrons. They knew that something flowed, and they called it "charge." An excess of charge was called "+", and a deficit of charge was called "-", and when a conductive path was established, the "flow" of charge from + to - was called "current."
Today we know that a + charge actually is a deficit of electrons, and a - charge is an excess of electrons, and we know that when electrons are the charge carriers, they move in the opposite direction of current flow. But literally, nobody ever bothered to change the old books. We still use the old definitions of "charge" and "current" because the old math still works.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Ball falling through viscous fluid experiment - strange results I did an experiment in which I dropped three different sized spherical beads (4mm, 6mm, and 11mm diameter) with the same densities through a viscous liquid (a water-detergent solution). They all fell the same distance, but the biggest one fell a full ten seconds faster than the smallest one. What could be the explanation for this? I would have thought the opposite due to friction and the fact that gravity affects everything the same. Why did the biggest one fall fastest?
| You have pointed out the difference between rain drops (large radius) and mist drops (small radius) which fall much slower.
When terminal velocity $v$ is reached the viscous drag on a sphere of density $\rho$ and radius $r$, $6\pi r v \eta$, is equal to the apparent weight of the sphere $\frac 43\pi r^3 (\rho -\sigma)g$ where $\sigma$ is the density of the fluid and $\eta$ its viscosity.
From this you get that $v \propto r^2$.
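Equating the Stokes drag to the apparent weight gives $v = \frac{2r^2(\rho-\sigma)g}{9\eta}$ explicitly. The quick sketch below uses made-up bead and fluid properties (the actual densities and viscosity of the experiment are not given), purely to show the $v \propto r^2$ scaling:

```python
def terminal_velocity(r, rho_sphere, rho_fluid, eta, g=9.81):
    """Stokes terminal velocity: 6*pi*eta*r*v = (4/3)*pi*r^3*(rho - sigma)*g."""
    return 2.0 * r**2 * (rho_sphere - rho_fluid) * g / (9.0 * eta)

# Illustrative values only: glass-like beads (2500 kg/m^3) falling through
# a water-detergent solution of assumed viscosity 0.5 Pa*s.
radii = [0.002, 0.003, 0.0055]        # 4 mm, 6 mm, 11 mm diameters
vs = [terminal_velocity(r, 2500.0, 1000.0, 0.5) for r in radii]
```

Doubling the radius quadruples the terminal speed, so the 11 mm bead is expected to fall far faster than the 4 mm one, as observed.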
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Rotation matrix in $ict$-$x$ plane I am reading about SR. The book starts by going over the ordinary rotation matrix, e.g.
$\left( {\begin{array}{cc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta \\
\end{array} } \right)$
which takes $x\rightarrow x'$, $y\rightarrow y'$.
Then it goes on to say that if the rotation involves an imaginary axis (like the imaginary time axis $ict$) then the angle of rotation needs to be imaginary as well, like
$\left( {\begin{array}{c}
ict' \\
x' \\
\end{array} } \right)=\left( {\begin{array}{cc}
\cos i\theta & -\sin i\theta \\
\sin i\theta & \cos i\theta \\
\end{array} } \right)\left( {\begin{array}{c}
ict \\
x \\
\end{array} } \right)$
which can then be written with hyperbolic functions instead of ordinary trig. Could somebody explain why the angle needs to be imaginary? Of course a complex number can be rotated by multiplying by $e^{i\theta}=\cos\theta + i\sin\theta$ but I don't understand why here the angle itself is imaginary. Some mathematical explanation would be very helpful. thanks
| Note first that working with imaginary time to teach special relativity has fallen way out of favor, it's not considered good pedagogy! And I don't think "hey plug in an imaginary axis" is a very good physical or mathematical argument. But with that out of the way...
The only thing I'm going to do is motivate why when you plug in an imaginary axis, the $i$ hits the angle.
You're right that it has to do with $e^{i\theta}$. It is more enlightening to work with matrices. Denote the identity matrix ${\bf 1}=\begin{bmatrix} 1 & 0 \\ 0 & 1\end{bmatrix}$ and denote another matrix $I=\begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}$. Note that $I^2=-\bf 1$, so the matrices of the form $a {\bf 1}+b I$ behave exactly like the complex numbers. So we can write $e^{I \theta}=\cos(\theta){\bf 1}+\sin(\theta) I=\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta)\end{bmatrix}$, where this is the infinite matrix sum ${\bf 1}+I\theta+\frac{1}{2}I^2 \theta^2+\cdots$
In the same way, if you're in three dimensions and you have a vector $\vec{\theta}$, you can create the rotation by an amount $\|\vec{\theta}\|$ about the $\hat{\theta}$ axis, by calculating
$$R=\exp\left( \begin{bmatrix} 0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0\end{bmatrix} \right)$$
(To prove that it rotates about that axis, note that the vector $(\theta_x,\theta_y,\theta_z)$ times the matrix in the exponential equals zero. So in the Taylor series for exp, identity leaves the vector alone, and every other term just gives zero).
Plugging in $\theta_z=i s$ (in a completely ad hoc manner!) gives:
$$R=\exp\left( i s \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0\end{bmatrix} \right)$$
ignoring the third axis gives $R=\exp(i s I)$. And according to our previous formula for $e^{I \theta}$, this is just equal to $R=\begin{bmatrix} \cos(is) & -\sin(is) \\ \sin(is) & \cos(is) \end{bmatrix}$, as the book says.
I don't think the author is trying to state a theorem though, I think they're just trying to motivate hyperbolic rotations. Their argument sounds like it's on the basis of "this sounds reasonable, right?"
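Both steps of this argument are easy to check numerically: that $e^{I\theta}$ reproduces the rotation matrix, and that $\cos(is)=\cosh(s)$, which is what turns the entries hyperbolic. The sketch below hand-rolls a truncated Taylor series for the matrix exponential just to stay dependency-light:

```python
import numpy as np

def mat_exp(A, terms=40):
    """Truncated Taylor series 1 + A + A^2/2! + ... for the matrix exponential."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

theta = 0.7
I_mat = np.array([[0.0, -1.0], [1.0, 0.0]])      # satisfies I_mat @ I_mat = -1

R = mat_exp(theta * I_mat)                        # e^{I theta}
R_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

# cosine of an imaginary angle is cosh, which is what makes the boost hyperbolic
hyperbolic = np.cos(1j * theta)                   # equals cosh(theta)
```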
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
As the sun expands, will its Roche limit also expand? So my friend and I had a debate. He stated that we don't have to worry about the sun consuming the Earth: we'll already be broken apart by then. He states that as the sun expands, its Roche limit will also expand. The Earth will have been broken apart long before the sun touches its orbit.
I'm arguing that the sun isn't changing mass (ignoring Mercury, Venus, and solar winds) and its gravitational force won't change, ergo eventually the boundary of the sun will be further than the Roche limit. The Earth will stay intact as it gets cooked.
Which, if either, of us is correct?
| You are correct, and your friend is not.
So long as the Sun remains spherically symmetric, its gravitational field can be replaced with the field of a point mass at its centre of mass, which is what determines the Roche limit (so, in particular, it fixes both the gravitational field and the gravitational field gradient at any position you care to name). The radial extent of the mass distribution is irrelevant so long as the spherical symmetry is retained.
I should also say I find it dubious that the Earth would break apart even if it did skim the Roche limit of some other body, because the Roche limit sets the break-apart point for systems that are bound gravitationally, i.e. for loose piles of rubble held together by their own gravity. This is not the case for the Earth, which is held together by strong chemical bonds in the rock of the mantle and core. Given a strong enough gravitational field gradient you can imagine the Earth getting torn apart, but this will happen a good bit further in than the Roche limit, I should think.
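For concreteness, the rigid-body estimate $d = R_M\,(2\rho_M/\rho_m)^{1/3}$ can be evaluated for the present-day Sun and Earth. The formula and the approximate mean densities below are standard textbook numbers, not taken from this thread:

```python
def rigid_roche_limit(R_primary, rho_primary, rho_satellite):
    """Rigid-body Roche limit: d = R_M * (2 * rho_M / rho_m)**(1/3)."""
    return R_primary * (2.0 * rho_primary / rho_satellite) ** (1.0 / 3.0)

# Approximate present-day values: solar radius 6.96e8 m, mean densities
# of the Sun and Earth in kg/m^3.
R_sun = 6.96e8
d = rigid_roche_limit(R_sun, 1408.0, 5514.0)
```

This gives $d \approx 5.6\times 10^{8}\,$m, inside the Sun's current radius. Moreover $R_M\,\rho_M^{1/3} \propto M_M^{1/3}$, so at fixed solar mass the limit does not move outward as the Sun expands, consistent with the answer above.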
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/357882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 1,
"answer_id": 0
} |
Spin-1 Particle System Measuring Angular Momentum Suppose a spin-1 system ($l = 1$) has a particle set in the state of the eigenstate in the $L_{x}$ basis, given as such: $$ |1, m_{x} = +1 \rangle = 1/2 \begin{pmatrix} 1 \\ \sqrt{2} \\ 1 \end{pmatrix}$$. The $L_{z}$ component of its angular momentum is measured and the result is $m_{z} = -1 $ which corresponds to the $L_{z}$ eigenstate of $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$. Immediately afterwards, the $L_{x}$ component of angular momentum is measured. What are the results obtained and with what probability? If you measured $m_{x} = -1$, and now you decide to measure $L_{z}$ again. What are the possible outcomes and with what probability?
My thought process for this is to first define a projection operator $\hat{O} = \sum_{j} |\rho_{j}\rangle \langle \rho_{j}|$ in the $L_{x}$ basis, followed by taking the squared inner product with the eigenstates of $L_{x}$, but I don't think I'm quite getting the actual logical flow of the measurements and the math.
| The initial outcome $L_z=-1$ will project your initial state onto the appropriate eigenstate of $L_z$. After normalization, the state after this measurement will be $\vert 1,-1\rangle=(0,0,1)^T$. To get the various probabilities of $L_x$ will require you to construct all three $\vert 1,m_x\rangle$ eigenstates and then work out $\vert \langle 1,m_x\vert 1,m_z=-1\rangle\vert^2$.
Measuring $m_x=-1$ again would project your $(0,0,1)^T$ back to your $\vert 1,m_x=-1\rangle$ state, so that measuring $L_z$ would be related to the overlap of eigenstates of $L_z$ on $\vert 1,m_x=-1\rangle$.
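Carrying out these overlaps numerically is a one-liner once the three $L_x$ eigenvectors are written in the $L_z$ basis (these are the standard spin-1 eigenvectors, the first of which is the state quoted in the question):

```python
import numpy as np

s = np.sqrt(2.0)
# Spin-1 L_x eigenstates expressed in the L_z basis
mx_plus  = 0.5 * np.array([1.0,  s, 1.0])          # m_x = +1
mx_zero  = (1.0 / s) * np.array([1.0, 0.0, -1.0])  # m_x =  0
mx_minus = 0.5 * np.array([1.0, -s, 1.0])          # m_x = -1

mz_minus = np.array([0.0, 0.0, 1.0])   # state after the L_z = -1 outcome

# Born rule: |<1, m_x | 1, m_z = -1>|^2 for m_x = +1, 0, -1
probs = [abs(np.dot(v, mz_minus)) ** 2 for v in (mx_plus, mx_zero, mx_minus)]
```

So $m_x = +1, 0, -1$ occur with probabilities $1/4, 1/2, 1/4$; and since $\vert 1,m_x=-1\rangle$ has $L_z$-components $(1/2, -\sqrt 2/2, 1/2)$, the follow-up $L_z$ measurement again yields $m_z = +1, 0, -1$ with probabilities $1/4, 1/2, 1/4$.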
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/358031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What causes light to refract? The common answer to this question is that light refracts because its speed changes in different materials. But this would mean that the photons have internal attraction to each other. Is that the case? Or is something else, like the density of different materials, the reason why light refracts?
Please give a more convincing explanation.
| There are many ways to look at this problem. I am going to add: light refracts because its path satisfies Fermat's principle: "light travels between two points along the path that requires the least time, as compared to other nearby paths."
Given the 2 different speeds of propagation in media 1 and 2:
$ v_2 \sin{\theta_1} = v_1 \sin{\theta_2} $
describes the refraction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/358268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Does the brightness of day follow simple harmonic 'motion'? Is it really true that the brightness during the day on earth follows simple harmonic motion? My teacher mentioned this as an example but it doesn't feel obvious to me by any stretch of the imagination (at least for a tilted earth). So how can we work out whether the brightness of day is actually SHM or not?
My attempt: We need the dot product between the sun's direction and surface of the earth. if we start by assuming that the sun is sufficiently far, then the incident rays are parallel, and the intensity of the rays is constant as earth rotates. (since light follows gauss's law)
Taking the latitude angle to be $\phi$ and the azimuthal angle $\theta$. We can use standard polar coordinates to describe the position on earth as $x = r\cos(\omega t+c_0)\cos(\phi)$, $y = r\sin(\omega t+c_1)\cos(\phi)$, $z=r\sin(\phi)$. We can dot product this with the angle of the sun's rays w.r.t time of the year. Problem is, I don't know how work out how the coordinates transform to make the sun rotate around the earth at some angle.
But instead supposing that the angle is fixed at $d$, take it to be $z_r = I\sin(d)$ and $x_r = I\cos(d)$ where intensity is $I$. The dot product is $$rI(\cos(\phi)\cos(d)\cos(\omega t+c_0) + \sin(d)\sin(\phi))$$
which is SHM. But have I got it right? What happens when the sun's rays are suddenly given a $y$ component?
EDIT:
https://arxiv.org/abs/1208.1043
This actually gives us the answer we need, $\cos(\theta''(L,t))$, on page 6. Replacing terms, we find that $d = \epsilon$, the axial tilt of the Earth, $L = \phi$, and $\phi$ represents the rotation of the 'sun around the earth'. I happened to set this to $\pi/2$, which gave me a special case. Their equation implies a specific function for the direction of the sun. Can anybody explain where it comes from?
I should rename the question 'the sun's position in the sky'
| The brightness may be periodic, but it is not “simple harmonic”. Simple harmonic requires that each full cycle follows a sinusoidal function. (Clearly not true at night.)
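That failure is easy to exhibit numerically. For any pure sinusoid $x(t) = A\cos(\omega t + c) + C$, the combination $x(t) + x(t+T/2)$ equals the constant $2C$; the clipped day/night curve below (a deliberately crude model of my own: equator, sun in the equatorial plane, hourly samples) violates this, so it cannot be simple harmonic:

```python
import math

# Brightness over one 24 h rotation: proportional to max(0, cos(omega * t)),
# i.e. the insolation dot product clipped to zero at night.
samples = [max(0.0, math.cos(2 * math.pi * t / 24.0)) for t in range(24)]

# For a pure sinusoid, x(t) + x(t + T/2) is the same constant for every t.
residuals = [samples[t] + samples[(t + 12) % 24] for t in range(24)]
spread = max(residuals) - min(residuals)   # nonzero: not a sinusoid
```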
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/358406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How to change Pressure and Temperature "The triple point of a substance is the pressure and temperature at which all three phases coexist in equilibrium."
Take water for example.
Let's say I put some water in a rigid container and add some heat.
Then both the temperature and pressure rise because the volume is constant.
If the triple point occurs during the heating, then that implies $V=\frac{nRT}{P}$, where $(T,P)$ is the triple point.
Does that mean the triple point can only exist in that size of a container?
|
Does that mean the triple point can only exist in that size of a container??
No. The ideal gas law you show applies only to gases. So the $n$ in that equation is not the total amount of substance, but only the amount in gas form.
Let's imagine you have two different containers (of different size), but you put the same quantity of water in. Then you seal it and move the temperature to that of the triple point.
Both containers will have the same pressure, but the smaller container will have more of the substance in solid or liquid phase, while the larger container will have more in the gas phase.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/358897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Will warm pizza get colder in deep space? If my hot pizza is in deep space and there are no gas molecules around it, how would it dissipate its energy?
There is neither conduction nor convection, and since the pizza is solid it can't radiate its heat.
Will it stay hot?
| Heat may be considered to be movement of particles. The particles of your hot pizza oscillate. An accelerating electric charge creates an electromagnetic field. The oscillating particles of the pizza emit photons in the wavelength of the field the particles create, generally infrared radiation. As the particles on the surface of the pizza radiate these photons, they become less energetic. For a while, conduction from hot particles within the pizza keeps the surface particles oscillating. But eventually, all the particles of the pizza become less energetic, their heat vibration slows, and the pizza cools.
See the first answer to this similar question: https://space.stackexchange.com/questions/7830/how-is-heat-in-space-dissipated.
Were heat never radiated away from the pizza as photons of electromagnetic radiation, the pizza would remain hot forever. Such a state would contravene the increasing entropy of our universe.
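The radiative loss described above is governed by the Stefan-Boltzmann law, $P = \varepsilon\sigma A\,(T^4 - T_{env}^4)$. Here is a rough sketch; the pizza's size, temperature and emissivity are all illustrative guesses of mine:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(T, area, emissivity=0.9, T_env=2.7):
    """Net power radiated into deep space (T_env is the CMB temperature)."""
    return emissivity * SIGMA * area * (T**4 - T_env**4)

# Toy pizza: a 30 cm disc radiating from both faces, fresh from the oven.
import math
area = 2 * math.pi * 0.15**2
P = radiated_power(350.0, area)
```

Roughly 100 W leaves the pizza as thermal radiation at oven temperature, so it cools quickly at first and ever more slowly as $T^4$ falls.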
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A confusion in the representation of states in $k$-space In the book "Introduction to Quantum Mechanics" by Griffiths, chapter 5 section 3 (Solids), the author states the following:
...Each intersection point in $k$-space represents a distinct stationary state. Each block in this grid, and hence also each state, occupies an elementary volume in this space.
As I understood, there is one state for every point (intersection), but according to the statement above, it seems to me that there is one state for every block too, as indicated here:
Isn't this a contradiction?! It's quite evident to me that each block represents 8 states because it contains 8 points (vertices)!
So what did I miss? Any clarification, please.
| Each block contains 1/8 of 8 vertices, because each vertex also belongs to 8 blocks.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Basic concepts of ionization detectors I am reading "Techniques for Nuclear and Particle Physics Experiments" by William R. Leo. I have some questions about the ionization detectors chapter (chapter 6). In chapter 6, there is a figure like this:
I understand the basic concept of how the signal is registered (radiation penetrates the cylinder, causing the gas to become ionized; the electrons then move toward the wire, causing the constant voltage to drop; the power supply must then do some work to bring the voltage back to $V_0$, and that work is what is registered as the signal). Please correct me if I am wrong; I am not taking a class on this, this is just independent study. However, I do not understand the electronics part of this. For example, what are the functions of the two resistors and the capacitor? I also would like some more details about how the signal is registered.
| The electronics there implement a simple high-pass filter that prevents flow of the DC high-voltage supply to the DAQ (or ground) but allows the (rapidly changing) signal pulse to pass almost unaffected.
The very simple form used here is acceptable because there is such a stark difference between the signal you want to keep (a time scale of a few tens of nanoseconds, meaning frequencies around 100 MHz) and the one you want to exclude (the DC high voltage).
The feature to notice is that there is no wire connecting the high voltage to ground anywhere (because a capacitor has a gap, right?). At the same time, as charge moves on and off the capacitor due to things happening on the tube side of the circuit, it causes a detectable change in potential (voltage) on the DAQ side of the circuit.
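The corner frequency of such a coupling network is $f_c = 1/(2\pi R C)$. The component values below are illustrative guesses (Leo's actual values are not quoted here), but they show how easily the two regimes are separated:

```python
import math

def highpass_cutoff(R, C):
    """-3 dB corner frequency of a first-order RC high-pass: 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * R * C)

# Illustrative decoupling values: 1 Mohm load resistor and a 100 pF
# coupling capacitor between tube and DAQ.
f_c = highpass_cutoff(1e6, 100e-12)
```

A corner near a few kHz blocks the DC bias completely while the ~100 MHz signal pulses pass essentially unattenuated.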
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Do fields describing different particles always commute? Is it true that field operators describing different particles (for example a scalar field operator $\phi (x) $ and a spinor field operator $\psi (x) $) always commute (i.e. $ [\phi (x), \psi (y) ]=0, \forall x,y $) in interacting theory?
Or is it true only at equal times? (i.e. $ [\phi (t,\vec x), \psi (t, \vec y) ]=0, \forall \vec x, \vec y $)
Or is it in general not true even at equal times?
Finally, if the fields in account are both fermionic must the commutator be replaced with an anticommutator?
| lurscher asks for a physical interpretation of CR Drost's correct answer. I'm answering separately because my response was too long to fit into a comment.
In the trivial case where the two fields are completely uncoupled - either directly or indirectly by way of both being coupled to some third field - the Heisenberg annihilators commute at all times. Otherwise, they have nonzero commutators at different times. We can interpret this second case physically by noting that virtual-particle loops of the second field's particle appear in the exact propagator for the first field. Roughly speaking, if you create a particle $A$ that can scatter/decay into virtual particles of type $B$ by some later time, then in creating particle $A$ there is some amplitude to indirectly create a particle $B$ as well, so the two particles' creation operators are not independent.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Refraction when the angle of incidence is equal to 0 degrees OK, I know that this can be proved mathematically from Snell's law, but I want to know its physical significance. I mean, why does light not refract at an angle of incidence of 0 degrees?
| Looked at using the Huygens-Fresnel principle: each point in the wavefront will arrive on the surface at the same time. Each point is a new source emitting with the new wavelength, but since they all arrived at once, their waves are all in step, so the direction of the wavefront stays the same.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the oscillation frequency of a buoyant cylinder? Suppose a cylinder sits upright in "dry water" (zero viscosity). The cylinder has half the density of the water, and we'll ignore the dynamics of the atmosphere.
If I push the cylinder down some past its equilibrium, the buoyant force pushes back up. If I slowly increase the force I'm using to push down on the cylinder, I find a Hooke's law relationship. (This is why I mentioned a cylinder instead of a sphere, but answering for a sphere would be fine.)
However, if I set the cylinder oscillating, I don't think I can use just this restoring force and the cylinder's mass to find an oscillation frequency. I need to account for some sort of "effective mass" of the water. i.e. the oscillating cylinder puts kinetic energy into the water.
How would you estimate the appropriate mass of water to use? And how much power would the water carry away from the cylinder (e.g. by surface gravity waves) if it were started oscillating and set free? Let's also ignore surface tension for simplicity.
| Let $\rho_{fl}$ and $\rho$ denote densities of the fluid and cylinder; $g$ the gravity field; $H$ the height of the cylinder and $A$ the area of the cylinder.
Suppose the cylinder is displaced to a height $h$ above its equilibrium position. There is then a net downward force, equal in magnitude to the buoyancy lost from the emerged slice of height $h$:
$$|F|=\rho_{fl}Ahg$$
Newton predicts
$$\rho AH\frac{d^2h}{dt^2}=-\rho_{fl}Agh$$
Which gives
$$T=2\pi\sqrt{\frac{\rho H}{\rho_{fl}g}}$$
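Numerically, for the half-density cylinder of the question (height chosen arbitrarily at 20 cm, and ignoring the water's effective mass, exactly as this answer does):

```python
import math

def oscillation_period(rho_cyl, rho_fluid, H, g=9.81):
    """T = 2*pi*sqrt(rho * H / (rho_fluid * g)) for a floating upright cylinder."""
    return 2.0 * math.pi * math.sqrt(rho_cyl * H / (rho_fluid * g))

# Cylinder of half the water's density, 20 cm tall (illustrative height).
T = oscillation_period(500.0, 1000.0, 0.20)
```

With these numbers $T \approx 0.63\,$s. The question's point stands: the water's kinetic energy would lengthen this period, and this simple formula ignores that.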
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/359929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
} |
Why are work and energy considered different in physics when the units are the same? There is a question that explains work and energy on Stack Exchange, but I did not see this aspect of my problem. Please just point me to my error and to the correct answer that I missed.
What I am asking is this: why, in physics, does having the same units not necessarily mean you have the same thing? Let me explain.
Please let me use m for meter, sec for second, and kg for kilogram as the units, for brevity's sake.
The units for work are kg * m/sec^2 * m. The units for kinetic energy are kg * (m/sec)^2. They look the same to me. I need them to be the same so I can figure out the principle of least action. Comments are welcome.
|
Why, in physics, does having the same units not necessarily mean you have the same thing?
Consider these two different things:
*
*The amount of floor space in your house
*The fuel economy of your car
Both are measured in square-meters.
References
Why can fuel economy be measured in square meters?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 5
} |
Location of lens having effective focal length We know, according to Gullstrand's equation, that the effective focal length of two lenses separated by a distance $d$ is given as $$\frac{1}{f_{eq}}=\frac{1}{f_1}+\frac{1}{f_2}-\frac{d}{f_1f_2},$$ but the equation doesn't say anything about the position of the lens having this effective focal length. How do I calculate that?
| If you were to replace the two lenses with a single thin lens, then the answer is tricky, because you can't just literally remove the two lenses from your optical table, put a lens of the effective focal length somewhere, and expect everything else to be the same.
What you can do is make an equivalent lens system: a box containing the effective lens at a given distance from the front $d_f$ and the back $d_b$ of the box. The length of this box is in general $d_f + d_b \neq d$, and so, when you replace your two lenses by this box, you would have to move all the optics you had after the pair of lenses. Note that if the two lenses are near a telescope arrangement, then this equivalent box would have near-infinite length.
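One concrete way to locate the box's planes is through the standard Gaussian-optics principal-plane formulas for two thin lenses. These are textbook results, not something stated in this thread, and sign conventions differ between books, so treat the sketch as illustrative (positive distances point to the right):

```python
def thick_system(f1, f2, d):
    """Effective focal length and principal-plane offsets for two thin lenses.

    Returns (f_eq, x_H, x_Hp): x_H locates the front principal plane
    measured from lens 1, x_Hp the back principal plane measured from
    lens 2 (standard Gaussian-optics results, one common sign convention).
    """
    f_eq = 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2))
    x_H = f_eq * d / f2        # front principal plane, from lens 1
    x_Hp = -f_eq * d / f1      # back principal plane, from lens 2
    return f_eq, x_H, x_Hp

# Two f = 10 cm lenses separated by 5 cm (illustrative values, in metres)
f_eq, x_H, x_Hp = thick_system(0.10, 0.10, 0.05)
```

For this pair $f_{eq} = 20/3\,$cm, with the back principal plane $10/3\,$cm to the left of the second lens, so the back focal distance is $f_{eq} + x_{H'} \approx 3.3\,$cm.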
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Simple question about Schrodinger equation (time independent) For a quantum mechanical description of a system (like a small molecule) we can write:
$$\langle\psi|\hat {H}|\psi\rangle = \overline E$$
Question:
Is that energy the same as the zero-kelvin energy obtained from statistical mechanics (using the $E_n$ energies and partition functions)?
| Statistical mechanics does not really apply to the kind of system your equation refers to: your equation is good for a pure quantum state $\psi$. In statistical mechanical terms, the system's microstate is exactly and fully specified by $\psi$, and you can't really therefore talk about a system's temperature when it is in a pure state.
Recall that pure states can be non-ground states, or superpositions containing non-ground states. So there is, in general, no definable relationship with a "$0{\rm K}$ energy".
A classical mixture of pure quantum states can describe a statistical mechanical system, can therefore be assigned a temperature, and is described by a density matrix $\rho$. In such a case, your equation is replaced by:
$$\langle E\rangle = \mathrm{tr}(\rho\,\hat{H})$$
Perhaps a kind of converse to your question makes more sense and yields a more concrete answer: as we cool a statistical mechanical system down, the system becomes "forced" into a state where all particles / ensemble members are in the ground state. If, further, the ground state is nondegenerate (i.e. there is only one ground quantum energy eigenstate), then the $0{\rm K}$ state is a multiparticle pure quantum state, and this state is the ground energy eigenstate. User Joshphysics analyzes this behavior in more detail in his answer here.
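The distinction between the two formulas can be made concrete with a hypothetical two-level system (the level energies and the inverse temperature below are arbitrary choices for illustration): a pure state uses $\langle\psi|\hat H|\psi\rangle$, while a thermal mixture uses $\mathrm{tr}(\rho\,\hat H)$, and the two generally differ.

```python
import numpy as np

# Sketch: expectation of H via the density-matrix trace formula above,
# for a hypothetical two-level system.
H = np.diag([0.0, 1.0])                  # energies of the two levels

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # a pure superposition state
rho_pure = np.outer(psi, psi.conj())     # rho = |psi><psi|
E_pure = np.trace(rho_pure @ H).real     # equals <psi|H|psi> = 0.5

beta = 1.0                               # thermal mixture at T = 1/beta
p = np.exp(-beta * np.diag(H))
rho_mix = np.diag(p / p.sum())           # classical Boltzmann mixture
E_mix = np.trace(rho_mix @ H).real       # Boltzmann-weighted average

print(E_pure, E_mix)
```

Only the mixed state carries a temperature; the pure superposition does not, even though both yield a well-defined mean energy.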
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If I say time is the fourth dimension am I wrong? As far as I know the prevailing view is that time is the fourth dimension, but I've read there is also a spatial fourth dimension and even higher spatial dimensions after that so I hesitate to say that time is the fourth dimension. So, if I say time is the fourth dimension am I wrong?
| The ordering of the dimensions is really arbitrary.
What is true is that in classical mechanics time is just another label used to distinguish events, and special relativity reinforces this point of view because there are reference-frame transformations that "mix" time with space (the Lorentz transformations).
In general relativity more reference-frame transformations are allowed, and time becomes even less distinguishable (only locally can you identify the time "direction"); this leads to the idea of curved spacetime. In this framework it is possible to interpret the curvature of spacetime as if we lived on a curved hypersurface embedded in a higher-dimensional space.
So the fourth space dimension in this framework is fictitious: imagine living in 2D on the surface of a sphere — you are 2D, but the sphere is embedded in a 3D space.
Finally, if you don't consider any more complicated theory, you live in a 4D spacetime: 3 space dimensions and 1 time dimension.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Understanding the equation for Potential Energy I am having a hard time understanding why Potential Energy can be calculated in the following way:
$$ \Delta U = U_f - U_i = -\int_{x_i}^{x_f} F_x dx $$
In particular, I don't understand why there is an integral in that equation — that is, why the force is integrated over the displacement.
| Let us take for example the elastic potential energy:
$$U(x)=k\frac{x^2}{2}$$
The force is given by:
$$F(x)=-\frac{\partial}{\partial x}U(x)=-kx$$
As Señor O said in the comments, this is true only for conservative force fields.
If we want to get the potential from the force, we have to integrate the force:
$$U(x)=-\int F(x)dx=k\frac{x^2}{2}$$
$$\Delta U=-\int_{x_{1}}^{x_{2}} F(x)dx=k\left.\frac{x^{2}}{2}\right\rvert_{x_{1}}^{x_{2}}=U_{2}-U_{1}$$
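The last step can be checked numerically. A minimal sketch (the values of $k$, $x_1$, $x_2$ below are arbitrary choices for illustration): integrate the spring force $F(x)=-kx$ on a grid and compare with the closed form $k x^2/2$ evaluated between the limits.

```python
import numpy as np

# Sketch: numerically evaluate Delta U = -∫ F dx for the spring force
# F(x) = -k x and check it against k x^2 / 2 between the limits.
k, x1, x2 = 2.0, 0.5, 1.5
x = np.linspace(x1, x2, 10_001)
F = -k * x

# Trapezoidal rule for -∫ F dx (exact here, since F is linear in x):
dU_numeric = -np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(x))
dU_exact = 0.5 * k * (x2**2 - x1**2)   # k x^2/2 evaluated from x1 to x2
print(dU_numeric, dU_exact)            # both 2.0
```

The minus sign in the definition is what makes $\Delta U$ positive when you move against the restoring force.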
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/360888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is energy $E$ in Schrödinger equation an observable/ Can $E$ be measured? Take this quantum approach to estimate the mean energy of a molecule:
$$\langle\psi|H|\psi\rangle=\overline E$$
Question:
Is $E$ an observable? How can we compare it to an experimental value? I.e., how do we measure it experimentally, and what are the states involved (as energy is all about differences, there must be two states)?
Edit
This is not a question about how an observable is theoretically defined.
Any help?
| What you wrote is an expectation value, which means an average over all possible eigenvalues of the operator under analysis, weighted with the probability of each eigenvalue occurring in the state $|\psi\rangle$.
So, yes, $\hat H$ is an observable (we reserve this term for operators and the quantities they represent) and this means that its eigenvalues (the energy levels) can actually be measured by means of suitable experiments.
But $\bar E$ is not an observable, and in general each single result of your measurements might have nothing to do with that value.
How to reconcile the two things?
Actually, repeating the same measurement an incredibly large number $N$ of times (or on an incredibly large number of systems) prepared in the same initial conditions should provide you with a set of results whose average, weighted with the frequency at which each value occurs, will be closer to $\bar E$ the larger $N$ is (they coincide in the limit $N\rightarrow\infty$).
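This convergence is easy to simulate. A minimal sketch (the two eigenvalues and Born probabilities below are made up for illustration): draw $N$ single-shot "measurement" outcomes with their Born probabilities and watch the sample mean approach $\bar E$.

```python
import numpy as np

# Sketch: simulate N identical energy measurements on the same pure state.
# Each run returns one eigenvalue with its Born probability; the sample
# average approaches E-bar as N grows.
rng = np.random.default_rng(0)
levels = np.array([0.0, 1.0])     # hypothetical eigenvalues of H
probs = np.array([0.25, 0.75])    # |<E_n|psi>|^2 for the prepared state
E_bar = np.dot(probs, levels)     # = 0.75

samples = rng.choice(levels, size=100_000, p=probs)
print(samples.mean())             # close to 0.75; no single shot equals it
```

Note that no individual outcome is $0.75$ — each shot returns $0$ or $1$ — which is exactly the point made above about $\bar E$ not itself being an observable.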
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Turbulence Model on Unsteady Navier Stokes I am asking whether the unsteady (time-dependent) Navier-Stokes equations are able to predict flow turbulence accurately. I know that RANS (with different turbulence models like Spalart–Allmaras, k–ε and k–ω models...) is the most used method for simulating turbulence.
I'd appreciate a constructive response.
Thanks
| There is no correct turbulent solution of the Navier-Stokes equation; there are various approximations. The nonlinear term is approximated linearly with a dynamic viscosity, which is chosen so as to match field experiments. Alternatively, one introduces averaged values, but this results in more unknowns than equations, so the correlation function must be approximated. The situation is common for nonlinear partial differential equations: in the real plane, no solution accounting for the nonlinear term exists. I obtained a complex solution of the Navier-Stokes equation in the turbulent regime.
Brief summary of scientific direction: Using complex values of velocity and coordinates when solving nonlinear partial differential equations
Just as the quadratic equation has complex roots, nonlinear partial differential equations have complex solutions. It turns out that the complex solution is probabilistic: the physical meaning of the real part is the average value of the solution, and the imaginary part is the standard deviation. The nonlinear Navier-Stokes equation is reduced to an infinite system of ordinary differential equations of the first order. The complex coordinates of the equilibrium position describe the turbulent solution. Problems arise when converting the imaginary part of a complex solution into a real solution; the attached articles describe the solution to these problems, which differs for different types of roughness.
YAKUBOVSKIY, EG. "STUDY OF NAVIER-STOKES EQUATION SOLUTION I. THE GENERAL SOLUTION OF NONLINEAR ORDINARY DIFFERENTIAL EQUATION." EUROPEAN JOURNAL OF NATURAL HISTORY 3 (2016): 60-66. https://world-science.ru/pdf/2016/3/14.pdf
YAKUBOVSKIY, EG. "STUDY OF NAVIER-STOKES EQUATION SOLUTION II. THE USE OF LAMINAR SOLUTIONS." EUROPEAN JOURNAL OF NATURAL HISTORY 3 (2016): 67-83. https://world-science.ru/pdf/2016/3/15.pdf
YAKUBOVSKIY, E. G. "STUDY OF NAVIER–STOKES EQUATION SOLUTION III. THE PHYSICAL SENSE OF THE COMPLEX VELOCITY AND CONCLUSIONS." EUROPEAN JOURNAL OF NATURAL HISTORY 3 (2016): 84-87. https://www.world-science.ru/pdf/2016/3/16.pdf
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What is a compact symmetry transformation? I am taking a course on particle physics, but I am not familiar with a lot of mathematical terminology, so I don't properly understand what topology actually means. When I looked up the definition of a compact or non-compact symmetry transformation on the web, I kept running into terms from topology.
Is it possible to explain the concepts of compact and non-compact symmetry transformations in easier terms? Maybe an example of a static symmetry transformation would be helpful to me.
| Loosely speaking (to avoid topological terminology), compact transformations are expressed in terms of parameters which have finite range, v.g. the rotation angle $\theta$ so that the rotation
$$
R(\theta)=\left(\begin{array}{cc}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)\end{array}\right)\, , \qquad 0\le \theta <2\pi\, .
$$
The entries are always finite and bounded. The volume of the group, as measured by the integral $\int_0^{2\pi}\, d\theta$, is finite. In this sense the rotation group $SO(2)$ is compact. Likewise the group $SO(3)$ of rotations, with elements parametrized by 3 Euler angles, is also compact. Using the standard range of the Euler angles, the volume of the group $\int \sin\beta d\beta\, d\alpha\, d\gamma$ is finite.
Non-compact transformations are expressed in terms of parameters with infinite range, v.g. the rapidity $w=\hbox{arctanh}(\beta)\ $ in a Lorentz transformation
$$
\Lambda(w)=\left(\begin{array}{cc}
\cosh(w) & \sinh(w)\\
\sinh(w) &\cosh(w)\end{array}\right)\, , \qquad -\infty < w < \infty\, .
$$
The entries of the matrix can be arbitrarily large. The volume of the group, as measured by the integral $\int_{-\infty}^\infty\, dw$, is infinite. The group of Lorentz transformations is non-compact.
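A quick numerical check of the contrast drawn above (not from the answer itself, just an illustration): the rotation $R(\theta)$ is orthogonal with entries bounded by 1 for every $\theta$, while the boost $\Lambda(w)$ preserves the 1+1D Minkowski metric but its entries grow like $\cosh(w)$ without bound.

```python
import numpy as np

# Sketch: bounded SO(2) entries vs unbounded Lorentz-boost entries,
# with each matrix preserving its respective bilinear form.
def rot(theta):
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def boost(w):
    return np.array([[np.cosh(w), np.sinh(w)],
                     [np.sinh(w), np.cosh(w)]])

eta = np.diag([1.0, -1.0])  # 1+1D Minkowski metric

assert np.allclose(rot(1.2).T @ rot(1.2), np.eye(2))        # orthogonal
assert np.allclose(boost(1.2).T @ eta @ boost(1.2), eta)    # preserves eta
print(np.abs(rot(100.0)).max(), boost(100.0)[0, 0])         # ~1 vs ~1e43
```

The boundedness (or not) of the matrix entries tracks exactly the finite (or infinite) parameter range described above.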
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the total energy of the Universe? The Law of Conservation of Energy states that:
Energy can't be created nor can be destroyed. It only changes from one form to another.
According to this the total energy in a closed system never changes. I was wondering what this constant energy is when the closed system is the whole Universe.
Is there any estimate of what the total energy of the Universe is? If there is, kindly give a reference for it.
| Regarding the conservation of energy in the Universe, the questions linked by Qmechanic (Total energy of the Universe, Is the law of conservation of energy still valid?, Is the total energy of the universe constant?, Conservation of Energy in General Relativity) have answers that already address this in some detail. Regarding the total energy content of the Universe, that's relatively straightforward. The Universe is observed to have flat geometry, or very nearly so, which means it must have near-critical energy density. The critical density is simply $3H^2/8\pi G$, and can be derived from the Friedmann equations. To give a number with dimensions:
$$\rho_{\rm crit} = 1.8788\times 10^{-26}\,h^2\,{\rm kg}\,{\rm m}^{-3}$$
You should replace $h^2$ with your preferred value for the Hubble constant (at the time of interest) in units of $100\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$. At the present day $H_0\sim 70\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$, so $h\sim0.7$.
The volume of the Universe is a bit of a slippery concept (e.g. this answer of mine), so I'll just leave my answer here with the density, and you can multiply by whatever volume you're interested in to arrive at a total energy content for that volume. Note that the critical density should be interpreted as a density averaged over very large scales (think of a volume enclosing many clusters of galaxies). Of course the density locally may be very different.
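The quoted coefficient is easy to reproduce from $3H^2/8\pi G$ directly; a minimal sketch (constants rounded to four figures):

```python
import math

# Sketch: evaluate 3 H^2 / (8 pi G) for h = 1, i.e. H = 100 km/s/Mpc,
# and compare with the quoted 1.8788e-26 h^2 kg/m^3.
G = 6.674e-11      # m^3 kg^-1 s^-2
Mpc = 3.0857e22    # m
H = 100e3 / Mpc    # s^-1 for h = 1

rho_crit = 3 * H**2 / (8 * math.pi * G)
print(rho_crit)                 # ~1.879e-26 kg/m^3
print(rho_crit * 0.7**2)        # ~9.2e-27 kg/m^3 for h ~ 0.7 today
```

With $h\sim0.7$ this comes out to roughly $9\times10^{-27}\,{\rm kg\,m^{-3}}$ — a few hydrogen atoms per cubic metre, averaged over very large scales.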
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/361875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Does light heat transparent mediums? I was thinking that if a photon is travelling in the vacuum at $c$, then enters a transparent medium so its speed becomes $0.8c$, the photon has lost energy. What is that energy transformed into? Is the surface being heated?
| No, light does not impart heat to the transparent medium. As far as the decrease in the speed of light is concerned, the energy of a light wave depends on the frequency of the light, not on its speed: the reduction in speed is accompanied by a matching reduction in wavelength, so the frequency is unchanged (cf. Einstein's photoelectric equation).
Note: light here means visible light, i.e. around 400-800 nm wavelength.
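Numerically, the bookkeeping looks like this (a minimal sketch; the 500 nm wavelength and $n=1.5$ index are arbitrary illustrative values): frequency is fixed at the boundary, so the photon energy $E = hf$ is the same inside and outside, while speed and wavelength both shrink by the factor $n$.

```python
# Sketch: entering a medium of index n changes the speed and wavelength
# of light but not its frequency, so E = h f is unchanged.
h, c = 6.626e-34, 2.998e8   # J s, m/s
lam_vac, n = 500e-9, 1.5

f = c / lam_vac             # frequency, fixed at the boundary
v = c / n                   # phase speed in the medium
lam_med = v / f             # = lam_vac / n: wavelength shrinks with speed

E_vac, E_med = h * f, h * f # photon energy depends on f only
print(lam_med, E_vac == E_med)
```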
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Conformal field theory using Mathematica Can someone suggest some online Mathematica codes/packages that deal with performing standard CFT computations, like OPE expansions, computing different conformal blocks, etc.?
| Conformal Bootstrap
A page by an associate professor at Brandeis University has provided a notebook to implement the Virasoro algebra in Mathematica, and compute conformal blocks. It can also teach Mathematica other operator algebras.
This paper also provides ancillary files which include notebooks that can be used to compute conformal blocks using an alternate method built upon in the paper.
Operator Product Expansion
A Mathematica package provides a means to compute crossing matrices and OPEs for special cases involving degenerate operators.
Finally, a package called Lambda has been designed to calculate $\lambda$-brackets, which can be shown to be equivalent to evaluating OPEs for $d=2$ CFTs.
A bunch of files here and elsewhere on the site of the Mathematica summer school provides exercises and Mathematica notebooks which involve applying the conformal bootstrap. This includes the decomposition of conformal blocks and working in $d=4-\epsilon$ dimensions.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Does the temperature coefficient of a material depend on temperature? In my textbook, a relationship is plotted between the resistance of the material and the temperature, and the temperature coefficient is defined as the slope of that graph divided by an arbitrary resistance $R_1$ on the graph. Does that mean that the coefficient varies depending on the temperature I choose to divide the slope by?
If so, how does it stand as a valid reference to the material’s resistance growth rate with temperature?
If not, then where did I go wrong with this train of thought?
| The temperature coefficient is often defined as $\alpha = \dfrac{R_{\rm T}-R_0}{R_0 \, T}$
where the temperatures are in degree Celsius and the reference resistance $R_0$ is measured at $0^\circ \rm C$.
Which gives the equation $R_{\rm T} = R_0(1 + \alpha\, T)$ ie a linear relationship between resistance and temperature.
As you have pointed out, the relationship is not linear, and so a better relationship like $R_{\rm T} = R_0(1 + \alpha\, T + \beta \, T^2 + \dots)$ can be used for large temperature variations.
This implies that other constants like $\beta$ must be given as well as the temperature coefficient of resistance $\alpha$ as is shown in this reference for a platinum resistance thermometer.
So it does depend on the accuracy to which you are working and the range of temperatures.
If you are given just $\alpha$ with a stated reference temperature, then the implication is that as the temperature deviation from the reference temperature increases, there will be a corresponding reduction in the accuracy of the resistance or temperature obtained from the first-order equation.
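The original question — does the coefficient depend on which point you divide the slope by? — can be answered with a short numerical experiment (a sketch; the quadratic $R(T)$ and its constants below are made-up, platinum-like values, not taken from any datasheet):

```python
# Sketch: for a quadratic R(T), the first-order coefficient alpha you
# extract depends on which reference temperature you divide by.
R0, a, b = 100.0, 4.0e-3, -6.0e-7   # hypothetical constants

def R(T):
    return R0 * (1 + a * T + b * T**2)

def alpha(T_ref, dT=1e-3):
    slope = (R(T_ref + dT) - R(T_ref - dT)) / (2 * dT)  # dR/dT at T_ref
    return slope / R(T_ref)                             # slope / R(T_ref)

print(alpha(0.0), alpha(100.0))   # noticeably different values
```

So yes: unless $R(T)$ is exactly linear, $\alpha$ is only meaningful together with its stated reference temperature.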
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Work done by the piston versus work done by the surrounding Suppose a massless, frictionless piston assembly initially has a higher pressure inside than the external (atmospheric) pressure, and the piston is pinned so that it cannot move. Once the pin is removed, the gas would expand until the pressure inside becomes atmospheric. During the process, the work done by the gas inside the piston is
$$W_{\text{piston}}=\int_{V_1}^{V_2} P_{\text{gas}}\cdot \mathrm{d}V$$
and the work done by the surroundings is,
$$W_{\text{ext}}=\int_{V_1}^{V_2}P_{\text{ext}}\cdot \mathrm{d}V = P_{\text{ext}} \left(V_2 - V_1 \right)\,.$$
We can pull out the external pressure from the integral because it is constant as an atmospheric pressure.
My question is: the work done by the piston is not the same as the work done by the surroundings, because $\mathrm{d}V$ is the same but ${P}_{\text{gas}}$ is greater than ${P}_{\text{ext}}$ during the process, so the work done by the piston is larger. Shouldn't they be the same?
| The pressure on both sides of the unpinned weightless piston is always the same. In case of any difference, the piston would quickly move, pressurizing the external gas locally above atmospheric pressure. Your setup is not static and can only be solved by aerodynamics. However, in the limited scope of your question, the answer is that the pressure is always the same on both sides.
In reality, the piston is not weightless, so the initial pressure difference would be compensated by the inertial force of the piston's acceleration until the dynamic pressure is the same on both sides.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Use of negative frequency for the sake of simplifying mathematics? How can we use the idea of negative frequency for the sake of simplifying mathematics if negative frequency does not exist (to my knowledge) in nature? For example, when plotting the spectra of a Fourier series.
|
negative frequency does not exist
Depends on how you define frequency. If defining such a thing as negative frequency makes the math easier (it does), why not do it? It's probably less objectionable than defining an imaginary anything, and we do that all the time.
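A concrete example of how the definition earns its keep (a sketch, not from the answer): a real cosine is $\cos(\omega t) = (e^{i\omega t} + e^{-i\omega t})/2$, so its discrete Fourier transform naturally puts equal weight on a positive- and a negative-frequency bin.

```python
import numpy as np

# Sketch: the FFT of a real signal lives on positive AND negative
# frequency bins; a real 5 Hz cosine shows up at +5 Hz and -5 Hz.
fs, N = 64, 64
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 5 * t)

X = np.fft.fft(x)
freqs = np.fft.fftfreq(N, d=1 / fs)   # includes negative frequencies
peaks = freqs[np.abs(X) > 1.0]        # bins carrying the signal
print(sorted(peaks))                  # [-5.0, 5.0]
```

For a real signal the spectrum is conjugate-symmetric, so the negative-frequency half carries no new information — it is pure mathematical convenience, exactly as the answer says.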
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How do we know that certain quantum effects are random? I was looking at a website that claims to generate random numbers from observation of quantum effects. This led me to question how we know that the numbers are truly random.
When we observe a probability wave and it collapses in one place into a particle, how do we know that the location of the particle is really random?
Do we have any evidence of the randomness, or is it just that no one can predict the location right now?
| There are two main views. The first view relates to the Copenhagen interpretation of Quantum Mechanics. According to this interpretation, a particle does not have a specific path, but travels like a wave. Upon detection, the wave function collapses and the particle appears at a random point on the screen (according to the probability defined by the wave function).
The second view relates to the "Pilot Waves" theory. It states that a particle has a definite trajectory that ends up with a dot on the screen. However, the trajectory depends on the emission parameters, as the particle is emitted by the source at a certain angle, with a certain phase, etc. These parameters are random, so the result is exactly the same.
In the Copenhagen interpretation, the trajectory is unknown because a definite trajectory does not exist. In the Pilot Waves theory, the trajectory is definite but cannot be known, because it depends on the random parameters of the emission.
In other words, whether we don't know the trajectory because it doesn't exist, or because it exists but can never be known, the result is exactly the same. Whether the randomness is at the end of the path or at the beginning, the outcome is unpredictable anyway.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
EMF generated by a rotating rod I am confused by the following example in my textbook.
Question:-
A metallic rod of length $l$ is rotated with a angular velocity $\omega$, with one end hinged at the centre and the other end at the circumference of a circular metallic ring of radius $l$, about an axis passing through the centre and perpendicular to the plane of the ring. A constant and uniform magnetic field $B$ is present everywhere and is parallel to the axis. What is the emf between the centre and the metallic ring ?
Solution :-
The area of the sector traced by the rod in time $t$ is $\dfrac12 l^2 \theta$, where $\theta = \omega t$, so the flux is $\phi_B = \dfrac12 B l^2 \omega t$ and therefore $\varepsilon = \dfrac{\mathrm d\phi_B}{\mathrm dt} = \dfrac{B\omega l^2}{2}$.
The math in the solution is not hard to understand. I have trouble understanding the physics.
I know $\varepsilon = \dfrac{\mathrm d\ \phi_B}{\mathrm d \ t} = \dfrac{\mathrm d\ (\bf B\cdot A )}{\mathrm d \ t}$ and in the question it is given that ${\bf B}$ is constant and the angle between $\bf A$ and $\bf B$ is $0$. So we are just left with $\varepsilon = B\dfrac{\mathrm d \ A}{\mathrm d \ t}$.
Why is the area taken in the solution the area of the sector traced by the rod in time $t$?
Since the magnetic flux is passing through the whole metallic ring, shouldn't the area be the area of the metallic ring? That area is constant, so the emf should be zero.
| Sorry for my poor English!
The above explanations seem very clear to me. It is necessary to close the circuit to make a measurement.
We can, however, give a more formal formulation that takes up the idea of field lines cut by the circuit.
The emf is defined as the circulation of the force per unit of charge, here the magnetic force. When the circuit $CD$ moves during $dt$ the emf is ${{e}_{ind}}=\int\limits_{C}^{D}{(\overrightarrow{v}\wedge \overrightarrow{B})\cdot \overrightarrow{dl}}=\int\limits_{C}^{D}{(\frac{\overrightarrow{\delta r}}{dt}\wedge \overrightarrow{B})\cdot \overrightarrow{dl}}=\frac{1}{dt}\int\limits_{C}^{D}{(\overrightarrow{\delta r}\wedge \overrightarrow{B})\cdot \overrightarrow{dl}}=-\frac{1}{dt}\int\limits_{C}^{D}{\overrightarrow{B}\cdot (\overrightarrow{\delta r}\wedge \overrightarrow{dl})}$
We define the algebraic surface cut by the circuit $\overrightarrow{d{{S}_{c}}}=(\overrightarrow{\delta r}\wedge \overrightarrow{dl})$ and the "cut flux" (in French, "flux coupé") $\delta {{\varphi }_{c}}=\int\limits_{C}^{D}{\overrightarrow{B}\cdot (\overrightarrow{\delta r}\wedge \overrightarrow{dl})}$ : the algebraic flux of the magnetic field across the surface crossed by the circuit.
Faraday's law is then ${{e}_{ind}}=-\frac{\delta {{\varphi }_{c}}}{dt}$ and it only leads to Faraday's usual law ${{e}_{ind}}=-\frac{d\Phi }{dt}$ for a closed circuit. In this case, indeed, the magnetic field flux being conservative, $d\Phi =\delta {{\varphi }_{c}}$
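The motional-emf line integral above is easy to check numerically for the rotating rod (a sketch; the values of $B$, $\omega$, $l$ are arbitrary): along the rod, $|\vec v \times \vec B| = B\omega r$, so $\varepsilon = \int_0^l B\omega r\,\mathrm{d}r = B\omega l^2/2$.

```python
import numpy as np

# Sketch: integrate emf = ∫_0^l (v x B)·dl = ∫_0^l B*omega*r dr along
# the rod and compare with the closed form B*omega*l^2/2.
B, omega, l = 2.0, 3.0, 0.5
r = np.linspace(0.0, l, 1001)
integrand = B * omega * r                 # |v x B| at radius r

# Trapezoidal rule (exact here, since the integrand is linear in r):
emf_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
emf_exact = B * omega * l**2 / 2
print(emf_numeric, emf_exact)             # both 0.75
```

This reproduces the textbook result without ever invoking the area of the full ring, which is the point of the question.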
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/362975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Is there any quantum mechanics process in a prism? Red light has a longer wavelength than, say, blue light, and so bends the least, as it travels faster in the same medium. My question is: since energy is not stated in the above explanation, is QM required for a prism to work?
| You are right! Classical electromagnetic theory is sufficient to describe how light refracts at an interface (Snell's law). Then, if you stipulate that refractive index is frequency-dependent, you get the light-dispersing behavior of a prism. Of course, if you want to know why there is dispersion in glass, you have to go a little deeper, possibly explaining it with a classical electron oscillator model of the polarizability. But depending on how accurate you want to be, you might end up going down a road that ends with QM.
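The "Snell's law plus frequency-dependent index" recipe can be sketched in a few lines (the Cauchy form $n(\lambda) = A + B/\lambda^2$ and the coefficient values below are assumptions — rough, crown-glass-like numbers, not measured data):

```python
import numpy as np

# Sketch: Snell's law with a wavelength-dependent index is enough to
# disperse light; no quantum mechanics appears anywhere in the formula.
A, B = 1.50, 4.0e3          # assumed Cauchy coefficients, B in nm^2

def n(lam_nm):
    return A + B / lam_nm**2

theta_i = np.radians(45.0)  # angle of incidence from air

def theta_t(lam_nm):        # refraction angle inside the glass
    return np.arcsin(np.sin(theta_i) / n(lam_nm))

red, blue = theta_t(700.0), theta_t(400.0)
print(np.degrees(red), np.degrees(blue))
assert blue < red           # blue refracts to a smaller angle: it bends more
```

Why $n$ rises toward the blue is where the deeper (oscillator-model, and ultimately QM) story comes in, as the answer says.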
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
In physics sometimes we find energy that is negative. What does the negative sign indicate? Sometimes we see energy that is negative, for example, the energy of an electron in orbit. We know energy is something that can do something. In this view, does negative energy mean something opposite in some way?
| Absolute energies have no observable effect, it is only changes in energy that have physical meaning. How to interpret a negative energy depends on the context.
In the example you give of an electron bound to an atom, the energy is measured with respect to the electron being free from the atom and infinitely far away. So the binding energy being negative means that the bound state corresponds to a lower energy than the isolated state, and thus you would need to supply the bound state with energy in order to isolate the electron.
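Hydrogen makes this concrete. A minimal sketch (using the Bohr formula $E_n = -13.6\,\mathrm{eV}/n^2$, which is an assumption beyond the answer's text): the bound levels sit below the $E=0$ reference of a free electron at rest at infinity, so freeing the ground-state electron costs $+13.6$ eV.

```python
# Sketch: Bohr energy levels of hydrogen. Negative E_n just means the
# bound electron lies below the E = 0 free-electron reference.
RY = 13.6  # eV

def E(n):
    return -RY / n**2

ionization_energy = 0.0 - E(1)  # energy you must supply to free the electron
print(E(1), E(2), ionization_energy)   # -13.6 -3.4 13.6
```

Only the differences (here, $0 - E_1$) are physical; shifting every level by a constant would change nothing observable.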
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Does providing more heat to a pan of boiling water actually make it hotter? Sometimes my wife has a pan of water 'boiling furiously'. Is the extra heat (wasted in my opinion) actually making any difference, apart from reducing the amount of water in the pan - which could be done by pouring some away?
| Since most of the water is being cooled by the environment, water at 100 °C will only be at the bottom of the pan. Increasing the heat will actually make a difference, since the bottom water will boil faster and transfer more heat to the cooler water at the top before being cooled by the ambient temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 4
} |
Why is a centered parallelogram not a 2D Bravais cell, but a centered rectangle is? We had a disagreement regarding 2D Bravais lattices during a lecture. The lecturer told us that a centered rectangle forms a Bravais lattice in 2D, but a centered parallelogram isn't:
We couldn't come up with a satisfying definition of a Bravais lattice though. It appears to be a lattice with certain periodic/translation/etc. symmetries; but intuitively, putting a dot in the center of a rectangle has the same symmetry-wise effect as putting one in a parallelogram. Why the distinction, then?
So, our questions are:
*
*Does a centered rectangle form a Bravais cell? (Wikipedia lists it as one.)
*Does a centered parallelogram not? (Wikipedia doesn't list any.)
*Why, and/or why not?
Edit: The question (Is the centered parallelogram a Bravais-cell?) was in a test we discussed, so an exact answer to this is encouraged.
| I think this is mainly a matter of convention. If you take the monoclinic 2D lattice then you certainly can draw a centred unit cell on it:
(image from Wikipedia)
but there is nothing to be gained by doing so because the primitive cell is just as useful.
As a general guide we tend to use a primitive cell as our first preference, but an exception is made where a rectangular cell can be used instead. Rectangular cells make the geometry a lot simpler, so we choose them even when the resulting cell isn't a primitive one. That's why we choose a centred (non-primitive) cell for the orthorhombic lattice.
But when it comes to the monoclinic lattice choosing a centred cell offers no advantage since it doesn't give us a cell with right angles. That's why for the monoclinic lattice we stick with the primitive cell.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does relativistic speed increase gravitational attraction? If something is moving past me at a relativistic speed, does its gravitational force on me increase as its speed increases? That is, not as if its speed is changing, but if multiple objects of the same rest mass go past me along the same path, is the gravitational attraction at the closest point the same for each, or is it higher the faster the object is moving?
As some background, I was recently surprised to learn that I should discard the idea of "relativistic mass", so I've been trying to work on that. Please forgive me for mixing Newtonian physics with relativity, but I'm trying to get a high-level understanding before going into the math too much. I understand now that instead of $F=m_{relativistic}a$ I should use $F=\frac{d}{dt}(\gamma m_0v)$, but what about $F=\frac{Gm_1m_2}{r^2}$? Is that still roughly as it is, or do I need to throw a factor of $\gamma$ in there, or something else entirely?
Yes, I know this is talking about acceleration and so it doesn't really fit into special relativity, and lots of other disclaimers, but I'm hoping for a simple answer to get a general understanding.
| Yes. Gravitational attraction in general relativity is based on an object's energy, and an object's energy is greater when its speed is greater.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/363913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What are the boundary conditions for the electron in a hydrogen atom? From what I understand, the wave equation for an electron can be constructed using the "particle in a box" model in three dimensions. However, what would be the boundary conditions in this case? In other words, what are the potential energy barriers?
| Obviously the potential varies smoothly, so there are no "barriers" so to speak (as far as I'm aware, there are no discontinuous potentials in reality).
Examples of boundary conditions in the case of the hydrogen atom would be that the reduced radial wavefunction $u(r) = rR(r)$ should go to zero at the origin and at infinity, and that the angular wavefunction should be periodic with period $2 \pi$.
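These boundary conditions can be checked on an explicit solution. A sketch (the ground-state form $u(r) = 2re^{-r}$ in atomic units is an assumption beyond the answer's text): it vanishes at the origin, decays at infinity, and is normalized so $\int_0^\infty u^2\,\mathrm{d}r = 1$.

```python
import numpy as np

# Sketch: hydrogen ground state in atomic units, reduced radial
# wavefunction u(r) = 2 r exp(-r). Check u(0) = 0, u -> 0 at infinity,
# and ∫ u^2 dr = 1 (trapezoidal rule; the tail beyond r = 40 is ~e^-80).
r = np.linspace(0.0, 40.0, 400_001)
u = 2 * r * np.exp(-r)

norm = np.sum(0.5 * (u[1:]**2 + u[:-1]**2) * np.diff(r))
print(u[0], u[-1], norm)   # 0.0, ~0, ~1.0
```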
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Rotations of eigenstates of $S_z$ I have a question regarding the rotation of spinors in a spin-1/2 system.
We have a Spin generator $\hat{S}$ for rotations of spinors.
A rotation around the axis $\vec{n}$ with the angle $\phi$
is generated by the operator:
$$
D_{\vec{n}}(\phi) = \exp(-i\phi \hat{S}\cdot \vec{n})
$$
This operator can also be written, e.g. for a rotation about $z$, as:
$$D_z(\phi) = \cos\left(\frac{\phi}{2}\right) - i \sigma_z \sin\left(\frac{\phi}{2}\right)
$$
Here, $\hat{S}_i = \sigma_i/2$ and $\sigma_i$ are the pauli matrices.
$\sigma_1 = \begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}$, $\sigma_2 = \begin{bmatrix}0&-i\\ i&0\end{bmatrix}$ and $\sigma_3 = \begin{bmatrix} 1&0\\ 0&-1\end{bmatrix}$
Then we have given two states with a spin into the direction of the z-axis:
$\vert S_z= + \frac{1}{2} \rangle = \begin{bmatrix}1\\ 0\end{bmatrix} = \vert {\uparrow}\rangle$
and
$\vert S_z = - \frac{1}{2} \rangle = \begin{bmatrix}0\\ 1\end{bmatrix} = \vert {\downarrow}\rangle$
Now my question is:
With which rotation $D_{\vec{n}}(\phi)$ can the eigenstate $\vert S_x = +\frac{1}{2}\rangle$ be obtained from $\vert\uparrow\rangle$?
How can I calculate that?
| When you rotate spin operators i.e.
$$D_{\vec{n}}(\phi) \cdot (\hat{S} \cdot \vec{m}) \cdot D^{\dagger}_{\vec{n}}(\phi),$$
it is the same as if you rotated (right-handedly) the vector $\vec{m}$ around the vector $\vec{n}$ by the angle $\phi$. This is a consequence of the algebraic properties of the spin operators (they span the $su(2)$ Lie algebra). Following this analogy, $\hat{S}_{x,y,z}$ can be pictured as three orthogonal unit vectors (versors) of 3D Euclidean space. If you rotate the versor $e_{x}$ by the angle $\phi = -\pi/2$ around $e_y$ you will get $e_z$. So, in terms of spin operators:
$$\hat{S}_z = e^{i\frac{\pi}{2}\hat{S}_y} \cdot \hat{S}_x \cdot e^{-i\frac{\pi}{2}\hat{S}_y}.$$
Now your initial state is an eigenstate of $\hat{S}_z$ i.e.
$$\hat{S}_z |s_z=+1/2\rangle = \frac{1}{2} |s_z=+1/2\rangle.$$
If you use previous relation you can write
$$\hat{S}_x e^{-i\frac{\pi}{2}\hat{S}_y}|s_z=+1/2\rangle = \frac{1}{2} e^{-i\frac{\pi}{2}\hat{S}_y}|s_z=+1/2\rangle,$$
and according to definition of eigenstate
$$|s_x=+1/2\rangle := e^{-i\frac{\pi}{2}\hat{S}_y}|s_z=+1/2\rangle$$
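As a quick numerical sanity check (a sketch, not part of the original answer), one can build the rotation operator from the closed form $D(\phi)=\cos(\phi/2)\,I - i\sin(\phi/2)\,\sigma$ given in the question and apply it to $|{\uparrow}\rangle$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def D(sigma, phi):
    # exp(-i*phi*sigma/2) = cos(phi/2) I - i sin(phi/2) sigma, since sigma^2 = I
    return np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * sigma

up = np.array([1, 0], dtype=complex)

# e^{-i (pi/2) S_y} |up>, with S_y = sigma_y / 2
psi = D(sy, np.pi / 2) @ up
print(psi)             # (1, 1)/sqrt(2), i.e. |S_x = +1/2>
print((sx / 2) @ psi)  # equals psi/2: eigenvector of S_x with eigenvalue +1/2
```

The output reproduces $|S_x=+\frac{1}{2}\rangle = \frac{1}{\sqrt{2}}(|{\uparrow}\rangle + |{\downarrow}\rangle)$.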
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If fluid flows faster through a narrower pipe, why do hourglasses work? I'm essentially describing a fallacy due to a flaw in my understanding, and I'm trying to understand what the flaw is.
I know that if a pipe narrows, the fluid moving through it will move faster to preserve the same volumetric flow rate.
But, by that logic, an hourglass should not work, nor any type of funnel, nor should drains ever clog, because fluids should just flow faster through the narrower hole and ultimately pass through the opening in the same amount of time as if it were a wider opening.
What is the flaw in my reasoning?
| The flaw in your reasoning is that you're presuming that the fluid is being forced to flow faster.
Yes, if you have a situation where you want the same volume of fluid to flow through a thinner pipe, the fluid must be made to "flow faster". However, to say that the fluid "will move faster" somehow implies that the fluid will just do that without any particular reason to do it. If that's what you think, you're under a misapprehension.
Short answer: fluids should not just "flow faster through the narrower" opening of their own accord. This is a misunderstanding.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
The twin paradox and general relativity The twin paradox in special relativity has been discussed over and over again. Send a twin on a spaceship out to someplace or another accelerating at Earth gravity, then have it go through a series of decelerations and accelerations -- all at 1g -- so it returns to Earth, where the other twin is waiting. And, depending on how long the trip was, the Earth-bound twin is, say, seventy years old while the spaceship twin has aged just a few years.
Within special relativity, it all makes sense. I have done the calculation etc. But what about general relativity? The fundamental observation of GR is, as I understand it, that all accelerating frames with a given acceleration are equivalent. (And I'm betting that my understanding of exactly what "equivalent" means is the answer to my question. But, proceeding ... ) So, the twin on the Earth experiences an acceleration of 1g, as does the twin on the spaceship. Why are their respective frames not equivalent, and they age differently? In a related question, what if we had triplets instead of twins and the third triplet spent the whole time weightless (ignoring health effects) in a space station orbiting the Earth? How would he age compared to his siblings?
|
bob.sacamento asked: "But what about general relativity? "
In general relativity the twin that stayed on the planet can be younger than the twin travelling up and down again, thanks to the principle of maximized proper time.
bob.sacamento asked: "In a related question, what if we had triplets instead of twins and the third triplet spent the whole time weightless (ignoring health effects) in a space station orbiting the Earth? How would he age compared to his siblings?"
If we place one triplet (green) stationary at $\rm r_0=2 \ r_s$, launch another into orbit (red) and the third one (blue) is shot up with the required velocity to have him back at the same event the orbiting triplet returns, the blue triplet will have the longest proper time $(\tau=39.829 \rm \ GM/c^3)$, the one who stayed at home is in the middle $(\tau=8 \pi \sqrt{2}=35.543 \rm \ GM/c^3)$ and the one in orbit will be the youngest $(\tau=8 \pi = 25.133 \rm \ GM/c^3)$.
In this example the launch velocities required are $\rm v=c/\sqrt{2}$ for the red one in a circular orbit and $\rm v=0.574 \ c$ for the blue one going up and down.
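The green and red numbers can be reproduced from the standard Schwarzschild relations for a hovering observer ($d\tau/dt=\sqrt{1-r_s/r}$) and a circular orbit ($\Omega=\sqrt{M/r^3}$ in coordinate time, $d\tau/dt=\sqrt{1-3M/r}$). This is a sketch in geometric units $G=c=1$; the blue value $39.829$ would require a radial geodesic integration that is not attempted here:

```python
import math

M = 1.0            # geometric units G = c = 1, so r_s = 2M
r = 4.0 * M        # r0 = 2 r_s

Omega = math.sqrt(M / r**3)        # coordinate angular velocity of the circular orbit
T = 2 * math.pi / Omega            # coordinate time for one revolution (= 16*pi*M)

tau_static = math.sqrt(1 - 2 * M / r) * T    # hovering (green) triplet
tau_orbit = math.sqrt(1 - 3 * M / r) * T     # orbiting (red) triplet
v_local = math.sqrt(M / r) / math.sqrt(1 - 2 * M / r)  # orbital speed seen by the static observer

print(tau_static)  # 8*pi*sqrt(2) ~ 35.543 GM/c^3
print(tau_orbit)   # 8*pi        ~ 25.133 GM/c^3
print(v_local)     # 1/sqrt(2)   ~ 0.707 c
```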
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why is it "bad taste" to have a dimensional quantity in the argument of a logarithm or exponential function? I've been told it is never seen in physics, and "bad taste" to have it in cases of being the argument of a logarithmic function or the function raised to $e$. I can't seem to understand why, although I suppose it would be weird to raise a dimensionless number to the power of something with a dimension.
| Orthodox view
A bit of a formal take at it: $\exp x$ can be expressed as a series:
$$\exp x=1 + x +\frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$$
So if $x$ has unit $X$, then the terms of this series have respective units
$$\text{None}, X, X^2, X^3, \cdots X^n, \cdots$$
which is not dimensionally consistent. The same argument holds for $\ln$ or for any analytic function (i.e. a function which can be expanded in such a series). This would apply as well to something as simple as
$$\frac{1}{1-x}=1+x+x^2+\cdots.$$
Actually, one does not even need the whole series. Just two terms of a Taylor expansion is enough to force the variable to be dimensionless. For example if a function $f(x)$ goes like
$$f(x) = x - x^2 + O(x^3),$$
as $x$ goes to 0 e.g., then $x$ can't have a dimension $X$, otherwise one would end up adding $X$ and $X^2$. This applies of course to asymptotic series too, like
$$f(x) = \frac{1}{x^2} + \frac{2}{x^3} + O\left(\frac{1}{x^4}\right),$$
as $x\to+\infty$.
Gaming around the orthodoxy
What about the following argument. I will take a very simple example, involving no series at all,
$$f(x) = x + x^2.$$
The orthodox argument above implies that $x$ shall be dimensionless. But I am going to argue that the coefficients 1 of $x$ and $x^2$ do actually have dimension $X^{-1}Y$ and $X^{-2}Y$, where $X$ is the unit of $x$, and $Y$ would then become the unit of $f(x)$. It makes everything consistent, doesn't it? Yes, but it is a travesty because it means that instead of $f(x)$ we actually deal with
$$f_\text{pseudo}(x) = a\left(\frac{x}{x_0}+\left(\frac{x}{x_0}\right)^2\right),$$
where $x_0$ has unit $X$ and $a$ has unit $Y$, that is to say
$$f_\text{pseudo}(x) = af\left(\frac{x}{x_0}\right).$$
And here it is: the argument of $f$ is indeed dimensionless! The argument generalises to any series. Let's look at the exponential as an illustration:
$$\exp x = \sum_{n=0}^{\infty} \frac{1}{n!}x^n.$$
So the argument would then be that $1/n!$ has unit $X^{-n}$ actually. Fair enough, but then instead of $\exp$, it means we deal with
$$\exp_\text{pseudo}(x) = a\sum_{n=0}^{\infty} \frac{1}{n!}\left(\frac{x}{x_0}\right)^n,$$
where $x_0$ has the dimension $X$, and where now $1/n!$ is dimensionless, and as above $a$ has some dimension $Y$. That is to say that
$$\exp_\text{pseudo}(x) = a\exp\frac{x}{x_0}.$$
So we end up with the argument of $\exp$ being dimensionless.
My visceral opinion about this little game: well, duh! All that for that, really?
Moreover, as pointed out by Emilio Pisanty in the comments, it requires that we pluck a scale $x_0$ (and potentially yet another scale $a$) from the sky: the whole point of dimensional analysis is that we have taken into account all possible dimensioned quantities beforehand. Here we introduce another one after the fact, and it does not make sense to either Emilio or myself.
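The bookkeeping above can be made concrete with a toy dimensioned-number class (a sketch standing in for a real units library such as pint; the `Quantity` class and `safe_exp` are inventions for illustration): exponentiating a length is rejected, while exponentiating the dimensionless ratio $x/x_0$ is fine.

```python
import math

class Quantity:
    """Toy dimensioned number: a value plus a dict of unit exponents."""
    def __init__(self, value, dims=None):
        self.value = value
        self.dims = dims or {}

    def __truediv__(self, other):
        dims = dict(self.dims)
        for unit, power in other.dims.items():
            dims[unit] = dims.get(unit, 0) - power
            if dims[unit] == 0:
                del dims[unit]
        return Quantity(self.value / other.value, dims)

def safe_exp(q):
    # exp only makes sense for dimensionless arguments
    if q.dims:
        raise ValueError("exp of a dimensioned quantity is meaningless")
    return math.exp(q.value)

x = Quantity(2.0, {"m": 1})    # 2 metres
x0 = Quantity(1.0, {"m": 1})   # the scale "plucked from the sky"

print(safe_exp(x / x0))        # fine: exp(2)
try:
    safe_exp(x)                # rejected: x carries a dimension
except ValueError as err:
    print(err)
```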
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 7,
"answer_id": 1
} |
Calculating acceleration of a particle from Radiation Pressure I am trying to calculate the the acceleration of a particle from radiation pressure, assuming all radiation is absorbed. I got $$\Delta \vec{p} = \frac{\Delta U}{c_0}$$ and the intensity $I_S$=$1367 \ \frac{W}{m^2}$.
I think that $\Delta U = I_S A$. Since $\vec{p}_0=0$, I get $$\vec{p}=\frac{I_S A}{c_0}=m \dot{x}$$ Now I am trying to find a way to calculate the acceleration. Since $$\vec{F}=\dot{\vec{p}}=m \ddot{x}=ma$$ I tried to do $$\frac{\mathrm{d}}{\mathrm{dt}} \frac{I_S A}{c_0} = 0 \quad \mathrm{???}$$ Obviously I cannot take the derivative of that part of the equation with respect to $t$. I know there is some connection between $I_S$ and time (because its unit is $\frac{W}{m^2} = \frac{\frac{kg\, m^2}{s^3}}{m^2}$), but I don't know how to differentiate that equation.
| The total energy $\Delta U$ is also proportional to time: the energy deposited is $I \times \text{Area} \times \text{time}$, given that the radiation falls normally on the body; otherwise you have to include a $\cos\theta$ factor. The force is then $F = \frac{\mathrm{d}p}{\mathrm{d}t} = \frac{I A}{c_0}$, so it is trivial to see that the acceleration will be constant if the intensity is constant.
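As a numeric illustration (a sketch; the area and mass below are made-up values, not from the question): with full absorption at normal incidence the force is $F = I_S A/c_0$, constant in time, so the acceleration is constant too.

```python
I_S = 1367.0      # solar intensity, W/m^2
c0 = 2.998e8      # speed of light, m/s
A = 1.0           # absorbing area, m^2 (assumed)
m = 1.0           # mass, kg (assumed)

F = I_S * A / c0  # momentum delivered per second = force
a = F / m

print(F)          # ~4.56e-6 N
print(a)          # ~4.56e-6 m/s^2 -- tiny but constant
```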
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/364907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why can a coupled wavefunction of 2 nucleons seemingly arbitrarily give two different expressions? When looking at the Isospin representation of a proton-neutron system (with the notation $|I,I_3\rangle$), you can go from an uncoupled to a coupled representation like this:
$$\textstyle|\frac{1}{2},+\frac{1}{2}\rangle|\frac{1}{2},-\frac{1}{2}\rangle=\sqrt{\frac{1}{2}}(|1,0\rangle+|0,0\rangle)$$
However, you can also find the following antisymmetric wavefunction when switching the first two kets around (or switching proton and neutron):
$$\textstyle|\frac{1}{2},-\frac{1}{2}\rangle|\frac{1}{2},+\frac{1}{2}\rangle=\sqrt{\frac{1}{2}}(|1,0\rangle-|0,0\rangle)$$
If you now want to calculate the probability of finding the proton-neutron system in a $|1,0\rangle$ state, you have to take the square of the inner product of the above expressions with $\langle1,0|$, which gives $\frac{1}{2}$ for both expressions.
Do you now have to add these two probabilities together, since they should be equivalent, even though one combination is symmetric and the other antisymmetric?
However, adding the probabilities means the probability to find the proton-neutron state in the $|1,0\rangle$ state is 1, but you could reason similarly for $|0,0\rangle$ and so find a total probability of 2 which doesn't seem right.
It seems weird that the wavefunction is different if you assume a proton-neutron vs. a neutron-proton system.
As a background for this question, I need to calculate the relative probabilities for 2 reactions, one of which is a pion and deuteron reacting to form a proton and neutron, so I'm writing the left and right sides in the coupled representation and taking the inner product.
| It seems there is a bit of confusion here. A state like
$$
\textstyle\vert \frac{1}{2},\frac{1}{2}\rangle_1\vert \frac{1}{2},-\frac{1}{2}\rangle_2 \tag{1}
$$
does not transform into itself under permutation of the particle index so is neither symmetric nor antisymmetric. As written, (1) describes distinguishable nucleons: the first nucleon is certainly the proton and the second is certainly the neutron.
If your nucleons are indistinguishable, you need to work with properly symmetrized states
$$
\vert\psi_\pm\rangle =\frac{1}{\sqrt{2}}\Bigl(\textstyle\vert \frac{1}{2},\frac{1}{2}\rangle_1\vert \frac{1}{2},-\frac{1}{2}\rangle_2\pm
\vert \frac{1}{2},-\frac{1}{2}\rangle_1\vert \frac{1}{2},\frac{1}{2}\rangle_2\Bigr)
$$
which are basically the $\vert 1,0\rangle$ and $\vert 0,0\rangle$ states you already have.
To compute the probability of finding the proton-neutron system (as a system of distinguishable particles) in $\vert 1,0\rangle$, you have
$$
\textstyle
\left\vert \langle 1,0\vert \left(\vert \frac{1}{2},\frac{1}{2}\rangle_1\vert \frac{1}{2},-\frac{1}{2}\rangle_2\right)\right\vert^2 =\frac{1}{2}\, .
$$
This is the probability of having the first nucleon a proton, and the second nucleon a neutron, in the isospin state $\vert 1,0\rangle$.
At this point you can ask about the probability of having the first nucleon a neutron, and the second a proton, and this must be $1/2$ since the probabilities must sum to 1. You can also ask about the probability of having the first nucleon a proton, and the second nucleon a neutron, in the isospin state $\vert 0,0\rangle$, which again must be $1/2$ for the same reason that the probabilities sum to $1$.
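A short numerical check of these probabilities (a sketch using explicit two-nucleon basis vectors built with a Kronecker product, with the first factor labelling nucleon 1):

```python
import numpy as np

up = np.array([1.0, 0.0])    # |1/2, +1/2>  (proton)
dn = np.array([0.0, 1.0])    # |1/2, -1/2>  (neutron)

pn = np.kron(up, dn)         # first nucleon a proton, second a neutron
np_state = np.kron(dn, up)   # first a neutron, second a proton

triplet = (pn + np_state) / np.sqrt(2)   # |1,0>
singlet = (pn - np_state) / np.sqrt(2)   # |0,0>

P_triplet = abs(triplet @ pn) ** 2
P_singlet = abs(singlet @ pn) ** 2
print(P_triplet, P_singlet)   # 0.5 0.5 -- and they sum to 1, as the answer says
```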
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What does the Kretschmann scalar really tell us about the geometry of spacetime? The Kretschmann scalar is one of the measures of spacetime curvature. For flat (Minkowski) spacetime it is zero. The dimensions of the Kretschmann scalar are $[L]^{-4}$. What does that physically signify about the geometry of spacetime?
| If we restrict ourselves to vacuum solutions then the Kretschmann scalar has a nice simple interpretation as the strength of the local tidal forces.
This happens because for a vacuum solution the Ricci tensor is zero, so the Kretschmann scalar depends only on the Weyl tensor, and this tells us about the tidal forces acting. The Kretschmann scalar is effectively proportional to the square of the tidal force. For the Schwarzschild solution this gives us:
$$ F_\text{tidal} \propto \sqrt{\frac{48M^2}{r^6}} \propto \frac{1}{r^3} $$
which we immediately recognise as the Newtonian expression for the tidal force.
For non-vacuum solutions I don't think there is a nice simple interpretation. The Kretschmann scalar is then related to both volume changes and shape changes of a test volume element.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tensor part of NN potential The potential of the nucleon-nucleon interaction includes a tensor part which is given by:
$S_{12}(\hat{\pmb{r}}) = (\hat{\pmb{r}} \cdot \pmb{\sigma}_1)(\hat{\pmb{r}} \cdot \pmb{\sigma}_2) - \frac{1}{3} \pmb{\sigma}_1\cdot\pmb{\sigma}_2$
Where $\hat{\pmb{r}}$ is a unit coordinate-space operator and $\pmb{\sigma}_k$ are vectors of the Pauli matrices acting on particles 1 and 2.
Why is the second term subtracted?
| In view of your comments I have expanded my answer. It is usual to define the non-central (tensor) potential in such a way that its average over all directions is zero. Integrating the first term alone over all angles gives
$ \frac{1}{4\pi}\int(\hat{\pmb{r}} \cdot \pmb{\sigma}_1)(\hat{\pmb{r}} \cdot \pmb{\sigma}_2)\,d\omega = \frac{1}{3} \pmb{\sigma}_1\cdot\pmb{\sigma}_2 ,$
so subtracting $\frac{1}{3}\pmb{\sigma}_1\cdot\pmb{\sigma}_2$ is precisely what makes the angular average of $S_{12}$ vanish.
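The angular average can be checked numerically (a sketch, not part of the answer): build $(\hat r\cdot\pmb{\sigma}_1)(\hat r\cdot\pmb{\sigma}_2)$ as a $4\times4$ two-body operator, average it over the sphere with a simple midpoint rule, and compare with $\frac{1}{3}\,\pmb{\sigma}_1\cdot\pmb{\sigma}_2$.

```python
import numpy as np

# Pauli matrices acting on nucleons 1 and 2
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# midpoint-rule average of (r_hat . sigma_1)(r_hat . sigma_2) over all directions
n_th, n_ph = 100, 100
avg = np.zeros((4, 4), dtype=complex)
wsum = 0.0
for k in range(n_th):
    th = (k + 0.5) * np.pi / n_th
    w = np.sin(th)                       # solid-angle weight sin(theta)
    for l in range(n_ph):
        ph = (l + 0.5) * 2 * np.pi / n_ph
        n = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph),
                      np.cos(th)])
        n_dot_sig = sum(n[i] * sig[i] for i in range(3))
        avg += w * np.kron(n_dot_sig, n_dot_sig)  # two-body operator on particle 1 x particle 2
        wsum += w
avg /= wsum

# the claimed average: (1/3) sigma_1 . sigma_2
target = sum(np.kron(sig[i], sig[i]) for i in range(3)) / 3.0
print(np.max(np.abs(avg - target)))   # close to zero
```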
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/365588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is torque in an electric motor generated from repelling magnetic dipoles and the Lorentz force on a solenoid? So in an electric motor the torque is generated from the Lorentz force on the current carrying wire by the interaction with the outer magnetic field. There is also another interaction between the magnetic dipole created by the solenoid (or wire) and the external magnetic field which would drive the motor as the B-fields repel and attract each other periodically.
Is an electric motor driven by both these forces or are they the same force (if so, how?)
|
There is also another interaction between the magnetic dipole created by the solenoid (or wire) and the external magnetic field which would drive the motor as the B-fields repel and attract each other periodically.
Your statement, that the rotor is a coil of current-carrying wire and that, besides the external magnetic field, a magnetic field is induced by the coil, is fully reasonable.
Is an electric motor driven by both these forces or are they the same force?
The external magnetic field reinforces this induced field on one side of the coil and weakens it on the other.
Overall, this means that the neutral zone of the commutator, in which the polarity of the current must be switched over, has to be shifted a little bit in the direction of rotation. Otherwise, for a short time every 180°, the motor would operate against the direction of rotation, in generator mode.
Furthermore, the induction from this shifted field leads to induced currents and hence to sparks at the commutator.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Where does the energy for electron degeneracy pressure come from? I just watched a web video about white dwarf stars. It mentions that the star’s electrons flow among the degenerate matter. Since most of the electrons can’t be in the lowest state, due to Pauli’s exclusion principle, they get bumped to higher states and replace the radiation pressure that disappeared when fusion stopped. But where did the energy to boost the electrons to distinct states (modulo spin) come from? What lost energy to make up for it?
| The energy comes from whatever force pulled the electrons into such a dense volume in the first place--in this case, gravity.
When the star is 'normal sized', like the Sun, electron degeneracy produces a small amount of pressure. As the star makes its way along the path to a white dwarf, the electron degeneracy pressure rises as the star gets smaller. While the total pressure is insufficient to balance gravity, the star keeps shrinking, trading decreasing gravitational energy* for increasing pressure energy.
*Gravitational energy is negative. Decreasing gravitational energy means that this energy becomes more negative, i.e. the negative of an increasingly large amount of energy.
This pressure energy includes the thermal pressure energy at some stages of evolution, but when degeneracy is reached, that means that the pressure energy is predominantly Pauli exclusion principle based.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Light bulb longevity I have a summer home in North Carolina where there are more people in the summer than in the winter. Light bulbs seem to last longer in the winter when there is less demand on the system. I suspect the line voltage drops with the higher use. Would this affect the life of the bulb?
| You may have noticed that filament light bulbs often fail when the bulb is switched on.
This happens because when a bulb is switched on for a short period of time the current through the bulb exceeds its normal operating current.
This behaviour is shown for two different wattage filament light bulbs in the graphs below.
The reason for the larger current on switching on is that when cold the resistance of the filament is much less than when it is at its working temperature.
So there is a power surge when the bulb is first switched on which could lead to excessive heating and the filament breaking.
When hot there is some evaporation of the filament and this can lead to a narrowing of the filament in a small region.
This in turn leads to a larger heating effect in that region thus increasing the local temperature and hence the rate of evaporation in that region.
Thus an avalanche-type process results, with the probability of the filament breaking in that region increasing.
Putting gas inside the bulb reduces the rate of evaporation and hence increases the lifetime of the bulb.
The more often a filament bulb is switched on (and off), the more chance there is of a failure, and it is probable that during the winter in North Carolina light bulbs stay on for long periods of time and are switched off less often than in the summer.
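The size of the switch-on surge can be estimated (a rough sketch: the temperatures and the resistivity exponent for tungsten are typical textbook values, not from the answer above):

```python
T_cold = 300.0     # K, room temperature
T_hot = 2800.0     # K, typical operating filament temperature (assumed)
alpha = 1.2        # rough exponent: tungsten resistivity ~ T**alpha (assumed)

R_ratio = (T_hot / T_cold) ** alpha   # hot resistance / cold resistance
inrush_ratio = R_ratio                # I_cold / I_hot at fixed mains voltage

print(inrush_ratio)   # ~15: the cold switch-on current is an order of magnitude above normal
```

This order-of-magnitude surge at switch-on is why filament bulbs so often fail at the moment they are turned on.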
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Density matrices, off-diagonal terms, coherences and correlations I'm trying to better understand the off-diagonal terms of the density matrix - an often brought up question on this site I realise. Specifically my confusion at the moment concerns the interpretation of the real and imaginary components of these off-diagonal terms. The diagonal terms represent the populations, and the off-diagonal terms the coherences in the system. I'm reluctant to say the off-diagonal terms represent the probability of occupancy of the coherent superposition states because their values are complex. But they clearly represent something to that effect. Does anyone have some better intuition on this?
| So long story short, those terms are basis-relative and cannot be given a truly deep philosophical meaning: being Hermitian, the density matrix is diagonal in some orthogonal basis; you just aren't looking in the right one.
With that said we can certainly look a little more in-depth at a state of the form $$\rho = \begin{bmatrix}p&q\\q^* & r\end{bmatrix}$$at which point we begin to see that $p = \operatorname{Tr} \big(\rho ~ |0\rangle\langle 0| \big)$ and $r = \operatorname{Tr} \big(\rho ~ |1\rangle\langle 1| \big)$ are precisely the values of these indicator observables which
tell us how much this state $\rho$ occupies the $|1\rangle$ or $|0\rangle$ states on average.
If one were instead to use the state $|+\rangle = \sqrt{\frac12} |0\rangle + \sqrt{\frac12} |1\rangle$ for the indicator one would find that the occupation was instead $\frac12 + \operatorname{Re}(q),$ while the occupation for $|-\rangle = \sqrt{\frac12} |0\rangle - \sqrt{\frac12} |1\rangle$ is going to be just $\frac12 - \operatorname{Re}(q).$ If one instead uses $|0\rangle \pm i|1\rangle$ as one's orthogonal sets this becomes $\frac12 \pm \operatorname{Im}(q)$ as well.
In that sense we can take $q=x + i y$ and read two probabilities off of it: $\frac12 \pm x$ and $\frac12 \pm y.$ They tell us those probabilities for the $x$ and $y$ axes of the Bloch sphere whereas $p, r$ tell us about the $z$-axis. Just to unify these in notation you could write this as $$\rho = \frac12~I + \delta_x~\sigma_x + \delta_y~\sigma_y + \delta_z~\sigma_z,$$ with six probabilities $\frac12 \pm \delta_\bullet.$
Any off-diagonal term can be viewed this way indirectly; one can restrict one's view of the system to just those two states specified by the row and column, and one has $\frac12 (p + r) \pm \operatorname{Re} q$ for the residence on the $x$-quadrature of that two-level system.
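These occupations can be checked directly (a sketch, not part of the answer; note that the sign in front of $\operatorname{Im} q$ for the $|0\rangle \pm i|1\rangle$ pair depends on the phase convention, and with the convention below it comes out as $\frac12 \mp \operatorname{Im} q$):

```python
import numpy as np

p, r = 0.6, 0.4              # populations (p + r = 1)
q = 0.1 + 0.2j               # coherence; |q|^2 <= p*r, so rho is a valid state
rho = np.array([[p, q], [np.conj(q), r]])

def occupation(rho, ket):
    # Tr(rho |ket><ket|) = <ket| rho |ket> for a normalized ket
    return float(np.real(np.conj(ket) @ rho @ ket))

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
plus_i = np.array([1, 1j]) / np.sqrt(2)
minus_i = np.array([1, -1j]) / np.sqrt(2)

print(occupation(rho, plus))     # 1/2 + Re q = 0.6
print(occupation(rho, minus))    # 1/2 - Re q = 0.4
print(occupation(rho, plus_i))   # 1/2 - Im q = 0.3
print(occupation(rho, minus_i))  # 1/2 + Im q = 0.7
```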
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/366849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Why don't spectacles form these weird images? It's an established fact that:
*
*Convex lenses produce inverted images of objects beyond the focus, on the other side of the lens.
*Any object placed at a finite distance from a concave lens appears to be somewhere between the focus and the optical centre when viewed from the other side.
Both these facts come from the lens formula, $$\frac{1}{f}=\frac{1}{v}-\frac{1}{u}$$
But I wonder:
*
*why a person wearing convex lenses doesn't see inverted images of objects beyond the focus, and
*why a person wearing concave lenses doesn't feel that the furthest objects are at a distance of $f$ from their eyes.
Hope my question is clear.
An example
Consider a person wearing spectacles with concave lenses of power $-1$ $\mathbf{D}$. The focal length would be $-1$ $\mathrm{m}$. If an object is at an object-distance $u=-3$ $\mathrm{m}$, simple calculations show that the image would be at an image-distance $v=-0.75$ $\mathrm{m}$ away.
But obviously the object doesn't appear to be so close to the person wearing the spectacles.
More confusingly still, the image would be magnified by a factor of $0.25$, and hence would appear to be rather small.
But again, as anyone wearing concave lenses would tell you, that isn't what they see!
Why is this so?
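The numbers in the example can be verified with the lens formula as written above (same sign convention):

```python
f = -1.0    # m, focal length of a -1 D concave lens
u = -3.0    # m, object distance

v = 1.0 / (1.0 / f + 1.0 / u)   # from 1/f = 1/v - 1/u
m = v / u                        # transverse magnification

print(v)    # -0.75 m
print(m)    # 0.25
```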
| The eye itself has its own lens that is intended to create an image at the retina. If this lens is faulty for one reason or another, the image does not form at the retina - rather, it forms in front of or behind the retina. The additional lenses (glasses) shift the path of the light rays by a small amount before they enter the eye, moving the focal point of the combined glasses-eye system onto (or closer to) the retina.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why do phones land face down? Layman here.
I'm not sure if this is the case or not, but my anecdotal evidence is that mobile phones, especially large screen phones, tend to fall face down when you drop them; much to the owner's dismay, this leads to cracked screens.
I'm sure there is a scientific explanation for this, so I'd like to know: Why do mobile phones tend to fall and land face first (if so)?
I have a feeling it's related to the way your toast always falls butter side down, or how the shuttlecock always turns toward the same direction, but I'd like to know the explanation.
| A physicist working at Motorola actually did this experiment as part of a promotional push for shatter-proof screens. This same physicist had previously written a paper on the same question, applied to the classic "buttered toast" problem (does toast really land butter side down?).
The short answer is: the way the phone lands depends on how it is oriented when it leaves your hand. People tend to hold their phones the same way: face up, at an angle, fingers on either side, slightly below the phone's center of gravity, at just about chest-high. The phone also tends to "fall" the same way: slips out of your hand and you fumble slightly trying to catch it.
Given all those parameters, when the phone drops out of your hand, it typically flips over a half a revolution by the time it contacts ground. If you were holding the phone flat, or upside down, or lower to the ground, the result would be different. But given the relative uniformity of the way people hold the phones, there's a corresponding relative uniformity in the way they land when dropped.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/367792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "89",
"answer_count": 3,
"answer_id": 2
} |
Wave function of many particle I read that the wave function of a system of many particles is formed from the product of the wavefunctions of the individual particles. What is the logic behind it?
| This question is not answered in standard textbooks. To me, this signals that standard quantum mechanics doesn't ask 'What is the wave function for a many-particle quantum system?' I have read various work-arounds for this on the internet, but to me they seem not to really tackle the principled question: 'what is the general quantum-mechanical principle for constructing the wave function for a many-particle quantum system?'
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Poisson's equation in regions Suppose I have two regions in space. Region 1 and region 2.
In Region 1 I have a bunch of charges, and in region 2 I have no charges.
Is it true that Laplace's equation is satisfied in region 2?
| The Poisson equation is a differential equation. Any differential equation on $\Omega\subset \mathbb{R}^n$ can be written in terms of a differential operator $\mathfrak{D}$ mapping functions on $\Omega$ to functions on $\Omega$ as
$$\mathfrak{D}f=j$$
where $j$ is what is sometimes called a source term. When $j = 0$ the equation is homogeneous.
The differential equation is an equality between functions. But equality of functions is pointwise equality. Thus when $f$ is a solution, $\mathfrak{D}f(x) = j(x)$ for all $x\in \Omega$.
In particular if $\Omega'\subset \Omega$ then $\mathfrak{D}f(x)=j(x)$ for all $x\in\Omega'$.
Now if $j(x) = 0$ for $x\in \Omega'$, this means that restricted to $\Omega'$ $f$ will satisfy $\mathfrak{D}f=0$, which is the homogeneous equation associated to $\mathfrak{D}$.
Yours is just the particular case $\mathfrak{D}=\nabla^2$ with $j(x)=-\rho(x)/\epsilon_0$ and with $\Omega'$ the region $2$. The potential is a solution to $\mathfrak{D}\Phi=j$ and thus in $\Omega'$ the potential satisfies Laplace's equation
$$\nabla^2\Phi=0.$$
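A quick numerical illustration (a sketch, not part of the original answer): the point-charge potential $\Phi \propto 1/r$ has its source only at the origin, so a finite-difference Laplacian evaluated anywhere away from the origin comes out (numerically) zero.

```python
import math

def phi(x, y, z):
    # point-charge potential, up to constant factors
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian(f, x, y, z, h=1e-3):
    # standard 7-point finite-difference Laplacian
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6 * f(x, y, z)) / h**2

print(laplacian(phi, 1.0, 1.0, 1.0))   # ~0: Laplace's equation holds away from the charge
```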
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can redshift be used to create an accurate accelerometer in a phone? Currently, accelerometers use small springs which produce wild fluctuations and inaccuracies.
Because light's speed is limited and compresses when its source moves in the direction it's going, and decompresses when it does the opposite, it seems we could use light as a spring which uses the constraints of space as its medium.
What I'm wondering is if a cross-beam of light were put inside of a device, with receivers constantly reading the frequency of the light to detect its red-shift compared to what the output should be, would this be accurate enough to determine the phone's velocity and acceleration in space?
| You can measure rotational velocity very accurately with a fiber optic gyroscope. When you are rotating, the path length along one direction "is shorter". Whether you consider that a red shift or a phase shift doesn't really matter - you are using the finite speed of light to determine the rotation.
Measuring absolute velocity in space is of course impossible (because it's dependent on your reference frame). Acceleration could be measured using the same principle as above, but it's not easy.
I found a nice description of an experiment to measure velocity using interference in this paper which claims an accuracy of velocity on the order of µm/s. Unfortunately, they are moving one mirror in their setup relative to the rest of the interferometer - that's a much more favorable setup than a self-contained device (and explains why they are able to measure velocity and not just acceleration).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the correlation function a power law at the critical point? I’m taking my first exam in statistical field theory and critical phenomena. I’ve reached a point in which we use the fact that the pair correlation function decays as a power law at the critical point:
$$\left<\psi(x)\psi(0)\right> \sim\frac{1}{x^{D-2+\eta}}$$
to renormalize it to reach the $\epsilon$-expansion in 4 dimensions, which I’m comfortable with.
The thing is, the whole procedure is based on this assumption, and I couldn't find a way to prove it from the topics we previously discussed, which are the Landau $\psi^4$ expansion, the Hartree-Fock approximation, and renormalization or blocking of variables.
Can anyone give me a hint on how to proceed? I'm really missing the thing that glues the two together.
| I am not an expert, but I am learning: the scaling law is related to the renormalization group; see the famous K. G. Wilson paper:
https://journals.aps.org/prb/abstract/10.1103/PhysRevB.4.3174
The scaling laws result from the renormalization group in the above case.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/368947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Why is work done equal to $-pdV$ only applicable for a reversible process? In thermodynamics, when we're interested in gases, I know that the work done can be written as $-pdV$ for a reversible process ($p$ is the pressure of the system, and $V$ is the volume of the system).
This is because $$dW=Fdx=-pAdx=-pd(Ax)=-pdV$$
However, why is it not true also that the work done is $-pdV$ for non-reversible processes as well?
| Suppose that gas of pressure $p$ is contained in a vessel behind a piston of area $A$. In order to get the piston to move you will have to supply an external force
$$
f = p A + \epsilon
$$
where $\epsilon$ is the force due to friction, or anything else that prevents the piston from moving freely. Hence the work done on the system when the piston moves in through a distance $dx$ is
$$
dW = f dx = p A dx + \epsilon dx
$$
The change in volume of the gas is
$$
dV = - A dx
$$
so we have
$$
dW = - pdV + \epsilon dx .
$$
So there is your answer. The work is not $-pdV$ because it never was $-pdV$ in the first place. Rather, that is the answer you get in the limit where the friction (or similar) term is negligible. That limit corresponds to a reversible process because reversibility means that an infinitesimal reduction in applied force will suffice to get the process to change direction. This happens when $f$ and $pA$ are balanced, such that a tiny change in either would be sufficient to make the piston begin to move in or out, and this condition corresponds to $\epsilon = 0$.
Notice that we do not need to invoke either the ideal gas or the notion of external pressure in order to present the reasoning. The result is general.
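A one-step numeric illustration of the split $dW = -p\,dV + \epsilon\,dx$ above; all values for $p$, $A$, $\epsilon$ and $dx$ are assumed, illustrative numbers:

```python
# Numeric illustration of dW = -p dV + eps dx for a single small compression step.
p = 1.0e5      # gas pressure, Pa (assumed)
A = 1.0e-3     # piston area, m^2 (assumed)
eps = 2.0      # friction force, N (assumed)
dx = 1.0e-4    # piston moves inward by dx, m (assumed)

dV = -A * dx                       # volume decreases on compression
dW_direct = (p * A + eps) * dx     # work done by the external force f = pA + eps
dW_split = -p * dV + eps * dx      # same work, written as -p dV plus a friction term

print(dW_direct, dW_split)
```

The two expressions agree exactly, and the external work exceeds $-p\,dV$ by the friction term, which vanishes only in the reversible limit $\epsilon \to 0$.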
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Mass eigenstates and weak eigenstates of neutrinos I am aware that similar questions have been answered earlier, but I am still not able to convince myself on the following question:
*
*If mass(/energy) eigenstates are eigenstates of the Hamiltonian operator, which operator is connected to weak eigenstates, and what is its eigenvalue?
|
which operator is connected to weak eigenstates
AFAIK interactions are not connected to operators in a one-to-one relation. There are four interactions: the electromagnetic, the weak, the strong and the gravitational, characterized by the corresponding (em, weak, strong) interaction coupling constants.
These interactions refer to the elementary particles in the standard model of particle physics.
All particles carry quantum numbers which may be conserved or not depending on the interaction.
and what is its eigenvalue.
so the term "weak eigenstate" does not have a meaning, imo.
For neutrinos there are the number operators, operating on the specific neutrino field, and the creation and annihilation operators. The quantum field theory formalism has to be used, and a search brings up a number of papers on how oscillations can be modeled in quantum field theory:
example
Flavor oscillation of traveling neutrinos is treated by solving the one-dimensional Dirac equation for massive fermions. The solutions are given in terms of squeezed coherent state as mutual eigenfunctions of parity operator and the corresponding Hamiltonian, both represented in bosonic creation and annihilation operators. It was shown that a mono-energetic state is non-normalizable, and a normalizable Gaussian wave packet, when of pure parity, cannot propagate. A physical state for a traveling neutrino beam would be represented as a normalizable Gaussian wave packet of equally-weighted mixing of two parities, which has the largest energy-dependent velocity. Based on this wave-packet representation, flavor oscillation of traveling neutrinos can be treated in a strict sense. These results allow the accurate interpretation of experimental data for neutrino oscillation, which is critical in judging whether neutrino oscillation violates CP symmetry.
It ain't simple.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Turning a bike, why does it lean? I realize this seems like a pretty simple issue of drawing a free body diagram, but I just can't seem to figure it out.
If a bike leans, then it must have had a torque that made it lean. I considered the centripetal force, in this case friction, as a possible source for this torque, but that would create torque in the opposite direction. The only remaining torque causing force in my FBD is weight. But weight was there before the leaning, and would only cause a torque after leaning.
If anything, because of centripetal acceleration, observing from an inertial frame of reference, the bike should lean in the opposite direction because of the introduction of the centripetal force. So the question is, what force causes the bike to start leaning?
| The bike would lean in the other direction, due to centripetal force, without a cyclist. A cyclist leans the bike on purpose into the turn, to counteract said centripetal force.
The torque involved is the cyclist's weight as they lean into the turn.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Using integrals to expand a vector in continuous basis I am new to quantum mechanics. I have been trying to understand why, when we want to represent a function $\psi(x)$ as a ket in the continuous basis $\vert x\rangle$, we use the integral:
$$\vert \psi(x)\rangle =\int\psi(x)\vert x\rangle dx$$
where in non-continuous basis it is :
$$\sum\psi(x)\vert x\rangle $$
clearly the $dx$ gives different units here, so I am not sure if integrals make sense to use to expand the vector in this basis. Also, I have heard that continuous means uncountable, which I am not sure about: can't we just index all the basis vectors with natural numbers, since last time I checked we have an infinity of them?
| You want to think of
$$
\psi(x)=\langle x\vert\psi\rangle
$$
as a (complex) number interpreted as the “component” of $\vert\psi\rangle$ on the basis vector $\vert x\rangle$,
with $\langle x\vert\bar x\rangle=\delta(x-\bar x)$. This way
\begin{align}
\vert \psi\rangle &= \int\,dx\, \psi(x) \vert x\rangle \, ,\\
\psi(\bar x)=\langle \bar x\vert\psi\rangle &= \int \,dx\,
\psi(x)\langle \bar x\vert x\rangle
\end{align}
is basically the (continuous) generalization of
$$
\vec r = \sum_{i} {\hat \iota} \,\left({\hat \iota}\cdot \vec r \right).
$$
where the resolution of the identity
\begin{align}
\hat I&=\sum_i {\hat \iota}\ {\hat \iota}\cdot\\
\hat I\vec r=\vec r&= \sum_i {\hat \iota}\ {\hat \iota}\cdot \vec r
\end{align}
is replaced by the continuous
$$
\hat I= \int dx \vert x\rangle\langle x\vert\, .
$$
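A discretized numerical sketch may make the continuous resolution of the identity concrete. The grid, spacing, and Gaussian test function below are assumptions of the illustration; on a grid with spacing $dx$, the position "eigenvector" $\vert x_j\rangle$ becomes a delta spike $e_j/dx$, so that $\langle x_j\vert f\rangle = f(x_j)$:

```python
import numpy as np

# Discretized sketch of I = ∫ dx |x><x|.
# Inner product on the grid: <f|g> ≈ sum_i conj(f_i) g_i dx.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2) / np.pi**0.25     # normalized Gaussian wavefunction

# Normalization under the discretized inner product:
norm = np.sum(np.abs(psi)**2) * dx

# Apply I = sum_j dx |x_j><x_j| to psi: each term is dx * (e_j/dx) * psi(x_j)
reconstructed = np.zeros_like(psi)
for j in range(len(x)):
    e_j = np.zeros_like(psi)
    e_j[j] = 1.0 / dx                      # delta-normalized basis "vector"
    reconstructed += dx * e_j * psi[j]     # dx |x_j> <x_j|psi>

print(norm)   # ≈ 1, and reconstructed ≈ psi
```

The $dx$ in the expansion is exactly what compensates the $1/dx$ in the delta-normalized basis vectors, which is the discrete shadow of the units question raised above.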
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mass and Newton's Second Law While trying to understand the second law of Newton from "An Introduction to Mechanics" by Kleppner and Kolenkow, I came across the following lines that I don't understand:
"It is natural to assume that for three-dimensional motion, force, like acceleration, behaves like a vector. Although this turns out to be the case, it is not obviously true. For instance, if mass were different in different directions, acceleration would not be parallel to force and force and acceleration could not be related by a simple vector equation. Although the concept of mass having different values in different directions might sound absurd, it is not impossible. In fact, physicists have carried out very sensitive tests on this hypothesis, without finding any variation. So, we can treat mass as a scalar, i.e. a simple number, and write $\vec{F} = m\vec{a}$."
The lines above lead me to question:
*
*Why is it not "obviously true" that force behaves like a vector?
*Why is it not impossible for mass values to be different in different directions?
| If mass is a vector quantity then how does one find the total mass of two vector masses after they are combined?
Is there really any evidence that two objects of equal magnitude mass $m$ when joined together exhibit variations in the magnitude of their combined mass varying from $0$ to $2m$?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/369817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 8,
"answer_id": 6
} |
$dU=dQ$ and $dU=TdS$, but $dQ$ not always equal to $TdS$? Why? $$ dU = dQ+dW $$ $$ dU=TdS-pdV $$ The equations above are always true for a thermodynamic state of a certain system. Now let's say that we have a situation where $dW=0$, this tells us that $$ dU=dQ $$ $$ dU=TdS $$But still I can't write $ dQ=TdS $, since this only works for a reversible change of my system. So if I don't have a reversible system I can work with $ dU=dQ $ and $ dU=TdS $, but I can't work with $ dQ=TdS $.
I get this, but I have been trying to figure out why this is, and I just can't seem to get it.
| For an arbitrary process between equilibrium states of a closed system, we have
$$
\Delta U = Q + W
$$
which is just conservation of energy.
If the process happens quasi-statically, state variables are well-defined at each point in time and we can go to an infinitesimal description
$$
dU = \delta Q + \delta W
$$
where
$$
\delta Q \leq TdS \qquad \delta W \geq -p dV
$$
Equality holds if the process is reversible, ie when the change in entropy is given by $dS = \delta Q/T$ without additional entropy production (due to friction, chemical reactions, what have you...).
In general, work performed externally on a system can get converted into heat within the system, and an infinitesimal descriptions in terms of energy flows and state variables won't match.
Explicitly, we have
$$
\delta Q = TdS - T\,\delta S_\text{irrev} \\
\delta W = -pdV + T\,\delta S_\text{irrev}
$$
where $\delta S_\text{irrev}$ is the additional entropy produced compared to a reversible process.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/370926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Newton's first law and Inertial systems
Newton's first law is part definition and part experimental. Isolated bodies move uniformly in inertial systems by virtue of the definition of an inertial system. In contrast, the assertion that inertial systems exist is a statement about the physical world.
According to me, the assertion in the last lines follows from the following statement.
It is always possible to find a coordinate system in which isolated bodies move uniformly.
Am I right to think that the assertion follows from the above statement?
Next, an isolated body is considered to be free of forces. A body, not isolated, but experiencing a net zero force also moves uniformly in inertial systems. I state the following:
It is always possible to find a coordinate system in which bodies experiencing a net zero force move uniformly.
While I think this makes better sense to me, I am still leaving some room for doubt. Could I be wrong in making the above statement?
| Well, since everything is relative to something else, there isn't any standard inertial reference frame. So, you first consider a particular frame to be inertial (generally everything is calculated w.r.t. the Earth, even though it's in circular motion, which means it's accelerating), and calculate everything else w.r.t. it.
For your first part, an example would be two cars accelerating equally in the same direction: seen from either car's frame they are moving uniformly or at rest, but w.r.t. an outside observer they are being acted upon by a force. So, you can find a system where bodies move uniformly w.r.t. one frame but not all.
The above example answers your second query as well, since the net force on either car as seen from the other is zero, but not to an outside observer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Twin paradox Doppler shift explanation I was reading the wikipedia article on the twin paradox and came upon the section describing it in terms of the relativistic Doppler shift (link).
The image below illustrates the received signals from Earth to ship (left) and ship to Earth (right).
The explanation states that on the outward journey the twin on the ship sees the twin on Earth age only 1 year (illustrated by the few red signals in the left image), while on the return journey he sees the twin on Earth age by 9 years (illustrated by the many blue signals).
I understand this from the explanation, but doesn't this conflict with the concept that time seems to run slower for objects moving relative to an observer? This explanation would lead one to believe that the rate at which an observer sees a moving object travel through time depends on whether the object is moving towards or away from the observer. This should not be the case according to the time dilation equation, which depends on the absolute value of the velocity, not the direction.
|
But doesn't this conflict with the concept that time seems to run slower for objects moving relative to an observer? This explanation would lead one to believe that the rate at which an observer sees a moving object travel through time depends on whether the object is moving towards or away from the observer.
Time indeed runs slower for objects moving relative to the observer. Time dilation for outward and inward journey depends only on relative velocity. The number of signals received during outward and inward journey differ due to Doppler effect. Doppler effect is about perception whereas time dilation is real. Mike's answer points out this distinction.
Also, if you refer the derivation for relativistic Doppler effect you will see that the formula is derived by applying Doppler shift and time dilation as two independent concepts.
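The numbers in this example (1 year seen outbound, 9 years seen inbound) correspond to the classic setup with $\beta = 0.8$ and a 5-year Earth-frame leg; a quick sketch, assuming those values, reproduces the Doppler bookkeeping:

```python
import math

# Doppler bookkeeping for the twin paradox, assuming the classic setup
# behind the figure: beta = 0.8, Earth-frame trip time 5 years each way.
beta = 0.8
gamma = 1 / math.sqrt(1 - beta**2)        # 5/3
k = math.sqrt((1 + beta) / (1 - beta))    # relativistic Doppler factor = 3

tau_leg = 5 / gamma      # proper time per leg on the ship: 3 years

# Earth-years the traveller *sees* tick by, per leg:
seen_outbound = tau_leg / k    # redshifted: 1 year
seen_inbound = tau_leg * k     # blueshifted: 9 years

print(seen_outbound, seen_inbound, seen_outbound + seen_inbound)  # 1 9 10
```

Note that the time-dilation factor $\gamma$ is the same on both legs; only the Doppler factor flips between $1/k$ and $k$, which is why what is *seen* differs while the dilation does not.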
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Force on plate of parallel plate capacitor with dielectric If we have a parallel plate capacitor whose plate charge is +Q and whose polarization charge is Qp, as shown in the figure,
then, while finding the force acting on the left plate of the capacitor for instance, shouldn't the forces due to the polarized charges -Qp and +Qp together be zero, and therefore the only force acting be due to the right plate of the capacitor? Hence, shouldn't the total force acting on the left plate be independent of the dielectric constant of the medium?
| Agreeing with @Aniansh that the two induced surface charges on the faces of the dielectric are separated by a finite distance, and therefore must cast a net electric field at a point beyond the dielectric boundaries, because one surface will always be closer to the point than the other. However, when we talk about capacitors it is assumed that A >> d, i.e. the plate area is much greater than the distance between the plates. Even when evaluating the electric field inside the plates we use the formula $\frac{\sigma}{\epsilon_0}$, which is valid when the field is evaluated at a point just near the surface of infinite charged sheets. In such a compact setup as that of a capacitor, it is safe to assume that any point inside the capacitor but outside the dielectric sees two equal forces applied by the opposite faces of the dielectric, each of magnitude $\frac{\sigma_d}{2\epsilon_0}$, because we are evaluating the field at a point very near the surfaces of the dielectric. So even if there is a separation, the limiting values of the forces are the same, because the distances from the surfaces are small enough to be considered 'very near the surfaces'.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Physics interpretation of the expectation value of an electrons spin components Suppose I have an electron that is in the spin state
$$\chi =A\begin{bmatrix}3i \\4\end{bmatrix} $$
If I calculate the expectation values of its spin components $S_x$ $S_y$ $S_z$, I get
$$\langle S_x \rangle= 0$$
$$\langle S_y \rangle = -\hbar \dfrac{12}{25}$$
$$\langle S_z \rangle= -\hbar \dfrac{7}{50}$$
I can do the math part easily, however I am having a bit of difficulty trying to apply a physical interpretation to my answers.
My attempt at a physical interpretation
Due to the uncertainty principle, we cannot have well-defined $S_x$, $S_y$, $S_z$ components simultaneously. Hence, we can only calculate the expectation value of its components. For $S_x$, the probability that the spin will lie along the $S_x$ component will be zero. For $S_y$ and $S_z$, the spin has either $-\hbar \dfrac{12}{25}$ or $-\hbar \dfrac{7}{50}$.
| Expectation values are just average values; namely if you do a measurement of the spin at the direction in question "many times" with "identical" setups, the average values of your spin measurements at that direction should be equal to the expectation value given by quantum mechanics.
That said, I think your statement about $S_x$ is not so precise. When you measure $S_x$ several times, it doesn't mean that you always get $0$; it's the average that's zero, but your measurement result at a particular time may not be zero.
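The quoted expectation values can be checked numerically; the sketch below sets $\hbar = 1$ and uses $\hat S_i = \sigma_i/2$ in the standard Pauli basis:

```python
import numpy as np

# Check the quoted expectation values for chi = A [3i, 4], with hbar = 1.
chi = np.array([3j, 4.0])
chi = chi / np.linalg.norm(chi)        # fixes the normalization A = 1/5

sx = 0.5 * np.array([[0, 1], [1, 0]])
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]])

# <S_i> = chi^dagger S_i chi; np.vdot conjugates its first argument.
exp_sx = np.vdot(chi, sx @ chi).real
exp_sy = np.vdot(chi, sy @ chi).real
exp_sz = np.vdot(chi, sz @ chi).real

print(exp_sx, exp_sy, exp_sz)   # 0, -12/25, -7/50
```

Each of these is the average over many measurements on identically prepared states, not the outcome of any single measurement.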
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/371884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is there an antimatter sweet spot? More Details:
Is there a spot near antimatter where the repulsive force of its nucleus and the attractive force of the positrons cancel out? That is, if I took a heavy antimatter atom and ionized it, would there be a scenario in which electrons get attracted to the positrons around the anti-atom but can't get close enough due to the repulsive force from the nucleus, so they just sit around it, unable to get closer but still attracted?
| If you start with neutral antimatter atom, then you would have to add positrons in order to make it positively charged so that it would attract electrons. While it would be possible to then add electrons to an orbital, that electron would quickly find a positron and annihilate, leaving behind the original atom.
Matter-antimatter systems have been created in labs, the simplest being positronium, consisting of an electron and a positron orbiting each other. This "atom" lasts an average of 0.125 nanoseconds before annihilating into photons.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is wave made of particles? I always feel confused about the concept of a wave. I don't know why we have to develop a term called "wave". To me, a wave is made of particles oscillating up and down periodically.
Is wave just a collection of particles or is wave an identity independent of particles?
For example:
A Wave on a string is a collection of string particles
A Sea wave is a collection moving water molecules
A Light wave is a collection of photons
But then why does light still travel in a straight path? If light is a wave, shouldn't it move up and down?
Note:I am a high school student and English is my second language.
| Light is both a wave and a particle according to wave-particle duality. In fact, this is true for every object in the universe.
For example, light behaves as a particle in the photoelectric effect and as a wave in the double-slit experiment.
Also, a light wave does oscillate perpendicular to the direction of propagation; that's why it has an amplitude and frequency.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Do higher frequency/energy levels in the EM spectrum mean higher temperatures? I am trying to find concrete evidence that for example, light in the optical spectrum would be hotter than infrared light because it has a higher frequency, and that is directly proportional to energy. Is energy directly proportional to temperature?
If we are to split up the optical spectrum into its components, blue light has a higher frequency than red light, and blue light is hotter than red light. Does this work for the whole spectrum?
| It's complicated. If we are talking about the temperature of an object, then yes: the hotter the object is, the higher the frequency of the electromagnetic waves it emits, going from the infrared at around 200 degrees to the UV at around 4000 degrees Celsius. In terms of radiation absorption on a black object, the hottest frequency is about yellow, not red or infrared, since most common objects don't absorb those.
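The emission part of this can be made quantitative with Wien's displacement law, $\lambda_{\text{peak}} = b/T$; a minimal sketch using the standard value of the constant $b$:

```python
# Wien's displacement law: lambda_peak = b / T.
b = 2.898e-3  # Wien's displacement constant, m*K

for T in (500.0, 3000.0, 5800.0):   # temperatures in kelvin
    lam = b / T                      # peak wavelength of blackbody emission
    print(T, lam * 1e9, "nm")
# 500 K  -> ~5800 nm (mid-infrared)
# 3000 K -> ~966 nm (near-infrared, roughly an incandescent filament)
# 5800 K -> ~500 nm (green/yellow, roughly the Sun's surface)
```

So hotter bodies do peak at higher frequencies, but the peak's position is a property of the emitter's temperature, not of any intrinsic "temperature" of a given frequency of light.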
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why am I able to see objects within 25 cm? My book defines:
The closest distance for which the lens can focus light on the retina is called the least distance of distinct vision or the near point. The standard value (for normal vision) taken here is $25\, \text{cm}$ (the near point is given the symbol $D$.)
However, in normal everyday life, I've always observed that I can still see objects clearly and distinctly for distances even at around $10\,\text{cm}$, which is much less than the value $D=25\, \text{cm}$. Yes, it does strain my eye to be looking at objects so close at $10 \,\text{cm}$ but I still can see them anyway, distinctly and clearly.
The Wikipedia article on LDDV is a stub. I couldn't find any other useful information elsewhere.
Can anyone please resolve this dispute I've arrived at. Thanks!
| The least distance of distinct vision is the minimum distance at which your eye lens can focus on an object without any strain. This means the eye is in a relaxed state. But the eye is a self-adjusting lens. When you try to see an object closer than 25 cm (for a normal eye), your eye automatically adjusts, decreasing the focal length. This is why your eye gets strained.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 3,
"answer_id": 1
} |
Mercury's precession I read in an article about Mercury's precession that Newton's law of gravitation predicts such precession of planets, but fails to calculate the precession of Mercury. But most popular science books and other articles on the internet suggest that Newton predicts identical ellipses, whereas the real orbital shape is like a rose petal. To conclude: did Newton also think that the orbits should be like rose petals, because of the perihelion advancement due to precession? Right?
Edit 1
the article that i read
| If two bodies orbit each other and the mass of one of them is much larger than the mass of the other body, the equations of motion can be solved exactly. In polar coordinates the solution is given by:
$$r(\theta)=\frac{l}{1+e\cos{\theta}}$$
This is the equation of an ellipse. As the OP correctly mentions, the actual orbit of Mercury looks more like "rose petals". This is because the actual period of the orbit is slightly less than the standard $2\pi$, so after every orbit the ellipse starts again slightly earlier, resulting in these "rose petals". This effect is known as the "perihelion precession".
What causes this effect? The main contribution is purely Newtonian, and is due to the attraction of the other planets. To first-order perturbation theory one can compute the correction to the orbit, yielding the precession of the perihelion. So it is correct to say Newtonian gravitation predicts "rose petals", at least qualitatively. In quantitative terms, the prediction for how much the perihelion should shift every year is off by a small however noticeable amount. This can only be explained by corrections due to General Relativity.
For the Newtonian contribution to Mercury's perihelion precession, a very simple method to compute it is presented in the paper Nonrelativistic contribution to Mercury’s perihelion precession by Price & Rush.
However it should be mentioned that Newton never did the calculations himself, rather these consequences of Newton's laws were explored in the 19th century.
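The "rose petal" picture can be illustrated with a toy orbit whose angular period differs slightly from $2\pi$; the eccentricity and period factor below are made-up illustrative numbers, not Mercury's:

```python
import math

# Toy "rose petal" orbit: r(theta) = l / (1 + e*cos(k*theta)) with k slightly
# below 1. Perihelia occur where k*theta is a multiple of 2*pi, so each
# successive perihelion is shifted forward by 2*pi*(1/k - 1).
e, k = 0.2, 0.995          # assumed eccentricity and period factor

perihelia = [2 * math.pi * n / k for n in range(4)]   # angles of minimum r
shift_per_orbit = perihelia[1] - perihelia[0] - 2 * math.pi

print(math.degrees(shift_per_orbit))   # a bit under 2 degrees per orbit here
```

For the real Mercury the Newtonian (planetary-perturbation) part of the shift is about 532 arcseconds per century, and General Relativity supplies the remaining ~43 arcseconds per century.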
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/372807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If an astronaut orbits earth, can both Time Dilation and Gravitational Time Dilation affect it? I am very new at all of this stuff and this one thing bugs me very much...
If an astronaut is orbiting Earth, he should be experiencing time a bit faster than those on Earth, correct?
Well then, because of the speed he is traveling, he will also experience time a little slower than those on Earth.
Correct me if I'm wrong, but those should pretty much cancel out; that sounds completely wrong to me, though.
If you could answer this question and fix any of my poor logic/ reasoning, that would be awesome :)
| Yes on both counts. There is time dilation due to the orbital velocity (special relativity), and there is also time dilation due to the Earth's gravity field (general relativity). The signs of each of these are opposite, but the magnitudes can be different, leaving a net effect. The example I am most familiar with is with GPS (or other GNSS) satellites... here's a good post on that (Why does GPS depend on relativity?).
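The GPS case linked above can be estimated with a short back-of-envelope calculation. The sketch below uses standard approximate values for the constants and orbital radius, and compares against a clock at the Earth's surface, ignoring Earth's rotation:

```python
import math

# Back-of-envelope GPS clock rates: special relativity slows the satellite
# clock, general relativity speeds it up; the GR term wins.
c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's GM, m^3/s^2
R_earth = 6.371e6    # Earth radius, m
r_sat = 2.6571e7     # GPS orbital radius (~20,200 km altitude), m

v = math.sqrt(GM / r_sat)                       # circular orbital speed
sr_per_day = (v**2 / (2 * c**2)) * 86400        # clock runs slow: ~7 us/day
gr_per_day = (GM / c**2) * (1 / R_earth - 1 / r_sat) * 86400  # runs fast: ~46 us/day

net = gr_per_day - sr_per_day                   # net gain: ~38 us/day
print(sr_per_day * 1e6, gr_per_day * 1e6, net * 1e6)  # microseconds per day
```

The two effects have opposite signs but different magnitudes, so they do not cancel; the satellite clock gains tens of microseconds per day, which GPS must correct for.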
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does a room not warm up faster when I put the heater's thermostat on a higher value? I would say it should warm up faster because the difference in temperature between the room and heater is higher.
Edit: I am talking about a convection heater.
| The thermostat in a heater is usually an on-off device. It senses the room temperature and runs the heater at full power as long as the room is colder than the target temperature. If the room is hotter than the target temperature, the thermostat turns the heater off.
(In a narrow temperature interval around the target temperature the thermostat will usually stay in the state it had the last time the temperature was outside the target interval, such that it won't incessantly turn on and off based on fractions of degrees of difference).
Your description sounds like you're expecting the thermostat to be based on the temperature of the heating element inside the heater, but manufacturers do their best to avoid that and instead let it sense the actual air temperature in the room, since that is what you as the user actually have a preference for.
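The on-off behaviour described above can be sketched as a bang-bang controller with hysteresis; all numbers below are assumed, illustrative values:

```python
# Bang-bang thermostat sketch: the heater runs at FULL power whenever the room
# is below (setpoint - band), and off above (setpoint + band). Raising the
# setpoint doesn't raise the heating power, only how long the heater stays on.
setpoint, band = 21.0, 0.25    # target temperature and hysteresis half-width, deg C
P = 2000.0                     # heater power, W (assumed)
C = 1.0e5                      # room heat capacity, J/K (assumed)
k = 50.0                       # heat-loss coefficient to outside, W/K (assumed)
T_out = 5.0                    # outside temperature, deg C (assumed)

T, on = 10.0, True
dt = 1.0                       # time step, s
for _ in range(100_000):
    if T < setpoint - band:
        on = True
    elif T > setpoint + band:
        on = False
    heat_in = P if on else 0.0
    T += dt * (heat_in - k * (T - T_out)) / C   # simple lumped thermal model

print(T)   # settles into the hysteresis band around the setpoint
```

During warm-up the heater is already at full power regardless of the setpoint, which is exactly why a higher setpoint does not warm the room faster.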
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 4,
"answer_id": 3
} |
Quantum State Representation with Commuting Operators Let $[A,B]=0$. Then, we can find a set of eigenvectors $\{|a_n,b_n\rangle\}$ common to both $A$ and $B$. According to this, and my own understanding, it makes sense to write an arbitrary quantum state as
$$\tag{1}|\Psi\rangle=\sum_n \sum_i c_n^i |a_n,b_n,i\rangle,$$
where the sum over $n$ goes over all the eigenvectors, and the sum over $i$ allows for degeneracy to exist.
To me, it seems like we're saying $|a_n,b_n\rangle$ is a single eigenvector common to $A$ and $B$, that could have very well be written as $|w_n\rangle$. This also makes sense.
Yet, Cohen's quantum mechanics text writes
$$\tag{2}|\Psi\rangle=\sum_n \sum_p \sum_i c_{n,p,i}\ |a_n,b_p,i\rangle.$$ This has greatly confused me as it seems like we are dealing with two different sets of eigenvectors, one for $A$ and one for $B$. This representation (at least to me) says for each $n$, we are going over all $p$ eigenvectors and account for their degeneracy. Whereas the representation in Eq. (1) says to simply go over the eigenvectors $|a_n,b_n\rangle$ and account for their degeneracy.
Any help in trying to understand where I'm going wrong is appreciated.
| If you think of $a_n$ and $b_j$ as eigenvalues, it is quite possible for the eigenvalue $a_n$ to occur more than once, but there is no reason for all eigenstates of $\hat A$ with eigenvalue $a_n$ to have the same eigenvalue of $\hat B$.
An easy example would be eigenstates of the hydrogen atom. The eigenstates $\vert \psi_{n,\ell,m}\rangle$ all have the same energy $E_n$ for fixed $n$, but there are (usually) several states with energy $E_n$ having different eigenvalues of $\hat L^2$ and $\hat L_z$. Hence, an expansion in terms of these eigenstates would contain a different coefficient for each $(n,\ell,m)$ triple of quantum numbers, i.e.
$$
\Psi=\sum_{n\ell m}c_{n\ell m}\vert\psi_{n\ell m}\rangle
$$
One can even imagine situation where some triples occur more than once, in which case you’d need the extra label $i$ to distinguishing between those states.
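The degeneracy counting behind this hydrogen example can be sketched by enumerating the $(n,\ell,m)$ labels; this is the standard $n^2$ count for fixed $n$, with spin ignored:

```python
# Count the (n, l, m) labels sharing the energy E_n in hydrogen (no spin):
# l runs over 0..n-1 and m over -l..l, giving n^2 states per energy level.
def states_with_energy(n):
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in (1, 2, 3):
    print(n, len(states_with_energy(n)))   # 1, 4, 9
```

Each distinct triple gets its own expansion coefficient $c_{n\ell m}$, which is exactly why the sum in the expansion runs over all three labels rather than a single index.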
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Will current flow if there's no return path? Here is the problem I was trying to solve:
Find the potential difference between the points A and D
I used Kirchhoff's voltage law for the left loop and the right loop and found the current through the left loop to be $\frac{10}{2+3}$ A (2 A) and through the right loop $\frac{20}{4+6}$ A (2 A), both flowing clockwise. But this does not take into account the current between B and C (the connecting wire). My book says current will not flow through BC, and they proceeded to find the potential difference by adding/subtracting the potential drops along the way while taking the current through that wire as 0.
One explanation was that it's because there's no return path for the current. But even during Earthing, there's no return path, yet charges flow for a short while.
My question:
Why does current not flow through the BC path? If there exists a potential difference of 4 V between B and C, charges should flow, right? Shouldn't all the current eventually pass only through the loop at the lower potential?
Edit: What about a case like this?
Will current flow now?
|
If there exists a potential difference between B and C of 4V, charges
should flow, right?
No, if there were a current through the 1 ohm resistor, the voltage $V_{CB}$ could not be $4\,\mathrm{V}$. This result is an elementary application of KVL and Ohm's law:
$$V_{CB} = I_{CB}\cdot 1\Omega + 4\,\mathrm{V}$$
See that only in the case that $I_{CB} = 0$ is $V_{CB} = 4\,\mathrm{V}$
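This can also be checked numerically with nodal analysis. The sketch below assumes a reconstruction of the circuit from the numbers quoted in the question (left loop: 10 V source with 2 Ω and 3 Ω; right loop: 20 V source with 4 Ω and 6 Ω; a single 1 Ω wire joining B and C). The original figure is not reproduced here, so treat the topology and values as assumptions:

```python
import numpy as np

# Hypothetical reconstruction of the circuit from the question's numbers.
# Nodes: 0 = A, 1 = B, 2 = C, 3 = D (taken as ground).
branches = [
    # (node_a, node_b, resistance, series EMF driving current a -> b)
    (0, 1, 2.0, 10.0),   # A -> B through the 10 V source and 2 ohm
    (1, 0, 3.0, 0.0),    # B -> A through 3 ohm
    (2, 3, 4.0, 20.0),   # C -> D through the 20 V source and 4 ohm
    (3, 2, 6.0, 0.0),    # D -> C through 6 ohm
    (1, 2, 1.0, 0.0),    # the single bridge wire B -> C (1 ohm)
]

n = 4
G = np.zeros((n, n))
I = np.zeros(n)
for a, b, R, emf in branches:        # stamp Norton equivalents into G v = I
    g = 1.0 / R
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g
    I[a] -= emf * g; I[b] += emf * g

G[3, :] = 0.0; G[3, 3] = 1.0; I[3] = 0.0   # fix the ground node D
v = np.linalg.solve(G, I)

i_bridge = (v[1] - v[2]) / 1.0             # current through the B-C wire
i_left = (v[0] - v[1] + 10.0) / 2.0        # current in the left loop
print(round(abs(i_bridge), 9))             # 0.0: no return path, no current
print(round(i_left, 6))                    # 2.0: the loop current is unaffected
```

With only one wire joining the loops, KCL applied to the whole left loop as a supernode forces the bridge current to zero, whatever the resistor values happen to be.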
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Bremsstrahlung radiation Why is Bremsstrahlung radiation ignored in the case of heavy ions (such as $\alpha$ particles) but not for $\beta$ particles when calculating the rate of energy loss of the heavy ions moving in some medium (Bethe formula)?
| The power radiated from charged particles is proportional to the square of their acceleration. If equally charged particles are subject to the same accelerating electromagnetic forces then particles with greater mass (i.e. the ions) will experience a much smaller acceleration and therefore emit much, much less power.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Renormalization of a Feynman diagram with zero bare mass Consider the Feynman diagram below:
in the case of $\phi^4$ theory where there is no bare mass:
$$\mathcal{L}=\frac{1}{2} \partial_\mu \phi\partial^\mu\phi-\frac{\lambda}{4!} \phi^4$$
the contribution of this diagram is given by:
$$I=\frac{-i\lambda}{2} \int \frac{d^d k}{(2\pi)^d}\frac{1}{k^2+i\varepsilon}$$
typically such integrals are done using Feynman and Schwinger Parameterizations. In this case, however Schwinger parameterization won't work as you will have a completely imaginary exponential. It also appears to be the case that
$$\lim_{m\rightarrow 0}\frac{-i\lambda}{2} \int \frac{d^d k}{(2\pi)^d}\frac{1}{k^2-m^2+i\varepsilon}$$
which can be done using Schwinger parameterization does not converge onto the same value as $I$. Thus my question is: what is the most common way to deal with integrals of the form $I$ in QFT, ideally using minimal subtraction?
| In dimensional regularization, loop integrals which do not depend on physical external momenta are automatically regularized to be zero. This is so because, in absence of a mass scale, the integral is both UV and IR divergent and the divergences are regulated to zero.
Then, in dimensional regularization, you can regulate any integral of this sort
$$
I_a =\int d^d k\, k^a = 0
$$
for any value of $a$. You can see this result is really obvious since the mass dimension of $I_a$ is $[I_a] = d+a$ but since you don't have any mass scale in the problem, what dimensionful result different from zero would you expect?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Reading the y value of the Branching fraction diagram for Higgs boson decay modes I am simply confused about the way the y-axis is scaled in the figure below. So for example, if I were to read off the branching fraction value for Higgs decaying to ZZ, how would I precisely read the value?
Photo Courtesy: University of Edinburgh Particle Physics Lecture Note
| The "branching fraction" is the fraction of decays that occur in a particular channel; all of the branching fractions for all of the decay channels must add up to $100\%=1$. This is a graph predicting how a Standard Model Higgs would decay as a function of its mass, which was much more interesting before we managed to measure the mass of the Higgs at $\rm125\,GeV$.
So this graph tells you that a very light, $\rm80\,GeV$ Higgs would decay about 80% of the time into $b\bar b$, about 9% of the time into $\tau\tau$, and so on. There's a mass region around $\rm170\,GeV$ where the Higgs would decay into something like 98% $WW$ and 2% $ZZ$. Reality, around $M_H = \rm125\,GeV$, seems to be where the $ZZ$ and $c\bar c$ channels are equally likely at about 2.5%.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to maximize Peltier devices' cooling capacity? I am currently working on a project requiring the use of Peltier devices. I have attached the cool side of the device to a copper plate and the hot side to a heat sink with a fan.
*
*What would be the best way to isolate the copper plate from the hot side?
*Should I leave it as is, with just air in between them?
*Or will it be better to put styro to insulate the plate that is cooler?
| Inside the device, the cool side is already in physical contact with the hot side: they form a junction through which electricity is flowing, and it is that flow of electricity which makes one side of the junction get hot and the other side cold. The very best you can do to maximize the usefulness of a Peltier thermojunction is to put fins and fans on both sides of the junction so as to get as much of the heat out of the hot side and cold out of the cold side (so to speak). Indeed, the cold side needs to be well-insulated too so the cooling effect is not lost to the environment. Styrofoam is good for this, as you point out.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/373971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Higgs Boson abundance in the universe What's the Higgs boson abundance in the universe, in $\%$? In total? Does it even make sense? I cannot find any estimate on the internet.
| The Higgs boson is one of the components along with the Goldstone bosons in two doublets of scalar fields. The three Goldstone bosons couple to the $Z,~W^\pm$ to introduce a longitudinal component to their dynamics, which corresponds to mass. The remaining particle couples to nothing. It has a mass of $125\,\mathrm{GeV}$ and requires considerably more energy to be produced in sufficient quantity. In addition the Higgs particle has a lifetime of about $10^{-22}\,\mathrm s$. The gamma factor for the colliding protons is about $\gamma=10^4$ and even if there is a gamma factor that dilates the life of the Higgs particle it is only to maybe about $10^{-19}\,\mathrm s$.
The universe is very cold, about $3\,\mathrm K$ and this means the average energy of systems is only about $10^{-5}\,\mathrm{eV}$. Compare that to the energy needed to produce the Higgs particle. It would require a remarkable place to produce Higgs particles, and in some sense we might say the LHC is one of them. Since the end of the first $3$ seconds into the big bang there have been few energy systems that can produce Higgs particles. Also the particle is highly unstable. I think these are enough to make the case there are very few Higgs particles in the universe. At least they do not constitute a large percentage of the universe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding a path of beam in a gradient-index media I'm trying to find a polynomial that describes a path of a beam in a gradient index media.
It is a path of least time (Fermat's principle) meaning that it takes path that takes the least time to get from A to B, not the shortest one.
The velocity of a beam is non-linear. It's given by $v=c/n$, where $c$ is speed of light in vacuum and $n$ is index of refraction, which is a function of $y$.
Polynomial should be a function of A.height, B.height, width of the media and $n(y)$.
I need to solve this problem in Wolfram Mathematica, but I will value any tips.
| Almost exactly this problem is explained in detail in these notes. I refer you to those notes for the derivation. They conclude that it's easier if $y$ is the independent variable, in which case you have to minimize the integral
$$\int n\cdot d\ell = \int f(y)\sqrt{1+x'^2} dy\\
\rightarrow\\
\frac{d}{dy}\frac{\partial}{\partial x'}\left[f(y)\sqrt{1+x'^2}\right]=0\\
f(y)\frac{x'}{\sqrt{1+x'^2}}=C$$
Rather than a polynomial, it turns out that for the linear changing refractive index, $f(y)=n_0(1+\alpha y)$, there is a closed form solution of the path of the light:
$$y = -\frac{1}{\alpha}+\frac{C}{\alpha n_0}\cosh \left(\frac{\alpha n_0}{C}(x-x_0)\right)$$
Now you just need to solve for the values $x_0$ and $C$ so the curve goes through $A$ and $B$. I will leave that as an exercise.
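As a quick numerical spot check of the closed form (a sketch using arbitrary sample values for $n_0$, $\alpha$, $C$, $x_0$, not values from the question): differentiate the cosh path and confirm that the conserved quantity $f(y)\,x'/\sqrt{1+x'^2}$ really stays equal to $C$ along the curve.

```python
import math

# Arbitrary sample values (assumptions for the spot check only).
n0, alpha, C, x0 = 1.5, 0.2, 1.2, 0.0

def y(x):
    """Closed-form ray path for the linear profile n(y) = n0*(1 + alpha*y)."""
    return -1.0 / alpha + (C / (alpha * n0)) * math.cosh(alpha * n0 / C * (x - x0))

def invariant(x, h=1e-6):
    """f(y) * x' / sqrt(1 + x'^2), with x' = dx/dy estimated numerically."""
    dydx = (y(x + h) - y(x - h)) / (2.0 * h)   # central difference
    xp = 1.0 / dydx                            # x' = dx/dy, since y is independent
    f = n0 * (1.0 + alpha * y(x))
    return f * xp / math.sqrt(1.0 + xp * xp)

for x in (0.5, 1.0, 2.0):
    print(round(invariant(x), 6))   # 1.2 at every point: the invariant is conserved
```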
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/374750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does an electromagnetic wave move? Somewhere I found the explanation that the EM fields create and destroy each other during the oscillation (I suppose by Faraday's law) and this makes the wave "move".
I can't picture this, because the unit vectors in the E, B and k directions form a right-handed set, and E and B are in phase.
So, this is my question: what is the reason the wave moves?
| Short answer
You've got half of the answer right when mentioning Faraday's Law of Induction, which tells us how electric fields can be generated from time-varying magnetic fields. The other half of the answer involves Ampère's Law with Maxwell's correction, which tells us how magnetic fields can be generated either from an electric current or time-varying electric fields. This coupled interaction between the electric and magnetic fields allows the EM waves to propagate.
Long answer
The previously stated claim is typically proven mathematically in most standard textbooks written on classical electromagnetism, and so such derivations can readily be found in the appropriate literature. I will give one such derivation here, adapted from the Wikipedia article that you can and should read in order to learn more about EM radiation in fuller detail:
Consider Maxwell's Equations in the microscopic form (in SI units),
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0} \tag{1}$$
$$\nabla \cdot \mathbf{B} = 0 \tag{2}$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \tag{3}$$
$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \tag{4}$$
with (3) and (4) being, respectively, the Maxwell-Faraday Equation and the Maxwell-Ampère equation.
In free space (i.e. in a location that contains no electrical charges or currents), the equations take the form,
$$\nabla \cdot \mathbf{E} = 0 \tag{5}$$
$$\nabla \cdot \mathbf{B} = 0 \tag{6}$$
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \tag{7}$$
$$\nabla \times \mathbf{B} = \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \tag{8}$$
Taking the curl of (7), we end up with,
$$\begin{align} \nabla \times \left(\nabla \times \mathbf{E}\right) & = \nabla \times \left(-\frac{\partial \mathbf{B}}{\partial t}\right) \\
& = - \frac{\partial}{\partial t} \left(\nabla \times \mathbf{B}\right) \\
& = - \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} \end{align} \tag{9}$$
where we have made a substitution in the last step using (8).
From vector calculus, for a vector field $\mathbf{F}$, $\nabla \times \left(\nabla \times \mathbf{F}\right) = \nabla \left(\nabla \cdot \mathbf{F} \right) - \nabla^2 \mathbf{F}$, where $\nabla^2$ is the vector Laplacian operator, is an identity. Using this identity to rewrite (9),
$$\nabla \left(\nabla \cdot \mathbf{E} \right) - \nabla^2 \mathbf{E} = - \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} \tag{10}$$
But since we're in free space, by equation (5) the first term on the left-hand side of (10) vanishes, and after redefining $\mu_0 \epsilon_0 = 1/c^2$ we're left with,
$$\nabla^2 \mathbf{E} - \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} = 0 \tag{11}$$
which is clearly the wave equation with the electric field $\mathbf{E}$ as the dependent function.
Taking the curl of (8) and performing an almost identical procedure, the wave equation for the magnetic field $\mathbf{B}$ can also be derived,
$$\nabla^2 \mathbf{B} - \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} = 0 \tag{12}$$
The fact that Maxwell's Equations in free space directly lead to these wave equations shows that the coupling between electric and magnetic fields described by Faraday and Ampère directly allows for the forward propagation of electromagnetic waves.
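The coupled curl equations can even be watched propagating a wave numerically. Below is a minimal 1D finite-difference sketch (a standard Yee-style leapfrog in units where $c = \Delta x = 1$; the grid size and pulse shape are illustrative choices, not from the text): the $E$ update feeds on $H$ and vice versa, and a pulse launched with matched $E$ and $H$ travels across the grid at speed $c$.

```python
import numpy as np

# Minimal 1D FDTD sketch (leapfrog, c = dx = 1; all values illustrative).
N, steps, courant = 400, 200, 1.0
x = np.arange(N)
Ez = np.exp(-0.5 * ((x - 300) / 10.0) ** 2)   # Gaussian pulse in E
Hy = Ez.copy()                                 # matched H -> a single travelling pulse

for _ in range(steps):
    # Faraday/Ampere coupling: each field is advanced by the curl of the other.
    Hy[:-1] += courant * (Ez[1:] - Ez[:-1])
    Ez[1:]  += courant * (Hy[1:] - Hy[:-1])

peak = int(np.argmax(np.abs(Ez)))
print(peak)   # ~100: the pulse moved 200 cells in 200 steps, i.e. at speed c
```

At the Courant limit used here the 1D scheme propagates the pulse essentially one cell per step, which makes the "speed of light" easy to read off.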
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why are some materials dull rather than shiny (cloth, coal, matte paint etc.)? The converse of this question is perhaps "why are metals shiny". From what I understand, metals are covered by a sea of free electrons that oscillate in response to incident light/EM wave, and the oscillation is in turn associated with another EM wave travelling in the opposite direction from the first one. Hence the reflected light back to our eyes. (correct me if I am wrong. Also I am not too clear on the precise mechanisms of EM waves. For example, how does the electric field in incoming wave cause the electrons to move if there are no charged particles in the wave? This may be a self-explanatory question in that electric fields by definition affect charges, but how? I know this is a really basic concept, but I've never been able to get it. How is the oscillation of electrons associated with the new wave, what determines its direction? Does this oscillation use up some sort of energy, if so, where from?)
So with non-metals, in which electrons are fixed in place, is the dull appearance because of the lesser extent to which the electrons can oscillate? How does this relate to the intensity of the reflected light (maybe because of the smaller amplitude of the reflected wave?) Or, does the reflection occur via a completely different mechanism?
The other aspect is absorption. Then the question becomes, what is the difference between metals and non-metals that make the rate of absorption different, if the dullness is due to the greater absorption of light by materials such as cloth etc. ? What is meant by absorb more light anyway, which quantity does 'more' correspond to apart from the bright/dim perceived by our eyes?
There are so many questions here but any help is greatly appreciated!
| The most important aspect for the difference between matt & shiny surfaces is the roughness of the surface; if it is rough it scatters light in many directions looking rough, whereas if it is smooth light is reflected in the same manner and so appearing shiny.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/375416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Nuclear Fusion: Why is spherical magnetic confinement not used instead of tokamaks in nuclear fusion? In nuclear fusion, the goal is to create and sustain (usually with magnetic fields) a high-temperature and high-pressure environment enough to output more energy than put in.
Tokamaks (donut shape) have been the topology of choice for many years. However, it is very difficult to keep the plasma confined within the walls because of its high surface area (especially in the inner rings).
Why hasn't anyone used spherical magnetic confinement instead (to mimic a star's topology due to gravity)? - Apart from General Fusion
E.g. injecting Hydrogen into a magnetically confined spherical space and letting out the fused energy once a critical stage has been reached?
| Because that's just one of the many alternative approaches to fusion, and resources are limited.
Nuclear fusion seems very promising, but for all approaches tried so far the challenges have proved to be more numerous and difficult to overcome than initially expected $-$ and the magnetized target fusion that General Fusion has been pursuing is likely no different. One needs to deal with extreme magnetic fields and temperatures, which is hard enough, and at the same time keep a rather fine control in order to maintain stability. That means that too much technology still has to be demonstrated.
Tokamaks and even Stellarators are more mature, developed technologies, so it's natural that efforts concentrate on those designs. That's very expensive research, a long-term investment with returns that are very uncertain $-$ so the alternative approaches have to compete for funding. And once a team has a head start on a given design, there's little incentive for others to follow before it's at least partially proved. That's why these new proposals are mostly being tried each by a single research group.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 3,
"answer_id": 1
} |
Why is then the mobility of holes lesser than the mobility of electrons?
In a semiconductor, mobility of holes is less than the mobility of the
electrons.
However, we know that, when an electron leaves its place, a hole is created. In other words, electron motion constitutes hole current. Since moving electrons constitute hole current, why then is the mobility of holes less than the mobility of electrons?
| The conduction electrons reside in the conduction band and the missing electrons (holes) reside in the valence band of the semiconductor. The conduction band electron effective mass is usually smaller than the valence band hole effective mass.This is one of the reasons that in a semiconductor the electron mobility is usually larger than the hole mobility.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many spectral lines are emitted by atomic hydrogen excited to the n-th energy level? This is a question from Irodov's Problems in General Physics.
The Rydberg formula for calculating the wave number of a spectral line is:
$$
n = R(1/n_f^2-1/n_i^2)\,.
$$
where $n_f$ and $n_i$ represent the final and initial states of the electron respectively, and $R$ is the Rydberg constant.
So for a given value of $n_f$ and a given value of $n_i$, you get one particular spectral line. Change $n_i$ or $n_f$ (or both) and you get a different spectral line.
In the above problem, $n_f$ and $n_i$ can have any integral value from 1 to $n$. The number of spectral lines is simply the number of ways in which you can "choose" two distinct integers from 1 to $n$. That would be $^nC_2$.
However, what if two spectral lines have exactly the same wave number? In other words, what if
$$
1/n_1^2-1/n_2^2=1/n_3^2-1/n_4^2\,
$$
where $n_1$, $n_2$, $n_3$, $n_4$ are distinct integers from 1 to $n$? In this situation, the two spectral lines would "overlap", and it makes no sense to count them twice when they are actually just the same line. So, shouldn't the actual answer be less than $^nC_2$?
| First the formula is wrong. It should be:
$$\tilde \nu = \dfrac{1}{\lambda} = R \left(\dfrac{1}{n_f^2}-\dfrac{1}{n_i^2}\right)$$
where $\tilde \nu$ is the wavenumber of the line, R is the Rydberg constant. Both $n_f$ and $n_i$ are positive integers such that $n_f < n_i$.
The OP asked: However, what if two spectral lines have exactly the same wave number?
If two spectral lines overlap, then they overlap. There are still two lines since the transitions are between different states. In real spectrometers the overlap would be mostly because of the resolving power of the spectrometer rather than two lines having exactly the same energy.
I had doubted that two different lines could have exactly the same energy, but I was wrong. I asked the question on the Math forum, and user Hw Chu analyzed the number theory. It turns out that among other solutions:
$$\dfrac{1}{5^2}-\dfrac{1}{7^2} = \dfrac{1}{7^2}-\dfrac{1}{35^2}$$
Hw Chu also named the sets of integers that satisfy the conditions as "Rydberg quadruples" which seems like a very nice name.
Now I'll speculate again and guess that there is no nice way to calculate the number of different energies of spectral lines given some upper limit of $n_i$ since there is trial and error in finding the Rydberg quadruples.
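The quoted coincidence can be verified with exact rational arithmetic, and further "Rydberg quadruples" found by brute force. This is a sketch; the search bound `nmax` is an arbitrary choice:

```python
from fractions import Fraction

def wavenumber(nf, ni):
    """Line wavenumber in units of the Rydberg constant (exact rational)."""
    return Fraction(1, nf**2) - Fraction(1, ni**2)

# The example from the answer, checked exactly: both equal 24/1225.
assert wavenumber(5, 7) == wavenumber(7, 35) == Fraction(24, 1225)

# Brute-force search for coincident lines ("Rydberg quadruples").
nmax = 40
seen = {}
for nf in range(1, nmax):
    for ni in range(nf + 1, nmax + 1):
        seen.setdefault(wavenumber(nf, ni), []).append((nf, ni))

for value, pairs in seen.items():
    if len(pairs) > 1:
        print(value, pairs)
```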
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why do fundamental particles have a specific size? If Quantum Field Theory is accurate, all particles are actually just excitations of the field in which the particle interacts.
Therefore, wouldn't it be possible to have particles of any conceivable size, provided the energy? Couldn't you have a photon the size of a building, or one unimaginably smaller than the accepted size of a photon?
Am I missing something, or is this one of those unanswered questions that linger in physics?
If there are hard limits on particle size, why those sizes, what makes them meaningful?
| The standard model is a collection / list of the properties of quantum entities, it does not describe the quantum entities themselves that possess those particular properties.
So the words electron, quark etc, are really just shorthand for a bunch of properties that have been measured. That's all physics, as an empirical discipline, can say about these entities.
There are no hard limits, on both the large and the small scale. It's more a problem of measurement and classification, in order to establish a system of description that is useful in the prediction of events in the classical world.
You could describe the electron, for example, as "larger" than we currently consider it, but if a proton is near it, as in an atom, how do we deal with that? By making it larger?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
In semiconductors when electrons jump from VB to CB, do they leave behind their parent atom's nuclei? In semiconductors/conductors when electrons jump from VB to CB, do they leave behind their parent atom's nuclei?
If yes, when this happens in Si (electron jump from VB to CB) why don't they ionize Si to Si+?
If no, why do they ionize doped As atom when the unbonded free electron from VB goes to CB?
| First, the electrons in the valence or conduction band are not localized to a single nucleus. They move about in the crystal lattice much like atoms in a gas, so we call them an "electron gas".
Second, the conduction and valence bands are bands of states of electrons, not of whole molecules. Finding an electron in one or the other band doesn't tell you anything about the energy state of the nuclei nearby (and the energy states of the nuclei don't much affect semiconductor electrical behavior).
If yes, when this happens in Si (electron jump from VB to CB) why don't they ionize Si to Si +?
Because the electron has only changed energy state, it hasn't moved to a different location far away from where it was before. So any positively charged nuclei nearby are still balanced by the negative charge of the electron.
Note, though, that ionized donor impurities do contribute an electron to the conduction band while leaving a localized positively charged nuclear site behind.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/376854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Inverse Square relationship using paint problem confusion I want to ask a question about the inverse square relationship using an aerosol paint spray mentioned in my book.
I am reading the book Advanced Physics by Steve Adams, and it mentions this in the book.
Imagine you are holding an aerosol paint spray at $50$cm from a wall. By squirting it for one second, you make a circle of radius $10$cm.
Now, I am aware that I can find the area of the circle as follows:
$$A_{10} = \pi r^2 = \pi \times 10^2 = 100 \pi$$
The book next talks about increasing the distance from a wall:
Now imagine you move back from the wall and stand twice as far away - 100cm. You squirt for the same length of time.
Because he is standing twice as far away from the wall, the radius of the patch is doubled to 20cm.
This therefore means that the area of the circle is now:
$$ A_{20} = \pi r^2 = \pi \times20^2 = 400 \pi$$
Now, I have studied the inverse-square relationship ($I \propto \frac{1}{r^2}$) previously in regard to a light source, and I wanted to understand this concept more easily using this paint example.
However, I cannot understand why standing twice as far away from the wall, the radius of the patch is doubled to 20cm.
Can someone explain why this is the case please?
| The spray comes out as a cone with opening angle $\alpha$ such that $$\tan\alpha=\frac{10cm}{50cm}=\frac{1}{5}$$
If you double your distance to the wall, then the opening angle $\alpha$ stays the same and therefore the radius of the paint patch $r$ is such that
$$\frac{1}{5} = \tan\alpha = \frac{r}{100cm}$$
If you solve this for $r$, you will see that it is now doubled, that is, $20cm$.
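Numerically, with the cone half-angle fixed by the first measurement, the same similar-triangles argument takes a few lines:

```python
import math

tan_alpha = 10.0 / 50.0          # cone half-angle fixed by the 50 cm measurement

def patch(distance_cm):
    r = tan_alpha * distance_cm  # radius grows linearly with distance
    return r, math.pi * r ** 2   # (radius, area)

r50, a50 = patch(50)
r100, a100 = patch(100)
print(r100 / r50)                # 2.0: radius doubles
print(a100 / a50)                # 4.0: area quadruples, so paint per cm^2 falls to 1/4
```

The same fixed amount of paint spread over four times the area is the inverse-square law: intensity drops as $1/r^2$.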
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does gravity need a graviton? Einstein theorized that gravity is a phenomena manifested by the curvature of spacetime, in effect it IS the curvature of spacetime. If this is so, why do we need a graviton to convey the force of gravity? If I have mis-understood Einstein then I would appreciate a little help in grasping the relationship between warped space and gravity.
| The short answer is that we need $G_{\mu\nu}$ to be quantised because $T_{\mu\nu}$ is. You can try getting around that by e.g. replacing the stress tensor with its own expectation in the Einstein field equations, but that causes all sorts of headaches people have investigated, such as nonlinear quantum mechanics.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Solving the Lie algebra of generators: path from algebra to matrix representation Given the Lie algebra, what is the systematic way to construct the matrix representation of the generators of the desired dimension? I ask this question here because group representations matter more to physicists than to mathematicians.
Let us, for example, take $SU(2)$ for concreteness. Starting from the generic parametrization of a $3\times 3$ unitary matrix $U$ with $\det U=1$, and using the formula of generators $$J^i=-i\Big(\frac{\partial U}{\partial \theta_i}\Big)_{\{\theta_i=0\}}$$ one can find the $3\times 3$ matrix representation of the generators.
However, I'm looking for something else.
*
*Given the Lie algebra $[J^i, J^j]=i\epsilon^{ijk}J^k$, is there a way that one can explicitly construct (not by guess or trial) the $3\times 3$ representations of $\{J^i\}$?
*Will the same procedure apply to solve other Lie algebras appearing in physics such as that of $SO(3,1)$ (or $SL(2,\mathbb{C})$)?
| If you have the structure constants, i.e. the coefficients $f_{ab}^c$ in the commutation relations $[T_a,T_b]=\sum_c f_{ab}^cT_c$ with $a,b,c\in \{1,\ldots p\}$ then you can construct $p$ matrices $M_a$ (labelled by $a$) of size $p\times p$ with entries $(M_a)_{cb}=f_{ab}^c$. These matrices will be a $p\times p$ representation of the algebra (in fact, the adjoint representation.)
This will work for every algebra. However, in the case of non-compact algebras, the resulting representation is obviously finite dimensional and thus cannot be made hermitian (i.e. it cannot exponentiate to a unitary, finite dimensional representation of the group.)
Be aware that in the case of $so(3,1)$, there is a subtle point that comes in going from the complex back to the real form. Over $\mathbb{C}$, the adjoint of $so(3,1)$ is reducible into $su(2)\oplus su(2)$ but over the reals the adjoint is irreducible. In other words, for non-compact real forms, there are issues with reducibility and unitarity.
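As a concrete check for $su(2)$, where the structure constants of $[J^i,J^j]=i\epsilon^{ijk}J^k$ are $f_{ab}^c=i\epsilon_{abc}$: the adjoint recipe above produces the familiar $3\times 3$ spin-1 matrices $(J_a)_{bc}=-i\epsilon_{abc}$, and a few lines of NumPy confirm that they close the algebra (an illustrative sketch):

```python
import numpy as np

# Levi-Civita symbol eps[a, b, c].
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

# Adjoint recipe applied to su(2): f_ab^c = i*eps_abc, i.e. (J_a)_{bc} = -i*eps_abc.
J = [-1j * eps[a] for a in range(3)]

# Verify the algebra closes: [J_a, J_b] = i eps_abc J_c.
for a in range(3):
    for b in range(3):
        comm = J[a] @ J[b] - J[b] @ J[a]
        rhs = sum(1j * eps[a, b, c] * J[c] for c in range(3))
        assert np.allclose(comm, rhs)
print("adjoint (spin-1) 3x3 matrices satisfy [J_a, J_b] = i eps_abc J_c")
```

Since $su(2)$ is compact, these adjoint matrices also happen to be hermitian; for a non-compact algebra the same construction goes through but, as noted above, hermiticity is lost.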
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Intuitive logic behind this beautiful result of successive time of collisions of two bodies INPhO 2017 Problem 3
Two identical blocks A and B each of mass $M$ are placed on a long inclined plane (angle of inclination = $\theta$) with A higher up than B. The coefficients of friction between the plane and the blocks A and B are respectively $\mu_A$ and $\mu_B$ with $\tan\theta > \mu_B > \mu_A$. The two blocks are initially held fixed at a distance $d$ apart. At $t = 0$, the two blocks are released from rest.
Consider each collision to be elastic. Find the time instants of first, second, and third collision of the blocks.
My query:
If you note carefully, the successive ratio of the times of collision in this rather complex situation is a beautiful $1:3:5:7:...$ This looks suspiciously similar to Galileo's Law of Fall, but that is the ratio for the successive distances of a single body, not for the times of collisions of two bodies.
I have been able to successfully derive this result by long and boring step-by-step calculations. I don't want that as an answer. Instead, I want an intuitive explanation (without too much calculation) for the observed ratio. Thank you!
| The equation giving the position of each individual block is
$$x=x_0 + v_0 t + \tfrac{1}{2} a t^2 $$
with $a$, the acceleration, dependent on the friction. The distance between the blocks is given by $\Delta x=x_A - x_B$. If you write down an expression for $\Delta x$, you will find it has the same form as the equation for $x$, but with different values for the constants. This tells you that as long as the blocks are not touching, the distance between the blocks behaves just as the position of a single block.
Then we need to see what happens when the blocks collide. Because they have the same mass, after collision, block A will assume the velocity block B had and vice versa. The velocity difference $\Delta v = \frac{d\Delta x}{dt}=v_A-v_B$ will invert its sign upon collision. This is also what happens to the velocity of a single block when it bounces from a solid wall.
Therefore the distance between the two blocks behaves exactly as the position of a single block. Just as you already calculated.
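The argument is easy to confirm by brute-force simulation (a sketch with arbitrary sample values for $g$, $\theta$, $\mu_A$, $\mu_B$ and $d$ satisfying $\tan\theta > \mu_B > \mu_A$, not values from the paper): integrate both blocks down the incline, swap velocities at each contact, and the recorded collision times come out in the ratio $1:3:5$.

```python
import math

# Arbitrary sample values (assumptions) with tan(theta) > muB > muA.
g, theta, muA, muB, d = 9.8, math.radians(30), 0.2, 0.4, 1.0
aA = g * (math.sin(theta) - muA * math.cos(theta))   # A accelerates faster
aB = g * (math.sin(theta) - muB * math.cos(theta))

xA, xB = 0.0, d        # positions down the incline; B starts a distance d ahead
vA = vB = 0.0
t, dt, collisions = 0.0, 1e-5, []
while len(collisions) < 3:
    t += dt
    vA += aA * dt; vB += aB * dt                     # semi-implicit Euler step
    xA += vA * dt; xB += vB * dt
    if xA >= xB:                                     # contact detected
        vA, vB = vB, vA                              # elastic, equal masses: swap v
        xA = xB                                      # remove the small overlap
        collisions.append(t)

t1 = collisions[0]
print([round(tc / t1, 2) for tc in collisions])      # [1.0, 3.0, 5.0]
```

Exactly the bouncing-ball pattern: equal relative "flight times" between bounces, so collision instants at $t_1, 3t_1, 5t_1, \dots$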
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Path integral kernel dimensions and normalizing factor I am currently reading Quantum Mechanics and Path Integrals by Feynman and Hibbs. Working on problem 3.1 made me wonder why the 1D free particle kernel: $$ K_0(b,a) = \sqrt{\frac{m}{2\pi i \hbar(t_b - t_a)}} \exp \left(\frac{im(x_b - x_a)^2}{2\hbar (t_b - t_a)} \right)$$
has dimensions $1/\text{length}$.
More generally this kernel has dimensions $1/(\text{length})^d$, where $d$ is the number of dimensions of the system. Why is that?
On the other hand it is stated, that the kernel can be interpreted as a probability amplitude. In my understanding this would imply its dimensions to be $\sqrt{1/(\text{length})^d}$ in position space, because the kernels absolute square can be interpreted as a probability density.
Is there a physical interpretation for the normalizing factor $\sqrt{m/2\pi i \hbar(t_a - t_b)} $ other than "fixing" the equivalence to the Schrödinger equation?
To clarify my question, how can a probability amplitude have dimensions $1/(\text{length})^d$ ? Its absolute square would have dimensions $1/(\text{length})^{2d}$ which are not the proper dimensions for a probability density in my understanding.
| The main point is (as Ref. 1 mentions in Problem 3.1) that the probability distribution (coming from the path integral) is only relative, i.e. its normalization is unphysical over an unbounded position space $\mathbb{R}^d$. See also this and this related Phys.SE posts and links therein.
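On the dimensional point specifically, a short sketch (not from the book): the kernel is an amplitude *density* over positions, not a wavefunction. It propagates wavefunctions,
$$\psi(x_b,t_b) = \int K(b,a)\,\psi(x_a,t_a)\,d^d x_a ,$$
and since $|\psi|^2$ is a probability density, $[\psi] = (\text{length})^{-d/2}$. Matching dimensions on both sides,
$$(\text{length})^{-d/2} = [K]\,(\text{length})^{-d/2}\,(\text{length})^{d} \quad\Rightarrow\quad [K] = (\text{length})^{-d}.$$
The extra $(\text{length})^{-d/2}$ relative to a normalizable amplitude reflects that $K$ is the amplitude for starting from a position eigenstate $|x_a\rangle$, which is delta-normalized rather than unit-normalized; this is why $|K|^2$ yields only relative probabilities.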
References:
*
*R.P. Feynman and A.R. Hibbs, Quantum Mechanics and Path Integrals, 1965, Problem 3.1.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Do the distances and velocities observed from galaxies other than Milky way really show that every point in the universe is the center? Hubble's observation from redshift shows a pattern that the speed of the galaxies is proportional to their distance, by using those informations we can map the position of the galaxies.
But these observations were made just from Earth. So what I am asking is, do the observed distances between other galaxies other than Milky way (ie. distances between galaxyA , galaxyB and galaxyC) show that their velocities also follow Hubble’s pattern?
Are we certain that if we live in another galaxy, we will see other galaxies moving away from us following Hubble’s pattern?
Or is it an assumption made just from observations on Earth that every point in the universe is the center?
So that I can rule out the idea that we see other galaxies moving away from us in that pattern because Milky Way is near to the center of the universe.
| Basically, yes. However, when calculating what would be seen from the vantage point of other galaxies, relativity has to be taken into account. So if the distance from Galaxy A to Galaxy B, as calculated in the Earth's frame of reference, is x times the distance from Galaxy A to Galaxy C, it does not follow that the recession velocity of Galaxy B calculated from Galaxy A is exactly x times the velocity of Galaxy C calculated from Galaxy A, because relativistic velocities do not add linearly. Also, the Hubble constant is constant in space (the ratio of recession velocity to distance is the same wherever we look in the universe), but not actually constant in time. Since we are seeing distant galaxies as they were in the past, the Hubble parameter that observers in those galaxies measured then is not going to be the same as the one we measure now.
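In the non-relativistic limit the "every point is the center" property follows directly from Hubble's law. The sketch below (with made-up positions and an illustrative value of $H$) shows that if every galaxy recedes from Earth with $v = H d$, then an observer on any other galaxy also sees $v = H d$ for every galaxy around them:

```python
# If galaxies follow a pure Hubble flow v = H * d as seen from Earth
# (non-relativistic velocities assumed), then an observer on galaxy A
# also sees every other galaxy receding with v = H * d.
H = 70.0  # km/s/Mpc, illustrative value

# Hypothetical 1-D positions in Mpc, Earth at the origin
positions = {"Earth": 0.0, "A": 100.0, "B": 250.0, "C": -40.0}
velocities = {name: H * d for name, d in positions.items()}

# Change to galaxy A's frame: subtract A's position and velocity
for name in positions:
    d_from_A = positions[name] - positions["A"]
    v_from_A = velocities[name] - velocities["A"]
    assert abs(v_from_A - H * d_from_A) < 1e-9  # Hubble's law holds from A too
print("Hubble's law holds from galaxy A as well")
```

The cancellation is just linearity: $v_B - v_A = H(d_B - d_A)$, so no galaxy is singled out as a center.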
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Temperature of gas leaking into chambers An initially evacuated and thermally isolated chamber has a small hole opened in its side through which an ideal gas effuses from the outside. The gas outside is at standard temperature and pressure.
A second, smaller hole directly opposite the first hole on the opposite side of the first chamber lets gas into a second evacuated and thermally isolated chamber. The initial temperature of the gas entering the second chamber, when the first hole is opened, is the same as the temperature for the gas in the first chamber.
Why is this the case?
Edit: Note that the hole is considered to be smaller than the mean free path. Thus, the gas will not be able to reach equilibrium with the surroundings.
| The temperature of the ideal gas will be the same at all times in all chambers whose walls are maintained at the standard temperature or are thermally isolated. The expansion of an ideal gas into vacuum performs no work, so there is no temperature change.
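A minimal first-law bookkeeping behind that statement (a sketch; a monatomic ideal gas is assumed for the explicit form of $U$):
$$\Delta U = Q - W = 0 - 0 = 0, \qquad U = \tfrac{3}{2}nRT \;\Rightarrow\; \Delta T = 0.$$
Here $Q = 0$ because the chambers are thermally isolated, $W = 0$ because the gas expands against vacuum, and the internal energy of an ideal gas depends only on its temperature.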
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/377924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unitary Transformation of Eigenstates Suppose I have two operators, $A$ and $B$, with eigenstates $A \lvert a \rangle = a \lvert a \rangle$ and $B \lvert b \rangle = b \lvert b \rangle$, where $a$ and $b$ are all unique. Furthermore, suppose that $A$ and $B$ are related by a unitary transformation $$A = U B U^{-1}.$$ This is equivalent to saying that the eigenstates are related as $$\lvert a \rangle = U \lvert b \rangle.$$
Then it seems I can prove the following: since $$A \lvert a \rangle = a \lvert a \rangle,$$ I also have $$A U \lvert b \rangle = U B U^\dagger U \lvert b \rangle$$ by inserting the identity, so that $$A U \lvert b \rangle = U B \lvert b \rangle = b U \lvert b \rangle = b \lvert a \rangle.$$ Thus, $a = b$.
Doesn't this imply then that the eigenvalues for corresponding eigenstates of $A$ and $B$ are equal, and therefore-- by the assumption that they are unique-- that the unitary transformation doesn't actually do anything?
| Well, a similarity transformation by an invertible (not necessarily unitary) operator$^1$ $U$ does generically change the eigenspaces but does not change the eigenvalue spectrum $\{a_1, a_2,\ldots \}=\{b_1,b_2,\ldots\}$. Hence it would be inconsistent to claim that all the eigenvalues $\{a_1,a_2, \ldots, b_1,b_2,\ldots\}$ for both $A$ and $B$ are different, if that is what the OP means by saying they are all unique. Whether the individual spectra are degenerate or not is irrelevant.
--
$^{1}$We will ignore subtleties with unbounded operators, domains, selfadjoint extensions, etc., in this answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Experiment on friction coefficient Here you can see the results of an experiment measuring a friction coefficient:
The mean of the friction coefficient becomes 0.262 but when I do a linear regression in the form of y=mx the slope is 0.31. Shouldn't it be the same? I used $F_N$ as x values and $F_D$ (friction force) as y values.
regression: https://www.desmos.com/calculator/njj4utvsdk
|
The mean of the friction coefficient becomes 0.262 but when I do a linear regression in the form of y=mx the slope is 0.31. Shouldn't it be the same?
No.
Linear regression and arithmetic mean are not the same thing.
Linear regression fits a line to the points you gave it by minimizing the sum of squared residuals (the best fit in the least-squares sense).
Arithmetic mean is just the sum of all the values divided by the total number of values. Statistically they measure different things, so you can't expect the same value from them.
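The difference is easy to see on a small data set. The numbers below are made-up illustrative values (the actual data are only in the linked Desmos plot): the mean of the per-point ratios $F_D/F_N$ weights every point equally, while the least-squares slope of $F_D = \mu F_N$ weights points with large $F_N$ more heavily, so the two estimates generally disagree.

```python
# Mean of per-point ratios F_D/F_N vs. least-squares slope of F_D = mu * F_N.
# These are hypothetical numbers, not the experiment's actual data.
F_N = [2.0, 4.0, 6.0, 8.0, 10.0]   # normal force (N)
F_D = [0.7, 1.1, 1.5, 2.3, 3.1]    # friction force (N)

mean_ratio = sum(d / n for d, n in zip(F_D, F_N)) / len(F_N)

# Least-squares slope for a line through the origin: m = sum(x*y) / sum(x*x)
slope = sum(n * d for n, d in zip(F_N, F_D)) / sum(n * n for n in F_N)

print(mean_ratio, slope)  # the two estimates differ
```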
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Does Fluctuation Theorem prove the 2nd Law of Thermodynamics? Does Fluctuation Theorem or Crooks Fluctuation Theorem prove the 2nd Law of Thermodynamics from statistical point of view?
https://en.wikipedia.org/wiki/Fluctuation_theorem
https://en.wikipedia.org/wiki/Crooks_fluctuation_theorem
| Yes, the fluctuation theorem gives (among other things) the probability that a system will evolve from a state of greater statistical entropy toward one of smaller entropy. The second law of thermodynamics is only probabilistically correct for large but finite systems, and the fluctuation theorem gives the correction to the second law for finite-size systems. As the number of degrees of freedom in a system goes to infinity, the probability of an entropy-reducing fluctuation goes to zero, which verifies the second law for systems of infinite size.
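Concretely, in one standard formulation (the Evans–Searles fluctuation theorem), for the entropy production rate $\bar{\Sigma}_t$ averaged over a time $t$,
$$\frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{At},$$
so entropy-decreasing trajectories are exponentially suppressed relative to entropy-increasing ones, and the suppression becomes total in the limit of large systems or long times.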
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/378398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |