| Q (string) | A (string) | meta (dict) |
|---|---|---|
Physics-based derivation of the formula for entropy I am looking for a derivation of the formula
$$S~=~-\Sigma_ip_i \log (p_i).$$
for entropy, from first principles. I only wish to assume the laws of physics, and without involving concepts in information theory. (After all, the concept of entropy and Boltzmann's formula for it is far older than information theory.)
What is a good definition of entropy? What assumptions are needed to arrive at this? What is the justification to maximizing entropy of a system to arrive at thermodynamics?
| The expression
$$
I = \sum_i -p_i\log p_i
$$
is a function of the probabilities $p_i$ and although it is often called entropy, it is not the thermodynamic entropy of Clausius (the $S$ from thermodynamics defined through $\int dQ/T$). This is not only because of the absence of $k_B$, but also because in order to give $I$ a value, one must put in the probabilities $p_i$.
No probabilities occur in classical thermodynamics, hence it is not possible to derive the above formula from thermodynamic laws.
However, there is a connection between $I$ and the thermodynamic entropy $S$. The connection is this: if a system is in equilibrium with a reservoir so that it has volume $V$ and average energy $U$, a statistical estimate of its thermodynamic entropy, $S^*$ (a function of $U,V$), can be calculated as the maximum possible value of $k_B I$ over all possible values of $p_i$ under the imposed constraints (the volume is fixed to $V$, the average energy is $U$).
This rule has not, as far as I know, been falsified for the macroscopic bodies for which it is meant to be used. Why it is valid is not immediately clear.
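As a minimal numerical sketch of that maximization rule (my own illustration, not part of the original answer; the energy levels and the target average energy below are arbitrary choices), one can maximize $I=-\sum_i p_i\log p_i$ subject to normalization and a fixed average energy and see that the optimizer returns Boltzmann-like weights $p_i\propto e^{-\beta E_i}$:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative energy levels and imposed average energy (arbitrary choices).
E = np.array([0.0, 1.0, 2.0, 3.0])
U = 1.2

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)           # avoid log(0)
    return np.sum(p * np.log(p))          # minimizing this maximizes I = -sum p log p

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # normalization
    {"type": "eq", "fun": lambda p: np.sum(p * E) - U},    # fixed average energy
]
p0 = np.full(len(E), 1.0 / len(E))
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * len(E), constraints=constraints)

print("maximizing p_i:", np.round(res.x, 4))
# For equally spaced levels, a constant ratio p_{i+1}/p_i signals p_i ~ exp(-beta*E_i).
print("ratios p_{i+1}/p_i:", np.round(res.x[1:] / res.x[:-1], 4))
print("maximum I:", -res.fun)
```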
Information theory comes in when we ask: what is the meaning of $I$ for arbitrary values of $p_i$? The answer it gives is: $I$ is a measure of the amount of data needed to exactly specify the microstate of the system, given those probabilities.
With this interpretation of $I$ the connection can be rephrased in this way:
if a system is in equilibrium with a reservoir so that it has volume $V$ and average energy $U$, the measure of uncertainty $I$ about the exact microstate, given the macroscopic constraints $U$, $V$, is the same function of $U,V$ as the thermodynamic entropy divided by $k_B$.
This relation has been verified for rarefied gases and other simple cases, and it is simply assumed to hold universally for any macroscopic system in thermodynamic equilibrium.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/44647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is Pauli-repulsion a "force" that is completely separate from the 4 fundamental forces? You can have two electrons that experience each other's force by the exchange of photons (i.e. the electromagnetic force). Yet if you compress them really strongly, the electromagnetic interaction will no longer be the main force pushing them apart to balance the force that pushes them towards each other. Instead, you get a a repulsive force as a consequence of the Pauli exclusion principle. As I have read so far, this seems like a "force" that is completely separate from the other well known forces like the strong, electroweak and gravitational interaction (even though the graviton hasn't been observed so far).
So my question is: is Pauli-repulsion a phenomenon that has also not yet been explained in terms of any of the three other forces that we know of?
Note: does this apply to degenerate pressure too (which was explained to me as $\Delta p$ increasing because $\Delta x$ became smaller because the particles are confined to a smaller space (Heisenberg u.p.), as is what happens when stars collapse)?
| The Pauli Exclusion Principle isn't a fundamental force because it doesn't have the same origin as the 4 fundamental forces. It's like the pressure you feel from a normal gas in that it definitely exists, but it comes from the fact that you have many particles in the system and are averaging over their behavior. We usually call that an "emergent phenomenon," which is a property we detect at the macroscopic level but looks very different at smaller scales.
One of the other answers described the Pauli Exclusion Principle at the atomic scale for a particle in a well — you can see that it isn't treated like a force in the same way that electromagnetism is. An emergent example is the pressure that keeps a white dwarf from collapsing on itself.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/44712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 6,
"answer_id": 5
} |
How to determine whether a large container is air-tight? In constructing a kitchen-waste digester at home, I use a 50 Litre HDPE drum. The base of the drum is holed with a plug fitted to allow drainage when necessary.
The top has two openings - one for inlet, the other to act as outlet for the generated fluid CO2/CH4.
The drum is to be used filled to about 2/3rds, lying on its side. Therefore any leakage at the drain will be immediately visible. The same applies to the inlet pipe - it too shall be filled with water.
A heated iron nail was used to make the outlet hole. Then a ball-point pen was inserted into it; a very snug fit. I then melted a silicon glue-stick to attempt to seal any leaks at this point.
The problem at hand is to determine whether the outlet is air-tight. It is possible to dunk this container in water and test for bubbles; but the same test will not be possible with a larger container (300 litres) to be used if this experiment is successful.
Suggestions, anybody?
| A 300 liter container might not fit completely in a bathtub, but depending on its overall shape you might submerge different parts of it at different times and then check for bubbles.
Or you can just fill it with water and see whether any leaks out.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/44784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
The requirements for superconductivity Which properties are sufficient evidence for a material to be not superconducting?
I am looking for a set of statements like
If the material is semiconducting, it is not superconducting
Edit:
I am not looking for a definition of superconductivity, or for introductional literature like the famous W. Buckel.
I am looking for properties that would forbid superconductivity. If you have a source for it I would be very glad. As far as I remember, magnetic atoms forbid superconductivity too, but I could not find a source yet.
| This question has a semi-canonical answer; Matthias' rules for superconductivity. This was a real set of empirical criteria proposed well before the cuprates were discovered, but here is the tongue-in-cheek version (I'm not sure who to attribute this presentation to, however -- comments appreciated).
* Symmetric lattices (i.e. cubic),
* Avoid oxygen,
* Avoid magnetism,
* Avoid insulators,
* Avoid theorists ;)
Obviously the cuprates are a knock against all of those, except the bit about theorists. But this should serve as a warning. There are some aspects of superconductivity that are very well understood, but trying to predict its presence or absence in a given material is not a productive activity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/44862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Does matter with negative mass exist? Or does it exist mathematically?
Is it really inconsistent with a common-sense, mathematics or known physical laws?
As far as I understand, if it exists, it must be far away from the "positive" matter because of repelling force, so it explains why there is no observations of such matter.
It would be also very consistent with the idea that the total energy of the Universe is zero. Doesn't it imply that the total mass of the Universe is zero?
Can you please clarify the statements below that seem to contradict each other?
1) "all structures that exist mathematically exist also physically" -Max Tegmark.
2) Despite being completely inconsistent with a common-sense approach and the expected behavior of "normal" matter, negative mass is completely mathematically consistent and introduces no violation of conservation of momentum or energy.
3) Such matter would violate one or more energy conditions and show some strange properties, stemming from the ambiguity as to whether attraction should refer to force or the oppositely oriented acceleration for negative mass.
| So far as we know, $m \geq 0$; all observations are compatible with this.
Mathematically you can imagine negative masses, imaginary masses or many other masses, but this does not mean they are real.
No, negative mass is not consistent with the idea that the total energy of the Universe is zero. This zero of energy arises because gravitational energy is negative and, in the free-lunch cosmology, compensates the positive energy associated with matter, giving a zero total energy. Zero energy does not imply zero mass or vice versa; for instance, photons have zero mass but positive energy.
The quote from Max Tegmark looks particularly misguided to me. In his lectures on physics Feynman explains the difference between physics and maths.
How do you know that a negative mass "introduces no violation of conservation of momentum or energy"?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/44934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 3
} |
Velocity vs Time Bounce Could someone please explain the trajectory of the ball that is bouncing in this picture...
The vertical component of the velocity of a bouncing ball is shown in the graph below. The positive Y direction is vertically up. The ball deforms slightly when it is in contact with the ground.
I'm not sure what the ball is doing and when, what happens at 1s?
| Initially, the ball is at rest. Therefore, the initial velocity is zero, as shown in the graph. The negative gradient indicates that the ball has a downward acceleration. The moment it hits the ground, its speed reaches its maximum. Then it starts accelerating in the upward direction. The graph indicates a larger upward velocity due to the deformation and reformation of the ball.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 3
} |
Hamiltonian in position basis Let $ H = -\frac{\hbar^2}{2m}\frac{\partial^2 }{\partial x^2}$. I want to find the matrix elements of $H$ in the position basis. It is written like this:
$\langle x \mid H \mid x' \rangle = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} \delta(x -x')$.
How do we get this? Are we allowed to write $\langle x | \frac{\partial^2}{\partial x^2} \mid x' \rangle = \frac{\partial^2}{\partial x^2} \langle x \mid x' \rangle$? Why? It seems something similar is done above.
| Wave functions of position states are Dirac delta functions:
$$| x' \rangle \leftrightarrow \varphi_{x'}(\xi) = \delta(\xi - x')$$
If we apply the Hamiltonian to the wave function, we obtain
$$\hat{H} \varphi_{x'}(\xi) = -\frac{\hbar^2}{2 m} \frac{\partial^2}{\partial \xi^2} \varphi_{x'}(\xi) = -\frac{\hbar^2}{2 m} \frac{\partial^2}{\partial \xi^2} \delta(\xi - x')$$
Finally, we take the inner product with $\langle x |$ and apply the sifting property of delta functions:
$$\langle x | \hat{H} | x' \rangle = \int \varphi_x^*(\xi) \hat{H} \varphi_{x'}(\xi) \, d \xi = -\frac{\hbar^2}{2 m} \int \delta(\xi - x) \frac{\partial^2}{\partial \xi^2} \delta(\xi - x') \, d \xi = -\frac{\hbar^2}{2 m} \frac{\partial^2}{\partial x^2} \delta(x - x')$$
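A small numerical illustration of this result (mine, not part of the original answer; $\hbar=m=1$ and the grid parameters are arbitrary choices): on a grid of spacing $a$, the delta function becomes $\delta_{jk}/a$ and $-\frac{\hbar^2}{2m}\partial_x^2\,\delta(x-x')$ becomes the familiar tridiagonal finite-difference kinetic matrix; integrating it against a smooth $\psi(x')$ reproduces $-\frac{\hbar^2}{2m}\psi''(x)$, exactly as the sifting calculation above says it should.

```python
import numpy as np

hbar = m = 1.0            # illustrative units
N, a = 1001, 0.02         # grid points and spacing (illustrative)
x = (np.arange(N) - N // 2) * a

# <x_j|H|x_k> = -(hbar^2/2m) delta''(x_j - x_k); on the grid delta -> delta_{jk}/a,
# so the matrix is tridiagonal with entries -(hbar^2/2m) * (+1, -2, +1) / a^3.
off = -(hbar**2) / (2 * m) / a**3
H = np.zeros((N, N))
np.fill_diagonal(H, -2 * off)
np.fill_diagonal(H[1:, :], off)
np.fill_diagonal(H[:, 1:], off)

# Sifting check: sum_k a * H_{jk} psi(x_k) ≈ -(hbar^2/2m) psi''(x_j)
psi = np.exp(-x**2)
lhs = a * (H @ psi)
exact = -(hbar**2) / (2 * m) * (4 * x**2 - 2) * np.exp(-x**2)
print(np.max(np.abs(lhs - exact)))   # small (finite-difference error of order a^2)
```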
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
What's the difference between two Hydrogen atoms? If we are given two Hydrogen atoms, would the only difference between them be their quantum state (energy level or eigenvalue, and the corresponding orbital or eigenstate) and their location (say you are the origin and each of the Hydrogen atoms is located an arm's distance apart)?
This leads to my next question which would be what's the difference between two Hydrogen atoms separated arm's distance and a Hydrogen molecule? Wouldn't the only difference be that they are located very close to each other?
| One talks of a molecule only if the constituents are in a bound state, which implies that they are within a microscopic distance of each other.
The probability that the two hydrogen atoms in an $H_2$ molecule are at arm's length from each other is extremely small, far smaller than the inaccuracies of the traditional hydrogen model. Thus it makes no sense to consider it as a real possibility.
Hydrogen atoms are indistinguishable particles, i.e., there is no difference between any two of them. A difference is created only by pointing to one of them, which is then distinguished by its position.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
What is the expectation value of the number operator when the vacuum has a VEV? The number operator $N$ applied to a field whose vacuum has zero VEV gives $N|0\rangle=0$. What if we apply it to the Higgs field?
The background of this question is that in popular scientific accounts, the Higgs field is sometimes described as a 'sea' of particles. I would like to clarify the meaning of this, and what is the physical interpretation of a non-zero VEV.
| If a field $\phi(x)$ has a nonzero VEV $v$, the field whose Fourier components define the creation and annihilation operators is $\phi(x)-v$, which has a zero VEV, and again $N|0\rangle=0$.
An interpretation in terms of the original $\phi$ is highly ill-defined and cutoff-dependent, and cannot be given a sensible physical meaning, although formally, it looks like a sea of indefinitely many virtual particles.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What size will the Sun become once it is a red giant? How big will the Sun be once it becomes a red giant? How much of the solar system will it engulf?
| This is answered in How fast will the sun become a red giant?. I'm just adding a note here because it's not answered directly in a form a non-expert might spot.
The maximum size of the sun is estimated to be 256 times its current radius, and the Earth's orbit is 215 times the sun's radius - so it will consume Mercury, Venus, Earth and reach a bit of the way toward Mars.
It's a little complicated because as the Sun expands it loses mass - large stars blow off their outer atmosphere. With the Sun having less mass, its gravity is weaker and Earth's orbit moves further out. The linked paper says (if I read it correctly) that the Sun will expand past the Earth's orbit before it has lost enough mass for the Earth to move far enough away.
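To put those two ratios in the same units (standard values, not given in the answer): $256\,R_\odot \approx 256 \times 6.96\times10^{5}\ \mathrm{km} \approx 1.78\times10^{8}\ \mathrm{km} \approx 1.19\ \mathrm{AU}$, while Earth's present orbit is $1\ \mathrm{AU} \approx 1.50\times10^{8}\ \mathrm{km} \approx 215\,R_\odot$, so the expanded Sun would indeed reach a little beyond today's Earth orbit.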
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Special Relativity Could someone explain to me how special relativity works?
I know there are thousands of sources and databases of knowledge out there, but I find it difficult to understand, even after reading up on those sources.
(Note: if you're an admin to close my question down, would you please be so nice as to point out something to help me with this question?)
| I agree with Crazy Buddy that Lay explanation of the special theory of relativity? is a good approach to SR, but my own preference is to view it a bit differently. You probably heard it said that general relativity is a geometrical theory. Well special relativity is a geometrical theory as well.
If you take two points in Euclidean space ($x_1$, $y_1$, $z_1$) and ($x_2$, $y_2$, $z_2$) and denote $x_2 - x_1$ by $dx$, $y_2 - y_1$ by $dy$ and $z_2 - z_1$ by $dz$, then the distance between the points, $ds$, is simply given by Pythagoras' theorem:
$$ ds^2 = dx^2 + dy^2 + dz^2 $$
and the distance $ds$ is an invariant. We can rotate or translate our co-ordinates, or travel at any speed we like, and we'll still calculate the same value for $ds$. This is all pretty obvious; for example, $ds$ might be the length of a metal rod (with the two points at its ends) and in Euclidean space the length of the rod isn't going to change.
To move to special relativity all we have to do is change the equation we use to calculate the distance between the spacetime points ($t_1$, $x_1$, $y_1$, $z_1$) and ($t_2$, $x_2$, $y_2$, $z_2$) to be:
$$ ds^2 = dt^2 - dx^2 - dy^2 - dz^2 $$
and insist that the line interval $ds$ is an invariant, i.e. all observers will calculate the same value for $ds$ no matter how fast they're moving. This simple principle then gives all the weird effects we see in SR.
I call this a geometrical approach because it's the SR equivalent of Pythagoras' theorem. It's just a prescription for calculating the distance between two points.
Whether this is helpful, or maybe just even more confusing I don't know, but you can see how this gives results like a finite speed of light by looking at my answer to What is the relationship between the speed of light and virtual particle production
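As a small numerical illustration of the invariance claim above (my own sketch, with arbitrary numbers, in units where $c=1$): boost the separation between two events along $x$ at various speeds and check that $ds^2$ comes out the same every time.

```python
import numpy as np

dt, dx, dy, dz = 5.0, 3.0, 1.0, 2.0          # arbitrary separation between two events (c = 1)

def interval(dt, dx, dy, dz):
    return dt**2 - dx**2 - dy**2 - dz**2

def boost_x(dt, dx, v):
    """Lorentz boost along x with speed v (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

print("rest frame ds^2 =", interval(dt, dx, dy, dz))
for v in (0.1, 0.5, 0.9, 0.999):
    dt_b, dx_b = boost_x(dt, dx, v)
    print(f"v = {v}: ds^2 = {interval(dt_b, dx_b, dy, dz):.6f}")   # same value every time
```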
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Physical intuition for higher order derivatives Could somebody give me an intuitive physical interpretation of higher order derivatives (from 2 and so on), that is not related to position - velocity - acceleration - jerk - etc?
| I've always liked the discrete explanation.
In physics, you can almost always approximate a function $f: \mathbb{R} \to \mathbb{R}$ by saying what the value of $f$ is at a discrete set of points $\{x_i\} \subset \mathbb{R}$.
If you do this, you approximate the derivative by looking at how $f$ changes when you move from a point to one of its nearest neighbors. So if your mesh is a lattice of points $a\mathbb{Z} = \{an | n \in \mathbb{Z}\}$, you only get to step a distance $a$. Second order derivatives are constructed from nearest neighbor differences of derivatives, so you can travel a distance $2a$ when constructing 2nd order derivatives. Likewise, for $n$-th order derivatives, you can see information that is distance $na$ from you in the lattice.
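A tiny sketch of that statement (mine, not from the answer; the lattice spacing and sample function are arbitrary): each application of a nearest-neighbour difference consumes one more lattice point, so the $n$-th difference at a site draws on data up to $n$ spacings away.

```python
import numpy as np

a = 0.1
x = np.arange(-5, 6) * a          # 11 lattice points
f = np.sin(x)                     # arbitrary sample function

d1 = np.diff(f, n=1) / a          # first differences: neighbours at distance a
d2 = np.diff(f, n=2) / a**2       # second differences: information from distance 2a
d3 = np.diff(f, n=3) / a**3       # third differences: information from distance 3a

# Each order of differencing shortens the array by one point:
# the n-th derivative needs a window n lattice spacings wide.
print(len(f), len(d1), len(d2), len(d3))    # 11 10 9 8
```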
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 1
} |
Given Newton's third law, why are things capable of moving? Given Newton's third law, why is there motion at all? Should not all forces even themselves out, so nothing moves at all?
When I push a table using my finger, the table applies the same force onto my finger like my finger does on the table just with an opposing direction, nothing happens except that I feel the opposing force.
But why can I push a box on a table by applying force ($F=ma$) on one side, obviously outbalancing the force the box has on my finger and at the same time outbalancing the friction the box has on the table?
I obviously have greater mass and acceleration than, for example, the matchbox on the table, and thus I can move it, but shouldn't the third law prevent that from even happening? Shouldn't the matchbox just accommodate said force and apply the same force to me in the opposing direction?
| This is a really valid doubt, and most of us have it in our minds while trying to understand Newton's third law.
Now yes, $\vec{F_1}=-\vec{F_2}$ is valid, and the forces here are an action-reaction pair acting in opposite directions with the same magnitude.
So why doesn't a body remain in equilibrium?
These forces (the action-reaction pair) act on different bodies, not on the same body. A body is said to be in equilibrium if the forces acting on the same body cancel each other out, but that is not the case here. Therefore, when we write Newton's third law we write $$\vec{F_{12}}=-\vec{F_{21}},$$ which means the force on body $1$ due to body $2$ is equal to the negative of the force on body $2$ due to body $1$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "179",
"answer_count": 21,
"answer_id": 17
} |
Why is temperature constant during melting? This is an elementary question but I do not know the answer to it. During a phase transition such as melting a solid to a liquid the temperature remains constant. At any lower temperature the heat provided went to kinetic energy and intermolecular potential energy. Why is it that at the melting point no energy goes into kinetic (that would increase the temperature)?
| Roughly speaking, this additional energy will at first be kinetic: it will increase the molecules' bouncing around their equilibrium points, until it is enough to take them out of that equilibrium, and then this energy is spent on making the phase transition.
More precisely, your confusion comes from the fact that the statement you mentioned, "the temperature will not change...", is right for quasi-static (equilibrium) processes; that is, it is right when looking at the process from a macroscopic point of view and over the long term, but locally, at the particle level and over a small period of time, the situation looks different because the system has not yet reached its equilibrium state.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Effect of gas or liquid within a compound lens system Hi, my question is: if a compound lens system is filled with gas or a liquid, how does that affect the system compared to the lenses being separated by air alone? Does this affect the focal power of the system, or the effective power at all?
| The angle of refraction at the surface of a lens (or any other boundary) is given by Snell's law:
$$ \frac{\sin\theta_1}{\sin\theta_2} = \frac{n_2}{n_1} $$
where $n_1$ and $n_2$ are the refractive indices of the two media. Suppose $n_1$ is the air and $n_2$ the glass: the refractive index of air is pretty close to one, so we have a tendency to ignore it and just talk about a single refractive index $n$ that is the same as $n_2$.
However, if you immerse your lens in water, $n_1$ is very different from one, all the angles of refraction will change, and hence the focal length of the lens changes.
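To make that concrete, here is a minimal sketch (mine, not part of the original answer) using the thin-lens lensmaker's equation for a lens immersed in a medium, $\frac{1}{f}=\left(\frac{n_\text{lens}}{n_\text{medium}}-1\right)\left(\frac{1}{R_1}-\frac{1}{R_2}\right)$; the glass index and radii below are illustrative assumptions.

```python
def focal_length(n_lens, n_medium, R1, R2):
    """Thin-lens lensmaker's equation for a lens immersed in a medium."""
    return 1.0 / ((n_lens / n_medium - 1.0) * (1.0 / R1 - 1.0 / R2))

n_glass = 1.5
R1, R2 = 0.10, -0.10                      # symmetric biconvex lens, radii in metres

f_air = focal_length(n_glass, 1.000, R1, R2)
f_water = focal_length(n_glass, 1.333, R1, R2)
print(f"f in air   = {f_air:.3f} m")      # ~0.10 m
print(f"f in water = {f_water:.3f} m")    # ~0.40 m: the same lens is much weaker in water
```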
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why don't we use the concept of force in quantum mechanics? I'm a quarter of the way towards finishing a basic quantum mechanics course, and I see no mention of force, after having done the 1-D Schrodinger equation for a free particle, particle in an infinitely deep potential well, and the linear harmonic oscillator.
There was one small mention of the fact that expectation values obey classical laws. I was wondering why we don't make more use of this fact. For example, in the linear harmonic oscillator problem, one could obtain the temporal evolution of $\langle x \rangle$ using the classical expression $\left(-\frac{dV(x)}{dx}=m\frac{d^2\langle x\rangle}{dt^2}\right)$, and if we could get the time-evolution of $\sigma$ and tack this on, we could re-create the Gaussian and get back $|\Psi(x,t)|^2$. Of course, that last part may not be very easy.
I was just wondering if anybody has tried doing something like this, or if there an obvious flaw in thinking about it this way.
| Besides the reasons already provided, I will add that forces in their classical meaning are not fundamental. What is more fundamental? Potentials; and what is even more fundamental are fields. The usual classical forces are nothing more than (roughly speaking) a sum of the effects and interactions of those fields, and the equation that you provided in your question is nothing more than a mathematical statement that averaging those field/quantum interactions over time behaves very similarly to the familiar Newtonian physics; thus the concept of force is nothing more than a statistical average/approximation of field interactions.
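Coming back to the Ehrenfest-type idea in the question, here is a small numerical sketch (mine, not part of the answer; $\hbar=m=\omega=1$ and the grid are arbitrary choices) showing that for the harmonic oscillator $\langle x\rangle(t)$ does follow the classical solution $x_0\cos\omega t$:

```python
import numpy as np

hbar = m = w = 1.0                     # illustrative units
N, L = 400, 20.0                       # grid points and box size (illustrative)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian: kinetic term + harmonic potential
H = np.zeros((N, N))
np.fill_diagonal(H, hbar**2 / (m * dx**2) + 0.5 * m * w**2 * x**2)
np.fill_diagonal(H[1:, :], -hbar**2 / (2 * m * dx**2))
np.fill_diagonal(H[:, 1:], -hbar**2 / (2 * m * dx**2))

# Initial state: ground-state Gaussian displaced by x0
x0 = 2.0
psi0 = np.exp(-m * w * (x - x0)**2 / (2 * hbar))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Evolve exactly in the eigenbasis of H and track <x>(t)
E, V = np.linalg.eigh(H)
c = V.T @ psi0
for t in (0.0, 0.5, 1.0, 2.0, 3.0):
    psi_t = V @ (c * np.exp(-1j * E * t / hbar))
    mean_x = np.real(np.sum(np.conj(psi_t) * x * psi_t) * dx)
    print(f"t={t:3.1f}   <x> = {mean_x:+.3f}   classical x0*cos(wt) = {x0 * np.cos(w * t):+.3f}")
```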
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 4
} |
Evaluating propagator without the epsilon trick Consider the Klein–Gordon equation and its propagator:
$$G(x,y) = \frac{1}{(2\pi)^4}\int d^4 p \frac{e^{-i p.(x-y)}}{p^2 - m^2} \; .$$
I'd like to see a method of evaluating explicit form of $G$ which does not involve avoiding singularities by the $\varepsilon$ trick. Can you provide such a method?
| Expanding on dmckee's comment:
The $+i\epsilon$-trick has the blessing of OCD mathematicians because it follows directly from a deep fact about the group of spacetime translations: the group $\{e^{-i\langle P,x\rangle/\hbar}\,|\, x \in \mathbb{R}^n\}$ of spacetime translations is the boundary of an analytic semigroup $\{e^{-i\langle P,\xi\rangle/\hbar}\,|\, \xi \in \mathbb{C}^n \mbox{ and } \mathrm{Im}(\xi) \leq 0\}$.
Many quantities in field theory are expressed in terms of these translations, and frequently these quantities can be computed more easily by analytically continuing from real "Minkowski" time to imaginary "Euclidean" time, where the delicate cancellation of phases becomes the crude suppression by exponential damping. When you use the $+i\epsilon$-trick, what you're really doing is saying that the particular cancellation of phases you want is the one which respects this analyticity. This is precisely what's happening when you use the $+i\epsilon$-trick to evaluate the Klein-Gordon propagator. You've got an integral which does not converge absolutely, and you're picking out a certain resummation which does. The $+i\epsilon$ is not just a trick here; it's really the definition of the quantity you're after.
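For concreteness (this closed form is a standard result, not something stated in the answer above): continuing $p^0 \to ip^4_E$ and $x^0 \to -ix^4_E$ turns $p^2-m^2$ into $-(p_E^2+m^2)$, the integral becomes absolutely convergent, and in four Euclidean dimensions it can be done explicitly,
$$ G_E(x) = \int \frac{d^4p_E}{(2\pi)^4}\,\frac{e^{ip_E\cdot x_E}}{p_E^2+m^2} = \frac{m}{4\pi^2\,|x_E|}\,K_1\!\left(m|x_E|\right), $$
with $K_1$ a modified Bessel function; the Feynman propagator of the $+i\varepsilon$ prescription is then recovered by continuing back to real time.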
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/45930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Canonical Commutation Relations Is it logically sound to accept the canonical commutation relation (CCR)
$$[x,p]~=~i\hbar$$
as a postulate of quantum mechanics? Or is it more correct to derive it given some form for $p$ in the position basis?
I understand QM formalism works, it's just that I sometimes end up thinking in circles when I try to see where the postulates are.
Could someone give me a clear and logical account of what should be taken as a postulate in this regard, and an explanation as to why their viewpoint is the most right, in some sense!
| Your running in circles will stop once you commit yourself to a choice.
What to regard as postulate is always a matter of choice (by you or by whoever writes an exposition of the basics). One starts from a point where the development is in some sense simplest. And one may motivate the postulates by analogies or whatever. The CCR are a simple coordinate-independent starting point.
However it is more sensible to introduce the momentum as the infinitesimal generator of a translation in position space. This is its fundamental meaning and essential for Noether's theorem, and has the CCR as a simple corollary.
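As a short illustration of that corollary (not spelled out in the answer above): in the position representation the translation generator is $p=-i\hbar\,\partial_x$, and acting on an arbitrary test function $\psi(x)$,
$$ [x,p]\,\psi = -i\hbar\left(x\,\partial_x\psi - \partial_x(x\psi)\right) = i\hbar\,\psi, $$
so the CCR $[x,p]=i\hbar$ follows as an operator identity once $p$ is defined this way.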
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Conservation of Energy in a Capacitor Consider a parallel-plate capacitor in free space. A negatively charged point particle with initial velocity $v$ passes through the space between the pair of parallel plates (with an initial path perpendicular to the normal vector of the plates).
The point particle accelerates towards the positively charged plate but passes beyond the edge of the plate.
How is energy conserved, given that the capacitor does work on the particle by accelerating it in the direction towards the positively charged plate?
EDIT: Was reminded by Art Brown that a negatively charged particle accelerates towards the positive plate.
| This question is quite a common one for those first learning about capacitors.
First, let's remember that an electric field caused by stationary charges is conservative; this can easily be explained since a single charge creates a conservative field, and the superposition of two conservative fields creates another conservative field.
So, the field generated by a floating capacitor has to be conservative. The universe isn't crazy, so it's probably us missing something? Are there any assumptions that we made while calculating the field? Yes, there are:
We assumed that the capacitor was infinite in size, and thus the field became uniform.
But, here, we are dealing with the edges of the capacitor. The field is not uniform there: near the edges the field lines bow outward as a fringing field (illustrated by figures in the original post).
When it comes back out, the x-component of the field will be against the velocity of the particle, slowing it down back to the initial speed.
For example, for a positively charged particle, the trajectory is shown in a figure in the original post, with the force on the particle indicated in green at various points. Once the particle exits, it is "pulled back". The net effect is that the speed stays the same but the direction does not. Perfectly in accordance with conservation of energy.
Ignoring fringe fields can lead to some interesting apparent paradoxes, like the origin of the force that pulls a dielectric slab into a capacitor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Terminology for opposite null lines Is there a name for two null lines that lie on the opposite sides of the null cone? Each line can be obtained from the other by reflection in the axis of the null cone (the time-axis). In terms of world-lines, this corresponds to two photons moving in the opposite directions. If there is not a standard name, what would you choose to call them as a pair?
| There isn't an official standard name for opposite null lines. Note that opposite null lines are not a coordinate-independent geometric (invariant) notion, and hence it is not a very useful concept. If two null lines happen to lie on opposite sides of the light-cone in one reference frame, then they may not lie on opposite sides of the light-cone wrt. a boosted reference frame. Conversely, two different non-opposite null lines wrt. one reference frame may be boosted in such a way that they become opposite null lines wrt. the new reference frame.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Photons arriving from the Sun Given that the Sun is a bit less than 10 light minutes away from Earth, is it correct to assume in principle (I understand actual processes in the core of the Sun make the situation at a photon's emission far more complicated) that the photons that hit a human eyes on a clear day actually departed from the star less than ten minutes ago?
If you don't mind me saying so in a scientific forum, I find this notion (if confirmed) similarly endearing as the other notions that most elementary building blocks (chemical elements) in our bodies stem from bygone distant stars, and that we never see distant parts of the universe (or the Sun, for that matter) as they are "now", only as they were at a certain time in the past.
| Solar photons arrive to the Earth about 500 seconds after leaving the photosphere. However, the very energetic photons created in the Sun's core take many millions of years to arrive on Earth as they traverse the radiation and convection zones before arriving at the photosphere.
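For the record, the 500-second figure is just the mean Earth-Sun distance divided by the speed of light (standard values, not quoted in the answer):
$$ t = \frac{d_{\text{Earth-Sun}}}{c} \approx \frac{1.496\times10^{11}\ \mathrm{m}}{2.998\times10^{8}\ \mathrm{m/s}} \approx 499\ \mathrm{s} \approx 8.3\ \text{minutes}. $$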
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How do I start learning particle physics? I am 16 at the moment and really interested in physics, especially particle physics. Can someone please tell me how to start learning the subject: what to learn first, which fundamental theories and concepts, the math needed, etc.?
| What you want to learn is called "Quantum Field Theory", but it is a subject that requires having learnt other things first. At least some differential calculus, Special Relativity and Quantum Mechanics (a subject that itself requires some other previous knowledge).
But you can try. The simplest serious text above popular level may be "Quantum Field Theory Demystified" by David McMahon. It is a nice, cheap book, with short chapters, good explanations, solved examples and a quiz at the end of each chapter. This can be the starting point. (Later edit: Warning! It has many errata and notational inconsistencies, although the general explanations and complexity level is still nice... There seems to be no alternative text at this introductory level, although I am finding Srednicki really useful and clear - but that is a big book departing from a somewhat higher level of knowledge)
With respect to McMahon's books, please see the cooperative effort to make errata sheets here
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Potential Energy tends to infinity on the N-Body Problem I need help to solve this problem related to the N-body problem; I don't understand quite well what I need to define or to express in order to solve it.
We assume a particular solution to the N-body problem for all $t>0$, with $h>0$, where $h$ is the total energy of the N bodies; show that $U\rightarrow \infty $ as $t\rightarrow \infty $. Does this mean that the distance between a pair of particles goes to infinity? (No.)
In the N-Body problem $U$ is given by $U=\sum_{1\leq i< j\leq N}\frac{Gm_{i}m_j}{\left \| q_i-q_j \right \|}$, where $G$ is the gravitational constant. The Kinetic energy is $T=\sum_{i=1}^{N}\frac{\left \| p_i \right \|^2}{2m_i}=\frac{1}{2}\sum_{i=1}^{N}m_i{\left \| \dot{q}_i \right \|^2}$
The vector $q_{i}$ defines the position of the $i$-th particle. So basically $U$ is like the sum of all the potential energies between all the $N$ particles.
Also by the Lagrange Jacobi Formula, we have that $I$ is the moment of inertia, $T$ the kinetic energy so we can express:
$$\ddot{I}=2T-U=T+h\quad,$$
where $h$ is a conserved quantity.
I think that if $U\rightarrow \infty $, then $T\rightarrow \infty$ (because $h$ is constant). The problem is that the only way I see for $U\rightarrow \infty $ is when the distances between the particles $\left \| q_i-q_j \right \| \rightarrow 0$, but that would mean a collision, and if we have a collision then $t\rightarrow t_1$, a finite time, and not $\infty$, because a total collapse takes only a finite amount of time (Sundman's theorem of total collapse). As I said, I don't know what I have to define to show that $U\rightarrow \infty $ as $t\rightarrow \infty $. Maybe I need to define a $q_i(t)$ in such a way that $\left \| q_i-q_j \right \| $ gets very near to zero, but never zero, so that $t$ can go to $\infty$?
Also, what about the question of a pair of particles going to infinity? It is clear that they should not go to $\infty$ because then $U\rightarrow 0$, and we are trying to prove the other case.
| From the virial theorem, stationary states are given by $2T=U$. The "particular solution" your teacher is assuming is a gravitational collapse where $U \gt 2T$, and therefore $U\rightarrow \infty$ as $t\rightarrow \infty$. Of course, the interparticle distance goes to zero in a collapse, but this is not a collision: in a collision there is a lower bound on the distance, and after the collision the particles increase their separation. In a collapse there is an asymptotic evolution towards a singularity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Non-Degeneracy of Eigenvalues of Number Operator for Simple Harmonic Oscillator
Possible Duplicate:
Proof that the One-Dimensional Simple Harmonic Oscillator is Non-Degenerate?
I'm trying to convince myself that the eigenvalues $n$ of the number operator $N=a^{\dagger}a$ for the quantum simple harmonic oscillator are non-degenerate.
I can't see a way to do this just given the operator algebra for creation and annihilation operators. Is there an easy way to show this, or does it depend on something deeper? I'd appreciate any detailed argument or insight! Many thanks in advance.
| Recall $ \hat{H} = \left( \hat{N} + \frac{1}{2} \right) $ and $ \left[ \hat{a}, \hat{a}^\dagger \right] = 1 $ (dropping $\hbar$ and $\omega$).
* Assume the ground state $\left|0\right>$ is non-degenerate. You can prove this by solving $\left<x\right|\hat{a}\left|0\right>=0$ in the position representation, but I don't know how to do it algebraically. The rest of the proof is algebraic.
* Let the first excited state be $k$-fold degenerate: $\left|1i\right>$, $i=1,\ldots,k$, where the $\left|1i\right>$ are orthonormal. Then, by the algebra we have
$$ \hat{a} \left|1i\right> = \left|0\right> $$
and
$$ \hat{a}^\dagger \left|0\right> = \sum_i c_i \left|1i\right> $$
where $ \sum_i c_i^\star c_i = 1 $.
* Now, for these states to be eigenstates of $\hat{H}$ with energy $\frac{3}{2}$ they must be eigenstates of $\hat{N}$ with eigenvalue 1. This requires
$$\hat{N}\left|1i\right> = \hat{a}^\dagger \hat{a}\left|1i\right> = \hat{a}^\dagger \left|0\right> = \sum_j c_j \left|1j\right>,$$
so the eigenvalue condition $\hat{N}\left|1i\right> = \left|1i\right>$ requires
$$\left|1i\right> = \sum_j c_j \left|1j\right>.$$
This must hold for all $i$, which leads to an immediate contradiction (no solution for the $c_i$) unless $k=1$.
Induction proves non-degeneracy for the higher states.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is there Pair production in between charged plates In classical electromagnetic theory, if parallel plates are charged oppositely and placed close to each other, no charge will flow from one plate to another.
How does this situation change if one considers quantum electrodynamics? Can the electric field in between the plates cause pair production? What is the probability, if it happens? How does one apply the formalism of quantum field theory to such a question? I am rather new to the subject.
| Yes, the effect you're looking for is called Schwinger pair production. It requires immensely strong electric fields (of the order of $10^{18}$ V/m) for a constant field.
One of the methods for computing the rate is the worldline method, described briefly here. To follow it, some knowledge of effective action methods is required.
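As a rough order-of-magnitude check (standard constants, not given in the answer), the field scale quoted above is the Schwinger critical field set by the electron mass,
$$ E_c = \frac{m_e^2 c^3}{e\hbar} \approx \frac{(9.11\times10^{-31}\,\mathrm{kg})^2\,(3.00\times10^{8}\,\mathrm{m/s})^3}{(1.60\times10^{-19}\,\mathrm{C})\,(1.05\times10^{-34}\,\mathrm{J\,s})} \approx 1.3\times10^{18}\ \mathrm{V/m}, $$
and for constant fields $E\ll E_c$ the pair-production rate is exponentially suppressed, roughly as $\exp(-\pi E_c/E)$.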
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Internal energy according to the van der Waals equation I am trying to derive the internal energy of a gas which obeys the van der Waals equation.
I have however encountered some problems. I calculate the integral of $dU$ from $V=0,T=0$ to $V=V, T=\infty$ to $V=V,T=T$.
I can calculate the work:
$$\left(p+\frac{an^2}{V^2}\right)(V-nb)=nRT \implies p=\frac{nRT}{V-nb}-\frac{an^2}{V^2}$$
For the second part of the path $V$ is constant so $W=0$.
$$W=-\int\limits_\infty^V p\,\mathrm dV=\int\limits_V^\infty \left(\frac{nRT}{V-nb}-\frac{an^2}{V^2}\right) \mathrm dV\\=\left.nRT\ln(V-nb)\right|_V^\infty+\left.\frac{an^2}{V}\right|_V^\infty\\=\infty-\infty +\frac{an^2}{\infty}-\frac{an^2}{V}=-\frac{an^2}{V}$$
I know I haven't been mathematically rigorous but that is not really important to me at the moment. I think this is right.
I can't, however, think of how I should calculate the heat involved in following this path.
Any help on how to do this is appreciated.
EDIT: I see now that the work I calculated is wrong, and also that $$nRT\ln(V-nb)|_V^\infty\neq\infty-\infty$$
| You can't just subtract infinities and write $\infty-\infty=0$. In fact, $\infty-\infty$ is a major example of an indeterminate form. The result may be anything and needs a precise analysis to be obtained. Moreover, as you realized later, there was really no $\infty-\infty$ over there, it was $\infty-f$ where $f$ is finite.
When you are calculating $\int p\,dV$ where $p$ is a combination of $1/(V-nb)$ and $1/V^2$ pieces, you should calculate the indefinite integral (effectively the definite integral going from a lower limit at a finite point to the given upper limit), and it is
$$ - E(V) = \int_{V_0}^V p\, dV' = nRT\ln(V-nb)+\frac{an^2}{V}+C(V_0) $$
The integration constant $C$ is undetermined but one should think it's finite. You may see that both "main" terms are infinite for $V\to 0$ (which means an infinite amount of work would be needed to shrink the volume to zero) and the logarithm is infinite even for $V\to\infty$ (which means that the expansion of the gas to an infinite volume still produces infinite energy). For most purposes, the value of $C$ doesn't matter. But if one wants some "preferred" values of $C$ anyway, it's the value of $C$ that implies $E(V)=0$ or another reasonable value for some microscopic, tiny value of $V$ (so that the molecules are almost maximally squeezed).
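A quick symbolic cross-check of this volume dependence (my own sketch, not part of the answer), using the standard identity $(\partial U/\partial V)_T = T(\partial p/\partial T)_V - p$ with the van der Waals $p(V,T)$:

```python
import sympy as sp

V, T, n, R, a, b = sp.symbols("V T n R a b", positive=True)

# van der Waals equation of state solved for p
p = n * R * T / (V - n * b) - a * n**2 / V**2

dU_dV = sp.simplify(T * sp.diff(p, T) - p)     # (dU/dV)_T
U_of_V = sp.integrate(dU_dV, V)                # volume-dependent part of U

print(dU_dV)     # a*n**2/V**2
print(U_of_V)    # -a*n**2/V  (plus an arbitrary T-dependent integration "constant")
```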
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
The meaning of imaginary time What is imaginary (or complex) time? I was reading about Hawking's wave function of the universe and this topic came up. If imaginary mass and similar imaginary quantities do not make sense in physics, why should imaginary (or complex) time make sense?
By imaginary I mean a multiple of $i$, and by complex I mean having a real and an imaginary part, i.e., $\alpha + i\beta$, where $\alpha, \beta \in {\mathbb R}$.
| Let me just sketch an idea to introduce "imaginary time".
A photon in a black hole, or in a singularity, has to disappear; its energy should go to 0.
If that photon had a previous existence, the black hole has to destroy its energy:
$E$, i.e. its energy associated with the mode operators $a^{+}a^{-}$, namely $\left(N+\frac12\right)h\nu$.
The simplest way is to consider the phase factor of the field, $e^{i(\Omega t-Kr)}$.
The photon would be destroyed by setting
$t\rightarrow i\tau$, which squeezes the factor to zero as $\tau\rightarrow \infty$.
This could be one idea of how imaginary time ($\tau$) can be helpful.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 4,
"answer_id": 2
} |
Is it possible to reduce the sound, when two metal objects collide (perhaps with some coating) without reducing the rigidity of the surface? I have a system, where there are ball bearings on the pistons that clamp the metal plate with special dents for ball bearings. The system should be precise, because it is used for microscopy. It also should be as noiseless as possible. It also should be fast, so the impact at high velocity is inevitable.
I've thought of introducing some resin coating, but it will reduce the rigidity. Are there any solutions for this problem? Is there any strong relationship between sound and rigidity? I believe that there may be some rigid materials that somehow don't favor phonons.
| The sound waves will emanate from the point of impact and then echo off various boundaries. You will want to engineer those boundaries to minimize, or more likely direct, the reflections, leading the energy to where it can be absorbed and converted to heat. I think the simplest way to do that is to make whatever the ball bearings collide with as massive as possible. But it's not clear from your description what the geometric constraints are.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/46942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Could an NEO strike the moon with sufficient velocity for the shrapnel to escape Lunar gravity, and be attracted by Earth's? Between the news item of an asteroid giving Earth a close shave, and another news item of the impending GRAIL impact, I find myself wondering whether an NEO could be a hazard to Earth via the moon. I'm not sure this scenario is realistic. Perhaps such a large body may naturally be captured by Earth's gravity instead of Lunar gravity before impact itself...
That is to say -
Could an NEO be
* large enough &
* have sufficient velocity

to impact on Luna so that fragments would

* escape lunar gravity, and
* make the down-hill run to Earth, and
* remain large enough to cause loss of life/property on Earth?
| I guess you are referring to objects like 2012 DA14 which weighs about 130,000 metric tons.
Lunar meteorites have reached the Earth's surface before. So yes, it is possible.
There don't seem to be any large NEOs that have any significant chance of such an impact in the near future.
According to the UK Natural History Museum
There have been no recorded deaths due to a meteorite fall. A dog was, however, reputedly killed by the fall of the Nakhla martian meteorite in Egypt in 1911 and a boy was hit but not seriously injured by the fall of the Mbale meteorite in Uganda in 1992.
The Nakhla meteorite weighed about 10 kg.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the Laughlin argument? The fundamental question is
Why is Hall conductance quantized?
Let's start with the Hall bar, a 2D metal bar subject to a strong perpendicular magnetic field $B_0$. Let current $I$ flow in the x-direction, then the y-direction develops a voltage $V_H$. The Hall conductance is $\sigma_H = I/V_H$
To make Laughlin's charge pump, how should we wrap the Hall bar? Identify the left and right, or top and bottom sides?
Based on my understanding, we should paste the top and bottom sides together. (Correct? Figure 1, left, of the paper may be a little confusing.)
Laughlin assumes the Fermi level is in the middle of the gap, so that the ring is an insulator. But the changing flux will induce a current, obtained by taking the "adiabatic derivative" of the total energy with respect to the flux:
$$I = c\frac{\partial U}{\partial \Phi}$$
which flows in y-direction and where $c$ is speed of light.
Following Laughlin's calculations, as one threads one flux quantum, there will be $p$ (number of filled Landau levels) electrons transported. Then
$$U=peV$$ where $V$ is the potential difference of two edges. From the current formula, we find the quantized Hall conductance.
The heart of the problem is
What is an adiabatic derivative? Why is ${\bf j} = \partial {\cal H}/\partial {\bf A}$ valid?
| Laughlin explains the whole derivation, including the point mentioned, nicely in his Nobel lecture:
http://www.nobelprize.org/nobel_prizes/physics/laureates/1998/laughlin-lecture.html
What helped me in addition to understand Laughlin's text was the appendix of Jean Dalibard's lecture "Artificial Gauge Fields for Quantum Gases":
http://www.phys.ens.fr/~dalibard/publications/2015_Varenna_JD.pdf
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Why does the nature always prefer low energy and maximum entropy? Why does the nature always prefer low energy and maximum entropy?
I've just learned electrostatics and I still have no idea why like charges repel each other.
http://in.answers.yahoo.com/question/index?qid=20061106060503AAkbIfa
I don't quite understand why U has to be minimum. Can someone explain?
| Well, I'd mention that entropy is significantly more counterintuitive than some may think. In particular, since all microstates have equal probability, or in other words are equivalent, if you were to cut your finger off, the state where the chunk comes back into place all by itself is a perfectly valid state. There is nothing in this respect that ties this state to a specific energy level.
Now, to answer the question: why has this never been seen? (And not: why is this impossible...) Since our body consists of billions of atoms, you'd need to have them all jump back at once to where they came from, as opposed to simply hopping around in uncorrelated moves, or, in short... decaying.
So, that's the core of the entropy principle. Macrostates are the result of billions of different microstates, and we only see an average value... the most probable one is simply the one that has the largest number of compatible microstates. Hence the chunk is very unlikely to go back to its place on its own.
Some would say, it is just an artificial way to put it.
What would be the odds, when assembling dumb atoms, of an intelligent life form? Or in other words how large the sampling experiment should be to witness something else than a dead rock?
:)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 2
} |
experimental technique for measuring temperature of an ant I am taking a course on thermodynamics. I have a question from my text (Halliday & Resnick, Physics-1). They asked me to measure the temperature of an ant, or an insect, or a small body like a small robot. If I build a thin thermometer, then it is probable that surface tension would have a greater influence than thermal expansion. How, then, can I measure the temperature of an ant?
| Ants are cold blooded. Therefore the temperature of an ant is the ambient temperature in which it exists at the moment. If the ant is in an environment of variable temperature and is moving around, then it has no temperature in the thermodynamic sense. If it is in a region of uniform temperature, then that is its temperature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
} |
Can one heat up a vacuum? I've got a question about heating a vacuum. If there were, say, a container in space, at 2.7 kelvin (the typical temperature of space, if I'm not mistaken) and as empty as space (as close to a vacuum as space allows), how would one go about pressurizing and heating that container? If a gas such as oxygen were introduced, would it freeze due to the temperature or would it sublimate due to the vacuum? If the former, I don't understand how heat could be introduced, because heat needs a medium to heat. If the latter, once the vacuum was overcome and a sufficient pressure was acquired, wouldn't the oxygen freeze and re-create the vacuum? Would both heat and pressure need to be introduced at the same time?
Thank you.
| "space" is at 2.7K because that's the temperature of the microwave background.
If you put an empty box in space, its walls will eventually end up at 2.7 K (in theory), and so anything inside it can only cool to that temperature. If anything were hotter it would radiate heat to the colder walls, and if anything tried to get colder, the "hot" walls would heat it back up to 2.7 K.
(we can get colder than 2.7K in the lab by doing work to take the extra heat that leaks in, concentrating it, and pumping it into the warmer lab - just like your kitchen fridge manages to get below room temperature)
If you put oxygen in the box it would radiate heat away until it reached that temperature (assuming the walls could themselves radiate away the extra heat to the rest of the universe). Depending on the size of the box and the amount of oxygen you would either have a very dilute gas of individual oxygen molecules or all the molecules would be stuck to the walls of the container.
At low enough pressures it's a little pointless to talk about whether an individual oxygen molecule is a solid liquid or gas.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is the Earth's atmosphere a Faraday cage? X-ray telescopes are required to be above the atmosphere as the atmosphere blocks EM waves with wavelengths < UV . Does this mean that the Earth's atmosphere can be thought of as a Faraday cage only allowing low energy light to pass through?
| No, there is no relation. Earth's atmosphere is not a Faraday cage.
A Faraday cage requires a conductor with freely moving electrons, so that the potential inside the cage is kept constant by the rearrangement of those electrons.
On the other hand, the Earth's atmosphere is opaque (see figure) in the short-wavelength limit because its air molecules absorb EM waves with short wavelengths. In fact, short-wavelength EM waves can easily be absorbed by most molecules, since absorbing such a photon can excite electrons until they become free electrons (ionization). Even though each photon is highly penetrating, the thickness of the atmosphere can block almost all of them.
Therefore, in order to observe X-rays, we can only use telescopes sent into space.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is the colour of sunlight yellow? I was going through the preliminary papers of other schools and found a question that I did not know. It was "Why does sunlight appear yellow?". Can anyone answer it?
| The reason the sun appears yellow is that its color is determined by the additive combination of its component spectrum.
In other words, the sun emits a range of light that we can see, from violet to red, at different intensities. To compute the net color, the color we see, you must add together the radiation. In the case of the sun, the dominant wavelengths are green-yellow-red. Those are the wavelengths with the greatest intensity and total energy. If you add green plus red, you get yellow. The sun also emits, for example, blue light, but that light is overpowered by the green and red light.
Note that this is for a human. Other creatures might see a different color. For example, cats can see farther into the red part of the spectrum than humans can. Therefore, to a cat, the sun will appear slightly more reddish orange than it appears to us.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 2
} |
Why does motion help you balance on ice skates? It's almost impossible to balance on a single ice skate if you're standing still. But give yourself just a little forward motion—it doesn't take very much—and it suddenly becomes easy. You can stand there on one leg and glide effortlessly half way across the rink. Friction will cause you to gradually slow down, and it's only when you've slowed almost to a complete stop that balancing becomes hard again. Why?
This seems suspiciously similar to the question of why it's easier to balance on a moving bicycle. But the standard answer to that involves the angular momentum of the wheels. There's nothing rotating here. In fact, other than the very slight deceleration from friction, moving should make no difference. In both cases, you're standing still in a nearly inertial reference frame.
Does that slight deceleration somehow matter? Or does the interaction between the ice and a moving skate somehow help you to balance?
| In motion, the skate easily tracks left and right to find balance. At rest, the skate resists side to side motion. The skater awkwardly pivots over the fixed skate waving arms. Fore and aft skate motion however, is all too easy, usually the beginning skater's downfall. Any forward motion of the skate while the body is stationary results in an unsupported condition.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Change in intensity of electric field with constant velocity Consider a +Q charged particle is travelling towards another test charge +Q. Now what would be the difference in electric field experienced by the test charge(avoid the gradual decrease in distance between them)? Would the field lines look compressed and effective field strength increased for the test charge?
| If you are looking for an effect separate from the particle's position, at classical velocities there isn't one. The electric field is
\begin{equation}
\mathbf{E}=\mathbf{E}(\mathbf{r},t),
\end{equation}
that is, the electric field is only a function of the position $\mathbf{r}$ and the time $t$. At any given instant in time, the force a test charge 'feels' due to another charge depends only on its position $\mathbf{r}$, and not on its velocity.
This velocity independence breaks down when the charges' relative velocities approach the speed of light. If a reference frame has an electric field, a frame boosted with respect to the first appears to have some magnetic field as well. Consider a frame boosted by a velocity $\mathbf{v}=v_x \mathbf{\hat{x}}$, where the separation $\mathbf{r}$ between the charges is given by $\mathbf{r}=r\mathbf{\hat{x}}$ (in other words, the charges are moving directly toward each other), so that
\begin{equation}
\mathbf{\beta}=\beta_x=v_x/c,
\end{equation}
and
\begin{equation}
\gamma=\left[1-\left(\frac{v_x}{c}\right)^2\right]^{-1/2},
\end{equation}
then for an electric field in the frame of the stationary charge $\mathbf{E}=E_x\mathbf{\hat{x}}+E_y\mathbf{\hat{y}}+E_z\mathbf{\hat{z}}$ with a background magnetic field ($\mathbf{B}$), the test charge will 'see' fields $\mathbf{E}'$ and $\mathbf{B}'$ given by
\begin{equation}
\mathbf{E}'=\gamma(\mathbf{E}+\beta_x \mathbf{\hat{x}}\times\mathbf{B}) - \frac{\gamma^2\beta_x^2}{\gamma+1}(\mathbf{\hat{x}}\cdot\mathbf{E})\mathbf{\hat{x}}\\
\mathbf{B}'=\gamma(\mathbf{B}-\beta_x \mathbf{\hat{x}}\times\mathbf{E}) - \frac{\gamma^2\beta_x^2}{\gamma+1}(\mathbf{\hat{x}}\cdot\mathbf{B})\mathbf{\hat{x}}
\end{equation}
(Source, J.D. Jackson 1999, section 11.10.)
The end result is that electric fields in a rest frame look like magnetic fields from a moving frame.
Interestingly, if the test particle is moving directly toward the charge, the electric and magnetic field along its trajectory will always be the classical one and relativity will have no effect. It is only when the boost has a component perpendicular to the rest-frame fields that the boost-frame fields are different.
There is compression of the field lines at relativistic velocities, but again, only for field lines that are not parallel to the velocity. If you picture the field lines radiating out of a stationary charge, then a moving charge looks similar, but with the field lines perpendicular to the boost velocity more tightly bunched together.
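To play with these formulas numerically, here is a short Python sketch (the field values and boost speeds are arbitrary illustrative numbers, in units where $E$ and $B$ have the same dimensions); it just evaluates the two boosted-field expressions above:

```python
import numpy as np

def boost_fields_x(E, B, beta_x):
    """Fields seen in a frame boosted by beta_x along x, given the rest-frame
    E and B (3-vectors), using the boost formulas quoted above (units in
    which E and B carry the same dimensions)."""
    gamma = 1.0 / np.sqrt(1.0 - beta_x**2)
    beta = np.array([beta_x, 0.0, 0.0])
    xhat = np.array([1.0, 0.0, 0.0])
    E, B = np.asarray(E, float), np.asarray(B, float)
    longitudinal = gamma**2 * beta_x**2 / (gamma + 1.0)
    Ep = gamma * (E + np.cross(beta, B)) - longitudinal * np.dot(xhat, E) * xhat
    Bp = gamma * (B - np.cross(beta, E)) - longitudinal * np.dot(xhat, B) * xhat
    return Ep, Bp

# Purely longitudinal Coulomb-like field (charges approaching head-on): unchanged.
for beta in (0.1, 0.5, 0.9):
    print(beta, boost_fields_x([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], beta))

# A transverse field gets compressed (factor gamma) and picks up a magnetic part.
print(boost_fields_x([0.0, 1.0, 0.0], [0.0, 0.0, 0.0], 0.9))
```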
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Don't understand the integral over the square of the Dirac delta function In Griffiths' Intro to QM [1] he gives the eigenfunctions of the Hermitian operator $\hat{x}=x$ as being
$$g_{\lambda}\left(x\right)~=~B_{\lambda}\delta\left(x-\lambda\right)$$
(cf. last formula on p. 101). He then says that these eigenfunctions are not square integrable because
$$\int_{-\infty}^{\infty}g_{\lambda}\left(x\right)^{*}g_{\lambda}\left(x\right)dx
~=~\left|B_{\lambda}\right|^{2}\int_{-\infty}^{\infty}\delta\left(x-\lambda\right)\delta\left(x-\lambda\right)dx
~=~\left|B_{\lambda}\right|^{2}\delta\left(\lambda-\lambda\right)
~\rightarrow~\infty$$
(cf. second formula on p. 102). My question is, how does he arrive at the final term, more specifically, where does the $\delta\left(\lambda-\lambda\right)$ bit come from?
My total knowledge of the Dirac delta function was gleaned earlier on in Griffiths and extends to just about understanding
$$\tag{2.95}\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx~=~f\left(a\right)$$
(cf. second formula on p. 53).
References:
*
*D.J. Griffiths, Introduction to Quantum Mechanics, (1995) p. 101-102.
| Well, the Dirac delta function $\delta(x)$ is a distribution, also known as a generalized function.
One can e.g. represent $\delta(x)$ as a limit of a rectangular peak with unit area, width $\epsilon$, and height $1/\epsilon$; i.e.
$$\tag{1} \delta(x) ~=~ \lim_{\epsilon\to 0^+}\delta_{\epsilon}(x), $$
$$\tag{2} \delta_{\epsilon}(x)~:=~\frac{1}{\epsilon} \theta(\frac{\epsilon}{2}-|x|)
~=~\left\{ \begin{array}{ccc} \frac{1}{\epsilon}&\text{for}& |x|<\frac{\epsilon}{2}, \\
\frac{1}{2\epsilon}&\text{for}& |x|=\frac{\epsilon}{2}, \\
0&\text{for} & |x|>\frac{\epsilon}{2}, \end{array} \right. $$
where $\theta$ denotes the Heaviside step function with $\theta(0)=\frac{1}{2}$.
The product $\delta(x)^2$ of the two Dirac delta distributions does strictly speaking not$^1$ make mathematical sense, but for physical purposes, let us try to evaluate the integral of the square of the regularized delta function
$$\tag{3} \int_{\mathbb{R}}\! dx ~\delta_{\epsilon}(x)^2
~=~\epsilon\cdot\frac{1}{\epsilon}\cdot\frac{1}{\epsilon}
~=~\frac{1}{\epsilon} ~\to~ \infty
\quad \text{for} \quad \epsilon~\to~ 0^+. $$
The limit is infinite as Griffiths claims.
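As a quick numerical illustration of eq. (3) (a sketch only, evaluating the rectangular regularization (2) on a finite grid), the integral of $\delta_\epsilon(x)^2$ indeed grows like $1/\epsilon$:

```python
import numpy as np

def delta_eps(x, eps):
    """Rectangular regularization of the delta function: height 1/eps, width eps."""
    return np.where(np.abs(x) < eps / 2.0, 1.0 / eps, 0.0)

x = np.linspace(-1.0, 1.0, 2_000_001)    # fine grid, spacing 1e-6
dx = x[1] - x[0]

for eps in (0.1, 0.01, 0.001):
    area   = np.sum(delta_eps(x, eps)) * dx       # stays ~1 (unit area)
    square = np.sum(delta_eps(x, eps)**2) * dx    # grows like 1/eps
    print(eps, round(area, 3), round(square, 1))
```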
It should be stressed that in the conventional mathematical theory of distributions, the eq. (2.95) is a priori only defined if $f$ is a smooth test-function. In particular, it is not mathematically rigorous to use eq. (2.95) (with $f$ substituted with a distribution) to justify the meaning of the integral of the square of the Dirac delta distribution. Needless to say that if one blindly inserts distributions in formulas for smooth functions, it is easy to arrive at all kinds of contradictions! For instance,
$$ \frac{1}{3}~=~ \left[\frac{\theta(x)^3}{3}\right]^{x=\infty}_{x=-\infty}~=~\int_{\mathbb{R}} \!dx \frac{d}{dx} \frac{\theta(x)^3}{3} $$ $$\tag{4} ~=~\int_{\mathbb{R}} \!dx ~ \theta(x)^2\delta(x)
~\stackrel{(2.95)}=~ \theta(0)^2~=~\frac{1}{4}.\qquad \text{(Wrong!)} $$
--
$^1$ We ignore Colombeau theory. See also this mathoverflow post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/47934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 3,
"answer_id": 0
} |
First and Second Moment of Mass I recently came across the definition of the Center of Mass of a system as the point about which the first moment of mass is zero.
Further, it defined Moment of Inertia as the second moment of mass.
My question is, What is this 'moment of mass'?
|
I don't know whether this is right or wrong, because it's like going back to high school...
When physicists define a "moment of something", it necessarily means Distance $\times$ the "something". The moment of mass simply means Distance $\times$ Mass.
For a system of $n$ particles, in order to obtain the center of mass - we consider a reference point. The effective mass times the distance to center of mass (which is a moment) will be equal to the sum of moments of individual masses. If $x_{c}$ is the distance from center of mass to the reference point, then
$$\sum_{i=1}^nm_i\ x_{c}=\sum_{i=1}^nm_i x_i$$
Hence, at the center of mass - plugging both equations to the left side,
$$\sum_{i=1}^nm_i(x_{c}-x_i)=0$$
Well, I think this is the first moment of mass (which has also equated to zero).
As far as I can see in your definition, I guess that the second moment roughly says, it is the moment of (moment of mass) which means $$I=\sum_{i=1}^nm_ix_i\times x_i\implies I=MX^2$$ For my luck, it also satisfies with the units $\text{kg m}^2$.
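As a small illustration of both definitions (with made-up masses and positions on a line), the first moment about the centre of mass comes out zero, while the second moment about the origin is the moment of inertia:

```python
import numpy as np

m = np.array([1.0, 2.0, 3.0])   # masses (made-up example values)
x = np.array([0.0, 1.0, 4.0])   # positions along a line

x_c = np.sum(m * x) / np.sum(m)            # centre of mass
first_moment  = np.sum(m * (x - x_c))      # first moment about x_c  -> 0
second_moment = np.sum(m * x**2)           # second moment about the origin = moment of inertia

print(x_c, first_moment, second_moment)
```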
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Total Energy of the Universe? I've heard the total energy is zero, but I've also heard it cannot be said to be zero since there's so much unknown stuff in the universe. Is that true?
| Conservation of energy doesn't actually apply in any straightforward way to cosmology. The modern understanding of energy conservation is that it is a consequence of Noether's theorem and time translation invariance. In other words, the laws of physics are the same as they were yesterday and they will be tomorrow. This gives rise to the conservation of energy. When the expansion of the universe is important you are doing physics on an expanding background. This breaks the time translation invariance and hence the conservation of energy.
It can be argued that the gravitational field has energy and when this energy is included you get zero total energy for the universe. The problem with this is that there isn't an unambiguous definition for the gravitational energy of an expanding universe (this is somewhat controversial).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Banach Space representations of physical systems I think most physicists mostly model physical systems as some kind of Hilbert space.
Hilbert spaces are a strict subset of Banach spaces.
Questions:
*
*Can physical systems really have non-compact topologies, as a Banach
space has?
*Does anyone have an example of physics which requires a physical space
which is Banach and not Hilbert?
Hilbert spaces occur everywhere the Lagrangian/Hamiltonian is quadratic in derivatives. If the Lagrangian is non-quadratic, then Hilbert spaces are no longer so convenient. In particular, in the analysis of the Navier–Stokes equations, Banach spaces (not Hilbert spaces) are actively used.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Why doesn't my pinhole camera work? We all know that light travels in straight a line, which can be proved by pinhole imaging as in the picture shown :
But when I'm doing this little experiment with an apple, no matter how I change the distance between the object and the pinhole, an image can never be observed on the cardboard behind. So what's going wrong? Please help!
| Take a look at this picture - and ask yourself why the operator of the camera has the cloth over his head (the cloth is black on the inside) as he is looking at the back of his camera - which has a piece of ground glass where the image from his pinhole camera is forming:
This used to be how photography was done:
Align the camera to the subject, focus (if you had a lens - pinholes don't need focusing). Cover the aperture. Insert photographic plate. Tell subject to stand still and stop breathing. Remove protection from photographic plate. Open aperture. Light magnesium to produce lots of light. Close aperture. Put protective cover back on photographic plate. Take plate to darkroom. Develop. Fix. Rinse. Dry.
The pinhole is tiny - not a lot of light comes through. All the other light from the environment will drown out the image. You need to make sure that the only light you see is from the object.
The simplest "pinhole camera" is formed by the leaves on a tree. Did you ever notice how the sunlight coming through the leaves makes circles? Those are "pinhole camera images" of the sun. And when there is a partial eclipse of the sun, those circles turn into little crescent shapes:
Which only goes to show that a pinhole camera works really well when there is more light coming from the subject (in this case, the sun) than from any other object.
Incidentally, you can sometimes get a similar effect in a dark church when a single beam of (sun) light comes through a hole in a dark window: when clouds pass near the sun, you will see their image on the floor/wall of the church.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Earth moves how much under my feet when I jump? If I'm standing at the equator, jump, and land 1 second later, the
Earth does NOT move 1000mph (or .28 miles per second) relative to me,
since my velocity while jumping is also 1000mph.
However, the Earth is moving in a circle (albeit a very large one),
while I, while jumping, am moving in a straight line.
How much do I move relative to my starting point because of this? I
realize it will be a miniscule amount, and not noticeable in practise,
but I'd be interested in the theoretical answer.
| Ugh, assuming constant radial gravity $g$ I need to solve the equations of motion in polar coordinates $r$, $\theta$ as
$$ \ddot{r} = r \dot{\theta}^2 - g \\ \ddot{\theta} =- \frac{2 \dot{r} \dot{\theta}}{r} $$
which I do not know how to do. When I find out I will add to this answer. This system varies the direction of gravity and not its magnitude, for an approximate solution that should be fairly accurate.
related notes
There is a trivial solution with $\theta=\dot\theta=\ddot\theta=0$ and $\ddot{r}=-g$, but this does not match the initial conditions of $$r(0)=R \\ \dot{r}(0)=v_{jump} \\ \theta(0)=0 \\ \dot\theta(0) = \Omega$$ where $R$ is the radius of the Earth, $\Omega$ its rate of rotation and $v_{jump}$ the take-off speed.
The ODE system is $g-r \omega^2 + \ddot{r}=0$ and $r \dot{\omega} + 2 \dot{r} \omega = 0$ with $\omega = \dot{\theta}$.
The solution to the second equation is
$$ \omega = \frac{\Omega R^2}{r^2}
\\ \dot{\omega} = -\left( \frac{2 \Omega R^2}{r^3}\right) \dot{r} $$
and so the first equation becomes
$$ \frac{{\rm d} \dot{r}}{{\rm d} t} = \frac{\Omega^2 R^4}{r^3} - g $$
which is solved by direct integration $\int \dot{r}\,{\rm d}\dot{r} = \int \left( \frac{\Omega^2 R^4}{r^3} - g \right)\,{\rm d} r $ as
$$ \frac{\dot{r}^2}{2} = - \left( \frac{\Omega^2 R^4}{2 r^2} + g r\right) + \left( \frac{\Omega^2 R^2}{2} + R g + \frac{v_{jump}^2}{2} \right) $$
Now for an approximation. Change variables to $y = r - R$ and $\dot{y}=\dot{r}$ to get
$$\boxed{ \dot{y}^2 = v_{jump}^2 + \Omega^2 R^2 - 2 g y - \frac{\Omega^2 R^4}{(y+R)^2} }$$
$$ \dot{y}^2 \approx v_{jump}^2 + y \left( 2 \Omega^2 R - 2 g \right) $$
$$ t = \int_0^y \frac{1}{\sqrt{v_{jump}^2 + y \left( 2 \Omega^2 R - 2 g \right)}}\,{\rm d} y $$
$$ y = v_{jump} t + \frac{1}{2} \left(\Omega^2 R-g \right) t^2 $$
which is Doh! nothing more than a projectile under constant gravity.
Let's do a 2nd order approximation of $\dot{y}^2$ above
$$ \dot{y}^2 \approx v_{jump}^2 + y \left( 2 \Omega^2 R - 2 g \right) - 3 \Omega^2 y^2 $$
with solution
$$ \boxed{ y(t) = \left( \frac{g}{3 \Omega^2} - \frac{R}{3} \right) \left( \cos(\sqrt{3} \Omega t)-1 \right) + \frac{v_{jump}}{\sqrt{3} \Omega} \sin(\sqrt{3} \Omega t) }$$
with time in the air:
$$ t = \frac{\pi}{\sqrt{3}\Omega} +
\frac{
2 \arctan\left( \frac{\Omega^2 R -g}{\sqrt{3} \Omega v_{jump}}\right) }{\sqrt{3}\Omega} $$
These equations match the numerical solution
My excel sheet with both the numeric and the above solution is at Public Dropbox. Caution you need to have macros enabled as they are used for the numeric results.
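If you don't have the spreadsheet handy, here is a short Python sketch of the same comparison (the values of $g$, $R$, $\Omega$ and a take-off speed of 3 m/s are my assumed example numbers); it integrates $\ddot{r} = \Omega^2R^4/r^3 - g$ with a basic RK4 stepper and compares the time in the air with the boxed closed-form result and the flat-Earth value $2v_{jump}/g$:

```python
import math

G = 9.81          # m/s^2, assumed constant radial gravity
R = 6.371e6       # m, Earth's radius
OMEGA = 7.292e-5  # rad/s, Earth's rotation rate
V = 3.0           # m/s, assumed take-off speed

def accel(r):
    # r'' = Omega^2 R^4 / r^3 - g   (angular momentum r^2*omega is conserved)
    return OMEGA**2 * R**4 / r**3 - G

def flight_time_numeric(dt=1e-4):
    r, v, t = R, V, 0.0
    while r >= R:                              # step until we land again
        k1r, k1v = v,             accel(r)
        k2r, k2v = v + dt*k1v/2,  accel(r + dt*k1r/2)
        k3r, k3v = v + dt*k2v/2,  accel(r + dt*k2r/2)
        k4r, k4v = v + dt*k3v,    accel(r + dt*k3r)
        r += dt*(k1r + 2*k2r + 2*k3r + k4r)/6  # one RK4 step
        v += dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += dt
    return t

w = math.sqrt(3.0) * OMEGA
t_boxed = math.pi/w + 2.0*math.atan((OMEGA**2*R - G)/(w*V))/w   # boxed result above
t_flat = 2.0*V/G                                                # naive projectile value

print(flight_time_numeric(), t_boxed, t_flat)   # all about 0.61 s for these numbers
```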
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 3
} |
Evaporation of water content from a solid material by applying low pressure I have a raw material which melts at $96\ ^\circ C$. My aim is to make water content evaporate at temperature below this temperature.
I can apply vaccume oven for this. I want to know at what pressure the water evaporates by keeping low temperature so that the material doesn't melt.
| Water will evaporate as long as the relative humidity of the surrounding air is below 100%, although the process may be (very) slow. You seem to be confusing evaporation with boiling, but you don't need to boil the water in the material to dry it.
It depends a lot on your configuration, but you are probably better off blowing hot dry air over your material than creating a vacuum, because then water vapor molecules will be moving away from the surface of your material by diffusion, which is normally a much slower process than convection.
Still, if you want to boil the water content of your material, your guide should be the vapor pressure of water at your temperature of choice, which you can check here. As an example, the vapor pressure at $90\ ^\circ\mathrm{C}$ is $70\ \mathrm{kPa}$, so if you heat your sample to that temperature and have a vacuum of less than $0.7\ \mathrm{atm}$, there will be boiling.
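For quick estimates at other temperatures, here is a small Python sketch using the Antoine equation for water (the constants are the commonly quoted fit for roughly 1–100 °C and should be treated as approximate):

```python
import math

# Antoine equation for water: log10(P/mmHg) = A - B/(C + T), T in deg C.
# Constants below are the commonly quoted fit for roughly 1-100 deg C (approximate).
A, B, C = 8.07131, 1730.63, 233.426

def water_vapour_pressure_kpa(t_celsius):
    p_mmhg = 10.0 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322          # 1 mmHg = 0.133322 kPa

for t in (25, 60, 90, 95):
    print(t, "degC ->", round(water_vapour_pressure_kpa(t), 1), "kPa")
# At 90 degC this gives roughly 70 kPa, i.e. boiling below about 0.7 atm, as stated above.
```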
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to calculate the Darcy-Weissbach friction factor for shear thinning laminar flow in a pipe? The Darcy-Weissbach friction factor for laminar flow would be $\frac{64}{Re}$
Now, having a shear thinning (non-newtonian) fluid where the viscosity is not constant how do I arrive at $Re$?
To know an apparaent viscosity, I'd need to know the shear rate, but that is not constant over the diameter of the pipes.
Obviously I need to make allowances anyway (like assuming that my fluid obeys a power law over the relevant shear rates), so the method doesn't need to be ultra-exact. But I will want to know where I'm off.
Googling this, I only found numerical/CFD solutions to far more complex problems and I couldn't draw my answer from there.
In my view, the objective of knowing the friction factor is for one to be able to calculate the pressure drop needed to push a given flow $Q$ through a given pipe diameter. This kind of relation exists for several models of non-Newtonian fluid; take for example the power law model:
$\tau=K\gamma^n$
In this case the solution gives:
$Q=\pi(\frac{\Delta P}{2KL})^{1/n}(\frac{n}{1+3n})R^{(1+3n)/n}$
where $R$ is the pipe radius and $L$ is the pipe length. You can rearrange an expression of this type to obtain an effective viscosity, depending on your definition of "effective viscosity". For example, "the value of viscosity that, plugged into the Newtonian pressure drop-flow relation, will give the correct value of pressure drop for the given $Q$".
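As an illustration (a sketch only; the consistency index $K$, flow index $n$, flow rate and pipe geometry are made-up example values), you can invert the relation above to get the pressure drop for a given flow, and check that $n=1$, $K=\mu$ recovers the Newtonian Hagen–Poiseuille result:

```python
import math

def pressure_drop_power_law(Q, R, L, K, n):
    """Invert Q = pi*(dP/(2KL))**(1/n) * (n/(1+3n)) * R**((1+3n)/n) for dP."""
    return 2.0 * K * L * (Q * (1.0 + 3.0*n) / (math.pi * n * R**((1.0 + 3.0*n)/n)))**n

# Made-up example: shear-thinning fluid (n < 1) in a 10 m long pipe of 2 cm radius.
Q, R, L = 1e-4, 0.02, 10.0                  # m^3/s, m, m
print(pressure_drop_power_law(Q, R, L, K=2.0, n=0.5), "Pa")

# Sanity check: n = 1, K = mu must reduce to Hagen-Poiseuille, dP = 8*mu*L*Q/(pi*R^4).
mu = 1e-3
print(pressure_drop_power_law(Q, R, L, K=mu, n=1.0),
      8.0*mu*L*Q/(math.pi*R**4))
```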
You can review the solutions for Bingham plastic or other type of models also.
A more general approach is the one used in the Rabinowitsch-Mooney relations, where you determine experimentally a relation between flow and pressure drop, which allows you to find the shear rate at the wall and deduce a shear rate-shear stress curve for the fluid. There are also definitions of a "generalized Re" for non-Newtonian flows.
Fluid dynamics books treat these topics in an accessible (algebraic, not CFD) manner; search for chapters on "non-Newtonian fluids", or review Perry's Chemical Engineers' Handbook.
My experience is from chemical engineering though.
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing that position times momentum and energy times time have the same dimensions I've been asked to show that both the position-momentum uncertainty principle and the energy-time uncertainty principle have the same units.
I've never see a question of this type, so am I allowed to substitute the units into the expressions and then treat them as variables?
If so, here's my attempt. Forgive me if I've done something silly, as I'm no physicist.
Starting with the position-momentum uncertainty principle:
$$\Delta{}x\Delta{}p \geq h / 4\pi$$
Substituting the units into the expression (at this point, dividing by $4\pi$ won't necessarily matter):
$$(m)\left(kg \cdot \frac{m}{s}\right) \geq J \cdot s$$
Combining $m$ and bringing $s$ to the other side:
$$\frac{kg \cdot m^2}{s^2} \geq J $$
Knowing that $J = kg \cdot m^2/s^2$:
$$J \geq J$$
Now, for the energy-time uncertainty principle:
$$\Delta{}E\Delta{}t \geq h / 4\pi$$
Substituting the units into the expression (again, dividing by $4\pi$ won't necessarily matter):
$$J \cdot s \geq J \cdot s$$
Dividing by $s$:
$$J \geq J$$
Is this valid? Or could I not be more wrong?
| I think you are on the right track. There are a couple of bits of advice you may follow:
*
*You may simply note that if $A \geq B$, then it follows that $A = B$ is a valid solution, thus $A$ and $B$ must have the same units.
Therefore $\Delta{p}\Delta{x}$ has the same units as $h$ which has the same units as $\Delta{E}\Delta{t}$.
*The method you used is called Dimensional Analysis and it's perfectly correct to use it.
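If you want to automate such a check, here is a tiny Python sketch (my own toy bookkeeping, not a standard library) that tracks dimensions as (mass, length, time) exponents:

```python
# Track dimensions as (mass, length, time) exponents: kg^a * m^b * s^c -> (a, b, c).
def times(d1, d2):
    return tuple(a + b for a, b in zip(d1, d2))

METRE    = (0, 1, 0)
SECOND   = (0, 0, 1)
MOMENTUM = (1, 1, -1)               # kg m / s
ENERGY   = (1, 2, -2)               # J = kg m^2 / s^2
PLANCK   = times(ENERGY, SECOND)    # J s

print(times(METRE, MOMENTUM))       # dimensions of (Delta x)(Delta p)
print(times(ENERGY, SECOND))        # dimensions of (Delta E)(Delta t)
print(times(METRE, MOMENTUM) == times(ENERGY, SECOND) == PLANCK)   # True
```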
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/48663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How come vibrations? We all know that sound sensation is produced only when sound waves reach upto us. We all know that sound waves are disturbances propagating in air, Vibration is necessary for the generation of sound, but it always forces me to ponder that how was it known or deduced that vibration is necessary for any form of sound wave? Hope someone explains me this.
|
how was it known or deduced that vibration is necessary for any form of sound wave?
It is likely that theories involving wave propagation have been around for thousands of years.
The speculation that sound is a wave phenomenon grew out of observations of water waves. The rudimentary notion of a wave is an oscillatory disturbance that moves away from some source and transports no discernible amount of matter over large distances of propagation. The possibility that sound exhibits analogous behavior was emphasized, for example, by the Greek philosopher Chrysippus (c. 240 B.C.), by the Roman architect and engineer Vetruvius (c. 25 B.C.), and by the Roman philosopher Boethius (A.D. 480-524). The wave interpretation was also consistent with Aristotle's (384-322 B.C.) statement to the effect that air motion is generated by a source, "thrusting forward in like manner the adjoining air, to that the sound travels unaltered in quality as far as the disturbance of the air manages to reach."
Excerpts from Chapter 1 of
Acoustics: An Introduction to Its Physical
Principles and Applications
by
Allan D. Pierce
(published by the Acoustical Society of America)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/49751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What is the covariant expression for action of the Lorentz force density on charge-current density? In a continuous medium the Lorentz force density is known to be written in the form:
$f_\alpha = F_{\alpha \beta} J^\beta$,
where $F_{\alpha \beta}$ is an electromagnetic field tensor, and $J^\beta$ is a charge-current density.
Whould it be correct saying that the action of this force on charge-current density reads as follows:
$\frac{dJ^\alpha}{dt} = f^\alpha = F^{\alpha \beta}J_\beta$ ?
It seems reasonable because in this case the charge-current density 4-vector undergoes Lorentz transformation, i.e. it is "accelerated" along direction of $\vec{E}$ proportionally to the magnitude of $\vec{E}$, and "rotated" around direction of $\vec{B}$ to the angle proportional to the magnitude of $\vec{B}$.
For a cold (no pressure) charged gas the electromagnetic field must contain a self-field contribution too in order to describe the density variations due to repulsion of charges. Plasma equations contain it all.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/49821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Quantum superposition of states: experimental verification How can somebody demonstrate the quantum superposition of states directly by other means than the double slit experiment?
And why can't macroscopic objects like a pen be in superpostion of states? Will it ever be possible to have an object like a pen to be in superposition of more than one state?
| Others have covered the usual microscopic systems that are currently amenable to controlled quantum manipulations, but there are in fact "macroscopic" objects (to some definitions of the word) that can be placed in quantum superpositions. These fall broadly within a field known usually as cavity optomechanics, which has a reasonable wikipedia page. The essential idea is to couple the oscillations of light to those of one of the mirrors of a cavity; this allows 'quantumness' in the state of the light to translate into the state of the mirror.
In this and usually all macroscopic examples, it is only one degree of freedom - the centre-of-mass position in this case - that gets put in a superposition; all other degrees of freedom are left in classical, often thermal, states. This is nevertheless quite enough to get quantum behaviour. For example, you can obtain double-slit interference patterns with buckyballs (C$_{60}$) using their centre-of-mass positions, while maintaining the rotational and vibrational degrees of freedom (which are merely a better way of accounting for all motional degrees of freedom apart from the average) in (fairly cold) thermal, classical states.
Another quantum superposition of a macroscopic object - (just) visible to the naked eye! - is the placing of a microwave oscillator in a superposition state using other microwave sources en lieu of laser beams on an atom. This is fairly well explained by Aaron O'Connell in his TED talk. For more formal references try the UCSB Cleland group page; good popular science articles are listed in O'Connell's wikipedia page.
Finally, I would try macroscopic quantum phenomena" in wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/49865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Can an induction coil heat two layers of metal? Imagine we have an induction coil which is strong enough to heat a sheet of metal. We can put a sheet of ferromagnetic metal close to the coil at distance $h_1$, and it gets heated to temperature $t_1$, or at distance $h_2 > h_1$ so that the sheet gets heated to temperature $t_2 < t_1$.
I want to know what happens if we have two identical sheets at once, one at distance $h_1$ and one at distance $h_2$, on the same side of the coil, with some insulator between the sheets (the insulator does not conduct electricity, is not ferromagnetic, and does not conduct heat well). Will the sheet farther away from the coil heat up at all, or will the closer sheet shield it from the electromagnetic field in some way? What will the temperatures $t_1'$ and $t_2'$ of the sheets in this experiment be, higher, lower or the same as $t_1$ and $t_2$?
Does the answer to the above change if we have a small conducting connection between the two sheets of metal, e.g. a wire which touches both the close and the far one, but most of their surface is still separated?
The application for this question: I am thinking of getting a cast iron waffle iron to use on my induction stove, and I am trying to imagine how this will function. By the way, I know that I will get some heat conducted through the waffle itself, and I will probably turn it anyway so both plates get hot, but please ignore these effects when answering the question and tell me the effects of induction only.
| The answer is no. The penetration depth of the magnetic field in the first sheet is too small. Read this for example. The penetration depth $\delta$ is typically given by a formula looking something like this:
$$
\delta=\sqrt{\frac{\rho}{\pi\mu f}}\approx\sqrt{\frac{1\cdot10^{-7}}{\pi\cdot8.8\cdot10^{-4}\cdot20\cdot10^3}}\approx4.3\cdot10^{-5}\mathrm{m}=0.043\mathrm{mm}
$$
where $\rho$ is the resistivity of the material (I've assumed steel), $\mu$ is the magnetic permeability (I've assumed steel) and $f$ is the frequency of the magnetic field (20 kHz is on the lower end of the range used in induction stoves. Higher frequencies would give even shorter penetration depths.) Assuming each side of the waffle iron is considerably thicker than 0.043 mm, your plan won't work.
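For reference, the same estimate in a few lines of Python (the steel resistivity and relative permeability are the rough values assumed above, and the frequencies are typical induction-stove values):

```python
import math

def skin_depth(rho, mu_r, f):
    """Penetration depth delta = sqrt(rho / (pi * mu * f)), in metres."""
    mu = mu_r * 4e-7 * math.pi          # mu = mu_r * mu_0
    return math.sqrt(rho / (math.pi * mu * f))

rho_steel = 1e-7     # Ohm*m, rough value assumed above
mu_r_steel = 700     # relative permeability, rough value (gives mu ~ 8.8e-4 H/m)
for f in (20e3, 50e3, 100e3):           # typical induction-stove frequencies
    print(f, round(skin_depth(rho_steel, mu_r_steel, f) * 1e3, 4), "mm")
```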
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/49955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Clarifications about Poisson brackets and Levi-Civita symbol I need some clarifications about Poisson brackets.
I know canonical brackets and the properties of Poisson Brackets and I also know something about Levi-Civita symbol (definition and basic properties), but I have some doubts.
I don't know how I could apply the properties of Poisson brackets if I have a summation, for example if, in the case of a system of N particles, I have to work out $[L_i, x_{\alpha j}]$. I know that a generic component of the total angular momentum is given by $L_a=\sum_{\alpha=1} ^N l_{\alpha a}$ and also that a component of the angular momentum of a particle is given by $l_a= \epsilon_{aij} x_i p_j$.
Now, if I have to calculate $[L_i, x_{\alpha j}]$, I have these doubts:
1) how can I decide the indices of Levi-Civita symbol that I'm going to use for solving the problem?
2) how can I use the property of linearity of Poisson brackets in this case?
and an other (general) question:
3) If I have a Levi-Civita symbol that multiplies a sum of two terms and each term is multiplied by a Kronecker delta, I have to follow these steps:
a) multiply the Levi-Civita symbol by each term
b) impose the condition under which each Kronecker delta is not equal to zero
c) finally, substitute these conditions in the two Levi-Civita symbols, but I have to substitute in each Levi-Civita symbol the condition that I found for the Kronecker delta that at step a) was multiplying just that one Levi-Civita symbol
Is this the correct way to proceed?
| 1.) Always choose indices in such a way that the free indices on both sides of the equation match. Furthermore, make sure that you don't mix up summation indices of different summations.
2.) Linearity implies that if you enter a sum as the argument of a Poisson brackets, you get a sum of Poisson brackets, with each of them having a single term your original sum as the argument, i.e.
\begin{equation}[\Sigma^N_{\alpha=1}B_\alpha,A]=[B_1,A]+[B_2,A]+[B_3,A]+\ldots+[B_N,A] \end{equation}
3.) If you mean something like
\begin{equation}\epsilon_{ijk}(A_{jl}\delta_{lk}+B_{kl}\delta_{jl})=\epsilon_{ijk}A_{jl}\delta_{lk}+\epsilon_{ijk}B_{kl}\delta_{jl}=\epsilon_{ijk}A_{jk}+\epsilon_{ijk}B_{kj},\end{equation}
the answer is yes.
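If it helps to check the index bookkeeping explicitly, here is a short SymPy sketch for a single particle (in the $N$-particle case, by linearity, only the term with the same $\alpha$ as the coordinate survives the sum). It evaluates the canonical bracket $\{l_a, x_b\}$ directly and recovers $\epsilon_{abc}x_c$:

```python
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x1:4')     # x1, x2, x3
p = sp.symbols('p1:4')     # p1, p2, p3

def poisson(f, g):
    """Canonical bracket {f,g} = sum_k (df/dx_k dg/dp_k - df/dp_k dg/dx_k)."""
    return sum(sp.diff(f, x[k])*sp.diff(g, p[k]) - sp.diff(f, p[k])*sp.diff(g, x[k])
               for k in range(3))

def l(a):
    """a-th component of a single particle's angular momentum, l_a = eps_{aij} x_i p_j."""
    return sum(LeviCivita(a, i, j) * x[i-1] * p[j-1]
               for i in range(1, 4) for j in range(1, 4))

for a in range(1, 4):
    for b in range(1, 4):
        lhs = sp.expand(poisson(l(a), x[b-1]))
        rhs = sum(LeviCivita(a, b, c) * x[c-1] for c in range(1, 4))
        print(a, b, lhs, sp.simplify(lhs - rhs) == 0)    # always True
```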
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/50035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Geometrical significance of gauge invariance of the QED Lagrangian The QED Lagrangian is invariant under
$\psi(x) \to e^{i\alpha(x)} \psi (x)$, $A_{\mu} \to A_{\mu}- \frac{1}{e}\partial_{\mu}\alpha(x)$. What is the geometric significance of this result? Also why is $D_{\mu}=\partial_{\mu}+ieA_{\mu}(x)$ called the covariant derivative?
| You can interpret gauge invariance in terms of fiber bundles. One can think of a fiber as the space of configurations of the gauge field, connected by gauge transformations.
The covariant derivative is called "covariant" because it transforms covariantly, i.e.
$\begin{equation}D'_\mu=U(x)D_\mu U^{-1}(x)\end{equation}$ with $U(x)=e^{i\alpha(x)}$, so that $D_\mu\psi$ transforms in exactly the same way as $\psi$ itself.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/50084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
What is the electrical conductivity (S/m) of a carbon nanotube? I have been searching around for a while for this but I am having trouble finding any actual figures, all I can seem to find is that it is "very high".
So I am wondering, does anyone have any figures of what the electrical conductivity of a carbon nanotube is, a theoretical or estimated answer is fine. I am preferably looking for the answer in $Sm^{-1}$.
| The numbers will greatly vary depending on the kind of nanotube. The following are some examples from cursory Google searches.
Electrical conductivity was increased by 50 percent to 1,230 siemens
per meter.
http://news.ncsu.edu/releases/wms-zhu-cnt-composites/
And that’s not all: colossal carbon tubes are ductile and can be
stretched, which makes them attractive for applications requiring high
toughness. They also have high electrical conductivities of around $10^3$
siemens per centimetre at room temperature, compared with $10^2$ siemens
per centimetre for multi-walled carbon nanotube fibres.
http://physicsworld.com/cws/article/news/2008/aug/08/carbon-nanotubes-but-without-the-nano
The researchers found that the electrical conductivity increased with
increasing nanotube content and temperature – in contrast to earlier
findings. They observed a maximum conductivity of 3375 siemens per
metre at 77°C in samples that were 15% nanotube by volume.
http://physicsworld.com/cws/article/news/2003/aug/20/nanotubes-boost-ceramic-performance
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/50148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Crystal momentum and the vector potential I noticed that the Aharonov–Bohm effect describes a phase factor given by $e^{\frac{i}{\hbar}\int_{\partial\gamma}q A_\mu dx^\mu}$. I also recognize that electrons in a periodic potential gain a phase factor given by $e^{\frac{i}{\hbar}k_ix^i}=e^{\frac{i}{\hbar}\int k_idx^i}$. I recall that $k_i$ plays a role analogous to momentum in solid state physics. I also recall that the canonical momentum operator is $P_\mu=-i\hbar\partial_\mu-qA_\mu$. Notice that when you operate with the momentum operator on a Bloch electron, $\psi(x)=u(x)e^{\frac{i}{\hbar}k_ix^i}$, you get $e^{\frac{i}{\hbar}k_ix^i}(-i\hbar\partial_i+k_i)u(x)$.
My question is whether a parallel can be drawn between the crystal momentum, $k$, and the vector potential $A$. It seems they play a similar role quantum mechanically, but I have never seen Bloch's theorem described in terms of vector potentials. I suppose one does not even need a nontrivial vector potential for Bloch's theorem to hold. Still, crystal momentum and the vector potential play very similar roles in phase factors and I wonder whether there is any deeper meaning to that.
| While all the statements you made about crystal momentum only apply exactly for Bloch states in which the momentum operator is diagonal, the fact that the phase due to the vector potential is $e^{i \int A}$ is true for all states in the one-charged-particle Hilbert space.
This is of course a manifestation of the fact that momentum and an EM field are physically very different and any analogy you want to draw won't go very far.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 2
} |
What is the nature of the correspondence between unitary operators and reversible change? Why does the formalism of QM represent reversible changes (eg the time evolution operator, quantum gates, etc) with unitary operators?
To put it another way, can it be shown that unitary transformations preserve entropy?
| Like all 20th century physics, the formalism is invariant with respect to time reversal. This was true in classical mechanics and it remains true in QM because canonical quantization does not alter the meaning of energy - it just becomes an evolution operator. Unitary operators satisfying $A A^{\dagger} = I$ are associated logarithmically to Hermitian ones with real eigenvalues. Since measured quantities must be real, this was historically the inevitable route for QM.
Entropy should not be confused with unitary evolution. It is a thermodynamic quantity requiring a system of many states and it always increases for one direction in time, even when the underlying laws are time reversal invariant. This was Boltzmann's original insight (see kinetic theory for gases). With black holes we can talk about other forms of entropy, but the general view is that unitary evolution applies to all the usual quantum states so that information is conserved even for black hole processes. This is the so called Information Paradox.
The story is much more complicated in QFT but time reversal symmetry is still maintained at a fundamental level, even though there is a time operator with symmetry broken for specific interactions by CP violation.
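Regarding the second part of the question: for the von Neumann entropy $S=-\mathrm{Tr}\,\rho\ln\rho$ the preservation under unitary evolution is immediate, since $U\rho U^{\dagger}$ has the same eigenvalues as $\rho$. A quick numerical sketch with random matrices (plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

a = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
rho = a @ a.conj().T                       # positive semidefinite
rho /= np.trace(rho).real                  # unit trace -> a valid density matrix

q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))
U = q                                      # QR of a random complex matrix gives a unitary

def von_neumann_entropy(r):
    ev = np.linalg.eigvalsh(r)
    ev = ev[ev > 1e-12]                    # drop numerical zeros
    return float(-np.sum(ev * np.log(ev)))

print(von_neumann_entropy(rho), von_neumann_entropy(U @ rho @ U.conj().T))  # equal
```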
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Shape of a string/chain/cable/rope/wire? The height of a string in a gravitational field in 2-dimensions is bounded by $h(x_0)=h(x_l)=0$ (nails in the wall) and also $\int_0^l ds= l$. ($h(0)=h(l)=0$, if you take $h$ as a function of arc length)
.
What shape does it take?
My try so far: minimise potential energy of the whole string,
$$J(x,h, \dot{h})=\int_0^l gh(x) \rho \frac{ds}{l}=\frac{g \rho }{l}\int_0^l h(x) \sqrt{1+\dot{h}^2} dx$$
With the constraint
$$\int_0^l \sqrt{1+\dot{h}^2} dx- l=0$$
If it helps, it's evident that $\dot{h}(\frac{l}{2})=0$.
Generally, this kind of equation is a case of a constrained variational problem, meaning that the integrand in
$$\int_0^l \frac{g \rho }{l}h(x) \sqrt{1+\dot{h}^2} +\lambda(\int_0^l \sqrt{1+\dot{h}^2} dx- l)dx$$
Must satisfy the Euler Lagrange equation. The constraint must also be satisfied.
But, in truth, by this point I am clueless. $\lambda$ is worked through $\nabla J = \lambda \nabla(\int_0^l \sqrt{1+\dot{h}^2} dx- l)$. I have tried this , but get nonsensical answers.
Is this method the best? If so, in what ways am I going about it wrongly thusfar?
Your approach so far is correct. Now, the first thing to do is to set up coordinate axes. In this case, I will say that the “poles” are at x=-a and x=a, and y=0 is the height at which the rope is attached to the “poles”. Now, we can assume a constant linear density μ. This is not strictly necessary, but it makes the calculations quite a bit easier. Now, we can set up our constraint in this problem, which in this case is the length.
$$ J \equiv L = \int_{string} dS = \int_{-a}^{a} \sqrt{1 + (\frac{dy}{dx})^2} dx$$
From now on, in order to maintain brevity, I will write $\frac{dy}{dx}$ as y’. Now, we need to find the quantity that needs to be minimized in this problem. In this case, we want to minimize the potential gravitational energy, $U_g$. So now we need to find the value of our differential, $dU_g$. Because $U_g = mgy$, it is easy to see that $dU_g = (μ dS) \cdot gy$. Putting this into an integral:
$$ U_g = \int_{-a}^{a} μgy \sqrt{1 + (y’)^{2}} dx$$
Now we need to implement our length constraint:
$$ K \equiv U_g + λJ = \int_{-a}^{a} [μgy \sqrt{1 + (y’)^{2}} + λ \sqrt{1 + (y’)^2}] dx$$
Now upon inspection, we can consider the function $$F(x, y, y') \equiv μgy \sqrt{1 + (y’)^{2}} + λ \sqrt{1 + (y’)^2}$$
Now notice that $F$ does not depend explicitly on $x$, so we can use the Beltrami Identity:
$$F - y' \cdot \frac{\partial F}{\partial y'} = C$$
Applying this identity,
$$μgy(1+(y')^2) + λ(1+(y')^2) - μgy(y')^2 - λ(y')^2 = C\sqrt{1+(y')^2}$$
$$μgy + λ = C\sqrt{1+(y')^2}$$
$$(μgy + λ)^2 = C^2 + C^2(y')^2$$
$$y' = \sqrt{\frac{(μgy + λ)^2}{C^2} - 1} \Rightarrow dx = \frac{1}{\sqrt{\frac{(μgy + λ)^2}{C^2} - 1}} dy$$
Now, although this integral might look challenging, it can be made quite easier with a simple substitution:
$$let\ \cosh u = \frac{μgy + λ}{C} \Rightarrow \sinh u\ du = \frac{μg}{C} dy$$
$$x+K_1 = \frac{Cu}{μg} \Rightarrow x+K_1 = \frac{C}{μg} \cosh ^{-1}(\frac{μgy + λ}{C})$$
$$\cosh (\frac{μg}{C} (x+K_1)) = \frac{μgy + λ}{C}$$
$$y = \frac{C}{μg} \cosh (\frac{μg}{C} (x+K_1)) - \frac{λ}{μg}$$
Now in order to solve for some of these constants, we can apply our boundary condtions.
When x = $\pm a$, $y = 0$:
$$0 = \frac{C}{μg} \cosh (\frac{μg}{C} (a+K_1)) - \frac{λ}{μg} = \frac{C}{μg} \cosh (\frac{μg}{C} (-a+K_1)) - \frac{λ}{μg}$$
$$\cosh (\frac{μg}{C} (a+K_1)) = \cosh (\frac{μg}{C} (-a+K_1))$$
$$K_1 = 0$$
$$0 = \frac{C}{μg} \cosh (\frac{μga}{C}) - \frac{λ}{μg} \Rightarrow λ = C \cosh (\frac{μga}{C})$$
$$\boxed{\therefore\ y = \frac{C}{μg}( \cosh (\frac{μgx}{C}) - \cosh (\frac{μga}{C}))}$$
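The remaining constant $C$ is fixed by the length constraint we started from: the arc length of this curve is $\frac{2C}{μg}\sinh(\frac{μga}{C})$, which must equal the given rope length (larger than $2a$). Here is a small Python sketch (simple bisection; the values of $a$, the rope length, $μ$ and $g$ are made-up examples) that solves for $C$ and evaluates the lowest point:

```python
import math

mu, g = 0.1, 9.81       # assumed linear density (kg/m) and gravity
a, rope_len = 1.0, 2.5  # half-span and rope length (must satisfy rope_len > 2a)

def arc_length(C):
    return 2.0 * C / (mu * g) * math.sinh(mu * g * a / C)

# arc_length(C) decreases monotonically towards 2a as C grows, so bisect for C.
lo, hi = 1e-3, 1e3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if arc_length(mid) > rope_len:
        lo = mid         # curve too long -> need a larger C (flatter curve)
    else:
        hi = mid
C = 0.5 * (lo + hi)

def y(xx):
    return C / (mu * g) * (math.cosh(mu * g * xx / C) - math.cosh(mu * g * a / C))

print(C, arc_length(C), y(0.0))   # y(0) < 0 is the lowest point of the hanging rope
```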
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Translation Invariance without Momentum Conservation? Instead of the actual gravitational force, in which the two masses enter symmetrically, consider something like $$\vec F_{ab} = G\frac{m_a m_b^2}{|\vec r_a - \vec r_b|^2}\hat r_{ab}$$ where $\vec F_{ab}$ is the force on particle $a$ due to particle $b$ and the units of $G$ have been adjusted. Whenever the masses are unequal, the forces are not equal and opposite, violating Newton's third law and conservation of momentum in the process.
As momentum conservation has been violated, my understanding is that translation invariance should be violated as well by this force. But the force law still depends only on separations rather than absolute coordinates, so the physics seems to be translation invariant. What am I getting wrong?
Your forces are always equal. It is the accelerations that are unequal in the case of unequal masses. The situation is similar to the Coulomb interaction. The total momentum is conserved. There is no problem here.
EDIT: As Michael Brown kindly pointed out, the forces are implied to be different. Then indeed the momentum conservation does not hold. The situation is similar to that with a known motion of a "sourcing body" $\vec{r}_b (t)$: although the force on a probe body at $\vec{r}_a$ depends only on the relative distance $|\vec{r}_a$-$\vec{r}_b(t)|$, the momentum is not conserved (neither is the energy).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
What is the current through the lamp? We have the following circuit:
A neon lamp and an inductor are connected in parallel to a battery of 1.5 $V$. The inductor has 1000 loops, a length of $5.0\ cm$, an area of $12\ cm^2$ and a resistance of $3.2\ \Omega$. The lamp shines when the voltage is $\geq 80\ V$.
*
*When the switch is closed, $B$ in the inductor is $1.2\times 10^{-2} T$.
*The flux then is $1.4 \times 10^{-5} Wb$
(calculated myself, both approximations).
You open the switch. During $1.0 \times 10^{-4} s$ there is induction. Calculate how big the current through the lamp is.
My textbook provides me with the following answer:
$U_{ind} = 1000 . 1.4 \times 10^{-5} / 1.0 \times 10^{-4} = 1.4 \times 10^{2} V$.
$ I = U/R_{tot} = 1.4 \times 10^{2} / (3.2+1.2) = 32A$
My concerns:
*
*How do we know that $1.4 \times 10^{-5}$ is $|\Delta \phi|$? This is the flux in the inductor while the switch is closed, but when you open it, doesn't induction increase/decrease the flux? Or will the flux just become 0 and hence give us $|\Delta\phi| = 1.4 \times 10^{-5}$?
*Why do we have to take the $R_{tot}$? What does the resistance of the inductor have to do with the lamp?
p.s. - This question can't be asked on electronics SE, since their site doesn't allow for such a question.
Yeah, OK, so this problem is, like I said, a little silly. It seems like you have to assume the current drops to zero in the given time, and therefore so does the flux. This gives you the first part, the induced voltage across the inductor. For the second part, it seems we simply have to apply Kirchhoff's loop rule and Ohm's law to find the current in the loop.
This all seems very odd to me, because we are assuming the current is changing in order to induce a voltage, but also a single value for the current. Really, the current should be time dependent and the induction occurs for all time, not simply a finite amount. For the sake of completing this homework problem, we are done, but in reality we have to solve a differential equation and end up with exponential behavior.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Confused about magnetic and geographic north I'm slightly confused about the "north-seeking" pole of a magnet: does it point towards magnetic north, or is it towards geographic north?
I ask because I've been finding different explanations in different places.
'Magnetic North' is a LOCATION, not the polarity of that location. It is so named to distinguish it from 'True North'. Since the 'North-Seeking' pole of a compass needle points in that direction, 'Magnetic North' (the location) has a south magnetic polarity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Confused about fire? Im confused about fire.
The way I see it :
*
*Heat gives (kinetic) energy to matter and this creates stronger vibrations of the atoms.
*When those vibrations are strong enough, the electrons interact more strongly due to electromagnetic forces.
*This causes the electrons to fly away. This is the creation of a plasma, and the direction of most electrons is the direction of the flame.
*the shape and heat of the flame depend mainly on the amount of electrons and their velocity (apart from the source of heat and the considered gas and solid material)
*the electrons move freely (but not randomly) in the flame.
*there is a chemical reaction going on that usually produces smoke and such.
*the entropy increases in all the above 6 steps and the process is repeated.
*the electromagnetic energy is converted into kinetic energy of the electrons and conductivity in the plasma (flame).
*the moving electrons create an electromagnetic field.
That is more or less how I see it.
What steps are ok?
And the main question: why is there light involved?
I mean, why can't we just have kinetic energy of the electrons and a changing magnetic field? Why are there photons?
If the electrons are in a high energy state, why can't they just move faster instead of emitting a photon?
Why is the light usually visible? Is that because of the structure of our air?
If the light is an electromagnetic wave, why does the flame go up? Does that imply all electromagnetic waves go up? But electromagnetic waves are not influenced by gravity or buoyancy, are they?
What is the final destiny of the electrons and photons once the heat is gone?
I am puzzled by this.
| Fire is a bunch of subatomic particles. Because they are not bound into an atom, fire technically is a plasma.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What exactly are we doing when we set $c=1$? I understand the idea of swapping from unit systems, say from $\mathrm{m\ s^{-1}}$ to $\mathrm{km\ s^{-1}}$, but why can we just delete the units altogether?
My question is: what exactly are we doing when we say that $c=1$?
| All we're doing is using a set of units where certain quantities happen to take convenient numerical values. For example, in the SI system we might measure lengths in meters and time intervals in seconds. In those units we have $c = 3 \times 10^8\ \text{m}/\text{s}$. But you could just as well measure all your distances in terms of some new unit, let's call it a "finglonger", that is equal to $2.5 \times 10^6\ \text{m}$, and time intervals in a new unit, we'll call it the "zoidberg", that is equal to $8.33 \times 10^{-3}\ \text{s}$. Then the speed of light in terms of your new units is
$$ c = 3 \times 10^{8}\ \text{m}/\text{s} = 1\ \frac{\text{finglonger}}{\text{zoidberg}} .$$
The units are still there – they haven't been "deleted" – but we usually just make a mental note of the fact and don't bother writing them.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 9,
"answer_id": 5
} |
Current without Voltage and Voltage without Current? At school I've always learned that you can view Current and Voltage like this:
The current is the flow of charge per second and the Voltage is how badly the current 'wants' to flow.
But I'm having some trouble with this view. How can we have a Voltage without a current? There is nothing to 'flow', so how can it be there? Or is it 'latent' voltage, I mean is the voltage just always there and if a current is introduced it flows?
Also, I believe you can't have current without voltage. This to me seems logical from the very definition of current. But if you have a 'charge' without a voltage, doesn't it just stay in 1 place? Can you view it like that? If you introduce a charge in a circuit without a voltage it just doesn't move?
For example, a battery has a voltage across its terminals even when it is not connected to anything.
Thus voltage (the potential difference between two points) can exist without current (the flow of charge with respect to time), but current doesn't exist without voltage.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Is Schrödinger’s cat misleading? And what would happen if Planck constant is bigger? Schrödinger’s cat, the thought experiment, makes it seem like as if measurement can cause a system to stop being in a superposition of states and become either one of the states (collapsed).
So does a system always exist in a superposition? In this sense, do things in the macroscopic world have a wavefunction? Is it because of the size of the everyday object, so it won't behave so much like an electron? Theoretically, if the Planck constant is to be bigger, everyday object would start behaving more like particles in a quantum scale?
| I'm pretty sure Planck's constant only refers to the size of quantum scale objects. If it were larger, all it would mean is that Newtonian physics would only apply at a larger scale.
And Schrodinger's cat only means that until observed, we cannot be sure of the state. It has a state, whether we know it or not. What we are not able to know is which state it is in. Observing something doesn't set the state, it just stops us from needing to account for multiple possible stats. And quantum mechanics is based partly around not knowing beforehand which state it will be in.
Macroscopic objects do have a wavefunction, actually. Since for any wave, E=hc/lambda, lambda = hc/E. You can then substitute in the formula for E to get the wavefunction of that object. Most objects simply do not have a visible wavelength, but a good example of adding energy to an object to shift it into visible light is heating up metal until it glows.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/51935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The viscous force between the layers of liquid is same, then why there is variation in the velocities of its layers? I have learned in my textbook that when the liquid flows the bottom layer of the liquid never moves because of friction, but the upper layers move with increasing velocities how it is possible if the viscous force between all these layers is same
| Viscosity and friction are not the same thing. Viscosity is about how a unit of a fluid is sheared between regions of different velocity. Friction is about how one body has zero velocity with respect to another until a certain minimum amount of shear stress is applied.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Where does the mass term come from in the Proca Lagrangian? There are many good books describing how to construct the Lagrangian for an electromagnetic field in a medium.
$$
\mathcal{L}~=~-\frac{1}{16\pi}F^{\mu\nu}F_{\mu\nu}-\frac{1}{c}j^{\nu}A_{\nu}
$$
When moving to the Proca Lagrangian (and a massive photon), I know what the mass term looks like but I have no idea where it came from.
$$
\mathcal{L}~=~-\frac{1}{16\pi}F^{\mu\nu}F_{\mu\nu}-\frac{1}{c}j^{\nu}A_{\nu}+\frac{\mu^{2}}{8\pi}A_{\mu}A^{\mu}
$$
Why is $A_{\mu}A^{\mu}$ the correct term to include? I guess that it must be Gauge and Lorentz invariant so why wasn't it included in the original Lagrangian? Why is the factor of $\frac{\mu^{2}}{8\pi}$ needed?
| In addition to the answer of Sam's, I would say there is no gauge invariance requirement for the massive field $A$, only Lorentz covariance.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is it possible for a physical object to have an irrational length? Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational.
If I were to use this caliper to measure any small object, would the caliper ever return an irrational number, or would the true dimensions of physical objects be constrained to rational numbers?
I think that because you have measurements that are real numbers, not isomorphic to the natural numbers or countably infinite, you have in effect assumed the universe to be infinitely dense. Therefore any measurement, as mentioned in some other answers, would justly be required to have infinitely many decimal points.
This is seen from the fact that the set of real numbers can be viewed as a set of infinite sequences of integers. Because measurements are positive, any measurement can be represented in the form $r = \sum_{i=0}^{\infty}\frac{a_i}{10^i}$ such that $a_0 \in \Bbb N$ and for $ i>0; a_i \le 9$. Then $r$ is defined as the limit as $i$ to $\infty$.
So in short you can see the irrational measurements just correspond to specific types of sequences above where they do not repeat.
Rather than going further in defining the integer sequences, I would like to consider other notions as well. The possible measurements are not countable!
Keep in mind that though traditionally the mathematics used in physics is defined over the reals or complex numbers, actual calculations typically involve sets that are isomorphic to the integers or at least countable.
It seems that mathematics considers the realm of possibility (where some reals aren't even definable); I do not know if it corresponds to the constituents of the universe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 13,
"answer_id": 9
} |
Representations of Lie algebras in physics Why is an invariant vector subspace sometimes called a representation? For example in Lie algebras, say su(3), the subspace characterized by the highest weight (1,0) is an irreducible representation of dimension 3 of su(3).
However, a representation of a Lie algebra is a Lie algebra homomorphism from the algebra to a subspace of the so called general linear algebra of some vector space. Or in other words, the representation is a map that assigns elements of the algebra to elements of the set of linear endomorphisms of some vector space.
In the previous example, the subspace (1,0) is a subspace in which the action of the endomorphisms maps its elements into themselves. So by the definition, the irreducible representation should be the mapping that associates the endomorphisms to the elements of the algebra and not the space in which they act.
| To add to what @Qmechanic says, note also that for a representation $\rho:\mathfrak g\to \mathfrak{gl}(V)$ of a Lie algebra $\mathfrak g$ acting on a vector space $V$, a vector subspace $W$ of $V$ is called an invariant subspace of the representation provided $\rho(X)w\in W$ for all $X\in\mathfrak g$ and $w\in W$. A representation $\rho$ is said to be irreducible provided it has no invariant subspaces except for $\{0\}$ and $V$. This is why one encounters physicists "identifying irreducible representations with invariant subspaces." Strictly speaking, it doesn't make sense to even talk about invariant subspaces of a vector space unless you already have the representation in hand.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can low-gravity planets sustain a breathable atmosphere? If astronauts could deliver a large quantity of breathable air to somewhere with lower gravity, such as Earth's moon, would the air form an atmosphere, or would it float away and disappear? Is there a minimum amount of gravity necessary to trap a breathable atmosphere on a planet?
| The Moon's surface gravity (about 1.62 m/s^2) is actually slightly higher than Titan's (about 1.35 m/s^2), and Titan has a thick hydrocarbon atmosphere, so I cannot believe for one second that the Moon's gravity is too weak to retain a viable atmosphere.
Factors like the solar wind stripping the atmosphere away, due to the lack of protection from a magnetic field, are a valid explanation, but low gravity cannot be, because the existence of Titan disproves that.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 3
} |
Equivalence principle question I understand the equivalence principle as "The physics in a freely-falling small laboratory is that of special relativity (SR)." But I'm not quite sure why this is equivalent to "One cannot tell whether a laboratory on Earth is not actually in a rocket accelerating at 1 g".
| The equivalence principle (the version you mention) means that you cannot tell, locally, whether you are in a freely falling frame or in "outer space", i.e., in a region of space with no gravitational field. This version is the EEP.
There is another version, namely, WEP which says that the inertial mass is the same as the gravitational mass. This means that when standing on earth, the force you experience is proportional to your gravitational mass, which is the same mass that you would put in Newton's second law. Thus, if you are in a rocket accelerating at 1g you will experience the same force.
Now, from WEP you see that if you throw a ball upwards, the trajectory it follows will be the same on earth and in the rocket. Therefore we can reformulate WEP in terms of freely falling objects: locally, the motion of freely falling particles are the same in a uniformly accelerated frame and in a gravitational field.
Of course, WEP doesn't imply EEP because in SR the mass is not unique. But it is easy to see that you can generalize it by imposing SR in the motion of the objects.
However, the EEP is stronger in the sense that not only are the trajectories the same, but all the laws of physics are. This is the basis of general relativity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
What would be the effect of a slanted muzzle on the trajectory of a bullet? Let's say I cut off the end of a gun barrel at a 45° angle: What would the effect be on the trajectory of a bullet fired through that barrel?
Would the bullet be less stable (I guess)? Would it make the gun fire with an angle, and would that be toward the "small" end?
| Suppose you make the cut so that the slant is downward – i.e., the top part of the barrel is shortest, the bottom is longest. The bullet is driven down the barrel by the pressure of propellant gases on its base. As the base of the bullet leaves the barrel the propellant gases begin to act asymmetrically on the base. If the bullet weren't stabilized you might expect this asymmetry to introduce an overturning moment.
However, bullets are spin stabilized and so in practice no such effect is observable. In fact a slant cut like you describe is the simplest form of "muzzle brake" and has been used on many rifle designs: Its benefit (as with all muzzle brakes) is to divert some of the muzzle blast from the direction of the bullet, reducing the recoil force on that axis, and in particular offsetting "muzzle rise" associated with most small arms (which discharge from an axis above their center of mass).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Particle in infinite potential well which is doubled in size at $t_0$ I am currently studying for an exam in Quantum Mechanics and came across a solution to a problem I have trouble with understanding.
The Problem:
A Particle sits in an infinite potential well described by
\begin{align}
V(x) &= 0, & 0 \leq x \leq L \\
V(x) &= \infty, & \text{otherwise}
\end{align}
We know that the energies are given by $E_n = \dfrac{n^2 \pi^2 \hbar^2}{2 m L^2}$ and $\Psi(x) = A_n \sin(n \pi x /L)$.
At time $t_0$ the potential well is suddenly doubled in size, such that the potential is now
\begin{align}
V(x) &= 0, & 0 \leq x \leq 2L \\
V(x) &= \infty, & \text{otherwise}
\end{align}
So the energies are now given by $\tilde{E}_n = \dfrac{n^2 \pi^2 \hbar^2}{2 \cdot 4 m L^2}$ and $\tilde{\Psi}(x) = \tilde{A}_n \sin(n \pi x /2L)$.
*
*If the particle is in the ground state of the potential well before the change, what is the probability to find the particle in the ground state of the new potential after the change?
This is absolutely clear to me. We find a non vanishing probability as a result. But now it gets tricky:
*
*What is the expectation value of the energy of the particle directly after the change? How does the expectation value of the energy evolve in time?
The solution suggests that the expectation value of the energy does not evolve in time, which is clear to me, since the Hamiltonian is time independent and thus energy is conserved. But it also suggests that the expectation value does not change after we double the width of the potential well, which I understand from the argument of energy conservation but not in terms of quantum mechanics. If the probability that the particle is in the state $\tilde{\Psi}$ does not vanish, the particle could have the energy $\tilde{E}_n$, which is lower than $E_n$, and this would mean that the expectation value of energy could change (with a given probability).
What am I missing here, where is my mistake? Any help is appreciated!
| Under a sudden perturbation the state does not change, but the energy eigenbasis does. The state gets expanded in the new basis, and the expansion coefficients then evolve accordingly. This is normally covered in the chapters on time-dependent perturbation theory, $\hat{V} = \hat{V}(t)$.
If the potential is time dependent, the energy is not conserved in the general case. In your case the energy goes from being certain to being uncertain.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
When do I apply Significant figures in physics calculations? I'm a little confused as to when to use significant figures for my physics class. For example, I'm asked to find the average speed of a race car that travels around a circular track with a radius of $500~\mathrm{m}$ in $50~\mathrm{s}$.
Would I need to apply the rules of significant figures to this step of the problem?
$$ C = 2\pi (500~\mathrm{m}) = 3141.59~\mathrm{m} $$
Or do I just need to apply significant figures at this step?
$$ \text{Average speed} = \frac{3141.59~\mathrm{m}}{50~\mathrm{s}} = 62.832~\mathrm{m}/\mathrm{s} $$
Should I round $62.832~\mathrm{m}/\mathrm{s}$ to $63~\mathrm{m}/\mathrm{s}$ since the number with the least amount of significant figures has two?
| Keep precision all the way through to the number you report and then truncate accordingly at the end.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Dark Matter 'Stars' I'm aware that the Milky Way has a dark matter 'halo' around it, presumably a spherically symmetric distribution.
But I'm completely ignorant regarding the theories explaining dark matter... Is there any reason to not expect a star-sized object to also be made of dark matter?
I know they'll be extremely difficult to detect, but I'm wondering if it's even physically possible to exist.
| If dark matter were self-interacting (and there is some evidence that it is), then it might form star-like clumps. Some groups even think that dark matter might be a sort of copy of all the Standard Model particles.
If such star-sized, star-mass clumps of dark matter were to pass through a nebula, they would (most likely) trigger star formation. And there seems to be evidence that dark matter does trigger starbursts.
https://www.sciencedaily.com/releases/2016/03/160309140048.htm
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
London into Australia in 90 minutes My friend and I are having a debate on whether it would be possible for a human to travel at 15,000 miles an hour from London to Australia in a matter of 90 minutes. Would a human be able to survive travelling at such speeds, given the immense number of g's he would have to overcome? Basically, is it possible for a human, or will he die in the process?
| The distance from London to Australia is about 17,000km. If you wanted to minimise the acceleration you'd feel during the trip you'd accelerate continuously for the first half of the journey (8,500km) then decelerate at the same rate for the second half. To work out what acceleration is required you use the SUVAT equation:
$$ s = ut + \frac{1}{2}at^2 $$
For half the journey the distance $s$ is 8,500km and the time $t$ is 45 minutes (2700 seconds), so using the above equation the acceleration required is about 2.33m/s$^2$, which is only about a quarter of a $g$. The only trouble is that your speed at the halfway point would be about 6,300m/s (about Mach 18), so you'd need a rocket rather than a plane to do it.
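For anyone who wants to reproduce these numbers, here is a minimal Python sketch of the constant-acceleration estimate above (the 17,000 km total distance and 45-minute half-time are the values used in this answer; everything else follows from $s=ut+\frac{1}{2}at^2$ with $u=0$):

```python
# Accelerate for the first half of the trip, decelerate for the second half.
half_distance = 8.5e6    # m, half of the ~17,000 km London-Australia distance
half_time = 45 * 60      # s, half of the 90-minute trip

# s = u*t + 0.5*a*t^2 with u = 0  =>  a = 2*s / t^2
a = 2 * half_distance / half_time**2
peak_speed = a * half_time           # speed reached at the halfway point

print(f"required acceleration: {a:.2f} m/s^2 ({a / 9.81:.2f} g)")
print(f"peak speed: {peak_speed:.0f} m/s")
```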
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/52935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to get "complex exponential" form of wave equation out of "sinusoidal form"? I am a novice on QM and until now i have allways been using sinusoidal form of wave equation:
$$A = A_0 \sin(kx - \omega t)$$
Well in QM everyone uses complex exponential form of wave equation:
$$A = A_0\, e^{i(kx - \omega t)}$$
QUESTION: How do I mathematically derive the exponential form from the sinusoidal one? Are there any catches? I did read the Wikipedia article, but there is no derivation there.
| Consider the following derivative: $[\cos{x} +i\sin{x}]' = i \sin{x} - i\cos{x} = i(\cos{x} +i\sin{x})$. That sure looks like $[e^{ix}]' = ie^{ix}$. So the question from a physics point of view is why is the oscillatory behaviour of $\sin$ and $\cos$ so fundamentally connected to the behaviour of $\exp$ governing growth and decay?
One answer may be self-similarity--$\exp$ is self-similar, so for instance in radioactive decay, the number of decays is always proportional to the number of atoms present. Compare that with a pendulum, where the acceleration (velocity change) is proportional to the displacement and the displacement change is proportional to the velocity.
Both these ideas are combined in the damped oscillator-where a single complex frequency's real part describes the oscillation and the imaginary part describes the damping.
When applied to unstable particles one considers the Breit Wigner resonance formula, so when one makes Delta baryons the mass is on average 1232 MeV--but not always. The lifetime is so short ($5 \times 10^{-24}\ $ s) that we speak of the width of
the resonance (~114 MeV)--with the two being connected by the Heisenberg Uncertainty Principle. (The mass drives the oscillator part of the wave function, while the width drives the decay--so that a complex frequency unifies the two phenomena).
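As a quick numerical illustration (this check is my own addition, not part of the argument above), one can verify Euler's identity $e^{ix}=\cos x + i\sin x$ and the derivative relation at a few sample points:

```python
import cmath, math

for x in (0.0, 0.7, 2.3, -1.1):
    lhs = cmath.exp(1j * x)                  # e^{ix}
    rhs = complex(math.cos(x), math.sin(x))  # cos x + i sin x
    assert abs(lhs - rhs) < 1e-12

# d/dx (cos x + i sin x) = -sin x + i cos x = i (cos x + i sin x)
x = 0.7
deriv = complex(-math.sin(x), math.cos(x))
assert abs(deriv - 1j * cmath.exp(1j * x)) < 1e-12
print("Euler's identity and its derivative relation hold numerically.")
```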
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
How does the correlation length of weather emerge? The question is pretty simple: If I know the weather where I stand, I can estimate the weather 5 meters or 1 km away away pretty well, but I'll have a hard time guessing what the weather is, say, 50 km away.
Therefore, it seems that the climatic system has a length-scale. Where does it come from? Navier-Stokes equations do not feature an internal length scale, and it doesn't seem that the scale comes from earth's radius either.
|
Therefore, it seems that the climatic system has a length-scale. Where does it come from?
Let us not forget that the weather system is a classical case of chaotic dynamics: several interacting differential equations are at work, not just Navier–Stokes. Think of tides, think of seasons, think of clouds/albedo, etc.
But mainly it is the boundary conditions imposed on the solutions of the equations by geography: mountains, lakes, rivers, valleys, seas, etc., which define a type of length. This is similar to the way the size of a lake defines the possible wavelength and height of its waves: it is due to the boundary conditions the geography imposes on the solutions of the complicated system of coupled differential equations.
Weather programs do a good job of simulating terrain and are fairly successful for projections a few days out. Local projections are not always as good, since systems can move faster or slower, or higher or lower on the map, than what is programmed. But the geography is taken into account.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
How scientists could say that such meteorite comes from Mars How can scientists affirm that a meteorite comes from Mars and not from another source?
Is this a probability or an absolute certainty? With what level of confidence?
| Essentially, a chemical and mineralogical comparison is made between the meteorite and samples taken from the Martian surface and atmosphere - particularly from the article The SNC meteorites are from Mars (Treiman et al. 2000, Planetary and Space Science, vol. 48, pp. 1213-1230), that states:
Most telling is that the SNC meteorites contain traces
of gas which is very similar in elemental and isotopic compositions to the modern Martian atmosphere as measured by Viking landers on
Mars and spectroscopy from Earth. The Martian atmosphere appears to have a unique composition in the solar system, so its presence
in the SNCs is accepted as strong direct evidence that they formed on Mars.
Figure 1 of that article shows a comparison of relative abundances of isotope from the Martian atmosphere and the Martian meteorites found on Earth.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Electric Flux Contradiction I am currently reading about electric flux; and from this passage I am reading, I am sensing a bit of a contradiction:
"If the E-field is not perpendicular to the surface area, then the flux will be less than EA
because less electric field lines will penetrate A. Consider the wedge shape surface
below. The electric field lines are perpendicular to the surface area A' but not to A.
Since the same number of electric field lines cross both surfaces, the flux must be the
same through both surfaces."
Clearly, the surface A is not perpendicular to the electric field, but surface A' is. So, the number of electric field lines passing through A should be less than the number passing through A', as they suggest in the passage before the picture. Yet, they go on to say that the number of electric field lines passing through each surface is the same. What is going on?
| It is a misleading diagram due to the way the field lines are drawn and the way the angle is defined. The angled surface has actually increased in area, thereby inadvertently keeping the flux the same. The usual definition is that $\theta$ is zero when E and the surface are perpendicular.
$\theta=0$ , flux is proportional to A
$\theta\ne0$, flux is proportional to $A\cos\theta$ until the field lines and surface are parallel ($A\cos90=0$)
This is a better (although somewhat exaggerated and not as pretty) picture:
Flux through a surface at an angle
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the proof that a force applied on a rigid body will cause it to rotate around its center of mass? Say I have a rigid body in space. I've read that if I during some short time interval apply a force on the body at some point which is not in line with the center of mass, it would start rotating about an axis which is perpendicular to the force and which goes through the center of mass.
What is the proof of this?
| What you are talking about is called the instant center of percussion. To purely rotate a rigid body about an axis (the rotation axis) a force needs to be applied along the axis of percussion which is a) perpendicular to the rotation axis, b) on the far side of the center of gravity from the pivot and c) located a distance $ \ell =c + \frac{I}{m c}$ from the pivot ($m$ mass, $I$ mass moment of inertia about cm and $c$ distance between pivot and cm).
Derivation
Consider a body with desired rotation $ \vec{\omega} = (0,0, \omega_z)$ about a point A aligned with a local $\hat k$ axis, and the center of gravity located along the local $\hat i$ axis, with coordinates $\vec{c} = (c_x,0,0)$.
An impulse with components $\vec{J}=(J_x,J_y,J_z)$ is applied at a location $\vec\ell = (l_x,l_y,l_z)$ relative to A with the equations of motion at the center of mass
$$ \vec{J} = m \left( \vec{0} + \vec{\omega} \times \vec{c} \right)
\\ (\vec{\ell} -\vec{c} ) \times \vec{J} = I \vec{\omega} $$
in components the above is
$$ \begin{pmatrix} J_x \\ J_y \\ J_z \end{pmatrix} = m \begin{pmatrix} 0 \\ 0 \\ \omega_z \end{pmatrix} \times \begin{pmatrix} c_x \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ m c_x \omega_z \\ 0 \end{pmatrix} $$
So $J_x=J_z=0$, making $\vec{J}$ lie along the local $\hat{j}$ axis.
$$ \begin{pmatrix} \ell_x - c_x\\ \ell_y \\ \ell_z \end{pmatrix} \times \begin{pmatrix} 0 \\ J_y \\ 0 \end{pmatrix} = \begin{bmatrix} I_x & 0 & 0 \\ 0 & I_y & 0 \\ 0 & 0 & I_z \end{bmatrix} \begin{pmatrix} 0 \\ 0 \\ \omega_z \end{pmatrix} $$
$$\begin{pmatrix} -(m c_x \omega_z) \ell_z \\ 0 \\ (m c_x \omega_z) (\ell_x-c_x) \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ I_z \omega_z \\ \end{pmatrix}$$
with solution $\ell_z =0$ and $\boxed{\ell_x = c_x + \frac{I_z}{m c_x}}$. Note that the value of $\ell_y$ is irrelevant since it is along the force axis $\vec{J}$.
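As a sanity check of the boxed result (the uniform-rod example below is mine, not part of the derivation above): for a uniform rod of length $L$ pivoted at one end, $c=L/2$ and $I=mL^2/12$, so $\ell = L/2 + L/6 = 2L/3$, the familiar "sweet spot" of a swung rod or bat.

```python
def center_of_percussion(m, I_cm, c):
    """Distance from the pivot to the axis of percussion: l = c + I/(m*c)."""
    return c + I_cm / (m * c)

# Uniform rod of mass m and length L, pivoted at one end:
m, L = 1.0, 1.0
c = L / 2                 # centre of mass sits at the middle of the rod
I_cm = m * L**2 / 12      # moment of inertia about the centre of mass

print(center_of_percussion(m, I_cm, c))   # 0.666... = 2L/3
```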
Here are some reference posts:
See relevant answer to a similar question (https://physics.stackexchange.com/a/81078/392)
The full equations of motion about an arbitrary point are derived in (https://physics.stackexchange.com/a/80449/392)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 8,
"answer_id": 1
} |
Why is ski jumping not suicidal? At least on television, ski jumpers seem to fall great vertical distances before they hit the ground - at least a few dozen meters, though I couldn't find exact distances via a quick search. And yet they almost always land on their feet as if they just fell two or three meters. (Here's a whole lot of footage from the Vancouver Olympics if you need to refresh your memory.)
Without going into the level of equations (which I wouldn't understand), why are ski jumpers able to fall such great heights without seriously injuring themselves?
| As @ChrisF correctly says, landing too far towards the end of the slope, hence towards the end where it becomes less steep and less parallel to the flight path of the jumper, would be very dangerous.
In ski jumping, every jump and slope is designed for a fixed jump length. This length is given by the K-Point which designates how far at most the contestants should jump. Up to this point, a safe landing can be achieved by any well-trained contestant. Landing beyond the K-point is dangerous. Or rather, it would be.
According to the rules, before the contest, the jury decides on the length of the inrun, taking into account the state of the jump, the weather and in particular the wind, thus making sure that the chance of contestants landing beyond the K-point is minimized.
Also, if one contestant reaches 95% of the jump length, the jury will interrupt the contest and decide whether the inrun should be shortened[1].
Note that there's quite a lot of additional rules about the equipment of the contestants, including details about the suit and the skis, which ensure that the jump widths in one particular contest tend to be somewhat predictable.
So, it's the rules that ensure that ski jumping on a carefully constructed jump is not suicidal. In fact, jumping on your self-built jump might be more dangerous than any jump you see on TV.
[1] Don't ask me what happens to the contestants that have performed their jumps up to that point in time.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
} |
Confused over complex representation of the wave My quantum mechanics textbook says that the following is a representation of a wave traveling in the +$x$ direction:$$\Psi(x,t)=Ae^{i\left(kx-\omega t\right)}\tag1$$
I'm having trouble visualizing this because of the imaginary part. I can see that (1) can be written as:$$\Psi(x,t)=A \left[\cos(kx-\omega t)+i\sin(kx-\omega t)\right]\tag2$$
Therefore, it looks like the real part is indeed a wave traveling in the +$x$ direction. But what about the imaginary part? The way I think of it, a wave is a physical "thing" but equation (2) doesn't map neatly into my conception of the wave, due to the imaginary part. If anyone could shed some light on this kind of representation, I would appreciate it.
| The wave function itself is not a "real" thing. I.e. it is not an observable quantity. What's "real" is the probability distribution which is associated with the wave function. The probability of finding the particle between points $x=a$ and $x=b$ (restricting to one dimension for simplicity) is given by:
$$P(a\leq x\leq b)=\int_a^b |\Psi|^2 \mathrm{d}x$$
where $|\Psi|^2=\Psi^* \Psi $ and $\Psi^*$ is the wave-function's complex conjugate. $|\Psi|^2$ is a real-valued function (i.e. its imaginary part is zero). It isn't particularly useful to think of the wave function itself as being a physical wave. What matters is the magnitude of the wave function.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Why use Fourier expansion in Quantum Field Theory? I have just begun studying quantum field theory and am following the book by Peskin and Schroeder for that.
So while quantising the Klein–Gordon field, we Fourier expand the field and then work only in momentum space. What is the need for this expansion?
| I think it's also important to emphasize the physical significance of the Fourier modes in the context of QFT. The Fourier modes $a^\dagger(\mathbf k)$ and $a(\mathbf k)$ in the context of the quantized Klein-Gordon field, for example, create and destroy particles with momentum $\mathbf k$ respectively. Namely, if $|\emptyset\rangle$ is the vacuum of the theory, then
$$
a^\dagger(\mathbf k)|\emptyset\rangle
$$
gives a state with a single particle of momentum $\mathbf k$, and more generally
$$
a^\dagger(\mathbf k_1)a^\dagger(\mathbf k_2)\cdots a^\dagger(\mathbf k_N)|\emptyset\rangle
$$
represents a state with $N$ particles with momentum $\mathbf k_1, \mathbf k_2, \dots, \mathbf k_N$ respectively.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
Identity as a trivial reducible representation In particle physics, I was taught that a representation of a group is a function $r: group \rightarrow matrices\,(n\times n)$ such that $r(g_1)r(g_2)=r(g_1g_2)$ and $r(e)=I_{n\times n}$. Then, that a representation is reducible when you can find a matrix $A$ such that $Ar(g)A^{-1}$ is in diagonal-block form for every element of the group.
Then the professor tried to find, in complicated ways, reducible representations of $SO(N)$, $SU(N)$ and so on. But isn't the trivial function that assigns $I_{n\times n}$ to every value of $g$ already a reducible representation? I know it must be somehow useless, but what did I lose?
| What you have constructed is a representation, but not a faithful one. Since your homomorphism $r$ is not injective, you lose some of the structure of the group. In fact, since $r$ is trivial, you lose all the structure of the group. While most useful statements about $G$ apply to $r(G)$ equally well, you cannot pull back anything useful from $r(G)$ to $G$, so your representation doesn't tell you anything about $G$, defeating the whole purpose of using representations in the first place.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
3D: Get linear velocity from position and angular velocity I want to find out the linear velocity of a point in 3D space, (Euclidean), given:
*
*Its position
*Its angular velocity
*The point it's rotating around (fulcrum)
(This is a problem I need to solve for 3D graphics programming with a physics engine).
The position of the point and position of the pivot point will be 3-value vectors, $x$, $y$ and $z$.
The angular velocity will also be a 3-value vector, representing Euler angles.
What operation(s) would I need to perform to calculate the linear velocity of the point?
The 3d/physics engine has various high level mathematical operations including matrix, vector and quaternion operations, so hopefully what I need is among those.
| The relation between angular velocity $\vec{\omega}$, position $\vec{r}$ (assuming rotation around the origin) and tangential velocity $\vec{v}$ (which is what you are asking for) is given by
$\vec{\omega}=\frac{\vec{r}\times\vec{v}}{\mid\vec{r}\mid^2},$
where $\times$ is the cross product and $\mid\vec{r}\mid^2$ the norm of the position vector squared. You can write down this equation component-wise to get three equations for three unknown variables (the components of $\vec{v}$) and solve them algebraically.
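Equivalently, for a point on a rigid body the velocity can be obtained directly as $\vec{v}=\vec{\omega}\times\vec{r}$, which is consistent with the relation above for a pure rotation and is the form most physics engines expose. A minimal sketch, assuming the angular velocity is supplied as a single vector (rotation axis times rate in rad/s) rather than as Euler angles, and using NumPy only for the cross product:

```python
import numpy as np

def linear_velocity(point, fulcrum, omega):
    """Velocity of `point` on a rigid body rotating with angular velocity
    `omega` (axis-times-rate vector, rad/s) about `fulcrum`: v = omega x r."""
    r = np.asarray(point, dtype=float) - np.asarray(fulcrum, dtype=float)
    return np.cross(omega, r)

# Example: rotation about the z-axis at 2 rad/s, point 3 m out along x.
v = linear_velocity(point=[3.0, 0.0, 0.0], fulcrum=[0.0, 0.0, 0.0],
                    omega=[0.0, 0.0, 2.0])
print(v)   # [0. 6. 0.]  -> speed 6 m/s, tangential to the rotation
```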
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/53843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Why does Venus transit so slowly? I have calculated that because Venus is $d = 12,103.6~\mathrm{km}$ in diameter and moves at $v = 35.02~\mathrm{km}/\mathrm{s}$, it would take
$$ t=\frac{d}{v} = \frac{12,103.6~\mathrm{km}}{35.02~\mathrm{km}/\mathrm{s}} = 345.62~\mathrm{s} = 5~\mathrm{min}~46~\mathrm{s} $$
for Venus to appear totally in front of the Sun. This time would be from the edge of Venus being against the edge of the Sun to when the opposite edge of Venus is "in touch" with the same edge of the Sun.
But now I think this in reality takes more than just $6$ minutes (about $20$ minutes). If this is true, then why does this measurement not agree with theory?
| There are three main reasons.
1) While Venus is orbiting the Sun at 35.02 km/s, the Earth is also orbiting the Sun in the same direction [both in the same sense] at 29.78 km/s. This factor will decrease the relative transit velocity of Venus as seen from Earth.
2) Venus is travelling at 35.02 km/s in an elliptical orbit. Hence the actual distance traveled by Venus during the transit will be slightly more than its diameter, because it is travelling on a curved path and not a straight line. This factor will increase the actual transit distance covered by Venus. However, the contribution of this is negligible and can be ignored except for high-precision calculations.
3) There will be a small but measurable impact because of the surface velocity of Earth's rotation at 0.434 km/s (at the equator) about its axis. Notice that the tangential velocity of an observer on Earth due to the rotation of the Earth about its axis will be in the opposite direction to the tangential velocity of both Venus and the Earth around the Sun. This factor will increase the relative transit velocity of Venus as seen from Earth.
My calculation, using Kepler's law, differs slightly from that of Nathaniel, but it is essentially the same in spirit. We obtain a transit time of 19 min 56 s, which is accurate enough.
$$
t \approx \frac{D_v}{V_v\{1 - (T_v/T_e)^{2/3}\} + v_e} = 19 \min 56 \sec
$$
where
$D_v$ = Diameter of Venus,
$V_v$ = Orbital velocity of Venus,
$V_e$ = Orbital velocity of Earth,
$T_v$ = Orbital period of Venus,
$T_e$ = Orbital period of Earth,
$v_e$ = Rotation velocity of Earth.
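For completeness, a quick numerical evaluation of this formula (the orbital periods below are standard values that I am assuming; the diameter, orbital speeds and rotation speed are the ones quoted above):

```python
import math

D_v = 12103.6   # km, diameter of Venus
V_v = 35.02     # km/s, orbital speed of Venus
v_e = 0.434     # km/s, Earth's equatorial rotation speed
T_v = 224.70    # days, orbital period of Venus (assumed standard value)
T_e = 365.25    # days, orbital period of Earth (assumed standard value)

relative_speed = V_v * (1 - (T_v / T_e) ** (2 / 3)) + v_e   # km/s
t = D_v / relative_speed                                    # seconds
print(f"{int(t // 60)} min {t % 60:.0f} s")                 # about 20 minutes
```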
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Stability of nuclei and $A=5$ Why are there no stable nuclei with
$$A=5$$
in the chart of nuclides, and hence in nature as we know it?
| As Jerry Schirmer said, helium-4 is an extremely stable nucleus. What does that mean quantitatively? It means that its binding energy is very high, namely 28 MeV. In other words, helium-4 is 28 MeV/$c^2$ lighter than the sum of masses of two free protons and two free neutrons.
The best candidate $A=5$ nuclei would have 2 protons and 2 neutrons in the lowest state - i.e. in the same state as they occupy in helium-4 – but the additional 1 proton or 1 neutron would have to be added to a higher shell. But because this higher shell is so much higher in energy than the ground levels, one can't find an $A=5$ nucleus that would be lighter than the sum of the helium mass and one proton (or one neutron). The binding energy would have to be even greater than 28 MeV, which means that the binding energy per nucleon would have to exceed 28/5=5.6 MeV. This is simply too much to ask; the binding energy you could get for 5 nucleons is simply smaller than 28 MeV, so any such object would quickly alpha-decay.
I should insert some calculation of the conceivable binding energy for 5 nucleons here except that there's clearly no "analytic" calculation. It's an extremely messy system one would have to describe by nuclear physics (ill-defined effective theory) or by QCD (calculable via lattice QCD, with big computers etc.). But let me mention that unlike atoms, where the new valence electrons may always be added and keep the stability, the nuclei are "more neutral" so the attractive force between the helium-4-like "core" of the $A=5$ object and the remaining nucleon is much weaker, sort of dipole-like, and isn't enough to produce a new stable bound state.
However, what I can say is that this fact about the absence of $A=5$ stable isotopes has important consequences. The Big Bang Nucleosynthesis – first three minutes when nuclei are created – essentially stalls once it reaches helium-4 nuclei. They can't absorb new protons/neutrons to become heavier and instead, the next reaction is the much rarer collision of two helium nuclei. One either has helium-3 plus helium-4 goes to lithium-7 plus positron plus photon; or beryllium-7 plus photon on the right hand side. Lithium-7 may absorb a proton to get back to 2 helium-4; beryllium-7 may absorb a neutron to become lithium-7.
These processes are "everything" one may have in empty space. Inside stars, one has pressure and temperature which helps to overcome the binding energy and stars may produce heavier elements, too.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Difference between torque and moment What is the difference between torque and moment? I would like to see mathematical definitions for both quantities.
I also do not prefer definitions like "It is the tendency..../It is a measure of ...."
To make my question clearer:
Let $D\subseteq\mathbb{R}^3$ be the volume occupied by a certain rigid body. If there are forces $F_1,F_2,....,F_n$ acting at position vectors $r_1,r_2,...,r_n$. Can you use these to define torque and moment ?
| Torque and moment are essentially the same thing and are calculated in the same way - it's really the context that determines which word is used. 'Torque' is usually used when we're talking about the twisting effect on a shaft and 'moment' is usually used when we're talking about the bending effect on a beam. If you're using a spanner to tighten a bolt, we would say that your hand exerts a moment on the end of the spanner but the spanner exerts a torque on the head of the bolt.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 4
} |
Is it possible to calculate atmospheric pressure if given temperature (F) and elevation? I am working on a report at work and need to determine the atmospheric pressure for small intervals over a 24 hour period. Searching Google, I've found charts which give a base pressure of 14.65 psia at sea level. This is at 68F. That changes to 13.17 psia at 3000ft above sea level.
What I am looking to do is create a spreadsheet where I enter the elevation as a constant, then provide the temperature (F), then have it calculate the atmospheric pressure. Knowing it is 13.17 psia at 68F is useful only if the temperature is 68F for the entire 24 hour period, but it isn't. Currently it ranges from 30F to 75F but could move either direction substantially depending on time of year.
Is this possible to determine?
| It depends on the precision you need.
A common and good approximation is the Hypsometric equation, that relates pressure and elevation in the standard Earth atmosphere (source Wikipedia):
$\ h = z_2 - z_1 = \frac{R \cdot T}{g} \cdot \ln \left [ \frac{P_1}{P_2} \right ]$
*
*$h$ = thickness of the layer [m]
*$z$ = geopotential height [m]
*$R$ = specific gas constant for dry air
*$T$ = average temperature throughout the layer in kelvin [K]
*$g$ = gravitational acceleration [m/s$^2$]
*$P$ = pressure [Pa]
You can also write it as:
$( z_2 - z_1 ) = \frac{R \cdot Ta}{g} \ln \left( \frac{P_1}{P_2} \right)$
There exist other approximations, for example based on a climatology over latitude/longitude that parametrises $g$ and $R$.
As a very simple approximation, for a typical temperature and pressure in a standard tropical atmosphere you can use
$ z = 16 \cdot 10^3 \cdot (5 - \log_{10}(P_1)) $
where $z$ is in metres and $P_1$ is the pressure you're interested in, in Pa.
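For the spreadsheet use case in the question, here is a small Python version of the hypsometric relation solved for pressure. It is only a sketch under stated assumptions: dry air, a sea-level reference pressure of 14.696 psia, and the measured temperature taken as the average temperature of the layer:

```python
import math

R_DRY_AIR = 287.05    # J/(kg K), specific gas constant for dry air
G = 9.80665           # m/s^2
P_SEA_LEVEL = 14.696  # psia, assumed sea-level reference pressure

def pressure_psia(elevation_ft, temp_f):
    """Estimate station pressure from elevation (feet) and temperature (F)."""
    h = elevation_ft * 0.3048                 # feet -> metres
    t_kelvin = (temp_f - 32) * 5 / 9 + 273.15
    # Hypsometric equation solved for P2: P2 = P1 * exp(-g*h / (R*T))
    return P_SEA_LEVEL * math.exp(-G * h / (R_DRY_AIR * t_kelvin))

print(pressure_psia(3000, 68))   # ~13.2 psia, close to the chart value quoted
```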
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is cosmic background radiation dark-matter and/or dark-energy? Dumb question alert: Is it possible that the cosmic background radiation might be the source of dark-matter and/or dark-energy? What is the mass of the background radiation in the known universe?
| No.
Neither dark matter nor dark energy can be seen in the electromagnetic spectrum---that's why it's "dark"---whereas the cosmic background radiation is electromagnetic radiation.
We are able to deduce a number of facts about dark energy and dark matter from their effects on observable stuff (including the CMB in the case of baryon acoustic oscillations), but both are strongly excluded from being "stuff as we know it".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Dynamic structure factor The dynamic structure factor is the spatial and temporal Fourier transform of Van Hove's time-dependent pair correlation function. It is written as
$$ S(k,\omega)= \frac{1}{2\pi}\int F(k,t)\exp(i\omega t)dt $$
$F(k,t)$ is intermediate scattering function.
My question is: how do we use the spatial and temporal Fourier transform of the Van Hove function to get the dynamic structure factor? And what do spatial and temporal Fourier transforms mean?
| A spatial Fourier transform means a Fourier transform in the spatial variable ($x\rightarrow k$), while a temporal Fourier transform is the same transformation, but in terms of the time variable ($t\rightarrow \omega$). The equation you have written is the (asymmetric) temporal Fourier transform of $F(k,t)$.
The spatial transform looks like some variation of
\begin{equation}
F(k,t) = \frac{1}{2\pi} \int G(x,t)e^{i k x} dx
\end{equation}
where $G(x,t)$ is the (one-dimensional) Van Hove function.
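To make the temporal transform concrete, here is a small NumPy sketch that converts a sampled intermediate scattering function $F(k,t)$ at fixed $k$ into $S(k,\omega)$; the damped cosine used for $F$ is just an illustrative stand-in, not a real system:

```python
import numpy as np

dt = 0.01                                  # sampling interval in time
t = np.arange(0, 50, dt)
F = np.exp(-0.2 * t) * np.cos(3.0 * t)     # toy intermediate scattering function

# S(k, w) = (1/2pi) * integral F(k, t) e^{i w t} dt, approximated by a discrete sum
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
S = np.fft.fftshift(np.fft.ifft(F)) * t.size * dt / (2 * np.pi)

peak = omega[np.argmax(np.abs(S))]
print(f"spectrum peaks near |omega| = {abs(peak):.2f}")   # close to 3, the oscillation frequency
```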
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equivalence between QFT and many-particle QM My understanding from my QFT class (and books such as Brown), is that many-particle QM is equivalent to field quantization. If this is true, why is it not an extremely surprising coincidence? The interpretation of particles being quanta of a field is -- at least superficially -- completely different from the quantum mechanical description of N point particles.
| Many body quantum mechanics is the non-relativistic limit of an underlying relativistic quantum field theory (QED in the case of electrons in atoms or metals, QCD in the case of nucleons in a nucleus). This can be made manifest by constructing a non-relativistic effective field theory which describes the low energy limit of the underlying theory. The Dyson-Schwinger equations of the non-rel QFT are easily seen to be equivalent to the schroedinger equations.
There is nothing fundamentally different about non-rel QFTs. The main difference is that in a Galilean invariant field theory the number of particles is always conserved. As a consequence, sectors with different numbers of particles decouple, and the state with zero particles is trivial (there is no vacuum polarization in a non-relativistic field theory).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Why is electric potential scalar? I can't conceptually visualize why it would be so. Say you have two point charges of equal charge and a point right in the middle of them. The potential of that charge, mathematically, is proportional to the sum of their charges over distance from the point ($q/r$). But intuitively, my thought process keeps going back to the concept of direction and how the electric field at that point would be zero. So why would the electric fields cancel while the electric potentials just add up algebraically?
| Because work and charge are both scalar quantities; since electric potential is the work done per unit charge, it is a scalar quantity as well.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 3
} |
How is possible for current to flow so fast when charge flows so slow? How is it possible for current to flow so fast when charge flows so slowly?
We know electrons travel very slowly while charge travels at ~the speed of light.
| One needs to distinguish between two things when it comes to electricity, electric currents and voltages.
1) The electric current is the flow of electrons in metal wires (or of ions in fluids like electrolytes). The electrons are moving in the wire at the drift velocity
$v=\frac{I}{enA}$
where: $I$ is the electric current; $e$ is the electric charge on the electron; $n$ is the electron number density in the metal material of the wire; $A$ the cross section area of the wire.
Depending on the values of $I$, $n$ and $A$, the speed $v$ is tiny; for typical currents in household copper wire it works out to well under a millimetre per second!
2) However, the cause of the motion of the electrons is the electric field that you set up in the wire when you switch on the light, say, and which travels along the wire at close to the speed of light. Because the field travels along the wire so fast, it sets the electrons into motion all along the wire almost at once. So it appears as if the electrons are moving very fast, when in fact they are not. I hope this clarifies the point you were trying to make.
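As a concrete illustration of how small the drift velocity is (the electron density below is the standard approximate value for copper, and 1 A through a 1 mm$^2$ wire is just a typical household example):

```python
E_CHARGE = 1.602e-19   # C
N_COPPER = 8.5e28      # free electrons per m^3 in copper (approximate)

def drift_velocity(current_A, area_m2, n=N_COPPER):
    """Drift velocity v = I / (e * n * A)."""
    return current_A / (E_CHARGE * n * area_m2)

v = drift_velocity(current_A=1.0, area_m2=1.0e-6)   # 1 A through a 1 mm^2 wire
print(f"{v * 1000:.3f} mm/s")                       # roughly 0.07 mm/s
```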
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/54995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 6,
"answer_id": 1
} |
Dark matter and QFT My understanding is that the particle is a somewhat artificial notion in QFT (see: Quantum Mechanics: Myths and Facts), and that in general it is possible for a quantum field to have unstable excitations that don't look anything like particles. Is this an active field of research (what is it called)? Are there experimental searches for detection of such non-particles? For example, could dark matter be non-matter: some large-scale unstable oscillation of a quantum field?
| Maybe you mean something like Howard Georgi's unparticle theory, see here or here for example?
This is a high energy theory which extends the standard model by an additional scale invariant sector of particles whose properties such as energy, momentum, and mass can simultaneously be scaled up or down (therefore the term scale invariant). In the standard model, these would only work for photons which are massless.
These new particles, if they exist, are expected to couple only weakly to "normal" matter at low observable energy scales and to behave somewhat like neutrinos. At the LHC, such unparticles would for example become noticeable as missing energy.
There are indeed ideas that dark matter could be made of such unparticles; see for example this paper.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Deriving the reduced Green's functions in Polchinski's volume 1 In equation 6.2.7, Polchinski defines his reduced Green's functions $G'$ on the 2-manifold to satisfy the equation,
$$ \frac{-1}{2\pi \alpha '}\nabla ^2 G'(\sigma_1, \sigma_2) = \frac{1}{\sqrt{g}}\delta ^2 (\sigma_1 - \sigma_2) - X_0^2 $$
(where $\sigma_1$ and $\sigma_2$ are two points on the manifold and $X_0$ is the zero mode of the Laplacian; why is he assuming that there is only one zero mode?)
Now at various place he has written down the solutions to the equation like,
*
*on a $S^2$ it is given by 6.2.9,
$$G' = - \frac{\alpha'}{2}ln \vert z_1 - z_2\vert ^2 + f(z_1,\bar{z_1}) + f(z_2,\bar{z_2})$$
where $f(z,\bar{z}) = \frac{\alpha'X_0^2}{4} \int d^2 z' exp(2\omega(z,\bar{z}))ln \vert z - z'\vert ^2 + k$
*
*For the disk it is given by 6.2.32,
$$G' = -\frac{\alpha'}{2}ln \vert z_1 - z_2\vert ^2 + \frac{\alpha'}{2}ln \vert z_1 - \bar{z_2}\vert ^2 $$
*
*For $\mathbb{RP}^2$ it is given by,
$$G ' = -\frac{\alpha'}{2}ln \vert z_1 - z_2\vert ^2 + \frac{\alpha'}{2}ln \vert 1 +z_1\bar{z_2}\vert ^2 $$
I would like to know how these functions are derived.
*
*Also how is it that the dependence on the $f$ for the first case drops out in equation 6.2.17? If I plug in the functions I see a remnant factor in the exponent of the form, $\sum_{1\le i<j\le n} k_i k_j (f(\sigma_i) + f(\sigma_j))+ \sum_{i=1}^n k_i^2 f(\sigma_i)$
| Recently, I just learned that this has to do with something called Hadamard form of the Green's function, which I am not familiar with. It's roughly about the singularity structure of the two-point function. In two dimensions, the Green's function is roughly a sum of a logarithmic divergent term and a regular term. In higher dimensions, the green's function is roughly a sum of a pole, a logarithmic divergence and a regular term.
To be specific, the logarithmic divergence is $\log(d(x,y))$, where $d(x,y)$ is the distance (geodesic length) between the two points $x$ and $y$. When the two points are very close, the Green's function has a logarithmic divergence. In higher dimensions, there should also be a pole $1/(d(x-y))^{D}$, which comes from the point charge distribution.
You may need the following papers:
*
*doi:10.1007/BF01196934
*https://hal.archives-ouvertes.fr/hal-00338657/document
Hope they are helpful for you.
The question is related with the following
Simple, physical explanations for Hadamard behaviour of two-point functions
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Best method for building balsa-wood bridge I'm building a bridge out of balsa-wood strips for school, and wanted some advice. These are the specifications:
*
*Height: 2 to 6 in
*Length: 12 inches, plus 1-3 inches on each side resting on tables
*Width: 2 +- 1/16 in at the base
*Weight: <= 50 g
The objective is maximum efficiency (load it can hold divided by weight), not just load. It will be tested with a block placed in the middle of the base that has a bucket hanging from it. The bucket will be filled with sand slowly.
What type of truss do you recommend I use for this structure? Currently, I was planning on a warren truss-type configuration, but with an arch. What other trusses would you suggest? What height should I use? What length?
Any and all tips or resources are greatly appreciated. Also, I would love to know the reasoning behind any choices (just for personal interest).
| Use paint thinner combined with glue. It thins the glue out so that the glue can get into all of the crevices that people can and CANNOT see. This makes the bridge much more compact and all aspects of the bridge, especially joints, will be bound together with more strength. My bridge was 20 inches long, 160 grams and held 4,287 pounds!!! P.S - As well, use a gusset technique. If you don't know what that is, then search it up online...
PS - my bridge had to cover a 15" span and be 5cm tall by 5 cm wide. Mine was 4" by 4".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Distance traveled in a simple two body problem I'm trying to program an $N$-body simulation and I'd like to be able to test it with a known solution to a simple, two-body problem. I've looked at multiple sources, but I just don't know how to apply it to my simple test case.
Two objects at rest placed 10 meters apart with mass of 1. The force between them is a modified gravitational force of F = 10 * m1 * m2 / r^2. How long will it take for each object to travel 4 meters?
| First of all, this is by no means a trivial problem. The usual method goes something like the following. The force from mass 2 on mass 1 is:
$$F_{21} = G\frac{m_1 m_2}{(x_2 - x_1)^2} = m_1 \ddot{x}_1$$
Similarly:
$$F_{12} = -F_{21} = -G\frac{m_1 m_2}{(x_2 - x_1)^2} = m_2 \ddot{x}_2$$
Canceling masses and subtracting the equations from each other gives:
$$\ddot{x}_2 - \ddot{x}_1 = \frac{d^2}{dt^2}(x_2 - x_1) = -G\frac{m_1 + m_2}{(x_2 - x_1)^2}$$
If we define $~r=x_2-x_1$ as the separation between the masses, then our equation becomes:
$$\ddot{r} = -G\frac{m_1 + m_2}{r^2}$$
Now it gets a bit trickier. We use the fact that $\ddot{r}=\dot{r}~d\dot{r}/dr$ to separate the differential equation:
$$\dot{r}~d\dot{r}=-G\frac{m_1 + m_2}{r^2}~dr$$
For $\dot{r}=0$ at $r_0$ (they're initially at rest), integrating the above and taking the negative root (the separation decreases) yields:
$$\frac{dr}{dt} = -\sqrt{ \frac{2 G (m_1 + m_2)}{r} - \frac{2 G (m_1 + m_2)}{r_0}} = -\sqrt{ \frac{2 G r_0 (m_1 + m_2) - 2 G r (m_1 + m_2)} {r\ r_0}}$$
So the elapsed time is:
$$\Delta t=\sqrt{\frac{r_0}{2 G (m_1 + m_2)}}~\int_{r}^{r_0} \sqrt{\frac{r}{r_0 - r}}~dr $$
In your case you've set $G=10$, $r_0=10$, and the masses each to one. When they've each traveled four meters, $r=2$. So you need to integrate the above from 2 to 10.
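Since the whole point is to have a test case for an N-body code, here is a small self-contained check (my own sketch, not part of the derivation): it evaluates the integral above numerically and also runs a brute-force two-body simulation with the question's values, so the two results should agree on the time at which each body has moved 4 meters (separation $r=2$):

```python
import numpy as np
from scipy.integrate import quad

G, m1, m2 = 10.0, 1.0, 1.0
r0, r_final = 10.0, 2.0

# 1) Direct evaluation of the integral derived above.
prefactor = np.sqrt(r0 / (2 * G * (m1 + m2)))
integral, _ = quad(lambda r: np.sqrt(r / (r0 - r)), r_final, r0)
print("from the integral:   t =", prefactor * integral)

# 2) Brute-force two-body simulation (semi-implicit Euler) as a cross-check.
x1, x2, v1, v2, t, dt = 0.0, r0, 0.0, 0.0, 0.0, 1e-5
while x2 - x1 > r_final:
    r = x2 - x1
    a1 = G * m2 / r**2    # m1 is pulled toward m2 (+x direction)
    a2 = -G * m1 / r**2   # m2 is pulled toward m1 (-x direction)
    v1 += a1 * dt
    v2 += a2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
    t += dt
print("from the simulation: t =", t)   # both give about 7.5 time units
```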
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of strings in elementary particles I've seen many articles about string theory and have a very simple question: I'd like to know how many strings are in a quark or an electron?
| There are a lot of different ways to get quantum field theories that look like the Standard Model in string theory. In some string theory models (such as the heterotic models), every particle that the Standard Model treats as point-like (electrons, quarks, etc) is a single elementary string. But there are other more complicated models in which the standard model particles are not built out of strings at all, but instead realized as the low energy excitations of D-branes wrapped around various kinds of singularities.
We don't know which (if any) of these models is actually correct, so we can't say with certainty that string theory predicts that an electron is made up of some number N of strings.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Interaction potential analysis from $\phi^4$ model In this paper, the authors consider a real scalar field theory in $d$-dimensional flat Minkowski space-time, with the action given by
$$S=\int d^d\! x \left[\frac12(\partial_\mu\phi)^2-U(\phi)\right],$$
where $U(\phi)$ is a general self-interaction potential. Then, the authors proceed by saying that for the standard $\phi^4$ theory, the interaction potential can be written as
$$U(\phi)= \frac{1}{8} \phi^2 (\phi -2)^2.$$
Why is this so? What is the significance of the cubic term present?
In this question Willie Wong answered by setting $\psi = \phi - 1$, why is that? Or why is this a gauge transformation?
Does anyone have a better argument for understanding the interaction potential?
| It's not a gauge transformation, it's a field redefinition. Srednicki gives an example of this in exercise 10.5. In this exercise, a free field theory is turned into what looks like an interacting field theory by a field redefinition, however in perturbation theory, the scattering amplitudes vanish, confirming that the physics hasn't changed.
I suspect you will find the same here (though I haven't done it!) - if you compute scattering amplitudes for the 3-way vertices represented by the cubic terms resulting from this field redefinition, they should cancel.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Estimate number of hairs on human head A technique of vital importance at all levels in physics is estimation. This is obvious from the first chapter in any introductory physics textbook, but it is also relevant to the working physicist. Checking orders of magnitude during research presentations is common practice - I've seen many good questions with good follow-up answers that started with "If I estimated that value I would get something much different". In general, the actual result is not the interesting thing - it's which individual factors affect the result. There are even famous examples of this: Fermi's piano tuner problem, and the Drake equation. Apparently, Fermi was so good at this that he estimated the size of the Trinity nuclear bomb test to within a factor of 2 (see the Wikipedia article for a discussion of that).
In this spirit, I would like to see someone try and estimate the number of hairs on the human head. The answer must include the basic assumptions so we can see where the major unknowns lie, and the best answer is one which requires no specific knowledge.
| Firstly, I assume that we have 300 hairs per square cm on our head. This can be tested by waxing an area of 1cm^2 on your scalp and counting the number of hairs that are removed.
Step 2: we must calculate the area of the scalp, and we assume 300 hairs per square cm applies to the whole area of the scalp.
I assume my head is a sphere. I measured the circumference to be 60 cm.
$C = 2\pi r$
$r = \frac{C}{2\pi} = \frac{60}{2\pi} = 9.55\ \mathrm{cm}$
Therefore, the surface area of the whole sphere is
$A = 4\pi r^2 = 4\pi \times 9.55^2 \approx 1146\ \mathrm{cm}^2$
Now I will assume that only about half of this ball is covered in hair.
Therefore area covered in hair = $1146 \times 0.5 \approx 573\ \mathrm{cm}^2$.
Finally we calculate the number of hairs to be:
No. of hairs = $573 \times 300 \approx 170{,}000$ hairs
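A compact version of this estimate in code (the numbers are exactly the assumptions above, so this is only a restatement):

```python
import math

hairs_per_cm2 = 300       # assumed density from the waxing test
circumference_cm = 60     # measured around the head
covered_fraction = 0.5    # roughly half of the sphere carries hair

r = circumference_cm / (2 * math.pi)
scalp_area = 4 * math.pi * r**2 * covered_fraction   # cm^2
print(round(scalp_area * hairs_per_cm2))             # on the order of 1.7e5 hairs
```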
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/55598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |