What would happen if charged plates are placed horizontally? My idea is to place charged conducting plates in such a way that they won't see each other's surfaces, unlike the typical design of parallel plates. If they are placed like this, would the force that one plate exerts on the other be the same as in the typical design? If we say the charges on the plates are q1 and q2, how could the force be calculated? Since the electric field is perpendicular to the surface of a conductor, would this placement affect the electrostatic force and field? Here is how the plates are placed:
If the two plates are made of conducting material, there is nothing preventing charges from flowing as close as possible to each other, which, in this case, means toward the edge of each plate closest to the other, right next to the insulating layer. If we now suppose the layer to be thin (dimension $d$) with respect to the plates' side ($l = 10\;\mathrm{cm}$), and also that the plates are thin in the vertical direction, then the charge distribution will be essentially linear, with charge density $\lambda = Q/l$ ($Q$ the total charge on each plate). The electric field produced by this configuration, neglecting edge effects (hence the assumption $d\ll l$), is $E = 2\lambda/d$ in a direction perpendicular to the insulating layer axis. Hence the force per unit length is $f = 2\lambda^2/d$, and the total force $$ F = 2 \frac{\lambda^2 l}{d} = 2 \frac{Q^2}{d\; l} $$ is attractive and directed along the line joining the centers of the two plates, for obvious reasons of symmetry. If instead the two plates are made of insulating material, carrying constant surface charge density $\sigma = Q/l^2$, the force, again directed perpendicular to the insulating layer's major axis, can be recovered by means of the usual quadruple integral. Any CAS (= computer algebra system) can do that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/130961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Wouldn't the presence of dark matter slow the expansion of the universe? If there is a huge amount of dark matter in the universe, wouldn't this extra gravity prevent the accelerated expansion of the universe?
The short answer is yes, the presence of dark matter would act to counter the expansion of the universe. And in fact it does--but not enough to stop the expansion. Dark matter has gravity just like normal matter. In fact, that's pretty much the only reason we know dark matter exists at all: we can observe dark matter's gravitational effects in the rotation rates of galaxies, gravitational lensing, and things like that. Note that you shouldn't confuse dark matter with dark energy, which is presumed to be responsible for the acceleration of the expansion of the universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Hilbert space for Density Operators (instead of Banach spaces) Is it possible to construct a well-defined inner product (and therefore orthonormality) within the set of self-adjoint trace-class linear operators? In the affirmative case, dynamics could be analyzed in a Hilbert space, which seems much simpler than a Banach space. What is the fundamental reason why this is not possible?
It is possible indeed! It is called the Hilbert–Schmidt scalar product; it is defined on a Hilbert space of bounded compact operators including the trace-class operators. $$\langle A|B\rangle := tr(A^\dagger B)\:.$$ The space of Hilbert–Schmidt operators is made of all bounded operators $A$ on the considered Hilbert space such that $A^\dagger A$ is trace class. It is in fact possible to reformulate all of QM using that notion.
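As a quick finite-dimensional illustration of this scalar product (a Python/NumPy sketch; in finite dimension every operator is trivially trace class, so the subtleties of the infinite-dimensional case don't show up here):

```python
import numpy as np

# Two random operators on a finite-dimensional Hilbert space.
rng = np.random.default_rng(0)
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

def hs_inner(A, B):
    """Hilbert-Schmidt scalar product <A|B> = tr(A^dagger B)."""
    return np.trace(A.conj().T @ B)

# Conjugate symmetry: <A|B> = conj(<B|A>)
print(np.isclose(hs_inner(A, B), np.conj(hs_inner(B, A))))   # True
# Positivity: <A|A> = sum of |A_ij|^2 >= 0
print(np.isclose(hs_inner(A, A), np.sum(np.abs(A)**2)))      # True
```

The two checks — conjugate symmetry and positivity — are exactly what make $tr(A^\dagger B)$ an honest inner product on the Hilbert–Schmidt operators.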
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What is entropy really? On this site, change in entropy is defined as the amount of energy dispersed divided by the absolute temperature. But I want to know: What is the definition of entropy? Here, entropy is defined as average heat capacity averaged over the specific temperature. But I couldn't understand that definition of entropy: $\Delta S$ = $S_\textrm{final} - S_\textrm{initial}$. What is entropy initially (is there any dispersal of energy initially)? Please give the definition of entropy and not its change. To clarify, I'm interested in the definition of entropy in terms of temperature, not in terms of microstates, but would appreciate explanation from both perspectives.
Here's an intentionally more conceptual answer: Entropy is the smoothness of the energy distribution over some given region of space. To make that more precise, you must define the region, the type of energy (or mass-energy) considered sufficiently fluid within that region to be relevant, and the Fourier spectrum and phases of those energy types over that region. Using relative ratios "factors out" much of this ugly messiness by focusing on differences in smoothness between two very similar regions, e.g. the same region at two points in time. This unfortunately also masks the complexity of what is really going on. Still, smoothness remains the key defining feature of higher entropy in such comparisons. A field with a roaring campfire has lower entropy than a field with cold embers because with respect to thermal and infrared forms of energy, the live campfire creates a huge and very unsmooth peak in the middle of the field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 14, "answer_id": 7 }
Definition of derivative operator on a manifold I'm hoping to understand the motivation for certain parts of the definition of a derivative operator $\nabla$ on a manifold $M$. In Wald's General Relativity, two clauses of the definition are: * *Commutativity with contraction: For all tensors $\mathit{A} \in \mathscr{T}(k,l)$, $\nabla_{d}(\mathcal{A}^{a_1...c...a_k}_{b_1...c...b_l}) = \nabla_{d}\mathcal{A}^{a_1...c...a_k}_{b_1...c...b_l}$. (the parentheses indicate contraction) *Consistency with the notion of tangent vectors as directional derivatives on scalar fields: for all functions $f \in \mathscr{F}:M \rightarrow \mathbb{R}$ and all tangent vectors $t^a \in V_p$, it is required that $t(f)=t^a\nabla_af$ What is the point of having derivative operators commute with contraction (where we sum over the vectors and dual vectors for some (i,j) slot in the tensor)? What theorems are not possible to prove if this commutativity isn't stipulated? The so-called "Leibniz rule" for derivative operators on manifolds corresponds to the simple product rule in elementary calculus; does this commutativity-with-contraction rule correspond to something simple as well? My second question is about the notation of clause 2. We know that the $t^a$ are tangent vectors in the tangent space $V_p$ at point $p$ on the manifold. Then what is $t$ supposed to be?
For $1.$, you have, by applying the Leibniz rule for covariant derivatives: $\nabla_{d}(\mathcal{A}^{a_1...c...a_k}_{b_1...c...b_l}) \\= \nabla_{d}(\mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l} \delta_{c'}^{c}) \\= \nabla_{d}(\mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l} g^{ca}g_{ac'}) \\=(\nabla_{d}\mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l}) \delta_{c'}^{c} + \mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l} (\nabla_{d} g^{ca})g_{ac'} + \mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l} g^{ca} (\nabla_{d}g_{ac'})$ The covariant derivative of the metric tensor is zero, so $\nabla_{d} g^{ca} = \nabla_{d}g_{ac'}=0 $: $\nabla_{d}(\mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l} \delta_{c'}^{c}) = (\nabla_{d}\mathcal{A}^{a_1...c'...a_k}_{b_1...c...b_l}) \delta_{c'}^{c}$ So, covariant derivatives and contractions commute.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
2-slit experiment In the 2-slit experiment, is it possible to "account" for all of the energy in the incoming beam - i.e. does all of the incoming energy show up in the bright spots or is some of it "destroyed" when destructive interference takes place? If it is destroyed, what form is it converted into? If it's NOT destroyed, what would happen if you allowed the light to pass through a hole where the dark spot had been, then, exploiting the different angle that each path arrived from, organised things such that these waves now reinforced each other? We would then have a situation where more energy comes out than goes in. Which is impossible. Or does "spooky action at a distance" come into play and the overall brightness of the previous spots slightly dim to account for the new path?
This question is really classical. If you model two-slit interference in Maxwell's electrodynamics, the same thing happens: opening the second slit causes the intensity at some points on the screen to decrease. It happens with water surface waves too, and any other kind of wave. Destructive interference at one point is always matched by constructive interference nearby, and energy is "diverted" from the former to the latter location through local processes. I find it counterintuitive myself, but you don't need any quantum weirdness to understand it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Physical applications of matrices and determinants Other than notation devices, I don't see any direct application of matrices/determinants in physics. For example, they are just a different way to write a partial derivative, and determinants tell you whether a set of simultaneous equations can be explicitly solved. Calculus, for instance, can be directly applied to physical problems, but I don't know of any other application of matrices other than representing equations in a different notation. And in most of the cases like vector products, you just realise that a huge term can just be written down as a determinant, so it is essentially a notational tool. They are used in tensor calculus, but for similar reasons. Can someone please guide me on more applications with good sources?
Applications of matrices: * *Matrix (aka quantum) Mechanics, obviously *Mechanics of deformable solids (where matrices describe stresses) *Statics (mostly in engineering contexts), where matrices describe stresses. *Symmetries (where matrices describe rotations/scaling/translations etc..) *Coordinate transformations, where matrices describe the transformation a coordinate system undergoes. *Representation of (Linear) Operators (related to quantum mechanics but not only) Determinants: * *Measure volumes (in transformations etc..) *Measure volumes in the general sense of a measure (for example in the Path-integral formulation, in many cases the result is expressed as a determinant of a generally infinite-dimensional matrix)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Capacitor-like thing for controlling the temperature of a fluid? I want to minimise a Gibbs'-phenomenon-like effect, i.e. sudden peaks (temperature peaks here), in a container. Assume you have a cone where you want to block the transmission of temperature into the cone, the way capacitors block current. The ideal situation would be that the material the cone is made of naturally contains things like capacitors. However, I have not seen such a material, nor such small capacitors. The other design is to make the cone surface an insulator while having, at the ends of the cone, things like capacitors that store heat (instead of electric charge) for the fluid. Are there any capacitor-like things for a fluid in such a situation?
The usual hydraulic analogy for a capacitor is an elastic membrane: A capacitor doesn't allow current to flow across it, but you can push charge onto it by applying a potential. In the hydraulic analogy an elastic membrane across the pipe doesn't allow water to flow through it, but you can push some water through the pipe by elastically deforming the membrane.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is density an intensive property? I am still trying to understand what intensive and extensive properties are. Possibly someone can give a pointer to a decent text (preferably on the web), as I am not too happy (to say the least) with what I found so far on the web. I already asked here one question on this, which I finally answered myself. My new problem (among several others) is that density seems to be one of the first properties taken as an example of an intensive property. While it seems a good approximation of what I know about solids and liquids, it seems to me a lot more problematic with gases, as they tend to occupy all the available space you give them. But none of the documents I found seems to make any restriction regarding the density of gases. It seems to me that my opinion (apparently contested) that velocity is an intensive property may be easier to support than the intensiveness of density in the case of a gas. Or to put it differently, I do not see why pressure should be more intensive than volume, while Wikipedia lists pressure as intensive, but not volume. The ideal gas law states that $PV=nRT$, which apparently gives a pretty symmetrical role to $P$ and $V$. And density depends on pressure (actually using this same formula and molecular weight). If it were not for the fact that some principles seem to be based on the concept, such as the state postulate which I found on Wikipedia, I would start wondering whether these are real concepts in physics.
Consider $10~\mathrm{ kg}$ of a substance. Take a few $\mathrm{kg}$ of the substance and measure the mass density. The density is the same as before. So we can say, from the above explanation, that density is an intensive property.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Skin depth of current density in magnetic conductor at boundary between two different materials Imagine a magnetic conductor with a cylindrical cross section, surrounded by a coil with a time varying current of $$I = I_0\cdot \cos (2\pi f t)$$ The conductor is split into two parts, the first with a conductivity and a relative permeability of $\kappa, \mu$, the second with $4\kappa, \mu$. There is a magnetic field $B$ through the conductor, which is caused by the current and therefore time varying as well: $$B = B_0\cdot \cos (2\pi f t)$$ The change of this magnetic field induces a voltage inside the material and causes a current density $J$. This current density has the value $J_1$ on the surface of the left conductor and $J_2$ on the right side. The skin depth $\delta$ is defined as the distance from the surface where $J = 0.37 \cdot J_1$, respectively $J = 0.37 \cdot J_2$, with $0.37 \approx 1/e$, and also: $$\delta = \frac{1}{\sqrt{\pi f\kappa\mu}} = \frac{\sqrt{2j}}{\alpha}$$ where $\alpha$ is the propagation constant. I found out by simulation that at the boundary between the two materials (the blue one and the orange one) the following applies: $$\frac{1}{\delta_{12}} = \frac{1}{2}(\frac{1}{\delta_{1}}+\frac{1}{\delta_{2}})$$ and therefore $$\alpha_{12} = \frac{1}{2}(\alpha_1 + \alpha_2)$$ But I'm really struggling to prove that. Can someone give me some hints on how I could get these relations analytically? Here is another plot: The upper one shows the current density at the surface. The second one shows the contour line where the current density has decreased by about $63\%$ (the skin depth). At $z=0$ is the boundary between both materials. Though the current density is a step function, the skin depth is continuous and has the value $\delta_{12}=\frac{2}{\frac{1}{\delta_{1}}+\frac{1}{\delta_{2}}}$ at $z=0$.
Well, I hope I didn't go too much off track. I'm open for discussion. (A possible solution path is at the bottom.) Looking at the formula for $\delta$ I see that it is actually related to the speed of EM waves in the medium. The speed of propagation for a medium with the properties $\kappa_1$ and $\mu_1$ is: $$c_1=\frac{1}{\sqrt{\kappa_1\mu_1}}$$ Therefore we can rewrite it as: $$\delta=\frac{1}{c\sqrt{\pi f}}$$ Looking at it, I would like to have the $\omega$ in it rather than $f$: $$\delta=\frac{\sqrt{2}}{c\sqrt{\omega}}$$ Let's square it: $$\delta^2=\frac{2}{c^2\omega}$$ What stays the same, I guess, is $\omega$: $$\omega=\frac{2}{c^2\delta^2} $$ The frequency is the same everywhere, so it is: $$\frac{1}{c_1^2\delta_1^2}=\frac{1}{c_2^2\delta_2^2} $$ or: $$\frac{c_2}{c_1}=\frac{\delta_1}{\delta_2} $$ So now we actually reduced it to a problem of finding the speed of propagation of EM-waves at the interface. That will not help directly but it gives me another idea. I would now suggest looking at it similarly to a semiconductor PN-junction problem, and calculating the change of $\kappa$ on the interface due to the differences in charged particle density. The drift currents will do the charge compensation on the interface. So the diffusion current is: $$I_{diff}=qD\frac{dn}{dx}$$ Using this formula you get some current; this affects the local $\kappa$ and creates a $\kappa'$. Using this value you should get the $\delta$ on the boundary (and even the functional dependence on the distance from the interface).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/131994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How much voltage should be applied to an object to create a certain amount of charge? I am curious as to how much voltage should be applied to create a specific charge. Is there a formula to calculate it, and what are the parameters that can affect the relation between the voltage and the charge created in that object? P.S.: I haven't taken a course on this subject, so I don't know the details.
Voltage has absolutely nothing to do with charge. I can "move" an infinite amount of charge through a superconductor with zero voltage. Are you asking about the relationship of charge to voltage on a capacitor? That's a linear relationship: Q=C*U. The charges, in that case, are not "created" but merely separated. If you want more charge for the same amount of voltage, all you have to do is increase the capacitance of the capacitor. Now, if you want to actually create new charges out of "nothing", you would have to create electron-positron pairs, which requires an energy of approx. 1 MeV per pair. In that case a simple accelerator would have to operate on a voltage of over one MV to overcome the threshold of pair production in particle collisions. Even so, that would be a very inefficient process, to say the least, and a detailed analysis of the kinematics shows that a multiple of that threshold energy is needed. Practical positron sources use multi-MeV photons, which are derived from GeV beam lines and ultrashort laser pulses focused on heavy nuclei, like gold, in which case the emission is caused by complex multi-photon processes.
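A minimal numerical illustration of the linear relation $Q = C\cdot U$ mentioned above (Python; the capacitance and voltages are arbitrary made-up values):

```python
# Charge stored on a capacitor is linear in the applied voltage: Q = C * U.
# The values below are arbitrary, purely for illustration.
C = 100e-6                       # capacitance in farads (100 uF)
for U in (1.0, 5.0, 12.0):       # applied voltage in volts
    Q = C * U                    # separated (not created) charge in coulombs
    print(f"U = {U:5.1f} V  ->  Q = {Q:.1e} C")
```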
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Photons to Represent a Wave I fear that I have a fundamental misconception about the "wave particle duality" of light, but in a related question, the answerer said, in some sense, that a light wave propagates until it hits something, at which point in time it (can) act(s) like a photon. Which is fine to me, but there are a finite number of photons in a wave front, so what happens if you "run out" of photons in a wave front? Certainly the wave needs to interact with everything it touches, so if you have a wave that only effectively has one photon, and it "hits" two electrons, how does it interact with both? Say you have two electrons both a distance $R$ from a photon emitter, emitting circular waves. Or something like that.
"Running out" of photons simply means that your wavefront is absorbed or scattered in a different direction or something like that. Either way, the original wave is "consumed", so you loose intensity or photons, depending on which picture you like better. For the case of a single photon source: One photon can only interact with one electron. However, there are more complex cases, where the electrons could be coupled (like in Cooper pairs), then of course both electrons would somehow "feel" the photon. Or you can think of higher order processes. For example the photon could couple to one electron and form a polariton, which then could interact with another electron.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
How can I calculate the force that is applied on a tube by another tube? Let's say there are two tubes (cylinders with no tops or bottoms) with charges $q_1$ and $q_2$, radii $b_1$ and $b_2$, lengths $l_1$ and $l_2$. These tubes are located along the axis of each other's surfaces like in this figure: The electric field that the first tube creates at a point is: $$ E = \frac{q}{4\pi\varepsilon_0}\left(\frac{1}{\sqrt{b^2 + (c-a)^2}} - \frac{1}{\sqrt{b^2 + (c+a)^2}}\right) $$ where $b$ is the radius of the tube, $c-a$ is the distance between the centre of the furthest part of the tube and the point, $c+a$ is the distance between the centre of the closest part of the tube and the point, $q$ is the total charge on the tube and $\epsilon_0$ is the electric constant. Here is the figure of the tube and the point for those who didn't understand from my description: The question is how can I calculate the force between these two tubes? Update: The electric field formula I found is not correct, since it is only valid for a point on the axis of the cylinder. Thus I would be pleased if you could show me how to solve the problem from the beginning.
Are you talking about the forces two parallel current-carrying wires exert on one another? Given two current-carrying wires, $a$, and $b$, we can determine the force exerted on wire $b$ by wire $a$ with $$F=(µ_oI_a/2πr) I_bL$$ where $F$ is the force exerted, $µ_o$ is the magnetic permeability of a vacuum, $I_a$ is the current flowing through wire $a$, $I_b$ is the current flowing through wire $b$, $L$ is the length of the section of wire $b$ in the magnetic field of wire $a$, and $r$ is the distance between the wires. So, let's work through an example problem. What is the force wire $a$ exerts on wire $b$ if both wires carry a current of 3 Amps, are 0.25m apart, and if wire $b$ has 1m of wire in the magnetic field of wire $a$? $$F=(µ_oI_a/2πr)I_bL$$ $$F=(µ_o(3)/2π(0.25))(3)(1)$$ $$F=3(µ_o⋅3/1.57079)$$ $$F=3(µ_o⋅1.9098)$$ $$F=3((4π×10^{−7})⋅1.9098)$$ $$F=3(2.4⋅10^{-6})$$ $$F=7.2⋅10^{-6}\text{ Newtons}$$ I would like to apologize in advance in the case that I misunderstood the question. I hope I was able to help. I'm going to add this to my favorites list to see what happens. That is, if I didn't answer your question.
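The arithmetic in the worked example can be reproduced in a few lines of Python (same assumed numbers: 3 A in each wire, 0.25 m apart, 1 m of wire in the field):

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic permeability of vacuum, T*m/A
Ia, Ib = 3.0, 3.0          # currents in the two wires, A
r = 0.25                   # separation between the wires, m
L = 1.0                    # length of wire b inside wire a's field, m

F = (mu0 * Ia / (2 * math.pi * r)) * Ib * L
print(F)   # ~7.2e-06 N, matching the worked example
```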
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Sound difference between musical instruments I know that the difference between two musical notes is given by the sound frequency, and the difference in volume is given by the amplitude. What I am wondering is why does the same note sound different on different musical instruments? What in the wave makes the difference between the sound of a harmonica and the sound of a violin singing the same note?
It's not just a pure single frequency of sound that is being transmitted by an instrument. Just like with light, if you ask the frequency of the sun's emission, the answer would be that it's a whole broad spectrum (hence its ability to produce a rainbow, or allow objects to reflect colours other than yellow) but its peak frequency is yellow. You can ask for a distribution of the colours (or light frequencies) that it transmits, and you'll get a plot of intensity vs light frequency (this is also known as the Fourier Transform of the plot of the actual amplitude of light waves travelling from the sun)... the plot will peak at the frequency represented by yellow light. You will see something similar for a sound note. If you look at the Fourier transform of the middle C played by a piano string (approximately 262 Hz), you will see a plot with a bunch of hills and valleys, the tallest hill peaking at 262 Hz, the second tallest at 524 Hz, the third tallest at 786 Hz, etc (note that they are integer multiples of the note itself) but those hills will have some shape to them meaning that other frequencies outside of the peak note are represented in the note itself. It's the shape of those hills (as well as the ratio of the peak of those hills to the following integer multiple peaks) that determine the style of the sound.
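Here is a small NumPy sketch of that idea: synthesize a 262 Hz tone with a couple of weaker overtones (the amplitudes are made up, standing in for one particular "instrument"), take the Fourier transform, and the peaks come out at integer multiples of the fundamental. Different instruments differ in the relative heights and shapes of those peaks.

```python
import numpy as np

fs = 44100                         # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)      # one second of signal
f0 = 262.0                         # fundamental (roughly middle C)

# A toy "instrument": fundamental plus two weaker harmonics (made-up amplitudes).
signal = (1.0 * np.sin(2 * np.pi * f0 * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.2 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The three largest peaks sit at ~262, ~524, ~786 Hz.
peaks = freqs[np.argsort(spectrum)[-3:]]
print(np.sort(peaks))
```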
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can a third type of electrical charge exist? Upon reading my book on physics, it mentions that there are only two discovered types of electric charges. I wonder if there could be a third type of elusive charge, and what type of effects it could have upon matter.
Mathematically, electric charge current 4-vector conservation refers to the invariance of the theory under U(1) transformations, so there aren't different types of electric charge (like in SU(n) theories) except the usual plus and minus. Moreover, the fact that a physical quantity is conserved means that the corresponding operator commutes with the Hamiltonian, which is constructed from field operators. It's not hard to show that a particle must have a charge opposite to that of its antiparticle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 1 }
How can metal objects become electrically charged, if current flow means that an equal number of electrons enter/leave the object? I am trying to answer to the question in the title. I am restricting my question to metal objects only. Here is my logical reasoning: * *Current is the flow of charge over time. *In a circuit (simple series circuit, let's say), the flow of current is the same at every point in the circuit. *Therefore, the same # of coulombs of charge is flowing at every point in the circuit. *Electrons are the "material" of the charge that is flowing. *Therefore, equal flow of charge at every point in the circuit must mean equal flow of electrons at every point in the circuit. *Therefore, current can never cause a metal object to become positively or negatively charged, because the net number of electrons in the metal object will never change due to current flow. (!) Of course, objects CAN become electrically charged, gaining or losing electrons. So something is wrong with my reasoning or my premises. I just don't know what it is. Where am I going wrong?
There's a problem in that you are assuming that all current takes place in a circuit. But in some circumstances, like in a lightning strike or other form of electrostatic discharge for example, a current exists for a while, but it does not take place in a circuit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Which of these two textbook equations of geodesic deviation is correct? My previous question Textbook disagreement on geodesic deviation on a 2-sphere got shot down as “off topic”, so I'm having a second stab at it. Misner et al's Gravitation (p34) gives the geodesic deviation equation as$$\frac{D^{2}\xi^{\alpha}}{D\tau^{2}}+R_{\phantom{\mu}\beta\gamma\delta}^{\alpha}\frac{dx^{\beta}}{d\tau}\xi^{\gamma}\frac{dx^{\delta}}{d\tau}=0,$$ with the right-hand side $\xi$ index $\gamma$ equal to the second lower index on the Riemann tensor. Lambourne's Relativity, Gravitation and Cosmology (p185), on the other hand, gives $$\frac{D^{2}\xi^{\mu}}{D\lambda^{2}}+R_{\phantom{\mu}\alpha\beta\gamma}^{\mu}\xi^{\alpha}\frac{dx^{\beta}}{d\lambda}\frac{dx^{\gamma}}{d\lambda}=0,$$ with the right-hand side $\xi$ index $\alpha$ equal to the first lower index on the Riemann tensor. My question is, which of these two equations is correct? I tried to answer this question myself by using the two equations to calculate the geodesic deviation on the surface of a unit 2-sphere. With Misner's equation (substituting $\lambda$ for $\tau$) I got $$\frac{D^{2}\xi^{\theta}}{D\lambda^{2}}=\left(\sin^{2}\theta\right)\left(u^{\phi}u^{\theta}\right)\xi^{\phi}-\left(\sin^{2}\theta\right)\left(u^{\phi}u^{\phi}\right)\xi^{\theta}$$ and $$\frac{D^{2}\xi^{\phi}}{D\lambda^{2}}=\xi^{\theta}\left(u^{\theta}u^{\phi}\right)-\xi^{\phi}\left(u^{\theta}u^{\theta}\right).$$ You can see my calculation on my previous question Textbook disagreement on geodesic deviation on a 2-sphere With Lambourne's equation I got $$\frac{D^{2}\xi^{\theta}}{D\lambda^{2}}=0$$ and $$\frac{D^{2}\xi^{\phi}}{D\lambda^{2}}=0.$$ This didn't seem right to me so I concluded that Lambourne's equation is incorrect.
Despite my comment, on second look your second equation, attributed to Lambourne, is always identically zero. This is because you multiply the symmetric tensor $$\frac{dx^{\beta}}{d\lambda}\frac{dx^{\gamma}}{d\lambda}$$ against $R^{\mu}{}_{\nu\beta\gamma}$, and the Riemann tensor is antisymmetric on those last two indices, and tracing a symmetric tensor against an antisymmetric tensor always gives zero.
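The fact being used here — that tracing a tensor antisymmetric in two indices against one symmetric in the same two indices gives zero — is easy to confirm numerically; below is a sketch with a random tensor antisymmetrized over its last two indices (the dimension and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random tensor R[mu, nu, b, c], then antisymmetrize over its last two indices.
R = rng.normal(size=(n, n, n, n))
R = R - np.swapaxes(R, 2, 3)          # now R[..., b, c] = -R[..., c, b]

u = rng.normal(size=n)                # stand-in for dx/dlambda
S = np.einsum('b,c->bc', u, u)        # symmetric tensor u^b u^c

contraction = np.einsum('mnbc,bc->mn', R, S)
print(np.allclose(contraction, 0))    # True
```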
{ "language": "en", "url": "https://physics.stackexchange.com/questions/132931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If a photon has no mass, how can it be attracted by the Sun? I read that the photon doesn't have mass, but my teacher says that the photon has mass because the sun can attract it (like in the experiments to prove the theory of relativity). I think that there is another reason to explain that. How can I explain that the photon doesn't have mass and the sun attracts photons?
To properly understand what is going on you need to understand general relativity. Massless particles, like photons, travel on null geodesics and mass bends spacetime so the null geodesics are not straight lines. The problem is that neither you nor your teacher understand general relativity so this isn't a very convincing argument. But here is an argument to show photons are attracted by gravity even in Newtonian gravity. If you have a large mass $M$ attracting a small mass $m$ and the distance between the two masses is $d$ then the force between them is given by Newton's equation: $$ F = \frac{GMm}{d^2} $$ To get the acceleration $a_m$ of the small mass $m$ we use Newton's second law $F = ma$ so: $$ a_m = \frac{F}{m} = \frac{GMm}{md^2} = \frac{GM}{d^2} $$ Note that the mass of the small object has cancelled out, so the acceleration doesn't depend on $m$ at all. That means a massless object like a photon experiences exactly the same acceleration as a massive object. So even in Newtonian gravity we expect the path of a light ray to be deflected by gravity. In fact with some head scratching the equation for the expected deflection can be derived, and it is: $$ \theta_{Newton} = \frac{2GM}{c^2d} $$ where $d$ is the distance of closest approach and $\theta_{Newton}$ is the angle that the light ray is bent. As I mentioned at the start, to properly describe the light ray you need general relativity and using this we find that the deflection is actually twice as big as Newtonian gravity predicts: $$ \theta_{GR} = \frac{4GM}{c^2d} $$
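Putting in numbers for a light ray grazing the Sun (using the standard values $GM_\odot \approx 1.327\times10^{20}\;\mathrm{m^3/s^2}$ and $R_\odot \approx 6.96\times10^{8}\;\mathrm{m}$, which are assumptions added here rather than part of the question), a few lines of Python reproduce the classic result: about 0.87 arcseconds for the Newtonian estimate and 1.75 arcseconds for GR.

```python
import math

GM_sun = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
R_sun = 6.96e8         # solar radius ~ closest approach d, m
c = 2.998e8            # speed of light, m/s

theta_newton = 2 * GM_sun / (c**2 * R_sun)   # radians
theta_gr = 4 * GM_sun / (c**2 * R_sun)       # radians

arcsec = 180 / math.pi * 3600
print(theta_newton * arcsec)   # ~0.87 arcsec
print(theta_gr * arcsec)       # ~1.75 arcsec
```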
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Scientists observe the laws of physics, but where do they come from? Has anyone ever considered how the laws of physics that we study came into being?
Actually... There is a branch of physics that attempts to answer that question. It's called Physical Cosmology. Among many other things, cosmologists want to know why the physical laws are as they are. The trouble is, there just are not that many other universes ready for us to compare. We also don't know what is going on with most of our universe, like with dark matter, and that takes up most of the scientific community's time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A step in zeta function regularization I'm just wondering about the mathematical step $$\sum_{n=1}^\infty n\exp[-\epsilon n\sqrt x]=\frac1{\epsilon^2 x}-\frac1{12}+\mathcal O(\epsilon).$$ Why is this equality so? I see that $$\sum_{n=1}^\infty n\exp[-\epsilon n\sqrt x]=-\frac1{\sqrt x}\frac{\partial}{\partial\epsilon}\frac{1}{1-\exp[-\epsilon\sqrt x]}\simeq-\frac1{\sqrt x}\frac{\partial}{\partial\epsilon}\frac{1}{\epsilon\sqrt x}=\frac1{\epsilon^2x}.$$ But how about the $-\frac1{12}$?
\begin{align*} \sum_{n=1}^\infty n\exp[-\epsilon n\sqrt x] &=-\frac1{\sqrt x}\frac{\partial}{\partial \epsilon}\frac{1}{1-\exp[-\epsilon\sqrt x]}\\ &=\frac{\exp[-\epsilon\sqrt x]}{(1-\exp[-\epsilon\sqrt x])^2}\\ &=\frac{1}{\exp[\epsilon\sqrt x]-2+\exp[-\epsilon\sqrt x]}\\ &\simeq\frac{1}{\frac2{2!}(\epsilon\sqrt x)^2+\frac{2}{4!}(\epsilon\sqrt x)^4}\\ &=\frac{1}{\epsilon^2 x}\frac1{1+\frac1{12}\epsilon^2 x}\\ &\simeq\frac{1}{\epsilon^2 x}(1-\frac1{12}\epsilon^2 x)\\ &=\frac1{\epsilon^2 x}-\frac1{12}. \end{align*}
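The expansion can also be checked numerically: for small $\epsilon$ the partial sum approaches $1/(\epsilon^2 x) - 1/12$. A quick Python sketch (the value of $x$ and the cutoff $N$ are arbitrary, with $N$ chosen large enough that the dropped tail is negligible):

```python
import math

def lhs(eps, x, N=200000):
    """Partial sum of n * exp(-eps * n * sqrt(x))."""
    s = math.sqrt(x)
    return sum(n * math.exp(-eps * n * s) for n in range(1, N + 1))

x = 2.0
for eps in (0.1, 0.05, 0.01):
    exact = lhs(eps, x)
    approx = 1.0 / (eps**2 * x) - 1.0 / 12.0
    print(eps, exact, approx, exact - approx)   # the difference vanishes as eps -> 0
```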
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Potential difference between point on surface and point on axis of uniformly charged cylinder Question: Charge is uniformly distributed with charge density $ρ$ inside a very long cylinder of radius $R$. Find the potential difference between the surface and the axis of the cylinder. Express your answer in terms of the variables $ρ$, $R$, and appropriate constants. $Attempt:$ I am struggling with determining which Gaussian surface to use. If I use a cylinder, then the cylinder would have an infinite area, right? How can I deal with that? If I use a sphere (since I am trying to find the potential difference between only two points, one on the surface and one on the axis), what will be the charge inside the sphere? If I use a sphere as my Gaussian surface, I get: $$\int \overrightarrow{E}.d\overrightarrow{A}=\frac{Q }{\epsilon _{0}}$$ $$\Delta V = -\int_{i}^{f}\overrightarrow{E}.d\overrightarrow{s}$$ $$E = \frac{\rho }{4\pi R^{2}\epsilon _{0}}$$ $$\Delta V = \frac{\rho }{4\pi R^{2}\epsilon _{0}} \int_{0}^{R}dR=\frac{\rho }{4\pi R\epsilon _{0}}$$ But this is wrong.
By Gauss' Law, $E\cdot A=\frac {q}{\epsilon_0}$ (assuming that the electric field is constant at every $dA$ and that it is always parallel to $dA$, which it is in this case). Let us define the charge contained in the original problem cylinder as $Q$, and the charge in the smaller Gaussian cylinder as $q$. Therefore, the charge in the smaller Gaussian cylinder is dependent on the ratio between the volumes of the two cylinders, due to the uniform charge distribution: $$q=Q\frac{\pi r^2 L}{\pi R^2 L}$$ This simplifies to $$q=Q\frac{\pi r^2}{\pi R^2}$$ Also we know that $Q=\rho V=\rho\pi R^2 L$. Substituting this in we get $$E\cdot A=\frac{\rho\pi R^2 L r^2}{\epsilon_0 R^2}$$ $$E=\frac{\rho\pi R^2 L r^2}{\epsilon_0 R^2\, 2\pi r L}$$ Cancelling out from top and bottom gives us the answer $$E=\frac{\rho r}{2\epsilon_0}$$
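The question actually asks for the potential difference between the axis and the surface, which follows by integrating this field: $\Delta V = \int_0^R \frac{\rho r}{2\epsilon_0}\,dr = \frac{\rho R^2}{4\epsilon_0}$. Below is a small numerical check of that last integration step (the values of $\rho$ and $R$ are arbitrary, chosen only for illustration):

```python
import numpy as np

eps0 = 8.854e-12     # vacuum permittivity, F/m
rho = 1e-6           # charge density, C/m^3 (arbitrary illustrative value)
R = 0.05             # cylinder radius, m (arbitrary illustrative value)

r = np.linspace(0.0, R, 100001)
E = rho * r / (2 * eps0)                              # field inside the cylinder
dV_numeric = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(r))   # trapezoid rule
dV_closed = rho * R**2 / (4 * eps0)

print(dV_numeric, dV_closed)   # the two agree
```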
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Dark matter a medium for light propagation Is dark matter a candidate to fill the void left by the luminiferous ether as a medium for light travel?
No. There is no void left by the lack of an aether. The very notion of aether should serve as a warning as to how catastrophically analogical reasoning can fail. "Water waves are in water, sound waves are in air, therefore there must be something in which light propagates." This is flawed logic, and decades of physics were arguably hindered by adhering to it. In fact, any material medium for light would contradict the beautiful result of Michelson and Morley, showing that the speed of light does not depend on velocity with respect to some material's frame. This invariance is in fact now at the very heart of modern physics, and is the basis for relativity, which has been verified in innumerable experiments. Dark matter is, according to the leading theories, some form of matter that is basically normal except that it essentially doesn't interact via the electromagnetic force. As such, it is actually a poor candidate for explaining anything to do with light, even if there were something that needed explaining.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does Dirac show that $\langle B|\bar{\bar{\alpha}}|P\rangle\;=\; \overline{\langle P|{\bar{\alpha}}|B\rangle}\;=\; \langle B|{\alpha}|P\rangle$? Dirac shows that the conjugate imaginary of $\langle \!P|\alpha$ is $\bar{\alpha} |P\!\rangle$ and then starts with the identity on page 27 in his book: $$\langle B|\bar{{\alpha}}|P\rangle\;=\; \overline{\langle P|{{\alpha}}|B\rangle}\tag {4}$$ He then says this expression is true for any linear operator $\alpha$ and ket vectors$|P\!\rangle$ and $|B\!\rangle$; so replacing $\alpha$ with $\bar\alpha$ we get $$\langle B|\bar{\bar{\alpha}}|P\rangle\;=\; \overline{\langle P|{\bar{\alpha}}|B\rangle}\;=\; \langle B|{\alpha}|P\rangle$$ by using (4) again with $|P\!\rangle$ and $|B\!\rangle$ interchanged. Why should this give the second equality? If (4) is applied again, I would expect ${\bar\alpha}\rightarrow \bar{\bar\alpha}$ getting back to the LHS expression, yet Dirac has ${\bar\alpha}\rightarrow \alpha$
The relation (4) literally switches the states, adds an overall complex conjugate, and removes a hermitian bar over the operator. (Actually, no one uses bars anymore to denote hermitian conjugates, they use daggers instead. And because stacked bars get ugly, I'll use stars for complex conjugation of plain complex numbers.) Thus we have $$ \langle B \mid (\alpha^\dagger)^\dagger \mid P \rangle = \left(\langle P \mid \alpha^\dagger \mid B \rangle\right)^* = \left(\langle B \mid \alpha \mid P \rangle^*\right)^* = \langle B \mid \alpha \mid P \rangle. $$ Since this holds for any $\lvert B \rangle$, $\lvert P \rangle$, and $\alpha$, this shows in a roundabout way that $\alpha^{\dagger\dagger} = \alpha$ for any $\alpha$. I think what's confusing you is that you assumed such an obvious fact before Dirac did, and you thought (4) meant "switch states, add an overall complex conjugate, and add a hermitian bar over the operator (which may cancel with one already there)."
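In a finite-dimensional representation the chain of equalities is easy to verify numerically with a random operator and random kets (a NumPy sketch, using * for complex conjugation as in the answer):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 5
alpha = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # ket |B>
P = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # ket |P>

def braket(bra, op, ket):
    """<bra| op |ket>, with the bra vector conjugated."""
    return np.vdot(bra, op @ ket)

dag = lambda M: M.conj().T

lhs = braket(B, dag(dag(alpha)), P)          # <B| (alpha^dagger)^dagger |P>
mid = np.conj(braket(P, dag(alpha), B))      # complex conjugate of <P| alpha^dagger |B>
rhs = braket(B, alpha, P)                    # <B| alpha |P>
print(np.isclose(lhs, mid), np.isclose(mid, rhs))   # True True
```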
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivation of formula of potential energy by a conservative force The formula relating a conservative force to its potential energy is given by: $$ F = -\nabla U(r), $$ which in one dimension may be simplified to: $$ F = -\frac{dU}{dx} .$$ My question is: how is it derived, and why do we use a negative sign in the formula? Is this by definition or is there some other reason?
If the particle moves from the point $x$ to $x+dx$, and assume $dx\gt 0$ for simplicity, then its potential energy increases by $$ dU = \frac{dU}{dx}dx $$ Well, it increases if $dU$ is positive and decreases if $dU$ is negative. So far I have only used the definition of the derivative – pure mathematics. However, the total energy is conserved. The sum of the kinetic energy and the potential energy $$ E = T + U = {\rm const} $$ is constant. It means that if the potential energy increases, the kinetic energy decreases, and vice versa. However, an increasing kinetic energy is exactly the situation when the force $F$ is positive (directed in the same direction as the speed or $dx$). In other words, the equation $$ dU = \frac{dU}{dx} dx $$ may be rewritten as $$ dT = -\frac{dU}{dx} dx $$ because $T$ is effectively $-U$, up to the constant whose differential is zero, but because the kinetic energy increases if the force and $dx$ have the same sign i.e. $$ dT = F\cdot dx $$ (pushing a right-moving particle by a right-directed force accelerates the particle; the expression above is the infinitesimal work), we may compare the two equations and see that $$ F = -\frac{dU}{dx} .$$ So the sign effectively arises from the "anticorrelation" of the kinetic and potential energy (along with the convention that all the terms are included in the total energy with the plus sign; the convention that the kinetic energy is positive, and so on).
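The sign can be checked with a concrete potential: take the harmonic potential $U(x)=\frac12 kx^2$ (chosen here purely for illustration), impose $m\ddot x = -dU/dx = -kx$, and the total energy is conserved; with the opposite sign it would not be. A short SymPy sketch:

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

# Harmonic potential U(x) = k x^2 / 2, so F = -dU/dx = -k x.
U = k * x(t)**2 / 2
T = m * sp.diff(x(t), t)**2 / 2
E = T + U                      # total energy

dEdt = sp.diff(E, t)
# Impose Newton's second law with F = -dU/dx:  m x'' = -k x
dEdt_on_shell = dEdt.subs(sp.diff(x(t), t, 2), -k * x(t) / m)
print(sp.simplify(dEdt_on_shell))   # 0  -> total energy is conserved
```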
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Advantages/Disadvantages of "hanging off" a motorcycle when leaning The closest question I could find with regards to this subject was this one: Countersteering a motorcycle However, it does not address the specific physics of what I would like to know. There are 3 ways to lean when turning a motorcycle: * *Upper body remains upright while the bike leans. *Whole body remains aligned with bike. *Most of the body "hangs off" the side leaning in. I'm trying not to make any assumptions to allow for detailed and proper answers addressing issues I may not have considered; hopefully, without being too generic. So to summarize, I would like to know whether the first 2 items are sufficient for all conditions or whether the 3rd has some physical properties necessary in certain conditions.
The important thing about leaning in is that it puts you closer to the road so you don't have so far to fall when you exceed the stickiness of your tires.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/133766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A light so strong it has a shadow I have recently taken an interest in shadows. I know that in order for a shadow to exist, you must have a solid in the way of the light. My hypothesis is that there can be a light so strong, like a laser beam, that it acts like a solid in the sense that it doesn't let light pass through... is this plausible?
Yes and no. Photons don't interact in free space. So a beam of light can't block another beam of light in vacuum. Photons can interact due to the nonlinearity of the medium. So it's plausible to block another beam of light if you have the right mediators. It's however not the light itself becoming a solid. See, for example, electromagnetically induced opacity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If everything is relative to each other in this universe, why do we keep the Sun as the reference point? And why do we study the solar system and the universe relative to it, and not relative to the Earth?
When you're trying to understand the mechanics of a system it's usually convenient to choose coordinates that reflect the symmetry of the system. The solar system is roughly centrally symmetric because the Sun is by far the largest mass in it, and the coordinates that reflect this symmetry are polar coordinates with the Sun at the centre. For example, in these coordinates, if the Earth were the only object apart from the Sun, the Earth's orbit would be (nearly) an ellipse. The presence of the other planets (mainly Jupiter) perturbs the Earth's orbit, but we can handle this by perturbation theory starting with the elliptical orbit and adding on the perturbations caused by the other planets. So taking the Sun as a reference point is a reflection of the symmetry of the Solar system. As noted in other answers, if we're describing the galaxy the Sun is no longer the best place to set the origin of our coordinate system, and we'd use polar coordinates centred on the centre of symmetry of the galaxy. Likewise to describe a galaxy cluster we'd choose the origin to be the centre of mass of the cluster. At the very largest scales the universe is isotropic and homogeneous, so it doesn't matter where we place the origin.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
How to prove the Levi-Civita contraction? I want to prove the following relation \begin{align} \epsilon_{ijk}\epsilon^{pqk} = \delta_{i}^{p}\delta_{j}^{q}-\delta_{i}^{q}\delta_{j}^{p} \end{align} I tried expanding the sum \begin{align} \epsilon_{ijk}\epsilon^{pqk} &= \epsilon_{ij1}\epsilon^{pq1} + \epsilon_{ij2}\epsilon^{pq2} + \epsilon_{ij3}\epsilon^{pq3} \end{align} I made the assumption that $\epsilon_{ij1} = \delta_{2i}\delta_{3j} - \delta_{3i}\delta_{2j}$, then I tried to argue the following using cyclical permutations \begin{align} \epsilon_{ijk}\epsilon^{pqk} &= (\delta_{2i}\delta_{3j}-\delta_{3i}\delta_{2j})(\delta^{2p}\delta^{3q}-\delta^{3p}\delta^{2q}) \\&+ (\delta_{3i}\delta_{1j}-\delta_{1i}\delta_{3j})(\delta^{3p}\delta^{1q}-\delta^{1p}\delta^{3q}) \\&+ (\delta_{1i}\delta_{2j}-\delta_{2i}\delta_{1j})(\delta^{1p}\delta^{2q}-\delta^{2p}\delta^{1q}) \end{align} and then I realized that this was getting long and messy and I lost my way. How does one prove the Levi-Civita contraction?
The product $\epsilon_{ijk}\epsilon^{pqr}$ has certain symmetry properties. They are the same properties as the determinant $$\begin{vmatrix} \delta_i^p & \delta_i^q & \delta_i^r\\ \delta_j^p & \delta_j^q & \delta_j^r \\ \delta_k^p & \delta_k^q & \delta_k^r \end{vmatrix} $$ It's a rank-6 tensor; it changes sign under exchange within $ijk$ and $pqr$ (swapping a row or column). It is equal to 1, -1, 0 under the same conditions that the Levi-Civita product is, e.g. a repeated index within $ijk$ or $pqr$ makes two rows/columns equal, so the determinant is 0. They are the same thing. From here, just contract via $\delta_r^k$ and expand the determinant.
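The identity can also be verified by brute force, which is a useful sanity check alongside the determinant argument (a NumPy sketch):

```python
import numpy as np
from itertools import permutations

# Build the 3D Levi-Civita symbol explicitly.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    i, j, k = p
    # Sign of the permutation = determinant of the permuted identity matrix.
    sign = np.sign(np.linalg.det(np.eye(3)[list(p)]))
    eps[i, j, k] = sign

delta = np.eye(3)

lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
print(np.allclose(lhs, rhs))   # True
```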
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Is real voltage always the real part of complex voltage? If I have a complex voltage $V_z$, is the real voltage $V$ (i.e. the voltage used in the normal Ohm's law and the voltage we normally talk about) always given by $V=Re(V_z)$? And if it is not the case, how do we find $V$ from $V_z$? Does the same apply to current?
A voltage or current given as a complex constant is a phasor. A voltage given as the complex constant $V_z$ represents the real voltage $$V(t) = \operatorname{Re} \left( V_z e^{i\omega t} \right)\ \ ,$$ where $\omega$ is the voltage's angular frequency and $t$ is time. Currents represented as phasors work the same way.
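A minimal Python illustration of turning a phasor back into the real time-domain voltage (the phasor value and the 50 Hz frequency below are arbitrary examples):

```python
import numpy as np

Vz = 3.0 * np.exp(1j * np.pi / 4)     # example phasor: amplitude 3 V, phase 45 degrees
omega = 2 * np.pi * 50.0              # example angular frequency (50 Hz mains)

t = np.linspace(0, 0.04, 5)           # a few sample times over two periods
V = np.real(Vz * np.exp(1j * omega * t))
print(V)                              # instantaneous voltage 3*cos(omega*t + pi/4)
```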
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do uncharged particles not feel the Lorentz force? Please do not answer with the formula $ \vec F = q\left( \vec E + \vec v \times \vec B \right) $. Edit after an answer which is circular reasoning. Let me explain this question with an example. Imagine you know nothing about car traffic and you are standing at a traffic-light junction. What law could you formulate? The green light moves the cars. In my question the moving charged particles are the cars, and the traffic light is the magnetic field. A running horse does not stop on red, and the deeper answer is that the driver accelerates the car when he sees green. So what do charged particles "see" that uncharged particles don't "see"? Why does a charged particle that is not moving relative to a magnetic field not feel this force?
The Lorentz force is by definition the force acting on a charged particle due to electric and magnetic fields. Therefore, if the particle has no charge, then any forces on it, by definition, cannot be Lorentz forces. Thus, it is easy to say that uncharged particles do not feel the Lorentz force because it is only defined as the Lorentz force when acting on charged particles. That does not mean that uncharged particles do feel forces from electric and magnetic fields (that would be an invalid interpretation). It means that were they to feel such forces, we would call these forces by a different name.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Difference between heat capacity and entropy? Heat capacity $C$ of an object is the proportionality constant between the heat $Q$ that the object absorbs or loses & the resulting temperature change $\delta T$ of the object. Entropy change is the amount of energy dispersed reversibly at a specific temperature. But they have the same unit joule/kelvin, like work & energy. My intuition says these two are different, as one is concerned with a temperature change and the other only with a specific temperature. But I cannot figure out any differences. What are the differences between heat capacity and entropy?
If you consider a constant volume transformation, the corresponding specific heat will be defined as: $C_v(T) \equiv \left( \frac{\partial U}{\partial T}\right)_{N,V}$ Now, it is not forbidden to use Leibniz rule for the decomposition of partial derivatives and for instance: $\left( \frac{\partial U}{\partial T}\right)_{N,V} = \left( \frac{\partial U}{\partial S}\right)_{N,V} \cdot \left( \frac{\partial S}{\partial T}\right)_{N,V} = T \left( \frac{\partial S}{\partial T}\right)_{N,V}$ Which means that $C_v(T) = T \left( \frac{\partial S}{\partial T}\right)_{N,V}$ Hence, $C_v$ and $S$ are definitely two different things. In particular, the specific heat contains some (partial) information about the entropy of the system (and its possible variation under some constraints) but not all of it. Hence in term of heat exchanged, we know that: $\delta Q = TdS$ upon expanding $dS$ as total differential, we thus have one possible reading of the heat exchanged (at fixed number of particles): $\delta Q = T \left( \frac{\partial S}{\partial T}\right)_{N,V} dT + T \left( \frac{\partial S}{\partial V}\right)_{N,T} dV = C_v(T) dT + T \left( \frac{\partial S}{\partial V}\right)_{N,T} dV $ The second term $T \left( \frac{\partial S}{\partial V}\right)_{N,T}$ is what any specific heat function (regardless whether we look at constant volume or pressure) will always miss and is ultimately related to the thermal expansion properties of the material.
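As a concrete sanity check of $C_v(T) = T \left( \frac{\partial S}{\partial T}\right)_{N,V}$, take the entropy of a monatomic ideal gas with all $T$- and $V$-independent pieces lumped into a constant $S_0$ (this specific form is an illustrative assumption, not part of the answer above); the relation immediately returns the familiar $C_v = \frac{3}{2}Nk$:

```python
import sympy as sp

T, V, N, k, S0 = sp.symbols('T V N k S0', positive=True)

# Monatomic ideal gas entropy, with all T- and V-independent pieces lumped in S0.
S = N * k * (sp.log(V) + sp.Rational(3, 2) * sp.log(T)) + S0

Cv = T * sp.diff(S, T)        # C_v = T (dS/dT) at fixed N, V
print(sp.simplify(Cv))        # 3*N*k/2
```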
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 6, "answer_id": 4 }
How would the universe change? How would the universe be modified if protons (as we know them) had negative charge and electrons (as we know them) had positive charge?
As the comments say, it would do nothing if you change all negative charges to positive and vice versa for all known particles. You actually have a real physical example: antimatter, which in most theories behaves just like standard matter, though there might be some asymmetries when you include all particles (it depends on the specific modification of the Standard Model). Now, if you only change the charge of the electron and the proton, and nothing else, the answer is not so clear: you would need a radically new theory because you will no longer have conservation of charge. The new behavior is not theoretically predictable a priori; only experiments could tell what the new physics would be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there any evidence that matter and antimatter continuously appear and disappear on the edge of a black hole? I heard Stephen Hawking got a Nobel prize for this; someone said there was no evidence for it, which I find quite strange since he got an award for it.
According to the Hawking radiation Wikipedia article, there was one experiment in 2010 which the experimenters claimed showed evidence of Hawking radiation, but that claim is in doubt, and there hasn't been any other experimental evidence of Hawking radiation. Stephen Hawking has received a number of awards and honors, but the Nobel Prize is not among them.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Moduli spaces in string theory vs. soliton theory In both string theory and soliton theory, moduli spaces are frequently used. As far as I know, for soliton theory, moduli spaces are something like collective coordinates for solitons, and for string theory, the moduli space is the space of all metrics divided by all conformal rescalings and diffeomorphisms. It seems like these two definitions(?) of moduli spaces are quite different, but the same terminology is used in both cases. I also learned that the name 'moduli spaces' comes from abstract geometry, but I don't know if that's any help here. My question is the following: Could anyone provide an intuitive connection between the two uses of moduli spaces, or highlight the differences?
This is a situation where knowing the history of the terminology can be helpful. The QFT/string theory terminology comes from algebraic geometry, where the term moduli space is used for any space whose points correspond to some kind of geometric object. The projective space $\mathbb{P}(V)$, for example, is the moduli space of lines in the vector space $V$. Likewise, a moduli space of instantons is the space of solutions to a set of instanton equations. And the moduli space of complex curves is what you end up integrating over in perturbative string theory after accounting for the gauge symmetries acting on the worldsheet metric. The word 'modulus' (plural 'moduli') just means 'parameter'. Moduli spaces were originally thought of as spaces of parameters, rather than as spaces of geometric objects; mathematicians were interested in how the various ways of parameterizing geometric objects were related and eventually realized these parameters were coordinates on a space. String theorists have resurrected this old terminology by using the term 'moduli field' to refer to a field which parametrizes a moduli space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Projectiles Launched at an Angle with unspecified Initial Velocity I'm attempting to do my Physics homework, and I did the first one right, but that problem gave me the initial velocity. This problem gives me only the angle relative to the horizontal and the distance it covers. Can anyone help me figure out where to start? I've tried, but I can't find any formula that lets me find the initial velocity without having the time, or vice versa. Any help would be much appreciated. Here's the problem in full: A golfer hits a golf ball at an angle of 25.0° to the ground. If the golf ball covers a horizontal distance of 301.5 meters, what is the ball's maximum height? (Hint: At the top of its flight, the ball's vertical velocity component will be zero.) I realize that the vertical velocity component has something to do with it, but I can't figure out where that would fit in.
In projectile motion the horizontal component of the velocity stays the same throughout the flight; only the vertical component changes. Resolve the given launch velocity $u$ into components $u_x = u\cos\theta$ and $u_y = u\sin\theta$, where $\theta$ is the angle between the velocity vector and the horizontal. The range is $R = u_x T$ (with $R$ the horizontal range and $T$ the time of flight), which gives $$R=\frac{u^2\sin(2\theta)}{g}.\tag{1}$$ You can derive the expression for the maximum height $H$ using the kinematic relation $v_y^2 = u_y^2 - 2gH$ (the "third equation of motion") with $v_y = 0$ at the top of the flight: $$H=\frac{u^2\sin^2\theta}{2g}.\tag{2}$$ Combining equations (1) and (2), we get $$H=\frac{R\tan\theta}{4}.$$ You can use this relation to find the maximum height of the projectile whenever the range and the angle are given.
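As a quick numerical check (a sketch, not part of the original answer; it assumes standard gravity $g = 9.81\ \rm m/s^2$), the golf-ball numbers from the question give a maximum height of about 35 m:

```python
import math

g = 9.81                      # m/s^2, assumed standard gravity
R = 301.5                     # m, horizontal range given in the question
theta = math.radians(25.0)    # launch angle

u = math.sqrt(R * g / math.sin(2 * theta))   # launch speed from R = u^2 sin(2 theta) / g
H = u**2 * math.sin(theta)**2 / (2 * g)      # maximum height from equation (2)

print(round(u, 1), "m/s")                      # ~62.1 m/s
print(round(H, 1), "m")                        # ~35.1 m
print(round(R * math.tan(theta) / 4, 1), "m")  # same value via H = R tan(theta) / 4
```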
{ "language": "en", "url": "https://physics.stackexchange.com/questions/134811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is going on in front of and behind a fan? Why is it that when you drop paper behind a fan, it drops, and is not blown/sucked into the fan, whereas if you drop paper in front of a fan, it is blown away?
Picture a standard fan with arrows indicating the air flow. The fan works by pulling air in and then making it move faster. The air flow behind the fan is slow moving and drawn in from a wide region (from above and below the fan blades as well as from directly behind), whereas the air flow in front of the fan is fast moving and narrow (which follows from the conservation of mass flow).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 3, "answer_id": 0 }
Is there a classical analog to quantum mechanical tunneling? In comments to a Phys.SE question, it has been written: 'Tunneling' is perfectly real, even in classical physics. [...] For sufficiently large temperatures this can put the system above a hump in its potential energy. and the only difference between the classical case and the quantum mechanical one is that classical physics is a random walk in real time, while QM is a random walk in imaginary time. I understand that in a system of particles with finite temperature some particles can overcome a potential barrier. That's how I interpret the first statement. I don't understand the business of "random walk in imaginary time". Can someone explain? Update What I was originally looking for was 1.) classical system that can transport mass through a forbidden region and 2.) explanation of "random walk in imaginary time". So far, I don't see anything for question 1.), but I think I'll grok 2.) if I invest some time and energy.
Evanescent waves are the mechanism behind both quantum tunneling and frustrated total internal reflection in @SteveB's answer. Evanescent waves and frustrated total internal reflection are not limited to light, but can occur in any phenomenon governed by the wave equation, including sound and water waves.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 7, "answer_id": 5 }
What is the current in a circuit with two voltage sources in series? What happens? We have 2 voltage sources and 2 currents. When I2 and I3 meet (they flow in opposite directions), what happens? I tried writing down the voltage relations, but I'm stuck because I don't know which current I should work with (to determine the positive sides): I2 or I3?
Well, a net current results. Write the loop equation for the current (by Kirchhoff's voltage law, the net current is the difference of the two source voltages divided by the total series resistance of the loop) and consider all possible scenarios, keeping in mind that in writing this equation I assume the current goes clockwise (i.e. I assume the left voltage source is at a higher potential, and if this isn't the case, my current will simply come out negative and it will be flowing counter-clockwise).

* If the left source voltage is higher than the right one: my assumption is correct and the net current flows in the clockwise direction (going toward the right voltage source, entering its positive terminal).
* If the two source voltages are equal: the net current is zero.
* If the left source voltage is lower than the right one: my assumption is wrong and the net current flows in the counter-clockwise direction.

Notes:

* Please stick to the scientific nomenclature. It is important because we have to be on the same page and talk the same language. What do you mean by "intensity"? Is it the electric current intensity, the electric field intensity, or what? You can call it the current, current intensity, the electric current, or the electric current intensity.
* In electrical engineering we don't usually call a voltage source with this symbol (the symbol you have drawn) a generator. Yes, it is correct that it generates a voltage, but this symbol is used for DC voltages (static E-field potentials) of low values, for example batteries, whereas a generator is usually used for AC signals or DC generators of very high values (like what power stations produce), so we call it a voltage source or a battery.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What happens to ice cream when you stir it? I hope this is the appropriate forum for my question. I also considered posting it in the chemistry forum. When I eat ice cream I often stir it into a texture similar to that of soft serve. During the process, the bowl in which the ice cream is kept, tends to become quite cold. Temperature measurements indicate that the temperature of the ice cream increases in the stirring process, so it seems to be the case that the bowl gets cold as a result of heating the ice cream. I am however not quite sure about why the ice cream is heated. Could it simply be that stirring the ice cream constantly brings parts of lower temperature to the surface thus speeding up the heat transfer that would otherwise occur anyway? Thanks in advance!
This is a bit of a soft question (get it?). Intuitively, the ice cream and bowl (and your hand) will move towards a state of equal temperature (second law of thermodynamics). When you stir the ice cream you are doing at least four things:

* you are 'encouraging' the heat to become more uniformly distributed (as you suspected),
* causing the ice cream to come into contact with regions of the bowl from which it has yet to absorb energy,
* causing the ice cream to adopt the same shape as the bowl and thus increase its contact area (and therefore the rate of heat exchange), and
* adding (a tiny amount of) energy by stirring it.

This has a rather noticeable effect on the texture because ice cream contains ice crystals and air bubbles. Edit: Today I stumbled upon a book on The Science of Ice Cream, for those interested.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Is there any physical quantity that does not have uncertainty? I saw this video and I got a thought: Is there any physical quantity that does not have uncertainty? Basic examples are: for length, and for time and energy (so for mass too). I also realized that (based on the video) photons near to each other have an uncertain amount of substance, so the result will be an uncertain luminosity too. And what about electrical charge? Is there some uncertainty for it?
I'm convinced that nothing in this world can be measured without uncertainty. Take the measurement of a current: one has to use an ammeter, which should have as low a resistance as possible, but it still has some ohmic resistance. Take the measurement of an electric potential: one has to use a voltmeter, which should have as high a resistance as possible, but it is not perfect either and its resistance is not infinitely high. In both cases the measurement has some uncertainty. So in the macroscopic world uncertainty is a common thing. Whenever a physicist measures something, the result is written in the form $x \pm y$ (unit). To be a little more sophisticated, one can say that catching the exact moment of the full moon is an impossible thing: whatever you calculate, I can ask you to calculate it with higher precision. At some point we end up with an atomic clock, but this clock has an uncertainty too. What Heisenberg told us is the predictable value of the uncertainty at the atomic and subatomic level. His principle holds as long as it is not possible to manipulate (or measure) particles with smaller quanta (none of which we know of at the moment).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How to get the relation for dependence of anomalous dimension on regularization? Here is the anomalous dimension: $$ \gamma_{\Gamma}(t, g) = \left[\frac{\partial }{\partial t}\ln \left(Z_{\Gamma}(t , g) \right)\right]_{t = 1}, $$ where $Z_{\Gamma}$ is renormalization factor which arises in n-point functions $\Gamma $, $t$ denotes change of renormalization parameter $t = \frac{\mu{'}}{\mu}$. $Z_{\Gamma}$ arises explicitly after making shift of renormalization parameter $\mu$ (for fixed type of renormalization): $$ \Gamma (xt , g) = Z_{\Gamma}^{-1}(t , g) \Gamma (x, \bar{g}(t , g)), \quad x = \frac{k}{\mu}, \quad t = \frac{\mu}{\mu{'}}. $$ Let's change type of regularization (coupling constant will change to $g \to \tilde {g}(g)$. Then n-point function will change as $$ \Gamma \left(\frac{k}{\mu} , g \right) = q(g) \tilde {\Gamma}\left( \frac{k}{\mu} , \tilde {g}(g) \right). $$ How to get from these equations that $\gamma_{\Gamma}$ will change to $$ \tilde{\gamma}_{\Gamma}(\tilde {g}(g)) = \gamma_{\Gamma}(g) - \beta (g)\frac{d\ln (q(g))}{dg} $$ (the definition for $\beta$-function see here)?
I'm not quite sure where the details of the last equations come from, but I think that the step that you are missing is to identify, $$q(g) = \frac{1}{Z_\Gamma(g)} \, .$$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Blowing your own sail? How is this possible? Even if the gif is fake, the Mythbusters did it, and with a large sail it really does move forward. What is the explanation?
As others have said, the skater would move faster if he simply pointed the leafblower behind him, rather than bouncing it off the umbrella. However, there is a real use for this technique. Jet engines normally suck in air from all directions and blow it out of the back in order to move forward. However they are also capable of reverse thrust if fitted with a device to redirect the air towards the front. This is used for braking after landing. http://en.wikipedia.org/wiki/Thrust_reversal Of course this is much less efficient than normal forward thrust. Often, only air from the large fan on the front of the engine is passed through the reverse thrusters. The hot exhaust gas which drives that fan still goes out of the rear (for obvious reasons of material temperature.) This makes the efficiency even worse. Still, as it is only used for a few seconds on landing, this does not matter. And it saves a lot of wear on the wheel brakes. Next time you are on a plane, listen for the brief but strong boost in engine power that occurs immediately after landing. That is the thrust reversers being applied.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 0 }
$\rm Lux$ and $W/m^2$ relationship? I am reading a bit about solar energy, and for my own curiosity, I would really like to know the insolation on my balcony. That could tell me how much a solar panel could produce. Now, I don't have any equipment, but I do have a smartphone, and an app called Light Meter, which tells me the luminous flux per area in the unit lux. Can I in some way calculate W/m^2 from lux? E.g. from the current value of 6000 lux.
Lux is a unit that depends on the sensitivity of the "standard" (i.e. more or less average) human eye, as well as on the power distribution of light within the visible part of the spectrum. The previous answers deal with that well enough. The conversion from irradiance, or flux, in watts per square meter, to apparent magnitude is simpler, since these are both physical quantities independent of the spectral sensitivity of the human eye or of any detector: $$m = -2.5\log_{10} F - 18.98224,$$ where $m$ is the apparent magnitude and $F$ is the flux in $\rm W\,m^{-2}$, with $m$ and $F$ pertaining to the same spectral band. The derivation is fairly simple in the sense that it requires only algebra and careful attention to the units of length involved (parsecs, meters).
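A small script illustrating the conversion (a sketch, not from the original answer): the magnitude formula is the one quoted above, while the lux-to-W/m² step uses an assumed luminous efficacy of roughly 100 lm/W for a daylight-like spectrum, so it is only an order-of-magnitude estimate.

```python
import math

def apparent_magnitude(flux_w_m2):
    """Apparent magnitude from flux in W/m^2, using the relation quoted above."""
    return -2.5 * math.log10(flux_w_m2) - 18.98224

# Rough lux -> W/m^2 conversion; 100 lm/W is an assumed luminous efficacy
# for a daylight-like spectrum, not a value given in the answer.
lux = 6000.0
flux = lux / 100.0                 # W/m^2, order of magnitude only
print(flux, apparent_magnitude(flux))

# Sanity check: ~1361 W/m^2 (the solar constant) gives about -26.8,
# close to the Sun's apparent magnitude of roughly -26.7.
print(apparent_magnitude(1361.0))
```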
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Tensor product notation convention? For a two-particle state, the Dirac ket is written as $$\lvert\textbf{r}_1\rangle \otimes \lvert\textbf{r}_2 \rangle. $$ Then how do we write its bra vector, $$\langle\textbf{r}_1\rvert \otimes \langle\textbf{r}_2\rvert ~~\text{or}~~\langle\textbf{r}_2\rvert \otimes \langle\textbf{r}_1\rvert ~~\text{?} $$ Is there any rule or convention? I'm just asking about the order of the bra vector.
Remember that by definition of the tensor $$(a_1\otimes b_1)(a_2\otimes b_2)=(a_1a_2)\otimes(b_1b_2),$$ and that $\mathbb C\otimes\mathbb C=\mathbb C$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/135914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Difference between high-level and low-level electromagnetic radiation Can someone please explain to me what we mean by 'high-level' or 'low-level' electromagnetic radiation? For example, it is believed that high-level microwave radiation is harmful to humans but the low-level kind is not. What is this 'level' we are talking about? If the frequency and wavelength are the same, how do high-level and low-level radiation differ?
It is believed that high-level microwave radiation is harmful to human ... It is not just believed but well known that extremely intense microwave radiation will cook people. We use microwaves to cook meat, after all, and a good portion of our bodies is in the form of meat. Lesser intensities can cause survivable burns, even lesser intensities might cause cataracts and possibly sterility. Below that, microwave radiation is generally safe. It's non-ionizing. What is this level here we are talking about? That would be the specific absorption rate, which is the rate at which the human body (or some part of the human body) absorbs non-ionizing radiation. The specifics of how this is calculated and the threshold levels vary from country to country.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Rabi oscillations with quantized light: which is the "quantum" effect, collapse, revival or both? In wikipedia http://en.wikipedia.org/wiki/Jaynes%E2%80%93Cummings_model#History it is stated that It was later discovered that the revival of the atomic population inversion after its collapse is a direct consequence of discreteness of field states (photons).[2][3] This is a pure quantum effect that can be described by the JCM but not with the semi-classical theory. A similar claim has been made by my teacher during a lecture: he said that the revival, and not the collapse alone, in Rabi oscillations is a striking feature of the EM field quantization, not explainable with the quantization of the atom alone. I do not understand why, since the corresponding (Rabi) model predicts periodical oscillation in the population, so that even observing just the collapse should be a proof of the field quantization.
Both Rabi oscillations and the revivals are quantum mechanical effects. However, they are consequences of the quantization of two different systems. Rabi oscillations can be explained and derived with a semi-classical theory in which the atomic system has quantized energy levels but the incident light fields are classical. Rabi oscillations do not require quantized fields. Quantum revivals, however, in which the Rabi oscillations decay to zero before rising again, can only be explained with quantized fields via the Jaynes-Cummings model.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why does Energy-Momentum have a special case? I was reading about energy-momentum, and I came across this simplified equation: $$E^2 = (mc^2)^2 + (pc)^2$$ where $m$ is the mass and $p$ is the momentum of the object. The equation is pretty fundamental and nothing seems wrong with it, but I came across a "special" case where it does not apply: if the body's speed $v$ is much less than $c$, then the equation reduces to $E = (mv^2/2) + mc^2$. I find this really odd, because Einstein always wanted to create a theory/equation that applied to every aspect of physics and has no "fudge" factors, so this seems ironic coming from Einstein. Next, why does this not work in every case? Surely an equation should be "universal" and should still work with any values given. Most importantly, why does this not work if the velocity is "much" slower than light? What do they mean by "much slower", and what is the boundary for "much slower"? Regards,
I didn't see anyone mention the practical reason to use an approximation for energy. It is that in most problems you will be computing differences in energy. In that case, for small velocities, you can not only use the approximation ${m v^2\over 2}+m c^2$, but if you are also not converting mass to energy or vice versa, you can drop the $m c^2$ as well, since it will subtract out. Now you have Newton's ${m v^2\over 2}$. This is much more appropriate for most Earthly problems, and you can calculate it more readily without the $m c^2$. If you wanted to compute the energy change from accelerating an object by 20 m/s, and you tried to use $E^2=(m c^2)^2+(p c)^2$, then the change would be out in the 14th decimal place of $E$. That is where approximations and dropping constant terms become not only useful but essential.
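To see just how far down the correction sits, here is a hedged numerical sketch (assuming a 1 kg object; high-precision decimals are used because ordinary floating point barely resolves the difference):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40              # enough digits to resolve the tiny correction
c = Decimal(299792458)              # m/s
m = Decimal(1)                      # kg, assumed test mass
v = Decimal(20)                     # m/s

p = m * v                           # momentum; gamma differs from 1 by only ~1e-15 here
E = ((m * c**2)**2 + (p * c)**2).sqrt()   # full relativistic energy
E0 = m * c**2                             # rest energy

print(E0)                 # ~8.9876e16 J
print(E - E0)             # ~200 J, i.e. essentially (1/2) m v^2
print((E - E0) / E0)      # ~2.2e-15: the change shows up ~15 significant digits down
```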
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 10, "answer_id": 4 }
What is zero impedance in an AC circuit? If a capacitor is connected with an inductor, then because $$Z=\frac{1}{j\omega C}+j\omega L,$$ $Z$ may be zero. Does that mean that when I apply a voltage, the current will be infinitely large? What's more, in transmission line theory, the characteristic impedance can be $\sqrt{L/C}$ when $R=0$ and $G = 0$. Why can capacitors and inductors generate a real impedance?
Essentially, the answer to your question is yes, but your equation is not quite in the general form. Typically, impedance is $$Z=R + jX$$ with $R$ being the resistance and $X$ being the reactance, which is almost the expression you show but without the imaginary unit. Specifically, $$X = \omega L - \frac{1}{\omega C}.$$ What this means is that a component with $Z=0$ would have zero values for both real and imaginary portions; $Z=0 \implies R=X=0$. In such a case, a voltage applied across such a component would lead to infinite current through it, due to Ohm's law. Such devices don't actually exist, but one could approximate one with a large wire; in other words, a short-circuit.
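As a small numerical illustration (component values are assumed, not from the question), the reactance vanishes at the series resonance $\omega_0 = 1/\sqrt{LC}$; with $R=0$ the total impedance there is zero, which is exactly the situation where an ideal voltage source would drive an unbounded current:

```python
import math

L = 10e-3     # H, assumed inductance
C = 100e-9    # F, assumed capacitance

omega0 = 1.0 / math.sqrt(L * C)        # angular frequency where X = wL - 1/(wC) = 0
f0 = omega0 / (2.0 * math.pi)

X = omega0 * L - 1.0 / (omega0 * C)    # ~0 up to floating-point rounding
print(round(f0, 1), "Hz", X, "ohm")
```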
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How do photons "decide"? I was reading that when horizontally polarized light hits a vertical Polaroid all the light is blocked out. But when the Polaroid is off the vertical, some but not all photons "decide" to jump into the new plane of polarization. Could this be a "road less traveled" kind of effect? If a run of two or three photons make the jump then conditions are affected in such a way, that the next photon is less likely to make the jump. Then as one or more photons get blocked, conditions cool down a bit increasing the likelihood that another run of jumps will occur: a mechanism of so called "deciding".
If that happened, we would be able to detect it by looking at correlations between successive photons' "decisions." That is, suppose you represent each pair of consecutive photons (1 and 2, 2 and 3, 3 and 4, etc.) with $+1$ if they both made the same "decision" or $-1$ if one went through the polarizer and the other didn't. Take the average of these numbers for all pairs and call it $C$. If a run of photons making the same decision changed the probability for the following photons, $C$ would be greater than zero. In reality, it comes out to zero, meaning that each photon doesn't change its behavior depending on what the one before it did. So we have clear experimental evidence that this modification of probabilities doesn't happen.
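A Monte Carlo sketch of this test (an illustration, not from the original answer; it assumes the polarizer is set at 45°, so each photon passes with probability $\cos^2 45^\circ = 1/2$):

```python
import random

random.seed(0)
N = 1_000_000
p_pass = 0.5   # assumed: polarizer at 45 degrees, so cos^2(45 deg) = 1/2

# Each photon "decides" independently of all the others
decisions = [1 if random.random() < p_pass else -1 for _ in range(N)]

# +1 if consecutive photons made the same decision, -1 otherwise
pairs = [decisions[i] * decisions[i + 1] for i in range(N - 1)]
C = sum(pairs) / len(pairs)
print(C)   # ~0 up to statistical noise of order 1/sqrt(N): no memory between photons
```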
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does gravity affect magnetism, vice-versa, or do they "ignore" each other? I am suddenly struck by the question of whether gravitation affects magnetism in some way. On the other hand, gravity is a weak force, but magnetism seems to be a strong force, so would magnetism affect gravity? Or do they "ignore" each other, being forces which do not interact? The answer to this is related to this question: If the earth's core were to cool so that it were no longer liquid, no longer rotated, and thus produced no magnetic field, would this do anything to earth's gravity?
The electromagnetic field tensor $F_{\mu\nu}$ which encodes all the information about the electric and magnetic field, certainly contributes to the energy-stress tensor $T_{\mu\nu}$, which appears in the Einstein Field Equations: $$G_{\mu\nu}= 8\pi G T_{\mu\nu}$$ The left hand side of this equation encodes the geometry of spacetime, while the right hand side describes the 'sources' of gravity. Therefore, we can say that magnetism does have an effect on the geometry of spacetime i.e. gravity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 2, "answer_id": 0 }
Special relativity; rocket moving towards a mirror An observer in a rocket moves toward a mirror at speed $v$ relative to the reference frame in which the mirror is stationary - call this frame $S$. A light pulse emitted by the rocket travels toward the mirror and is reflected back to the rocket. As measured by an observer in $S$, the front of the rocket is a distance $d$ from the mirror at the moment the light pulse leaves the rocket. What is the total travel time of the pulse as measured by an observer in (a) the $S$ frame and (b) the front of the rocket? For (a), I'm thinking we can just use the usual formula: $\triangle t =2\frac{d}{c}$ For (b), let $\triangle t'$ be the time as measured by someone in the front of the rocket. By Lorentz: $\triangle t'=\gamma (\triangle t-\frac{v \triangle x}{c^2})=\frac{1}{3} \triangle t$, after some algebra. I can show this if you want. Thus it seems that the observer in the rocket measures a shorter time [would this be the proper time?] I am finding special relativity a bit confusing and just want to see what you guys think of this. This seems right to me, since the moving observer should measure the smallest time.
Be careful on (a), the rocket is moving too and will have moved a distance of $v \Delta t$ by the time the light comes back. So we have: $$2d-v \Delta t=c \Delta t$$ Solving for $\Delta t$: $$\Delta t = \frac{2d}{c+v}$$ For (b), your answer certainly cannot be correct because it is independent of velocity. The easiest way to solve most problems in Special Relativity is by using the invariant interval. I'll use units where c=1 to make the algebra simpler, and we can put the c's back in using dimensional analysis later. We have: $$(\Delta t')^2 - (\Delta x')^2=(\Delta t)^2 - (\Delta x)^2$$ and: $$\Delta t=\frac{2d}{1+v}$$ $$\Delta x=v\Delta t= \frac{2vd}{1+v}$$ In the rocket frame, both events occur at the same location (the front of the rocket) so $\Delta x' =0$. So we have: $$(\Delta t')^2= (\Delta t)^2 - (\Delta x)^2= \left ( \frac{2d}{1+v} \right )^2 - \left ( \frac{2vd}{1+v} \right )^2$$ After a bit of algebra (and dimensional analysis to put the c's back in): $$\Delta t' = \frac{2d}{c} \sqrt{\frac{1-v/c}{1+v/c}}$$
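A quick numerical check of the two results, and of their consistency with the Lorentz transformation quoted in the question (the values v = 0.6c and d = 10⁹ m are assumed for illustration):

```python
import math

c = 299792458.0     # m/s
v = 0.6 * c         # assumed rocket speed
d = 1.0e9           # m, assumed distance to the mirror in frame S

dt  = 2 * d / (c + v)                                   # S-frame travel time
dx  = v * dt                                            # rocket displacement in S
dtp = (2 * d / c) * math.sqrt((1 - v/c) / (1 + v/c))    # rocket-frame travel time

gamma = 1.0 / math.sqrt(1 - (v/c)**2)
print(dt)                                  # ~4.17 s
print(dtp, gamma * (dt - v * dx / c**2))   # both ~3.34 s: the Lorentz transform agrees
```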
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the ground state closest to the uncertainty relation? For simplicity, suppose we are only talking about discrete energy levels, i.e., the bound-state case. The energy levels are $E_1, E_2\cdots$, and the corresponding wave functions are $\psi_1, \psi_2 \cdots$. My question is, is it true that $\sigma_x \sigma_p$ is minimum when $n=1$ for the eigenstates? I came across this question because I found that the harmonic oscillator and infinite potential well problems satisfy this statement, so I want to know if this is the general case. I think this may be true because the ground-state wave function has no nodes. Thus $\sigma_p$ may be small compared to the other eigenstates.
The ground state of a system is by definition the state of minimal energy, i.e., the system is located at the minimum point of the potential. Now, if we were in classical mechanics, this would mean that the system is at a stable fixed point. Of course in QM that is not possible, since we have to satisfy the Heisenberg uncertainty. And so, I would say yes, in general. There might be some configurations where we are in a false minimum (or a local minimum) that also satisfies the uncertainty minimum; alternatively, there might be a need to transform coordinates and redefine displacement and momentum. But if we are working with canonical conjugates, it is supposed to be true.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/136989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Alternative liquid for Galileo thermometer So a friend of mine broke my Galileo thermometer recently. The glass tube and the liquid inside were lost, but the bulbs survived. I've cleaned out an old tall glass candle, and tried filling it with water. Even when the water is steaming hot the bulbs still float, so the liquid in there was definitely not water. It smelled somewhat like gear oil, so I'm guessing it might have been an oil, but I'm trying to think of low density (clear) liquids that I could acquire to fill the tube with. Preferably water soluble, so I can calibrate it by adding water until it's accurate. Suggestions? (No, I am not buying a new one. I am an intelligent human being, I have been presented a challenge, and I will use science to overcome it—not mere money.)
You should be able to start with methylated spirits - ethanol with a bit of methanol mixed in to make it toxic and cheap (or ethanol if you can get your hands on it - but it will be expensive because of excise taxes unless you can prove "scientific exemption".) It is much lighter than water and highly miscible with it. Once calibrated you do need to seal it in properly or the fumes will get to you. WARNING - this is toxic and flammable stuff. Read safety data sheet for proper handling http://www.jmloveridge.com/cosh/Industrial%20Methylated%20Spirit%2095.pdf I have broken one of these myself in the past - never thought to revive it. You inspire me...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Will 5 pizzas in the same Hot Bag stay warmer than 5 pizzas in 5 separate Hot Bags? For example, say I am delivering 5 pepperoni pizzas to 5 different addresses. In one scenario, I Keep all 5 in the same insulated Hot Bag, I carry that bag to the door, and I quickly remove one of the pizzas from the bag to give to the customer. In the other scenario, I use a separate Hot Bag for each pizza. This would mean that only one bag would need to be opened while the other 4 bags could stay closed. Which method would keep the pizzas warmer?
If they were ideal insulators, the 5 separate bags would be better because you wouldn't have repeated heat loss from opening the same bag 5 times. Primarily this heat loss would be in the escape of hot air, exchanged for colder outside air. If the hot bags were extremely poor insulators - effectively as if you weren't using any - then you'd want the pizzas stacked on top of one another to minimize the exposed surface area. You can combine the advantages by just transporting the 5 hot bags stacked on top of each other.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
How do we estimate $10^{23}$ stars in the observable universe? Now, I read somewhere, that there are $10^{23}$ stars in the observable universe. How did scientists estimate this?
Have a look at this article. It gives the number as $10^{24}$ rather than $10^{23}$, but it's such a vague estimate that a factor of ten is within the expected error. The number is the number of stars in the observable universe i.e. within 13.7 billion light years of Earth at the time the light we see today was emitted. Note that visible means visible to a sufficiently high powered telescope. The number of stars you and I can see by looking up at night is actually only about 5,000. The number of stars is obtained by multiplying the estimated number of galaxies (170 billion) by the average number of stars per galaxy (around a trillion). But both figures are such rough estimates that even a factor of ten is probably too small an estimate of the error.
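As a quick consistency check using the figures quoted above (170 billion galaxies with roughly a trillion stars each):
$$N_{\text{stars}} \approx N_{\text{galaxies}} \times \bar n_{\text{stars per galaxy}} \approx \left(1.7\times10^{11}\right)\times\left(10^{12}\right) \approx 1.7\times10^{23},$$
which indeed lands between the $10^{23}$ and $10^{24}$ figures mentioned.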
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
What causes the random movement of particles inside a conductor? I'm reading about currents in electricity right now, and it was mentioned that even if there's no electric field inside a conductor, charged particles inside are still undergoing random movement. I wanted to know what forces cause this random movement to occur? Or if it's not any force which causes this mysterious movement, then what is it? Thanks.
Does your conductor have any resistivity? In that case the fluctuation-dissipation theorem applies. In the case that your conductor is a perfect superconductor, it would still couple inductively to the electromagnetic field around it, which, per 3rd law of thermodynamics must have a non-zero temperature. To remove those fluctuations, the total field volume would have to be zero, i.e. your conductor would have to have zero loop area, which is obviously unphysical.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Where does the $\partial \vec{E}/\partial t$ term from Maxwell's equation go in Ampere's Law? One of Maxwell's Equations (ME) is: $$\nabla\times\vec B = \mu_0\vec J+\epsilon_0\mu_0 \frac{\partial \vec E}{\partial t}.$$ While Ampere's Law (AL) is: $$\nabla\times\vec B = \mu_0\vec J.$$ Griffiths E&M book derives that form of AL using the Biot-Savart Law and applying Stokes' theorem. Intuitively, it makes sense to me: a steady current is going to give rise to a magnetic field around it. But then I have trouble reconciling it with the ME I posted above -- while AL seems to say that for a given steady current $J$ you get a straightforward $B$, the above ME seems to say that for a given $J$ you could get many combinations of $B$ and $E$. How is this reconciled?
It is mentioned in the book Introduction to Electrodynamics that Ampère could not find the second term because such a thing is hard to detect in the laboratory. But now, as we all know (thanks to Maxwell), a changing electric field produces a magnetic field. If you take the Laplace transform of the second term in the Maxwell equation, you will find that the term is directly proportional to frequency. This equation is used when you wish to analyse the interface between two media, or a surface inside a dielectric. Air is not a conductor, so there is no conduction current and hence no $\vec J$ in free space. Still, we have changing electric and magnetic fields (electromagnetic waves) in space, which is what makes communication possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Estimating the force needed to increase height of a mountain? How would you estimate the force required by a tectonic plate to make the height of a mountain increase when it pushes against another? I've used a method to try and do it for Mt Everest and have ended up with $8\times10^{-7}\ \rm N$ required to increase its height, which I don't think sounds reasonable. By the way, I was doing this for fun (it doesn't have any real-life value, I don't think).
When tectonic plates collide, the crust can become thicker at the edge of the collision through the folding and faulting of crustal rocks. Because crust has a lower density than the asthenosphere and mantle, the region of thicker crust can rise due to buoyancy forces until it reaches isostatic equilibrium. This model of orogeny is referred to in geology as isostasy. Therefore the "force acting to increase the height of a mountain" you wish to estimate is the net buoyant force on the mountain/root system, and it goes to zero as equilibrium is approached. Write down the equation for static equilibrium, $$[\text{mountain weight force}] - [\text{buoyant force}] = 0 \quad\Rightarrow\quad m(h+H)Ag - MHAg = 0,$$ where $m$ and $M$ are the mass densities of the crust and mantle respectively, $h$ and $H$ are the mountain's height and the mountain-root's depth respectively, $A$ is the horizontal cross-sectional area of the mountain, and $g$ is the gravitational acceleration. Think about this and you'll see how it may explain why erosion of mountain-tops has caused the uplift of the Cascades, whereas the eruption of seafloor basalts has led to the sinking of the Hawaiian islands.
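For a rough worked example (the densities and height below are assumed textbook values, not numbers from the answer), solving the equilibrium condition for the root depth shows that high mountains need crustal roots tens of kilometres deep; at equilibrium the buoyant force exactly balances the weight, which is why the net uplifting force tends to zero.

```python
# Assumed representative values; the answer leaves m, M, h symbolic
rho_crust  = 2800.0   # kg/m^3
rho_mantle = 3300.0   # kg/m^3
h = 8.8               # km, roughly Everest's height above the surrounding surface

# From rho_crust * (h + H) = rho_mantle * H  =>  H = rho_crust * h / (rho_mantle - rho_crust)
H = rho_crust * h / (rho_mantle - rho_crust)
print(round(H, 1), "km")   # ~49 km of crustal "root"
```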
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there only radial motion in the Hydrogen ground state? The ground state of the Hydrogen atom is spherically symmetric. In other words, the wave function Psi depends only on the distance r of the electron from the nucleus. As a consequence all derivatives of Psi with respect to angles theta and phi yield zero. Does this imply that the average kinetic energy in the ground state [which can be calculated without difficulty from the wave function] is determined exclusively by the radial motion of the electron? If so, that would be a rather odd result. Let us say the electron is at position (x, 0, 0). Then the kinetic energy would be the result of motion either away from the nucleus (direction +x) or towards the nucleus (-x), but not from motion perpendicular to the x-axis. So in essence the motion of the electron would be 1-dimensional, like a pendulum.
On average there is no motion at all, i.e., there is no systematic displacements. But there are "fluctuations" with non zero squares averaged. Classically speaking, it is like a Brownian motion in a limited space. But let us set aside a classical picture. Apart from momentum representation of the wave function, there is a simple proof that the electron may have unlimited velocity in the ground state. Let us consider a scattering process in the first Born approximation. The projectile is heavy (proton, for example). From kinematic reasoning, a still electron cannot scatter a heavy proton to large angles, there is a limiting angle determined with the ratio $m_e/m_p$. However the scattering cross section is not zero for larger angles. Although small, the cross section is never zero. It is because the electron may have high instant velocity at the moment of scattering and this may push a heavy projectile back. The latter effect is described with the atomic form-factor $|F(\mathbf{q})|>0 $ for any scattering angle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
What is the difference between diffraction and interference of light? I know these two phenomena, but I would like a somewhat deeper explanation. What type of fringes are obtained in each of these phenomena?
Diffraction is the spreading of a beam of light as it goes through an aperture or is emitted from a finite-area source. It is due to the fact that the beam of light has a k-vector spectrum of finite width; you can think of it as a bunch of photons having a spread of momenta. It is related to the uncertainty principle, because an aperture confines light in the spatial-position domain and therefore broadens its spatial-frequency domain. A similar effect to diffraction, which happens in space, is dispersion, which happens in time: it makes a light pulse spread in the temporal-position domain during propagation, because the pulse is made up of multiple frequencies (in the temporal-frequency domain). However, the causes of these two effects are a bit different: diffraction happens because the directions of the k-vector components differ, while dispersion happens because the phase velocity of each frequency differs. Nevertheless, the equations describing the two phenomena are very similar and lead, for example, to the notion of solitons, which occur both in space and in time through the balancing of dispersion/diffraction with nonlinearity. Interference, on the other hand, is a phenomenon resulting from a superposition of waves. They can have different amplitudes, frequencies or phases, and this will influence what the superposed final wave (the sum of all the interfering wave amplitudes) looks like. This is seen not only in the two-slit experiment in space, but can also be used to explain the formation of ultrashort pulses through constructive interference of waves at some points in time and destructive interference at others. Two such pulses in close proximity (where "close" depends on the spectrometer resolution) will also create interference fringes in the measured spectrum. Interfering waves don't need to be spherical or to originate from the same source: if at any point of spacetime some waves of some kind, coming from wherever, meet, they will interfere in some way or another depending on their parameters.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/137860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 7, "answer_id": 3 }
Superpositioning of fire I once noticed that when you light two candles and move one of the candles towards the other, you will see that the total flame height (let's call it $f_3$) is higher than the sum of the single flames. Candle 1: flame height $f_1$. Candle 2: flame height $f_2$. Candles 1+2 (the two flames touch each other): height $f_3>f_1+f_2$. I think this has something to do with the superposition principle in physics, but I am not able to fully explain it with that; do you have any ideas?
When placing the candles next to each other you effectively create a single "fire". We know, from work by Thomas et al (1961) that the flame length is: $$l/D=f\left(\frac{\dot m^2}{\rho^2gD^5\beta\Delta T}\right)$$ Where $l$ is flame length, $D$ is diameter of fuel, $\dot m$ is fuel mass lose/flow rate, $\rho$ is fuel density, $g$ is acceleration due to gravity, $\beta$ is expansion coefficient of air and $\Delta T$ is average excess temperature of flame. Therefore if you increase the mass loss rate the flame length will increase. In your case you will be increasing the fuel mass loss rate as you're increasing the burning area and increasing the heat feedback to the fuel surface and hence the vapourisation of the wax. Simple empirical relationships have been developed for pool fires for a range of fuels. For example by McCaffrey (1995)$^1$: $$l/D=0.23Q_c^{2/5}-1.02D$$ Where $Q_c$ is the convective heat release rate. 1: McCaffrey, B., Beyler, C.L. and Heskestad, G., 1995. SFPE handbook of fire protection engineering. Flame Height.” National Fire Protection Association: Quincy, MA.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What would happen if an accelerated particle collided with a person? What would happen if an accelerated particle (like they create in the LHC) hit a person standing in its path? Would the person die? Would the particle rip a hole? Would the particle leave such a tiny wound that it would heal right away? Something else?
Amazingly this actually happened to a Russian scientist called Anatoli Bugorski (WARNING: this is pretty gruesome). The beam basically just killed all the tissue it passed through. The symptoms were the relatively mundane ones expected from tissue death. The LHC has a much, much greater energy than the one that struck Bugorski, so it would cause a lot more heating and presumably burning of neighbouring tissue. How much extra damage there would be depends on how rapidly the beam is absorbed, and I must admit I don't know this. The total LHC beam energy is 362 MJ, which is enough to turn 150kg of water at body temperature to steam. If any significant fraction of this was absorbed by your head the resulting explosion would probably not leave much of your head behind.
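A quick sanity check of the 150 kg figure, using assumed textbook values for water (not numbers from the original answer):

```python
c_p = 4186.0      # J/(kg*K), specific heat of liquid water (assumed)
L_v = 2.26e6      # J/kg, latent heat of vaporization (assumed)
E_beam = 362e6    # J, total LHC beam energy quoted above

energy_per_kg = c_p * (100.0 - 37.0) + L_v     # heat from body temperature, then boil
print(round(E_beam / energy_per_kg, 1), "kg")  # ~143 kg, consistent with ~150 kg
```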
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 4, "answer_id": 1 }
Finding 3-Sphere Christoffel connection coefficients using variational calculus, Sean Carrol problem I have A 3-Sphere with coordinates $x^{\mu} = (\psi,\theta,\phi)$ and the following metric: \begin{equation} ds^2 = d\psi^2 + \text{sin}^2\psi(d\theta^2 + \text{sin}^2\theta d\phi^2) \end{equation} I know how to get the connection coefficients using the metric derivatives etc, but I'm looking for a way to do this through calculus of variations. A problem in Sean Carroll (Exercises 3.11 question 8 a) Introduction to General Relativity suggested varying the following integral to find the connection coefficients: \begin{equation} I = \frac{1}{2}\int g_{\mu \nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{v}}{d\tau} d\tau \end{equation} So I have a lagrangian: \begin{equation} \mathcal{L} = \dot{\psi}^2 + (\text{sin}^2\psi) \dot{\theta}^2 + (\text{sin}^2\psi)(\text{sin}^2\theta)\dot{\phi}^2 \end{equation} Which I put into the Euler-Lagrange equation: \begin{equation} \frac{\partial}{\partial \tau}\left(\frac{\partial \mathcal{L}}{\partial \dot{x}^\mu}\right) - \frac{\partial \mathcal{L}}{\partial x^\mu} = 0 \end{equation} Am I on the right track here? What is the strategy for relating this back to the connection symbols? The literature isn't too clear and I'm struggling to make the connection.
I'll show you how to do this for the 2-plane in polar coordinates. Once you work this out, it should be doable to work it out in your case. You start with the metric $$ds^{2} = dr^{2} + r^{2}d\theta^{2}$$ Since the geodesics of this metric (i.e., straight lines) minimizes distance, we know that the geodesics are an extremum of: $$I = \frac{1}{2}\int ds \left({\dot r}^{2} + r^{2}{\dot \theta}^{2}\right)$$ We take the variation of this, and get $$\delta I = \int ds \left({\dot r}\delta {\dot r} + r{\dot \theta}^{2} \delta r + r^{2}{\dot \theta} \delta{\dot \theta}\right)$$ Per our usual procedure, we want to vary with respect to the original variables and not their time derivative. We also neglect the variation on the boundary, and assume that $\delta {\dot x} = \frac{d}{ds}\delta x$. So, we integrate by parts, and we get: $$\delta I = \int ds\left(\left(-{\ddot r} + r{\dot \theta}^{2}\right)\delta r + \left(-{\ddot\theta}r^{2} - 2r{\dot r}{\dot\theta}\right)\delta \theta\right)$$ Since the geodesic must be zero independently of the variations $\delta r$ and $\delta \theta$, we know that the terms inside of the parentheses must be independently zero, and we get: $$\begin{align} 0 &= {\ddot r} - r{\dot \theta}^{2}\\ 0 &= {\ddot \theta} + \frac{1}{r}\left({\dot r}{\dot \theta} + {\dot \theta}{\dot r}\right) \end{align}$$ Now, we have this as a system of equations, and we remember that the geodesic equation, in terms of Christoffel symbols, is $0={\ddot x}^{a} + \Gamma_{bc}{}^{a}{\dot x}^{b}{\dot x}^{c}$, and we conclude that $\Gamma_{\theta \theta}{}^{r} = -r$, $\Gamma_{r\theta}{}^{\theta} = \Gamma_{\theta r}{}^{\theta} = \frac{1}{r}$, and that all others are zero.
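If you want to automate the variational route, here is a hedged sketch using SymPy (assuming it is available): it builds the polar-plane Lagrangian above, applies the Euler-Lagrange equations, and prints equations equivalent to the geodesic equations, from which the Christoffel symbols can be read off. Pointing the same script at $\mathcal{L} = \frac12\left(\dot\psi^2 + \sin^2\psi\,\dot\theta^2 + \sin^2\psi\sin^2\theta\,\dot\phi^2\right)$ gives the 3-sphere coefficients.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

s = sp.symbols('s')
r = sp.Function('r')(s)
th = sp.Function('theta')(s)

# Lagrangian (1/2) g_{mu nu} x'^mu x'^nu for the flat plane in polar coordinates
L = sp.Rational(1, 2) * (r.diff(s)**2 + r**2 * th.diff(s)**2)

# Euler-Lagrange equations for each coordinate
for eq in euler_equations(L, [r, th], s):
    print(sp.simplify(eq))

# The printed equations are equivalent to
#   r'' - r*theta'^2 = 0            ->  Gamma^r_{theta theta} = -r
#   theta'' + (2/r) r' theta' = 0   ->  Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r
```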
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Evaluating Feynman diagram for coupling between scalar field and dirac particle and anti particle If I have a scalar field $\alpha$ and a Dirac particle $\beta$ and its anti particle $\overline{\beta}$, such that the three couple to give a vertex factor of $-ik$ when evaluating the Feynman diagram (where $k$ is a dimensionless coupling constant), how do I evaluate the first order diagram of $\alpha \longrightarrow \beta + \overline{\beta}$?
Basically this is a tree-level diagram of an $\alpha$ particle decaying into a $\beta \overline{\beta}$ pair. You need to draw the Feynman diagram. Now, single "internal" lines are propagators, and external lines are currents, but you need to direct the external lines so as to have a current (for reference, look at a standard QED diagram such as muon pair production: the muon lines have to be directed in the same way). Then you need to construct the current. Every current is a bit different depending on the pertinent Lagrangian; for instance the electron current in QED is given by $$ J =\overline{u}(p')\gamma^{\mu}u(p), $$ where $\overline{u}$ is the outgoing "particle", the non-overlined $u$ is the ingoing one, and there's a $\gamma$ matrix in the middle... You can look up some basic Feynman rules online to help you.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is rubber an incompressible material? I know its Poisson's ratio is close to 0.5, but I don't understand physically what a Poisson's ratio of 0.5 and incompressibility mean. When I tried searching, I found that rubber (and similar polymers) conserve volume after deformation and so are incompressible. But the same is the case with steel (Poisson's ratio around 0.3): it conserves volume after deformation. So can someone explain this?
Conserved volume means the volume before and after any deformation must be equal (as in a rolling operation, forging operation, etc.). In this situation the Poisson ratio becomes 0.5. Rubber deforms essentially incompressibly; that is, if we stretch rubber its length increases and its width decreases proportionally, so its volume remains the same. As a result, it has a Poisson ratio of 0.5.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 5 }
How do mirrors look & behave atomically? I was observing mirrors recently and was thinking about how a mirror would look atomically. I was always used to thinking of atoms as being colored, so I have always wondered how they actually look. I looked at previous questions and found that silver atoms could play a part in it, but nowadays mirrors are not made of pure silver, so I am confused. Not to mention: how do mirrors function atomically? It's puzzling because if, say, I shoot light towards a very black object then the radiation is absorbed, but when I shoot it at a mirror it reflects it back. That said, how do they work? How do atoms reflect the photons back? And how and why do the black body's atoms not do the same?
Reflection, refraction and transmission of light are macroscopic manifestations of a phenomenon called scattering. In scattering, incoming photons are absorbed and either the quantum energy level of an atom is raised (as in the case of resonance absorption) or the outer electron cloud is set into motion (this is responsible for most of the light around us). Almost instantaneously another photon is emitted by the atom, and this gives rise to reflection, refraction and transmission.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Gibbs free energy and maximum work I'm getting confused between two important results. The Gibbs free energy is $G = H-TS$ where $H$ is the enthalpy and $S$ is the entropy. When the temperature and pressure are constant the change in the Gibbs energy represents maximum net work available from the given change in system . But $dG = VdP-SdT$, so at constant temperature and pressure i'm getting $dG=0$. This is the criteria for phase equilibria. I'm getting Gibbs free energy change at constant $T$ and $P$ as maximum work in one relation and zero in another. How are these compatible?
The derivation can be made as follows: $$G=H-TS$$ $$dG = dH -d(TS)$$ $$dG = d(U+PV) -d(TS)$$ $$dG = dU + PdV +VdP -TdS - SdT$$ At constant pressure and temperature, $dP=0$ and $dT=0$, so $$dG = dU + PdV -TdS.$$ From the above, we see that $dG$ decreases when internal energy is transferred out, the system does work, and the system's entropy increases.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/138955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Apparent Contradiction with Faraday's Law Say we have a non-uniform magnetic field that is static in time. Specifically, let's make it: $\overrightarrow{B}(x,y,z) = x \hat{z} $ Now say we have a metal loop in the xy plane which has a non-zero velocity in the +x direction. Say we pick a random point in time, and then at that point in time measure the line integral of the E-field around the metal loop. The derivative of the magnetic flux through the metal loop is non-zero, so by Faraday's law the line integral of the E-field around the metal loop will be non-zero. Now imagine a different scenario where there is no metal loop, but we decide to measure the line integral anyway along the exact same closed path that we did in the previous scenario. The B-field is constant, so by Faraday's Law we will get zero. I was wondering if anyone knew the answer to this apparent contradiction. Does the presence of a moving metal loop make the E-field different from what it would be with no moving metal loop? The metal loop is neutral, so you wouldn't think that would be possible.
I'm pretty sure that the answer to this is that we're cheating by saying that there's an E-field in the loop in the first case. Well, we're "cheating" in a very narrow sense of the word. What we're doing is implicitly Lorentz transforming to a reference frame where the loop is stationary. If we do this, then the magnetic field at x = 0 becomes time-variant, and we get a manifest electric field, that can drive a motional EMF. Why do we do this? Because this description is simpler than the one we would have to make by doing an analysis based on pure magnetic fields pushing charge carriers around in the loop. We can simply say, "hey, we've got an e-field, this pushes electrons natively," and be done.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Can we deviate a bullet from its target? Did anybody try this? Can we deviate a bullet from its target, I mean by applying a strong field at the target? Is there any technique like this so far?
Here is another completely different answer with a different technology. It is the use of spaced charged armour to disrupt the jet of molten metal from a shaped charge. The idea is that as the metal jet bridges the two sheets of armour it completes a short circuit, as the armour is connected to a high-energy capacitor. The resulting current flow disrupts the jet. See: Electric Reactive Armour.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A hypothetical question on mechanics Being located in a tropical region, I am quite acquainted with the Ceiling fan. I have a question about it. If the top, that is, the axle (I'm not sure of the terminology: I mean the part which is thin, rodlike, attached to ceiling)...is rigidly fixed, then when the fan is turned on, the blades spin. On the other hand, when the local mechanic brings the fan down and puts it on the floor, then when it is powered on the blades are fixed and the axle spins. My question is this: If, neglecting friction, I keep both top and bottom of the fan free to move (like maybe in outer space), and I turn on the fan, what will happen?
If your fan is not connected to anything, and the blades do not encounter any air drag (outer space), then conservation of angular momentum means that the blades will turn in one direction and the motor assembly in the other direction. If you know the moment of inertia of the blades, call it $I_b$, and of the motor, $I_m$, then the ratio of the angular velocities of the blades vs the motor is given by the conservation of angular momentum: $$I_b \omega_b = -I_m \omega_m$$ In other words, if the moment of inertia of the blades is the same as that of the motor, they will rotate in opposite directions with the same speed (with their relative speed equal to the speed that the motor can reach, given the applied voltage). Note that the above assumes a particular split between "blade side" and "motor side": I can't quite figure out from your description whether the motor is rigidly attached to the blades (or maybe half the motor is), so you might need to interpret my answer accordingly. Bits that are fixed to the blades add to the inertia of the blades, and bits that are attached to the axle/motor add to their combined moment of inertia.
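A tiny numerical illustration with assumed numbers (not from the question):

```python
I_b = 0.30      # kg*m^2, blade assembly (assumed)
I_m = 0.10      # kg*m^2, motor/body assembly (assumed)
w_rel = 30.0    # rad/s, relative speed the motor establishes between the two parts

# Solve I_b*w_b = -I_m*w_m together with w_b - w_m = w_rel
w_b = w_rel * I_m / (I_b + I_m)
w_m = -w_rel * I_b / (I_b + I_m)
print(w_b, w_m)                  # 7.5 and -22.5 rad/s
print(I_b * w_b + I_m * w_m)     # 0.0: total angular momentum stays zero
```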
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does the sea horizon line always seem to be at the same height as one's eyes? I wonder why, when I look at a sea/ocean, the horizon line always seems to be at the same height as my eyes, no matter how many meters I am above sea level. This is something I noticed when I trek, and it has happened in many places. It's particularly noticeable when I climb down a mountain, from the peak to the beach. However, this isn't the case when landing in an airplane (although maybe in the last seconds; I haven't had the occasion to test), so it seems one needs to "touch the earth" to see this effect. Thanks a lot for your help! Edit: Here is my understanding of the triangle thing. But it doesn't make any sense, as the higher I go, the less far I see. To make it right I should increase the length of the two opposite sides (catheti) of the hypotenuse, but to what extent should I increase them? Thanks a lot for your help! Second try! Where the earth looks like a "line"... I still don't get it!
A simple geometric construction shows that if the altitude is very small compared to Earth's radius, the line of sight to the horizon, as measured from the local vertical, is very near 90 degrees. The Earth's radius is about 6371 km. For that line of sight to come down to just 1 degree from the local vertical, you would have to elevate your point of view by about 370000 km!
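Here is a small numerical sketch of the same geometry (smooth spherical Earth, refraction ignored): the angle between the local vertical and the line of sight to the horizon is $\arcsin\!\big(R/(R+h)\big)$.

```python
import math

R = 6371e3  # Earth's radius in metres
for h in (2.0, 100.0, 1000.0, 10000.0):   # example eye heights in metres
    ang = math.degrees(math.asin(R / (R + h)))
    print(f"h = {h:7.0f} m : {ang:5.2f} deg from vertical "
          f"({90 - ang:4.2f} deg below eye level)")
```

Even from a 1000 m peak the horizon drops only about one degree below eye level, which the eye can barely notice; from an airliner at 10 km it is roughly three degrees.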
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Is it possible for larger antimatter atoms to decay to matter and vice versa? Following on from previous questions: If you have antimatter-matter interactions where there is a larger antimatter particle (say carbon or silicon), is there any reason to believe that the antimatter particle could decay to matter particles during an interaction, and vice versa? Thanks
In some normal matter there is the phenomenon of positron decay. That is, an unstable atom decays by the emission of an anti-electron. Presumably there is a mirror form of this where antimatter decays by electron emission.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How detectors in particle colliders can differentiate neutrons from antineutrons? Their mass is the same. None of them interacts with EM fields. And their decay (around 1000s) is far too slow to see their decay products yet in the detector. How is it then possible to differentiate them?
Detectors at particle colliders are layered like onions around the collision vertex (the CMS detector at CERN is a typical example). First there are charged-particle-sensitive detectors where charged particles leave tracks because of ionisation, but the mass density is low so strong interactions do not happen often; their momentum can be measured by the curvature in the imposed magnetic field. Then there are electromagnetic calorimeters, where photons leave their energy and charged particles continue as tracks. Then come the hadronic calorimeters, with a lot of mass, so that strongly interacting particles (hadrons: protons, neutrons, antiprotons, antineutrons) deposit their energy. Protons will have a continuous track up to the hadronic calorimeter due to their charge; antiprotons likewise, but with negative charge. Neutrons will deposit energy without a previous track. Antineutrons will also deposit energy without a track, except that, due to the annihilation with matter, the shower will be more energetic. At LHC energies the difference in multiplicity due to the annihilation of antineutrons will not be distinguishable. At low energies, antineutrons have higher-multiplicity showers. Generally in colliders the existence of antineutrons might be guessed at by conservation of charge and baryon number, in low-multiplicity events.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Moment of inertia of rods Ok, so I'm extremely comfortable with calculating the moment of inertia of continuous bodies, but how do we do it for a system that is not continuous? For example, if 3 rods of mass $m$ and length $l$ are joined together to form an equilateral triangle, what will be the moment of inertia about an axis passing through its centre of mass, perpendicular to the plane? I know that the moment of inertia of each rod about its own centre is $ml^2/12$ and that the centre of mass is at the centroid. Also, if 2 rods form a cross, then to calculate the moment of inertia about their point of intersection, would it be correct to sum up the individual moments of inertia of the rods?
The moment of inertia is defined relative to the axis of rotation, which in this case passes through the centre of the equilateral triangle, perpendicular to its plane. For a single rod, use the parallel axis theorem: the rod's centre lies a distance $d = \frac{l}{2\sqrt{3}}$ from the centroid, so its moment of inertia about the central axis is $\frac{ml^2}{12} + md^2 = \frac{ml^2}{12} + \frac{ml^2}{12} = \frac{ml^2}{6}$. Then you can multiply this result by 3, since the three rods are equivalent by symmetry, giving $\frac{ml^2}{2}$.
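A quick numerical check of the triangle result (a sketch with $m=l=1$ assumed):

```python
import numpy as np

m, l = 1.0, 1.0
R_circ = l / np.sqrt(3)                      # circumradius of the triangle
angles = np.pi/2 + np.arange(3) * 2*np.pi/3
verts = R_circ * np.column_stack((np.cos(angles), np.sin(angles)))

ts = np.linspace(0.0, 1.0, 200001)
I = 0.0
for i in range(3):                           # integrate r^2 dm along each rod
    a, b = verts[i], verts[(i + 1) % 3]
    pts = a + np.outer(ts, b - a)
    I += m * ((pts**2).sum(axis=1)).mean()   # uniform rod: dm = m dt

print(I, 0.5 * m * l**2)                     # both come out to ~0.5
```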
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What does the $c$ in $eV/c^2$ stand for? I have been wondering (and also searching) what the $c$ in eV/$c^2$ stands for. (For example, the mass of the electron is $0.511 \, \text{MeV}/c^2$.) I have read that this unit has been derived from Einstein's equation, $E=mc^2$, but how is that possible? We use the symbol $c$ for the coulomb. Also, how does one convert this to our normal units of mass ($\mathrm{eV}/c^2 \to \mathrm{kg}$)?
A sample unit conversion for the second half of your question: \begin{alignat}{2} 0.511\,\mathrm{MeV}/c^2 &= 0.511\,\mathrm{MeV}/c^2 \times \frac{10^6\,\mathrm{eV}}{1\,\mathrm{MeV}} \times \frac{1.60\times10^{-19}\,\mathrm{joule}}{1\,\mathrm{eV}} \\ &\quad\qquad \times \frac{1\,\mathrm{kg\cdot m^2/s^2}}{1\,\mathrm{joule}} \times \left( \frac{c}{3.00\times 10^8\,\mathrm{m/s}} \right)^2 \\ &= 9.08 \times 10^{-31}\,\mathrm{kg} \end{alignat} As usual for unit conversion problems, all of the fractions on the right are just clever ways of writing the number one, so multiplying by them doesn't change the value of the quantity at hand.
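The same conversion in a few lines of Python, using slightly more precise constants than the rounded ones above (which is why it lands on the familiar $9.11\times10^{-31}$ kg):

```python
eV_to_J = 1.602176634e-19   # J per eV (exact in the 2019 SI)
c = 2.99792458e8            # m/s (exact)

m_kg = 0.511e6 * eV_to_J / c**2   # 0.511 MeV/c^2 expressed in kg
print(m_kg)                        # ~9.11e-31 kg
```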
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Do transmitters create magnetic fields and radiation? In the company I work for (I am a software engineer) we develop a system which uses a transmitter (antenna) that creates a magnetic field. I also know that radio station transmitters create radio waves, so I am somewhat confused. Do the coils built into the transmitter create an EM radiation when a current passes through them or do they create a magnetic field? Maybe both? I have very limited knowledge of physics, so I would appreciate an intuitive rather than a formal type of answer.
The coil controls the fluctuation of the current and thus the fluctuation of the magnetic field around the transmitter. The antenna in that arrangement (I assume an electric dipole or monopole) is a kind of capacitor, and its role is to change the ratio of the electric to the magnetic field before the wave hits free space; it is a matching transformer to the impedance of free space, $120\pi \approx 377\,[\Omega]$. The combination of the coil with the capacitor is a resonator at which radiation is possible with that particular $E$ to $H$ ratio. (If the antenna is of the loop kind then it is itself inductive, and instead of a coil the matching is done with an external capacitor controlling the electric field fluctuation.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/139826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a difference between the speed of light and that of a photon? As in the title I am curious whether there is a difference between the speed of photon and the speed of light, and if there is what is the cause of such a difference?
In quantum mechanics a particle can be treated as a wave and a wave can be treated as a particle. This is the notorious wave particle duality. I won't go into this any further here because it's been discussed to death in lots of previous questions. Search this site for wave particle duality if you're interested in finding out more. Anyhow, assuming I interpret your question correctly you're asking if the speed of the particle, i.e. the photon, is the same as the speed of the light wave. And the answer is that yes it is. However this is actually rather unusual and in fact it only applies to massless particles like photons. If you take particles like electrons that have a non-zero mass then the wave velocity is not equal to the particle velocity. The reasons for this get very technical very quickly, and I'm guessing you don't want to go into all the gory details. In brief: we associate two velocities with a wave, the phase velocity and the group velocity. At the risk of oversimplifying, the phase velocity is the velocity associated with the wave and the group velocity is the velocity associated with the corresponding particle. For any massive particle the two velocities are different, but for a massless particle they are the same. That's why the speed of the light wave is the same as the speed of the photon.
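To make that last paragraph concrete, a minimal sketch using the de Broglie relations $E=\hbar\omega$, $p=\hbar k$ and the relativistic dispersion relation $E^2=p^2c^2+m^2c^4$:
$$v_{\text{phase}}=\frac{\omega}{k}=\frac{E}{p}, \qquad v_{\text{group}}=\frac{d\omega}{dk}=\frac{dE}{dp}=\frac{pc^2}{E}=v .$$
For a massless particle $E=pc$, so both velocities equal $c$; for a massive particle $v_{\text{phase}}=c^2/v>c$ while $v_{\text{group}}=v<c$, which is the sense in which the wave and particle speeds come apart.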
{ "language": "en", "url": "https://physics.stackexchange.com/questions/140923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
What happens to a spoon which is detached from a satellite? Suppose a spoon is part of a satellite but then detaches from the satellite. Does it fall straight to the ground, or does it take a parabolic path or some other path before coming to the surface of the Earth?
A few assumptions before we get started: 1) The satellite that you refer to is traveling in an orbital path around the Earth, as opposed to some other type of motion (You did not specify whether the satellite was in orbital or sub-orbital flight. If I assumed incorrectly, please let me know with a comment.) 2) Relative to the satellite, the spoon is released from rest, along the path of the satellite. As long as my assumptions in 1 and 2 are correct, then the spoon will continue traveling along the original path of the satellite, in orbital flight. The gravitational force of attraction between Earth and your satellite is given by $$F=\frac{Gm_{Earth}m_{satellite}}{r^2}$$ where I will from here forward refer to the masses as $m_E$ and $m_s$ for short. For an orbiting satellite, the gravitational force of attraction between the Earth and the satellite acts as a centripetal force to hold the satellite in a circular orbit around the Earth. The equation for this centripetal force is given by $$F=\frac{m_sv^2}{r}$$ Now before we go any further I want to point out that the Earth and satellite each orbit about the combined center of mass of the system. However, for a typical satellite orbiting the Earth, this difference is so small that, for the purpose of your question, we can just say the satellite orbits the Earth. Additionally, your satellite will actually travel in an ellipse, not a circle. However, the eccentricity for an orbiting satellite is close to that of a circle, so again, for the purpose of this question, we can simplify the situation by saying that the satellite orbits in a circle about the Earth. Anyway, because it is the force of gravity that acts as a centripetal force holding the satellite in orbit, we can apply Newton's Synthesis and set these two equations equal to each other: $$\frac{Gm_Em_s}{r^2}=\frac{m_sv^2}{r}$$ This simplifies to $$\frac{Gm_E}{r}=v^2$$ Now the reason why I went through this whole derivation is just to show you that the orbital velocity of a satellite does not depend upon its mass. The $m_s$ cancelled out of the final equation. What this means is that when the spoon is released from the satellite, it will continue to travel at the same velocity it had before, along the orbital path, due to inertia. Even if you threw the spoon away from the satellite, the new orbit of the spoon would not be too much different, because the velocity you would have provided to the spoon by throwing it is still very small in comparison to the orbital velocity it already had.
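A tiny numerical sketch of that mass independence (the altitude here is just an example):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M = 5.972e24      # mass of the Earth, kg
R = 6.371e6       # radius of the Earth, m
h = 400e3         # example altitude, m

v = math.sqrt(G * M / (R + h))   # circular orbital speed
print(f"{v:.0f} m/s")            # ~7700 m/s, regardless of the orbiting mass
```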
{ "language": "en", "url": "https://physics.stackexchange.com/questions/140980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Transformation to a uniformly rotating frame I'm midway through a problem at the beginning of a GR course, my question is simply this: If $$ x=x'\cos\Omega t-y'\sin\Omega t $$ where $x'$ and $y'$ indicate the rotated frame of reference. What does that make $dx^2$? I need this so I can make substitutions into the equation: $$ ds^2=c^2dt^2-dx^2-dy^2-dz^2 $$
$$ dx = \cos (\Omega t) dx' -x' \Omega \sin (\Omega t) dt - \sin (\Omega t) dy' -y' \Omega \cos (\Omega t) dt$$ It's basically just the product rule and the chain rule.
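For completeness, and assuming the companion transformation $y = x'\sin\Omega t + y'\cos\Omega t$, $z=z'$, squaring the differentials and adding them gives
$$dx^2+dy^2 = dx'^2+dy'^2 + 2\Omega\left(x'\,dy'-y'\,dx'\right)dt + \Omega^2\left(x'^2+y'^2\right)dt^2,$$
so the line element becomes the standard uniformly rotating-frame form
$$ds^2 = \left[c^2-\Omega^2\left(x'^2+y'^2\right)\right]dt^2 - 2\Omega\left(x'\,dy'-y'\,dx'\right)dt - dx'^2 - dy'^2 - dz'^2 .$$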
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rewriting the Hydrogen Schrodinger Equation as a system of differential equations I have only ever seen the Schrodinger equation for the hydrogen atom written out in a form like this: $$ -\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial \psi}{\partial r}\right) + \frac{1}{r^2\sin{\theta}}\frac{\partial}{\partial \theta}\left(\sin{\theta}\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2{\theta}}\frac{\partial^2\psi}{\partial \phi^2}\right]-\frac{Ze^2}{4\pi\epsilon_0 r}\psi=E\psi $$ I'm still learning the necessary skills to solve PDEs, let alone get to the point of solving this problem, but I wanted to know if someone could show me what this differential equation would look like in a matrix notation or as a system of differential equations.
If you assume separability of the wave function, i.e., $\psi(\mathbf x)=u(x)v(y)w(z)$, you can solve the individual components separately: \begin{align} -\frac{\hbar^2}{2\mu}\frac{d^2u(x)}{dx^2}+V_1(x)u(x)&=E_1u(x)\\ -\frac{\hbar^2}{2\mu}\frac{d^2v(y)}{dy^2}+V_2(y)v(y)&=E_2v(y)\tag{1}\\ -\frac{\hbar^2}{2\mu}\frac{d^2w(z)}{dz^2}+V_3(z)w(z)&=E_3w(z) \end{align} with the further constraint that $$ E_1+E_2+E_3=E $$ We can express (1) as the matrix differential equation, $$ \mathbf u''=A\mathbf u,\tag{2} $$ in which case $A$ is clearly diagonal and $\mathbf u=(u(x),\,v(y),\,w(z))^T$. In the case that the wave function is not separable, this method is not appropriate, as you'd have a single scalar equation. For your case of the spherical wave function, you can solve the radial component and the angular component separately, $\psi(\mathbf r)=R(r)Y(\theta,\phi)$ with $Y(\theta,\phi)$ the spherical harmonics, as \begin{align} \frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR(r)}{dr}\right)+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0 r}\right)&=\lambda \\ \frac1Y\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac1Y\frac1{\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}&=-\lambda \end{align} where $\lambda$ is a separation constant to be discovered (it turns out to be $\ell(\ell+1)$). This is the typical method of solving this particular problem in quantum mechanics textbooks.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How did Newton establish his famous third law of motion? For every action, there is an equal and opposite reaction. This is Newton's famous third law of motion. But how did he come to this conclusion? We can prove the 2nd law using calculus. But how did Newton prove this law? Or did he just use practical examples? [According to me, it may be due to conservation of energy, but did Newton use this?]
This page has a helpful summary of the history--it seems he initially accepted the Aristotelian idea that objects could only continue to move if some "force" inside them was moving them (keep in mind this is before his technical definition of 'force'), and it took him a while to switch to the idea that bodies naturally tend to keep moving unless acted on by a force, i.e. inertia (it seems he got this idea from Galileo and Descartes). After this he developed the concept that "force" must be acting whenever there is a change in motion, i.e. an acceleration. From there, the article suggests he got the idea for the third law from various mechanical experiments in which it could be observed that the total momentum always remained constant (and if you define force as mass*acceleration, conservation of momentum implies that forces must always be equal and opposite): Continuing his investigation of impact, he analyzed a collision between two bodies of unequal mass in the center of gravity frame of reference. He stated that they had “equal motions” in this frame, both before and after the collision. This could only mean mass×speed, or momentum (equal and opposite, of course)—the Third Law. (He realized and stated that during such a collision, the center of mass itself would move at a steady speed.) A Third Law Experiment with Pendulums In fact, there is a Third Law experiment in the Principia, in the second Scholium, right after the Laws of Motion and their Corollaries. He collided together two pendulums (about ten feet long) with different masses, to establish that the impacts (i.e. forces) felt by them were equal and opposite, as measured by how far they rebounded. He went to considerable trouble to account properly for air resistance and imperfect elasticity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
OPE of fermionic field bosonization in string theory, in Polchinski 10.3.12 In Polchinski's String Theory Vol. 2, equations 10.3.12 are $$e^{iH(z)}e^{-iH(-z)}~=~\frac{1}{2z}+i\partial H(0)+2zT_B^H(0)+O(z^2)\tag{10.3.12a}$$ $$\psi(z)\bar\psi(-z)~=~\frac{1}{2z}+\psi\bar\psi(0)+2zT_B^\psi(0)+O(z^2)\tag{10.3.12b}$$ How are these two OPEs calculated, especially the second and third terms?
I've got the answer myself. Simply do a Taylor expansion of the left-hand side. Expand both the exponential and the field around $H(0)$ or $\psi(0)$; then the right-hand side follows naturally after plugging in the definitions of $T_B$.
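In a little more detail, for (10.3.12a), and assuming the standard free-boson conventions $H(z)H(0)\sim -\ln z$ and $T^H_B=-\tfrac12\!:\!\partial H\,\partial H\!:$ (so this is just a sketch of the steps): first use the exponential OPE
$$e^{iH(z)}\,e^{-iH(-z)} = \frac{1}{2z}\,:\!e^{\,i\left[H(z)-H(-z)\right]}\!:\, ,$$
then Taylor expand $H(z)-H(-z)=2z\,\partial H(0)+\tfrac{z^3}{3}\partial^3H(0)+\cdots$ and the exponential itself,
$$:\!e^{\,i\left[H(z)-H(-z)\right]}\!:\; = 1 + 2iz\,\partial H(0) - 2z^2\!:\!\partial H\,\partial H\!:\!(0) + O(z^3),$$
so that multiplying by $1/2z$ gives $\frac{1}{2z}+i\partial H(0)+2z\,T^H_B(0)+O(z^2)$. Equation (10.3.12b) works the same way: Taylor expand $\psi(z)$ and $\bar\psi(-z)$ around $0$, use the singular OPE $\psi(z)\bar\psi(-z)\sim 1/2z$, and the $O(z)$ term reproduces $2z\,T^\psi_B(0)$ once Polchinski's definition of $T^\psi_B$ is inserted.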
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Magnetic field of a static disk magnet I have a magnetic disk (radius R, height h) that creates magnetic field lines (axisymmetrical). I simulated the field lines, exported the data, and am now trying to fit a function to the data to have an analytical solution for the magnetic flux density of my specific magnet. For several fixed values of r I have the magnetic flux density $B_z(z)$ as well as $B_r(z)$. I need functions of the following form: $B_z(r,z)$ and $B_r(r,z)$, since I think $B_\phi (r,z)$ is $0$. So I am wondering how I need to start. I found the following equation online, but I think it's only for the absolute value, not $B_z$ and $B_r$ as I need it: $$B(r,z) = \frac{\mu_0 m}{4\pi (z^2+r^2)^{\frac{3}{2}}}\sqrt{1+\frac{3z^2}{z^2+r^2}}$$ Can someone help me with how my function needs to look? I was thinking something like: $$B_z(r,z) = \frac{a}{b+((z+c)^2+(r+d)^2)^{\frac{3}{2}}}\sqrt{1+\frac{3z^2}{z^2+r^2}}$$ or am I completely off now? And furthermore, how would I change to Cartesian coordinates then? Many thanks for your help!
My answer will probably be a little off topic, but why do you need this kind of analytical function? Since you have the magnetic flux density (from an FEM simulation, I guess), you can use any interpolation to get the B value anywhere... Unless you have a very specific need, if you only want to get the value of B anywhere, that is probably the easiest solution. I would use, for instance, scipy.interpolate.griddata or even interp2d if your mesh is regular. From an academic point of view, the analytical solution for the induced magnetic field of a coil/magnet does not exist without the use of simplifications (e.g. being far from the magnet (your equation), near the axis, ...).
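A minimal sketch of the interpolation route (the file name and column layout below are just placeholders for whatever your exported data looks like):

```python
import numpy as np
from scipy.interpolate import griddata

# Assumed export: one row per sample point with columns r, z, B_r, B_z
data = np.loadtxt("field_export.txt")
r, z, Br, Bz = data.T
pts = np.column_stack((r, z))

def B_rz(r_q, z_q):
    """Interpolated (B_r, B_z) at query points (r_q, z_q)."""
    q = np.column_stack((np.atleast_1d(r_q), np.atleast_1d(z_q)))
    return (griddata(pts, Br, q, method="cubic"),
            griddata(pts, Bz, q, method="cubic"))

def B_xyz(x, y, z_q):
    """Cartesian components, using the axisymmetry (B_phi = 0)."""
    r_q, phi = np.hypot(x, y), np.arctan2(y, x)
    Br_q, Bz_q = B_rz(r_q, z_q)
    return Br_q * np.cos(phi), Br_q * np.sin(phi), Bz_q
```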
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Kinetic energy vs. momentum? As simple as this question might seem, I failed to intuitively answer it. Let's assume there is a $10,000$ $kg$ truck moving at $1$ $m/s$, so its momentum and KE are: $p=10,000$ $kg.m/s$ and $KE=5,000$ $J$. Now if we want to stop this truck, we can simply send another truck with the same mass and velocity in the opposite direction to collide with the first one, and both of them will stop because of the conservation of momentum. But what if we want to stop the truck by doing work in the opposite direction of motion? Let's assume there is a rope with an end tied to the back of the truck, and the other end is tied to a $400$ $kg$ motorcycle moving at $5$ $m/s$, so its $p=2,000$ $kg.m/s$ and $KE=5,000$ $J$. Now we have a truck moving in one direction with the same kinetic energy as the motorcycle, which is moving in the opposite direction, but the truck has more momentum. So will the truck be stopped by the energy (or work) of the motorcycle? If yes, then how is momentum conserved, and if no, then where does the energy of the motorcycle go? Ignore any friction forces.
A momentum-based analysis is the way to go for the motorcycle-rope-truck scenario. In your kinetic energy argument, you are assuming that kinetic energies add like vectors. This is not the case. If you want to properly apply a kinetic-energy-work argument, you need to think about the force $F$ that the rope exerts on the truck and the distance $d$ over which this force acts. Only by doing this will your momentum and kinetic energy methods agree on the answer. (Note that this ignores any energy-storing capability of the rope.)
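A small numerical sketch (assuming the rope eventually drags both vehicles to a common velocity, and that the rope itself stores no energy):

```python
m_truck, v_truck = 10_000.0, +1.0   # kg, m/s
m_bike,  v_bike  =    400.0, -5.0   # kg, m/s (opposite direction)

p_total = m_truck*v_truck + m_bike*v_bike          # 8000 kg*m/s
v_final = p_total / (m_truck + m_bike)             # ~0.77 m/s: the truck does NOT stop

ke_before = 0.5*m_truck*v_truck**2 + 0.5*m_bike*v_bike**2   # 10000 J
ke_after  = 0.5*(m_truck + m_bike)*v_final**2               # ~3077 J

print(v_final, ke_before - ke_after)   # the "missing" ~6900 J ends up as heat,
                                       # deformation, sound, etc.
```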
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Reality of "electrical explosion" I have often heard people who have been electrocuted refer to the "explosion" and how they were "thrown back" by the "blast". Sometimes the force of the blast is reported to throw people many metres. There is no explosive involved - how can there be a repulsive force from a discharge of electricity?
CuriousOne's comments basically answered your question. I will add that if enough current is allowed to suddenly flow it can vaporize materials very rapidly, including metal. This sudden vaporization can create a rapid expansion and if that expansion is restricted by something then it can explode in the same fashion as a bomb. You often hear of transformers being "blown" because the high current inside causes it to rapidly expand faster than the pressure can be relieved, thus blowing up the container.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/141929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why do electrons orbit protons? I was wondering why electrons orbited protons rather than protons orbiting electrons. My first thought was that it was due to the small amount of gravitational attraction between them that would cause the orbit to be very close to the proton (or nucleus). The only other idea that I would have is that the strong interaction between protons and neutrons has something to do with this. I have heard that the actual answer is due to something in QM, but haven't seen the actual explanation. The only relation to QM that I can think of is that due to a proton's spin and the fact that they are fermions, the atomic orbitals should be somewhat similar. Do protons have the same types of orbitals, that are just confined by the potential of the strong force? A related question that came up while thinking of this being due to a gravitational interaction: do orbits between protons and electrons have a noticeable rotation between each other (as the sun orbits the earth just as the earth orbits the sun), or is any contribution this has essentially nullified by the uncertainty of the location of the electron (and possibly the proton as well)?
The short answer is: protons are much more massive (about 1800 times) than electrons. That makes them (approximately) the center of mass of the system; that's why electrons are the ones orbiting protons and not vice versa. The term 'orbiting', however, means something essentially quantum. It is the reason for the stability of the atom (the electrons don't radiate away their energy and, as a consequence, don't fall onto the nucleus), and also what you probably meant by 'the actual answer is due to something in QM'.
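To put a number on the "approximately" (and on the question about mutual orbiting): both particles actually move about their common centre of mass, and the standard way to quantify how much this matters is the reduced mass.

```python
m_e = 9.109e-31   # kg
m_p = 1.673e-27   # kg

mu = m_e * m_p / (m_e + m_p)   # reduced mass of the electron-proton pair
print(m_p / m_e)               # ~1836: the mass ratio
print(mu / m_e)                # ~0.99946: hydrogen levels shift by only ~0.05%
```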
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Would using Cherenkov radiation for lighting be feasible? Could Cherenkov radiation be used for general illumination, for example to replace LEDs, light bulbs etc.? I.e. are there, or could there be, methods to produce a substantial amount of visible light with Cherenkov radiation (1) using devices compact and cheap enough, (2) safely, and (3) energy-efficiently enough to actually make any sense? What other problems could there be for using Cherenkov radiation for this purpose?
Quite aside from the issue of ionizing radiation, Cerenkov generating particles also lose energy by other processes and that ends up as heat. Moreover, all the kinetic energy of the particles once they drop below the Cerenkov threshold is lost in non-optical channels (i.e. more heat). So no, they could never be anywhere near as efficient as diode lighting. On top of that the spectrum is really cool to look at, but not the one you want in a lamp: it's far too blue.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
How do tachyons violate causality? Something moving faster than light should have imaginary mass, like photons have zero mass and thus travel at $c$. I have read this article of physicist E. C. George Sudarshan. He said taking mass to be imaginary we get real energy and momentum. (I think I have understood that.) However, if something moves at a speed greater than $c$, its proper time $\tau=t\sqrt{1-v^2/c^2}$ becomes imaginary (where $t$ is coordinate time). Does this imply causality violation? Does imaginary time mean time is going backward?
In the context of bosonic string theory, the ground state, with no oscillators excited, has a mass $$M^2 = -\frac{1}{\alpha'}\frac{D-2}{6}$$ where $\alpha'$ is the Regge slope, satisfying $\alpha' = 1/(2\pi T)$, where $T$ is the tension of the string, and $D$ is the number of spacetime dimensions. It seems it has an imaginary mass (provided $D\geq 3$). You may have also heard this particle is unnatural because it propagates faster than $c$. Let's go back to quantum field theory for a moment. Generally, the mass squared is simply the term that appears in the quadratic part of the Lagrangian, i.e. $$M^2 = \frac{\partial^2 V(\phi)}{\partial \phi^2} \biggr\rvert_{\phi = 0}$$ Hence, if $M^2 < 0$, we can interpret that as the fact that we are expanding around a maximum of the potential for a tachyon field (see the second derivative test). With this perspective, the Higgs field can also be viewed as a tachyon. As D. Tong states, it is unfortunate that bosonic string theory sits at this unstable point in the potential of the tachyon field. To date, we still don't know of a minimum of $V(\phi)$. (One can compute higher order corrections, and find a minimum, but then the next correction destabilizes the minimum again.) So it seems the issues in string theory are not about causality; they run much deeper.
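As a concrete toy illustration of that "expanding around a maximum" statement (a Higgs-like double well, not the actual string tachyon potential):
$$V(\phi)=-\tfrac12\,\mu^2\phi^2+\tfrac{\lambda}{4}\,\phi^4,\qquad \mu^2,\lambda>0,$$
so $M^2=V''(0)=-\mu^2<0$ at $\phi=0$, while the true minima sit at $\phi=\pm\,\mu/\sqrt{\lambda}$, where $V''=2\mu^2>0$ and the excitations are ordinary, slower-than-light particles.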
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
What does it mean by $t=-1$? If the position vector of a particle is $\hat{r}=\left(4+3t\right)\hat{\imath}+\left(t^3\right)\hat{\jmath}+\left(-5t\right)\hat{k}$, I want to find at what time this particle passes through the point $\left(1,\:-1,\:5\right)$. I found that $t=-1$ for this particle to pass through that point. What does that mean? Why is there a negative sign for $t$?
Yes, you defined the zero of time as when the particle is at $(4,0,0)$. It passed through $(1,-1,5)$ one second before that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inconsistency in the delta potential I encountered an inconsistency in the one-dimensional delta potential. Suppose we have a one-dimensional infinitely deep square well from $-L$ to $+L$. We know the eigenstates are sine and cosine functions. They are either even or odd. Now let us add one delta potential $g \, \delta(x)$ in the middle. Here $g\in \mathbb{R}$. By parity considerations, the odd states are not affected at all. The even states are coupled together. This means the new even-parity eigenstates are linear superpositions of the old even-parity eigenstates. In turn, this means the new even-parity eigenstates should have zero derivative at $x=0$. However, if you do this same problem in another standard way, i.e., by integrating the Schrodinger equation across the delta potential, you would get a boundary condition for the right part of the wave function in the form of $a \psi(0) + b \psi'(0) = 0$, where $a$ and $b$ are finite numbers. This boundary condition means $\psi'(0) \neq 0$. So the two approaches yield different results! How do we resolve this inconsistency?
The main point is that although a pointwise convergent Fourier series of cosine modes is an even function $\psi(-x)=\psi(x)$, it does not have to be differentiable at $x=0$. A pointwise convergent infinite sum of differentiable functions is not necessarily a differentiable function. More generally, as OP already mentions, the wave function $\psi(x)$ is not necessarily differentiable at the $x$-position of the delta potential and the two well walls. One should allow for a discontinuity in $\psi^{\prime}(x)$ at these three $x$-positions. See also e.g. my Phys.SE answer here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Explanation for negative specific heat capacities in stars? I've just found out that a negative specific heat capacity is possible. But I have been trying to find an explanation for this with no success. Negative heat capacity would mean that when a system loses energy, its temperature increases. How is that possible in the case of a star? Mustn't there be a source of energy to increase the temperature of any system?
Although John's answer is quite comprehensive, I would like to add this answer in order to reinforce my qualitative understanding of the matter and to try to provide the OP a more intuitive and qualitative explanation for the negative specific heat capacity as the OP seems to be looking for a more qualitative (and intuitive) sort of explanation. For usual objects like rocks and stars, the temperature is a direct measure of the internal kinetic energy of the object - i.e., the kinetic energy of its constituents. Now, if - the configuration of such an object be of such a nature that whenever the internal kinetic energy increases (decreases), the structure of the object has to change in a way that makes its potential energy decrease (increase) by an amount greater than the increase (decrease) in its internal kinetic energy - then clearly the specific heat capacity will be negative! For black holes, the story is a bit different. I haven't studied the work that determines Hawking temperature using the string theoretic microstates of a black hole and thus, I believe I can't really provide an explanatory or a deeper reasoning behind the negative specific heat capacity of black holes - but I will elucidate the mechanism of deriving the specific heat capacity of a black hole and that clearly shows that it must be negative. The temperature of a black hole is given by $T = \dfrac{\hbar c^3}{8\pi GM}$. The energy of a black hole is to be considered as $E = Mc^2$. Therefore, $dE = -\dfrac{\hbar c^5}{8 \pi G T^2} dT$. Thus, specific heat capacity $C = \dfrac{1}{M}\dfrac{dE}{dT} = -\dfrac{\hbar c^5}{8 \pi GM T^2}$. In a qualitative way, one can also think that since the temperature of a black hole is bound to decrease with an increase in its area (larger the black hole, the cooler it is) and the area is bound to increase with an increase in its mass (energy), the specific heat capacity of the black hole has got to be negative.
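For the star case the usual quantitative shorthand is the virial theorem; a minimal sketch, assuming an ideal, self-gravitating ball of gas in virial equilibrium:
$$2K+U=0 \quad\Longrightarrow\quad E=K+U=-K,$$
and since the temperature of the gas is just a measure of its kinetic energy ($K\propto T$), radiating energy away ($dE<0$) forces $K$, and hence $T$, to go up: $C=dE/dT\propto -\,dK/dT<0$. The "source" of the extra kinetic energy is the gravitational potential energy released as the star contracts.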
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 4, "answer_id": 0 }
Why aren't classical phase space distribution functions always delta functions? The phase space distribution function (or phase space density) is supposed to be the probability density of finding a particle around a given phase space point. But, classically, through Hamilton's equations, the system's time evolution is completely determined once the initial conditions are specified. So for a 2D phase space, why isn't the distribution function always the same: $$f(x,p,t)=\delta(x-x(t)) \ \delta(p-p(t))$$ I know that this thinking has to be wrong, and I am definitely confusing some things. I would like to ask for clarification.
You are right that if you know exactly the initial conditions of your system (that is, the exact location of your system's state in phase space), then its evolution is entirely determined. But that's where the issue lies: we don't know exactly the state of the system as described by a point in phase space. Instead we may know some values of macro- or mesoscopic variables and try to infer from them a compatible microstate. The reliability/accuracy of the inference we make is then captured by a (usually non-delta) distribution function in phase space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Say, a liquid is made to flow in a tube. Why does the layers of the liquid in contact with walls of the pipe have zero velocity? Say, a liquid like water is made to flow in a pipe. Why does the layer of water near the walls of the pipe have zero velocity? Does that mean that the layer of water near the pipe is stationary and the other layers are in motion? How is that even possible?
The article you cited explains it pretty well (unless there is something about it that was never explained to me in chemistry either). This property is a generalization for viscous fluids. For your purposes we will take fluid to mean liquid. Basically, a molecule in a fluid forms weak bonds (see the Van der Waals force) with the other molecules in that fluid, which is why it stays together and doesn't fly away like a gas. The molecules that are next to the solid boundary also form these bonds with the solid (this is why water "sticks" to the side of a cold surface). If the bonds with the solid are stronger than the bonds with the other molecules in the fluid, then the molecules stuck to the surface will not move with respect to the solid. This is most obvious in viscous fluids. I think it happens with non-viscous fluids too; it is just negligible in most cases (no one has ever told me, and I haven't found anything about it).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $\pi$ used when calculating the value of $g$ in pendulum motion? I am trying to intuitively understand why $\pi$ is used when calculating the value of $g$ using the harmonic motion of a pendulum: $$g ~=~\frac{4\pi^2L}{T^2}.$$ Does it have something to do with the curvature? I am thinking something along those lines, as well as the fact that the oscillation of a pendulum would follow a circular path. The squared value of it would be because it was performed in 3D space. I am just looking for a mathematically intuitive understanding of this.
"Does it have something to do with the curvature of the Earth, which is assumed to be spherical?" You'll probably groan when you read this answer since it isn't nearly as complicated as you might think. Essentially, there is a factor of $\pi$ because the angular frequency is $\omega = 2\pi f = \frac{2\pi}{T}$. A well-known result from the linearized pendulum problem is that, for small angular displacements, the angular frequency is $$\omega = \sqrt{\frac{g}{L}}$$ which follows from the differential equation in the angular displacement $\theta$: $$\ddot \theta + \frac{g}{L}\sin \theta = 0, \qquad\text{which for } \theta \ll 1 \text{ becomes}\qquad \ddot \theta + \frac{g}{L}\theta \approx 0.$$ Thus, $$\left(\frac{2\pi}{T}\right)^2 = \frac{g}{L} \Rightarrow g = \frac{4 \pi^2 L}{T^2}$$
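A quick numerical sanity check (the length and period below are made-up "measurements"):

```python
import math

L = 1.00     # pendulum length, metres (assumed measurement)
T = 2.006    # measured period, seconds (assumed measurement)

g = 4 * math.pi**2 * L / T**2
print(g)     # ~9.81 m/s^2
```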
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How do the flow equations relate to the actual situation? This question might seem silly but I'll try to make it clear. It's a question (I think) about partial differential equations systems in general, but since currently I'm studying fluid mechanics I'll ask on that context. The equations for an incompressible flow are: $$\begin{cases}\nabla\cdot\mathbf{u} &= 0 \\ \dfrac{D\rho}{Dt} &= 0\\ \rho \dfrac{D\mathbf{u}}{Dt} +\nabla p - (\lambda+\mu)\nabla(\nabla\cdot \mathbf{u}) - \mu\nabla^2\mathbf{u} &= 0\end{cases}$$ Where we have to find $u_1,u_2,u_3$ the components of $\mathbf{u}$, $\rho$ and $p$. The point is: these are the equations, regardless of the flow under study. So, how does this connect to the real situation I have? For example, in Newtonian Mechanics the equation is $\mathbf{F} = m\mathbf{a}$, but here we know the connection with the problem at hand: we plug the forces there and we solve the equation. Now, in fluid mechanics, if I consider pipe flow, or channel flow, or flow past a sphere, or inside some complicated region $D\subset \mathbb{R}^3$ the equations are just the same. Nothing changes, there's not anything connecting the equations to the problem at hand but we might expect the solutions be quite different on each case. In that case, since there are $4$ equations and $4$ unknowns, it seems the solutions would always have to be the same. So I ask: what connects these equations to the real situation? Is it just the boundary conditions? I believe my problem is that I don't know yet how existence and uniqueness works for partial differential equations systems.
Both boundary conditions and initial conditions matter equally when connecting the model to the real world. Consider the flow around a cylinder that you mention. We know that the Reynolds number, $$ {\rm Re}=\frac{u L}{\nu}\tag{1} $$ can characterize laminar or turbulent flows, depending on the values in (1). Flows at different Reynolds numbers look qualitatively very different. If we initialize the flow with different starting velocities, $u(x,t=0)=u_0$ (keeping the length scale $L$ and kinematic viscosity $\nu$ constant for each simulation), and maintain the steady flow with the boundary condition $u(x=0,t>0)=u_0$, then the two combined (IC + BC) generate different results for different IC+BC pairs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is a suit that hides a soldier's heat signature fundamentally possible? I recently played "Crysis", a game where the protagonist wears a suit that allows the player to hide both himself and his heat signature. Then I watched Iron Man 3, where a kid suggests that Tony Stark should have implemented retro reflection panels in his suit. So I'm thinking, well, as is the nature of things, people are going to be pursuing this sort of thing in real life too, pretty soon. But I'm trying to figure out whether a suit can contain a person's heat signature without emitting the heat somewhere. Is such a thing fundamentally possible to do, without over-heating the person within?
I guess this is possible: for example, thermal radiation can be emitted within a very small solid angle, e.g., upwards.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/142971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 6, "answer_id": 3 }
How can I estimate the density of fog? I'm working on investigating the effect fog has on drag. I have assumed an air density of $1.225 \frac{\text{kg}}{\text{m}^3}$ for dry air, but I don't know what value for density I could assume that would be typical of fog. I can't even reason out whether or not fog is more dense or less dense than dry air: I know that air density is lower at higher humidity since water vapor is less dense than air, but it seems to me that fog should have a higher density than air, since you can observe fog being more dense closer to the ground, and collecting in valleys. What air density is typical of fog?
This is an instrument that measures fog density and has an experimental plot (figure 9). Once you have the relative humidity at the onset of fog, at a given temperature and pressure, one can use known equations to get the density. This link gives a calculator.
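A rough back-of-the-envelope sketch of the "known equations" step (the saturation-pressure fit and the liquid-water-content figure below are typical assumed values, not measurements):

```python
import math

T_c = 10.0       # temperature, deg C (assumed)
p   = 101325.0   # total pressure, Pa (assumed)
LWC = 0.2e-3     # liquid water content of fog, kg/m^3 (typical order 0.05-0.5 g/m^3)

T = T_c + 273.15
R_d, R_v = 287.05, 461.5                                 # gas constants, J/(kg K)
e_s = 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))   # Magnus-type fit, Pa
e = e_s                                                  # saturated air, as in fog

rho_air = (p - e) / (R_d * T) + e / (R_v * T)            # humid air alone, ~1.24 kg/m^3
print(rho_air, rho_air + LWC)                            # droplets add well under 0.1%
```

So the gas phase in fog is marginally less dense than dry air at the same temperature and pressure, and the suspended droplets add back only a tiny amount; fog collecting in valleys is mostly about the cold air it sits in, not the droplets themselves.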
{ "language": "en", "url": "https://physics.stackexchange.com/questions/143106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Unification of the electroweak theory Can the electroweak theory be described by the spontaneous symmetry breaking of $SU(3)$ to $SU(2)\times U(1)$?
It is indeed possible to break $ SU(3) $ to $ SU(2) \times U(1) $. To see that we need to check that $ SU(2) $ and $ U(1) $ are subgroups of $ SU(3) $. Its easy to see that $ SU(2) $ is a subgroup since the first three Gell-mann matrices are given by, \begin{equation} \lambda _i = \left( \begin{array}{cc} \sigma _i & 0 \\ 0 & 0 \end{array} \right) \quad (i = 1 ,2,3) \end{equation} and since these are just the Pauli matrices we know they form a group. Furthermore, it is also well known that there is another Gell-mann matrix that commutes with these $3$, \begin{equation} \lambda _8 \equiv \frac{1}{\sqrt{3}} \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & - 2 \end{array} \right) \end{equation} To break $ SU(3) $ into this subgroup one needs to use a scalar in the adjoint (matrix) representation. This choice will work as long as the VEV of the scalar commutes with the subgroups. To see why this is consider the kinetic term for the scalar, \begin{equation} \mbox{Tr} D ^\mu \Phi ^\dagger D _\mu \Phi = \mbox{Tr} [ \Phi , T ^a ] [ \Phi , T ^b ] A _\mu ^a A ^{ b , \mu } + ... \end{equation} where since $ \Phi $ is in the adjoint representation we have, $ D _\mu \Phi = \partial _\mu \Phi - i A ^a _\mu \left[ \Phi , T ^a \right] $ ($ T ^a $ are the group generators and $ A ^a $ is the vector field). We see that if $ \Phi $ gains a VEV the condition that the vector field, $ A _\mu ^a $ remain massless is that $ T ^a $ commute with the VEV. If a gauge boson remains massless then its gauge symmetry is conserved. With this in mind all we have to do is pick a VEV for the scalar that commutes with our subgroup. This will be the case for the VEV, \begin{equation} \left\langle \Phi \right\rangle = v \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & - 2 \end{array} \right) \end{equation} Note that while one can produce the $ SU(3) \rightarrow SU(2) \times U(1) $ pattern this way, this is insufficient to reproduce the phenomenology of the SM. To do that one would need to fit the SM into multiplets of $ SU(3) $ which you couldn't do without introducing new fields. For more discussion on this point or any of the above I encourage you to look at a treatment of grand unification.
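A quick numerical check of the commutator condition (just a sketch; the overall normalisation of the generators does not matter here):

```python
import numpy as np

Phi = np.diag([1.0, 1.0, -2.0]).astype(complex)   # the VEV, up to the factor v

l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)  # a broken generator

comm = lambda A, B: A @ B - B @ A
for name, T in [("l1", l1), ("l2", l2), ("l3", l3), ("l8", l8), ("l4", l4)]:
    print(name, np.allclose(comm(Phi, T), 0))
# l1, l2, l3, l8 -> True (SU(2)xU(1) stays unbroken); l4 -> False (its boson gets a mass)
```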
{ "language": "en", "url": "https://physics.stackexchange.com/questions/143269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Electric and Magnetic field's phase difference shift in linearly polarized electromagnetic waves I am a high school student and we are currently studying electromagnetic theory. In my textbook I read that the oscillating electric and magnetic fields have a phase difference equal to π/2 rad near the source (for example an antenna), while away from it they agree in phase. Is this true? And if so, why and how does this happen?
Let me explain the near-field phase shift of π/2 by how an antenna works. The antenna is an open electrical "circuit" where an electrical generator pushes and pulls electrons inside the antenna rod. One can imagine the antenna rod as a capacitor with its electric field. For how the electric and the magnetic fields are produced, see here. During each half oscillation of the generator each free electron produces a photon, which propagates as an electromagnetic wave with its electric and magnetic components shifted by π/2. That's not magic; a lot of electrons produce Bremsstrahlung. The wavelength of these photons depends on the material of the rod and other influences too. The wavelength of the radio wave depends on the frequency of the generator and the velocity of light in air. (The right length of the antenna rod is helpful for the antenna efficiency.) More amazing is the fact that, according to Maxwell's equations, this radio wave gets transformed at some distance from the antenna rod into a wave with no phase shift between the electric and magnetic components.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/143361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Switching from sum to integral I'm specifically asking about an equation in An Introduction to Quantum Field Theory, by Peskin and Schroeder. Example from page 374: $$\mathrm{Tr} \log (\partial^2+m^2) = \sum_k \log(-k^2+m^2)$$ $$= (VT)\cdot\int\frac{\mathrm{d}^4k}{(2\pi)^4}\log(-k^2+m^2),\tag{11.71}$$ The factor $VT$ is the four-dimensional volume of the functional integral. Why does this $VT$ show up in equation $(11.71)$?
One may only talk about a discrete sum over $k^\mu$ vectors if all the spacetime directions are compact. In that case, $k^\mu$ is quantized. If the spacetime is a periodic box with periodicities $L_x,L_y,L_z,L_t$, then $V=L_x L_y L_z$ and $T=L_t$. The component $k^\mu$ in such a spacetime is a multiple of $2\pi \hbar / L_\mu$ (I added $\hbar$ to allow any units but please set $\hbar=1$) because $\exp(ik\cdot x / \hbar)$ has to be single valued and it's single-valued if the exponent is a multiple of $2\pi i$. So the total 4-volume in the $k$-space that has one allowed value of $k^\mu$ – one term in the sum – is $(2\pi)^4 /(L_x L_y L_z L_t) = (2\pi)^4 / (VT)$. It means that if one integrates over $\int d^4 k$, one has to divide the integral by this 4-volume, i.e. multiply it by $(VT)/(2\pi)^4$, to get the sum – to guarantee that each 4-dimensional box contributes $1$ as it does when we use the sum. In the limit $L_\mu \to \infty$, the integral divided by the 4-volume of the cell and the sum become the same – via the usual definition of the Riemann integral. I have doubts that the 11th chapter is the first one in which this dictionary between the discrete sums and the integrals is used or discussed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/143467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }