Why singularity in a black hole, and not just "very dense"? Why does there have to be a singularity in a black hole, and not just a very dense lump of matter of finite size? If there's any such thing as granularity of space, couldn't the "singularity" be just the smallest possible size?
Because otherwise general relativity would contradict itself. The event horizon of a black hole is where not even light can escape. Below the horizon all photons must fall. In relativity theory all observers measure the speed of light the same, c; that's a postulate of the theory. Then all physical things (including observers) at and below the horizon must fall and keep falling, lest they measure the speed of light emitted upward to be something other than c. If you could stand on a very dense lump of matter of finite size at the center of a black hole, and pointed a flashlight upward, the photons would somehow have to fall to the ground (without moving upward at all) and you wouldn't measure the speed of light to be c in the upward direction. The theory would be broken. The singularity is the "can't fall further" point and the theory becomes inapplicable there.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/18981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 8, "answer_id": 0 }
What fundamental principles or theories are required by modern physics? We have been taught that the speed of light is insurmountable, but as we know, a recent experiment appeared to show otherwise. If the experiment did turn out to be correct and was confirmed by others, would physics have to be rethought? What other concepts are fundamental to physics which, if disproved, would need radical rethinking? If this sounds too juvenile and/or misinformed, please understand that I am a layman, having nothing professional or academic to do with science directly, and this question is out of curiosity. I have developed a liking for "science stuff" and have been reading popular-science literature lately. This question was also prompted by something Sheldon Cooper said in one of the episodes (I was watching a rerun).
Part of the problem in confirming physics theories with experiments is that we don't know all of it. Most likely, there are unconsidered circumstances and margins of error in the "neutrino experiments". If discrepancies appear in experiments on such well-tested theories, more rigorous testing remains to be done before failure of the theory is considered. Not to mention that any subsequently reworked theory has to reproduce the old theory wherever it was already verified. So, yes: given sufficient evidence to topple a theory, physicists would go through the five stages and eventually have no choice but to yield. Science is about the world, not the ego.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Why is spacetime near a quantum black hole approximately AdS? In this link, one of the answers contains the statement If you examine the space-time near a finite area quantum black hole, you will see an approximate AdS space. Presumably "approximate" means this is only true to some order in the distance from the horizon ? Could someone outline where the result comes from, or provide a reference.
The answer you quoted was invalid in most respects, including this one. The quantum character of a black hole has nothing to do with the AdS geometry; the AdS geometry is the near-horizon geometry of a classical black hole (or black brane). Which black hole? It has to be an extremal black hole, i.e. it must have the maximum value of a charge or angular momentum that is allowed for the given mass (or mass density, in the case of black branes). For a black $p$-brane which is extended in time as well as $p$ additional spatial dimensions, one obtains $AdS_{p+2}$. For example, one may get an $AdS_2\times S^{d-2}$ from the near-horizon geometry of extremal black holes in $d=4$. See the derivations and comments e.g. around pages 57 and 104 in http://arxiv.org/abs/hep-th/9905111 or search "near-horizon [geometry]" in this paper or another introduction to AdS/CFT. Black holes that are non-extremal have a finite volume of the region near the horizon, and quantum phenomena don't change anything about this fact. The extremality is needed for the infinite volume – and AdS spaces have an infinite volume. The derivation of the near-horizon limit of a black hole metric is a purely classical, geometric procedure: one neglects some subleading terms in the metric tensor and only keeps the leading ones (which dominate for a small difference $\Delta R$ from the horizon, relative to the black hole radius), just as you would expect.
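As an illustration of that near-horizon procedure in the simplest case (a standard worked example, not taken from the quoted paper): the extremal Reissner–Nordström black hole in $d=4$ has the metric $$ds^2 = -\left(1-\frac{r_0}{r}\right)^2 dt^2 + \left(1-\frac{r_0}{r}\right)^{-2} dr^2 + r^2\, d\Omega^2.$$ Writing $r = r_0(1+\lambda)$ and keeping only the leading terms for $|\lambda|\ll 1$, so that $1 - r_0/r \approx \lambda$, one gets $$ds^2 \approx -\lambda^2\, dt^2 + \frac{r_0^2}{\lambda^2}\, d\lambda^2 + r_0^2\, d\Omega^2,$$ which is $AdS_2\times S^2$ with both radii equal to $r_0$ (the Bertotti–Robinson geometry); substituting $\lambda = r_0/z$ puts the first two terms into the Poincaré form $\frac{r_0^2}{z^2}(-dt^2+dz^2)$.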
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Why doesn’t gravity break down in a large black hole? By popular theory, gravity didn’t exist at the start of the Big Bang but came into existence some moments later; I think the other forces came into existence a little later still. When a black hole crushes matter to a singularity (infinite density), at some point shouldn’t the forces cease to exist, including gravity? Kent
A black hole is in many ways like a time reversed version of the Big Bang. If there were several stages of symmetry breaking as the Big Bang evolved, then it's certainly possible that an observer falling into a black hole would see the symmetries restored as they approached the singularity. This wouldn't really mean gravity "breaks down" to use your phrase. It would mean that around a Planck time before you hit the singularity gravity would unify with the other forces. What exactly that means no-one knows as there's no accepted theory to describe matter under those conditions. There are some objections to this: for example a black hole (probably) isn't a time reversal of the big bang. The Physics FAQ has a good article on this at http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/universe.html. Also the Weyl curvature is (probably) high as you approach the singularity of a black hole and the Weyl curvature near the Big Bang was (probably) low. The main objection is that this is a somewhat fanciful question and it's currently impossible to put it on any sort of quantitative basis. Fun to chat about over a drink but I suspect you'd struggle to get papers on it accepted anywhere reputable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Have experiments ever suggested two different values to the same divergent series? I believe to have understood that some physical experiments suggest finite values to divergent series (please correct me if I'm wrong, my understanding of these matters is limited). I heard, for example, that the "equality" $$ \sum_{n=1}^{ \infty} n = - \frac{1}{12} $$ was suggested by some experiment conducted by physicists. I was wondering if there are experiments in physics that seem to suggest two or more different values to the same divergent series. If not, why is this the case?
No physical experiment ever predicts the result of a mathematical formula. A physical experiment may determine whether a certain model, described in the language of math, applies to a particular physical phenomenon. That being said, divergent series can come up when working within the mathematical framework of quantum field theory. The values of certain physical quantities, like scattering cross sections, can be expressed in terms of infinite series that diverge unless a mathematical technique called regularization, followed by renormalization, is used to allow the series to have finite sums. The value of the sum may vary depending on a value chosen for a parameter called a coupling constant used in the summation of the series. By comparing to experiments, the value of the coupling constant can be determined.
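The regularized value behind the famous $-\frac{1}{12}$ can be checked symbolically (a sketch using sympy; note this is a statement about the analytic continuation of $\zeta(s)=\sum n^{-s}$, not about ordinary convergence of the series):

```python
from sympy import zeta, Rational

# zeta(s) = sum_{n>=1} n^(-s) converges only for Re(s) > 1, but its
# analytic continuation assigns finite values at negative integers:
print(zeta(-1))   # -1/12, the value behind "1 + 2 + 3 + ... = -1/12"
print(zeta(-3))   # 1/120, the regularized sum that enters the Casimir-force calculation
```

The Casimir effect is the standard experimental setting where such a regularized sum (here $\zeta(-3)$) appears in a measurable prediction.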
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Entropy of a mass arrangement around the earth A thought experiment, taking the entire Earth as an isolated system. This is the initial state: N masses are distributed around the earth, at different heights (for example, a single grain of sand from one building of every city in the world). The heights are not specified and can be chosen randomly, and those different initial dispositions are the microstates $\Omega_1$ compatible with the initial macrostate. Next step: the N masses are released and fall because of gravity. Finally all the N masses lie on the floor; they have all moved closer to each other, and their heights, relative to their reference floors, are all zero. (In a spherical model, zero height means the same radial position for all.) So in the final state there are fewer possible microstates, $\Omega_2<\Omega_1$, for the same macrostate, even though the process was isolated. Why did entropy go down?
An always attractive force, gravity, seems to be antagonistic to an always dissipating "force", entropy; the mere existence of clumps of matter (planets, stars...) again seems to favour the concentration of energy (the opposite of entropy). Where is the trick? (Based on Zephyr/UnbanRonMaimon and Greg P comments) When the masses fall (or become a clump), they convert their potential energy to kinetic energy. In a toy system of particles, there is nowhere for this energy to go, so the masses will never reach a state of rest at the floor. If energy is allowed to be carried away by friction, then entropy will increase. Taking into account the enormous multiplicity of microstates corresponding to the excitations created when the kinetic energy is converted to heat, this overcomes the relative reduction in the height possibilities. The height of each grain of sand is only a single degree of freedom. That tiny number of degrees of freedom does indeed become more ordered, but the zillions of atomic degrees of freedom become more disordered.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are geons unstable? Are there other problems with geons? I read in various places geons are "generally considered unstable." Why? How solid is this reasoning? Is the reason geons are not studied much anymore because we can't make more progress without better GR solutions or a better theory of quantum gravity, or is it because it really is a failed theory with fundamental problems (other than the unproven stability question)?
The stability argument is as follows: the geon system will have some mass, and it is made out of massless fields orbiting in closed orbits. So if you make the geon a little smaller with the same total energy, you expect gravity to win and the massless fields to collapse into a black hole; and if you make the geon a little bigger, you expect the massless stuff to disperse to infinity. This argument is hard to make rigorous, because you need to find a way to rescale the nonlinear gravitational theory. So Wheeler studied this situation extensively, with the hope of finding a stable geon. He didn't find one, and even if there were one, we already have a good model of elementary particles in the black hole solutions and their quantum counterparts, so it is not clear that such a solution would be useful. But it is a strangely neglected field. Perhaps there is an easy argument that establishes instability of all geons, but it is going to be tough, because geons can make arbitrarily complicated links of light beams passing through each other, pulling each other into stable orbits.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Solutions to SHMO equation in Feynman's Thesis I'm reading Feynman's thesis, and have some background in math and physics, but I'm not sure where Feynman gets his solution to his harmonic oscillator equation. He gives three different formulations; the first includes an integral. Is this standard Fourier analysis? Can you give me some kind of link? One of them is $$ x(t) = x(0) \cos \omega t + \dot{x}(0) \frac {\sin\omega t}{\omega} + \frac {1}{m \omega}\int_0^t \gamma (s) \sin \omega (t-s) \, ds $$ where $$\gamma = I_y + I_z,$$ and the $I$'s are functions involving the coordinates of positions $y$ and $z$: $$ m\ddot{x} + m\omega^2 x = I_y + I_z $$
This comes from linearity: the solution of the homogeneous equation, plus the response to a delta-function kick. The homogeneous equation gives the first two terms. The equation $$ \ddot{x} + \omega^2 x = 0 $$ is solved by a combination of cosine and sine which reflect the initial position and velocity: $$ x(t) = x_0 \cos(\omega t) + {v_0\over \omega} \sin(\omega t) $$ You can verify that this is correct by looking near $t=0$ to see that it starts at $x_0$ with velocity $v_0$. Now consider adding a delta-function kick at some time $t_0$: $$ \ddot{x} + \omega^2 x = \delta (t-t_0)$$ You want the solution to this equation which only has influence into the future, meaning that for $t<t_0$, $x(t)=0$. The delta function is an impulsive kick which makes the particle move with velocity 1 at time $t_0+\epsilon$, so that the solution from this point onward is the solution of the free equation with initial velocity 1 starting at $t=t_0$: $$ x(t) = {1\over \omega} \sin(\omega (t-t_0))$$ Now consider the source term $\gamma(t)$ to be a sum of delta functions at each time, each one producing this response. This leads to a total response (by linearity) of $$ x(t) = \int_0^t{\gamma(t')\over \omega} \sin(\omega(t-t'))\, dt' $$ and this integral is the last term in Feynman's solution (there the source is $\gamma/m$, which accounts for his $1/m$ prefactor). The forward-only effect of the sine kernel means that this term doesn't affect the initial conditions, so you just add it to the solution with the given initial conditions to get the general solution. The only things used here are the linearity of the equation and the solution of the homogeneous equation for given initial conditions.
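This construction is easy to check numerically (a sketch; the driving term $\gamma(t)=\sin 3t$, the value $\omega = 2$, and the evaluation time are all arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

omega = 2.0
gamma = lambda t: np.sin(3.0 * t)   # arbitrary driving term

# direct numerical solution of x'' + omega^2 x = gamma(t), x(0) = x'(0) = 0
sol = solve_ivp(lambda t, y: [y[1], -omega**2 * y[0] + gamma(t)],
                (0.0, 5.0), [0.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

# Duhamel / Green's-function integral:
# x(t) = (1/omega) * int_0^t gamma(s) sin(omega (t - s)) ds
t = 4.0
x_green, _ = quad(lambda s: gamma(s) * np.sin(omega * (t - s)), 0.0, t)
x_green /= omega

print(x_green, sol.sol(t)[0])   # the two agree
```

For this particular driving one can also solve in closed form, $x(t) = \tfrac{3}{10}\sin 2t - \tfrac{1}{5}\sin 3t$, and all three numbers coincide.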
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to represent the effect of linking rigid bodies together? I have 2 rigid bodies (b1, b2). If I link one to the other (as if they are conjoined together), how do I represent b1's effect on b2 and b2's effect on b1? Is there any law that governs the position/orientation of the other body? Notes: (1) I am using quaternions for orientations; (2) I don't want to treat them as one body; (3) I have only primitive shapes (box, sphere, ...) to link.
If $\vec{p}$ is the vector connecting the center of mass of b1 to the center of mass of b2, then you must have $$ \vec{v}_2 = \vec{v}_1 + \vec{\omega}_1 \times \vec{p} \\ \vec{\omega}_2 = \vec{\omega}_1 $$ $$ \vec{a}_2 = \vec{a}_1 + \vec{\alpha}_1 \times \vec{p} + \vec{\omega}_1 \times (\vec{\omega}_1 \times \vec{p}) \\ \vec{\alpha}_2 = \vec{\alpha}_1 $$
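A sketch of these relations in code (made-up numbers; angular quantities are kept as 3-vectors rather than quaternions for simplicity):

```python
import numpy as np

def transfer(v1, w1, a1, alpha1, p):
    """Propagate velocity/acceleration of body 1 to a rigidly attached body 2.

    p is the vector from body 1's center of mass to body 2's.
    """
    v2 = v1 + np.cross(w1, p)
    w2 = w1                      # rigid link: same angular velocity
    a2 = a1 + np.cross(alpha1, p) + np.cross(w1, np.cross(w1, p))
    alpha2 = alpha1              # and same angular acceleration
    return v2, w2, a2, alpha2

# example: body 1 at rest but spinning about z, body 2 offset along x
v2, w2, a2, alpha2 = transfer(np.zeros(3), np.array([0.0, 0.0, 2.0]),
                              np.zeros(3), np.zeros(3),
                              np.array([1.0, 0.0, 0.0]))
print(v2)   # [0. 2. 0.]  -- tangential velocity
print(a2)   # [-4. 0. 0.] -- centripetal acceleration toward body 1
```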
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
What is the meaning of the speed of light $c$ in $E=mc^2$? $E=mc^2$ is the famous mass–energy equation of Albert Einstein. I know that it says mass can be converted to energy and vice versa, and that $E$ is energy, $m$ is the mass of a body and $c$ is the speed of light in vacuum. What I don't understand is how the speed of light comes in. The atom bomb is made using this principle of converting mass into energy; the mass is provided by uranium, but where does the speed of light come into play? How is the speed of light involved in an atom bomb?
A previous answer has provided a beautiful explanation of what $c$ represents, and that it is not necessarily related to light. This answer just adds a bit from the nuclear-reaction perspective. You start with a large nucleus, say uranium. Once it splits, it forms two smaller nuclei. The masses of these smaller nuclei put together are less than the mass of the original nucleus. The 'missing mass' is what we use as an energy source: it reappears as kinetic energy of the fragments, in the amount given by Einstein's equation. In that sense, $E=mc^2$ is simply a tool for calculating how much energy we get from the process, and $c^2$ can be thought of as a conversion factor between units of mass and units of energy.
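As a rough numerical illustration (a sketch; the 0.2 u mass defect is a typical order-of-magnitude figure for a single U-235 fission, not a measured value for any specific reaction):

```python
from scipy.constants import c, atomic_mass, eV

delta_m = 0.2 * atomic_mass    # assumed 'missing mass' per fission, in kg
E = delta_m * c**2             # this is where c enters: it converts kg to joules
print(E)                       # about 3e-11 J ...
print(E / (1e6 * eV))          # ... i.e. roughly 190 MeV per fission
```

The enormous size of $c^2$ is why a tiny mass defect per nucleus, multiplied over the ~$10^{24}$ nuclei in a kilogram of fuel, yields such a large energy release.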
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 4 }
Where to find cross section data for $e^{-}$ + $p$ $\longrightarrow$ $p$ + $e^{-}$? Where to find cross section data for $e^{-}$ + $p$ $\longrightarrow$ $p$ + $e^{-}$ ? PDG's cross-section data listing does not include it.
You can search the HEPDATA database at http://www.slac.stanford.edu/spires/hepdata/ with the query string [reac = e- p --> e- p], and the first result will be: "Jefferson Lab. Measurement of the elastic electron-proton cross section in the $Q^2$ range from 0.4 to 5.5 GeV$^2$"
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed electric field lines in wave guides In a wave guide, plots of propagating transverse magnetic (TM) modes show closed field lines for the electric field. For example, for a rectangular guide: $E_x (x,y,z) = \frac {-j\beta m \pi}{a k^2_c} B_{mn}\cos\frac{m\pi x}{a}\sin\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ $E_y (x,y,z) = \frac {-j\beta n \pi}{b k^2_c} B_{mn}\sin\frac{m\pi x}{a}\cos\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ $E_z (x,y,z) = B_{mn}\sin \frac{m\pi x}{a}\sin\frac{n\pi y}{b}e^{-j(\beta z + \omega t)}$ Is it possible to have closed lines for the electric field?
Yes, it is possible. Maxwell's equations say $$ \oint_l \vec{E}\cdot d\vec{l} = -\frac{1}{c}\int_{S(l)}\frac{\partial \vec{B}}{\partial t} \cdot d\vec{S}. $$ The circulation of the electric field around a closed line is proportional to the rate of change of the magnetic flux through it. There is no problem with energy conservation: an electron moving along such a closed line is accelerated, but it consumes the energy we spend keeping the field changing. This effect is used in some particle accelerators, while the reverse effect is used in most microwave sources, where the electromagnetic wave consumes the kinetic energy of moving electrons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/19972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How Does Dark Matter Form Lumps? As far as we know, the particles of dark matter can interact with each other only by gravitation. No electromagnetics, no weak force, no strong force. So, let's suppose a local slight concentration of dark matter comes about by chance motions and begins to gravitate. The particles would fall "inward" towards the center of the concentration. However, with no interaction to dissipate angular momentum, they would just orbit the center of the concentration and fly right back out to the vicinity of where they started resulting in no increase in density. Random motions would eventually wipe out the slight local concentration and we are left with a uniform distribution again. How does dark matter form lumps?
I suggest dark matter loses energy by proxy through gravitational interactions with ordinary matter that is losing energy through radiative processes. As ordinary matter loses energy by radiation, allowing it to clump gravitationally, dark matter clumps along with it by losing energy to the cooling ordinary matter.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 2 }
Why does electron-positron annihilation prefer to emit photons? If gravitons are also massless, and neutrinos nearly so, why aren't pairs of either of them normally expected outcomes of electron-positron annihilations? Are they possible but simply unlikely, or is there actually some conserved quantity prohibiting their creation? Edit: I'm talking about the low-energy limit, not in accelerator beam collisions.
It's possible, just very unlikely. You can get a clue of the relevant probabilities by looking at the Feynman diagrams for different kinds of $e^+e^-$ annihilation. Here's $e^+e^-\to\gamma\gamma$: The probability of this occurring (actually, the cross section) is proportional to a factor of $g_\text{EM}$ for each vertex. $g_\text{EM}$ is the electromagnetic coupling, which has a value of about 0.3. So the probability of the entire process can be represented as proportional to $\alpha_\text{EM} = \frac{g_\text{EM}^2}{4\pi} \approx \frac{1}{137}$. For neutrino production, on the other hand, the simplest Feynman diagram is this: The probability of this is proportional to two factors of the weak coupling, $g_\text{weak}$, and $\alpha_\text{weak} = \frac{g_\text{weak}^2}{4\pi} \approx 10^{-6}$ (source). So this process is on the order of 10000 times less likely than the annihilation into photons. (In fact, it's actually even less likely than that, because at low energies, as akhmeteli pointed out, the probability is further suppressed by a factor of $m_W^{-2}$, where $m_W$ is the relatively large mass of the W boson.) Gravity is an even weaker force, so we would expect the corresponding diagram for annihilation into gravitons to be much less probable. You can estimate that $\alpha_\text{gravity} \approx 10^{-39}$. But in this case, it's not even clear how well Feynman diagrams describe the process at all, since we don't have a proper quantum theory of gravity.
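Putting the quoted numbers together (a sketch using only the rough coupling values from this answer, and ignoring the additional $m_W^{-2}$ suppression mentioned above):

```python
# rough comparison of annihilation channels from the coupling strengths above
alpha_em = 1 / 137          # electromagnetic
alpha_weak = 1e-6           # weak (order of magnitude)
alpha_gravity = 1e-39       # gravitational (order of magnitude)

print(alpha_em / alpha_weak)      # ~1e4: photon pairs vastly favoured over neutrino pairs
print(alpha_em / alpha_gravity)   # ~1e37: graviton pairs are hopelessly suppressed
```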
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How are the Pauli matrices for the electron spin derived? Could you explain how to derive the Pauli matrices? $$\sigma_1 = \sigma_x = \begin{pmatrix} 0&1 \\ 1&0 \end{pmatrix}\,, \qquad \sigma_2 = \sigma_y = \begin{pmatrix} 0&-i\\ i&0 \end{pmatrix}\,, \qquad \sigma_3 = \sigma_z = \begin{pmatrix} 1&0\\0&-1 \end{pmatrix} $$ Maybe you can also link to an easy to follow tutorial ?
That certainly depends on what exactly you mean. I take your question as "how do you see that the (non-relativistic) electron spin (or more generally, Spin-1/2) is described by the Pauli matrices?" Well, to start, we know that measuring the electron spin can only result in one of two values. From this we see that we need matrices of at least dimension 2. The simplest choice is then of course exactly dimension 2. Moreover the spin is an angular momentum, and thus described by three operators obeying the angular momentum algebra: $[L_i, L_j] = \mathrm{i} \hbar\epsilon_{ijk} L_k$. This together with matrix dimension 2 basically restricts the choice to sets of three matrices which are equivalent to $\hbar/2$ times the Pauli matrices (the freedom of choice of those matrices corresponds to the freedom to use three arbitrary orthogonal directions as $x$, $y$ and $z$ direction). So now why choose from those equivalent choices exactly the Pauli matrices? Well, there's always one measurement direction which is represented by a diagonal matrix; this makes calculations much easier. Of course it makes sense to choose the matrices in a way that this direction is one of the coordinate directions. By convention, the $z$ direction is chosen. This ultimately fixes the matrices.
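One can check numerically that $\hbar/2$ times the Pauli matrices really do obey the angular-momentum algebra (a sketch in units $\hbar = 1$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

S = [s / 2 for s in (sx, sy, sz)]   # spin operators, hbar = 1

def comm(a, b):
    return a @ b - b @ a

# [S_i, S_j] = i eps_ijk S_k, e.g. [S_x, S_y] = i S_z
print(np.allclose(comm(S[0], S[1]), 1j * S[2]))   # True
print(np.allclose(comm(S[1], S[2]), 1j * S[0]))   # True
# and S^2 = s(s+1) * identity with s = 1/2
print(np.allclose(sum(s @ s for s in S), 0.75 * np.eye(2)))   # True
```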
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 4 }
How does one pronounce this particle's name? How would you read the following particles' names in a conversation in English? I am looking for some "proper" way of doing it. Say, imagine you are reading a technical description in a semi-formal occasion that you would like to avoid being lousy or overly simplistic. $$\Delta(1750)^0 P_{31}$$ $$\bar\Delta(1910)^0 P_{31}$$ $$\Delta(1910)^- P_{31}$$ [EDIT] One additional question, would you write $\Delta^0(1750) P_{31}$ or $\Delta(1750)^0 P_{31}$ ?
For questions about resonances and particles the Particle Data Group is the best reference. One can find the whole Delta resonance family there and remind oneself what each number stands for, and thus how to pronounce the symbol. The number in parentheses is the mass in MeV. The superscript is the charge of the particular resonance. S, P, D, ... are by convention the labels of the orbital angular momentum quantum number $L$, and the two following digits are the numerators of the isospin and of the total angular momentum quantum number $J$ (both in units of one half). So the first one is read as: Delta zero seventeen fifty, Pee three one (or Pee three-halves one-half); and so on. (The parity superscript is missing from your information.) The bar over a symbol denotes an antiparticle: anti-Delta(1910) zero in the second line. I would put the charge next to the main symbol, your first option, but the other way is clear also. For similar questions the naming scheme for hadrons is a help in comprehension as well as pronunciation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is Planck's law defined? Now, I found three different definitions of Planck's law: $$ P_1(\nu,T) = \frac{8 \pi}{c}\frac{h \nu^{3}}{c^2} \frac{1}{e^{h\nu/kT}-1} $$ $$ P_2(\nu,T) = 2\frac{h \nu^{3}}{c^2} \frac{1}{e^{h\nu/kT}-1} $$ $$ P_3(\nu,T) = \frac{h \nu^{3}}{c^2} \frac{1}{e^{h\nu/kT}-1} $$ Which of these is correct and will give me the radiated energy for a given temperature and a given frequency?
Your second equation, $P(\nu,T) = \frac{2 h {\nu}^3}{c^2}$ $\frac{1}{\exp\bigl(\frac{h \nu}{kT}\bigr) - 1}$ is what is commonly referred to as Planck's law for radiation, although a more standard symbol used is $B_\nu(T)$. This is the energy radiated per time, per area, per frequency interval, per steradian. It is a formula for the 'specific intensity' of a source, which intuitively is the energy flux along a ray of radiation in a given direction, and so you must normalize by the solid angle subtended by that ray. To get the total energy per time per area radiated by a patch of a black body, integrate over solid angle and over frequency. Be careful performing the solid angle integral, however, because you must include the geometric factor $\cos \theta$ that accounts for the projected area of the patch ($\theta = 0$ corresponds to a ray emitted in the normal direction). Rays leaving one side of a patch can only be directed into the upper hemisphere of the solid angle sphere. So the solid angle integral looks like this: $$ F_\nu = 2 \pi \int_0^{\pi/2} B_\nu (\theta)\, \cos\theta \, \sin \theta \, d \theta$$ The $2 \pi$ out in front is for the azimuthal angle. Here, $F_\nu$ is what is commonly referred to as the specific flux ('specific' because it's still per unit frequency interval). Then, either by reading up on the Riemann $\zeta$ function, or just using a computer to tell you the answer, you can perform the frequency integral and get $$ F = \sigma \, T^4$$ Here $F$ is what we commonly think of as the flux (energy per area per time), and $\sigma$ is the Stefan-Boltzmann constant, $$\sigma \equiv \frac{2 \pi^5 \, k_\mathrm{B}^4}{15 \, h^3 \, c^2}$$
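The chain from $B_\nu$ to the Stefan–Boltzmann law can be checked numerically (a sketch; $T = 5000$ K and the frequency cutoff are arbitrary choices):

```python
import numpy as np
from scipy.constants import h, c, k, sigma
from scipy.integrate import quad

T = 5000.0   # arbitrary temperature, K

def B_nu(nu):
    """Planck specific intensity B_nu(T), in W m^-2 Hz^-1 sr^-1."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

# The solid-angle integral of cos(theta) over the outward hemisphere gives a
# factor pi, so the flux is F = pi * (integral of B_nu over all frequencies).
F, _ = quad(B_nu, 0, 3e15, points=[3e14])   # integrand negligible beyond ~3e15 Hz at 5000 K
F *= np.pi

print(F, sigma * T**4)   # both ~3.5e7 W/m^2: F = sigma T^4 as claimed
```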
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Calculating Lagrangian density from first principles In most field theory texts, the authors start with the Lagrangian density for spin-1 and spin-1/2 particles, but I could not find any text where this Lagrangian density is derived from first principles.
The "first principle" for any Lagrangian is the corresponding equation. If you advance, for any particular reason, an equation, you may construct its Lagrangian knowing the structure of the Lagrange equations:$$\frac{d}{dt}\frac{\partial L}{\partial \dot {\phi}}=\frac{\partial L}{\partial {\phi}}$$
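For example, sympy can recover an equation of motion from a given Lagrangian (a sketch; the harmonic-oscillator Lagrangian and the symbol names are my own choices, not from the question):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, omega = sp.symbols('t m omega', positive=True)
phi = sp.Function('phi')

# harmonic-oscillator Lagrangian: L = (m/2) phi_dot^2 - (m omega^2 / 2) phi^2
L = m * phi(t).diff(t)**2 / 2 - m * omega**2 * phi(t)**2 / 2

# euler_equations applies d/dt (dL/dphi_dot) = dL/dphi
eq = euler_equations(L, [phi(t)], [t])[0]
print(eq)   # equivalent to m*phi'' + m*omega^2*phi = 0
```

Running the Lagrange machinery "backwards" like this is exactly the answer's point: the Lagrangian is constructed so that its Euler–Lagrange equation is the equation you started from.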
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Is a world with constant/decreasing entropy theoretically impossible? We can imagine many changes to the laws of physics - you could scrap all of electromagnetism, gravity could be an inverse-cubed law, even the first law of thermodynamics could hypothetically be broken - we've all imagined perpetual motion machines at one time or another. However, the second law of thermodynamics seems somehow more 'emergent'. It just springs out of the nature of our universe - the effectively random movement of physical objects over time. Provided you have a universe whose state changes over time according to some set of laws, it seems like the second law must be upheld: things must gradually settle down into the state of greatest disorder. What I'm particularly wondering is whether you can prove this in any sense (perhaps using methods from statistical mechanics). Or is it possible to construct a set of laws (preferably similar to our own) which would give us a universe that could break the second law?
The microscopic laws are reversible in time (if you also change chirality and the sign of all charges). Thus one cannot prove what you'd like to prove. Statistical mechanics, which is the discipline in which one derives the second law from microphysics, always makes one or the other assumption that induces the direction of time actually observed in our universe: That entropy increases (unless the whole world is in equilibrium, which it currently isn't). However, you could run the whole universe backward, and it would satisfy precisely the same microscopic laws (if you also change chirality and the sign of all charges). But entropy would decrease rather than increase. I don't think your friend would like to live in such a world.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 5, "answer_id": 0 }
How is the gradient the maximum rate of change of a function? Recently I read a book which discussed the gradient. It says $${\rm d}T~=~ \nabla T \cdot {\rm d}{\bf r},$$ and suddenly concludes that $\nabla T$ gives the maximum rate of change of $T$, where $T$ stands for temperature. I did not understand this. How is the gradient the maximum rate of change of a function? Please explain with pictures if possible.
Have a look at http://en.wikipedia.org/wiki/Del. Del, or $\nabla$, generalises the derivative to more than one dimension; in one dimension $\nabla T$ reduces to the ordinary derivative ${\rm d}T/{\rm d}x$. As for the maximum rate of change: write the dot product as ${\rm d}T = |\nabla T|\,|{\rm d}{\bf r}|\cos\theta$, where $\theta$ is the angle between $\nabla T$ and the step ${\rm d}{\bf r}$. For a step of fixed length, ${\rm d}T$ is largest when $\cos\theta = 1$, i.e. when you step along $\nabla T$, and the rate of change in that direction is $|\nabla T|$. In every other direction the rate is smaller by the factor $\cos\theta$.
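The claim is that among all unit directions $\hat u$, the directional derivative $\nabla T\cdot\hat u$ is largest when $\hat u$ points along $\nabla T$. A quick numerical check (a sketch with an arbitrary choice of $T$ and evaluation point):

```python
import numpy as np

# T(x, y) = x^2 + 3y evaluated at the point (1, 2): the gradient there is (2, 3)
grad_T = np.array([2.0, 3.0])

# directional derivative dT/ds = grad_T . u for many unit directions u
angles = np.linspace(0, 2 * np.pi, 3601)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
rates = dirs @ grad_T

best = dirs[np.argmax(rates)]
print(best)                                   # points along grad_T / |grad_T|
print(rates.max(), np.linalg.norm(grad_T))    # maximum rate equals |grad_T|
```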
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 1 }
Can a photon be emitted with a wavelength > 299,792,458 meters, and would this violate c? Just curious if the possibility exists (not necessarily spontaneously) for a photon with a wavelength greater than the distance component of c to be emitted, and would this inherently violate the scalar c?
See http://en.wikipedia.org/wiki/Ultra_low_frequency EM waves with frequencies below 1 Hz, and therefore with wavelengths longer than 299,792,458 m (the numerical value of c), can be observed in nature. This does not violate relativity, since those waves still propagate with velocity c (in vacuum).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How do I find average temperature given a temperature distribution? I was told to find the temperature distribution of a wire with a current going through it. So I found $$T(x)=T_{\infty}-\frac{\dot{q}}{km^{2}}[\frac{cosh(mx)}{cosh(mL)}-1]$$ I need to find the average temperature in the wire using this formula. I know I have to use some integral but I can't remember the formula. If someone could just give me the formula then I could probably integrate it myself.
The average of any quantity $s$ over $n$ samples is $\frac{\sum\limits_{r=0}^ns_r}{n}$. If the distribution is continuous, say as a function of $x$, then it becomes $\lim\limits_{n\to\infty}\frac{\sum\limits_{r=0}^ns_r}{n}$. This can be rewritten as $\frac{\int s(x)dx}{\int dx}$, with the integrals taken over the length of the wire. (In an earlier version of the question there was no $x$ dependence on the RHS, so nothing could be done; the formula has since been updated.) So the final formula is $$\frac{\int T(x)dx}{\int dx}$$ Update: with the updated formula, assuming the wire spans from $x=0$ to $x=L$, $$\langle T\rangle=T_\infty- \frac{\dot{q}}{km^2}\left(\frac{\tanh(mL)}{mL}-1\right)$$ If the wire spans from 0 to $y$, $$\langle T\rangle=T_\infty- \frac{\dot{q}}{km^2}\left(\frac{\sinh(my)}{my\cosh(my)}-1\right)$$ Taking $y\to\infty$, the bracketed term tends to $-1$, so $\langle T\rangle\to T_\infty+\frac{\dot{q}}{km^2}$, which remains finite.
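As a sanity check of the averaged formula, here is a numerical comparison with made-up parameter values (illustrative only, not from the original problem):

```python
import math

# Illustrative parameters (not from the original problem).
T_inf, qdot, k, m, L = 300.0, 1.0e7, 50.0, 20.0, 0.1

def T(x):
    return T_inf - qdot/(k*m**2) * (math.cosh(m*x)/math.cosh(m*L) - 1)

# Numerical average over [0, L] by the trapezoid rule.
N = 100000
xs = [i*L/N for i in range(N + 1)]
num_avg = (sum(T(x) for x in xs) - 0.5*(T(0) + T(L))) * (L/N) / L

# Analytic average from the formula above.
ana_avg = T_inf - qdot/(k*m**2) * (math.tanh(m*L)/(m*L) - 1)
print(num_avg, ana_avg)
```

The two agree to high precision, confirming the $\tanh$ form of the average.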
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do we use only nonrelativistic equations in nuclear physics? What is the limit between relativistic and non-relativistic equations? Under which conditions do we have to use each of them?
It is useful in beam-based experimental nuclear physics (as opposed to the nuclear power context that Zassounotsukushi discusses) to use energies up to a few GeV. At those energies electrons are highly relativistic, and nucleons are fast enough that one has to treat them relativistically, but heavy nuclei are generally still Newtonian. Jefferson Lab, for instance, does a fair bit of plain old nuclear physics as well as the non-perturbative particle physics that CEBAF was designed for. CEBAF---the Continuous Electron Beam Accelerator Facility---is the big accelerator there and runs at energies up to 6 GeV (with a 12 GeV upgrade in progress). There is also a high-power free-electron laser.
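To put rough numbers on "highly relativistic" vs "still Newtonian", here is a back-of-envelope sketch; the 6 GeV figure matches CEBAF, but the 1 GeV kinetic energies for the proton and lead nucleus are illustrative choices:

```python
# Rest energies in GeV (approximate)
m_e, m_p, u = 0.000511, 0.938, 0.9315  # electron, proton, atomic mass unit

def gamma_from_ke(ke_gev, rest_gev):
    """Lorentz factor for a particle of given kinetic energy."""
    return 1.0 + ke_gev/rest_gev

def beta(g):
    """Speed as a fraction of c, from the Lorentz factor."""
    return (1.0 - 1.0/g**2) ** 0.5

g_e  = gamma_from_ke(6.0, m_e)        # 6 GeV electron (CEBAF scale)
g_p  = gamma_from_ke(1.0, m_p)        # 1 GeV kinetic energy proton
g_pb = gamma_from_ke(1.0, 208*u)      # 1 GeV kinetic energy lead nucleus (A=208)
print(g_e, beta(g_p), beta(g_pb))
```

The electron's Lorentz factor is in the thousands (ultra-relativistic), the proton moves at a large fraction of c (relativistic corrections required), while the heavy nucleus stays at roughly a tenth of c (Newtonian to good accuracy).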
{ "language": "en", "url": "https://physics.stackexchange.com/questions/20958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
17 Joules of Energy From a Mouse Trap Do you think it would be possible to get 17 joules out of a standard size mouse trap? By my math, the spring has a torsion coefficient of about 3.45.
I haven't used a mousetrap for several decades, but as I recall the moving arm is about 5cm long, so the tip moves 0.05$\pi$ or about 0.16m. To get 17J of work the force at the tip of the arm would need to be 100N. I'm fairly sure the force isn't anything like that great. I remember being able to pull the arm back with one finger. I would guess the force is nearer 10N, so you'd only get around 2J out.
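For what it's worth, the 17 J figure in the question looks like the stored energy of a torsion spring, $E=\frac12\kappa\theta^2$, if the quoted coefficient 3.45 is in N·m/rad (an assumption) and the arm swings through about 180°. A quick check, using the 5 cm arm length estimated above:

```python
import math

kappa = 3.45          # torsion coefficient from the question, assumed N*m/rad
theta = math.pi       # arm swings through ~180 degrees
arm   = 0.05          # arm length in metres, as estimated above

E = 0.5 * kappa * theta**2   # energy stored in a torsion spring
arc = arm * theta            # distance the tip of the arm travels
F_avg = E / arc              # average tip force needed to store 17 J
print(E, arc, F_avg)
```

This reproduces the ~17 J stored-energy claim and the ~100 N average tip force from the answer - which is indeed far more than the ~10 N you can apply with one finger.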
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 0 }
Riemann Tensor Calculation trick (number of elements) When we calculate the Riemann tensor for different geometries we have lots of components. However, many components are zero. How can we argue, based on the symmetries of the connection, that those elements are zero? For example, if I am calculating the Riemann tensor of the $S^2$ sphere, I get only one non-zero component, i.e. $R^{\theta}{}_{\phi\theta\phi} = \sin^2{\theta}$, and other components are zero. So how can I argue, without calculating, that all other components are zero? Edit: (Dimension, No. of independent Riemann components) = (2,1; 3,6; 4,20)
The number of independent components for the Riemann curvature tensor $R_{ijk\ell}$ for the Levi-Civita connection is greatly reduced because of symmetries. The last two indices $k\neq \ell$ have to be different, because of antisymmetry $$R_{ijk\ell}~=~-R_{ij\ell k}.$$ Interchange symmetry $$R_{ijk\ell}~=~R_{k\ell ij}$$ then fixes the first two indices $i\neq j$ to be different as well. In two dimensions, if the metric $g_{ij}$ is diagonal, then there is essentially only one non-zero possibility for $R_{ijk\ell}$ and $R^i{}_{jk\ell}$ up to symmetries.
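The resulting count of independent components, $n^2(n^2-1)/12$, reproduces the numbers quoted in the question's edit:

```python
def riemann_independent_components(n):
    """Number of algebraically independent components of the Riemann
    tensor in n dimensions, after the pair antisymmetries, the
    interchange symmetry and the first Bianchi identity are imposed."""
    return n**2 * (n**2 - 1) // 12

counts = {n: riemann_independent_components(n) for n in (2, 3, 4)}
print(counts)  # {2: 1, 3: 6, 4: 20}
```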
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What's the difference between "boundary value problems" and "initial value problems"? Mathematically speaking, is there any essential difference between initial value problems and boundary value problems? The specification of the values of a function $f$ and the "velocities" $\frac{\partial f}{\partial t}$ at an initial time $t=0$ can also be seen, I think, as the specification of boundary values, since the boundaries of the variable $t$ are, usually, at $t=0$ and $t<\infty$.
When there is only one spatial variable then mathematically the two are indistinguishable. But often boundary value problems are solved over a higher dimensional domain. For example, a common problem in physics is to solve Laplace's equation over a spatial region of three dimensions, with a two dimensional surface providing the boundary conditions. If the boundary condition specifies the value of the solution on the surface, then it is called a Dirichlet boundary condition. However, sometimes the boundary condition specifies the normal derivative of the solution at the surface, and then it is called a Neumann boundary condition. Boundary value problems over multi-dimensional domains are necessarily tied to partial differential equations rather than ordinary differential equations, and so they are more complicated than ordinary differential equations with a single initial value specified.
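To make the distinction concrete in the one-variable case: an IVP can be integrated forward directly, while a two-point BVP needs something like a shooting method, which wraps an IVP solver inside a root-finder. A minimal sketch (the problem is chosen for illustration: $y''=-y$, $y(0)=0$, $y(\pi/2)=1$, whose exact solution is $y=\sin x$ with $y'(0)=1$):

```python
import math

# BVP: y'' = -y with y(0) = 0 and y(pi/2) = 1 (exact solution: y = sin x).
# Shooting method: guess y'(0), integrate the IVP, adjust until y(pi/2) = 1.

def integrate(slope0, t_end=math.pi/2, n=1000):
    """RK4 integration of y'' = -y from y(0)=0, y'(0)=slope0; returns y(t_end)."""
    h = t_end / n
    y, v = 0.0, slope0
    def f(y, v):  # state derivative: (y, v)' = (v, -y)
        return v, -y
    for _ in range(n):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = f(y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = f(y + h*k3y, v + h*k3v)
        y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return y

# Bisection on the initial slope so that y(pi/2) = 1.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if integrate(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = (lo + hi) / 2
print(slope)  # should approach 1.0, since y = sin x has y'(0) = 1
```

The root-finder converges on $y'(0)=1$, recovering the exact solution.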
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What conditions must be met for a ball to roll perfectly down an incline without slipping? What conditions must be met for a ball to roll perfectly down an incline without slipping? A mathematically rigorous definition, please. I honestly don't know where to begin with answering this problem.
The formula is $$\mu_s \geq \frac{\tan\theta}{1+\frac{r^2}{k^2}}$$ where $\mu_s$ is the static friction coefficient for the ball-incline interface, $\theta$ is the angle of the incline, and $k$ is the radius of gyration of the ball (for a solid uniform spherical ball, $k=R\sqrt{\frac{2}{5}}$, so the condition becomes $\mu_s \geq \frac{2}{7}\tan\theta$). $R$ is the radius of the ball. Note that $\mu_s$ is dimensionless, so no factor of $g$ can appear in the condition. If you have a more complicated body, $R$ will be the radius of the circular surface that is rolling (this comes into play if you have a spool rolling down an incline). This formula is only applicable when the center of mass of the body is at the center of the rolling circle. Where did I get this formula? I first assumed the friction to be $f$. Now, I calculated the acceleration using Newton's laws, and I similarly calculated angular acceleration through torque. Using $a=\alpha r$, I got a value for $f$. Now I just had to set its upper bound, i.e. $\mu_s N$ ($N$ is the normal reaction force, denoted by $R$ by some). If you do not understand the explanation, read up a bit on rolling dynamics as @Vineet suggested.
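A small check of the no-slip condition (note that $\mu_s$ is dimensionless, so the condition must be too; carrying the force/torque balance through gives $\mu_s \ge \tan\theta/(1+r^2/k^2)$, which is $\frac{2}{7}\tan\theta$ for a solid uniform sphere):

```python
import math

def mu_min(theta_deg, k2_over_r2):
    """Minimum static friction coefficient for rolling without slipping
    down an incline, from a = g sin(theta)/(1 + k^2/r^2) together with
    f = (k^2/r^2) m a and the no-slip constraint a = alpha * r."""
    t = math.tan(math.radians(theta_deg))
    return t / (1.0 + 1.0/k2_over_r2)

# Solid uniform sphere: k^2/r^2 = 2/5, so mu_min = (2/7) tan(theta).
print(mu_min(30.0, 2/5))
```

For a 30° incline and a solid sphere this gives $\mu_s \approx 0.165$; steeper inclines need proportionally more friction.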
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the longest time ever achieved at holding light in a closed volume? What is the longest time for which light has been held in a closed volume with mirrored walls? I would be most interested in results with an empty volume, but results with a solid-state volume may also be interesting.
The lifetime of a photon in a resonant cavity is pretty trivial to compute, given the cavity length, internal losses, and mirror reflectivity. Switching momentarily to a wave description, we will let $L$ be the cavity length, $R_1$ and $R_2$ be the reflectivity of mirrors 1 and 2 respectively, and $T_i$ be the loss in the cavity medium. Clearly the intensity of a pulse of light in the cavity will exponentially decay, and the lifetime $\tau_c$ (defined by the $1 \over e$ threshold) can trivially be computed to be $$ \tau_c = -\frac{2 L}{c \ln{\left[R_1 R_2 (1-T_i)^2\right]}} $$ For a one meter cavity with no internal loss and 99% reflective mirrors, this gives a lifetime of roughly 330 ns. There are much longer cavities, and the reflectivity of dielectric mirrors can have quite a few more "9"s tacked on. For example, the LIGO cavity is something on the order of a kilometer, and if we pretend that the mirrors are 99.999% reflective (that's three "9"s after the decimal place) 1 we get a lifetime of 0.333 seconds (wow). Lifetime increases rapidly with mirror reflectivity once you get above 99%, so you'll see that if you repeat that calculation with $R=99.9999\%$, you get $\tau_c = 3.333$ seconds. That's an absurdly long time, but of course that fourth "9" after the decimal place is really starting to get unrealistic as well. 1: This is very much an order-of-magnitude guess. I'm not sure of the exact length of the LIGO cavity, and in fact the mirrors are not highly reflective because they are doing a trick called "Power Recycling," which ends up giving them a longer photon lifetime anyway. However, 99.999% is an impressive, although NOT an unrealistic number for a modern high quality dielectric mirror.
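The formula is easy to evaluate directly; a short sketch reproducing the two lifetimes quoted above:

```python
import math

c = 299792458.0  # speed of light, m/s

def cavity_lifetime(L, R1, R2, Ti=0.0):
    """1/e intensity lifetime of light in a two-mirror cavity of length L."""
    return -2*L / (c * math.log(R1 * R2 * (1 - Ti)**2))

tau_short = cavity_lifetime(1.0, 0.99, 0.99)           # 1 m cavity, 99% mirrors
tau_ligo  = cavity_lifetime(1000.0, 0.99999, 0.99999)  # ~1 km cavity, five nines
print(tau_short, tau_ligo)
```

This gives roughly 330 ns for the table-top cavity and 0.33 s for the kilometre-scale one, matching the figures in the answer.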
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 0 }
What determines color -- wavelength or frequency? What determines the color of light -- is it the wavelength of the light or the frequency? (i.e. If you put light through a medium other than air, in order to keep its color the same, which one would you need to keep constant: the wavelength or the frequency?)
As FrankH said, it's actually energy that determines color. The reason, in summary, is that color is a psychological phenomenon that the brain constructs based on the signals it receives from cone cells on the eye's retina. Those signals, in turn, are generated when photons interact with proteins called photopsins. The proteins have different energy levels corresponding to different configurations, and when a photon interacts with a photopsin, it is the photon's energy that determines what transition between energy levels takes place, and thus the strength of the electrical signal gets sent to the brain. Side note: I posted a pretty detailed but underappreciated (at least, I thought so) answer to a very similar question on reddit a few days ago. I could edit it in here if you find it useful.
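A quick numeric illustration of why energy (or equivalently frequency) is the invariant quantity: when light enters a medium, its frequency and photon energy are unchanged, while the wavelength shrinks by the refractive index. The values below are illustrative:

```python
# Frequency and photon energy are set by the source and do not change when
# light enters a medium; the wavelength does (lambda_medium = lambda_vac / n).
h = 6.62607015e-34   # Planck constant, J*s
c = 299792458.0      # speed of light in vacuum, m/s

lam_vac = 500e-9     # green light in vacuum
n_water = 1.33       # refractive index of water

f = c / lam_vac                  # frequency: unchanged in the medium
E = h * f                        # photon energy: unchanged in the medium
lam_water = lam_vac / n_water    # wavelength shrinks in water
print(f, E, lam_water * 1e9)
```

The 500 nm light keeps its ~4×10⁻¹⁹ J photon energy in water, even though its wavelength drops to about 376 nm - and it still looks green.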
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76", "answer_count": 11, "answer_id": 6 }
Does sending data down a fiber optic cable take longer if the cable is bent? Ok, so, my simplified understanding of fiber optics is that light is sent down the cable and it rebounds off the sides to end up at its destination. Which got me thinking, if it has to bounce more times (and having a shorter travel between each bounce), does the light (data) take longer to get to the other end of the cable? Like this: http://i.imgur.com/pCHUf.jpg A part of me is saying no, because it's still the same distance to travel and bouncing doesn't take up any time, but another part of me is saying yes because the light will have further to travel the more times it bounces, and thus will take longer to get to its destination. I'm swaying towards it taking more time. Thanks!
One of the main reasons fiber optics is a great engineering tool to distribute light is because virtually all of the light is internally reflected. Meaning there is very little light lost from transmission out of the cylinder. The main reason for this is because fiber optic cables are small. Larger optical fibers cause more losses to transmission because the incident angle is closer to the normal of the surface. We all know optical fibers are essentially completely transparent (unlike standard glass), so this also means that if the strand is considered infinitely long in relation to its radius, the only light that won't be lost will be considered parallel to the cylinder axis. So because optical fibers have a small radius and are "infinitely long", the light traveling inside is considered parallel to the cylinder axis. And since bending a strand doesn't change the strand's length, there is no effect on the time. If, however, the radius of the strand isn't negligible in relation to its length, all hypotheses fail and the light travel time may vary. However, in such a case, bending the strand isn't a trivial matter. I believe the strand would likely break before light travel distance varies enough to even consider taking it into account.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Dielectric in Parallel Plate Capacitor Given a parallel plate capacitor of width $w$, length $l$, with a dielectric moving along the length $l$. Let the dielectric be from $x$ onwards. The capacitance will be $\frac{w \epsilon_0}{d} (\epsilon_r l - \chi_e x)$. Griffiths (p. 195) says that the total charge $Q$ in the $C=\frac{Q}{V}$ expression is constant as the dielectric moves. But $Q$ here refers to the free charge, and the free charge definitely increases as you move the dielectric in increasing $x$. What am I misunderstanding?
If the plates of the capacitor are isolated then the total amount of free charge on the capacitor plates cannot change. Put another way, if the capacitor plates are isolated, where could more/less free charge come from or go to? The situation would be different if there were a voltage source connected to the plates. In that case the free charge on the plates would change to ensure that the potential difference across the plates is kept constant.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What happens when a supersonic airplane flies through a cloud? What happens when a supersonic airplane flies through a cloud? Will it punch a hole or is it more like a bullet through water (= hole closes immediately after the aircraft has passed)? Is there some special effect because of the supersonic speed? Or maybe the question should be: Does the airflow around an airplane change when the sound barrier is broken?
Couldn't find a cloud image but this is interesting edit: Apparently this is due to the drop in pressure immediately behind the shock wave of a supersonic aircraft. (Like a moving cloud chamber?) photo is by US navy and therefore public domain. There are a whole set of similar images here
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Do the trigonometric functions preserve units? I saw an exercise where you had to calculate the units of $C_i, i=1,2$ from equations like these: (1) $v^2=2\cdot C_1x$ and (2) $x=C_1\cdot \cos(C_2\cdot t)$, where $x$ is in meters, $t$ is in seconds and $v$ is a velocity. For $C_1$ I got $C_1=m/s^2$. But coming to $C_2$ the cosine confuses me somehow: $$x=C_1 \cdot \cos(C_2 t)\Rightarrow m=m/s^2 \cdot \cos(C_2 s)\Rightarrow s^2 = \cos(C_2 s)$$ Does this mean that $C_2$ must have the unit $s$? Thanks a lot!
Trigonometric functions don't "preserve" units. The expression under a trigonometric function must be dimensionless and so is the value of a trigonometric function. Thus, C2 in your equations is in units of frequency: Hz or 1/s. There is an error in one of the equations, perhaps a missing constant.
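The bookkeeping can be made explicit: treating a unit as a dict of base-unit exponents, the requirement that the argument of $\cos$ be dimensionless forces $[C_2]=\mathrm{s}^{-1}$. This is a toy sketch, not a real units library:

```python
# Minimal dimensional bookkeeping: a unit is a dict of base-unit exponents.
# The argument of cos must be dimensionless, so [C2]*[t] = {} forces [C2] = 1/s.

def mul(a, b):
    """Multiply two units by adding exponents."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]
    return out

def inverse(a):
    """Invert a unit by negating exponents."""
    return {k: -v for k, v in a.items()}

seconds = {"s": 1}
dimensionless = {}

# Solve [C2] * [t] = dimensionless  =>  [C2] = [t]^-1
C2_unit = mul(dimensionless, inverse(seconds))
print(C2_unit)  # {'s': -1}, i.e. a frequency
```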
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Noise amplitude increases as sample rate increase I am testing the material properties of some very low stiffness materials. I'm using a force probe connected to software, sensing at about a hundredth of a gram of force. Now, what's interesting is when my sample rate is 1/sec I get a smooth line, as expected. If I increase the sample rate the line get a more jagged, increased frequency line - still as expected. BUT the increased sample rate also increases the amplitude of the noise/signal and this I don't understand. I hope I explained it well enough... But basically why is it that increased sample rate is increasing the amplitude of the signal rather than just the frequency.
From your description of the experiment (please correct me if my assumptions are wrong), it sounds like your apparatus consists of the application of a controlled stress to the sample (and the sensor), and the resulting strain in the sensor is measured. Whenever the stress applied by your apparatus changes, it will take some time for the system to settle to its new equilibrium. It could be that sampling at $1 Hz$ is allowing plenty of time for equilibration, but when sampling at higher frequencies you are recording the oscillations of the system as it has not yet settled. One way to test would be to run the experiment without changing the applied force, just recording the strain at various sampling rates, and looking to see if the noise spectrum still depends on the sample rate in the way you describe. If it does, then the noise is a result of the frequency dependence of the electronics. If it does not, then the noise is resulting from the physical behavior of the sample.
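A toy simulation of this effect (the signal is entirely made up: the equilibrium steps once per second, with a fast ring-down after each step): sampling once per second, just before each step, sees only settled values, while a faster rate catches the transients and so shows a much larger apparent noise amplitude.

```python
import math

# Toy model: the applied force steps once per second, and the sensor rings
# down toward each new equilibrium as a damped oscillation that has decayed
# by the time the next step arrives.

def response(t):
    """Strain signal: equilibrium follows the step number, plus ringing."""
    step = math.floor(t)               # new equilibrium every second
    dt = t - step                      # time since the last step
    return step + math.exp(-10*dt) * math.cos(60*dt)

def spread(rate, t_max=20.0):
    """Peak-to-peak deviation from the settled value at a given sample rate."""
    samples = []
    n = int(t_max * rate)
    for i in range(n):
        t = (i + 0.999) / rate         # sample just before each tick
        samples.append(response(t) - math.floor(t))
    return max(samples) - min(samples)

print(spread(1.0), spread(50.0))       # slow sampling looks smooth, fast is noisy
```

The 1 Hz trace is essentially flat while the 50 Hz trace shows large excursions, reproducing the behaviour described in the question.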
{ "language": "en", "url": "https://physics.stackexchange.com/questions/21759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How would you explain spectrum and spectral analysis to grandma? E.g. what the light or sound spectrum is, what it's useful for - in very simple terms that a grandmother or a child would understand.
To a child you have to show pictures. A diamond in your ring or a wedge of glass or crystal, then the sunlight, or... Then the rainbow and the clouds, all the colors in the crystal, etc. Use the filters in the equalizer of your hi-fi (or software media player) to show that the sound is composed of many distinct sounds (frequencies) that are hidden by all the other sounds. Use the back of a CD, bend it, to play with the colours. Use an aquarium, the light reflected in the skin of the fishes. Play and record two notes at the same time on a piano (or software app), then one at a time, and then apply an FFT and show her the results. She will understand the results as the contribution of two distinct notes (explain the harmonics later). Call the software FFT, or 'softCrystal' (to her it is only the name of an active object, like 'windmill'), and then explain that a crystal behaves like a 'softCrystal' but is made of stone (hardware) that decomposes the perceived light into its components. Now that she has seen the decomposition of the sound and light stimuli, it's time to introduce her to the sine wave concept, amplitude and frequency, phase, and to the fact that in nature all stimuli are composed, at the lowest level, of waves. If you have the time (a grandma always has time ;) you can explain to her how waves interfere, reflect, refract... the distinction between Light - unbounded (free) waves, like the ones in the stretched long rope where she hangs the clothes to dry, and Matter - bounded waves, like the ones that persist on the surface of a drum.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why does optical pumping of Rubidium require presence of magnetic field? The optical pumping experiment of Rubidium requires the presence of magnetic field, but I don't understand why. The basic principle of pumping is that the selection rule forbids transition from $m_F=2$ of the ground state of ${}^{87} \mathrm{Rb}$ to excited states, but not the other way around ($\vec{F}$ is the total angular momentum of electron and nucleus). After several round of absorption and spontaneous emission, all atoms will reach the state of $m_F=2$, hence the optical pumping effect. But what does the Zeeman splitting have anything to do with optical pumping? Granted, the ground state, even after fine structure and hyperfine structure considered, is degenerate without Zeeman splitting, but the states with different $m_F$ still exists. In addition, how is the strength of optical pumping related to the intensity of magnetic field applied?
Though it's too late to answer, I was looking for an answer myself and saw your question. I observed the same thing in my experiments today: zeroing the imposed magnetic field makes the dark state due to circularly polarized light vanish. My reasoning is related to the most basic concepts of quantum mechanics; to my knowledge it's impossible to measure a system (say, an atomic population) without changing that system. In fact, in the absence of any external agency, there is no preferred direction and we have a perfect symmetry (of course 'perfect' here is within our experimental ability to resolve any broken symmetry). So there is no z direction for the atoms. Finally, you can observe optical pumping and dark states only if you align the atoms along a preferred direction, which is defined by the external magnetic field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
What will be the relative speed of the fly? It has happened many times and I have ignored it every time. Yesterday it happened again. I was travelling in a train and saw a fly (insect) flying near my seat. The train was running at a speed of around 100 km/hr. So according to the rules of physics, my speed will also be 100 km/hr, as I am sitting inside the train. But as far as the fly is concerned, it is flying inside the train, and the speed of the fly is very small compared to that of the train. So why does the fly not get stuck to one side of the train? As the fly is not in physical contact with the train, will its speed be 120 km/hr?
Have a look at http://en.wikipedia.org/wiki/Galilean_invariance. This is not too mathematical and explains what's going on. The basic idea is that there is no such thing as absolute motion. For example, because the earth is rotating as I sit here typing I'm moving at about 800 miles per hour. Why am I not splattered against my computer screen? It's because everything around me is moving at the same speed, so relative to where I'm sitting I'm not moving. In the specific case of the fly, the fly moves by beating its wings against the air. But the air is stationary with respect to you, otherwise you'd be sitting in a 100km/hr wind. That's why you see the fly moving at whatever speed flies normally move at.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Piston movements in four stroke cycle? I was reading about a four stroke cycle. Here's what I understood: * *In the first stroke, the piston starts at the top and moves down. *In the second stroke, the piston moves upwards. *In the third stroke, the piston moves down due to the combustion by spark plug. *In the final stroke, the piston moves up and the cycle continues. I can understand why the piston moves down in third stroke due to the gasoline explosion. But, what moves the piston up and down in Step 1, 2, and 4?
In an internal combustion engine we have multiple cylinders. They are attached to a crankshaft in an alternating manner, so that when one set of cylinders is on its combustion (power) stroke, it drives the pistons in the other set through their non-power strokes. A flywheel on the crankshaft also stores rotational energy that carries each piston through its intake, compression and exhaust strokes. See http://commons.wikimedia.org/wiki/File:Cshaft.gif
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does it take significantly more fuel to fly a heavier airplane? I was reading in the papers how some-airline-or-the-other increased their prices for extra luggage, citing increased fuel costs. Now I'm a bit skeptical. Using the (wrong) Bernoulli-effect explanation of lift, I get this: More luggage$\implies$more lift needed $\implies$ more speed needed$\:\:\:\not \!\!\!\! \implies$more fuel needed. At this point, I'm only analysing the cruise situation. When the plane is accelerating, this will come into effect, but more on that later. Now, I know that the correct description of lift involves the Coanda effect and conservation of momentum, but I don't know it well enough to analyse this. Also, there will be drag forces which I haven't factored in (and don't know how to). I can see that viscosity must be making a change (otherwise planes wouldn't need engines once they're up there), but I don't know how significant a 1kg increase of weight would be. So, my question is: Are airlines justified in equating extra baggage to fuel? Bonus questions: (1) If more baggage means more fuel, approximately what should the price be for each extra kilo of baggage? (2) What happens when we consider takeoff and landing? Does a heavier plane have to use a significantly larger amount of fuel?
Lift is roughly proportional to angle of attack, and to speed squared. As a pilot, you instinctively balance these two. ADDED: Like if you suddenly drop a heavy weight, making the plane lighter, its lift isn't any less, so it starts to accelerate upward (climb). You notice this and either push the nose down with the trim wheel (lessen the angle of attack, making the plane go faster at the same power) or reduce throttle to reduce speed because you need less lift at the original angle of attack. Or, you do both, and stay at the same speed. Drag is the sum of parasitic drag (that's mainly your viscosity) and induced drag (drag due to lift). More lift, more induced drag. More drag, more power needed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 8, "answer_id": 1 }
What are the calculations for Vacuum Energy? In wiki the Vacuum Energy in a cubic meter of free space ranges from $10^{-9}$ from the cosmological constant to $10^{113}$ due to calculations in Quantum Electrodynamics (QED) and Stochastic Electrodynamics (SED). I've looked at Baez and references given on the wiki page but none of them give a clear working for how these values are derived. Can someone point me in the right direction, as to how values like $10^{-9}$ are derived from the cosmological constant; OR $10^{113}$ due to calculations in Quantum Electrodynamics?
You can understand the origin of these numbers from a simple consideration of dimensional analysis, and the cosmological data available. This keeps the answer intuitive, and any more complicated derivation will not change the answer substantially. The first of your numbers, $10^{-9}$ Joules per cubic meter, is simply an empirical measurement in the framework of the Lamda-CDM model. Measurements of the CMB (WMAP), combined with type Ia supernovae, tell us that this is about the energy density of the universe, and that most of the energy density is in the form of dark energy. We assume that the dark energy comes from a cosmological constant $\Lambda$. In natural units where $\hbar$ and $c$ are set equal to 1, a length is essentially an inverse energy. So in these units $\Lambda$ is about $10^{-46}$ GeV$^{4}$. Here comes the essential point: If we consider the Planck mass to be the natural energy scale for the vacuum energy, then the ratio of the observed energy density in the cosmological constant is too small by 122 orders of magnitude (and this is the origin of the second number - it just comes from taking the planck mass to the fourth power in natural units). So, the fundamental puzzle is, why is $\Lambda$ so small compared to the 'natural' scale we would expect? One way out is to argue that a different energy scale other than the planck mass is what we should be comparing $\Lambda$ to.
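Both numbers can be checked with a few lines of unit conversion (constants are approximate): $10^{-46}\,\mathrm{GeV}^4$ converts to roughly $10^{-9}\,\mathrm{J/m^3}$, and its ratio to $M_{\rm Planck}^4$ is about 122 orders of magnitude.

```python
import math

hbar_c = 1.9733e-16    # GeV * m (approximate)
GeV_in_J = 1.60218e-10 # 1 GeV in joules

def gev4_to_J_per_m3(x):
    """Convert an energy density from GeV^4 (natural units) to J/m^3."""
    return x * GeV_in_J / hbar_c**3

rho_lambda = 1e-46     # observed dark-energy scale, GeV^4 (from the answer)
m_planck = 1.22e19     # Planck mass, GeV
rho_planck = m_planck**4  # "natural" vacuum energy scale, GeV^4

orders = math.log10(rho_planck / rho_lambda)
print(gev4_to_J_per_m3(rho_lambda), orders)
```

This recovers both the ~$10^{-9}$ J/m³ energy density quoted in the question and the famous ~122-order-of-magnitude discrepancy.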
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 3, "answer_id": 0 }
Magnetic force and work If the magnetic force does no work on a particle with electric charge, then how can it influence the motion of the particle? Is there perhaps another example of a force that does no work yet still has a significant effect on the motion of a particle?
Work performed by forces acting on a particle is equal to the change in particle's energy. If the forces acting on a particle perform zero work on it, particle's energy does not change. In particular, whenever a force acting on a particle is perpendicular to the particle's displacement as is the case with magnetic component of the Lorentz force, the work performed will be zero and particle's energy will not change. Note that the direction of the particle's velocity may change without affecting its energy.
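A small numerical illustration (toy units, $q/m=1$, uniform $B$ along $z$; the parameters are made up): integrating $\dot{\mathbf v}=(q/m)\,\mathbf v\times\mathbf B$ shows the direction of $\mathbf v$ changing while the speed, and hence the kinetic energy, stays constant.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

B = (0.0, 0.0, 1.0)  # uniform magnetic field along z (toy units)
q_over_m = 1.0

def acc(v):
    """Acceleration from the magnetic part of the Lorentz force."""
    return tuple(q_over_m*c for c in cross(v, B))

def rk4_step(v, h):
    k1 = acc(v)
    k2 = acc(tuple(vi + h/2*ki for vi, ki in zip(v, k1)))
    k3 = acc(tuple(vi + h/2*ki for vi, ki in zip(v, k2)))
    k4 = acc(tuple(vi + h*ki for vi, ki in zip(v, k3)))
    return tuple(vi + h/6*(a+2*b+2*c+d) for vi, a, b, c, d in zip(v, k1, k2, k3, k4))

v = (1.0, 0.0, 0.5)
speed0 = sum(c*c for c in v) ** 0.5
for _ in range(10000):          # integrate for 10 time units
    v = rk4_step(v, 0.001)
speed = sum(c*c for c in v) ** 0.5
print(v, speed0, speed)
```

The in-plane components of the velocity rotate (the particle spirals around the field line) while the speed is unchanged to numerical precision: the magnetic force steers but does no work.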
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Double slit experiment and indirect measurements In the classic Young double slit experiment, with slits labeled as "A" and "B" and the detector screen "C", we put a detector with 100% accuracy (no particle can pass through the slit without the detector noticing) on slit B, leaving slit A unchecked. What kind of pattern should we expect on the detector C? Probably the right question is: knowing that a particle hasn't been through one of the slits makes the wavefunction collapse, precipitating in a state in which the particle passed through the other slit?
Dreelich, you might want to get hold of a copy of the Feynman Lectures on Physics and take a look at Vol. III, Chapter 37, Section 1-6 "Watching the electrons". The sections leading up to that one are also relevant. In addition to being a great read (well, if you like that sort of thing, but that's likely a safe assumption in this forum!), Section 1-6 confirms what you said: Watching the electron go through the slit makes it behave classically in terms of how it hits the screen. Interestingly, Feynman detested the phrase "wave function collapse." His approach was always to look at the start and end of a process and calculate the probabilities for each end point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is it possible to fly like a bird using semi-motorized wings? On his website http://www.humanbirdwings.net/ the Dutch engineer Jarno Smeets claims to have successfully built a set of 17 m^2 bird-like wings from material of a kite. It is claimed that it uses sensors taken from Wii controllers and a smart phone as well as two motors on the back of the "pilot" which amplify the flapping from the "pilot's" arms, which are connected with ropes to the wings. Apparently this has since been debunked/confessed as a hoax, but I wanted to try a back-of-envelope calculation to see if it was even plausible or not. The closest thing I had to hand was this formula (pinched from here) $$P_{total} = P_{drag} + P_{lift} = \frac 1 2 c_d \rho A_p v^3 + \frac 1 2 \frac { (mg)^2 } {(\rho v^2 A_s ) }$$ Where * *$C_d$: drag coefficient (1.15) *$\rho$: density of air (1.3 kg/m3) *$A_p$: frontal area of human (1 m2, adding a bit for wing) *$v$: speed (5 m/s - optimum from $P_{drag} = P_{lift}$) *$m$: mass of man (80kg) *$g$: gravitational acceleration (9.8 m/s2) *$A_s$: square of wingspan (100 m2) Plugging all that in gives me 188W, which is about 5 times more than what an average human can produce with their arms (according to this source, the only thing I could find). However, a 1kg lithium-ion battery could apparently (? not sure of my interpretation) contain 150Wh, which could make up the difference. This makes the claim seem far more feasible than I feel it ought to. Am I missing something? UPDATE As @zephyr points out below, I made a mistake at some point when transcribing the formula, the correct one is: $$P_{total} = P_{drag} + P_{lift} = \frac 1 2 c_d \rho A_p v^3 + \frac 1 2 \frac { (mg)^2 } {(\rho v A_s ) }$$ Plugging the numbers into that, and optimising $v$, gives me a $P_{total} \approx 630W$, which leaves birdman needing 4kg of batteries... (or, as Jim points out, settling for a shorter flight).
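For reference, a quick numerical version of the corrected calculation (numbers taken straight from the question; the optimum speed is found by a crude scan rather than calculus, and the exact wattage depends on rounding):

```python
c_d, rho = 1.15, 1.3      # drag coefficient, air density (kg/m^3)
A_p, A_s = 1.0, 100.0     # frontal area (m^2), wingspan squared (m^2)
m, g = 80.0, 9.8          # pilot mass (kg), gravity (m/s^2)

def total_power(v):
    p_drag = 0.5 * c_d * rho * A_p * v**3
    p_lift = 0.5 * (m * g)**2 / (rho * v * A_s)
    return p_drag + p_lift

# scan speeds from 1.00 to 19.99 m/s for the minimum-power point
v_opt = min((i / 100 for i in range(100, 2000)), key=total_power)
print(v_opt, total_power(v_opt))   # roughly 5-6 m/s and ~550-650 W
```

The minimum lands in the same few-hundred-watt ballpark as the ~630 W quoted in the update, confirming the conclusion that several kilograms of batteries (or a short flight) would be needed.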
http://articles.latimes.com/1986-05-18/news/mn-20955_1_pterodactyl-flight A motorized "life-sized" pterodactyl model did fly by flapping a few times before it crashed. A little bit bigger wing span, a little more powerful motor... who knows? Maybe you also need a much smarter computerized controller. But your conclusion that physics does not prevent this is probably correct. Of course the motor will be a big part of the power story.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why is the conductor an equipotential surface in electrostatics? Since the electric field inside a conductor is zero that means the potential is constant inside a conductor, which means the "inside" of a conductor is an equal potential region. Why do books also conclude, that the surface is at the same potential as well?
Because if the surface were not equipotential, there would be a tangential component of the electric field along the surface. This component would result in motion of electrons, but since we have static fields this is not possible. Thus, by contradiction, the surface must be equipotential.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 3 }
Areas of research and their transferable skills I've noticed that ads for postdoctoral positions emphasize the skill set that one must have for a particular position. That said, what are the areas of research to avoid because they give you few transferable skills and hence limit your range of possible postdoctoral positions? For example, String Theory might be an area to avoid because it gives you virtually no experience with using scientific software packages that might be essential for postdocs in other areas. Any others? (My motivation for asking this question is that I think it might be nice to roam into other interdisciplinary areas and fields post-PhD, rather than restrict oneself to a certain field for life.)
Coming from a computer science standpoint, I don't know so much about which fields to avoid. However, I will highly encourage serious physicists to take up general GPU programming. On my side, the boon of physics is obvious. However, in research especially dealing with sensors and data, there are so many applications we will likely not run out. We (the GPU programming field) need more cross-discipline understanding, especially from the areas of physics, molecular biology and genetics. It can be a little difficult to explain the full need of having a deeper 1 to 1 relationship in GPU based tools. I will offer the only practical advice I have thought of in this area: anything without some connection to current material science (carbon nanotubes, advanced magnetics, quantum processing, bioprocessing/storage, advanced ceramics, etc) seems to have difficulty finding the placement it sometimes deserves in current trends. For instance, though there's great need for advancements, things like geo-imaging and medical imaging are nicely thriving. Tools are made for all sorts of spectroscopy, while some areas of physics are not so applicable. Meanwhile, I'm hoping for more cross-talk in our fields, because we would very much like to make tools in exploration of the more exotic realms of physics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Slit screen and wave-particle duality In a double-slit experiment, interference patterns are shown when light passes through the slits and illuminate the screen. So the question is, if one shoots a single photon, does the screen show interference pattern? Or does the screen show only one location that the single photon particle is at?
We don't know whether the light source shoots photons or not. We know that if we turn off power to the light source the interference pattern disappears, and that if we turn down the light intensity enough we eventually start seeing individual events, if we have the right sort of measurement apparatus. Again, if we turn off the power those individual events stop (except for the "dark rate" that is characteristic of the detector), so it's definitely the light source that is causing the individual events, but we don't know what happens in between. It's possible to account for this simple kind of experiment using a semi-classical model in which there is an electromagnetic field between the source and the detector, and the detector current flips off and on. It's only when we consider more sophisticated experiments, in particular ones in which we engineer the light sources so that two or more individual events are closely synchronized in time, that we find that neither "it shoots photons" nor "there's an electromagnetic field" works very well. Consequently, we might or might not be able to satisfy the premise of "if one shoots a single photon, ...", making it impossible to answer the question with certainty given our current understanding. Nonetheless, I up-voted Slaviks' answer, because that's what is usually said.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Flow rate of a syringe Suppose a syringe (placed horizontally) contains a liquid with the density of water, composed of a barrel and a needle component. The barrel of the syringe has a cross-sectional area of $\alpha~m^2$, and the pressure everywhere is $\beta$ atm, when no force is applied. The needle has a pressure which remains equal to $\beta$ atm (regardless of force applied). If we push on the needle, applying a force of magnitude $\mu~N$, is it possible to determine the medicine's flow speed through the needle?
I've already modelled this case and you'll find that the flow is indeed laminar, and for a medical syringe (say 5 ml) with a 26 or 27G needle you'll get a Re value of under 100. This situation changes if the liquid is more or less viscous, e.g. due to temperature. Typically forces at the plunger are between 2 and 20 N. When using the Poiseuille formula, remember that Po (when you action the syringe in air) will be atmospheric pressure, but when injecting in real conditions it will be the bloodstream or dermis pressure. The P value is the pressure you obtain by applying a force to the syringe plunger. Also, the viscosity is dynamic, not kinematic, viscosity. Initially I would neglect the friction effects in the needle and focus more on the real internal diameter and shape of the needle; hence the gauge value and needle length are more important.
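To put rough numbers on this, here is a sketch combining the plunger-force pressure with the Poiseuille formula. All the dimensions below are illustrative assumptions for a small syringe with a roughly 26G needle, not measured values:

```python
import math

F = 5.0            # force on the plunger, N (assumed)
r_barrel = 6e-3    # barrel inner radius, m (assumed)
r_needle = 0.13e-3 # needle inner radius, m (~26G bore, assumed)
L_needle = 25e-3   # needle length, m (assumed)
mu = 1e-3          # dynamic viscosity of water, Pa*s
rho = 1000.0       # density, kg/m^3

dP = F / (math.pi * r_barrel**2)                       # gauge pressure in the barrel
Q = math.pi * r_needle**4 * dP / (8 * mu * L_needle)   # Poiseuille flow rate, m^3/s
v = Q / (math.pi * r_needle**2)                        # mean speed in the needle, m/s
Re = rho * v * (2 * r_needle) / mu                     # Reynolds number
print(Q * 1e6, v, Re)   # ~0.2 ml/s, a few m/s, Re in the hundreds
```

With these particular guesses Re lands in the hundreds — still well inside the laminar regime (below ~2300), though the exact value is very sensitive to the assumed bore and force.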
{ "language": "en", "url": "https://physics.stackexchange.com/questions/22978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Faster-than-light communication using Alcubierre warp drive metric around a single qubit? The Alcubierre warp drive metric has been criticized on the points of requiring a large amount of exotic matter with negative energy, and conditions deadly for human travellers inside the bubble. What if the Alcubierre metric is used to just span a tiny bubble around a single spin-1/2 particle to transport either classical or quantum information faster than light? Wouldn't known small-scale effects, such as the Casimir force, suffice to provide the exotic conditions required for an Alcubierre bubble large enough to fit at least one particle? My question explicitly addresses the feasibility of a microscopic bubble using known microscopic effects.
The problem is not the availability of exotic matter. It is the manipulation of it in a practical manner that is not known. The stuff of exotic matter (quantum fluctuations) is everywhere. The Casimir Effect is only a demonstration of its existence in a practical manner. To do more with that so-called negative energy that the quantum fluctuations can present is the challenge. The basics of the challenge consist of making the negative energy asymmetrical. It is akin to making the energy of the quantum fluctuations act on only one side of a piece of the more familiar matter we are used to manipulating.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
What is lambda R in Richardson's Law? I've got to calculate the thermionic emission through a diode, so I need to use Richardson's Law. However, one thing's got me confused - according to the Wikipedia page: $$J = A_GT^2e^\frac{-W}{kt}$$ I could live with that, but for $A_G$. Apparently, I'm not the only one; Wikipedia's a bit cryptic about what $A_G$ is, mentioning that physicists have struggled with it for decades, "but there is agreement that $A_G$ must be written in the form:" $$A_G = \lambda_RA_0$$ "Where $\lambda_R$ is a material-specific correction factor that is typically of order 0.5, and $A_0$ is a universal constant." That's the last real mention of $\lambda_R$, and the only reference for it is in French. So, what is $\lambda_R$, really? How can I figure out its numerical value so that I can actually use Richardson's Law?
The quantity $\lambda_R$ is the dimensionless extracted tunneling/nucleation amplitude for electrons to get out of the metal. It is not simple to compute because it is an average, over the thermal motion of the electrons, of the probability of an electron getting far enough away from the metal to escape to infinity. There are many models in which you can calculate $\lambda_R$, but its value is best extracted from experiment--it will depend on the type of metal, the roughness of the surface geometry, and on surface impurities and dirt. It is hopeless to calculate except in idealized situations.
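In practice that means taking $\lambda_R \approx 0.5$ as a starting guess (or looking up a measured Richardson constant for your cathode material) and simply evaluating the law. A sketch with tungsten-like numbers — the work function and temperature here are illustrative, not a calibrated prediction:

```python
import math

A0 = 1.20173e6       # universal Richardson constant, A m^-2 K^-2
k = 8.617333e-5      # Boltzmann constant, eV/K

def richardson(T, W, lam=0.5):
    """Current density J = lam * A0 * T^2 * exp(-W / (k T))."""
    return lam * A0 * T**2 * math.exp(-W / (k * T))

print(richardson(2500.0, 4.5))   # a few kA/m^2 for a hot tungsten-like cathode
```

The result is dominated by the exponential, so an uncertainty of a factor of two in $\lambda_R$ matters far less than a small error in the work function $W$.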
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conceptual quantum field theory Often papers and books give some bold (deep physical insight) statements in quantum field theory which are not backed by mathematics, and seldom by citing papers. Being a student, I don't grasp the real meaning of those statements, making me think that I don't really understand QFT. I have access to all the standard books in QFT, but is there a book/lecture notes that really aims to explain QFT in the way its ideas emerged, its philosophy, with physical insight, at the level of a graduate student? I am mighty interested in doing theoretical high energy physics, and crave a better understanding of QFT. Could somebody help me?
E. Zeidler, Quantum Field theory I Basics in Mathematics and Physics, Springer 2006. http://www.mis.mpg.de/zeidler/qft.html is a book I highly recommend. It is the first volume of a sequence, of which not all volumes have been published yet. This volume gives an overview of the main mathematical techniques used in quantum physics, in a way that you cannot find anywhere else. It is a mix of rigorous mathematics and intuitive explanation, and tries to build ''A bridge between mathematicians and physicists'' as the subtitle says. See https://physics.stackexchange.com/a/22413/7924 for a more detailed recommendation of the book.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Maximum efficiency for a counter-current heat exchanger (double flux controlled motorized ventilation) I am not sure if I can explain the question correctly because I don't know the name of this mechanism in English. This is my attempt at an explanation: In a house, one tube expels the air from the inside to the outside, and another tube draws air from the outside to the inside. The two tubes are interlaced in order to transfer heat between their contents. The goal is to have good thermal isolation between the inside of the house and the outside: * *When the house is warmer than the outside, the outgoing air warms the incoming air. *When the house is colder than the outside, the outgoing air cools the incoming air. Question: In theory, what is the maximum efficiency of such a thermal exchange? Could it be possible to reach something close to 1, or is the upper bound of the efficiency well under that?
It sounds as if you are describing a countercurrent heat exchanger. The theoretical efficiency of these can reach 1, though note that for heat exchangers efficiency doesn't mean the same as for heat engines i.e. heat converted to work. For heat exchangers an efficiency of 1 just means the incoming air is heated to the same temperature as the air in the house and likewise for the outgoing air.
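The standard way to quantify this is the effectiveness-NTU method. For a balanced counter-current exchanger (equal heat-capacity rates on both streams, which is roughly the case here since the two airflows are equal) the textbook result is $\varepsilon = \mathrm{NTU}/(1+\mathrm{NTU})$, which approaches 1 as the exchanger gets longer:

```python
def effectiveness(ntu):
    """Balanced counter-current exchanger: epsilon = NTU / (1 + NTU)."""
    return ntu / (1.0 + ntu)

for ntu in (1, 3, 10, 100):
    print(ntu, effectiveness(ntu))   # 0.50, 0.75, ~0.91, ~0.99
```

So effectiveness 1 is an asymptotic limit: you can get arbitrarily close with a long enough (large-NTU) exchanger, at the cost of size and fan power.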
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Why does a glass rod when rubbed with silk cloth acquire positive charge and not negative charge? I have read many times in the topic of induction that a glass rod when rubbed against a silk cloth acquires a positive charge. Why does it acquire positive charge only, and not negative charge? It is also said that the glass rod attracts small uncharged paper pieces when it becomes positively charged. I understand that a positively charged glass rod attracts the uncharged pieces of paper because some of the electrons present in the paper accumulate at the end near the rod, but can't we extend the same argument to the attraction between the negatively charged silk cloth and the pieces of paper, due to accumulation of positive charge near the end?
As we know, matter in our environment is made up of atoms. Silk is obtained from cocoons, which come from a living thing, so it is made of amino acids, the fundamental compounds of living beings; the general formula of an amino acid is H2NCHRCOOH. The R group tends to gain electrons, so when silk is rubbed against a glass rod it takes electrons from the glass, giving the rod a positive charge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 5, "answer_id": 3 }
Why would it be true that people with longer legs walk faster than ones with shorter legs? When a person walks, the only force acting on him is the force of friction between him and the ground (neglecting air resistance and all). The magnitude of acceleration due to this force is independent of the mass of the object (longer legs have more mass). Hence the person should move with a velocity independent of the length of his legs. But I have heard (also observed) that people with longer legs walk faster than ones with shorter legs. If that is true, then why? One can argue that the torque about the pivot due to friction is more in the case of longer legs, but then the torque due to gravity (when one raises his leg to move), which opposes the frictional torque, is also more for longer legs. And why would these torques make a difference anyway, as they have no effect on the acceleration of the center of mass?
Interesting question. I had a Google around and came across http://silver.neep.wisc.edu/~lakes/BME315ScalingWalk.html, which seems a reasonable discussion of the mechanics (very simplified). The conclusion is that the walking speed is proportional to the square root of leg length, so taller people do walk faster, but the square root dependence means it's not much faster.
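The scaling in that link is the Froude-number argument: treating the leg as an inverted pendulum, comfortable walking corresponds to a roughly fixed Froude number $Fr = v^2/(gL)$, so $v \propto \sqrt{L}$. A quick illustration — taking $Fr = 0.5$, near the walk-run transition, is an assumption:

```python
import math

def walk_speed(leg_length, froude=0.5, g=9.81):
    """Speed at a fixed Froude number: v = sqrt(Fr * g * L)."""
    return math.sqrt(froude * g * leg_length)

for L in (0.8, 0.9, 1.0):
    print(L, walk_speed(L))   # 25% longer legs -> only ~12% faster
```

This makes the "faster, but not much faster" conclusion concrete: the speed ratio is the square root of the leg-length ratio.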
{ "language": "en", "url": "https://physics.stackexchange.com/questions/23921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
What is magnetic conductivity? I searched Google for the meaning of magnetic conductivity but couldn't figure out what it is. Electric conductivity usually means that the component of the electric field parallel to the interface is continuous between two interfaces; what about a magnetically conductive material?
This is what they told us about magnetic circuits and magnetic resistance at engineering school. Magnetic reluctance (resistance) is similar to the concept of resistivity of simple resistors. Magnetic reluctance is used when calculating a magnetic circuit, e.g. a transformer magnetic core, electromotor magnetic core, generator, etc. Magnetic conductivity is the inverse of magnetic reluctance and resembles the electrical quantity conductance. Here is a pretty picture of the analogy of electrical and magnetic circuits. By the way, the site I took the picture from seems quite informative. Let us have an external magnetic field H of uniform intensity. That can be generated by a moving permanent magnet or by current through a loop. The magnetic field density (magnetic induction) inside materials introduced in the field is B = μ0 * μR * H Now let the magnetic flux be the magnetic flux density, multiplied by the cross-section of the magnetic circuit introduced in the field. For simple math this cross-section has to be constant. So, divide the circuit into a sequence of equal-magnetic-permeability-equal-cross-section sections and model those as series resistors. Φ = B * S Now we can define magnetic resistance - it says "In a magnetic circuit of uniform cross-section, put in a uniform external magnetic field, there exists a scalar constant Rm that is the ratio between the applied external field and the resultant field strength within the material". Rm = L / (μ * S) Example: calculate a simple transformer core TODO
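In the spirit of that TODO, here is a minimal sketch of the transformer-core example; the permeability, geometry, turns and current below are all made-up but plausible assumptions:

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
mu_r = 2000.0              # relative permeability of the core (assumed)
L_path = 0.2               # mean magnetic path length, m (assumed)
S = 4e-4                   # core cross-section, m^2 (assumed)
N, I = 100, 0.5            # turns and current (assumed)

Rm = L_path / (mu0 * mu_r * S)   # reluctance, ampere-turns per weber
mmf = N * I                      # magnetomotive force (the "EMF" of the magnetic circuit)
flux = mmf / Rm                  # Hopkinson's law, the analogue of Ohm's law
B = flux / S                     # flux density, T
print(Rm, flux, B)
```

With these numbers the flux density comes out around 0.6 T, comfortably below the saturation of a typical transformer steel — exactly the kind of check this circuit model is used for.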
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Baryon asymmetry Baryon asymmetry refers to the observation that apparently there is matter in the Universe but not much antimatter. We don't see galaxies made of antimatter or observe gamma rays that would be produced if large chunks of antimatter would annihilate with matter. Hence at early times, when both were present, there must have been a little bit more matter than antimatter. This is quantified using the asymmetry parameter $\eta = \frac{n_{baryon} - n_{antibaryon}}{n_{photon}}$ From cosmological measurements such as WMAP, $\eta \approx (6 \pm 0.25) \times 10^{-10}$ However, the source of baryon asymmetry is said to be one of the Big Problems of Physics. What is currently the state of the art regarding this puzzle? What's the best fit we can get from the Standard Model? What do we get from lattice simulations?
The only source of asymmetry in the Standard Model is from CP violation, and although there is CP violation in the Standard Model it is not large enough to account for the observed asymmetry. It's expected that the asymmetry will be explained by some extension to the standard model, but at the moment we don't know which, if any, of the suggested extensions is the culprit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
Commutating Annihilators with a beamsplitter I am reading Nielsen and Chuang on P. 291, for anyone interested in the origin of my question. Given an annihilator $a$ and its corresponding creator $a^\dagger$ such that $[a,a^\dagger] = 1$ and another annihilator $b$ with creator $b^\dagger$, an argument in a proof claims the following: Let $G = a^\dagger b - ab^\dagger$. Then, $[G,a] = -b$ and $[G,b] = a$. I don't see how these two relations hold. Can someone please point me in the right direction or prove them? Thank you SOCommunity!
This question was answered in a now-deleted comment by Luboš Motl: Hi, just use $[XY,Z]=XYZ-ZXY = XYZ-XZY+XZY-ZXY = X[Y,Z]+[X,Z]Y$ and the basic commutators $[a,a^\dagger]=1$ and similarly for $b$ while other commutators vanish. You will see that from the right hand side, only one term survives and it gives you what you need.
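One can also check these relations numerically with truncated ladder operators. The identity holds exactly on states below the truncation cutoff (the cutoff itself spoils $[a,a^\dagger]=1$ on the top level, so those states are excluded from the comparison):

```python
import numpy as np

n = 6                                        # levels kept per mode
a = np.diag(np.sqrt(np.arange(1.0, n)), 1)   # truncated annihilator on one mode
I = np.eye(n)

A = np.kron(a, I)                       # a acting on mode 1
B = np.kron(I, a)                       # b acting on mode 2
G = A.conj().T @ B - A @ B.conj().T     # G = a†b - ab†

comm_GA = G @ A - A @ G
comm_GB = G @ B - B @ G

# compare only on input states with both occupation numbers below n-1
cols = [j * n + k for j in range(n - 1) for k in range(n - 1)]
print(np.allclose(comm_GA[:, cols], -B[:, cols]),
      np.allclose(comm_GB[:, cols], A[:, cols]))   # True True
```

This reproduces $[G,a]=-b$ and $[G,b]=a$ on the physical (untruncated) part of the space.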
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A charged particle moves in a plane subject to the oscillatory potential A charged particle moves in a plane subject to the oscillatory potential: $U(r)=\frac{m\omega^2 r^2}{2}$ There is also a constant EM-field described by: $\vec{A}=\frac{1}{2}[\vec{B}\times\vec{r}]$ where B is normal to the plane. This produces the Lagrangian: $L=\frac{m}{2}\dot{\vec{r}}^2+\frac{e}{2}\dot{\vec{r}}\vec{A}-U(r)$ Now my friend says we need to transform this into polar coordinates and that produces: $L=\frac{m}{2}(\dot{r}^2+r^2\dot{\phi}^2)-mr^2\omega_L\dot{\phi}-U(r)$ where $\omega_L$ is the Larmor precession frequency: $\omega_L=-\frac{eB}{2mc}$ My question is, How does he get this transformation? I don't really understand where the second term is coming from in the mechanical kinetic energy.
$\newcommand{\er}{\hat e_r} \newcommand{\et}{\hat e_\tau} \newcommand{\d}{\dot} \newcommand{\m}{\frac{1}{2}m} $ This one gave me a feeling of déjà vu, since I'd already answered a similar one. Here's the relevant part of the derivation: My $\theta$ is your $\phi$ (usually $\phi$ is used for the azimuthal angle in spherical coordinates--which are a 3D extension of polar coordinates). In polar coordinates, $\d\er=\d\theta \et$, and (useless here) $\d\et= -\d r \er$. $\er,\et$ are unit vectors in radial and tangential directions respectively. Due to this mixing of unit vectors (they move along with the particle), things get a little more complicated than the plain ol' Cartesian system, where the unit vectors are constant. $$\vec p= r\er$$ $$\therefore \vec v=\d{\vec p}= \d r\er + r\d\er=\d r \er + r\d\theta\et$$ $$\therefore v^2= \vec v\cdot\vec v= \d r^2+r^2\d\theta^2$$ $$\therefore KE=\frac12m\vec v\cdot\vec v=\frac12m|\vec v|^2=\frac12m (\d r^2+r^2\d\theta^2)$$ So basically it's just a few steps of math that he neglected (IIRC this is usually considered an identity).
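The same identity can be verified symbolically — this is just a SymPy check of $\dot x^2 + \dot y^2 = \dot r^2 + r^2\dot\theta^2$, not part of the derivation itself:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)          # Cartesian coordinates of the particle
y = r * sp.sin(theta)

v2 = sp.diff(x, t)**2 + sp.diff(y, t)**2
target = sp.diff(r, t)**2 + r**2 * sp.diff(theta, t)**2
print(sp.simplify(v2 - target))   # 0
```

Multiplying the result by $m/2$ gives the kinetic-energy term in the polar Lagrangian.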
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Geometry of wireless signal strength How does wireless signal strength correspond to distance? RSSI lies between -100 and 0 (at least, on my computer). Let's say I walk a distance x towards the router, and my RSSI goes from -60 to -50. Now, let's say instead I walk a distance 2x towards the router. Would this imply that RSSI would go from -60 to -40? I'm curious what the relationship of the metrics is; is RSSI linear/logarithmic/etc with respect to distance? I'm a math guy with little physics/engineering background so some help would be very appreciated. Thanks.
From http://en.wikipedia.org/wiki/Received_signal_strength_indication: There is no standardized relationship of any particular physical parameter to the RSSI reading. The 802.11 standard does not define any relationship between RSSI value and power level in mW or dBm. Vendors provide their own accuracy, granularity, and range for the actual power (measured as mW or dBm) and their range of RSSI values (from 0 to RSSI_Max). So whether the implementation is linear or logarithmic in the power received will vary between vendors.
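That said, when RSSI does track received power in dBm, the usual log-distance propagation model makes it logarithmic in distance: each doubling of distance costs a fixed number of dB, not a fixed step per metre. A sketch — the reference level and path-loss exponent are assumptions, and real hardware won't follow this exactly:

```python
import math

def rssi(d, rssi_at_1m=-40.0, n=2.0):
    """Log-distance path-loss model: RSSI falls by 10*n dB per decade of distance."""
    return rssi_at_1m - 10.0 * n * math.log10(d)

for d in (1, 2, 4, 8):
    print(d, rssi(d))   # drops ~6 dB per doubling for n = 2
```

So in the question's example, walking a distance x toward the router does not change RSSI by a fixed amount: under this model the dB change depends on the ratio of distances, not their difference.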
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
In the known universe, would an atom not present in our periodic table exist? I have watched the movie Battleship. In it the researchers say a piece of metal is alien because we can't find this metal on Earth. So that would mean somewhere else in the universe any of the following should be true? * *Atoms' composition is not similar to that on Earth (nucleus, electrons, anything else) *Elements with atomic numbers above 120 or 130 are stable (highly improbable without point 1) *The realm itself is governed by different binding forces (but then, once that element's realm has changed, it should become unstable and collapse)
Well, meteorite minerals like iridium and all aren't really found on Earth in appreciable quantities. What you're looking for are exotic atoms. These certainly exist, but are too unstable. And, for certain exotic atoms like onia, atomic number isn't even defined. The binding forces cannot be different since the coupling constants are...well... constant (not sure what string theory says about this--but the Standard Model keeps them constant). One thing that I can think of are "satoms" (atominos?), made of sprotons, sneutrons, and selectrons. Or maybe some other superparticles. Supersymmetry predicts that each particle has a superpartner. These ought to exist in our universe, but we haven't detected any yet. They are a candidate for dark matter though. I'm not too sure of how superparticle stability works, though. Seems like only one of them is stable. We could make an atom out of that, I guess. But it's electrically neutral, and probably very light. So there may or may not be sufficient force holding it together. As @annav said, what you may be looking for is a new alloy or something. This will be an "exotic metal", but will still be made of normal atoms.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why do objects follow geodesics in spacetime? Trying to teach myself general relativity. I sort of understand the derivation of the geodesic equation $$\frac{d^{2}x^{\alpha}}{d\tau^{2}}+\Gamma_{\gamma\beta}^{\alpha}\frac{dx^{\beta}}{d\tau}\frac{dx^{\gamma}}{d\tau}=0.$$ which describes "how" objects move through spacetime. But I've no idea "why" they move along geodesics. Is this similar to asking why Newton's first law works? I seem to remember reading Richard Feynman saying no one knows why this is, so maybe that's the answer to my geodesic question?
The result that a freely falling test particle(a particle whose effect on space-time can be neglected) in a gravitational field moves along a geodesic can be deduced(just like a theorem of maths) from the equivalence principle, which is a hypothesis of the general theory of relativity.
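A sketch of that deduction (the standard textbook route): by the equivalence principle there exist local inertial coordinates $\xi^\alpha$ in which the free particle obeys $d^2\xi^\alpha/d\tau^2 = 0$; expressing $\xi^\alpha$ through arbitrary coordinates $x^\mu$ and applying the chain rule,

```latex
0 = \frac{d}{d\tau}\!\left(\frac{\partial \xi^\alpha}{\partial x^\mu}\frac{dx^\mu}{d\tau}\right)
  = \frac{\partial \xi^\alpha}{\partial x^\mu}\frac{d^2 x^\mu}{d\tau^2}
  + \frac{\partial^2 \xi^\alpha}{\partial x^\mu \partial x^\nu}
    \frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}
```

and multiplying by $\partial x^\lambda/\partial \xi^\alpha$ gives exactly the geodesic equation quoted in the question, with $\Gamma^\lambda_{\mu\nu} = \frac{\partial x^\lambda}{\partial \xi^\alpha}\frac{\partial^2 \xi^\alpha}{\partial x^\mu \partial x^\nu}$.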
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 10, "answer_id": 3 }
Calculate relativistic boost to COM frame from two arbitary velocities? Looking in Goldstein's book, there doesn't seem to be a standard formula to calculate the COM frame velocity for two particles, from their relativistic velocities in the lab frame, although it is done for the case where one particle is initially at rest. I find this a glaring omission and would like to know if there is a general formula for two relativistic particles moving along the $x$-axis of the lab frame.
The center of mass 4-momentum is the sum of the 4-momenta of the particles (no vector symbol or index, but the v's are four-component vectors) using the masses as the weights: $$ P_\mathrm{CM} = m_1 v_1 + m_2 v_2 $$ The length of this is the mass of the combined system (mostly minus metric): $$ M^2 = |P|^2 = m_1^2 + m_2^2 + 2m_1 m_2 v_1 \cdot v_2 $$ The four-velocity of the center of mass is then $$ v_\mathrm{CM} = {m_1v_1 + m_2 v_2 \over M} $$ and the three velocity is given by the ratio of the space-components of the four vector to the time component: $$ v^0_\mathrm{CM} = {m_1\gamma_1 + m_2 \gamma_2 \over M}$$ So that the center of mass velocity is: $$ \vec{v}_\mathrm{CM} = {m_1\gamma_1 \vec{v}_1 + m_2\gamma_2 \vec{v}_2 \over m_1\gamma_1 + m_2\gamma_2}$$ or the weighted average of the velocities using the relativistic mass (the energy). This formula usually appears with energy letters replacing mass letters: $$ \vec{v}_\mathrm{CM} = { E_1 \vec{v}_1 + E_2 \vec{v}_2 \over E_1 + E_2}$$ where $m_1$ and $m_2$ are the masses, $v_1$ and $v_2$ are the 4-velocities, $E_1$ and $E_2$ are the energies, $\gamma_1 = {1\over \sqrt{1-|\vec v_1|^2}}$, and similarly for $\gamma_2$.
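A quick numerical check of the final formula for two particles on the $x$-axis (arbitrary test values; $c=1$): boosting both particles by $\vec v_{CM}$ should make the total 3-momentum vanish.

```python
import math

def energy_momentum(m, v):
    """E and p of a mass m moving at speed v along x (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * m, g * m * v

m1, v1 = 1.0, 0.6
m2, v2 = 2.0, -0.3

E1, p1 = energy_momentum(m1, v1)
E2, p2 = energy_momentum(m2, v2)

v_cm = (E1 * v1 + E2 * v2) / (E1 + E2)

# Lorentz-boost both momenta by v_cm and sum
g = 1.0 / math.sqrt(1.0 - v_cm * v_cm)
p_total = g * (p1 - v_cm * E1) + g * (p2 - v_cm * E2)
print(v_cm, p_total)   # p_total is 0 up to rounding
```

This works because $E_i \vec v_i = \vec p_i$, so the formula is just $\vec v_{CM} = \vec p_{tot}/E_{tot}$, which is exactly the boost that zeroes the total momentum.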
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A practical deceleration question My friend is a U.S. Army paratrooper. Today, through an unfortunate series of events, he was jerked out of a C-17 traveling at 160 knots by his reserve parachute. First-hand accounts describe it as he was instantly gone. Since he came through it relatively unscathed, I'm curious to know what level of deceleration he might have experienced. By unscathed I mean his right side is all bruised to hell from whacking the hatch on his way out, but he's alive.
The forces experienced by your friend will be the same as if he was free falling at 160 knots and opened his parachute. The fact he was in the plane when the parachute deployed makes no difference, because in both cases he is slowed by the parachute from a high speed relative to the air to whatever the speed of a parachute descent is. The rate of slowing will be determined by how much drag the deploying parachute creates. According to http://en.wikipedia.org/wiki/Parachuting the forces experienced when a parachute opens are 3-4G. However there are two differences in your friend's situation. Firstly, free fall is usually about 120 mph and your friend was travelling at 160 knots, so the drag on the parachute, and therefore the deceleration your friend experienced, will be higher. Secondly, it was your friend's reserve parachute, not the main one, that opened. I don't know how the reserve chute differs from the main one, but it's entirely plausible the drag is different, and I'd guess lower. Because of the variables involved I don't think we can do more than guess at the forces experienced by your friend, but the 3-4G of a normal parachute drop is a good starting point. I'm not surprised that your friend's colleagues described him as "instantly gone". I've seen a friend accelerate downwards at 1g when caving (he was on a safety line!) and my recollection is that one moment he was there and the next he was gone. A 4G deceleration would indeed seem virtually instant.
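Since parachute drag scales roughly with the square of airspeed, one can scale the quoted 3-4G figure from a ~120 mph free fall up to 160 knots. A rough back-of-the-envelope sketch (the drag-scaling assumption and the round-number speeds are mine, not measured values):

```python
MPH_TO_MS = 0.44704
KNOTS_TO_MS = 0.514444

v_freefall = 120 * MPH_TO_MS    # typical free-fall terminal velocity, ~53.6 m/s
v_exit = 160 * KNOTS_TO_MS      # aircraft speed at exit, ~82.3 m/s

# Drag force (hence deceleration, for a given jumper mass) scales as v^2
scale = (v_exit / v_freefall) ** 2
g_low, g_high = 3 * scale, 4 * scale
print(f"Scaled opening shock: roughly {g_low:.0f}-{g_high:.0f} g")
```

So a peak of very roughly 7-9 g at the moment of opening, dropping quickly as he slows — consistent with "bruised but alive".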
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can I use sound/resonance to clean sewers? This probably doesn't fit into the realm of regular questions ; it is more of an applied rather than theory/math question ... Anyway, I'm curious whether a metre diameter speaker fitted over a manhole may dislodge any blockage using the principle of resonance. Obviously blockage would be best dislodged at a frequency specific to the blockage. If this silly thought is practicable - would infra-sonic, or ultra-sonic frequencies serve better (as a rule of thumb)?
I'd be very surprised if a lump of sludge blocking a sewage pipe had any useful resonance. The idea of using a resonance is that the amplitude of oscillation builds up rapidly in response to the sound. However, this will only happen if the oscillation has a high Q, i.e. if it doesn't dissipate much energy. For a wine glass this is a good approximation, hence the legendary ability of opera singers to break wine glasses (I think, though I wouldn't swear to it, that this is an urban myth). I would guess that the gunge blocking your typical sewage pipe has very high dissipation, so you wouldn't be able to build up any significant resonance and therefore the sound would have little effect on it. If you're interested in pursuing this further, Wikipedia has a good article on resonance. See http://en.wikipedia.org/wiki/Resonance for the details.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What gives vermiculite its insulative properties? I know that vermiculite is used in insulation applications. I found this notion of the R-value of vermiculite, but I don't know if it's true. Basically I want to know: is vermiculite's property of expanding at a temperature of 870 degrees Celsius what gives it this high R-value? I guess not, because otherwise you couldn't use it in non-heated applications (like dwelling walls).
Vermiculite is a type of clay. Clays in general form structures composed of sheets made from two layers of tetrahedrally bonded silica with octahedrally co-ordinated metal ions in the middle. See http://en.wikipedia.org/wiki/Montmorillonite for a typical structure (montmorillonite is an archetypal clay much beloved of colloid scientists). Anyhow, in clays the sheets are loosely bound to each other and easily separated. For example, in montmorillonite this can be achieved just by suitable ion exchange, and you get a clear gel of separated sheets in water. Vermiculite doesn't do this, but if you heat the solid the sheets will separate to form a fluffy expanded solid with lots of entrained air. The resulting material is an excellent insulator because it contains so much trapped air. It's not the best insulator, but it's particularly good in construction because it's non-flammable, so it presents no fire risk. So it's not the expansion at 870°C that gives vermiculite its high R-value; it's because it has been heated to 870°C (then cooled again) to give the fluffy expanded structure. The expansion of the vermiculite is called exfoliation, and it's normally attributed to vaporisation of water trapped between the silicate layers. I'm not sure how much fundamental research has been done on the mechanism.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Will a stone thrown in space move forever? If I throw a stone in space, in a place where gravity is equal zero, and the space had no end, and no objects to collide with, will the stone move forward forever, because no air, so no friction?
From the perspective of General Relativity, assuming we can ignore interactions with intergalactic gas and the CMB, a thrown stone follows a curve called a geodesic. In general geodesics go on forever, so your stone will be moving forever, just as Lev said in his answer. However, there are circumstances in which geodesic curves appear not to go on forever. I say "appear" because future theories of quantum gravity will probably change things, but at the moment we think that a geodesic that leads into a static black hole will just end when it hits the singularity at the center of the black hole. This idea is called geodesic incompleteness. So the answer to your question is that the stone will almost certainly go on forever, unless it hits a black hole. Even then it would have to be a static black hole, because for charged and rotating black holes the stone could miss the singularity and emerge again (into a different universe, but that's another story!). Later: Oops, I've just seen Logan's comment and you did say "where gravity is equal to zero", so my comments about black holes don't apply. Still, I think the idea of geodesic incompleteness is interesting enough to warrant a mention.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
What is the probability that a star of a given spectral type will have planets? There is a lot of new data from the various extrasolar planet projects including NASA's Kepler mission on extra-solar planets. Based on our current data what is the probability that a star of each of the main spectral types (O, B, A, etc) will have a planetary system?
This question was asked a couple of years ago and things have changed since then. We now know that small planets are found around stars across a broad range of metallicities and that it is only the existence of giant planets that are affected by low metallicity. Nature article here. It was previously thought that small planets were more common around small stars but the latest Kepler results show that small planets are equally common around stars of all spectral types. See this AAS press conference. "After accounting for false positives and the effective detection efficiency of Kepler as described above, we find no significant dependence of the rates of occurrence as a function of the spectral type (or mass, or temperature) of the host star. This contrasts with the findings by Howard et al. (2012), who found that for the small Neptunes (2–4R⊕) M stars have higher planet frequencies than F stars." (Preprint here)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
A method to estimate the relative magnitude of a star using nearby stars I remember a method to make a reasonable estimate of the magnitude of a given star X by using two other stars of known magnitude as references. The method used evaluation phrases like "the star X and the reference star appear to have the same brightness, and this sensation remains even after prolonged observation" or "X and the other star appear to be the same, but after some time, X appears slightly brighter" or "X and A are completely different in brightness". Each of these phrases had a value (IIRC from 1 to 5), and then with a formula you could estimate the magnitude of X from the magnitudes of the reference stars and the values obtained from the comparison phrases. Do you know the name of the method, and the exact protocol?
This method sounds way too complicated to me. I'm an experienced variable star observer, and use the methods recommended by the American Association of Variable Star Observers. The AAVSO publishes detailed charts for thousands of variable stars with comparison stars marked to the nearest tenth of a magnitude. The brightness of the variable is estimated using interpolation between the various comparison stars. One thing in your description is very wrong: you must NEVER stare at a variable star for an extended period, as that will lead to serious overestimation of the brightness of red stars. Complete instructions are available in the AAVSO manual, which can be downloaded free from: http://www.aavso.org/visual-observing-manual Charts can also be downloaded free from that site.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What causes millisecond pulsars to speed up? Millisecond pulsars are supposed to be old neutron stars. However, they are spinning even more rapidly than newly formed pulsars. Since pulsars slow down as they age, something must have caused these older pulsars to "spin up" and be rotating as fast as they are. What is the mechanism for doing so?
There are a couple of pieces of observational evidence that support the explanation Jeremy provided. Many millisecond pulsars have been found via X-ray or gamma-ray observations, which are interpreted as showing accretion in a disk onto the surfaces of the pulsars. The infalling material speeds up the rotation of the pulsar due to conservation of angular momentum. Pulsars which are in the process of "consuming" the mass of a companion star are often called "black widow pulsars". Currently, something like 30% of millisecond pulsars are thought to be isolated, with 70% in binary systems. Two systems are known to have planetary-mass companions, the most recent having been discovered last year (Transformation of a Star into a Planet in a Millisecond Pulsar Binary). While three-body encounters may account for some of the solitary millisecond pulsars known, some millisecond pulsars may completely consume their donor stars. Some authors have argued that one of the millisecond pulsars with planets likely formed as a millisecond pulsar, with a very low magnetic field (Implications of the PSR 1257+12 Planetary System for Isolated Millisecond Pulsars). Since the rate of "spin-down" of a pulsar, that is, how quickly its period gets longer, depends on the strength of its magnetic field, a pulsar with a weak magnetic field will stay at its rotation speed for a much longer time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/24990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Mixing and matching across eyepiece designs / manufacturers? Right now, I am considering moving from 1.25" eyepieces to 2". While I'm convinced of the quality of the premium eyepieces, it would take me years to afford a complete set and, if I go that route, I will necessarily pick them up piecemeal. Would I be wise to follow that route, knowing that in a few years time I'll have not wasted any money? Or would I be wiser to buy a complete set of design X, which have great price/performance ratios and slowly replacing them with premium eyepieces? Or would I be wiser to buy a few of type Y for higher-power viewing, which are still pretty expensive, but not as expensive as the 100 degrees field-of-view eyepieces, but maybe just buy 1 of those for wide-angle viewing? Or another strategy? I like to look at everything, so ultimately I do want a pretty wide range of eyepieces. But I kind of hate that with my current mish-mash, I have a "lumpy distribution" in quality.
Regarding your "lumpy" distribution of eyepiece quality: think about which eyepieces you use most of the time. I recommend filling the low-quality dips piecemeal with your desired type of premium eyepiece, first replacing the eyepiece you dislike but use often. Upgrade from the middle out. The eyepieces at the extremes of the range are probably used the least. Skip buying eyepiece sets that include sizes you'll use very little and that you'll want to sell (at a loss) in the near future. This is especially true if you know you will want to replace these eyepieces with better ones in the near future. Look for eyepieces that are parfocal with what you already own. That is, they have the same focus point, so you can switch quickly between your eyepieces when transitioning from finding to viewing an object.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
How large is the universe? We know that the age of the universe (or, at least the time since the Big Bang) is roughly 13.75 billion years. I have heard that the size of the universe is much larger than what we can see, in other words, much larger than the observable universe. Is this true? Why would this not conflict with what is given by Special Relativity (namely, that faster-than-light travel is prohibited)? If it is true, then what is the estimated size of the universe, and how can we know?
It's easy to underestimate the size of the Universe because we are concentrating on what we see. What we see when we look at the Cosmic Microwave Background is a look back 13.8 billion years ago to when the Universe was only 1/1000 as large as it is now. Take a piece of the sky as visualized on maps of the cosmic microwave background (CMB). A piece that is only as large as the Moon, one-half degree across. The entire sky is composed of about 160,000 of such pieces. That piece of Universe has now expanded into an entire volume similar to the one that we are in. Right now, they could look at their CMB map, and see the part of the Universe that we are in as a small part of that CMB map of theirs. (If we waited 13.8 billion years to watch their part of the Universe expand, we would not see it at all because the expansion will soon cause that area to recede from us much faster than the speed of light!) So the entire size of the Universe at present, that is represented by what we see on our map of the CMB from 13.8 billion years ago, has about 160,000 times the volume of the currently observable universe, or a radius about 50 times as large, that is about 700 billion light years! The Universe is actually much larger than that, if not indeed infinite, if we consider those portions that started receding from us faster than the speed of light before the epoch of the CMB. Note that this is all very approximate, depending on the details of the expansion.
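The "radius about 50 times as large" figure follows directly from the patch count: if the present Universe contains ~160,000 volumes like our observable one, the linear scale factor is the cube root of that. A quick check (the 13.8 Gly base radius is the crude scale used in the answer):

```python
patches = 160_000                    # half-degree patches tiling the sky
radius_factor = patches ** (1 / 3)   # volume ratio -> linear (radius) ratio
r_observable_gly = 13.8              # rough radius scale, in billions of light years

print(radius_factor)                           # ≈ 54, i.e. "about 50"
print(radius_factor * r_observable_gly)        # ≈ 750 Gly, i.e. ~700 billion ly
```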
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 4 }
Optimal Angular Field of View (AFOV) Given the rather huge price differences between eye pieces at the same focal length. How exactly does the AFOV affect the view seen through the eyepiece? Are higher / lower AFOV better for certain situations? or is higher always better?
There are actually two different fields of view to consider. The apparent field of view is the apparent view you see in the eyepiece, typically 35° to 110°. The actual field of view is how much of the sky you are actually seeing, typically from a few degrees to a few arc minutes. The actual field of view is the apparent field of view divided by the magnification. The main advantage of a wide apparent field of view is the feeling of immersion in the view and presenting the object in context. Typically eyepieces with apparent fields of view in excess of 80° are described as giving a "moonwalk" experience. With these eyepieces you usually can't see the edge of the field of view, so it's like sticking your head out a window, rather than seeing the view framed in a window. A secondary practical advantage of eyepieces with a wide apparent field of view is that they are more versatile than eyepieces with a narrower view. A wide-angle eyepiece typically takes the place of at least two narrower-angle eyepieces, giving you a wide context and fine detail at the same time. I find that I can do most of my observing with only two or three wide-angle eyepieces. Eyepieces with narrow fields survive because they are less expensive, and also because their simpler optical design (fewer lens elements) allows more light throughput and better contrast. Serious planetary observers typically use orthoscopic or monocentric eyepieces because of their high contrast. Planets are small in angular size, and don't need a wide field of view.
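The relationship above is a one-line calculation (the eyepiece figures here are chosen purely for illustration):

```python
def true_fov(apparent_fov_deg, magnification):
    """True (actual) field of view on the sky, in degrees."""
    return apparent_fov_deg / magnification

# An 82° wide-angle eyepiece vs a 50° eyepiece, both at 120x:
print(true_fov(82, 120))   # ≈ 0.68° of sky
print(true_fov(50, 120))   # ≈ 0.42° of sky
```

At the same magnification, the wide-angle design simply shows you more sky, which is why one of them can replace two narrower eyepieces.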
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the current status of Pluto? Pluto has been designated a planet in our solar system for years (ever since it was discovered in the last century), but in 2006 it was demoted. What caused this decision? And is there a chance that it could be reversed? Edit: well, http://www.dailygalaxy.com/my_weblog/2017/03/nasas-new-horizon-astronomers-declare-pluto-is-a-planet-so-is-jupiters-ocean-moon-europa.html is interesting; this is science, so anything could (potentially) change.
I don't really care what the IAU voted. Pluto will always be a planet in my book. Astronomy is full of historic inaccuracies that we perpetuate for tradition sake. Some examples that come to mind are early/late-type galaxies, Population I/II/III stars, and brown dwarfs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 5 }
Seeing cosmic activity now, really means it happens millions/billions of years ago? A Recent report about a cosmic burst 3.8 billion light years away. It is written as though it is happening now. However, my question is, if the event is 3.8 billion light years away, doesn't that mean we are continuously looking at history, or is it possible to detect activity in "realtime" despite the distance?
This is true, but you should not forget that comparison of times depends on the frame of reference by relativity theory. In particular, from the point of view of the travelling light, no time has passed. You might want to look at the cartoon at http://xkcd.com/811/ Don't forget to read the mouse-over text.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Does Mercury have a balmy spot? From Wikipedia: Although the daylight temperature at the surface of Mercury is generally extremely high, observations strongly suggest that ice exists on Mercury. Does that mean there could be a spot on Mercury where a person could stand and it would be a balmy 80° F (27° C)? Obviously, let's ignore any radiation issues, etc.
The observations of ice are at the poles, in permanently shadowed craters, much as people think there may be stable, solid ice in permanently shadowed regions of the lunar south pole. While one atom that's -50°C right next to another that's +200°C would equilibrate almost immediately, there will be a very tiny transition zone of temperature between the permanently shadowed crater with ice (if it exists) and the much hotter surface that's in the sun. However, this would be the temperature of the surface. When I think of "balmy" I think of palm trees, a warm sun, and a gentle breeze. There is no atmosphere to speak of on Mercury, so there would be no air in which you would experience this "balmy" surface temperature in the very very narrow transition region.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can I stabilize an unstable telescope? I have an 80 mm refractor telescope on a tripod, but it shakes on every touch. It's very hard to see via 6 mm (x120) ocular. Even a little wind causes the image to become too unsteady. How can I make my tripod more steady?
First make sure all your screws are tight, and that there isn't any shaking because of slack in any areas where things connect to each other. Another thing you can do is buy vibration dampening pads to put your tripod on. Finally, you can add counter weights and pendulum weights to the tripod to give it more mass to withstand the wind and touches.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
Why can't you escape a black hole? I understand that the event horizon of a black hole forms at the radius from the singularity where the escape velocity is $c$. But it's also true that you don't have to go escape velocity to escape an object if you can maintain some kind of thrust. You could escape the earth at 1 km/h if you could maintain the proper amount of thrust for enough time. So if you pass just beneath the event horizon, shouldn't you be able to thrust your way back out, despite the $>c$ escape velocity? Or does this restriction have to do solely with relativistic effects (time stopping for outside observers at the event horizon)?
It is actually possible to get out of the horizon. To get out, one only has to reach a speed higher than c. But no object that carries information can reach that speed; this would violate causality. So only things that carry no information can get out from inside the horizon. One such thing is Hawking radiation (not only photons but also particles), which can be viewed as quantum tunnelling out of the BH. Note that in all cases of tunnelling (say in nuclear decay) the emitted particle reaches a higher-than-c speed. This does not violate causality because the process is probabilistic and as such cannot be used for information transfer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 6, "answer_id": 1 }
How can Voyager 1 escape gravity of moons and planets? I think this one is pretty simple so excuse me for my ignorance. But since most planets in our solar system are very well tied to their orbit around the sun or orbit around their planet (for moons), I was wondering how can a really small spacecraft such as Voyager 1 avoid getting stuck into these orbits and avoid the gravitational force of these huge objects. It's probably as simple as doing some math, but I imagine that it's because the spacecraft is so small compared to other objects that it makes it quite easy for it to escape gravity. It's not easy for large objects (like moons) to escape their host planet gravity because they're much bigger, right?
You are absolutely right. Considering $r$ constant, Newton's law of universal gravitation, $ F = \dfrac{Gm_1m_2}{r^2}$, says that the force of attraction between two objects with masses $m_1$ and $m_2$ depends on the product of their masses. Hence, even though one object has a very large mass (here, the planets), the force can still be small because the other (here, the satellite) has a small mass. Also, as the force depends on the inverse square of the distance, the satellite would feel a very strong force from a planet if it came very near to it. But before we set the whole course of the satellite's journey, we make sure that it does not give the planets the opportunity to swallow it.
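The distance dependence matters more than the spacecraft's own mass, because the acceleration it feels is independent of its mass: $a = GM/r^2$. A small illustration with round-number constants (the 30 AU comparison distance, roughly Neptune's orbit, is my own choice):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def accel(M, r):
    """Gravitational acceleration toward a body of mass M at distance r (m/s^2)."""
    return G * M / r**2

print(accel(M_SUN, 1 * AU))    # ≈ 5.9e-3 m/s^2 at Earth's distance
print(accel(M_SUN, 30 * AU))   # 900x weaker out near Neptune's orbit
```

This is why a small, fast probe on a carefully chosen trajectory can coast past planet after planet: far from any one body the pull is tiny, and mission planners simply keep the close approaches under control.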
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Which is the heaviest present day lifter (rocket)? And is it comparable to the Saturn V rocket? I know of the Ariane 5 ECA, the Delta IV rocket and a few more, but which of the present day's rockets is the top heavy lifter, say, to low Earth orbit (LEO)? Although it is not a certain fact, I would imagine that a very heavy lifter to LEO is also good for placing objects into geostationary transfer orbit (GTO) and could be a good candidate for out of Earth orbit flight (for instance a trip to the Moon).
The Saturn V payload mass to LEO was 118,000 kg. Wikipedia has a decent comparison of super-heavy launch systems with a payload mass to LEO of 50,000 kg or more. None are in current use, and only two systems are in development. There is also a "heavy" lift launch system list which includes the Delta IV and Ariane 5 you mentioned. The top operational system is the Atlas V HLV with a mass to LEO of 29,420 kg and a mass to GTO of 13,000 kg. However, it has never been launched, and the United Launch Alliance claims it needs a 30-month lead time to produce the Heavy Launch Vehicle variant of the Atlas V. Next on the list, with mass to LEO/GTO:

* Delta IV Heavy: 22,950/12,980 kg, 3/4 successful launches.
* Proton: 21,600/6,360 kg (comparatively lower GTO due to launch location), 295/335 successful launches.
* Ariane 5: 21,000/10,050 kg, 54/58 successful launches.

So the answer is there are no currently operational launch systems which approach the Saturn V mass-to-LEO capability.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is dark matter around the Milky Way spread in a spiral shape (or, in a different shape)? Dark matter doesn't interact with electromagnetic radiation, but it, at least, participates in gravitational interactions as known from the discovery of dark matter. But does dark matter exist in a spiral shape around our galaxy?
Disclaimer: I don't do dark matter, I don't work with anyone who does, and I haven't read enough papers to make a difference. I have seen a colloquium and a couple of seminars by people who do dark matter. Take what follows with that in mind. Dark matter either not having been observed, or having served up exactly one unconfirmed observation, there is no widely agreed theory of its behavior yet. There are models of cosmological evolution that posit certain traits for the dark matter and, when run forward from the presumed conditions of the early universe, give something that resembles the universe we see today. In those models, the dark matter distribution is roughly spherical and rather larger than the luminous portions of the galaxy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 1 }
Are we going to be able to travel through space by deforming space-time? I'm not talking about the speed of the spaceship. If we can deform space-time, we don't need any type of propulsion. And how would the travel affect its pilots? Could they survive?
The Alcubierre drive has problems far worse than the energy violations: you need to distribute the exotic matter in a space-like direction before you can ride over it. Even if you could produce enough exotic matter, you would have to distribute it first by conventional travel. No amount of exotic matter will avoid this. If you use exotic matter to stabilize some quantum-foam wormhole to a macroscopic scale (and there is the big question of where the other end takes you, but let's assume for the sake of discussion that you always get the two ends nearby), you'll still have to move one of the ends by conventional travel to wherever you want to set up shop. Energy-condition-violating fields are allowed by the laws of thermodynamics as long as creating them requires at least the same amount of entropy that they could take away from a black hole (just like Hawking radiation does). We know this is also physically possible because squeezed vacuum has already been created experimentally for a single EM mode. You would still have to squeeze a gigantic range of the EM spectrum (probably down to scales too small for normal materials to create) to localize enough dark energy to stabilize one of these 28th-century toys.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to measure the diameter of a star? I am thinking about something I read somewhere (if only I could find it again) in a textbook. It is about the size of a star and its ER peaks. It has to do with the waves coming off the edge (maybe) and arriving later than those from "head on" and therefore you can know something about its diameter. It has been puzzling me but I can't quite remember it. Just yesterday I was reading about a black hole that pulses at a minimum of 10 minutes and so it is at least 10 light-minutes across. (I probably am not getting it just right, but help me out!) Is this the same principle? Anyone know what I'm talking about? I would love to have this explained and/or would like to know what it is called, so I can look it up.
To calculate the linear diameter of a star, we need only know its effective temperature, its bolometric correction, and its absolute magnitude. If, instead of the absolute magnitude, we know the apparent magnitude, we can instead calculate the angular diameter. The formula used to determine the size of a star appeared in the article Stellar Masses by the astronomer Daniel M. Popper (Annual Review of Astronomy & Astrophysics, 1980 edition, pages 115-164). Although this article does not specifically address stellar diameters (it is mainly about the calculation of masses in eclipsing binary systems), the data and equations Popper offers are adequate for a good approximation of the diameter of a star. The formula is: $$\log R = -0.2\,M_V - 2F_V + 0.2\,C_1$$ where $\log$ represents the logarithm in base 10, $R$ is the radius of the star expressed in solar units (equivalently the diameter, since it is a comparative rather than absolute figure), $M_V$ is the absolute magnitude of the star (in the V filter), $F_V$ is a function of the luminosity per unit surface area, and $C_1$ is a constant whose value is approximately 42.3615 (Popper uses the value 42.255). I hope this is useful.
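As a sanity check of the relation (taking $F_V = \log T_\mathrm{eff} + 0.1\,\mathrm{BC}$, the surface-brightness-like function of Popper's paper — my reading of the garbled formula), plugging in solar values should return $\log R \approx 0$:

```python
import math

def log_radius(Mv, Teff, BC, C1=42.3615):
    """log10(R / R_sun) from Popper-style photometric quantities.

    Assumes Fv = log10(Teff) + 0.1*BC (effective temperature in K,
    bolometric correction in magnitudes)."""
    Fv = math.log10(Teff) + 0.1 * BC
    return -0.2 * Mv - 2.0 * Fv + 0.2 * C1

# The Sun: Mv ≈ 4.83, Teff ≈ 5772 K, BC ≈ -0.07 → log R should be ≈ 0
print(log_radius(4.83, 5772, -0.07))
```

The result comes out within a few thousandths of zero, so the reconstructed coefficients are at least self-consistent for solar-type stars.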
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
How much energy does a super nova generate? For a scene in a SciFi book, I want to know: Is it possible to estimate how much energy per m² an object would receive that hides behind an in-system planet when the sun goes nova?
Is it possible to estimate? Yes. I'll give it a quick try. But the details of whether the planet will be incinerated and so on will make the reality much more complicated. As a ballpark, I think supernovae release about $10^{53}$ erg of energy. Spread over a sphere of, say, 1 AU gives $3.55\times10^{22}$J.m$^{-2}$. This energy isn't all released in one go and I don't know how much is radiative or kinetic. If its released over, say, 20 days, that gives $2.06\times10^{16}$W.m$^{-2}$ For comparison, the Sun emits 1368 W.m$^{-2}$, or 15 trillion times less. The timescale is roughly the time it takes for observed supernova luminosities to rise to a peak but much shorter timescales might be relevant. About 1% of that energy is released in a few seconds in a neutrino burst, but they don't interact much. Also, 1 AU is pretty arbitrary. A star that undergoes core-collapse must be bigger than the Sun, so its habitable zone would be much further away. 100 AU might be just as reasonable and reduce the energy flux by a factor of 100$^2$. To estimate further, you could work out how much energy your planet would absorb based on its cross-section and compare that to its gravitational binding energy to get a rough guess about whether it would survive the blast. Hope this helps though.
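The arithmetic above is easy to reproduce; all the numbers, including the 20-day release window, are the same rough assumptions made in the text:

```python
import math

E_total = 1e53 * 1e-7                        # 1e53 erg converted to joules
r = 1.496e11                                 # 1 AU in metres
fluence = E_total / (4 * math.pi * r ** 2)   # energy per unit area, J/m^2
power = fluence / (20 * 86400)               # averaged over 20 days, W/m^2
print(f"{fluence:.2e} J/m^2  {power:.2e} W/m^2  {power / 1368:.1e} x solar")
```

Moving the planet out to 100 AU divides both figures by 100^2, as the answer notes.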
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What is the simplest way to prove that Earth orbits the Sun? Assume you're talking to someone ignorant of the basic facts of astronomy. How would you prove to them that Earth orbits the Sun? Similarly, how would you prove to them that the Moon orbits Earth?
So-called 'stellar aberration', the shifting of the apparent positions of stars by up to 20 arc seconds towards the direction the Earth is moving in its orbit, was the first method available to 'simple' equipment in the 19th century, namely transit telescopes. By the early 20th century, the annual variation of the Doppler shift of stellar spectral lines, caused by the Earth's 30 km/s orbital motion, was readily detectable. Now, I would say the easiest way would be observations of the annual Doppler shift of the 21-cm line of galactic neutral atomic hydrogen, by amateur radio astronomers. Perhaps others can now measure annual variations of the apparent temperature of the cosmic microwave background. Stellar parallax, while a very small effect (less than one arc second), is easy to measure with modern amateur equipment, by imaging the same piece of sky with a nearby star three times, with six months between images.
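Both effects mentioned here are quick to quantify (a sketch; 29.78 km/s is the standard mean orbital speed, close to the 30 km/s quoted above):

```python
import math

c = 299_792.458   # speed of light, km/s
v_orb = 29.78     # Earth's mean orbital speed, km/s

# Aberration angle ~ v/c (in radians), converted to arc seconds:
aberration_arcsec = math.degrees(v_orb / c) * 3600

# Maximum annual Doppler shift of the 21-cm hydrogen line:
f0_MHz = 1420.405751
shift_kHz = f0_MHz * (v_orb / c) * 1000
print(f"aberration {aberration_arcsec:.1f} arcsec, 21-cm shift +/-{shift_kHz:.0f} kHz")
```

A swing of roughly 140 kHz over the year is well within reach of amateur hydrogen-line receivers.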
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 7, "answer_id": 1 }
Do days and months on the Moon have names? On Earth we have various calendars, for example, Days: Monday, Tuesday, Wednesday, etc., etc. Months: January, February, March Does the Moon have names for its "daily" rotations, etc.? It sounds like a silly question, and I am not sure if I've asked it using the correct terminology. I suppose what I'm trying to ask is; from a viewpoint of someone living on the Moon - does it have "day" names?
As 1 lunar day is roughly 27 Earth days, I doubt they'd have names for days. Most likely, if there was ever a permanent base or something like the ISS, they would use Earth days and months. In other words, Monday on the Moon would still be 24 hours long and would not be split roughly into day and night the way it is over the majority of the Earth. It would make most sense to keep in sync with the primary location on Earth that they communicate with, meaning the people on the Moon would be awake at the same time as the people they officially communicate with. Until a large, truly permanent settlement is created which is minimally reliant on Earth, they won't have their own days or anything; even then they might not. It depends on whatever is easiest.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Given a photo of the Moon, taken from Earth, is it possible to calculate the position of the photographer's site? Given a photo of the Moon, taken from Earth, is it possible to calculate the position (Earth longitude and latitude) of the photographer's site? I am thinking about photos taken with a normal camera lens and not with a telescope, for example photos taken with a 300 mm lens like these: http://www.flickr.com/search/?q=100+300+moon&l=cc&ct=0&mt=all&adv=1 I assume the photo shows enough detailed features like recognizable craters and maria. Is there any software capable of solving the problem? Thank you. Alessandro Addendum: A diagram of a simplified model of the problem: R is the radius of the Earth. D is the (average) distance from the center of the Moon to the center of the Earth. P is the photographer's site on the surface of the Earth.
This is a great question to tie in the quest to determine longitude as a driver of astronomy. Briefly, your question is exactly in line with the thinking of any number of ambitious 18th century scientists, inventors, and natural philosophers. And @BradC's fantastic movie of the libration of the moon gives an idea of exactly how frustrating nature can be. Another way of visualizing the complexity is this IDL source code -- and even that is only accurate within a few arc-seconds, which isn't going to help you avoid any reefs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/25968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 1 }
Do nearby gamma ray busts/supernova damage more than just the ozone layer? So we know that many people are putting hard constraints on the galactic habitability zone based on the presence of nearby supernova/gamma ray bursts. But if they only affect the ozone layer, then I doubt that it's as hard of a constraint as many people think it is. For one thing - there is practically no ozone layer around the planets of red dwarfs (and possibly even low-mass K-stars like Epsilon Eridani and Alpha Centauri B - IMHO, K-stars offer the best prospects for life on other planets.) With this information, I am wondering wondering about the outcome, particularly in regard to life: would a nearby supernova really do so much damage to planets around those stars? For instance, would a supernova really cause more damage than, say, the K/T extinction event 65 million years ago? Also, given that much marine life is shielded from UV rays by layers or ocean water, is it really going to cause significant amounts of damage to such life in that environment? As a side note, maybe this about surviving gamma ray bursts is relevant for complex life too (although this response might be imperfect for now.)
Dr Phil Plait covers the effects of a supernova near an inhabited planet extensively in his book Death from the Skies. Basically, it would have to happen at a distance closer than 25 light years. Given that constraint, and looking at our stellar neighborhood, no stars are candidates for a supernova explosion that would wipe us out. A GRB is a different beast entirely, and the distances involved are closer to thousands of light years. However, the magnetic fields surrounding a candidate star must then align perfectly to cause such an extinction event. The effects on the Earth are very unlikely to occur due to the rarity of GRBs in our galaxy and the alignment required, although it has been posited that one such event already occurred.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
What are the chances that a deadly asteroid will hit Earth in the next decade? What are the chances that an asteroid that will kill multiple people will hit Earth in the next decade?
Hm. To answer that question, one would simply count such occurrences over a given time in the past and divide by the timespan, then apply some correction for the increased number of cities in the last centuries. The only bigger impacts reported by people were the Tunguska event in 1908 and one in China in 1490, in which 10,000 people died according to the German Wikipedia article on asteroid impacts and the corresponding article in English, though the article notes that at least some astronomers find the number of deaths implausible. There I also found an estimate of an impact big enough to change the climate every 500,000 to 10 million years (not very accurate, is it?). I can sell you an insurance policy. ;)
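Taking the count-and-divide approach literally gives a toy Poisson estimate. Two events in roughly five centuries is a tiny and partly disputed sample, so treat the number as an order of magnitude at best:

```python
import math

events = 2                  # Tunguska 1908 plus the disputed 1490 China event
span_years = 2012 - 1490
rate = events / span_years  # deadly impacts per year, very roughly

# Poisson: probability of at least one such event in the next 10 years
p_decade = 1 - math.exp(-rate * 10)
print(f"~{p_decade:.1%} per decade")
```

That works out to a few percent per decade, which is why the insurance joke isn't entirely a joke.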
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 2 }
Is there such a thing as "North" in outer space? On Earth, North is determined by the magnetic poles of our planet. Is there such a thing as "North" in outer space? To put it another way, is there any other way for astronauts to navigate besides star charts? For instance, if an astronaut's spaceship were to be placed somewhere (outside of our solar system) in the Milky Way galaxy, would there be a way for them to orient themselves?
Actually, North on the Earth is determined not by the Earth's magnetic field, but by the apparent motion of stars as the Earth rotates. Magnetic North is not toward the northern end of the spin axis of the Earth, in general. There are easily identifiable distant galaxies, and pulsars, that can serve very well as navigational beacons for interstellar navigation and even for navigation outside our galaxy. From some perspectives the 10 million year lifetime of a pulsar is too short. Navigation within our galaxy using pulsars as beacons would be a bit messy, since the EM radiation emission of a pulsar is in a cone that seems to be 6 to 15 degrees wide, whose axis itself sweeps in a cone as the pulsar rotates. That means a pulsar will typically be visible from at most about 10% of the sphere surrounding the pulsar. But it also means, because there are something like 1000 known pulsars within 2500 light years and the Milky Way is ~ 100,000 light years across, that there must be at least 1.6 million pulsars in our galaxy. Depending on where our galactic explorers are in the Milky Way, some pulsars would be visible and others would not be, but there should always be at least a few hundred pulsars visible from any point in or near our galaxy. Edit 9/25/2019: Although pulsars and distant galaxies can serve as navigational beacons from which to triangulate position, pulsars offer an additional bonus: Measuring the relative phases of a set of pulsars whose period is in the millisecond range should be able to provide a GPS-like accuracy much better than one light-millisecond. The distance from the Earth to the Moon is around 1.3 light-seconds. One light-millisecond is just 186 miles.
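The distances quoted at the end are easy to check (a quick sketch; the Earth-Moon distance of 384,400 km is the standard mean value):

```python
c = 299_792_458.0                    # speed of light, m/s
light_ms_km = c * 1e-3 / 1000        # one light-millisecond, in km
light_ms_miles = light_ms_km / 1.609344
earth_moon_light_s = 384_400e3 / c   # Earth-Moon distance in light-seconds
print(f"1 light-ms = {light_ms_km:.0f} km ({light_ms_miles:.0f} mi); "
      f"Earth-Moon = {earth_moon_light_s:.2f} light-s")
```

So a positioning error well under one light-millisecond means pinning down a ship to better than the Earth-Moon distance divided by a thousand or so.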
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 2 }
Meteorites from Mars? So I've heard of meteorites "originating from Mars" (e.g. AH84001), but the phrase confuses me. I'm interested in what this means - have these rocks somehow escaped Mars' gravity and ended up here; or were they part of the material that Mars formed from, but did not end up as part of the red planet? Or another explanation?
These are chunks of rock that existed as part of the crust of Mars but were ejected into interplanetary space by a very powerful impact and then eventually impacted the Earth. It wasn't until we had sent probes to Mars and began to understand the composition of martian minerals and atmosphere that we started realizing some of the meteorites we had already found were from Mars. The clincher has been the analysis of trapped gasses within meteorites. Nearly 100 Martian meteorites are known to have been found. Interestingly, most Martian meteorites fall into only 3 mineralogical categories (Shergottites, Nakhlites, and Chassignites, or SNC meteorites) and had been identified as being unusual as meteorites even before they were confirmed to have originated on Mars. Similarly, there are likely some number of Earth meteorites on Mars.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Can the Hanbury-Brown and Twiss effect be used to measure the size of composite objects like galaxies? I know that the Hanbury-Brown and Twiss effect can be used to measure the size of stars. Can it also be used to measure the size of galaxies?
(2) & (3) are both generally correct. The HBT effect is in essence a property of detection, not emission (the sources are incoherent). H&B observed Sirius, which at 8.6 light years (ly) distance and 1.7 times bigger than the sun, is 6 milli-arcseconds (mas). A 140,000 ly diameter galaxy (like Andromeda) at 10 billion ly has an angular size of 2900 mas. The original Sirius observation had a 6m baseline, so the galactic observation's HBT correlations would occur over a baseline of 1.2 cm--a much more difficult experiment to do-- not to mention the sparsity of photons. At 10Bly, the Andromeda galaxy would be magnitude 21.4, compared to Sirius's -1.46; that is, 1.4 billion times dimmer, and hence, 1.4 billion fewer photons.
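The scaling in this answer can be reproduced directly: the correlation baseline shrinks in inverse proportion to the angular size, and the magnitude difference fixes the photon deficit (a sketch using the numbers above):

```python
sirius_mas, sirius_baseline_m = 6.0, 6.0
galaxy_mas = 2900.0

# Correlation baseline scales inversely with angular size (at fixed wavelength):
galaxy_baseline_cm = sirius_baseline_m * (sirius_mas / galaxy_mas) * 100

# Flux ratio from the magnitude difference (2.5 mag per factor of 10 in flux):
flux_ratio = 10 ** ((21.4 - (-1.46)) / 2.5)
print(f"{galaxy_baseline_cm:.1f} cm baseline, {flux_ratio:.1e}x fewer photons")
```

A centimetre-scale baseline combined with a billion-fold photon deficit is what makes the galactic version of the experiment so unforgiving.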
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Why don't we have a better telescope than the Hubble Space Telescope? The Hubble Space Telescope (HST) was launched in 1990, more than 20 years ago, but I know that it was supposed to be launched in 1986, 24 years ago. Since it only took 66 years from the first plane to the first man on the Moon, why don't we have a better telescope in space after 24 years?
Money and willpower. With any program (scientific, military, public works, etc.) it all depends on the amount of money someone is willing to put toward it, and how much backing and protection that program has against getting its money re-prioritized to other projects. You are presenting a false dichotomy in treating our past actions as proof of what we should have been able to do since. Given the decisions made on many levels (i.e. to fight several wars, cancel various lift vehicle programs, etc.), that comparison just doesn't translate very well. Keep in mind that getting to the Moon was all part of the "Space Race", which had many layered motivations, with science perhaps only being a side benefit of the projects. The James Webb Telescope is the next-generation telescope that is due to go up, although the JWST is optimized for the infrared spectrum. For visible-spectrum telescopes, the most ambitious space-based one planned is the Terrestrial Planet Finder. However, the Hubble is still the belle of the ball. This of course doesn't touch on the ground-based observatories we have, some of which are truly spectacular! I want to take a family vacation to Chile just to see some of them!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 2 }
What does velocity dispersion (sigma) reveal about a galaxy? I'm getting hung up on this term. In studying SMBHs, I see that velocity dispersion strongly correlates with mass. Just what is the velocity dispersion? How can the velocity dispersion of the galaxy be expressed in one figure (sigma) if it has to be measured all over the galaxy? I can imagine the velocity dispersion changes with radius, so "which" point is used? Why exactly is it that higher velocity dispersion is correlated with higher Mass? And is all this referring to the bulge only or to the entire galaxy? So to get velocity dispersion, is "one slit" at the center enough or do you scan across? I hope someone can answer this without relying heavily on the maths. I am really just trying to understand the concept, because I think that velocity dispersion is real, not just a mathematical construct.
The fact that sigma is used in statistics to refer to standard deviation is an important clue. There are lots of velocities in a typical galaxy. They have a mean and a standard deviation, which is called the velocity dispersion in this case. Directly, it reveals how fast things are going relative to each other or relative to their mean. Via Newton's laws, the velocity correlates with the mass. The faster they are going, the more massive the potential well that they are in.
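In code, sigma really is just a standard deviation, and the link to mass is the usual dimensional (virial-style) estimate M ~ sigma^2 R / G. The velocity sample and the 1 kpc radius below are invented purely for illustration:

```python
import statistics

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

# Made-up line-of-sight velocities of stars in a galaxy, in m/s:
velocities = [180e3, 210e3, 150e3, 230e3, 190e3]
sigma = statistics.pstdev(velocities)   # the velocity dispersion

R = 3.086e19                            # ~1 kpc, an illustrative radius in metres
M_est = sigma ** 2 * R / G              # order-of-magnitude enclosed mass
print(f"sigma = {sigma / 1e3:.0f} km/s, M ~ {M_est:.1e} kg")
```

Double the dispersion and the implied mass quadruples, which is the quantitative content of "faster means more massive".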
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How did micrometeorite flux change with the age of the solar system? Do we also have functions for the micrometeorite flux on the Moon, Mars, and even any random body in space as a function of solar distance too?
The short answer to both your question in the subject line (change with solar system age) and your topic (function of solar distance) is "no." We don't even know the impact flux of asteroids and comets as a function of solar distance nor time. The work that most of us in the field cite whenever discussing crater chronology (using number of craters to determine age) is from 2001 (Neukum et al.). The Neukum chronology does indicate a change in the impact rate through time, and while we all use it, we also (almost) all know it's wrong. It doesn't take into account any spikes that likely occurred when asteroid families were formed, it doesn't take into account the late heavy bombardment, and it doesn't take into account other factors. It's also formed by fitting data to basins on the moon from returned Apollo and Luna samples, but those are also disputed, especially the Copernicus crater age. So, since we don't even know the flux of objects that form kilometer-scale impacts, we certainly don't know the change with time of micrometeorites. In terms of solar distance, this is something that we know even less well. Yes, you can project from the moon to other bodies, as we do with Mars (Hartmann, 2005, for example), but that doesn't take into account things like a difference in timing for the late heavy bombardment -- it really only takes into account relative proximity to the asteroid belt and velocity of impactors. Once you go outside the inner solar system, it's completely up in the air, though people still try to do chronology based on what we know from the Moon. The problem with going outside the main asteroid belt is that the relative numbers of asteroids to comets as impactors may change dramatically ... we just don't know. So again, since we don't even know the impact rates of kilometer-sized objects very well, micrometeorites are again an unknown.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Accuracy and assumptions in deriving the Tully-Fisher relation I understand the mathematical derivation of the Tully Fisher relation from basic physics formulas, as shown on this site. However, after using the physics equations, it seems that several assumptions are made from this point on. First are statistical assumptions. There are statistical errors because the observable, luminous mass of the galaxy is less than the actual mass of the galaxy, and the mass of the galaxy is assumed to be only the observed mass, not the actual mass. Second, this relationship seems to assume that all galaxies are perfectly circular (with negligible thickness). So, with all these assumptions necessary for the Tully-Fisher relation, how is the relationship derived, how can corrections be made for the assumptions when attempting calculations using this relationship, and why is the Tully-Fisher relation generally accepted by the astrophysics community?
The Tully-Fisher relation is first and foremost, and historically, an observational relation. The 'derivation' of the relation is really more of a 'motivation'. The relation is based on a myriad of details of galaxy and star formation, and dynamics; an accurate 'derivation' could only be based on numerical simulations (which is the basis for most modern fits to the observed relationships). For example, a large contribution to the velocity distribution of a galaxy will be its history of interactions with nearby objects (e.g. other galaxies). This has nothing to do with the intrinsic properties of the galaxy itself, and thus there isn't really a way to 'account' for it in the analytical equation. If you look at these plots (and note that it's actually plotted opposite of the usual way), you'll see that there is incredible intrinsic scatter, not only in each plot, but also between the figures (which are each for a different observed color, but the same sample of galaxies). Plots are from this paper: http://adsabs.harvard.edu/abs/2010A&A...521A..27F While the Tully-Fisher relation is very important, and kind of a staple of galactic astronomy, it's reflective of a useful tendency, not a tight, intrinsic scaling.
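For concreteness, the relation is usually written as L proportional to v^alpha with alpha near 4 (band-dependent, and subject to the large scatter described above). A toy version, normalised to a Milky-Way-like rotation speed of 220 km/s (my choice of reference, not from the answer):

```python
def tully_fisher_ratio(v_kms, v_ref=220.0, alpha=4.0):
    """Luminosity relative to a reference galaxy, from the toy scaling L ~ v^alpha."""
    return (v_kms / v_ref) ** alpha

# A galaxy rotating half as fast is predicted ~16x less luminous:
print(tully_fisher_ratio(110.0))  # -> 0.0625
```

The steep fourth-power dependence is also why the intrinsic scatter matters so much: a modest error in velocity becomes a large error in inferred luminosity or distance.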
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is Hawking radiation and how does it cause a black hole to evaporate? My understanding is that Hawking radiation isn't really radiated from a black hole, but rather occurs when a particle anti-particle pair spontaneously pop into existence, and before they can annihilate each other, the antiparticle gets sucked into the black hole while the particle escapes. In this way it appears that matter has escaped the black hole because it has lost some mass and that amount of mass is now zipping away from it. Is this correct? If so, wouldn't it be equally likely that the particle be trapped in the black hole and the antiparticle go zipping away, appearing as if the black hole is spontaneously growing and emitting antimatter? How is it that this process can become unbalanced and cause a black hole to eventually emerge from its event horizon and evaporate into cosmic soup over eons?
Simply put, the particles that pop in and out of existence are not matter/anti-matter pairs but rather virtual particles (particle/anti-particle pairs) that both have net mass, therefore contributing to the net mass of the universe when one of them is swallowed up by a black hole at the event horizon and the other escapes. It seems illogical to assume that the escape of one particle and the entrapment of the other could lead to a net gain in mass/energy of our universe, but the newly escaped particle is in fact a particle that we otherwise wouldn't have without this effect at the event horizon.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 6, "answer_id": 3 }
How do we determine the mass of a black hole? Since by definition we cannot observe black holes directly, how do astronomers determine the mass of a black hole? What observational techniques are there that would allow us to determine a black hole's mass?
It is possible to measure the mass of a black hole from the speed of other objects. If we measure the speed of a star orbiting a black hole and the radius of its orbit, we can tell its mass. For example, suppose we locate a star that is travelling at 100 km/s with an orbital radius of 150 million km. To calibrate, we first use a mass we already know: the Sun and the Earth. We know that the Earth is 149,597,900 km, about 150 million km, from the Sun (this is its orbital radius), so simply by using pi we can work out how fast it is moving around the Sun. The orbital circumference is 2 * pi * 149,597,900 km = about 940,000,000 km (it'll differ a bit because the orbit isn't perfectly round). We also know we orbit the Sun every 365.25 days (about 31,557,600 seconds). To find the speed we just do 940,000,000 km / 31,557,600 s = about 29.8 km/s. For a circular orbit, v^2 = GM/r, so at the same radius the central mass scales as the square of the orbital speed. Our star is orbiting about 3.36 times faster (100/29.8) at the same distance, which means the black hole must have a mass of 3.36^2, or about 11.3, times that of the Sun. The Sun's mass is 1.989 * 10^30 kg, so the black hole's mass is about 11.3 * 1.989 * 10^30 = 2.2 * 10^31 That's 22,000,000,000,000,000,000,000,000,000,000 kg! But that doesn't mean the black hole is 11.3 times larger in size. The radius of the event horizon is only about 2.95 km per solar mass, so this black hole would be roughly 33 km in radius.
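The circular-orbit relation v^2 = GM/r rearranges to M = v^2 * r / G, which makes the whole calculation two lines (a sketch; note that at a fixed radius the mass scales with the square of the orbital speed):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg

def mass_from_orbit(v_ms, r_m):
    """Central mass from a circular orbit: v^2 = G*M/r  =>  M = v^2 * r / G."""
    return v_ms ** 2 * r_m / G

r = 1.496e11                                       # Earth's orbital radius, m
m_sun_check = mass_from_orbit(29.78e3, r) / M_sun  # Earth's orbit recovers ~1 Msun
m_bh = mass_from_orbit(100e3, r) / M_sun           # star at 100 km/s, same radius
print(f"{m_sun_check:.2f} Msun (check), {m_bh:.1f} Msun (black hole)")
```

Recovering one solar mass from the Earth's orbit is a good sanity check that the units and constants are right before trusting the black-hole figure.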
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 2 }
Why are there more vertical takeoffs than horizontal for spacecraft? Vertical takeoff requires disposable rockets (be it for a satellite or anything else), which is a money loss, and also a lot of fuel, because the initial velocity is zero. Vertical takeoff also seems risky, and involves huge pieces of equipment and launch pads to diminish the risk. Horizontal takeoffs are done with a reusable aircraft, like a modified 747. The initial velocity not being zero, there is much less risk, and the fuel spent by a 747 is much less expensive than a disposable rocket. So, why are there more vertical takeoffs than horizontal for spacecraft?
A 747 can get you to around 35,000 feet, still very much within the atmosphere. So what do you do then? Launching a rocket from that point still requires an awful lot of kit: while you have reduced your propellant requirements a little, the 747 still has to carry a launch platform, so you're not really getting much out of this. New technologies, such as that used by Virgin Galactic, are managing to make this work with a hybrid model that does fly up to around 50,000 feet before launching the spacecraft section, but this is very new. So the simple answer is: it used to require vertical rocket launches, and all the associated paraphernalia, but modern technology is moving towards fully reusable methods such as this.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 0 }
Why can't dark matter be black holes? Since 90% of matter is something we cannot see, why can't it be black holes from early on? Is it possible to figure out that there are no black holes in the line of sight of various stars/galaxies we observe?
Paolo Pani and Avi Loeb had two recent papers claiming to rule out the remaining window of primordial black holes as dark matter because they would either distort the CMB or destroy neutron stars: arXiv:1307.5176 and arXiv:1401.3025.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 6, "answer_id": 1 }
Topological twists of SUSY gauge theory Consider $N=4$ super-symmetric gauge theory in 4 dimensions with gauge group $G$. As is explained in the beginning of the paper of Kapustin and Witten on geometric Langlands, this theory has 3 different topological twists. One was studied a lot during the 1990's and leads mathematically to Donaldson theory, another one was studied by Kapustin and Witten (and mathematically it is related to geometric Langlands). My question is this: has anyone studied the 3rd twist? Is it possible to say anything about the corresponding topological field theory?
The Kapustin-Witten paper https://arxiv.org/abs/hep-th/0604151 says (on page 17) that two of the three twists are related to Donaldson theory: Two of the twisted theories, including one that was investigated in detail in [45: Vafa Witten], are closely analogous to Donaldson theory in the sense that they lead to instanton invariants which, like the Donaldson invariants of four-manifolds, can be expressed in terms of the Seiberg-Witten invariants By Vafa-Witten, I mean https://arxiv.org/abs/hep-th/9408074 The least studied twist among the three was studied by Neil Marcus https://arxiv.org/abs/hep-th/9506002 but I am not sure whether everyone in that field thinks that the paper is right.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/26850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
Papers and preprints worth reading, Jan-midFeb 2012 Which recent (i.e. Jan-midFeb 2012) papers and preprint do you consider really worth reading? References should be followed by a summary saying what is the result and (implicitly or explicitly) why it is important/interesting/insightful/... One paper per answer (and start from its citation); you can add further but only in the description (e.g. It generalizes the ides of X in [cite]...). As it is a community wiki - vote whenever you consider a paper good. Also - feel free to contribute to other's posts. See Journal club-like thing for a meta post.
Light-cone-like spreading of correlations in a quantum many-body system reports the first measurements of the speed at which quantum correlations spread in a quantum many-body system. Prior related theoretical works are [2,3,4]. Though the main innovations are plausibly the experimental techniques, I think theorists should be aware of the results.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/27129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }