How to superimpose the Woods-Saxon and Coulomb potentials? I have just written a simple simulation that models the tunnel effect of alpha particles for $^{212}$Po and $^{238}$U. In this simulation, I approximate the potential of the nucleus by a simple square well. Now I'm thinking about improving the simple model to a more complex one. My idea is to superimpose the Woods-Saxon potential with the Coulomb potential. The result should look something like this picture: My question is: how do I actually get there? Do I just add the Woods-Saxon and the Coulomb potential? FOLLOW UP @dmckee, thanks, it worked and only needed a little tweaking. You were absolutely right!
"Do I just add the Woods-Saxon and the Coulomb-potential?" Do you know any other way of combining potentials? You certainly do just add them up. Two complications: * *You may need to tweak the parameters of your potentials if, for instance, they were set to get a certain $Q$ *You'll need some kind of charge distribution function for the nucleus (a uniform spherical distribution is not a terrible place to start).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
normalizing a wavefunction I have a homework problem that I can't get started on; below is the first bit. I feel like I should just be able to integrate to find $C$ but I get a divergent integral. Can someone give me a hint as to where to go here? A particle of mass $m$ is in a one-dimensional infinite square well, with $U = 0$ for $0 < x < a$ and $U = \infty$ otherwise. Its energy eigenstates have energies $E_n = (\hbar \pi n)^2/(2ma^2)$ for positive integer $n$. Consider a normalized wavefunction of the particle at time $t = 0$ $$\psi(x,0) = Cx(a - x).$$ Determine the real constant $C$.
The well is not infinitely wide, just infinitely "deep", meaning that the region outside the well has infinite potential energy. The particle cannot exist in a region of infinite potential energy, so it can only exist within the boundaries of the well, which clamps the integral to $0\le x < a$: $$1=\int_0^a|Cx(a-x)|^2dx$$ Evaluate the proper integral and solve for C.
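If you want to check your algebra afterwards, here is a quick symbolic check of the normalization condition with sympy; it assumes nothing beyond the integral written above.

```python
import sympy as sp

x, a, C = sp.symbols('x a C', positive=True)
norm = sp.integrate((C * x * (a - x))**2, (x, 0, a))   # integral of |psi|^2 over the well
print(sp.solve(sp.Eq(norm, 1), C))                     # [sqrt(30)/a**(5/2)]
```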
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can I charge a capacitor using 2 batteries? 1 capacitor, 2 separate batteries (Battery A and Battery B). Connect A+ to one side of the capacitor and B- to the other side of the capacitor. A and B are not connected to each other, so there is no closed circuit. It looks like: -A+__________CAPACITOR_______-B+ Can the capacitor be charged this way or not? If not, why?
No. Batteries supply potential difference. The positive terminal of A (I'll call it A+) is at a higher potential than the negative terminal of A (A-). The same goes for B. However, we don't know if A- and B+ are at the same potential, so we can't conclude that A+ is at a higher potential than B-. In fact, A+ and B- are at the same potential, as it is the lowest energy configuration of this system. For a capacitor to work, there needs to be a potential difference across its ends. Here, there isn't. Besides, a battery only works when charge is being drawn/added from/to both terminals. Electrostatic repulsion will not let the battery supply charge from just one terminal. Don't look at a battery as a producer of charge. Look at it as a separator of charge. For every positive charge A shoots out of its positive terminal, there will be a negative charge that gets stuck on its negative terminal, which will work to prevent more negative charges from accumulating on A-. If negative charge can't accumulate on A-, then A+ will stop shooting out charges. This happens very quickly; you won't be able to measure the amount of charge that A+ released. However, if you connect A- and B+, then A- and B+ will be at the same potential, A+ will be at a higher potential than B-, and the capacitor will charge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Could a planet-sized bubble of breathable atmosphere exist? I'm reading a book (Sun of Suns by Karl Schroeder) in which the main location is a planet called Virga, which contains air, water, and floating chunks of rock, and has little or no gravity. There is a main 'sun' at the center of the planet, which provides the heat for weather. Could a 'planet' of this type exist?
If you look in outer space, you'll see things like giant molecular clouds. These clouds are not necessarily in equilibrium, so the factors that cause them to exist for a certain amount of time may be very complicated. E.g., there could be shock waves, star formation, ... If the cloud is in thermal equilibrium, then the typical molecular speeds go like $mv^2\sim kT$, and escape velocity is given roughly by $v^2\sim\Phi$, where $\Phi$ is the gravitational potential. The result is that the maximum temperature of such a cloud is $T\sim m\Phi/k$. If you put in typical numbers, you find that even for a body with gravity as strong as the moon's, it's not possible to have air and water (high $T$ and low $m$). But note that the result depends on the gravitational potential, not the gravitational field, so in theory this could work if the body is very large in linear dimensions. Also, it would be possible, for example, to give the moon a permanent atmosphere of heavy molecules such as long-chain fluorocarbons, making it a shirtsleeve environment where all you'd need was an oxygen tank. The WP article for Sun of Suns describes a fullerene shell holding the whole thing together.
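A rough numeric companion to the estimate above, with assumed Moon and N$_2$ values: it compares a typical thermal speed at room temperature with the lunar escape speed. Long-term retention needs the thermal speed to sit well below the escape speed (a commonly quoted rule of thumb is a factor of roughly six or more), which is why the comparison fails here.

```python
import math

G, M, R = 6.674e-11, 7.342e22, 1.737e6     # SI units; assumed Moon mass and radius
k_B, m_N2 = 1.381e-23, 28 * 1.661e-27      # Boltzmann constant, N2 molecular mass

v_esc = math.sqrt(2 * G * M / R)           # ~2.4 km/s
v_th = math.sqrt(3 * k_B * 290 / m_N2)     # ~0.5 km/s at 290 K
print(v_esc, v_th, v_esc / v_th)           # ratio ~4.7: too small to hold N2 (worse for H2O)
```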
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do you tell if a metric is curved? I was reading up on the Kerr metric (from Sean Carroll's book) and something that he said confused me. To start with, the Kerr metric is pretty messy, but importantly, it contains two constants: $M$ and $a$. $M$ is identified as some mass, and $a$ is identified as angular momentum per unit mass. He says that this metric reduces to flat space in the limit $M \rightarrow 0$, and is given by $$ds^2 = -dt^2 + \frac{r^2 + a^2 \cos^2\theta}{r^2 + a^2}dr^2 + \left(r^2 + a^2 \cos^2\theta \right)d\theta^2 + \left(r^2 + a^2 \right)\sin^2\theta d\phi^2 $$ and $r$, $\theta$ and $\phi$ are regular spherical polar co-ordinates. But I don't understand why this space is obviously flat. The Schwarzschild metric also contains terms involving $dt^2$, $dr^2$, $d\theta^2$ and $d\phi^2$ but is curved. I always thought that a metric with off-diagonal elements implied a curved space, but clearly I was very wrong. Question: How do you tell if a metric is curved or not, from its components?
In the limit where $M \to 0$, the Kerr metric reduces to the spherical coordinates form of the Minkowskian metric. In that sense, we recognize it and say it is 'obvious' that it is flat. (The Schwarzschild metric is also flat in the limit $M \to 0$.) But to show whether any given metric is curved or not we have to compute curvature. For example one often calculates the Ricci scalar $R= R^i{}_i = R^{ki}{}_{ki}$, where the first $R$ is the Ricci scalar, the second $R$ the Ricci curvature tensor and the third $R$ the Riemann tensor. If it is nonzero the space is curved; if it vanishes you cannot yet conclude flatness (the Schwarzschild metric is Ricci-flat but curved), so the definitive test is that every component of the Riemann tensor vanishes. Carroll has it in his book.
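As a concrete (if brute-force) illustration of that test, the sketch below uses sympy to compute every Riemann-tensor component of the $M \to 0$ Kerr metric quoted in the question and confirm that they all vanish. It is slow (it may take a minute or two) but mechanical; for a genuinely curved metric some component would survive.

```python
import sympy as sp

t, r, th, ph, a = sp.symbols('t r theta phi a', real=True, positive=True)
x = [t, r, th, ph]
rho2 = r**2 + a**2 * sp.cos(th)**2
g = sp.diag(-1, rho2 / (r**2 + a**2), rho2, (r**2 + a**2) * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk}
Gam = [[[sp.simplify(sum(ginv[i, l] * (sp.diff(g[l, j], x[k]) + sp.diff(g[l, k], x[j])
                                       - sp.diff(g[j, k], x[l])) for l in range(4)) / 2)
         for k in range(4)] for j in range(4)] for i in range(4)]

def riem(i, j, k, l):   # Riemann tensor component R^i_{jkl}
    expr = sp.diff(Gam[i][j][l], x[k]) - sp.diff(Gam[i][j][k], x[l])
    expr += sum(Gam[i][k][m] * Gam[m][j][l] - Gam[i][l][m] * Gam[m][j][k] for m in range(4))
    return sp.simplify(expr)

print(all(riem(i, j, k, l) == 0
          for i in range(4) for j in range(4) for k in range(4) for l in range(4)))
```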
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
What's the anti-torque mechanism in horizontal take-off aircraft? In most helicopters there is the anti-torque tail rotor to prevent the body from spinning in the opposite direction to the main rotor. What's the equivalent mechanism in horizontal-takeoff single-engine propeller and jet aircraft, where the air or jet coming out of the back of the propeller or turbine is spinning and will tend to make the aircraft roll?
The motor does exert torque on the fuselage. The pilot, without having to think about it, compensates by applying right aileron, which has plenty of roll authority. There's more to it. When a propeller-driven plane is taking off, it has a tendency to yaw to the left, and the pilot automatically applies right rudder to compensate. That left-turning tendency is due to the propeller descending on the right hand side, at a higher angle of attack (thus more thrust) due to the pitch of the airframe, and also because the wind coming off the propeller is a corkscrew flow, and it strikes the left side of the vertical stabilizer. If you watch this video, you will see how pilots are instructed to operate the F4U Corsair in WW2. It is recommended to tune in some right-rudder and right-aileron trim when taking off. It is also recommended not to take off at too slow a speed because it will seem "left-wing-heavy" due to that torque, and more speed means more control authority.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Ground states in the shell model for odd-even nuclei I understand that even-even nuclei (Z and N number) have zero spin because of pairing. Even-odd nuclei have the spin of the odd nucleon, and parity is given by $(-1)^L$ - so my question is, how do we work out the state which this odd nucleon is in? As an example: in $^3_2 \text{He}$ the odd nucleon is a neutron, which is in the state $1\text{s}_\frac{1}{2}$ ($l=0,$ $s=1/2$), so the answer for the ground-state spin-parity is $\frac{1}{2}^+$... Or with $^9_4 \text{Be}$ the most energetic neutron is in the $1\text{p}_{\frac{3}{2}}$, giving $\frac{3}{2}^-$, but how is this worked out? I assume it is something to do with the nucleons occupying all the lowest energy states and finding the highest-energy spare nucleon. How will I know which quantum numbers have the lowest energy? Edit: for example, do I need to refer to this table? http://en.wikipedia.org/wiki/File:Shells.png
How will I know which quantum numbers are the lowest energy? In general that is a difficult question from first principles. Simulation can often answer it, but the problem can be pretty involved and demanding. However, as a practical matter the configurations and energy levels of many nuclei are known from extensive experiments. For example, an online level diagram and a table of level data for Gd-157 are available from http://www.nndc.bnl.gov/nudat2/ .
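For the naive odd-nucleon bookkeeping in the question (ignoring residual interactions), you can simply fill the standard single-particle level sequence in order. A minimal sketch, with the level ordering up to $1g_{9/2}$ taken as an assumption from the usual textbook diagram:

```python
# (label, j, parity); each level holds 2j+1 nucleons of one species
levels = [
    ("1s1/2", 0.5, +1), ("1p3/2", 1.5, -1), ("1p1/2", 0.5, -1),
    ("1d5/2", 2.5, +1), ("2s1/2", 0.5, +1), ("1d3/2", 1.5, +1),
    ("1f7/2", 3.5, -1), ("2p3/2", 1.5, -1), ("1f5/2", 2.5, -1),
    ("2p1/2", 0.5, -1), ("1g9/2", 4.5, +1),
]

def last_nucleon_level(n):
    """Level occupied by the n-th proton or neutron, filling lowest levels first."""
    filled = 0
    for label, j, parity in levels:
        filled += int(2 * j + 1)
        if filled >= n:
            return label, j, parity
    raise ValueError("extend the level list")

# 9Be: Z = 4 (paired off), N = 5, so the 5th neutron sets the ground state
label, j, parity = last_nucleon_level(5)
print(label, j, "+" if parity > 0 else "-")   # 1p3/2 1.5 -  ->  3/2-
```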
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the uncertainty minimum for coherent states? While reading about the quantum damped harmonic oscillator, I came across coherent states, and I asked my prof about them, and he told me it is the state at which $\Delta x\Delta y$ is minimum. I didn't quite understand why it is minimum. Please explain why this happens.
As you probably know, for any particle the product of the uncertainties in the position, $\Delta x$, and the momentum $\Delta p$ (not $\Delta y$ as you state) is bounded below by a positive constant; $$\Delta x\, \Delta p\geq\frac{\hbar}{2}.$$ (If this doesn't ring a bell, you need to read up on Heisenberg's uncertainty principle.) For general states, the uncertainty product will typically be quite a bit larger than $\hbar$, and for classical objects it will be very much larger. If we want to be very precise about a measurement, though, we would want the particle to have minimal uncertainty: that is, we'd want to impose the condition $$\Delta x\, \Delta p=\frac{\hbar}{2}.\tag{1}$$ The states that obey this condition are called coherent states. A bit more technically, the general solutions to equation (1) are called squeezed coherent states, essentially because we can "squeeze" the uncertainty from $x$ into $p$ or vice versa. If the particle is in a harmonic-oscillator potential then we can choose a unique way to "split" the uncertainty product into "minimal" position and momentum parts, $$\Delta x=\sqrt{\frac{\hbar}{2m\omega}},\;\Delta p=\sqrt{\frac{1}{2}m\omega\hbar },$$ using the dimensional information contained in $\omega$ and $m$.
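A quick symbolic check of those last expressions, assuming only the harmonic-oscillator ground state (the undisplaced coherent state) as the wavefunction:

```python
import sympy as sp

x, m, w, hbar = sp.symbols('x m omega hbar', positive=True)
psi = (m * w / (sp.pi * hbar))**sp.Rational(1, 4) * sp.exp(-m * w * x**2 / (2 * hbar))

def expval(op_psi):   # psi is real, so <A> = integral of psi * (A psi) dx
    return sp.integrate(psi * op_psi, (x, -sp.oo, sp.oo))

dx = sp.sqrt(expval(x**2 * psi) - expval(x * psi)**2)
dp = sp.sqrt(expval(-hbar**2 * sp.diff(psi, x, 2)) - expval(-sp.I * hbar * sp.diff(psi, x))**2)
# prints sqrt(hbar/(2*m*omega)), sqrt(m*omega*hbar/2), and hbar/2
print(sp.simplify(dx), sp.simplify(dp), sp.simplify(dx * dp))
```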
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Observer effect: does this mean literally someone looking, or just any interaction with other matter? I am a layman and was wondering about the quantum observer effect. The regular notion to laymen seems to be literally "if you look at it", but as I am coming to understand the world I live in better, I feel it means just coming in contact with something. Is this an ongoing question? Whether a particle that is not interacting with anything undergoes changes in state? Though now that confuses me too, if it is in two states at once. What distinguishes, say, spin to the left from spin to the right, and what determines the origins of the calculations we make?
The "observer effect" does not really have much to do with observation in the every day sense. The effect in question has to do with the fact that some kinds of interaction between systems cause them to stop exhibiting quantum interference: this process is called decoherence. The relevant kind of interaction copies information so that it is present in two or more systems where it was initially present only in one system. If an interfering system undergoes an interaction that copies an unsharp observable in this sense, the interference is prevented. For more see http://arxiv.org/abs/1212.3245
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Why isn't temperature measured in Joules? If we set the Boltzmann constant to $1$, then entropy would just be $\ln \Omega$, temperature would be measured in $\text{joules}$ ($\,\text{J}\,$), and average kinetic energy would be an integer times $\frac{T}{2}$. Why do we need separate units for temperature and energy?
I've seen temperature being expressed in electron volts (eV) in Plasma Physics. Basically, you can equate $k_B T = e y$, where $y$ is the temperature in electron volts, and $T$ is the "thermal" temperature in Kelvin. $e$ is the quantum of charge and $k_B$ is the Boltzmann factor. So $1\mbox{ eV temperature} \approx 11600 \mbox{ K}$. ($y$ was set to 1 to obtain the expression). I guess it is convenient in Plasma Physics because of the energy scales involved.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 6, "answer_id": 0 }
How can I prevent my son building up static on his trampoline? Whenever my three year old son plays on his trampoline, it doesn't take very long for him to start building up a significant amount of static electricity. His hair stands on end (which is quite amusing), but when I help him down we both get a nasty static shock (which is not amusing). He finds the shock upsetting, and I'm concerned that it will discourage him from enjoying his trampoline. How can I prevent the build-up? One suggestion I have seen on a parenting forum is to ground the trampoline frame with a cable and a metal stake. However my understanding is that this would enable him to discharge by touching the frame (rather than me), but it would not prevent the static occurring in the first place. If it's relevant, it's this trampoline and is on a grass lawn. My son bounces in his socks, no shoes. In case of link rot, it is an 8ft trampoline with a net enclosure.
Get a grounding rod and bang it into the ground. Get 2 ground clips and 1 meter of 6 mm grounding wire. Link 1 clip to the leg of the trampoline and the other clip to the grounding rod.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
How to calculate beam spread of a non-point light source via an aspheric lens I need to determine the angle, or rate of divergence, of light from a single aspheric lens when I place a non-point light source (e.g. LED array) at a given distance from the lens which is less than the focal length of the lens. This seems to fall outside of the normal emphasis on image size and location in all the beginner optics info I'm finding on the WWW; I don't care about the virtual image this creates, I just want to know what the angle of divergence is so I can predict the spot size at an arbitrary distance from the lens. I can also say that I've seen plenty of examples which have collimated light on one side of the lens; as near as I can tell, they won't work, as they assume the light is collimated and thus don't take into account the light source beam angle. My best guess is that the equation to solve for the divergent angle would just need:

* the light source's beam angle,
* the focal length of the lens,
* the distance between the light source and the lens,
* and possibly the radius of the non-point light source.

I also suspect that the equation is probably the same for other types of lenses.
Well, if you were looking strictly at the LED array (no lens) then you can typically find that the intensity of the array will fall off with angle according to the LED manufacturer's data sheet AND the cosine of the cross section of your array. Which is to say that an array of LEDs will have a wider viewing angle than a single LED. In theory you could solve your angle of divergence as the point where these angles give a "negligible" intensity output/no light. Since there's also a lens involved, I would guess that you would need to know the size of the image as it leaves the lens.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/60956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does a star ignite? I remember reading that X-rays are generated by 'braking' electrons in a Coolidge tube. Is it fundamentally a matter that the extreme gravity immediately before a star ignites is so strong that it affects the hydrogen atoms to the point that the velocity of its components must be let off in the form of heat & light? How does a star ignite?
I was going to put in a detailed answer but the comment above me explains it pretty well. Basically, when a forming star gains enough mass (it has to be A LOT of mass; nuclear fusion is a pretty powerful process), the gravity smashes the protons together and fuses them to create helium nuclei. The 'smashing them together' part releases an immense amount of energy, which is where the heat and light come from. So the 'igniting' is basically just something gaining enough mass to fuse protons (and other subatomic particles) together. The star doesn't just suddenly ignite, though. It is a gradual process of heating up and collecting matter that happens over millions of years.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Concerning the curvature of an airfoil (shape) I am wondering about the reasons for the shape of a turbine blade airfoil, see here. Do you know the reason for this shape? Usually, very large curvatures like this are used to extract high lift from LOW-speed environments. But the conditions inside a turbine are among the most strenuous for any object, and the speeds are certainly not low (right?)
Turbine blades can actually have a very complex shape, which changes dramatically from the root to the tip. That diagram is a little misleading as it only shows one cross section. The shape at the root is indeed at a high angle of attack, and is quite broad - partly as it needs the strength to cope with the large forces at work in a rapidly spinning blade. Towards the tip the angle reduces dramatically, and the cross section thins out a lot as speed increases. From blog.nikonmetrology.com
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Interval and proper time Is the definition of $$d s^2=-d \tau^2$$ assuming that $c=1$, so that we always have $$\left({ds\over d\tau}\right)^2=-1$$? Is there a reason for this definition? Don't we get an imaginary ${ds\over d\tau}$?
Yeah, you do. The reason $\mathrm{d}s$ and $\mathrm{d}\tau$ are defined this way is that one or the other will be real for any given path. For a spacelike interval $\mathrm{d}s$ will be real (indicative of the fact that the distances we measure in everyday life are spacelike intervals), and for a timelike interval $\mathrm{d}\tau$ will be real (since times we measure in everyday life are timelike intervals). They fill complementary roles, but they're really just two ways of expressing the same thing. $\frac{\mathrm{d}s}{\mathrm{d}\tau}$ itself doesn't really have any physical meaning. It's actually always equal to $\pm ic$ because of the definition. Note that all the above only applies if you use the $-+++$ metric. If you use $+---$, then $\mathrm{d}s^2 = \mathrm{d}\tau^2$ and it's not an issue.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Waveguides (in the ocean?) The speed of sound in the ocean is given by $$c_s(\theta,z) = 1450 + 4.6\theta - 0.055\theta^2 + 0.016z$$ $\theta$ is the temperature in degrees Celsius, and $z$ is the depth. In a simplified model, $\theta$ is constant at 10$\,^\circ $C for the part of the ocean above the "thermocline". The thermocline is an interface at depth 700 m over which the temperature drops to 4$\,^\circ $C almost instantly. The question: It is claimed that the water below the thermocline can act as a waveguide. Why, and what is the extent (in depth) of this waveguide? My thoughts: Evaluating some relevant speeds: $c_s(10,700) = 1501.7\ m\,s^{-1}$ and $c_s(4,700) = 1478.7\ m\,s^{-1}$. As the speed changes at the thermocline, there will be refraction and reflection of incident waves from both sides (above and below). So waves incident from below will be reflected back. However I don't understand what makes the waves reflect on the lower side of this "waveguide". As far as I can see, the speed of sound will increase with depth. If there is no interface with a sudden discontinuity like the thermocline, how does this situation work?
You don't need a sharp discontinuity in the speed of sound to guide the waves. Remember that reflection does not occur right at the interface; rather, the wave always penetrates outside the waveguide to "see" what's going on there. A gradual increase in the speed of sound forces the wave to reflect as well. Reflection occurs from above due to the thermocline (700 m), and from below due to the linear increase in the speed of sound, see the red shaded area in the following figure: Guiding from below is possible down to a depth where the speed of sound equals $c_s(10,700)$, which yields 2136.25 m.
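A quick numeric check of that depth, treating $\theta = 4\,^\circ$C everywhere below the thermocline and solving $c_s(4,z) = c_s(10,700)$ for $z$ (the relation is linear in $z$):

```python
def c_s(theta, z):
    return 1450 + 4.6 * theta - 0.055 * theta**2 + 0.016 * z

c_top = c_s(10, 700)                    # sound speed just above the thermocline
z_max = (c_top - c_s(4, 0)) / 0.016     # depth where c_s(4, z) catches up with c_top
print(c_top, z_max)                     # 1501.7 m/s, 2136.25 m
```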
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does entropy alter the probability of independent events? So I have taken an introductory-level quantum physics class and am currently taking an introductory-level probability class. Then this simple scenario came up: Given a fair coin that has been tossed 100 times, each time landing heads, would it be more likely that the next coin flip be tails or heads? I can see that since the event is independent by definition, the probability would be even for both heads and tails: $$P(h | 100 h) = P(t | 100 h)$$ But would this differ from a quantum mechanical standpoint? I have a feeling that $P(h | 100 h) < P(t | 100 h)$ because of the push towards equilibrium in the entropy of the system. Am I wrong to think this way? FOLLOW UP: (turning out to be more of a statistical problem possibly?) Something around the "equilibrium only exists in the infinite-time limit" idea is what I'm getting hitched on. The proportion of heads to tails approaches 1 to 1 as the number of trials approaches infinity (this is a fact, correct?). Therefore, if this must be the case, mustn't there be an enacting "force", per se, that causes this state of being to come about in the (admittedly unreachable, but technically eventual) case? Or is this thought process just illegitimate simply because that state exists only in the infinite case?
The other answers are good, I just thought this would be a cute opportunity to learn some animation techniques in Mathematica. I start with a hundred heads and make a number of additional, fair, independent tosses. Then I compute the total fraction of heads, repeat this a thousand times, and make a histogram of the results. This makes a single frame of the animation. As the number of additional tosses is increased you'll see the probability distribution for the fraction of heads shift towards $1/2$ and become more sharply peaked. Hopefully this embeds okay (it's only a 289 kB file):
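The Mathematica notebook itself isn't shown above; here is a rough Python equivalent of the simulation described, without the animation, just printing how the mean fraction of heads drifts toward $1/2$ as the number of additional fair tosses grows:

```python
import random

def head_fractions(extra_tosses, trials=1000, fixed_heads=100):
    """Fraction of heads after starting from 100 heads and adding fair tosses."""
    fractions = []
    for _ in range(trials):
        heads = fixed_heads + sum(random.random() < 0.5 for _ in range(extra_tosses))
        fractions.append(heads / (fixed_heads + extra_tosses))
    return fractions

for n in (100, 1_000, 10_000, 100_000):
    fracs = head_fractions(n)
    print(n, sum(fracs) / len(fracs))   # 0.75, ~0.55, ~0.50, ~0.50: the early streak gets diluted
```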
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Do viruses suffer from quantum de-localization? Consider some microscopic life form. It should obviously be localized in space, in the quantum-mechanical sense, if it is treated as a single particle (though it is composite). If its characteristic length is $l$, then its wavefunction would de-localize on the typical timescale $$ \tau = \frac{ml^2}{\hbar}$$ If we estimate this for a virus (typical diameter ~100 nm), and assume roughly water mass density, we obtain $$ \tau \approx \frac{1 \frac{g}{cm^3}\cdot(100\ nm)^5}{\hbar} \approx100\ seconds$$ So technically speaking, after about two minutes, the virus has doubled its uncertainty in position. Even if the mass assumption is off by two orders of magnitude, we still obtain everyday timescales. How should this estimate be interpreted? Has it been observed in biology?
An experiment to put a virus into a superposition of states was described in this Arxiv preprint. As far as I know the experiment has not been done yet, but I would guess most of us believe it will work and that a virus does indeed obey the principles of quantum mechanics just like a subatomic particle. After all, a considerably larger object than a virus has been placed into a superposition of states, though this was under rather special circumstances. However it's unlikely we will ever observe quantum behaviour for a virus in water. This is because while an isolated virus can be described by a basically simple wavefunction, if the virus interacts with anything, e.g. water molecules, its wavefunction becomes entangled with the wavefunction of whatever it interacts with. For a virus in water we would have to observe quantum behaviour of the virus and the water it's in. This increases the size and complexity of the system to the point where it would rapidly decohere and return to classical behaviour.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
In terms of the Doppler effect, what happens when the source is moving faster than the wave? I'm just trying to understand this problem from a qualitative perspective. The Doppler effect is commonly explained in terms of how a siren sounds higher in pitch as it is approaching a particular observer. I understand this is because the velocity of the wave is constant and so the frequency of the waves increases as they are bunched together. What would happen if a siren was mounted on, say, a plane traveling at a supersonic speed? To clarify: what would the observer observe/hear? Apologies if my question is not phrased very well; my knowledge of physics is very rudimentary.
Someone standing on the ground would hear a sonic boom. The sound would travel out from the plane as a coherent wave front; all the peaks will be in the same place, traveling at the speed of sound (look at a picture on wiki, http://en.wikipedia.org/wiki/Sonic_boom), and it will sound like an instant boom. I am not entirely sure what the pilot would hear; my guess is that it would be silent to him. I could be wrong.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Electric field near a conducting surface vs. sheet of charge I know perfectly well how to derive the magnitude of the electric field near a conductor, $$E = \frac{\sigma}{\varepsilon_0}$$ and near a sheet of charge, $$ E = \frac{\sigma}{2\varepsilon_0} .$$ In fact, I can explain with clarity each step of the derivation and I understand why is one two times larger than the other. But here's what bothers me... When I try to think about it purely intuitively (whatever the heck that actually means), I find it difficult to accept that a planar charge distribution with the same surface density can produce a different field. Why should it care whether there's a conductor behind it or not... ? I repeat, I understand Gauss' law and everything formally required, but I want to understand where my intuition went wrong. EDIT: Thanks to you people, I developed my own intuition to deal with this problem, and I'm happy with it, you can see it posted as an answer!
Intuitively, the surface charge on the edge of a conductor only produces a nonzero electric field on one side of itself, whereas the surface charge on an isolated sheet produces an electric field on both sides of itself. The charge on the isolated sheet is filling twice the amount of space (for an appropriate definition of "amount of space") with electric field, so the resulting field will be half as strong.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 8, "answer_id": 3 }
Does a current carrying wire produce electric field outside? In modern electromagnetism textbooks, electric fields in the presence of stationary currents are assumed to be conservative,$$ \nabla \times E~=~0 ~.$$ Using this we get$$ E_{||}^{\text{out}}~=~E_{||}^{\text{in}} ~,$$which means we have the same amount of electric field just outside of the wire! Is this correct? Is there any experimental proof?
Outside a current carrying conductor, there is, in fact, an electric field. This is discussed, for example, in "Surface charges on circuit wires and resistors play three roles" by J. D. Jackson, in American Journal of Physics – July 1996 – Volume 64, Issue 7, pp. 855. To quote Norris W. Preyer quoting Jackson: Jackson describes the three roles of surface charges in circuits:

* to maintain the potential around the circuit,
* to provide the electric field in the space around the circuit,
* and to assure the confined flow of current.

Experimental verification was provided by Jefimenko several decades ago. A modern experimental demonstration is provided by Rebecca Jacobs, Alex de Salazar, and Antonio Nassar, in their article "New experimental method of visualizing the electric field due to surface charges on circuit elements", in American Journal of Physics – December 2010 – Volume 78, Issue 12, pp. 1432.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 9, "answer_id": 0 }
What happens when a star undergoes gravitational collapse? Immediately prior to becoming a supernova the core of some types of stars may suffer gravitational collapse.

* What happens to any planets in orbit around the star at the instant the mass is fully collapsed?
* Assuming this sudden change would cause some perturbation; how large/distant would a planet in the system have to be to be relatively immune to such perturbation?
* Could unexplained perturbation serve as an indicator of a historic supernova in the vicinity?
"What happens to any planets in orbit around the star" Why would anything happen to them? There is still a mass there; the same mass as before. It still has the same center of gravity. Now, when the radiation and shock waves arrive lots of stuff starts happening, including the effect mass around which the planets are orbiting dropping as material propagates outside of the planetary radius. "at the instant of gravitational collapse" Remember that gravitation propagates at the speed of light, so even if anything where going to happen (it won't) it takes time for the information to get out to the planets. "Assuming this sudden change would cause some perturbation" It won't, so we don't assume that, which also disposes of your fourth point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the fastest a spacecraft can get using gravity-assist? Assuming normal spacecraft and space objects (no neutron stars, black holes, etc). To what speed can a spacecraft accelerate using gravity-assist? For example, if a spacecraft is moving at relativistic speeds, it probably won't get seriously sped up by normal-density objects.
A key limitation to how much of a velocity change can occur in a slingshot maneuver is the amount of time the spacecraft spends in the region near the planet. Change in speed is related to force x time/mass. At very high speed, the spacecraft spends only a very short time near the planet. Moreover, at very high speeds the velocity change is very close to perpendicular to the direction of motion of the spacecraft, which means that the speed of the spacecraft won't change a lot; only its direction changes a little bit. Another key limitation is the radius of the planet. If the planet were extremely massive and compact (like a black hole), the spacecraft could approach very close to the center of mass of the planet, and experience a large change in direction which could add up to 2x the planet's orbital speed to the spacecraft. Of course even in that case, too close an approach would result in tidal forces that could rip the spacecraft apart. So you're right: "For example, if a spacecraft is moving at relativistic speeds, it probably won't get seriously sped up by normal-density objects."
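A minimal sketch of that "2x the planet's orbital speed" limit, for the idealized head-on (1D) case: in the planet's frame the flyby just reverses the approach speed, and transforming back to the Sun's frame adds twice the planet's orbital speed. The numbers used are illustrative assumptions.

```python
def headon_slingshot(v_in, U):
    """v_in: spacecraft speed toward the planet (Sun frame); U: planet's orbital speed toward the craft."""
    v_rel = v_in + U    # approach speed seen from the planet
    return v_rel + U    # leaves at the same relative speed, i.e. v_in + 2U in the Sun frame

print(headon_slingshot(10.0, 13.1))   # Jupiter-like flyby, km/s: 10 in -> 36.2 out
```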
{ "language": "en", "url": "https://physics.stackexchange.com/questions/61960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 3 }
Negative emf in AC generator At a certain instant in an AC generator, when the normal of the plane (rectangular coil) makes an angle of 270 degrees with the magnetic induction B, the value of the emf is: $E = -NAB\omega$ My teachers would usually say that this is the minimum value of emf that a generator produces. Does it really mean that? Or does the negative sign only mean that the emf is at its peak value but the current is flowing in the opposite direction?
Yes. That's why it's called alternating current. Briefly: the magnetic flux linked to the coil is $\phi=NBA\cos(\omega t)$, and the emf induced (according to Faraday's law with Lenz's sign convention) is $$e=-NBA\frac{d(\cos(\omega t))}{dt}=NBA\omega\ \sin(\omega t)$$ Here, $NBA\omega$ is a constant which can be replaced by $E_0$, the maximum value of the emf induced in the coil. When $\omega t=3\pi/2$, i.e. when the normal makes the 270-degree angle, $e=-E_0$, which is the maximum negative voltage, i.e. the peak emf with the current simply flowing in the opposite direction. Please have a look at the Wikipedia article...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the reasoning behind hole carriers being able to carry heat? In the Peltier effect, we consider charge carriers being able to carry heat. For electrons or ions, this attitude makes sense, since an external electric potential drives particles with mass in a direction and effectively transfers heat from one point of the material to another. But for holes, this situation is only virtual: holes moving in one direction is only a reformulation of the fact that the electrons making up the hole's environment are traveling in the opposite direction. I just cannot grasp the concept that the holes can indeed carry heat. I know one can argue that the holes have effective mass, but effective mass is only a measure of how much you can accelerate / decelerate the electron (or hole) in a crystal field. Can you help me out please?
Both free electrons and holes in a semiconductor are excitations, i.e. quasiparticles which can propagate under the influence of an external electric field or a temperature gradient, via diffusion or drift. Don't forget that electrons are also characterized by an effective mass. In p-doped semiconductors a gradient of the temperature creates a region where hopping of real electrons between sites of the crystal lattice is more intensive than in other parts of the device. Due to diffusion they tend to spread over the whole volume.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Voltage of open circuit A battery with emf $\varepsilon$ and internal resistance $r$ is connected with a resistor $R$ in the following open circuit. What is the voltage $V_{ab}=V_a-V_b$? The answer is $- \varepsilon$: "No current. There is no voltage change across R and r." But I don't really understand why ... I was thinking intuitively it should be $0$? Then thinking of how to get 0, I was thinking ... $V_a = - \varepsilon$ since it's on the negative terminal, and $V_b = + \varepsilon$ since it's on the positive terminal. But $V_a - V_b = -2 \varepsilon$ ... how do I make sense of this?
OK, potential is the work done by the electrostatic force per coulomb as you move from A to B. So transfer 1 coulomb from A to B: as you move through the battery, E is directed from the +ve plate to the -ve plate. So the work done by $\vec{E}$ is $(V_-) - (V_+)$, which means $-\text{EMF}$. And yes, there is an assumption that the charge must reach the positive plate with the same velocity with which it entered the negative plate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Purpose of Grover's algorithm? How is the output of Grover's algorithm useful if the result is required to use the oracle? If we already know the desired state, what's the point of using the algorithm? So can you give me a concrete example of an oracle function? For example, if the indexed items in a Grover search were arbitrary patterns, what would the corresponding oracle function look like? Let's make the example more concrete. Each pattern is the image of a face and we want to see if an unknown face is located within the pattern set. Classically our search algorithm is a correlation algorithm (e.g. Kendall-tau, rank correlation etc.). What would the analogue of this be for a quantum search?
The oracle doesn't need to know the desired state in order to verify whether a given state is the desired state. Grover's algorithm can be applied to NP-complete problems. This is the set of problems for which there is no known way to generate a solution in polynomial time, but a given solution can be recognized in polynomial time.
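A minimal classical sketch of that point, using a hypothetical 3-SAT instance (the clause set and assignment are made up for illustration): the oracle can check any candidate without "knowing" the satisfying assignment in advance. In the face-matching example from the question, the oracle would be the correlation test applied to a single candidate pattern.

```python
clauses = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]   # example clauses over variables 1..4

def oracle(assignment):
    """assignment: dict {variable: bool}. True iff every clause has a satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

print(oracle({1: True, 2: True, 3: False, 4: True}))   # verifies a candidate, never generates one
```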
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can anything be hotter than the Sun? I've heard that if a space shuttle enters the atmosphere from a bad angle its surface will become so hot that it will be hotter than the surface of the Sun. How can that be? It seems to an uneducated mind that the Sun is really, really hot; how could something seemingly minor, such as a wrong angle of entry into the Earth's atmosphere, end up generating heat hotter than the Sun?
The temperature of our sun is determined primarily by the generation of heat (the fusion process) and the rate of heat loss by various means. When the process reaches equilibrium, a given average temperature is maintained. Any other process that has a higher generation rate or smaller energy loss, or both, would be hotter than our sun. In the case of the shield, the temperature could be higher than our sun's, but only for a short time. This would not be a fair comparison because we would be using the average sun temperature against the peak temperature generated by the shield. In the universe, there are other suns that are indeed much hotter than our sun.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Electric current streamlines in induction cooking vessel I am looking for a plot of the typical streamlines of the induced electric currents ("eddy currents") in an induction cooking vessel. How can one theoretically predict the streamlines? How is it possible to measure the streamlines? I was able to find a plot of the streamlines for a moving plate in a magnetic field in Heald: Magnetic braking: Improved theory, and want something similar for the induction cooking vessel. Just another question about the moving plate: Up to now I had in mind that eddy currents have to be closed; however, in the picture above some lines are not closed. So, what's true and why?
The shape of the induced eddy currents in induction cooking will depend on the shape of the fluctuating magnetic field and the shape of the cooking vessel. Certainly there are commonalities though, and I suspect the diagram you have is similar. Also, in the case of induction cooking, the field is varying rather than the vessel moving, and this will cause the shape of the current loops to vary too. Regarding closed eddy currents, all current must form a loop. The eddy currents are closed and you're just seeing a cropped diagram. For example, here is a cropped magnetic field line diagram: All magnetic field lines either extend to infinity or form a loop. In the case of an idealized perfect bar magnet, only the two lines normal to the N and S poles extend to infinity. They just look open because the image is cropped.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Polarization of sound "Sound can't be polarized because vibration of this type can't be polarized, i.e., it can't be limited or controlled by any barriers, and so polarization is not possible for it." This is what my teacher answered when I asked the question. But I didn't understand what he meant by "the vibration can't be controlled or limited." Does the phrase "can't be limited or controlled" make sense here? Moreover, can anybody explain this in more detail and more clearly to me?
That happens because electromagnetic waves consist of electric and magnetic parts (http://www.edinformatics.com/math_science/e_mag_nasa_image.gif) which oscillate in orthogonal planes, and usually a beam contains waves oscillating in all such planes. When you polarize light, you let through only the waves oscillating in one plane. But sound waves in air are just vibrations of the medium itself; you cannot restrict them to a single plane.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
What materials focus EM radiation in the 2.4 GHz range? If glass and similar materials refract visible light effectively, what materials would be best for focusing lower frequencies of EM radiation, if any? If not, what other methods exist for focusing these ranges? The thought was inspired by wondering how you might build a camera that captures the 'light' given off by Wi-Fi routers.
Actually, air itself with different water content and density forms refracting layers that can focus microwaves (and other electromagnetic radiation). This is called "ducting"/anomalous propagation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/62967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
I read a book saying Bernoulli's flight equations didn't have as much impact on lift as most people think I'm a computer scientist who likes to read about math and physics occasionally. A local author at a nearby aviation center brought Bernoulli's flight equations into question. The logic was clear enough, but I didn't understand all the math involved. He basically said that the lift equations don't account for why a plane can have lift while it's upside down, and he took some sample data on his small aircraft to show it. Anyhow, I didn't know what to make of it; it was my first acquaintance with flight physics and my only point of view. How much of a role do Bernoulli's equations play in flight, especially when an aircraft is upside down? He suggested that Newton did some early work on lift equations; is this true? What is a sufficient way to view Bernoulli's lift equations?
It is absolutely true that the Bernoulli effect is not necessary in order for a wing to produce lift. Ultimately a wing produces lift by directing air flowing over the wings downward. This can be achieved by ramming air downward through the wing's "angle of attack" with respect to the air flow. This is why an airplane can fly upside down: the Bernoulli effect plays only a minor role, and the angle of attack is the primary factor in guiding air downwards. The shape of the wing is not simply designed with the Bernoulli effect in mind: the goal is to produce smooth laminar flow of air over the wings so that the angle of attack (along with the shape of the trailing edge of the wing) can efficiently direct the air downward. A stall is when that laminar flow is disrupted such that the air no longer follows the shape of the wing and so can no longer be directed downward by the wing's shape and angle of attack.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Escape velocity to intersection of two gravitational fields Find the minimal velocity needed for a meteorite of mass $m$ to get to earth from the moon. Hint: the distance between the center of earth and the center of moon is $\approx 60 R_E$, and the meteorite should reach a certain point $O$ on that distance, where the gravitational forces of the moon and the earth are canceling each other out. It is located $\approx 6 R_E$ from the center of the moon. I thought that the best way to solve this question is to find the difference between the initial energy of the body and the final one at $O$. Beyond that point, the gravitational pull of the earth will 'overcome' the gravitational pull of the moon, and the object will gather speed by itself by reducing its GPE in the earth's gravitational field. Therefore: $U_{G,i}=-G \frac{M_{moon}m}{R_{moon}}$ $U_{G,f}=-G \frac{M_{moon}m}{6R_E}-G \frac{M_E m}{54R_E}$ $U_{G,f}-U_{G,i} \approx m \cdot 1.51 \cdot 10^6 \text{J}$ (there's no mistake in the calculation) $E_k=m \frac{v^2}{2}=m \cdot 1.51 \cdot 10^6$ Therefore $v \approx 1739.97 \text{m/s}$ However this answer is wrong. The correct magnitude of the speed should be somewhere around $2.26 \text{km/s}$, which is close to the escape velocity from the moon ($\approx 2.37 \text{km/s}$). Where am I wrong? Why does the difference in GPE not satisfy the problem?
$$U_{g,i}=-G\dfrac{M_{moon}.m}{\underbrace{R_{moon}}_{(\ distance \ from \ moon\ center) }}-G\dfrac{M_{Earth}.m}{\underbrace{(60R_{earth}-R_{moon})}_{(initial \ distance\ from\ Earth)} } $$ You missed $U_i$ due to Earth!
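A quick numeric check of the corrected energy balance, using rounded constants (the exact values here are assumptions; the question's $60R_E$ and $6R_E$ geometry is kept as given):

```python
import math

G   = 6.674e-11
M_E = 5.972e24        # Earth mass, kg
M_m = 7.342e22        # Moon mass, kg
R_E = 6.371e6         # Earth radius, m
R_m = 1.737e6         # Moon radius, m

U_i = -G * M_m / R_m - G * M_E / (60 * R_E - R_m)   # per unit mass, on the lunar surface
U_f = -G * M_m / (6 * R_E) - G * M_E / (54 * R_E)   # per unit mass, at the balance point O

print(math.sqrt(2 * (U_f - U_i)))   # ~2.27e3 m/s, close to the quoted 2.26 km/s
```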
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Reaching the speed of light via quantum mechanical uncertainty? Suppose you accelerate a body to very near the speed of light $c$ where $v = c - \epsilon$. Although this would take an enormous energy, is it possible the last arbitrarily small velocity needed -- $\epsilon$ -- could be overcome with a minor bump in velocity due to the uncertainty principle?
No. First of all, Planck's constant is not a speed, so you can't compute $c - \hbar$. But you can reword the question to get around that problem, something like this: Is there some speed $\epsilon$ such that an object traveling at speed $c - \epsilon$ could experience a quantum fluctuation that temporarily takes its speed to greater than $c$? The answer to this is still no. Now, in order to really understand why, you could dig into the details of quantum field theory, and learn the meaning of the statement "local operators separated by spacelike intervals commute" which is, in some sense, the most fundamental reason. But I'm guessing that'd be more detail than you're looking for. As a simplified (but still basically accurate) explanation, you can use the same argument for why you can't bump a classical object moving at speed $c - \epsilon$ up to speed greater than $c$ by giving it a little push. That reason is that when something speeds up, spacetime "rotates" around it, but in such a way that all trajectories with speeds less than $c$ continue to have speeds less than $c$. In particular, this rotation (the Lorentz boost) transforms a trajectory with speed $v$ into a trajectory with speed $\frac{v + \Delta v}{1 + v\Delta v/c^2}$. No matter how close you are to the speed of light, speeding up will only take you a fraction of the way closer to $c$, and that is just as true for a quantum fluctuation as it is for a classical push.
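A quick numeric check of the composition law quoted above: no matter how large the extra push $\Delta v$ is (as long as it is below $c$), repeated boosts only crawl toward $c$ and never cross it.

```python
c = 299_792_458.0   # m/s

def add_velocities(v, dv):
    return (v + dv) / (1 + v * dv / c**2)

v = 0.999 * c
for _ in range(5):
    v = add_velocities(v, 0.5 * c)   # five successive large "bumps"
    print(v / c)                     # approaches 1 but never reaches it
```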
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Definition of electric charge and proper explanation Is there a definition of electric charge and proper explanation of it? It is said "Electric charge is the physical property of matter that causes it to experience a force when close to other electrically charged matter." How is it though that matter can get charged? Defining charge as the property of feeling a force with other charged matter seems circular. What is charge? Is there a non-circular definition / explanation?
To be more specific about the definition of charge: charge is an intrinsic property of matter. Just as mass is considered a fundamental property of every particle in this universe, electric charge is considered the fundamental property of a particle that governs electrostatic interactions. In Franklin's view, electric charge was defined as that which takes part in the electrostatic interaction. Everyone knows that static charge produces an electric field and moving charges produce currents. In general, when we deal with quantum mechanics, charge is treated simply as a (quantum) number rather than dwelling on the deeper nature of electric charge, much as spin is considered a property of the particle which is hard to visualize in QM.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Evolution principle of the physical laws I wanted to know if there is a physical theory that considers that the laws of physics undergo an evolutionary process. One that sees the laws of physics, or the absence of them, as something dynamic, such that with time they slowly converge to something we know today. A kind of simulated annealing of the physical laws.
All such theories will be perceived as speculative, radical and unbelievable. The Darwinian theory of evolution was strongly substantiated and, even despite that, fiercely attacked. One example of such a theory is MET. It says that every new moment is a result of evolution by necessity arising from information-related (entropy-related) criteria. It depends on recent work. More information is available here: metatemporal evolution theory
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Mathematical proof of non-negative change of entropy $\Delta S\geq0$ I understand that we can prove that for any process that occurs in an isolated and closed system it must hold that $$\Delta S\geq0$$ via Clausius' theorem. My question is, how can I prove this in a mathematical way?
I understand that we can prove that for any process that occurs in an isolated and closed system it must hold that ΔS≥0 via Clausius' theorem. My question is, how can I prove this in a mathematical way? The Clausius theorem says that when the system undergoes a general cyclic process during which it is connected to a reservoir of (possibly varying) temperature $T_r$, the integral $$ C = \int_{t_1}^{t_2} \frac{{Q}'(t)}{T_r(t)}dt \leq 0, $$ where ${Q}'(t)$ is the derivative of the heat accepted by the system up to the time $t$ ($t$ is just a real number that indexes states as they occur in the irreversible process; it does not have to be actually the time). The second part of the Clausius theorem is the assertion that if the whole cyclical process is reversible, the integral is equal to zero. Now assume our isolated system undergoes some irreversible process $A\rightarrow^{(irrev.)}B$. The system may undergo such a change as a result of a change in the imposed internal constraints, like removal of a wall separating two partitions of a vessel filled with gas at different pressures. The system is then thermally connected to a heat reservoir and is allowed to undergo the reversible process $B\rightarrow^{(rev.)}A$. During the irreversible process $A\rightarrow B$, since the system is isolated, there is no heat transferred and the corresponding contribution to $C$ vanishes. During the reversible process $B\rightarrow A$, in general heat may be transferred. The integral $C$ is thus $$ C= \int_{B,\gamma_{\text{rev.}}}^A \frac{Q'_{\text{rev.}}(s)}{T(s)}ds, $$ where $s$ parameterizes the reversible trajectory $\gamma_\text{rev.}$ in the space of equilibrium states and $Q'_{\text{rev.}}(s)$ is the derivative of the heat $Q_{\text{rev.}}(s)$ already accepted by the system when it is in the state $s$. The change in entropy when it goes from the equilibrium state A to the equilibrium state B is defined as $$ \Delta S = \int_{A,\gamma_{\text{rev}}}^B \frac{Q'_{\text{rev.}}(s)}{T(s)}ds, $$ which is the same as $C$ only with the opposite sign. Since $C\leq 0$, $$ \Delta S = -C \geq 0, $$ QED.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Are all points in the universe connected? Is it true that every point in the universe is connected, or could be so theoretically? If so, how is this mediated? Is it through the quantum nature of the fabric of space, or is it through the interrelationships of the gravity fields throughout the universe? From the gravity fields of single solar systems affecting each other, to galaxies and galactic clusters, as if the gravitational energy of each field has a knock-on effect on the next field?
On space-time, the useful notion of points is that of events. Only events that are separated by time-like curves are causally connected. Causal connectedness implies that fields happening at one of the events can influence fields happening at the other. Events that are separated by space-like curves are not causally connected; they might still be connected indirectly through other events in their past or future. The observable universe seems to be topologically trivial, that is, all causally connected paths that connect two given events are deformable into each other, i.e. no tori that can trap inequivalent classes of paths. If the topology of the universe is nontrivial behind the event horizons of black holes, that topological information is inaccessible to us, and hence not part of the observable universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If everything in existence were increasing in size at some rate, would we be able to detect it? Would our eyes observe any changes? What about electronic measurement devices?
Depends. If you simply assume matter is growing, we would see the distance between the surfaces of celestial bodies diminishing. We regularly monitor the distance between the surfaces of the Earth and Moon by laser ranging to accuracies of less than one cm (which means less than one part in $10^8$ over the time the project has been running), and no such change is observed. If you assume that space is expanding, then you have the Hubble expansion and the dark energy, both of which have strong observational support.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is there a way to create a flickering frequency to be dependent on the speed of the person looking at it? Is there a way to make a screen or a road sign flash at different rates, depending on the velocity of the observer looking at it? * *I would like to achieve a state where two observers going at different speeds would see the screen flash at different rates at the same time. *Another thing I would like to check, if possible: is there a way to make the observer see a different image depending on his/her velocity? (without a radar)
You can use a reflector with gaps. Then the light from a car will alternate between reflecting and not reflecting at a rate dependent on its velocity towards the reflector. Please excuse my crude diagram: As the car moves right to left, gaps in the reflector will cause it to appear to flash on and off.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If particles can find themselves spontaneously arranged, isn't entropy actually decreasing? Take a box of gas particles. At $t = 0$, the distribution of particles is homogeneous. There is a small probability that at $t = 1$, all particles go to the left side of the box. In this case, entropy is decreasing. However, it is a general principle that entropy always increases. So, where is the problem, please?
Even though the answer you chose is very good I will add my POV Take a box of gas particles. At $t=0$, the distribution of particles is homogeneous. There is a small probability that at $t=1$, all particles go to the left side of the box. In this case, entropy is decreasing. Take the statistical mechanics definition of entropy: $$S = -k_B \sum_i P_i \ln P_i$$ where $k_B$ is the Boltzmann constant. The summation is over all the possible microstates of the system, and $P_i$ is the probability that the system is in the $i$th microstate. The problem is that this one system you are postulating in your question is one microstate in the sum that defines the entropy of the system. A microstate does not have an entropy by itself, in a similar way that you cannot measure the kinetic energy of one molecule and extrapolate it to a temperature for the ensemble of molecules. An observation on systems with decreased entropy: Entropy increases in a closed system. If an appropriate liquid is turned into a crystal, the ensemble of molecules will have lower entropy, but energy will have been released in the form of radiation; the system is closed only when the radiation is taken into account for the entropy budget.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Temperature of glowing materials As I understand it, stars emit visible light, OBAFGKMRNS, in the range of $10^3 - 10^4 K$. Yet materials such as steel emit similar frequencies at much lower temps; red is around 800 K. Why the difference? I thought black body radiation applies to all materials and environments. I am an interested amateur.
The peak wavelength at which a body emits light is governed by Wien's displacement law, which states that this wavelength is inversely proportional to the temperature, as $$\lambda \, T=\text{const}=0.003\text{ m K}.$$ More graphically, in the stellar-surface sort of temperature range, this looks like You'll notice that although the short-wavelength cut-off is rather sharp, bodies still emit light at shorter wavelengths than the Wien peak wavelength. Thus for steel at 800 K the peak wavelength is at 3.7 $ \mu\text{m}$, in the mid-infrared. The total radiance on the visible range is then proportional to some power of $(700\text{nm}/3.7\,\mu\text{m})\approx 0.2$, so it's about 1% or less. Thus, when hot irons glow red, what you're seeing is the very edge of the spectral radiance distribution. The bulk of the emissions is as heat in the IR - as you can tell if you put your hand anywhere nearby!
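As a quick numerical check of those numbers (a rough Python sketch; the value of Wien's constant, $b\approx 2.898\times10^{-3}\ \mathrm{m\,K}$, is assumed here rather than quoted from the answer):

```python
# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3   # Wien's displacement constant, m*K (assumed value)

for label, T in (("sun-like star", 5800.0), ("red-glowing steel", 800.0)):
    lam_um = b / T * 1e6
    print(f"{label} at {T:.0f} K -> peak wavelength {lam_um:.2f} um")

# sun-like star at 5800 K -> peak wavelength 0.50 um   (visible)
# red-glowing steel at 800 K -> peak wavelength 3.62 um (mid-infrared)
```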
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Does relativistic mass have weight? If an object were sliding on an infinitely long friction-less floor on Earth at relativistic speeds (ignoring air resistance), would it exert more vertical weight force on the floor than when it's at rest?
First off, your question is phrased in terms of relativistic mass, which is an obsolete concept. But anyway, that's a side issue. The question can be posed in terms of either the earth's force on the puck or the puck's force on the earth. We expect these to be equal because of conservation of momentum. In general relativity, the source of gravitational fields is not the mass or the mass-energy but the stress-energy tensor, which includes pieces representing pressure, for example. The puck has some stress-energy tensor, and this stress-energy tensor is changed a lot by the puck's highly relativistic motion. Therefore the puck's own gravitational field is definitely changed by the fact of its motion. However, the change is not simply a scaling up of its normal gravitational field. The field will also be distorted rather than spherically symmetric. Yes, the effect is probably to increase its force on the earth. The earth therefore makes an increased force on the puck. Here is a similar example that shows that you can't just naively use $E=mc^2$ to calculate gravitational forces. Two beams of light moving parallel to each other experience no gravitational interaction, while antiparallel beams do. See Tolman, R.C., Ehrenfest, P., and Podolsky, B. Phys. Rev. (1931) 37, 602, http://authors.library.caltech.edu/1544/
{ "language": "en", "url": "https://physics.stackexchange.com/questions/63961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 2 }
Why don't black holes within a galaxy pull in the stars of the galaxy? Visit http://www.nasa.gov/audience/forstudents/k-4/stories/what-is-a-black-hole-k4.html If black holes can pull even light, why can't they pull the stars in the galaxy?
Why would you assume they do not? Of course they do. But as you probably know, the gravitational pull decreases with distance (inverse square law). From a safe enough distance any other object (star, galaxy) would feel the normal gravitational pull of an object of the black hole's mass at that distance; it makes no difference to the stars if the source is a black hole or anything else.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The Preference for Low Energy States The idea that systems will achieve the lowest energy state they can because they are more "stable" is clear enough. My question is, what causes this tendency? I've researched the question and been unable to find a clear answer, so I was hoping someone could explain what's going on behind the scenes here (or if nothing is and it's just an observed law that isn't explainable).
Large systems with many degrees of freedom (e.g. a ball consisting of many molecules) tend to settle into low energy states. This is a direct consequence of two fundamental laws, the first and second laws of thermodynamics: energy conservation and entropy increase. A system with many degrees of freedom can be in many different microscopic states (think about a ball for which each molecular position and vibration etc. is specified). Each such feasible micro state is equally likely. However, what we typically observe is not a micro state, but a coarse-grained description (the position of the ball) corresponding to incredibly many micro states. Certain macro states correspond to far fewer micro states than other macro states. As nature has no preference for any of these micro states, the latter macro states are far more likely to occur. The evolution to ever more likely macro states (until the most likely macro state, the equilibrium state, is reached) is called the second law of thermodynamics. The decrease of potential energy is the consequence of the first (energy conservation) and second (evolution to more likely macro states) laws of thermodynamics. As macro states with a lot of energy stored in heat (our ball with random thermal motion of its molecules) contain many more micro states and are therefore much more likely, energy tends to get transferred from potential energy to thermal energy. This is observed as a tendency towards a decrease in potential energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
Reflection of a polarised beam For the past few days I've been trying to understand how autofocus (AF) works on photographic cameras. There is a statement that says AF systems are polarisation sensitive. This means that they can only work with circularly polarized light. Trying to understand why, I came across this article which states that beam splitters are polarisation sensitive. Is this true? I cannot think of a way that the reflection of linearly polarised light would have any problem, or why reflected circularly polarized light would be OK.
Reflection polarizes light. A reflected ray becomes linearly polarized perpendicular to the plane containing the incident and reflected rays. This is why polarized sunglasses are effective for reducing glare. The autofocus may not be working as expected because much of the scene is polarized light.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What's the physical significance of using the Fourier transform for diffraction? I am studying some basic ideas of diffraction, and it is mentioned that in the far field the diffraction pattern can be understood via the Fourier transform. But I just don't understand the physical reason for that. And why can't the Fourier transform be used for the near-field case? Also, when I try to understand the theory of diffraction, it ends up with some complicated math (integrals). I want to learn that, but the books I am reading are not easy to understand. Can anyone recommend some good books or video lectures (more theoretical, but explaining most of the math in a plain way)?
To make it short, my interpretation of the Fourier transform in optics is that it transforms the positions (and phase) of the rays at some given plane into their angles. One given angle corresponds to one given oscillation of the phase, i.e. one point in Fourier space.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Displacement current - how to think of it What is a good way to think of the displacement current? Maxwell imagined it as being movements in the aether, small changes of electric field producing magnetic field. I don't even understand that definition, assuming there is aether. (On the topic of which, has aether actually been disproved? I read that even with the Michelson-Morley experiment the aether wasn't disproved.)
The displacement current is the 'phantom' current that passes through a capacitor in a circuit, since no real current runs between the two plates of a capacitor. It is given by the rate of change of the electric flux with respect to time, multiplied by $\varepsilon_0$. A great video on this can be found here: http://www.learner.org/resources/series42.html Go to 'Maxwell's Equations' and watch the video about 20 minutes in.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Is it possible to obtain higher order corrections to the ideal gas law when one allows realistic phenomena to make their way into the equations? I had an interesting thought today that caused me to ask whether it would be possible to make corrections to the ideal gas law by introducing terms derived from more realistic phenomena.
See the van der Waals gas equation: $$\Bigg(P+\dfrac{an^2}{V^2}\Bigg)\Big(V-nb\Big)=nRT$$ $a, b$ are constants that depend on the properties of the gas.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Different batteries connected in parallel If we have 2 batteries, one of emf x and the other of emf y, and we connect them in series, we get an effective emf of x+y. But what if we connect them in parallel? How do we calculate the emf then?
The other answers are good (especially the I = (V1 - V2) / (R1 + R2) equation that we will use) but I just wanted to give you ballpark estimates of some numbers that you can expect to see. Imagine that you are going to do this to a 9V battery and a 1.2V AA battery, then: V1 - V2 = 7.8V For internal resistances, it's hard to put ballpark numbers in the field but based on this excellent document from energiser then at most you are going to see internal resistances of 1.0 ohms. At room temperature you are more likely to see resistances of around 0.1 ohms. Now, if we assume that the internal resistances are roughly the same for both batteries then we can say that: R1 = R2 (given) I = (V1 - V2) / (2R1) Which we will now use for both possible internal resistances: I(R1 = 1.0) = 7.8 / 2 = 3.9A I(R1 = 0.1) = 7.8 / 0.2 = 39A So, as you can see, somewhere around 3.9 - 39 amps of current are going to be generated very quickly. Using p = vi we can then see that: P(R1 = 1.0) = 30.42W P(R1 = 0.1) = 304.2W Which is a lot of energy being released very quickly in a very small package. Which is probably why it is not unexpected to see such violent results. You're going to boil your batteries pretty quickly with all of that heat. P.S. I'm just doing this by ballparking off the top of my head very quickly but the rough numbers make sense to me. I hope this helped give a better visual sense of what happens to the poor batteries when you do this.
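For reference, the same ballpark numbers as a small Python sketch (all inputs are the assumed values above, not measurements):

```python
V1, V2 = 9.0, 1.2                   # battery EMFs in volts
for R_int in (1.0, 0.1):            # assumed internal resistance of EACH battery, ohms
    I = (V1 - V2) / (2 * R_int)     # the same current circulates through both batteries
    P = (V1 - V2) * I               # power dumped into the two internal resistances
    print(f"R_int = {R_int} ohm -> I = {I:.1f} A, P = {P:.1f} W")

# R_int = 1.0 ohm -> I = 3.9 A, P = 30.4 W
# R_int = 0.1 ohm -> I = 39.0 A, P = 304.2 W
```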
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 0 }
What is our estimated running speed on the Moon's surface? I was wondering: if we had the chance to run on the Moon's surface, what would you expect it to look like? I expect our velocity will increase for the same work we do on Earth, but I am not sure if this will scale in multiples of the gravity variation. What maximum speed do you think we would reach?
There are a few videos of astronauts running on the moon, e.g. here and here. Note that they have to adopt quite different gaits from on Earth. However, they were encumbered by space suits that were heavy and stiff, and they were probably moving fairly carefully in order to avoid the danger of a puncture. It's difficult to say how much faster a human could run in a more optimal suit, but it's almost certainly slower than on Earth. Prior to the moon landings, NASA did some research where they simulated Lunar gravity by suspending people from ropes. The guy in this video gets up quite a speed by leaning forward and flailing his arms around like crazy, but he still can't keep up with someone sprinting under Earth gravity. (It might not be very obvious from the above video how the gravity simulation works. The trick is that he's actually lying on his side. You can see it when they set it up at the start of this longer video, which also demonstrates things like jumping high into the air and climbing a pole one-handed.) It's just about conceivable that this running speed could be improved upon through the use of things like weights or even stilts. I don't know whether research into such "augmented" locomotion under Lunar gravity has been done. (And if it was, I'm not sure whether it would really count.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why does increasing the temperature of a thermistor decrease its resistance? Surely, upon an increase in temperature, the atoms within the thermistor would vibrate with more energy and therefore more vigorously, hence making the electrons flowing through the electric circuit more likely to collide with one of the atoms, so increasing resistance. However, the effect of temperature on a thermistor is contrary to this. I can't understand how it can be. It's analogous to running across a playground: if everyone is still you're less likely to collide with someone, however if everyone is constantly moving from left to right then a collision is more likely. So why does an increase in temperature decrease the resistance of a thermistor?
Olly, the first part of your thinking is correct: as the atoms receive more energy, the electrons do collide more energetically, but they also move "away" from the atom's center. The further they are from the center, the easier it is for an electric field to "move" them. This means that for the same effort (voltage), more electrons are moved (larger current). Since R = E/I, the effective resistance decreases as the current increases (for the same voltage).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Does inertia increase with speed? I have heard that when the speed of an object increases, the mass of the object also increases. (Why does an object with higher speed gain more (relativistic) mass?) So does inertia, which is related to mass, increase with speed? So, if I accelerate on a bus, my mass will increase and my inertia will increase for a while on the bus, until the bus stops?
I think inertia doesn't depend on speed; it depends on the rate of change of speed, i.e. acceleration. The more you accelerate, the greater the inertia will be. It can be understood by taking the example of a motorcycle, in which a lower gear gives more traction than a higher one. The higher the acceleration you want, the more traction is required due to inertia. If there is no acceleration, then there will be no inertia.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
What is the derivation for the exponential energy relation and where does it apply? Very often when people state a relaxation time $\tau_\text{kin-kin}, \tau_\text{rot-kin}$, etc., they think of a context where the energy relaxation goes as $\propto\text e^{-t/\tau}$. Related is an approach to compute it via $$\tau=E(0)/\left(\tfrac{\text d E}{\text d t}\right)_{E=0}.$$ Both are justified from considering dynamics for which $$\frac{\text d E}{\text d t}=-\frac{1}{\tau}(E-E(0)).$$ My question is: What fundamentally leads to this relation? I conjecture it relates to a Master equation, which mirrors the form "$\dot x=Ax+b$". But I'm not sure how the degrees of freedom in the Master equation translate to the time dependence of the macroscopic energy value. There will also be a derivation from the Boltzmann equation somehow, for some conditions, but what is the general argument and where does it work?
This form of $dE/d\tau$ is valid only when the system is not too far from equilibrium and linear response assumption is valid. The fact that $dE/d\tau$ depends on the difference $E - E(0)$ alone is a consequence of assuming a linear response.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The motion of a spring I have a question about the force exerted by this spring. I have seen many times that $\overrightarrow{F}=-Kx\overrightarrow{i}$, and I'm asking why we don't use $\overrightarrow{F}=Kx\overrightarrow{i}$, without the minus. Suppose also that $\overrightarrow{F}$ changes during the motion of the solid object, as does the form of the spring. Here $K$ is Hooke's constant, $K>0$. P.S.: I need a detailed explanation of this phenomenon.
$x$ measures the difference in length of the spring in relation to its relaxed state. If you increase the length (positive $x$), the spring creates a force in the negative $x$ direction, because it wants to return to its relaxed state. Accordingly, if you compress the spring (negative $x$) the spring wants to expand (force in positive $x$ direction) in order to be relaxed again. That's why the negative sign in $$ \vec F = - K x \vec i$$ gives the physically appropriate behaviour.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Hilbert space of harmonic oscillator: Countable vs uncountable? Hm, this just occurred to me while answering another question: If I write the Hamiltonian for a harmonic oscillator as $$H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2$$ then wouldn't one set of possible basis states be the set of $\delta$-functions $\psi_x = \delta(x)$, and that indicates that the size of my Hilbert space is that of $\mathbb{R}$. On the other hand, we all know that we can diagonalize $H$ by going to the occupation number states, so the Hilbert space would be $|n\rangle, n \in \mathbb{N}_0$, so now the size of my Hilbert space is that of $\mathbb{N}$ instead. Clearly they can't both be right, so where is the flaw in my logic?
* *The Hilbert space ${\cal H}$ of the one-dimensional harmonic oscillator in the position representation is the set $L^2(\mathbb{R})={\cal L}^2(\mathbb{R})/{\cal N}$ (of equivalence classes) of square integrable functions $\psi:\mathbb{R}\to\mathbb{C}$ on the real line. The equivalence relation is modulo measurable functions that vanish a.e. *The Dirac delta distribution $\delta(x-x_{0})$ is not a function. It is a distribution. In particular, it is not square integrable, cf. this Phys.SE post. *One may prove that all infinite-dimensional separable complex Hilbert spaces are isomorphic to the set $${\ell}^{2}(\mathbb{N})~:=~\left\{(x_n)_{n\in\mathbb{N}}\mid\sum_{n\in\mathbb{N}} |x_n|^2 <\infty\right\}$$ of square integrable complex sequences.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 4, "answer_id": 1 }
Determine the velocity and acceleration of the vertex $B$ 1) The bent rod $ABCD$ rotates about the line $AD$ with a constant angular velocity of $90\,\mathrm{rad/s}$. Determine the velocity and acceleration of the vertex $B$ when the rod is in the position shown in the figure. 2) Determine the velocity and acceleration of the vertex $B$ assuming now that the angular velocity is $95\,\mathrm{rad/s}$, decreasing at the rate of $380\,\mathrm{rad/s^2}$. My attempt was to apply the formula $\vec{v}=\vec{r}\times \vec{\omega}$ but I think it is not valid if the rotation axis is not aligned with any of the axes in the $xyz$ space. Maybe I'm talking nonsense. I was thinking about making a change of coordinates so that the rotation axis coincides with the $z$ axis. But it seems very counterproductive and does not seem to be the purpose of the exercise.
Your angular velocity vector is $$ \vec{\omega} = \Omega \frac{ \vec{r}_D - \vec{r}_A }{|\vec{r}_D - \vec{r}_A|} $$ where $\vec{r}_A = (0,0.2,0.12)$, $\vec{r}_D = (0.3,0,0)$, $\vec{r}_B = (0.3,0.2,0.12) $ in meters and $\Omega = 90\;{\rm rad/s}$. Your velocity kinematics is $$ \vec{v}_B = \vec{\omega} \times ( \vec{r}_B - \vec{r}_A ) $$ And acceleration kinematics $$ \vec{a}_B = \dot{\vec{\omega}} \times ( \vec{r}_B - \vec{r}_A ) + \vec{\omega} \times \vec{\omega} \times ( \vec{r}_B - \vec{r}_A ) $$ $$ = \dot{\vec{\omega}} \times ( \vec{r}_B - \vec{r}_A ) + \vec{\omega} \times \vec{v}_B $$ From here you plug-and-chug.
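For part 1 (constant rotation rate, so $\dot{\vec\omega}=0$), here is the plug-and-chug step as a small numerical sketch (Python/NumPy, using the coordinates quoted above; treat it as a check, not as part of the original answer):

```python
import numpy as np

r_A = np.array([0.0, 0.2, 0.12])   # m
r_D = np.array([0.3, 0.0, 0.0])    # m
r_B = np.array([0.3, 0.2, 0.12])   # m
Omega = 90.0                       # rad/s, constant in part 1

axis = (r_D - r_A) / np.linalg.norm(r_D - r_A)   # unit vector along AD
w = Omega * axis                                 # angular velocity vector

v_B = np.cross(w, r_B - r_A)       # velocity of B
a_B = np.cross(w, v_B)             # acceleration of B (the w_dot term vanishes here)

print(v_B)   # approx [ 0.  -8.53  14.21 ] m/s
print(a_B)   # purely centripetal here, |a_B| ~ 1.49e3 m/s^2
```

For part 2 one would set $\Omega = 95\ \mathrm{rad/s}$ and add the $\dot{\vec\omega}\times(\vec r_B-\vec r_A)$ term, with $\dot{\vec\omega}$ of magnitude $380\ \mathrm{rad/s^2}$ pointing opposite to the axis (since the rotation is slowing down).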
{ "language": "en", "url": "https://physics.stackexchange.com/questions/64937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can a car's engine move the car? Newton's First Law of Motion states that an object at rest or in uniform motion tends to stay in that state of motion unless an unbalanced, external force acts on it. Say I were in a car and pushed it from the inside; it won't move. So how is the engine of a car capable of moving the car?
The friction force between the tyres and the ground is what makes the car change position, i.e. move. If the road is slippery, cars find it difficult to move; sometimes they just spin on the same spot. But again, the energy supplied by the engine makes the crankshaft rotate, in the process making the tyres rotate too; hence that same energy, together with the friction, facilitates the car's movement.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Derivation of Dirac equation using the Lagrangian density for Dirac field How can I derive the Dirac equation from the Lagrangian density for the Dirac field?
The Lagrangian density for a Dirac field is $$ \mathcal{L} = i\bar\psi\gamma^\mu\partial_\mu\psi -m \bar\psi\psi $$ The Euler-Lagrange equation reads $$ \frac{\partial\mathcal{L}}{\partial\psi} - \frac{\partial}{\partial x^\mu}\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right] = 0 $$ We treat $\psi$ and $\bar\psi$ as independent dynamical variables. In fact, it is easier to consider the Euler-Lagrange for $\bar\psi$ $$ \frac{\partial\mathcal{L}}{\partial\bar\psi} - \frac{\partial}{\partial x^\mu}\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\bar\psi)}\right] = 0\\ \Rightarrow i\gamma^\mu\partial_\mu\psi -m\psi - \frac{\partial}{\partial x^\mu}[ 0] = 0\\ \Rightarrow i\gamma^\mu\partial_\mu\psi -m\psi=0 $$ The partial differentiation is trivial - remember that $\bar\psi$ and $\partial_\mu\bar\psi$ are treated as though independent. We recover the Dirac equation as expected. If we had instead chosen the Euler-Lagrange for $\psi$, we would have found the conjugate Dirac equation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Why is the magnetic flux not zero? If $\vec{\mathbf B}=B\vec{\mathbf a}_z$, compute the magnetic flux passing through a hemisphere of radius $R$ centered at the origin and bounded by the plane $z=0$. Solution The hemisphere and the circular disc of radius $R$ form a closed surface, as illustrated in the figure; therefore, the flux passing through the hemisphere must be exactly equal to the flux passing through the disc. The flux passing through the disc is $$\Phi=\int_S\vec{\mathbf B}\cdot\mathrm d\vec{\mathbf s}= \int\limits_0^R\int\limits_0^{2\pi}B\rho\,\mathrm d\rho\,\mathrm d\phi =\pi R^2B$$ The reader is encouraged to verify this result by integrating over the surface of the hemisphere. According to Maxwell's equations the magnetic flux over a closed surface must be zero; why does that not happen in this case?
The net flux is always zero, and it satisfies the Maxwell equation. In the answer, firstly the flux through the disc has been calculated; next, using the fact that the net flux must be zero, we can conclude that the flux through the hemisphere must be equal to the flux through the disc but with the opposite sign, namely: flux through hemisphere + flux through disc = 0.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Physical representation of volume to surface area I was looking at this XKCD what-if question (the gas mileage part), and started to wonder about the concept of unit cancellation. If we have a shape and try to figure out the ratio between the volume and the surface area, the result is a length. For example, a sphere of radius 10 cm has a volume of $\approx 4189 cm^3$ and an area of $\approx 1256 cm^2$. Therefore, the volume : surface area ratio is $\approx 3.3 cm$. My question is: what is the physical representation of length in this ratio?
For the case of a sphere the ratio you found is: $$ \frac{V}{S} = \frac{ \frac{4}{3} \pi R^3}{4 \pi R^2} = \frac{R}{3} $$ We can actually pass off the volume as being the integral of the surface area here. That's passable when you check the calculus. One approach is then to ask "what is a function divided by its derivative". This is really similar to the area to perimeter ratio of a circle. $$ \frac{A}{P} = \frac{ \pi R^2}{ 2 \pi R} = \frac{R}{2} $$ Of course you see the "2" because of the value of the exponent, which comes from there being two dimensions, just like the sphere. So now we have explained part of the answer, which is that the linear dimension is divided by the number of dimensions. This is still unsatisfactory because we have no clear sense of how we should define this particular "characteristic length". One attempt at resolution of this problem would be to test the idea for a square-cube system. $$ \frac{V}{S} = \frac{ R^3}{ 6 R^2} = \frac{R}{6} $$ $$ \frac{A}{P} = \frac{ R^2}{4 R} = \frac{R}{4} $$ You can see that it still follows our required rule, but the "characteristic length" is now half of the length of a side. Of course we want to make a statement general to all shapes. This is still muddled by the definition of "characteristic length". So let's avoid it by making a statement about the ratio of "insideness" to "outsideness" for any class of shapes, moving from one dimension to another. $$ \left( \frac{I}{O} \right)_{n+1} = \frac{n}{n+1} \left( \frac{I}{O} \right)_{n} $$ This begs the definition of the "characteristic length", which I'll call $l$. $$ l \equiv n \left( \frac{I}{O} \right)_{n} $$ Unfortunately I can't claim to have invented something new. This is the idea behind Hydraulic diameter. The only difference is a factor of 2. A 4D being with a pipe of constant 3D cross-section would use your formula to calculate hydraulic radius. Wikipedia also includes the same observation I just made: For a fully filled duct or pipe whose cross section is a regular polygon, the hydraulic diameter is equivalent to the diameter of a circle inscribed within the wetted perimeter. I've shown that this is also true thinking of a cube versus sphere. So provided we correct your $3.3 cm$ by multiplying by the number of dimensions, you've obtained a sort of generalized radius. Other, more exotic, shapes won't be so simple to explain. If you had a sphere with a bumpy surface and counted the area you had to paint, this would shrink the shape's hydraulic radius. One way we could justify this concept is referring to fluid dynamics. The hydraulic diameter is used because pushing fluid through a "bumpy" pipe is like pumping it through a smaller smooth pipe. So the number is sort of a proxy for viscous resistance. Well, that can be one use.
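As a quick sanity check of the ratios above, a throwaway Python sketch ($R$ is the sphere radius or the square/cube side length, as in the text):

```python
import math

R = 10.0  # arbitrary length

# Sphere: V/S should equal R/3
print((4/3) * math.pi * R**3 / (4 * math.pi * R**2), R / 3)

# Cube of side R: V/S should equal R/6
print(R**3 / (6 * R**2), R / 6)

# Circle: A/P should equal R/2; square of side R: A/P should equal R/4
print(math.pi * R**2 / (2 * math.pi * R), R / 2)
print(R**2 / (4 * R), R / 4)
```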
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
What sort of setup would I need to create an electromagnetic chair with no legs? I am a furniture design student, therefore please keep it simple. Is a system strong enough to hold an average male of, say, 90 kg even possible?
Someone has done this with a bed: http://www.dvice.com/archives/2012/05/maglev-bed-lets.php, with the use of naturally strong rare-earth magnets (neodymium magnets). For a chair it would be hard to levitate it any reasonable distance, as you would need a decently powerful magnet.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to directly calculate the infinitesimal generator of SU(2) We commonly investigate the properties of SU(2) on the basis of SO(3). However, I want to directly calculate the infinitesimal generators of SU(2) according to the definition $$X_{i}=\frac{\partial U}{\partial \alpha_{i}}$$ from Lie group theory. But where is the problem in the method I used below? First, I parameterize SU(2) with $(\theta, \phi, \gamma)$ like this: $$U=\begin{bmatrix} e^{i\theta}\sin\phi & e^{i\gamma}\cos\phi \\ -e^{-i\gamma}\cos\phi & e^{-i\theta}\sin\phi\end{bmatrix}$$ and the identity $E$ is obtained when $(\theta, \phi, \gamma) = (0,\frac{\pi}{2},0)$. Second, I use the definition of the infinitesimal generator like this: $$ I_{1}=\frac{\partial U}{\partial \theta}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}1 & 0\\ 0 & -1 \end{bmatrix}$$ $$ I_{2}=\frac{\partial U}{\partial \phi}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}0 & i\\ -i & 0 \end{bmatrix}$$ $$ I_{3}=\frac{\partial U}{\partial \gamma}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}0 & 0\\ 0 & 0 \end{bmatrix}$$ Here is the question... Why do I get the zero matrix? We should expect to get the Pauli matrices, shouldn't we? Where does the problem come from?
The problem is that your coordinates aren't well defined at $\theta=0$ and $\phi=\pi/2$. Note in particular that $$ U|_{(0,\frac{\pi}{2},\gamma)} = \begin{pmatrix}1&0\\0&1\end{pmatrix} $$ for any value of $\gamma$. A simpler choice is $$ \tilde{U} = \begin{pmatrix} x+iy & z+iw \\ -z+iw & x-iy \end{pmatrix}, $$ with $$ x = \sqrt{1 - y^2 - z^2 - w^2}. $$ Differentiating this you find $$ d\tilde U = i\begin{pmatrix} dy & +i\,dz + dw \\ -i\,dz + dw & -dy \end{pmatrix} - \frac{y\,dy+z\,dz+w\,dw}{\sqrt{1-y^2-z^2-w^2}}\begin{pmatrix}1&0\\0&1\end{pmatrix} $$ from which you can read off the Pauli matrices at the point $(x,y,z,w)=(1,0,0,0)$.
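If it helps, the same differentiation can be checked symbolically; a quick SymPy sketch of the parametrization $\tilde U$ above:

```python
import sympy as sp

y, z, w = sp.symbols('y z w', real=True)
x = sp.sqrt(1 - y**2 - z**2 - w**2)

U = sp.Matrix([[ x + sp.I*y,  z + sp.I*w],
               [-z + sp.I*w,  x - sp.I*y]])

origin = {y: 0, z: 0, w: 0}        # the identity, (x, y, z, w) = (1, 0, 0, 0)
for s in (y, z, w):
    print(s, U.diff(s).subs(origin))
# d/dy -> i*sigma_z,  d/dz -> i*sigma_y,  d/dw -> i*sigma_x
```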
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How can the big bang occur mathematically? As we know, time began with the big bang. Before that there was no time, no laws, nothing. Mathematically, how can an event take place when no time passes by? How did the big bang take place when there was no time? Note my question is not about whether the big bang took place or not; my question is whether this is a mathematical anomaly. Thanks
According to dictionary.com: An event is an occurrence that is sharply localized at a single point in space and instant of time. An event can't take place without time. Yet, many scientists claim that there was no time before the big bang. But, that is completely illogical. How can there be no time and then suddenly an event take place, which requires time? That is circular (or paradoxical) reasoning . . . something that many scientists use when they don't know what to say.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Work on Ferromagnetic Object Due to Solenoid I've been going through some equations and such trying to determine the work done by a solenoid on a ferromagnetic object. I have the following: Magnetic field due to solenoid: $\vec{B} = \langle0,0,\mu_0nI\rangle$ (Assuming coils are on the xy-plane and current is counter-clockwise) Force of magnetic field: $ F = q\vec{v} \times \vec{B} $ Work: $ W = \int F \cdot dl $ Work of Magnetic Field: $ W = \int_c(q\vec{v} \times \langle0,0,\mu_0 nI\rangle) \cdot d\vec{r} $ For one, this seems to indicate a work of 0 if the object is not charged, which I have seen in some places but just doesn't seem right. Also, this does not take into account the properties of the object, such as relative permeability, which I guess could have some effect with the charge value. I'm trying to calculate the acceleration of a ferromagnetic object from a magnetic field; is there a better way to do this? I've thought about the following: $ \vec{a} = \frac{q\vec{v} \times \vec{B}}{m} $ However, this is where I started running into the charge issue and thought to calculate it from the work done.
I think you're right to consider the magnetic permeability of the object. However, I don't think the equation for force you have is valid here as the object may neither be charged (q=0) nor initially moving (v x B = 0), conditions which according to the equation would result in zero force.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Where does the extra equation come from to determine the forces from an object on a table? I have a question about basic statics but somehow I cannot manage to find the answer on my own (btw, this is not homework. It's been so many years since school for me...). The problem is very simple: we have an object with weight $D$ at a given location on a table with four legs ($F_1$ to $F_4$). What is the force applied on each leg? (for simplicity, I'm just using the same labels $F$ and $D$ for both the location and the force) $W$, $H$, $x$, $y$ and $D$ are given. To find the forces on each leg, as far as I remember, I have to consider two general equations: $\sum F=0$ and $\sum M=0$. So I have: $$ F_1 + F_2 + F_3 + F_4 - D = 0 $$ Also, considering the moments around the point $F_1$: $$ W(F_2+F_3) - xD = 0 $$ $$ H(F_3+F_4) - yD = 0 $$ But this just gives me 3 equations! I'm missing one more equation and cannot figure it out.
The simple answer is that you can't fully solve this problem--because as you note it is under-constrained--under the assumptions that are made when you first start doing statics (that objects are completely rigid). The introduction of finite strains bring in additional relationships.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Geometrical Representation Grover algorithm I am studying the Grover algorithm, and in my lectures and others', I've come across this picture. If the dimension of the computational basis is greater than 2, why does the evolution of the algorithm have this geometrical representation in the plane?
The plane is enough because all the vectors – before and after the operations (which are really simple rotations) – belong to a two-dimensional plane. The Hilbert space has many more dimensions but they're orthogonal to the plane of the picture and the coordinates (amplitudes) in these additional directions don't change during the calculation so we don't need to draw them at all. The rotation only mixes two coordinates – if the basis is correctly chosen (the plane is a different plane than some plane generated by two "a priori" basis vectors but it exists, anyway).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Expectation value - Zettili vs Griffiths I know that an inner product between two vectors is defined like: $$\langle a | b\rangle = {a_1}^\dagger b_1+{a_2}^\dagger b_2+\dots$$ but because the transpose of a scalar component such as $a_1$ is just $a_1$ itself, the dagger reduces to complex conjugation and the above simplifies to: $$\langle a | b\rangle = \overline{a_1} b_1+\overline{a_2} b_2+\dots$$ where $\overline{a_1}$ is the complex conjugate of $a_1$. Furthermore we can similarly define an inner product for two complex functions like this: $$\langle f | g \rangle = \int\limits_{-\infty}^\infty \overline{f} g\, dx$$ In Griffiths's book (page 96) there is an equation which describes the expectation value, and we can write this as an inner product of a function $\Psi$ with $\widehat{x} \Psi$: \begin{align*} \langle x \rangle = \int\limits_{-\infty}^{\infty}\Psi\,\,\widehat{x}\Psi\,\,dx = \int\limits_{-\infty}^{\infty} \Psi\,\,(\widehat{x}\Psi)\,\, dx \equiv \underbrace{\langle\Psi |\widehat{x} \Psi \rangle}_{\rlap{\text{expressed as an inner product}}} \end{align*} In Zettili's book (page 173) the expectation value is defined as a fraction: \begin{align*} \langle \widehat{x} \rangle = \frac{\langle\Psi | \widehat{x} | \Psi \rangle}{\langle \Psi | \Psi \rangle} \end{align*} Main question: I know the meaning of the definition in Griffiths's book but I simply have no clue what Zettili is talking about. What does this fraction mean and how is it connected to the definition in Griffiths's book? Sub question: I noticed that in Zettili's book they write the expectation value like $\langle \widehat{x}\rangle$ while Griffiths does it like this: $\langle x \rangle$. Who is right and who is wrong? Does it matter? I think Griffiths is right, but please express your opinion.
If the wave function $\Psi$ is normalized, then $\langle\Psi|\Psi\rangle$ should equal 1. Griffiths' definition assumes the wave function is already normalized, while Zettili accounts for all possibilities by dividing out the normalization constant. So if the wave function $\Psi$ is normalized, Zettili's definition will reduce to Griffiths' definition. As for the sub question, that's just a matter of notation, which is just a matter of convention, which doesn't really have an objective "right" and "wrong." They're both right as long as they're consistent with the rest of the notation in their respective books.
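A concrete illustration of that reduction, as a small SymPy sketch (the shifted, deliberately unnormalized Gaussian is just an example I picked so the answer is nontrivial):

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
psi = sp.exp(-(x - a)**2 / 2)      # NOT normalized; psi is real, so conjugation is a no-op

num = sp.integrate(psi * x * psi, (x, -sp.oo, sp.oo))   # <Psi| x |Psi>
den = sp.integrate(psi * psi, (x, -sp.oo, sp.oo))       # <Psi|Psi>

print(sp.simplify(num / den))   # a         -> Zettili's ratio gives <x> = a regardless of normalization
print(den)                      # sqrt(pi)  -> not 1, so Griffiths' form alone assumes you normalized first
```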
{ "language": "en", "url": "https://physics.stackexchange.com/questions/65779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can one of Newton's laws of motion be derived from the other Newton's laws of motion? Can one derive Newton's second and third laws from the first law, or the first and third laws from the second law, or the first and second laws from the third law? I think Newton's laws of motion are independent of each other. They cannot be derived from one another. Please share your thoughts.
Newton's laws of motion cannot be derived from each other. They are the building blocks of Newtonian mechanics, and if fewer were needed, Newton would simply have formulated fewer. The first law postulates the existence of an inertial reference frame in which an object moves at constant velocity if the net force acting on it is zero. Although it might seem you can derive it from the second law (if the net force is zero, there is no acceleration and the velocity is constant), in fact both the second and third laws assume that the first law is valid. If an observer is in a non-inertial reference frame, she will observe that the second and third laws are not valid (when you sit in an accelerating car, the Earth accelerates in the opposite direction without any force acting on it). You also cannot derive the second law from the first one because all you know from the first law is that when an object accelerates, there is a force acting, but the first law says nothing about the relation between the force and the acceleration. That's what the second law is for, to say that there is a linear relationship. The third law adds something more to the first and second laws. It deals with interactions and states that two bodies exert equal but opposite forces on each other. That is something you cannot see from the first or second law, and similarly, there is no way to use this to derive the second law (you cannot derive the first law because that is assumed to be valid in order to postulate the third law).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
A Musical Pathway Using a small number of sound emitters, could you create a room where certain nodes emitted particular tones, but no meaningful sound was heard anywhere else. So, for example, by walking down a certain path, you could hear the tones for "Mary Had a Little Lamb." Is there a generalized algorithm to make particular paths for particular tone sets?
I suppose you could use destructive interference and set up speakers in just the right positions for it to work, but I also assume the calculations needed to achieve it would be complicated (luckily, you're not asking for that). It should be possible in theory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 4 }
Physical interpretation of different boundary conditions for heat equation When solving the heat equation, $$ \partial_t u -\Delta u = f \text{ on } \Omega $$ what physical situations are represented by the following boundary conditions (on $\partial \Omega$)? * *$u=g$ (Dirichlet condition), *$n\cdot\nabla u = h$ (Neumann condition), *$n\cdot\nabla u = \alpha u$ (Robin condition), *$n\cdot\nabla u = u^4-u_0^4$ (Stefan-Boltzmann condition). Are there other common physical situations where another boundary condition is appropriate?
Different boundary conditions represent different models of cooling. * *The first one states that you have a constant temperature at the boundary. This can be considered as a model of an ideal cooler in good contact, having infinitely large thermal conductivity. *The second one states that we have a constant heat flux at the boundary. If the flux is equal to zero, the boundary condition describes an ideal heat insulator for the heat diffusion problem. *Robin boundary conditions are the mathematical formulation of Newton's law of cooling, where the heat transfer coefficient $\alpha$ is utilized. The heat transfer coefficient is determined by details of the interface structure (sharpness, geometry) between the two media. This law describes quite well the boundary between a metal and a gas and is good for convective heat transfer. http://www.ugrad.math.ubc.ca/coursedoc/math100/notes/diffeqs/cool.html *The last one reflects the Stefan-Boltzmann law and is good for describing the heat transfer due to radiation in vacuum.
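To make these concrete, here is a rough sketch (Python, explicit finite differences for a 1D rod; all numbers are invented for illustration, not taken from the answer) of how each condition is imposed at the right end, where the outward normal is $+x$, so $n\cdot\nabla u = \partial u/\partial x$ there:

```python
import numpy as np

N, dx, dt, steps = 51, 0.02, 1e-4, 2000   # dt/dx^2 = 0.25, so the explicit scheme stays stable
u = np.ones(N)                            # rod starts uniformly hot

bc = "robin"                              # "dirichlet", "neumann" or "robin"
g, h, alpha = 0.0, 0.0, -5.0              # made-up boundary data (alpha < 0 in the sign
                                          # convention u_x = alpha*u means heat leaves the rod)

for _ in range(steps):
    u[1:-1] += dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])   # interior: u_t = u_xx
    u[0] = 1.0                            # left end held at a fixed hot temperature
    if bc == "dirichlet":                 # u = g: ideal cooler clamping the temperature
        u[-1] = g
    elif bc == "neumann":                 # u_x = h: prescribed flux (h = 0 -> insulated)
        u[-1] = u[-2] + h * dx
    elif bc == "robin":                   # u_x = alpha*u: Newton-type cooling
        u[-1] = u[-2] / (1 - alpha * dx)
```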
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Field At Magnetic Dipole Suppose I have a pure magnetic dipole $\mathbf{\vec m} = m\hat z$ located at the origin. What is the magnitude of the field $|\vec B|$ as $r\to 0$? In other words, what is $\lim_{r\to 0}\frac{\hat{r}\cdot \vec{p}}{4\pi\varepsilon_0r^2}$? Is it just zero? $\infty$? Do I have to use some sort of quadrupole term?
The magnitude of the fields would go to infinity at zero. However, dipoles are an approximation, at large distances, of the fields created by a smaller object (e.g. a current loop). If you zoom in closer, the B field does not diverge.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does potential energy in a gravitational field increase mass? I was just taught (comments) that any type of energy contributes to the mass of the object. This must indeed include potential energy in a gravitational field. But here, things cease to make sense, have a look: * *I have an object at some distance $r$ from a radial source of gravitational field. *The potential energy is calculated like this: $E_p = m*a*r$ where $a$ is the gravitational acceleration and $m$ is the mass of your object. *But that means that the object is a bit heavier - because of its own potential energy - $m = m_0 + \frac{E_p}{c^2}$ ($m_0$ here is the mass without the potential energy) *That would mean that the gravitational force is a bit stronger at higher distances. Now, I do understand that the rules of physics are not recursive and the mass and force will be finite. But what is the correct approach to this situation? What is the correct equation for potential energy?
The total energy of the test mass in the Schwarzschild gravitational field depends on the time-like component of the metric tensor $g_{00}$. This is how the gravitational redshift is derived (see, for example, Hartle J B, Gravity). In this case for the test particle (at rest) with the rest mass $m$ it is possible to write $$E=mc^2 \sqrt{g_{00}}$$ as the energy of the test particle residing in the gravitational field (as measured by an observer located at the center of the gravity). This would lead to $$E=mc^2 \sqrt{1-\frac{2GM}{rc^2}}\approx mc^2-\frac{GMm}{r} $$ But such an energy measurement is highly dependent on the location of the observer. The expression, for example, would be different if the observer is located outside, far away from the gravitating body.
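For a sense of scale, here is a tiny numerical aside (standard textbook values for the Earth, assumed by me rather than taken from the answer):

```python
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.972e24     # kg, Earth's mass
r = 6.371e6      # m, Earth's radius
c = 2.998e8      # m/s

# Fractional reduction of E = m c^2 sqrt(1 - 2GM/(r c^2)) ~ m c^2 (1 - GM/(r c^2))
print(G * M / (r * c**2))   # ~ 7e-10: the size of the correction at Earth's surface
```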
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
Guitar strings and temperature I am investigating Mersenne's law with a guitar by varying tension (hanging weights) and string length. Will a temperature change (room temperature to ~4°C) affect the frequency noticeably? If so, is the string oscillating differently or is the change due to a variation of the speed of sound? The strings have a free end so the contraction of the string will not increase tension. Any help would be appreciated.
The speed of sound according to this chart would change by as much as ~8% (330.4/358.0). The air behaves closely to an ideal gas; therefore this kind of change would hardly change the pitch, let alone be noticeable to a human ear. However, the change in temperature would affect the stiffness and length of the string dramatically; this depends on what material you are using. Look at the difference of length vs temperature. Since you are using weights, your stress is not going to change, and the tension would remain the same. Try using this equation: But this is not the full picture, since you are actually using an open string with dampening. Though with those speeds it should be well approximated by a closed string. You can try using: $$f_{\text{open}} = \frac{\sqrt{\frac{T}{m/2}}}{2L} \cdot \text{constant}$$ And tune the constant to fit your current parameter space. Source with calculator (The calculator though would not work in your case because one of your ends is open).
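For the measurement side, here is what the standard fixed-fixed string relation, $f = \frac{1}{2L}\sqrt{T/\mu}$ (the usual Mersenne's-law form, not the exact formula suggested above), predicts; all numbers are invented for illustration:

```python
import math

def fundamental(T_newton, mu_kg_per_m, L_m):
    """Fundamental frequency of an ideal string fixed at both ends."""
    return math.sqrt(T_newton / mu_kg_per_m) / (2 * L_m)

mu, L = 5e-4, 0.65            # assumed linear density (kg/m) and vibrating length (m)
for T in (50, 60, 70):        # hanging-weight tensions in newtons
    print(T, round(fundamental(T, mu, L), 1), "Hz")
# Doubling T raises f by sqrt(2); halving L doubles f; temperature enters only through mu and L.
```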
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What happens when a photon hits a beamsplitter? Yesterday I read that we can affect the path and the 'form' (particle or wave) of a photon after the fact (Wheeler's delayed choice experiment). Part of what is puzzling me is the beam-splitter. Are the individual photons actually being split into two new photons of lesser energy? This question implies that you cannot split a photon but it seems that beam splitters do exactly that.
Very short and "axiomatic" answer: You indeed can "split" one particle. In QM particles are treated as "wave functions"; maybe it will be easier for you to imagine a splitting wave. However, only at the point when the photon is detected is the particle measured at one point in space. This is the very foundation of QM and I agree that it's hard to grasp the concept.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 2 }
Circuit Loop Law Doubt Consider a circuit with a solenoid/inductor, a resistor and a battery. Books say that $\Sigma \Delta V=0$ around a closed loop. That means the work done by the electrostatic field per unit charge is $0$ around a closed loop. Now, as we pass through a solenoid, $\Delta V= -L\frac{di}{dt}$. Suppose I take a charge $i\,dt$ through it; the total work done by me against the field works out to $\frac{1}{2}Li^2$. Then it is said this energy is stored in the magnetic field and not the electrostatic field. Then how are we using the loop law, when $\Delta V$ across an inductor is actually because of work done by the magnetic field, while the loop law holds for work done by the electrostatic field only? If there is such an electrostatic field that does the same work as the magnetic field, then $\frac{1}{2}Li^2$ must also be stored in the electrostatic field in the conductor.
"Books say that ΣΔV=0 around a closed loop." KVL holds only if the magnetic flux linking the circuit is unchanging. In ideal circuit theory, it is assumed that circuit elements are ideal lumped elements and the self inductance of the circuit is zero. In other words, we assume that the dimensions of the circuit and circuit elements are arbitrarily small and the electric and magnetic fields associated with a circuit element are confined to that element. Without these assumptions, KVL is only approximate. "When ΔV across an inductor is actually because of work done by the magnetic field but the loop law holds for work done by the electrostatic field only." The energy stored in the magnetic field of an inductor is unrelated to the voltage, at any instant, across the inductor. For a steady current, the voltage across the inductor is zero while the energy stored is proportional to the square of the current. The voltage across the inductor, at any instant, is proportional to the time rate of change of the current through the inductor. Since the power associated with a circuit element is given by the product of the voltage across and the current through that element, work is done only when the current through the inductor is changing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of all forces Let us glue together these two images, where we get a closed-loop thrust of water. Force $F_3$ has the $-x$ direction and force $F_2$ has the $x$ direction. What is the sum of all forces? Can it be more than zero? The speed of the water is constant. The angles are the same. The half circle is not exactly circular at the ends due to the angles. One more subquestion: what if the speed of the water is very high and we have quite a big amount of centrifugal force?
Newton's second law was originally formulated as $\Sigma F = dp/dt$. Here $p$ is momentum, which equals mass multiplied by velocity, and $\Sigma F$ is the sum of the forces. Although $\Sigma F$ is usually expressed as $ma$, to which it is mathematically equivalent (for constant mass), the original form is actually more descriptive, because it shows that the net force is the change in momentum (how the object's motion changes, if at all) divided by the change in time. Force, like momentum, is a vector, and as a vector it has direction. If two vectors of opposing direction and equal magnitude are added, the result is always zero, no matter what the shape of the loop is. If the change in momentum is zero, then the net force is likewise zero.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Generalizing a relativistic kinematics formula for spatial-acceleration dependence I'm starting from this expression $$ \alpha dt = \gamma^3 dv $$ where $\alpha$ is the proper acceleration of a point particle, $dv$ and $dt$ are coordinate differentials of velocity and time, and $\gamma$ is the relativistic factor of the particle subject to the acceleration. If $\alpha$ is constant, one arrives at the usual expression: $$ t_f - t_0 = \frac{1}{\alpha} (\frac{v_f}{\sqrt{1-\frac{v_f^2}{c^2}}} - \frac{v_0}{\sqrt{1-\frac{v_0^2}{c^2}}}) $$ Now, I have some dependence of the acceleration on position, $\alpha(x) = f(x)$, and I'm not sure how to integrate it in order to obtain a similar expression relating time, velocity and position. For instance, I tried the following for the left-side differential: $$ \alpha\,dt = \alpha(x) \frac{dt}{dx}\,dx \;\Rightarrow\; \int \alpha\,dt = \int{ \frac{ \alpha(x) }{v(x)}\,dx } $$ and leaving the right-hand side untouched. When I do this I get a weird integral with velocity on both sides, and I'm not sure how to continue. Update: after a while of staring at this, I noticed an error I was making, and actually the second derivative of position with respect to proper time should look like $$\frac{d^2 x}{d \tau^2} = \gamma^4 \frac{d^2 x}{dt^2}$$ So far so good, but the approach in the comments doesn't seem to work. I try a simple force law, $\alpha(x) = -k x^3$, but it is not clear how to proceed from $$ -k x^3 = \gamma^4 \frac{d^2 x}{dt^2}$$ $$ -k x^3 (1 - 2 \frac{1}{c^2} (\frac{dx}{dt})^2 + \frac{1}{c^4} (\frac{dx}{dt})^4 ) = \frac{d^2 x}{dt^2}$$ It looks like a (nonlinear) differential equation. I just want to be sure I'm on the right track. Is this what it takes to solve this kind of problem? Or should some numerical integration/summation be enough? What throws me off in particular is that if I replaced $-k x^3$ with just $\alpha_0$ (constant acceleration case), the $\gamma^4$ would still look pretty ugly and I wouldn't know how to solve the problem even in that case using the above expression, when we already know that a closed formula exists
In your equation $\alpha dt = \alpha(x) \frac{dt}{dx} {dx}$, just multiply each side by v before you integrate.
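As a quick sketch of that suggestion (using the same symbols as the question): with $v = dx/dt$, $$\gamma^3\,dv = \alpha(x)\,dt \;\;\Rightarrow\;\; \gamma^3 v\,dv = \alpha(x)\,\frac{dx}{dt}\,dt = \alpha(x)\,dx,$$ and since $\frac{d}{dv}\left(\gamma c^2\right) = \gamma^3 v$, integrating both sides gives $$c^2\left(\gamma_f - \gamma_0\right) = \int_{x_0}^{x_f} \alpha(x)\,dx,$$ which relates velocity to position; the elapsed coordinate time can then be recovered from $dt = dx/v(x)$.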
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Action Reaction when I push a trolley? I tried to explain how those forces work but I can hardly figure it out. I exert a force on the trolley, and there will be a force from the trolley on me as well. This is Newton's third law. But why does the trolley move? Where does the force come from?
The trolley only moves because of forces acting on it. The reaction force acts on you, not the trolley. It is the force of you pushing on the trolley that makes it move. Learn about 'free body diagrams'.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Maximum angular velocity to stop in one rotation with a known torque I have an object I can rotate with a given torque. I would like to stop applying torque once I've reached a defined maximum rotational speed. The maximum rotational speed should be defined so that applying maximum torque will stop the rotation of the object within one rotation. If I know my torque and moment of inertia, how can I find the maximum rotational velocity to allow me to stop the object in one rotation? Time is whatever is needed. I've tried finding the angular acceleration required to stop the object, but that leaves me with the time variable. Of the equations I've tried, I'm left with a time variable as well as the maximum angular velocity.
Building off of Zen's response, the energy will be $E_r = \frac{1}{2}I\omega^2$. The work done in one rotation is $\tau\Delta\theta$. These two terms are equivalent in your case. I.e. you will have the following expression $$ E_r = \frac{1}{2}I\omega^2 = \tau_\text{max} \Delta\theta$$ $$ \omega_\text{max} = \sqrt{\frac{2\tau_\text{max}\Delta\theta}{I}}$$ You're treating your $\Delta\theta$ as $2\pi$, for one full rotation, hence: $$\omega_\text{max} = \sqrt{\frac{4\pi\tau_\text{max}}{I}}$$ Where $I$ is the moment of inertia of your object. $\omega$ is the angular velocity. $\tau$ is your torque.
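A small numerical sketch of that formula; the torque and moment of inertia below are made-up placeholder values, not anything from the question:

```python
import math

def max_angular_velocity(torque, inertia, delta_theta=2.0 * math.pi):
    """Largest angular speed the given torque can stop within delta_theta radians."""
    # Work-energy balance: 0.5 * I * w^2 = torque * delta_theta
    return math.sqrt(2.0 * torque * delta_theta / inertia)

# Hypothetical example: 5 N*m of braking torque on a 0.2 kg*m^2 flywheel
print(max_angular_velocity(torque=5.0, inertia=0.2))  # ~17.7 rad/s
```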
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is play-dough liquid or solid? At room temperature, play-dough is solid(ish). But if you make a thin strip it cannot just stand up on its own, so is it still solid? On a more general note, what classifies or differentiates a solid from a liquid?
Play-Doh is mostly flour, salt and water, so it's basically just (unleavened) dough. There are a lot of extra components like colourings, fragrances, preservatives etc, but these are present at low levels and don't have a huge effect on the rheology. The trouble with saying it's basically just dough is that the rheology of dough is fearsomely complicated. In a simple flour/salt/water dough you have a liquid phase made up of an aqueous solution of polymers like gluten, and solid particles of starch. So a dough is basically a suspension of solid particles in a viscous fluid. To make things more complicated the particles are flocculated, so you end up with a material that exhibits a yield stress unlike the non-flocculated particles in e.g. oobleck. At low stresses dough behaves like a solid because the flocculated particles act like a skeleton. However the bonds between flocculated particles are weak (they're only Van der Waals forces) so at even moderate stresses the dough flows and behaves like a liquid. Dough, and Play-Doh, are best described as non-Newtonian fluids.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/66941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 1 }
Which quantity gives the resistance of a component? From a current vs potential difference graph, we can obtain the value of the resistance of the component. There are books that say the inverse of the gradient is the resistance, and also books that say the ratio $\frac{\text{potential difference, }V}{\text{current, }I}$, which comes from the coordinates of a point on the graph, gives the resistance. I tend to believe the second method (coordinates) gives the true value of the resistance. The reason is that when we discuss a gradient, we are talking about the change of one quantity with respect to another, e.g. acceleration is the change of velocity with respect to time. Resistance is NOT the change of potential difference with respect to current. Resistance is the ratio of potential difference to current, according to Ohm's law. This is frustrating because I can find official solutions to internationally certified examinations also saying that the gradient method is the correct one. So can someone who is very knowledgeable in this area provide the true answer? Students need to be taught the correct method. Update: The two ways to calculate the resistance from an I-V graph are: * *Take the inverse of the gradient. In the diagram, it would be $\frac{10-1}{15-5}=\frac{9}{10} \Omega$. *Take the ratio $\frac{V}{I}$, which in this case is $\frac{10}{5} = 2 \Omega$. These two methods yield different results, but both give the same dimension. Which one is the correct method?
The rate of change of voltage with respect to current is known as the dynamic resistance. The ratio of voltage to current at a point is the static resistance. For an ohmic circuit element, the static resistance and the dynamic resistance are equal. For non-linear circuit elements, the dynamic resistance is more useful as one can then linearize the element about an operating point and, for small variations about the operating point, the element behaves as a resistor with resistance equal to the dynamic resistance.
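A small sketch of the distinction for a non-linear element; the diode model and all the numbers here are illustrative assumptions rather than anything from the question:

```python
import math

I_S = 1e-12   # assumed saturation current (A)
V_T = 0.026   # thermal voltage near room temperature (V)

def diode_current(v):
    """Shockley diode equation, ideality factor of 1 assumed."""
    return I_S * (math.exp(v / V_T) - 1.0)

v0 = 0.65                          # assumed operating point (V)
i0 = diode_current(v0)
static_r = v0 / i0                 # ratio V/I at the operating point
dynamic_r = V_T / (i0 + I_S)       # dV/dI obtained by differentiating the model

print(static_r, dynamic_r)         # the two "resistances" differ markedly
```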
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Where can I find the full derivation of Helfrich's shape equation for closed membranes? I have approximately 10 papers that claim that, from the equation for shape energy: $$ F = \frac{1}{2}k_c \int (c_1+c_2-c_0)^2 dA + \Delta p \int dV + \lambda \int dA$$ one can use "methods of variational calculus" to derive the following: $$\Delta p - 2\lambda H + k(2H+c_0)(2H^2-2K-c_0H)+2k\nabla^2H=0$$ But I'm having a lot of trouble tracking down the original derivation. The guy who did it first was Helfrich, and here's his and Ou-yang's paper deriving it: http://prl.aps.org/abstract/PRL/v59/i21/p2486_1 . However, they don't show an actual derivation, instead saying "the derivation will appear in a full paper by the authors" or something like that. Yet everybody cites the paper I just linked for a derivation. Does anybody know a source that can derive this, or can give me some hints to figure it out myself? To be honest I can't even figure out how to find the first variation. Edit: So, after some careful thought and hours and hours of work and learning, I realized that the answer that got the bounty was wrong. The author stopped replying to my messages after I gave him bounty.... thanks guys. That said, I've almost got it all figured out (in intense detail) and will post a pdf of my own notes once I'm done!
In the paper by Lin et al. (2003) Progress in Theoretical Physics they mention in the abstract that they extend the work of Ou-yang and Helfrich by expanding the bending energy to fourth order. That means you should be able to work out the lower order solutions from their paper as well.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
Is heat flux density and heat flux the same thing? Are heat flux and heat flux density the same thing, while electric flux density and electric flux are not the same thing? This confuses me, since we compare Fourier's law with Ohm's law. Here is a statement from Wikipedia: "To define the heat flux at a certain point in space, one takes the limiting case where the size of the surface becomes infinitesimally small." Is heat flux defined at a point or on a surface? I have never found a proper definition of heat flux or heat flux density. As a mathematical concept, flux is represented by the surface integral of a vector field, $$\Phi_F = \iint_A \mathbf{F}\cdot \mathrm{d}\mathbf{A}$$ where $\mathbf{F}$ is a vector field, and $\mathrm{d}\mathbf{A}$ is the vector area of the surface $A$, directed along the surface normal. The heat flux density is often denoted $\vec{\phi_q}$, and we integrate it over the surface of the system to get the heat rate, but we integrate the $\mathbf{E}$-field to get the electric flux? Thanks.
If we take the definition of heat flux given here seriously, then heat flux is defined as a vector field $\vec\phi$ with units of energy per unit time, per unit area. At every point $\vec x$ in space, the vector $\vec\phi(\vec x)$ tells you the direction and magnitude of heat flow in a neighborhood of that point. In particular, if we consider some small two-dimensional surface element $d\vec A$ containing $\vec x$, then $$ \vec\phi(\vec x) \cdot d\vec A $$ will tell us the amount of energy per unit time flowing through that surface element. Notice that here "flux" is being used to describe a vector field, not a scalar as in the electric flux of EM. Perhaps this is rather bad terminology for that reason.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What could magnetic monopoles do that electrically charged particles can't? I understand the significance to physics, but what can a magnetic monopole be used for, assuming we could free them from spin ice and put them to work? What would be a magnetic version of electricity? EDIT Sorry this wasn't clear. The question mixes the quasiparticle and the theoretical elementary particle, based on some similarities between the two. I am more interested in the quasiparticles and whether they have properties that are in some way similar to the elementary-particle version: There are a number of examples in condensed-matter physics where collective behavior leads to emergent phenomena that resemble magnetic monopoles in certain respects, including most prominently the spin ice materials. While these should not be confused with hypothetical elementary monopoles existing in the vacuum, they nonetheless have similar properties and can be probed using similar techniques. http://www.symmetrymagazine.org/breaking/2009/01/29/making-magnetic-monopoles-and-other-exotica-in-the-lab/ "The Anomalous Hall Effect and Magnetic Monopoles in Momentum Space". Science 302 (5642) 92–95. "Inducing a Magnetic Monopole with Topological Surface States" "Artificial Magnetic Monopoles Discovered" and comments in articles about quasi-particles like this: "Many groups worldwide are currently researching the question of whether magnetic whirls could be used in the production of computer components." These led me to wonder what applications they might have. Mixing these two concepts is probably a bad way to present this question. A true magnetic monopole would affect protons whereas the artificial ones don't. What I don't understand is what advantages an artificial magnetic monopole would have. And does this relate to some theoretical aspect of a true monopole?
Magnetic monopoles are thought to be carriers of magnetic charge, analogous to the way electrons carry electric charge. If you could generate and use these particles you could expand the number of ways we can manipulate electromagnetic fields. Two ideas that spring to mind immediately are the creation of a DC transformer that does not require superconductors and the ability to create some very large magnetic fields in confined spaces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
Membrane-reversed black holes and their relationship to white-holes We usually think of white holes as 'thermodynamically reversed black holes', and this kind of membrane has not been observed in our universe. However, there is another kind of 'topologically reversed black hole' which we know exists: our cosmological event horizon (CEH). It is reversed in the sense of the membrane direction through which light cannot come out: the CEH allows outsiders to look in, but doesn't allow insiders to look out. Question: How does GR describe, in general, reversed-orientation black holes like the example of our CEH? Please discuss the possibility that in exact GR solutions where 'white holes' exist, we might be wrongly interpreting the solution, and what we should rather expect is a membrane-inverted black hole.
A cosmological horizon isn't the same thing as a black hole horizon--the black hole horizon is an essential feature of the spacetime that is located where it is due to special geometry. A cosmological horizon is an observer-dependent phenomenon that describes when two observers are out of causal contact with each other. The only sense in which white hole solutions are the same as cosmological horizons is that both are past trapping horizons, which you have already described qualitatively.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is a proton collision (collisions like in the LHC) visible to the human eye? I was curious if a proton collision is visible to the human eye. (This might sound like a really basic question and forgive me if it is. I am very inexperienced in Physics and just wanted an answer to my curiosity)
A spinthariscope is a simple device consisting of a minute amount of an alpha-emitter and a zinc-sulphide screen. It often includes a magnifier to view the screen. The amazing part is that a dark-adapted eye can clearly see the flashes of light produced as individual alpha-particles hit the screen! There are probably multiple visible-light photons from each collision, but the eye can detect the light produced by the energy of a single nuclear decay.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 5, "answer_id": 4 }
How does energy convert to matter? To my understanding, matter and energy are one and the same. Shifting from $E$ to $M$ in Einstein's famous equation requires only a large negative acceleration. If $M$ really is $E/c^2$, does that make matter the solid state of energy? I've read a lot about positron-electron collisions at high energies creating larger particles, and there is obvious matter conversion in fusion and fission reactions, but I can't find anything describing the physics of the conversion from energy to matter, rather than the interactions of what is already matter. Specifically, the thing I'm getting hung up on is the reason energy would take on a solid state in the first place. If energy is represented by waves, how does it become particles? If gravity is determined by mass, and mass is nothing more than static energy, does that make gravity a static-electromagnetic force?
The thing about energy becoming particles is not entirely true. Quantum mechanics explains that particles themselves are waves. The energy that forms mass, however, is not a part of the particles themselves. For subatomic particles such as electrons and quarks, their mass is caused by their interaction with the Higgs field. The energy itself is stored in the Higgs field, much like how electric potential energy is stored in electric fields. As for gravity, gravity is not determined by mass. General relativity explains that gravity is the curvature of spacetime caused by the presence of energy, be it in the form of mass, electric potential energy, or electromagnetic radiation. So gravity is not a static force, rather, gravity is the curvature of spacetime caused by the presence of energy.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to cut a stone on a White Dwarf? I've heard that white dwarf stars are extremely dense and hard. So, if I had a piece of white dwarf matter, would it be possible to cut it (or otherwise) into a custom shape? How could one do that?
The "surface" of a white dwarf is a mixture of hydrogen, helium and perhaps a trace of heavier elements. It is never (read this as many, many times the current age of the universe) going to cool down enough to solidify. Solids exist inside the approximately isothermal interiors$^{1}$ of white dwarfs, at densities $\geq 10^{9}$ kg/m$^{3}$, once temperatures drop below a few million K. In typical white dwarfs this probably occurs within a billion years. The white dwarfs freeze from the centre outwards, because the melting point increases with density. The typical pressure contributed by the degenerate electron gas in a white dwarf interior at these densities is $10^{23}$ Pa. So, if you want to preserve your bit of crystallised white dwarf that you have somehow mined from the interior, then you have to work out how to stop it exploding. And it is not just a matter of letting it cool down - the degeneracy pressure is independent of temperature. So this high density material simply is not stable unless you can work out a way of constraining it. The problem is similiar, though not quite as extreme, to that of constraining neutron star material. So in summary, crystalline material at white dwarf densities will have such a high internal energy density (due to degenerate electrons), that it would be incredibly difficult to constrain or manipulate. $^{1}$ Degenerate electrons have extremely long mean free paths and so the thermal conductivity in a white dwarf interior is very high.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Why does the Sun feel hotter through a window? I have this big window in my room that the Sun shines through every morning. When I wake up I usually notice that the Sunlight coming through my window feels hot. Much hotter than it normally does when you're standing in it outside. I know if the window were a magnifying glass that it would feel hotter because it is focusing the Sun's rays, but I'm pretty sure that my window doesn't focus the rays, otherwise things outside would appear distorted. So my question is, why does Sunlight always feel hotter when it shines on you through a window than when it shines on you outside? I thought it might simply be a matter of convection, but anecdotal evidence would seem to say it still feels hotter even if you had a fan blowing on you. Am I just crazy?
Whenever you are near a window with solar radiation coming through, you will feel hotter even if the room you are in is cooled (let's say to 65°F), and it's not your AC's fault. That's why, even if you are near a fan like you said, the heat gain from the sun still wins. From the ASHRAE Fundamentals Handbook 2017: "Direct solar load has a major influence on perceptions of comfort. Transmitted radiation often causes discomfort if it falls directly on the occupant. A person sitting near a window in direct solar radiation can experience heat gain equivalent to a 20°F rise in MRT (mean radiant temperature) (Arens et al. 1986)." The solution that would best fit this scenario would be investing in shading.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
Integrable equations of motion Suppose that a force acting on a particle is factorable into one of the following forms: $$\text{a)}\,\,F(x_{i},t)=f(x_i)g(t)\,\,\,\,\,\,\,\text{b)}\,\,F(\dot{x}_{i},t)=f(\dot{x}_{i})g(t)\,\,\,\,\,\,\text{c)}\,\,F(x_{i},\dot{x}_{i})=f(x_i)g(\dot{x}_i)$$ For which cases are the equations of motion integrable? I know the answer is b, as this is an example. However, I don't feel that clear on why the other two aren't integrable. For instance, $$\text{c)}\,\,m\frac{d\dot{x}_{i}}{dt}=f(x_i)g(\dot{x}_i)$$ and if I do some manipulation with this equation... $$m\frac{d\dot{x}_{i}}{g(\dot{x}_{i})}=f(x_i)\,dt$$ Why does that fail to be integrable?
You can't integrate the right hand side because $f=f(x_i)$ and you've got a differential on $t$. As for a), if you rearrange terms, you can verify that $$m\frac{d\dot{x_i}}{f(x_i)}=g(t)\,dt$$ so that now you can't integrate the left hand side because $f$ depends on $x_i$ and you've got a differential on $\dot{x}_i$.
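For contrast, a quick sketch of why case b) does separate cleanly (same notation as the question): $$m\frac{d\dot{x}_{i}}{dt}=f(\dot{x}_{i})\,g(t)\;\;\Rightarrow\;\;m\int\frac{d\dot{x}_{i}}{f(\dot{x}_{i})}=\int g(t)\,dt,$$ where each side now involves only the variable appearing in its own differential, so both integrals can be carried out directly.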
{ "language": "en", "url": "https://physics.stackexchange.com/questions/67883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How general relativity gets to an inverse-square law I understand that a general interpretation of the $1/r^2$ interactions is that virtual particles are exchanged, and to conserve their flux through spheres of different radii, one must assume the inverse-square law. This fundamentally relies on the 3D nature of space. General relativity does not suppose that zero-mass particles are exchanged. What is the interpretation, in GR, of the $1/r^2$ law for gravity? Is it some sort of flux that is conserved as well? Is it a postulate? Note that I am not really interested in a complete derivation (I don't know GR enough). A physical interpretation would be better. Related question: Is Newton's Law of Gravity consistent with General Relativity?
I found explanations for this type of question at http://settheory.net/cosmology and http://settheory.net/general-relativity. They are better than "The Meaning of Einstein's Equation" (John Baez). In particular: the argument is applied directly to an important example (universal expansion); the expression is simpler (relating 1 component of the energy tensor to 3 components of the Riemann tensor); the relation between energy and curvature is not only expressed but also justified; and both the (diagonal) space and time components of the relation are expressed and justified, showing their similarity "like a coincidence".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Why do stars flicker? Why do stars flicker and planets don't? At least this is what I've read online and seen on the night sky. I've heard that it has to do something with the fact that stars emit light and planets reflect it. But I don't get it, isn't this light, just "light"? What happens to the reflected light that it doesn't flicker anymore? I was thinking that it has to do something with Earth's atmosphere, different temperatures or something (if this has any role at all).
Distant lamps, much closer than stars (say 2 to 3 km away, whether polychromatic or monochromatic), also twinkle. This cannot be related to changes of the index of refraction due to temperature variations, since the frequency of this twinkling does not vary strongly with air turbulence. The accepted explanation for this is: light is formed of photons (light particles) which spread out from the source in all directions; thus, as the distance increases, the spherical surface area increases with the square of the radius, hence the photon flux decreases, and the photons reaching the eye become less frequent. At the times when no photons enter the eye, the image of the source disappears. Moreover, it is known from astronomy that forming an image of a very far celestial object needs a long time and large lenses or mirrors. This shows that in order to form an image, a sufficient number of photons is needed, which requires a long time to collect. By Sami Kheireddine
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Why doesn't an electrometer's capacitance influence the measurement of voltage? I've read on the subject in several books, and none of them mentions whether we can neglect the electrometer's influence on the measurement of the voltage or not. Maybe my question sounds a bit stupid, but I really can't understand why they do not address the fact that an electrometer has a certain capacitance, and when we connect its terminals to the desired points, the electrometer and the conductor become one equipotential conductor, so the capacitance no longer depends only on the conductor we're testing but also on the electrometer. And we know that $q=CV$, so it influences the voltage as well. So why are the measurements still fine? Can we really neglect the capacitance of an electrometer? Or am I completely wrong here? Thanks in advance.
For low-frequency measurements the capacitance in the circuit is much higher than the probe's capacitance (100 pF-10 uF+ vs 10 pF). The slight temporary effect of connecting the meter is quickly compensated for by the circuit, because you're talking about a very small amount of charge that is pulled off in a quick transient and then replaced, as Art Brown pointed out. Once you move to higher frequencies this becomes a much bigger problem. For example, at 1 GHz your probe may only have 10 pF of capacitance, but the circuit you are measuring probably only has 0.5 pF. The addition of the probe would kill the 1 GHz signal by shunting it to ground. You could design an extremely high-resistance network with very little capacitance and inject a signal that would be filtered by the addition of the probe's capacitance (forming a low-frequency RC low-pass filter with the probe). However, due to the normally very high impedance of the meter, it is very hard to do this in any practical circuit. Most textbooks avoid mentioning these effects because they are really only present under high-frequency conditions.
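A rough sketch of that loading effect; the source resistance and probe capacitance below are assumed illustrative values:

```python
import math

R_source = 1e6     # assumed source resistance seen by the probe (ohms)
C_probe = 10e-12   # assumed probe/meter input capacitance (farads)

# The probe capacitance and the source resistance form a low-pass RC filter.
f_cutoff = 1.0 / (2.0 * math.pi * R_source * C_probe)
print(f_cutoff)    # ~16 kHz: signals well below this are barely affected
```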
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are particles in harmonic motion in normal modes? Why do we assume that in normal modes, particles oscillate in the form $\cos(\omega t)$? How do we know that the general motion of the particles can be expressed as a superposition of normal modes? In both French and Crawford, the assumption of harmonic motion is made without any proof; please help.
In most cases, this is related to an assumption of small displacements from equilibrium. Assume that the system is described by a potential function $V(s)$, where $s$ represents the coordinate(s) associated with the normal modes. Let $s_0$ represent the value of the coordinates in the equilibrium state. Taylor expanding the potential about this point yields $$ V(s) \approx V(s_0) + V'(s_0)(s-s_0) + (1/2) V''(s_0)(s-s_0)^2 + ... $$ The key feature is that we know $V'(s_0)=0$, since that is the definition of equilibrium. We can also ignore the first term since it is independent of the state of the system. Thus the resulting equation of motion takes the form $$ 0 = \ddot{ s } + \omega^2 s $$ with $\omega^2$ a function of $V''(s_0)$ and the masses/moments of inertia of the system, and with $s$ now measured from $s_0$. This equation has $\sin( \omega t), \cos(\omega t)$ as its solutions. Thus, simple harmonic motion is a generic feature of small oscillations about any mechanical equilibrium.
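To illustrate the superposition point numerically, here is a small sketch for two equal masses joined to the walls and to each other by identical springs; all the values are arbitrary placeholders. The eigenvectors of the stiffness matrix are the normal modes, and an arbitrary initial condition is just a sum of them:

```python
import numpy as np

m, k = 1.0, 1.0                       # assumed mass and spring constant
# Two masses, three springs (wall-m-m-wall): m * x'' = -K x
K = np.array([[2 * k, -k],
              [-k, 2 * k]]) / m

eigvals, eigvecs = np.linalg.eigh(K)  # eigvals are the omega^2 of the normal modes
omegas = np.sqrt(eigvals)             # sqrt(k/m) and sqrt(3k/m)

x0 = np.array([1.0, 0.0])             # arbitrary initial displacement, zero velocity
coeffs = eigvecs.T @ x0               # projection of x0 onto the normal modes

def x(t):
    # General motion = superposition of cosine normal modes
    return eigvecs @ (coeffs * np.cos(omegas * t))

print(omegas, x(0.0))                 # x(0) reproduces the initial condition
```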
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Uncertainty in path integral formulation In Feynman's path integral formulation, in order to calculate the probability amplitude, we sum up all the possible trajectories of the particle between the points $A$ and $B$. Since we know precisely that the particle will be at $A$ and $B$, does it mean that the uncertainty of the momentum is infinite?
If you are using non-relativistic quantum mechanics then yes, the momentum uncertainty is infinite. If you want to include Lorentz invariance you need to use quantum field theory, in which case you describe the evolution of a field with the path integral formalism and interpret particles as disturbances in the field.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is my boss wrong about our mechanical advantage from our pulley system? I work on a drilling rig as a roughneck and we had a lecture today (at the office) about mechanical advantage in pulley systems. Now, I know that my boss is well educated in oil drilling, but my instincts tell me that he may have this one wrong. A drilling rig works sort of like a crane in that it has a tall structure supporting a pulley system. There is a large winch permanently installed on the base platform, and the line then goes over the top of the structure (the crown of the derrick) and down through a floating sheave--this has a few wraps to give us more mechanical advantage. I am including pictures to help describe the situation. Here the picture shows the floating sheave (the blocks) which we use for almost all of our operations. Most importantly, we use it to pick up our string of pipe that is in the ground. As seen in this picture, the blocks hold the weight of the string of pipe. Now he told us that if the pipe gets stuck in the hole (maybe it snags something or the hole caves in), we lose all of our mechanical advantage. He said that is why the weight indicator will shoot up and go back down after it is freed. He said that when the pipe is snagged in the hole we are no longer dealing with a freely floating sheave, and that is what is required to have a mechanical advantage. I disagree with this because even if it is not free, there is still a mechanical advantage such that (say the normal mechanical advantage is 6 to 1) our pulling force is multiplied by 6. I would like somebody to confirm this for me. First picture taken from www.worldoils.com on June 21, 2013 Second picture taken from www.PaysonPetro.com on June 21, 2013
To put it simply: let's say the pipe weighs 1 ton and you have a 4-to-1 system; then you're lifting 1/4 ton. If the pipe weighed 2 tons you would be lifting 1/2 ton; although it is heavier, you still have a 4-to-1 pull, so you will need more force to lift it. So no matter the weight or strain, it remains a 4-to-1 advantage.
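As a quick sketch in symbols (assuming ideal, frictionless sheaves, which is an idealization rather than anything stated above): if the travelling block hangs from $n$ lines, static equilibrium gives a line tension of $T = W/n$ for a hook load $W$, whether the load is moving freely or snagged. A snag raises $W$ (and hence the weight-indicator reading), but the ratio $W/T = n$ is set by the string-up geometry, not by whether the block is free to move.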
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 5 }
How can we describe the electrons of multi-electron atoms (i.e. not Hydrogen) when equations/analytic solutions only exist for Hydrogen? I've been digging into emission spectra of different elements and found that such things as the Rydberg equation, Bohr's model, and quantum mechanics can only fully describe the single electron in the Hydrogen atom. How did we then make the leap to s,p,d,f shells of multi-electron atoms? How accurate is our analysis of these more complicated elements? Rydberg Equation (side-note: Is this an empirical 'data-fitting' equation? What's the significance of that?) $$\frac{1}{\lambda}=R_H\left( \frac{1}{n_1^2}-\frac{1}{n_2^2}\right)$$ (Emission spectra were shown for Hydrogen, Helium, Iron and Potassium.)
The only atoms for which the Schrodinger equation has an analytic solution are the one electron atoms i.e. H, He$^+$, Li$^{2+}$ and so on. That's because with more than one electron the forces between electrons make the equation too hard to solve analytically. However, over the 90 or so years since Schrodinger proposed his equation a vast array of numerical methods for solving it have been developed, and of course modern computers are so powerful they can calculate the (electronic) structure of any atom with ease. This applies even to heavy atoms where relativistic effects need to be taken into account. The Rydberg equation is an approximation because it does not take the electronic fine structure into account. However it's a pretty good approximation. It works because for a one electron atom the energy of the orbitals (ignoring fine structure) is proportional to 1/$n^2$, where $n = 1$ is the lowest energy orbital, $n = 2$ is the second lowest and so on.
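As a toy illustration of the "numerical methods" point, here is a one-dimensional finite-difference sketch for the $l=0$ radial problem of hydrogen, i.e. the one case we can check against the analytic $-1/(2n^2)$ levels; atomic units with $\hbar = m_e = e = 1$ are assumed, and a real multi-electron calculation is of course far more involved than this:

```python
import numpy as np

# Solve -1/2 u'' - u/r = E u on a uniform grid with u(0) = u(L) = 0
N, L = 1000, 100.0
h = L / (N + 1)
r = np.arange(1, N + 1) * h
V = -1.0 / r

# Finite-difference Hamiltonian: tridiagonal kinetic part plus diagonal potential
main = 1.0 / h**2 + V
off = -0.5 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print(E)   # should approach -1/(2 n^2) = -0.5, -0.125, -0.0556 as the grid is refined
```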
{ "language": "en", "url": "https://physics.stackexchange.com/questions/68995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
To what extent are quantities fundamental? Arguably the most well-known and used system of units is the SI-system. It assigns seven units to seven ‘fundamental’ quantities (or dimensions). However, there are other possible options, such as Gaussian units or Planck units. Until recently, I thought that these different systems differed only in scale, e.g. inches and metres are different units, but they both measure length. Recently though, I discovered that it is not simply a matter of scale. In the Gaussian system for example, charge has dimensions of $[mass]^{1/2} [length]^{3/2} [time]^{−1}$, whereas in the SI-system it has dimensions of $[current] [time]$. Also, I have always found it a bit strange that mass and energy have different units even though they are equivalent, but I find it hard to grasp that a quantity can be ‘fundamental’ in one system, and not in an other system. Does this mean that all ‘fundamental’ quantities are in fact arbitrary? Would it be possible to declare a derived SI-unit fundamental, and build a consistent system with more base units? What is the physical meaning of this?
The SI units are just a definition, and the units are related by equations. Take for example the speed of light, $c \approx 3\cdot 10^8\frac{m}{s}$. Now what does $\frac{m}{s}$ mean? You can take it as a parameter that is connected to other units by equations like the famous $E = m c^2$. Since only the equation is important and you have to define your unit somehow, you could also say that $c = 1$. What I did here is nothing else than to set $$\frac m s = \frac 1 {3\cdot 10^8}$$ You can always do this for the first unit you change, but you have to be careful if you change a second unit, as those units may be connected by an equation. Take again $E = mc^2 = m$, where I have set $c = 1$. Now there is one more independent unit: either mass or energy, which you can also choose as you like. As you can see, there is an infinite number of possibilities for choosing the units, but as people have to communicate, it is very good advice to keep to the standard units in the different fields of science. An important note is the following: we have set $c = 1$, and this means that length has the same unit as time. Take as an example a star $\Delta x = 100\, c \cdot s$ away. Here $c \cdot s$ is light-seconds. As we have set $c = 1$, you can clearly see that $\Delta x = 100\, s$, which is not very intuitive, but you have to keep in mind that $c = 1$, and with this you can always change from metres to seconds with $s = 3\cdot 10^8\, m$ in this system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/69066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Why can't we obtain a Hamiltonian by substituting? This question may sound a bit dumb. Why can't we obtain the Hamiltonian of a system simply by finding $\dot{q}$ in terms of $p$ and then evaluating the Lagrangian with $\dot{q} = \dot{q}(p)$? Wouldn't we then obtain a Lagrangian expressed in terms of $t$, $q$ and $p$? Why do we need to use $$H(t, q, p) = p\dot{q} - L(t, q, \dot{q})?$$ Or is it that, whatever the Lagrangian is, the method of finding $\dot{q}=\dot{q}(t,q,p)$ will give us that equation for $H$?
A fairly basic remark to make is that usually we can plainly identify $$L = T-U$$ where $T$ is the kinetic energy and $U$ is the potential energy, and $$H = T+U$$ Expressing these quantities for e.g. a Hooke-like spring (or any system where $U\neq 0$) would give you a problem with the sign of $U$ if you simply substitute the expression you find for $\dot{q}(p)$ into the Lagrangian.$^1$ So the Hamiltonian is definitely not just the Lagrangian with $\dot{q}$ expressed in terms of $p$. More mathematically expressed, the Hamiltonian is defined as the Legendre transform of the Lagrangian. (for some elaboration on the Legendre transformation - particularly in the context of the Lagrangian and Hamiltonian - see my answer here, as well as the other answers to that question) $^1$ Indeed, the Lagrangian for such a (1D) system would be $$L(q,\dot{q}[,t]) = \frac{m\dot{q}^2}{2} - \frac{kq^2}{2}$$ for which the canonically conjugate momentum is $$p = \frac{\partial L}{\partial \dot{q}} = m\dot{q}$$ and therefore $$\begin{align} H(q,p[,t]) &= m\dot{q}\cdot\dot{q} - \frac{m\dot{q}^2}{2} + \frac{kq^2}{2} \\ &= \frac{m\dot{q}^2}{2} + \frac{kq^2}{2} \\ H(q,p[,t]) &= \frac{p^2}{2m} + \frac{kq^2}{2}. \end{align}$$ Just inserting $p$ into the Lagrangian would yield $$\frac{p^2}{2m} - \frac{kq^2}{2}.$$ Note the sign difference of the second term.
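A small symbolic sketch checking that Legendre-transform algebra for the spring example above (just a verification of the sign point, using sympy):

```python
import sympy as sp

m, k, q, qdot, p = sp.symbols('m k q qdot p', positive=True)

L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2

p_of_qdot = sp.diff(L, qdot)                        # canonical momentum: m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_of_qdot), qdot)[0]  # invert: qdot = p/m

H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))
naive = sp.simplify(L.subs(qdot, qdot_of_p))        # plain substitution, for contrast

print(H)      # p**2/(2*m) + k*q**2/2
print(naive)  # p**2/(2*m) - k*q**2/2  (wrong sign on the potential term)
```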
{ "language": "en", "url": "https://physics.stackexchange.com/questions/69133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 2 }