snippet
stringlengths
143
5.54k
label
int64
0
1
What does it mean, "gravity is so strong that not even light can escape from a BH..."? This can physically interpreted IMHO that beyond the event horizon of the BH, light is phase transitioned (accelerated) to a FTL superluminal energy that breaks known physics. The inner of the BH therefore appears in our spacetime as "Nothing"! Exactly as the rest of vacuum space appears to us, basically as nothing. Someone could therefore infer that vacuum space we generally describe us nothing is actually something and considering the physics of BHs it points to the possibility that vacuum space could be a superluminal FTL (i.e. faster than light speed c) unknown type of energy. There is light, matter and vacuum in our universe. Seems to me that vacuum could be a different energy than matter and light, defying our known physics which we mastered over millennia and concern more about matter and light and their interactions. The vacuum is still a mystery today. Much to learn in the future. Maybe sci-fi Star Wars creators imagination was right after all when saying "...prepare for a jump to hyperspace!".
0
classical electromagnetism , from my understanding can be derived completely from coloumb's law , charge invariance, superposition principle and postulates of special relativity. Biot savart law and lorentz force law are all consequences of it . For derieving faraday's we need not do any experiments, flux rule : e=-d(phi)/dt (e is induced emf and phi is magnetic flux) can be proved from motional emf . From flux rule , it can be understood that emf is induced when area of conducting loop changes in constant magnetic field or magnetic feild is a function of time with constant area. As the flux rule rule is derived from no experimental information other than lorentz force law biot savart law ,we should be able to prove that changing magnetic feild induces emf and hence current is induced . It seems that magnetic feild cannot induce current because it acts on moving charges (which in our case is not moving).experiments shows that it is non conservative electric feild that induces current , but why do we need to do experiment because we already proved it in flux rule. My question is that this information of inducing emf was in flux rule , but it seems that we can not prove induced non conservative electric feild from basic priciples (coloumb's law , special relativity,charge invariance....)
0
The photoelectric effect is most probably seen when the incoming light has lower energy than the energy needed for both Compton scattering and pair production to happen. The probability of the photoelectric effect to occur also increases when the matter that light interacts with has a big atomic number and high atomic/electronic density such as Lead, Tungsten, and even better Uranium. Given the energy difference between Gamma and X-Rays, and the for high probability, photoelectric effect requiring low energetic photons to occur, how is it possible that X and Gamma rays are both prevented by the photoelectric effect that is observed in those materials? Is it that energy levels of photons that photoelectric effect has to occur on materials like Tungsten are on the high energy boundary of X-Rays and Low Energy boundary of Gamma Rays, and thus rest of the matter-photon interaction effects such as COMPTON and PAIR PRODUCTION are almost always on the GAMMA Spectrum then?
0
I was watching Susskind lectures in string theory. There he explains that open strings can both, split at any point, and also join at the ends when the ends touch at a single point. I have one question about each of these two processes. Is not the likelihood that the two ends of a string end up at the same spatial position of measure zero? Or the two ends do not need to really meet at the same point, but only be close and then they will be attracted to each other to make it closed? If and open string breaks at an arbitrary point, would not this create particles of arbitrary mass? as the rest mass is proportional to the length? but we know particle masses do not form a continuum. What am I thinking wrong?
0
If we shine a light that has less energy than the work function energy of the electron of an atom on a metal, the electron is not released but excited and electron gives off this excess energy as heat or collisions or light. We know that the intensity of incoming light does not affect whether the electron will be released from the metal or not, only the energy of the incoming photon determines it. So, Assume that we hit an electron with an energy lower than the work function energy so that the electron gets excited and builds up with energy. WHAT IF Before that excited electron releases its build -up energy as heat or light or collisions we hit it again with a light that again has an energy lower than work function energy, why isn't it released from the metal?
0
Current in a wire is defined as the amount of charge that passes through a cross-section of that wire in a single second. By this definition alone, it is clear that a current relies on the motion of some charged particle. I believe it is possible that electrons could transfer energy to each other in every direction and when the current starts flowing those energy transfers are more directional. This could lead to electrons always having the same velocity independently of the current. Is this the case? I came up with this wile trying to understand magnetic force around wire, if electrons are traveling in the same direction their spins would also be in the same direction, which could cause magnetic field similarly that would spinning ball in viscose liquid cause flow of that liquid.
0
In cubical homology you have to consider the group of degenerate cubics and use the group of cubes module degenerate cubes (Massey). If you not do that you get the wrong homology for one point space. In singular homology (simplices) every textbook explain you do not need to do that because you are going to get the same groups. It is easy to check that you get the correct homology groups for a point space without exclude degenerate simplices. But I would like to know a GENERAL PROOF of that. That is the equality of both groups in general order and spaces (excluding or not degenerate simplices). Because it seems that textbook writers consider that evident for I have seen the statement many times but no proof at all and I am not willing to take it by faith. I have read an old paper by Tucker available in internet by googling: Degenerate cycles bound. But I dont know how to apply that to singular homology Thank you very much
0
I have a set with unknown cardinality. The cardinality can be from a number a to a number b. How can I indicate the number a (or b)? The question in texts is: "What is the least value of n(A)" (A is a set with a few possible cardinalities) How can I properly express this in mathematical notation? I used the notation(If you call it a "notation") least of n(A) I have seen this question: Mathematical notation for the maximum of a set of function values The things that differ is that I need the notation to express the minimum (or max) number a function can give for a constant x, not the min (max) for a range of x Also, the set of all possible sets isn't given and is there to be found out by the question solver. (as it implies, each solver may generate a different one) n(A) is the cardinality of the set A I do not have a strong mathematical background. so please avoid very confusing notations. thanks in advance
0
Clearly particles individually pass through slits, be it a single or a double slit experiment. The fact that wave interference is evident in their trajectory may be due to their interaction upon entering the slits. If water particles, or sand particles can together form waves, one can assume they will act in such way that their individual trajectories will reflect the wave they form, and once that wave, if it were to pass through a couple of slits will alter their individual trajectories to satisfy the interference pattern we see on the plate in the double split experiment. In other words, the interference pattern in the double slit experiment (based perhaps on the my sorely naive point of view), are the result of particles, that together form a wave, and upon the entrance to their respective slits, their altered trajectories through the slits reflect their wave relationship. I need to know if any of this is absurd.
0
[Note I am asking up to, but not including, consciousness as this bleeds into philosophy and is a much messier question] Assuming that the laws of physics have remained constant across space & time since the big bang, has the way the universe evolved been entirely predetermined? While impossible to know the physical parameters & interactions across all space and time, these unknown states would have been acted upon by consistent forces since their onset. Thus, while not knowable, the universe grew along only a single possible path forward, meaning from the instant of the big bang to (cosmic) now, everything has been entirely determined? Phrased another way, was there any physical process that introduced true randomness that caused the universe to have multiple different potential paths? Note that I don't mean things that seem random - such as paths of specific gas molecules when dispersing - since these are ultimately entirely predictable given sufficient knowledge of states & environment. Or is my premise somehow wrong?
0
If the Hamiltonian manifold for the moving surface of the standing wave is smooth then it must be minimum surface of revolution. The frequency of the string is the capacity of the manifold that is a periodic solution to the Hamiltonian. The frequency cannot be a rational number frequency = velocity/wavelength because the solution to the Hamiltonian is a real given by the Dirac measure. There is only one Hamiltonian so I think string tension and length are determinative. All I want is the mean curvature of the manifold. You are being asked here to prove the manifold is not symplectic. Obviously, you cannot. I say you must conclude curvature is constant. You have no way out of this except to say I have not explained this to you well enough that the string makes coherent theory. Do you have a calculus for the string as a moving surface?
0
I. I thought he was swimming - I thought he swam for a while but found out it was a lie I thought he is swimming - I thought he was swimming now, but he's not. I thought he swam - I thought he once swam (don't know when). I thought he has swam - I thought he has already swum recently (a couple of minutes ago). I thought he had swam - I thought he had already swum recently (a couple of hours ago). I thought he ... but i changed my mind, and now i dont think in this way If I can't frame my thoughts in this way, how should I do it? Specify proposals, add new ones? But is it possible to designate meaning with the help of time?
0
If electric fields are created by an accelerated charged particle, such as an electron, and magnetic fields are generated by electric fields in motion, what are the individual fields that make up Electromagnetic Radiation oscillating between? What is the Y axis in their sinusoid function measuring, and how does that continue to oscilate in the absense of moving charge once the photo is emited? Is the electronic field aspect oscilating between a negative and positive charge relevant to overall energy levels, or is it confined to one side of that? I am asking as I have an incomplete understanding of how a photon, without a charge, continues to oscilate at the speec of light. I fully expect that I have an oversimplified understanding of both repsective fields, and I hope to learn what unknown forces are at play.
0
Hi there Wise people of the internet, I am trying to do analysis some data gathered from a gamma scintillator setup, its stored in root. So i have to do some coincidence measurements, and i found that in Krane, you normally use a TAC (Time to Amplitude Converter) to check for it. However, this gaussian peak expected there has count per channel in y axis. Should i just divide the counts that i recieve for each channel by channel number? I dont see how that will be useful.(this will disproportionately eat away any meaningful data stored in higher channels, which corresponds to higher energy,after calibration, breaking expected stable background of chance coincidence) I am kinda new to detectors. Any help is much appreciated and also any resources/references will also be. Regards
0
There is a group in which is embedded the set of rational functions considering the formal composition binary operation? More generally how (and when) can we extend a non-commutative monoid (or semi-group) into a group ? I was looking at the interesting case that the natural numbers can be extended to de integers by the matters of considering it as a monoid over the sum and constructing a group of the fractions and then extended again into the rational by applying the almost same method considering the multiplication operation this time. Thinking in this context i try to apply the same ideia to the set of rational function (with rational coefficients - quotients of two rational polynomials) but considering the composition binary operation as it is associative, has a neutral element, etc... But obviously isn't that simple and after searching for a answer i discover the concept of group completion of but not much more and specially not much for the non-commutative case. I'm thinking in a abstract way, thinking in a rational function as a pair of finite sequence of rational numbers, as was already noted in comments that not all rational polynomial are invertible, but. Considering that the case is analogous to the fact that not all integers are multiplicatively invertible
0
I am stuck writing this sentence: Relics such as Fortran, B, D and other programming languages continue to stay alive, stories of programming languages in the graveyard, and those that spurred magic have antecedent the continues quest of improving programming. EDIT: Clarity. Think of this as a battle cry among programming language developers. In this battle cry, stories of both success, and failures emerge overtime and historically, nevertheless, the journey is not even complete or done. Essentially, events, programming languages in the graveyard, and programming languages that made magic and allowed some companies to achieve technology never seen before, the combination of all of this has thus, "......" the continuous quest of improving programming. I am trying to make the reader, from also previous text and stories discussed in the text, to understand that, technologies that left a MARK in an industry or across industries as a result of a different programming language have caused: a curiosity and quest to build even better programming languages that can move industries forward. I HOPE THIS IS CLEAR ENOUGH.
0
I've read quite a few things about rarefaction waves in terms of giving weak and entropy solutions to certain PDE problems with fixed initial time data. I understand that the consideration of these solutions provide some sense of density that decreases with the passing of time, but I've not heard much else. My concerns are: Why are these kind of solutions specially noticed as one of the very first examples given in several books?. Maybe it's some sort of "unwritten" but written tradition to expose these solutions. Are they directly applicable in some specific physics topic or something? I'm pretty sure they are since every attempt I've done to try to research about the topic results in some content including the word "density". Any thought, suggestion or reference could be very helpful to me ;)
0
Papers often use one example throughout as their ongoing example. Similarly, it is possible to focus on one particular case and show a broader claim by using that case. I remember there was a really nice verb for using x at a test case, or perhaps x serves as a use case. I tried a thesaurus searching for exemplify (which is close, but not exactly right) and some other searches ... but none of these helped much. An example sentence might be: Testing food in general, we verbify eggs. or Eggs verbify food in our context. More elaborate examples: Using the RR dataset as a test case, we investigate how different aspects of metric design affect the computation cost. We perform an analysis study on efficiency, using the popular RR dataset as a test case. As a test case for investigating dataset efficiency, we analyze the results of RR.
0
Could somebody untangle following statement I found here: the integer cohomology groups correspond to the quantization of the electric charge. I know from pure mathematical side the meaning of cohomology groups, but not understand the translative part between physics and pure math at this point: Could somebody borrow some time to sketch how this identification/ correspondence is in detail established? ie how can the (presumably geometric) quantization procedure be "interpreted" / encoded purely in terms of certain cohomology group? And furthermore, what is here the precise meaning of the notion of "a charge" from viewpoint of pure mathematical terminology? Up to now I thought that a charge in mathematical physics can be recognized as a quantity comming from the existence of global symmetry of the given system, so in simple terms certain intergral/ number witnessing the existence of such global symmetry. Do they endow in the quoted sentence above certain "bulked" meaning to the term " charge" in order to making it to becomes the object subjected to quantization procedure?
0
I am trying to find some sort of motivation as to why we integrate manifold over differential form and why especially does it in some form corresponds to integrating the surface of the area. I have already passed my university courses that have covered these topics, including Stokes's theorem with proof (My uni has this approach of definition - theorem - proof structure in all courses with very little motivation). The issue is that sadly I've never intuitively seen that it could have anything to do with the real area if the manifold would be an earth-like object or any other intuitive geometric structure. Is there any free material/youtube class that illustrates that it truly corresponds well? I don't need to see proof as I quite understand the technical side of things. My issue also is that I cannot see differential form in any other way than only technical mathematical extremely abstract definition. Please keep in mind that my knowledge is limited to European standards of a bachelor's degree...
0
Let say we have some water in the sink and open the closure.. The water starts to move towards it in a whirpool like manner.. If we have a table tennis ball and leave it near the hole of the sink it will rotate like a planet instead move right towards the hole... If we suddenly open up more the hole the water will start to move faster towards the hole but even the water moves towards the hole the orbiting ball will still be able to detect the change of the water speed due to the hole of the sink being openned more... Can this be compared with gravitational waves where even the spacetime tissue moves towards the BH binary star the orbiting planet is still in position to detect this sudden perturbance moving in opposite direction regarding spacetime tissue simmilarely as the table tennis ball get the perturbance moving in oposite direction than the water moving in direcion of the sink hole?
0
Hi everyone, lets say we have a circle shaped space station and there was an accident where we got a hole (hole A on the picture) on a station and now most of the station is vacuum except for one enclosed part that is still filled with air. Now if we make holes B and C at the same time in the enclosed part, what will then happen with the air? Will it: because air moves in random directions, half of the air will go through hole B and then through hole A and other half will go through hole C, go through entire station and then through hole A because of difference in pressure all air will go through hole B and then through hole A (I am not sure if this is true or I am misremembering, but I read somewhere that air takes the shortest path to equalize pressure so that is the reason that all air will go through hole B? If this is not the true is there some way to influence all of the air to go through hole B, change the size of the holes, make a longer distance between holes C and A?) this and next scenarios are highly unlikely but I have left them just to cover all possibilities , all of air will go through hole C and then through hole A air will stay where it is and it will not leave the room
0
First, there are only three type of leaves: enter image description here If we assume lamination is compact ,all leaves is only can be a simple closed geodesic. I want to ask about how could union of uncountably many leaves be a minimal compact lamination,actually all leaves is complete simple geodesic,but all leaf in a compact minimal lamination in a punctured torus can only be a simple closed geodesic cause leaf of compact lamination can not goes up to cusp ,so it can only be closed simple geodesic ,so we take closure of any leaf ,it can only be itself, that contrict to minimal. cause i dont know that question above, i can not understand why we cut a punctured torus along a compact minimal lamination which is not a simple closed geodesic ,we can gent a ideal bigons which is patch two side of two ideal triangels.
0
If I were to measure some quantitavie metric of a sample population and record its mean, and then I were to split by random selection all members of the population into two groups of equal size and record the means of each group on the same metric. Is their a statistical term/measure of the difference of those means from each other and/or the mean for the whole population. This comes as I'm trying to analyze an experiment in which both the control and sample group where told information regarding the experiment during a pretest where I fell like the information could have been left out due to the large size of the population. My rational behind that is the difference in the split means would decrease as the overall population increases and one could assume that both sample groups are evenly distributed.
0
The court's ruling is claimed to be (...) because it has only counted specific actions to set up some (...) simple structure of justice that only encompasses the last hour neglecting the history of events. (...) = ? = "has only lasted for a short amount of time prior to being mentioned" Referring to current attitudes that have arisen in the general span of an hour or day or other fairly newly developed engagement. "Of the moment" "According to recent neoteric dispositions" The structure is simple because it is founded on only one or two just-happened pieces of evidence. Adjacent to "transient" "transitory" "ephemeral" but in reference to a newly grown/spoken structure of logic. "recent" and "latest" seem too general and encompassing an indeterminate amount of time, whereas the word I'm looking for refers only to the nature of the just-referenced events, which is (...) Transient = lasting for a short amount of time (...) = has only lasted for a short amount of time
0
Simulate a magnet sticking to a metal door. The side that is against the door, means that side is the pulling side, and must be the inwards movement of a toroidal magnetic field, pulling the object inwards resulting in getting stuck to the object! Does this mean, the side that is against the object is the side where the field goes in through the magnet and will always be the actual South pole of any magnet? When the magnet is flipped onto the object, the field is in the opposite direction pushing against the object, which will be the North pole of that magnet. Does this mean that when holding a magnet and you can diffirentiate between the actual physical pushing and pulling side of a magnet, them to know that the pushing side must be North and the pulling side must be South? This has Nothing to do with: Stringing the magnet and it will find magnetic North or South and zero to do with earths magnetic field direction, but everything to do with the toroidal magnetic field of the magnet itself and which direction the magnets toroidal field flows?
0
In school, I learned the mechanism of high and low pressure areas, which roughly goes like this: In the tropics, the sun warms up the air during the day. Water evaporates, so that the air gets warm and moist. Warm air is lighter than cold air, so it rises up, leading to a low-pressure area because the warm air is now "missing", so we need air to horizontally flow in. The rising moist air cools down, so it starts to rain, that's why there are rainforests in the tropics. Some thousand kilometers to the north or south, the air is sinking to compensate for the rising air in the tropics. Sinking air warms up and gets dried, so it is very dry (Azores High), leading to the big deserts in the subtropics, e.g. the Sahara. I hope this is more or less correct. There is one thing I do not understand: Why is the air pressure lower in the tropics, and higher in the subtropics (Azores High)? My intuitive physical reasoning tells me the opposite, for various reasons: The reason warm air rises up is not that because "it is light", but that it is lighter than cold air, so the cold air can replace it by flowing under it and lifting it up. So the phrase "warm air rises up, leading to a low-pressure area because the warm air is now missing" is very misleading: it is the inflowing cold air that lifts the warm air up. So there is never a "moment" when we have a lower pressure - the warm air rises because it gets replaced by cold air, and there isn't a reason why the pressure should drop. The air pressure is more or less the combined weight of the air above us. The heavier the atmosphere above us, the higher the ambient pressure. Now, when the air warms up and gets lighter, it gets replaced by the cooler air flowing in. So after the warm air has risen up, we should have more air above us: the warm air that has risen up, plus the cold air that has flowed in horizontally. So the atmosphere above us is heavier in total, which again means we should have a high-pressure area. Another view could be that since the air in the tropics is warming up, it expands. This large-scale expansion should lead to a high-pressure area. After all, if you expand a large amount of gas, its pressure increases until it has had time to expand properly. I am very much aware that all these reasons must be wrong in some way. I don't doubt the existence of the Azores High or the rainforest. I just miss an intuitive understanding. How can the tropics Low and the subtropics High be explained in intuitive physical terms?
0
My understanding is that in commercial nuclear reactor operations, fuel rods are not used up to the point where they're fully depleted and unable to support fission, but are replaced while they still contain an appreciable amount of fissionable isotopes, to ensure that the reactor stays in a stable operating regime at all times. However, would it be possible, for any reason, to run a nuclear reactor without replenishing the fuel as long as any useful energy at all could be produced, that is, actually "running the fuel rods dry" (or, more formally, until criticality is irrecoverably lost and decay heat is the only remaining output left, at which point the reactor is essentially just a fancy spent fuel pool)? Could such an end state be safely achieved, with power just gradually fading away while the control rods extend out more and more to compensate for the declining reactivity? Or would unstable and potentially dangerous operation ensue, as in Chernobyl where operator response to a poisoned core started a catastrophic chain of events? The question is motivated by the realization that one of the safety-enhancing measures proposed for the RBMK reactor design after the Chernobyl disaster was to raise the enrichment grade of the fuel, apparently to reduce the type's susceptibility for core poisoning and the resulting power fluctuations (at least that's how I understood it). I'm interested if the converse is also generally true, that is, whether it would be hazardous to let the fuel deplete too much in modern PWR/BWR reactors that don't share the design flaws of the RBMK.
0
In biology, the scientific name of a species (known as the "binomial name" or just the "binomial" or sometimes even just the "binomen") is written as a pair of words in italics (or underlined, which is the equivalent of italics in handwriting). For example, modern humans belong to the genus Homo and, within this genus, to the species Homo sapiens. The first word specifies the genus (meaning a group of related species) and has an initial capital, while the second word specifies the species, which doesn't have an initial capital, even if it is derived from the name of a person, e.g. the scientist who discovered the species. https://en.wikipedia.org/wiki/Binomial_nomenclature For example, that Wikipedia article says, "[...] the binomial name of the annual phlox (named after botanist Thomas Drummond) is now written as Phlox drummondii." Note that in the phrase "annual phlox", which is not in italics, "phlox" has a lower case initial. So in this case it seems that Wikipedia thinks it is acceptable to write "phlox" in all lower case. And yet I've often read that a genus name must always start with a capital letter. For example https://en.wiktionary.org/wiki/genus_name says, "genus name (plural genus names) (taxonomy) The scientific name of a genus, which is always capitalized; the generic name or generic epithet. Usage notes The scientific name (binomen) of a species is a two-part name and is typeset in italics, the genus name (the first name) has an initial uppercase (capital) letter and the species epithet (or specific epithet) is written with a lowercase (small) letter; for example, the scientific name of the wild Rock Dove is Columba livia.". First, notice how the dictionary quoted above wrongly has "wild Rock dove". It should be "wild rock dove" of course. Ha ha. Now, if you will, imagine people somewhere had started to refer to one or more locally found species of the genus Columba as "Columbas"/"columbas". This could happen because calling them "rocks" would be confusing. After all, rocks do sometimes fly, land, and so on. In writing they write them without italics or underlining and sometimes without an initial capital. So my question is: Would the latter be acceptable in formal writing among nonscientists? To me it seems a bit anomalous for something that is a type of animal to get a capital letter. It's as if biological jargon has spilled into ordinary English, and where it conflicts with the rules of the latter, it just brushes them aside. The italics/underlining rule is rarely followed by nonbiologists, so do we really have to always follow the capitalize all genus names rule? Can't we follow the English rule that says types of animal are not capitalized?
0
The following question was discussed in my Discrete Math class, but we couldn't reach a consensus. Think of a set as a collection of bins. Each bin contains exactly one object distinct from all others. While we are allowed to move the objects around between bins, we cannot remove any object, nor can we place two objects into the same bin. Someone hands us a new object. Can we place that object into a bin, after moving around some of the objects already sitting in bins? A. Yes, for some sets this works just fine. B. Sets are not composed of bins and this question makes no sense whatsoever. C. No, there is no room for another object, all bins are already filled. D. It depends on the new object. I'll try to summarize the general arguments for why each answer is correct or incorrect: A: A few of us think this is right, because we can insert the new object into the set iff it's not already in it. However, this is similar to answer choice D, which was given to be wrong (see below). B: Many of us shy away from the answer because it "sounds" incorrect, but I personally believe this is the right answer. This is because I disagree with the representation of a set as a collection of bins (each of which contains exactly one object), for the simple reason that the model does not seem to account for adding new bins (and hence does not account for adding new elements). C: Some thought this was correct, if we assume that the number of bins cannot increase (i.e. if we cannot add elements to the set). Some thought this was incorrect for the same reason; because it would mean we can't add any elements to the set at all. This answer was given to be wrong. D: Most thought this was correct - if the new object is different from all the objects already in the set, then we can add it, and otherwise we can't. However, this answer was given to be wrong. And the fact that this is wrong seems to also imply A is wrong (i.e. the intended answer isn't talking about whether or not the element is already in the set). Could someone give a justified answer to this question using concepts from Discrete Math and Set Theory?
0
I have seen different explanations to understand why there are no local gauge invariant observables in gravity. Some of them explain that diffeomorphisms are a gauge symmetry of the theory and thus any observable evaluated at a spacetime point will be gauge dependent and therefore not an observable. This line of reasoning then argues for Wilson loops, or asymptotic charges as good (non-local) observables in gravity. This explanation, in my opinion, is purely classical, it doesn't rely on the uncertainty principle, or commutation relations, etc. However, other explanations give the argument that any device trying to measure a local observable will have a finite size, and therefore a finite accuracy. If the device wants to probe local physics, it should be smaller. However, the uncertainty principle forces the device to collapse into a black hole before allowing the experiment to give us local information. Alternatively, it is also explained that the commutator of two (quantum) operators has to vanish for spacelike separations but that a metric that is dynamical and fluctuates will mess with the causality of the theory, therefore making the operators not observable. These arguments seem absolutely quantum mechanical. I have a couple of very similar questions: Is the statement "no local gauge invariant observables in gravity" true in classical GR, in quantum gravity or both? If it is true in both, why do people treat the statement "no local gauge invariant observables in quantum gravity" as something special? Do statements about observables in classical and quantum gravity mean different things?The arguments given to explain each one are pretty different and seem to involve different physics. The first one relies heavily on diffeomorhism invariance while the second one relies on holographic-flavoured arguments about how much information you can concentrate in a given volume before you form a black hole.
0
I'm building the Proximal Policy Optimization algorithm from scratch (well, using PyTorch). I've been studying it by my own, but I'm a little bit confused in the optimization phase, here is the thing. From what I know... First, we initialize a Policy Network with random parameters. Second, we start with policy rollouts. At each time step (t) of the episode, we compute the Value Functions (functions approximators) in order to get the Advantage Function A(s,a), also, we compute the clipped surrogate objective (J) of each time step (t). At the end of the episode, we sum all the clipped surrogate objective values, this will give us the expectation of the entire episode of the expected cumulative rewards. In the paper of PPO the equation contains the expected value of all time steps (t). Once we have the value of the expected clipped surrogate objective (sum of all clipped surrogate objective values) at the end of the episode, we compute the Stochastic Gradient Descent (SGD). In order to compute SGD we need a loss function, we have our expected clipped surrogate objective function, so we just do -(expected clipped surrogate objective function) and that will represented the loss, that is the same as computing the Stochastic Gradient Ascent in order to maximize the expected cumulative reward or the objective function. Now my confusion comes in here... I thought that the clipped surrogate objective function was computed at each individual time steps (t) and then at the end of the episode, we sum all of them in order to optimize it (compute SGD). The thing is that some authors say that the optimization process (SGD) is done at each time step (t) of the episode instead of at the end of the episode, but why? doesn't the clipped objective function equation in the paper contains an expectation symbol? if the computation is done at each time step (t) then the expected value symbol in the equation is redundant, isn't it? Also, they say that in order to compute SGD we need a loss, I thought that by setting the clipped surrogate objective function to negative (-) we can get the gradient of it and then minimize it using SGD that it would be the same as maximizing it, but the paper shows another equation as being the loss function that the optimization phase uses, how is this? So my questions are... When and how is the clipped surrogated objective function computed (at each time step (t) or at the end of the episode), is my implementation of the computation correct?. When and how is the optimization phase computed (at each time step (t) or at the end of the episode)? My thoughts about the loss function correct? or what does the paper means with this loss function they show? Thank you in advance:)
0
I am currently very confused about the "topological" prerequisites of Lee's Riemannian Geometry (RG) book, An Introduction to Riemannian Manifolds. I have heard that this is a "truly introductory" text for beginners in RG. But as for prerequisites, in his preface Lee states that his other two books on Topological and Smooth Manifolds are a sufficient preparation for his RG text. But from several online forums I've come to the realizations that this would be an "overkill" just for studying RG (e.g., one need not study all of his Topological Manifolds text to study most of his Smooth Manifolds text!). So, in general, his preface is confusing to me when discussing prerequisites. I'm confident with my analysis background (inverse/implicit function theorem, etc.) As for the differential geometry stuff, I'm thinking about going over doCarmo's Curves & Surfaces text. I'm a beginner in DG and I prefer E. Kreyszig's Differential Geometry as it introduces tensors early on (something I'm highly interested in learning about). I'm also aware of another "standard" text on RG by doCarmo as well. So, here's what I need guidance on: Which of the books among Lee and doCarmo on RG is preferable if one wants to get a decent exposure to RG so that they can pursue research in related areas? How much topology does one need to know in order to tackle the texts by Lee and/or doCarmo on RG? For differential geometry, is Kreyszig's DG book sufficient to prepare for Lee and/or doCarmo's RG text(s)? Or is it preferable to study do Carmo's DG before tackling RG? My ultimate goal is (broadly) mathematical physics, in particular, Relativity and Quantum Gravity. Moreover, I consider myself more of a "math"-person and so would not prefer texts that ruin mathematical rigor.
0
I recently asked this question about whether there was a "distance" between two galaxies where both the gravitational force and the influence of dark energy would be balanced. The answers and comments seem to indicate that there is indeed such a "radius" around a galaxy. I was very interested in this, so I contacted the authors of this paper about this phenomenon. I asked them if it could be possible to have a satellite galaxy orbiting a bigger one just in the point where there would be a balance between the gravitational attraction of the bigger galaxy and dark energy, so that the satellite galaxy orbit would not decay (through gravitational waves, tidal forces...) and avoiding its fall eventually towards the bigger galaxy. They replied that the answer was basically yes, and that they could keep that orbit as long as there was no external perturbation modifying these orbits. But I had one more question about this scenario. My question is: If that balanced state would be possible, would there still be tidal effects between the two galaxies (So that some of the orbits of planets and stars inside the galaxies could be somewhat modified) but without making the orbits of the galaxies decay over time...? I mean, imagine a satellite galaxy orbits a bigger galaxy just in the radius distance where the influence of gravity and dark energy are balanced out. Is it physically possible (at least theoretically) that the tidal forces between the galaxies may affect some of the planetary systems' orbits in these galaxies (for example changing the orbits of planets around their stars like for example making them orbit their stars further apart)? And would these tidal forces disrupt the satellite galaxy from the distance where gravity and dark energy are balanced out? Or without any external perturbation, it should keep orbiting at that distance (even with these tidal forces between the galaxies or the gravitational waves emitted from the orbits around the bigger galaxy)?
0
I'm an amateur and this is my first question here, I'm trying to formulate question about a general representation I have in mind after trying to grasp the idea of relativity and the concept of space-time. We always talk about the "speed of light", but it's a bit of a misuse of terminology, since we should really talk about "speed of electromagnetic radiations (EMRs)". Moreover, we recently confirmed that gravitational waves, despite not being of the same nature as EMRs, travel at the same "speed of light". Considering that, according to relativity, an object travelling at the speed of light would not experience time passing, and that it doesn't seem to make sense to envision travelling faster than the speed of light, Would it not be natural to consider the "speed of light" as being the "speed of time", or, let's say the speed at which the present propagates through space ? This way the speed of light and gravitational waves would be actually instantaneous, or at least as much instantaneous as the universe can be, and each point of the universe would "communicate" with the other points at the "speed of light". Usually when trying to [in]validate this idea by reading other questions or other sources, I find complex explanations that do not really help me evaluate the correctness of the broad model I have in mind. I do not know if this is something that seems quite obvious to everyone already, but I couldn't find it expressed this way, and this mental picture really makes sense to me. If it does not correspond to our current state of understanding, could you give me some clue of where my formulation does not fit reality ?
0
I understand that "laws of physics" is a bit of a misleading term since all they really are is just us applying a logical statement about observed physical phenomena in a way that allows us to predict or understand said physical phenomena. That said, what my question is getting at is whether there are any laws of physics that "hold at all levels?" The idea of the "invariance of physical laws" is, and has been, a key notion for developing new theories and furthering understanding of phenomena. But for a lot of given laws, there seems to be some system or situation, in which the laws must be modified or corrected in order to hold, or is simply not applicable. I'm not concerned with the numerical accuracy of physical laws, which seems to be the focus of similar questions on this forum. An example of this would be Newton's laws of motion, which break down at the quantum scale (despite having analogous principles) or at relativistic speeds. Maxwell's equations have been described as the "solutions" to electromagnetic theory, but they are only correct up to the point of treating magnetism as an unexplained phenomenon and not the result of relativity. Are there any laws that, with current and modern understanding, are always true (I'm not saying that they couldn't be found to not completely the case in the future, but we believe they are currently)? The only one that I could believe fits this description, is the second law of thermodynamics, in that "entropy never decreases." We have entropies for vastly different systems at vastly different scales, from quantum systems to human scales, to galaxies and black holes. However, this may be a little handwavy, as entropy has different definitions for different levels (Von Neumann entropy and the entropy of a black hole for example) and my understanding is that they do not translate from one another.
0
I am looking for a term to use as the name of a software project that I am working on. The project is a software tool, and this tool aims to be useful in virtually all software, so I am looking for a term that alludes to it being an indispensable, or perhaps even the most indispensable, tool in a profession, as in: the test screwdriver being the most indispensable item for an electrician the monkey wrench being the most indispensable item for a plumber scissors being the most indispensable item for a tailor etc. A different but equally useful direction of meaning would be a term for an item which is guaranteed to be present in a certain line of business or endeavor. For example: In a tire shop they are bound to have lots of tires, so tires are their ___ (fill-in the blank.) If there is one thing that mountain-climbers are guaranteed to use, that is rope, so rope is their ___ (fill-in the blank.) Every priest is bound to have a bible, so the bible is their ___ (You get the idea.) Besides the word "indispensable", other terms near the meaning that I am looking for (but unsuitable) are "essential", "sine qua non", "tool of the trade" and "staple item". I was seriously considering the term "bread and butter", but after reading about it I have formed the impression that it necessarily has a fiscal connotation, while there is none in my case. Please correct me if I am wrong. (Other than that, "BreadAndButter" would be an awesome software project name, despite being unconventional: the modern trend in software project names is strongly towards unconventionality.)
0
In textbooks on many-body quantum physics (e.g. Fetter and Walecka), Feynman diagrams are typically introduced after formulating the Dyson perturbative expansion of the Green's function using Wicks theorem. Then the Feynman diagrams follow as a way to conveniently represent the resulting integral equations. In most literature however, I have noticed that the language is somewhat different. Typically, after introducing the Hamiltonian a certain quantity of interest will be introduced, typically the single-particle Green's function of self-energy. Then there is often a sentence like: To evaluate this quantity of interest we sum the set of Feynman diagrams shown in Fig. X Then Fig. X will contain the perturbative series already written in terms of Feynman diagrams. It seems that the step where this series is formulated, as I find in the textbooks, is typically skipped. I can think of two possible reasons for this: The actual formulation of the Feynman rules from Dyson's equation and Wicks theorem is seen as trivial, and hence not repeated in typical papers. Even if the system is not a standard system treated in other literature. There is actually a faster or more intuitive way to write down the relevant Feynman diagrams from the Hamiltonian, without having to resort to the perturbative expansion explicitly. If this is the case then I would love to see a textbook where such a procedure is explained. Currently whenever I want to understand a paper I go through the whole perturbative expansion for the respective Hamiltonian, which is a very tedious and time consuming process. I would be very appreciative if someone more familiar with this field could tell me which of these is true. Thanks!
0
I think this is easier to understand the mapping of injective, surjective and bijective in terms of marraige proposal where men are from set A and women from set B but Is this analogy correct? Injective (One-to-One) Functions: If the marriage process between men from set A and women in set B is injective, this means: Every man proposes to a distinct one and only one woman Some women might not receive any proposals (i.e., remain unmarried), but no woman receives proposals from multiple men. So, in terms of our marriage analogy for injectivity: each man proposes to one woman, and no woman has more than one suitor. Surjective (Onto) Functions: If the marriage process is surjective, it implies: Every woman receives at least one proposal. All men have a marraige proposal and it's possible for a woman to have multiple suitors. For surjectivity: every woman gets at least one proposal Bijective Functions: For a marriage process to be bijective, every man must propose to a distinct woman such that every woman gets exactly one proposal, and every proposal is accepted. No man or woman is left without a partner, and there's no situation where a woman has more than one suitor or vice versa. To sum it up using the marriage analogy: Injective: Every man has a partner (definitely unique since they are not allowed to propose multiple women), but some women might be left without a partner. Surjective: Every woman has a partner and they are allowed to propose multiple men so that no man remains without a proposal Bijective: Every man and every woman have unique partners. Nobody is left out, and no overlaps in pairings.
0
Trying to solve a problem with a colleague of mine, we prove a theorem that I'm sure someone else must have had to come across but couldn't find anything about it. We needed a way to tell how far away from equilateral was any triangle. A measure for any triangle of its non-equilateral-ness. One idea (that later turned out not to be the best) was this: Let ABC be any triangle (the one we want to know how far away from equilateralness is). Let's take any of its sides, say AB. Let's take the point C' in which ABC' is equilateral, and which lies to the same side of AB as C. Let's measure the distance CC'. Let's take points B' and A' the same way we took C'. What we found is that the distances CC', BB', and AA', are equal. No matter which side of the triangle we start with, the result is the same. Is it known to be found earlier? Edit For the record, the proof goes this way: We have the original triangle ABC and three new points A', B', C' such that ABC' is equilateral and so on. Let's see triangles ABC' and AB'C. Both are equilateral and share the same point A. Let's see the geometric transformation that brings C' into C, B into B' and A into itself. Given that the distances C'A and BA are the same, the distance traveled by C' to go into C must be the same as the distance traveled by B to go into B'. Ergo, distances C'C and BB' are the same. By the same way we prove that the distance A'A is also the same.
0
I came across this sequence called Digital River, where the next number in the sequence is defined as the sum of the digits of the previous number, plus, the previous number itself. It caught my attention for some reason, and I wanted to analyse it. And I found some curious fractal-like patters. But let me begin by saying I am no mathematician, and I was just doing this recreationally, as I don't have the requisite tools and faculty to unwrap why these fractal-like patterns should appear. So I am posting my analysis notebook here in hopes of finding some answers. Now, I have also come across the summatory Liouville function, and it too has similar fractal-like patterns. So, could it have something to do with, or is related in any way to, digital rivers? Some of the comments say that Liouville has something to do with the Riemann-Zeta function. Could the Riemann-Zeta function also explain why fractals appear in subtraction of digital rivers? If so, could you explain how, in a way that somebody without an undergrad degree in Math can understand the source of these fractal patterns? And in doing so, could we formulate a theory or pattern of what other, similar kinds of sequences can show similar fractal-like patterns? Here are some of the fractal-like patterns I've found, to pique your curiosity to download my notebook: P.S. If this turns out to be an interesting problem that cannot be explained away trivially, and you want to work on analysing it together, then I'm happy to collaborate.
0
I'm sorry for mistakes if any; English is not my native language, but I'll try to explain myself as thorougly as possible. There is a geomerty topic about construction of different geometric shapes using just a straightedge and a compass (a pair of compasses, more accurately). E.g. a regular pentagon can be constructed under these restrictions, but a regular heptagon cannot. hyperlink My question is: why straightedge and compass but not pencil? I mean, the compass as a drawing tool does contain pencil lead. That's a fact. But using the compass in order to construct straight lines, in my opinion, is not rational. I think nearly all people use a pencil to draw a straight line with the aid of a ruler. And the ruler itself, of course, cannot draw anything on its own. I tried to ask it on a different site. I received two answers. The first answer was very ironical and I considered it rude and impolite. The answerer told me: "If you wish to add a pencil, than you must also add a boy and a girl (who will make your construction)." I'm sorry, but I didn't speak about boys or girls. Just a pencil, that's the plot. A very sad and unsatisfying response! And another answer was quite polite but obscure. I think it needs some clarification. The second person told me that maths never changes anything. The ancient Greeks just used the ruler (straightedge) and compasses, and that's all. So we don't have the right to change it. I think I could agree with the second answer, but I am curious and inquisitive. So why is it so conservative, why have the rules remained unaltered and why didn't anybody try to change anything? That will be my secondary question.
0
We have hydrogen inside a tube, and we induce a voltage on it; a current passes through it and light is emitted. The frequencies of light correspond to the differences of the eigenvalues of the energy operator, which is the observable in question, so it is customary to give a heuristic explanation that the electric energy produced an energy transition and the residual energy was emitted as light. At what precise moment did the wave function collapse in this experiment, if we try to describe according to the Copenhagen interpretation? How does that description work in this case? Can you maybe direct me to a paper that describes this in detail? Some more words to clarify these questions: I would like to understand if the wave function is supposed to collapse the moment the voltage is applied, or the moment the electronic transition happens, or the moment the light arrives at the spectrometer, or the moment it hits the photographic film. It would be interesting to know what event, in that interpretation, triggers the collapse. A worked-out model of the whole situation, explaining how one describes each component of the system, would be most welcome. Thanks a lot in advance! Edit: This post has been marked as needing more focus, I think by people that did not understand the point of the question, to whom I'm nonetheless very grateful for their feedback (but please if you're one of them, kindly explain better what's going on because I also don't understand your position). The question was phrased as a bunch of different questions in an effort to clarify it, but it boils down to this: what exactly is the role of quantum collapse in the standard quantum theory's description of hydrogen atom gas radiating in a tube? Thanks again.
0
I have a bit of confusion because when doing QFT and QFT in curved spaces this particular issue seems to be avoided. I have this feeling that when we quantize a theory, we somehow choose a chart and we stick to it. This feeling comes from, for example, the way we deal with Lorentz transformations in QFT, namely via unitary representations. In my head, change of coordinates is something more geometrical rather than algebraic as is done in QFT. I also asked a professor of mine and he told me that the usual way of quantizing things is chart-dependent and then suggested I read TQFT and AQFT papers for which I'm not ready yet. Can someone help me understand? I am searching for a mathematically rigorous construction of the quantization process (in canonical quantization) and if it can be done in a coordinate free way. Hope my question does make sense. EDIT: I think my question was misunderstood: I do believe that, of course, the physics in QFT is Lorentz invariant. But in my understanding of the process of quantization what we are doing mathematically is the following: pick a chart, construct Fock space/Quantize and then model in that Fock space Lorentz transformations via Unitary transformations. In this process, if I take another chart I construct a different (but canonically isomorphic) Fock space. So you see: in QFT (I believe) you don't treat change of chart in a more geometric way, but you model it algebraically. I think that what I'm saying can be seen in Wightman axioms: there is no reference on the spacetime manifold (of course the choice of the Lorentz Group comes from the isometries of the Minkowski metric but one can avoid talking about the metric completely) , it's purely algebraic. So are Lorentz transformations.
0
What's the word(s) for a feeling of disappointment when you've lost something of financial value? For example, let's say I'd just got an expensive LCD monitor from a raffle, but I accidentally dropped it and it broke, and I now lament its loss. I guess "lament" works okay, but it's not really a colloquial word, and it seems to refer to disappointment in a more general sense (like the loss of a good friend) rather than strictly about valuable possessions. The "value" in question should be strictly financial rather than emotional, meaning that you feel sorry for it being expensive rather than it having been used for a long time. I broke my brand-new LCD monitor. I haven't even gotten a single use out of it. I ... that monitor so much! Edit: Just to drive home the point, there's this word in my language that is very specific, because it's used in contexts where you miss out on something of monetary value, lose something of monetary value. For example, when you almost won a monitor but didn't because of one stupid mistake on a game show; or when you'd just won a monitor, but accidentally damaged it and made it unusable; when you actually owned a monitor, but lost it while moving because the delivery guys dropped it. In all these cases, the disappointment is purely monetary, because the monitor was just too expensive and it'd be hard to ever get another one as good. Even in the case of you having owned the monitor, you still missed it because it was expensive, not because it was with you through thick and thin. As an actual example, I have this tablet that's quite cheap, but if I ever "miss" it, it's because of my emotional attachment to it, not because it's expensive, because it's actually very easily replaceable.
0
There are a few river crossing problems that I have seen that share some common aspects. The cannibal and missionary problem is typical. All these problems involve moving everyone from one side of the river to the other side by using a boat to cross the river. Denote by complement, an arrangement that is equivalent to what you get at any stage of the problem by swapping the shore that each person is at. For example, the final position of the puzzle is the complement of the initial position. One thing that these puzzles have in common is that it is legal to start at the final position and do the moves in reverse order and in the reverse direction, going from the final position to the initial position. I noticed a relatively simple property of some of these problems that I am having trouble describing formally, let alone proving. In the solution to many of these problems there comes a point where a new position is the complement of the previous one. If you start at this position and perform the previous moves (except for the last) in reverse order, the last position will be the complement of the first, making the sequence of moves a solution. I hope that what I said makes sense. How could I state this in a more formal manner? It should be pointed out that you can't actually do a complementation, but you can achieve the equivalent. For example, in the cannibal and misssonary problem, you go from having two cannibals and two missionaries on the far shore to having two cannibals and two missionaries on the near shore. This is done by having one cannibal and one missionary move from the far shore to the near shore, but could have also been achieved using complementation. I thought that the idea of complementation would simplify proofs. Here is a more compact description. Let the positions be represented as (A,B), where A is the set of people on the near shore and B is the set of people on the far shore. The complement of (A,B) is (B,A). The starting position has the form (S,-) and the final position is (-,S). If we have a sequence of moves (S,-)->...->(W,X)->(Y,Z) then we could perform the moves in reverse order, giving (Y,Z)->(W,X)...->(S,-). Suppose that you can go from (Y,Z) to its complement, (Y,Z)->(Z,Y). I would like to be able to show that we could go backwards (Z,Y)->(X,W)...->(-,S), giving a solution to the problem. Here is an idea of what I would like to be able to say. In going from (W,X) to (Y,Z), the boat ends up on either the near shore or the far shore. In going from (Y,Z) to (Z,Y), the boat ends up on the opposite shore. This means that, in going backwards, it will be able to go from (Z,Y) to (X,W) and generate all the complements of the first set of reversed moves.
0
I saw other posts such as this one but I don't think it's quite the same question, or even if it is, the answer employs the operator formalism and I'm not sure I follow it. I'm wondering, if you have two multiparticle states - a multiparticle state being, in my mind, a complex probability amplitude for each possible configuration of particles, as a function of time - then, is a normalized linear combination of these two states still a valid multiparticle state? Keep in mind they are both functions of time that obey the equations of QFT, so the linear combination is also a function of time, and I'm asking if it still obeys the equations of QFT. I'm trying to think about it in terms of Feynman diagrams. In particular, I'm pretty sure you can linearly combine two multiparticle states at one time with no problem - you just get a superposition rather than a pure state. And since the amplitude for a final configuration is essentially the sum of all propagators from the initial state, well, this sounds linear enough to me. I think you would just sum up the independent contributions from all the pure states of which the initial state was composed. What else could it be? What throws me for a loop is that I've seen several posts here and elsewhere talking about the inherent non-linearity of QFT. But I think they might be talking about linearity with respect to combination of single particle states. I'm not worried about that, however, since I can see that multiple identical particles really form a single mathematical entity (a product rather than a sum, e.g. the Slater determinant for fermions), so linearity wouldn't have much meaning in this context anyway. Still, the whole thing appears rather murky so I'd really like to clear up this point. To put it another way, I know that there is interaction within the evolution of a pure multiparticle state, and this leads to entanglement, which, mathematically, is just the inability of the final state to be factored into a single tensor product of one-particle states. But, is it fair to say there is no interaction between the pure multiparticle components of a mixed multiparticle state? (at least, as long as we don't classify summation of complex amplitudes as "interaction")
0
Context: I ask this as a school teacher reaching past the boundaries of my expertise. A colleague was talking about the standard model with an advanced student, explaining how particles interact by exchanging gauge bosons, asking the student to imagine gauge bosons as little spheres. Of course they got to the issue that this mental model doesn't add up, since in classical mechanics such a process always leads to repelling forces, while the forces in electron-positron-scattering are attractive. He asked me on my opinion on how to rectify his explanation in such a way that a student could still create a mental image of the process. This is what I came up with, and while I'm sure that it can't do the actual physics full justice, I would like you to point out precisely in what ways this is inaccurate, so that I can improve it in such a way that an advanced student can still have a mental image of the process. The mental model: In classical physics, waves typically obey the superposition principle, that is, two waves don't interact, but can pass through each other without influencing each other. This is true for electromagnetic waves, but also all other waves the students might have encountered in class. It is not always true for water waves, however. For instance, two tall waves crashing into each other won't pass through each other unchanged since they lose energy due to turbulences. The correct description of their interaction requires an additional term besides just the sum of the two wave functions. We can imagine something similar to be true for wave functions in QFT: As a particle like an electron is described as a wave function, the interaction of, say, two electrons is similar to that of two water waves in the sense that they do actually interact, yielding more complicated wave effects. A main difference is that unlike classical fields, which don't interact with other fields, the electron field interacts with the electromagnetic field as well, and the nonlinear effects come from one electron wave packet interacting with the em. field, which then interacts with the other electron wave packet. And since photons are electromagnetic wave packets, we can think of the em. part of the interaction as being comprised of photons. So what do you think? How (in)accurate is this mental model of interactions mediated by gauge fields.
0
I am currently thinking through evaporation over lakes, specifically the Laurentian Great Lakes (a complex subject, I know). Particularly, I am trying to wrap my head around why evaporation peaks in the fall and winter. Based on what I have read, this fact is due to the vapor pressure gradient that exists between relatively warm water and dry air (dry in an absolute sense because the air is cold) and the high winds which continually replace that dry air over the water. I have also read that this evaporation from the Lakes has a cooling effect on the Lakes themselves, causing a temperature decrease in the Lakes. I have seen it insinuated that over-lake evaporation is synonymous with a latent heat-flux. Based on what I have read, latent heat is an exchange of energy to a substance without a change of temperature of the substance. For water transitioning from a liquid to a gas, the required amount of energy to cause this phase change is called the enthalpy of vaporization or the latent heat of vaporization. When the air is colder than the water, where is the energy coming from to supply the latent heat of vaporization that causes evaporation (assuming its a cloudy day)? If the vapor pressure gradient is the main driver of the rate of the evaporation, how can the evaporation still be said to be a latent heat flux, i.e., how is the transfer of particles based on a pressure differential called a transfer of heat, albeit a latent transfer of heat? Does a latent transfer of heat mean that the water particles which evaporate do not change temperature, though the liquid water which they leave behind decreases in temperature? Sorry, that's more like three questions! Hopefully they are not stupid ones :) it seems this is a complex topic, and my initial intuition that evaporation is always higher when the air is warmer than the water (thus transferring heat energy to the water, increasing the energy of the molecules, leading to increased evaporation rate since the water molecules are now higher energy) is simplistic and even wrong, as it depends on so many more factors than that. Our world is not a simple one, physically at least!
0
As is well-known, classical conserved systems have conserved quantities by virtue of continuous symmetries, which can be derived from Lagrangian mechanics. For example, two masses on a spring can swap momentum between the two, but translational invariance ensures that the total momentum is conserved. But what if we introduce damping, specifically viscous damping, into the equations? If we consider a fluid medium, our two-mass system surely loses momentum to the fluid. But suppose that the spring itself is viscoelastic, so that it generates equal and opposite forces on the two masses proportional to their relative velocity. Then, the system is translationally invariant in some sense and conserves momentum, but is not Lagrangian (at least not in a simple way). Is there a theory of how to calculate conserved quantities based on symmetries in damped systems, analogous to how it is done in undamped systems? I can do this for specific cases, but I'm not sure how to do it in general. A few attempts: Constructing a complicated Lagrangian consistent with damping Declaring that damping is a purely phenomenological effect coming from many microscopic degrees of freedom behaving conservatively, and so concluding that the system should have the same conserved quantities as if there were no damping (problem here is that some of the conserved quantities, like energy, must leak into the unobservable microscopic degrees of freedom). Considering the corresponding conservative system and showing that it has conserved quantities. Then arguing that this is true for each frequency even if the "spring constants" are complex and frequency-dependent, and so concluding that damping doesn't really change anything. Again, though, this seems to prove too much, since energy shouldn't really be conserved. By way of motivation, I'm working with a system that has an unusually large and complicated number of symmetries, and I'm trying to determine how worried I should be about damping killing the conservation in a real experimental system.
0
Imagine we have a hollow metallic toroid, with copper wire winded around it, which carries electric current. That implies we have magnetic field inside the hollow toroid. The toroid has vacuum inside. We have a set up of a high voltage supply and an electron gun that takes the free electrons from the metallic toroid and shoots them inside it. The velocity of electrons shot inside the toroid is low enough so the magnetic field will bend its trajectory in a complete circular path within the boundaries of the toroid. Now the electrons are flying in circular trajectory inside the toroid. But this situation can't be hold forever, as the electrons are centripetally accelerated the irradiate photons and thus loose kinetic energy. As they loose kinetic energy and velocity, at some point they will stop orbiting and will stay still. But unless they still exactly in the geometrical center of the section of the toroid (which is highly unlikely), they will be attracted to the boundary of the toroid due to Coulomb forces. And as they move towards the boundary, they will regain velocity and start orbiting again. Resuming, the electrons will loose energy due to their accelerated motion, then regain energy, and then loose it again. Apparently this cycle will repeat endlessly, meanwhile they will radiate photons as they loose kinetic energy. If my analysis is correct, how the energy conservation principle will be applied here? Radiating photons endlessly means giving endless energy. The second part of my question is as follows. Suppose the electrons doesn't radiate photons, due to some arbitrarily stated postulate (like the Bohr's explanation on why electrons doesn't fall on the nucleus of an atom). Apparently there is no obstacle to hold an infinite amount of electrons inside the toroid. The only limiting factor will be the amount of the voltage applied to fire new electrons inside the torus, as previously fired electrons will create repelling Coulomb force for the new incoming electrons. But there will not be such thing as "dielectric rupture" as in the case of an ordinary capacitor, so hypothetically an infinite amount potential difference can be set between the hollow toroid and its inside. Is this assumption correct?
0
I'm having trouble finding a good phrasing to describe a component of a system that is too important -- in a sense that it distracts a person from all the other components, that should have been, or were formerly, important aspects of the experience. The case where I'm trying use it is something like this: We tried adding a new super-queen to our variant chess game that could move anywhere -- but we found that it was ______, and that none of the other pieces were important to the gameplay as a result. Another example, possibly from academia: We recommend that teachers not offer students extra-credit work during a semester. We have found this to be _____, in that the option routinely distracts students from the main body of work and preparation for exams. The key thing I'm trying to highlight is that participants in the system become hyper-focused in their attention on this one component, to the detriment of the other parts. (E.g., while the chess example indicates an overpowered piece, that's not what I want to emphasize here; it could also be the result of a super-weakness that becomes the only thing you'd want to attack, say.) I could also imagine this being used in a work of art: Say, an element of a painting that is too bright or oddly placed. Or a breakout character in a TV show who takes over what was meant to be an ensemble story (i.e., they "may... overtake the other characters in popularity, including the protagonist"). Is there a good word or phrase to indicate this state of being too important, and attracting too much attention?
0
Scenario Consider an empty universe with just the Earth, Moon and the Sun. The Earth and the Sun will rotate about their center of mass, which is inside the Sun. The Earth and the Moon will orbit their center of mass, which is inside the Earth. The Moon and the Sun will orbit about their center of mass, which is inside the Sun. Assumptions Since the Sun has a center of mass with respect to both the Earth and the Moon, which are moving separately, the Moon pulls on the Sun too, just like the Earth does. This results in the Sun not having a closed loop orbit around their common center of mass. I assume the third body affects the two other bodies in this manner: The center of mass of the Earth-Sun system rotates in the same manner the Earth-Moon system rotates about their center of mass The center of mass of the Moon-Sun system rotates in the same manner the Earth-Moon system rotates about their center of mass These two effects describe the same motion when viewed from different frames (Earth-Sun and Moon-Sun systems) The above scenario is more pronounced if you consider Jupiter instead of the Moon, but I'm looking at the minute details. Question The question is whether all three bodies always revolve around their common center of mass, which also accounts for the motion of all bodies with respect to one another? If I'm wrong, how exactly does motion happen in this three body system? (Explain using simple words) Can this be generalized for an n-body problem? (Although probably not useful for computation but seems good for a mental image) I think this can be consolidated into a two-body problem by considering the Earth-Moon system as a single body at their center of mass, resulting in a single stationary point as the center of mass of the whole system. That is, I'm taking the center of masses of the Earth-Sun system and the Moon-Sun system, and taking their center of mass, and I expect it to be the center of mass of the Earth-Moon-Sun system. Or is it just that bodies only revolve around their common center of mass, and the center of mass of the three-body system has no say in describing the motion, aside from being a static point? References The motion of the Earth and the Moon around the Sun for a month (from a WIRED article) The Solar System's barycenter with respect to the Sun (from the Wikipedia article on Barycenter)
0
Initially, I was looking for how centripetal force is produced on the surface of the rotating earth for a mass kept at any latitude. I went through the following threads - Which force provides the centripetal acceleration that makes objects on earth's surface rotate about Earth's axis of rotation? Is the normal force equal to weight if we take the rotation of Earth into account? Question about the Normal Force exerted by Planet Earth in relation to centripetal force If the ground's normal force cancels gravity, how does a person keep rotating with the Earth? From there, I understood that the resultant of normal force (N) and gravity (mg) is the required centripetal force. But what is bothering me now is HOW? According to the answers, the normal force is slanted such that it is not exactly opposite to gravity. Thus, they don't cancel out, resulting in a horizontal centripetal force. I'm still confused and have the following questions : Why is normal force slanted in the first place ? (Is it because of the earth's bulge, friction or centrifugal force?) I think there's also a vertically-upwards component of the resultant, why is that? (the resultant of normal force and gravity) This is what I see, what is the reason behind this? Source of the image - https://en.wikipedia.org/wiki/Equatorial_bulge Edit - I've gone through this question Is the normal force equal to weight if we take the rotation of Earth into account? but this doesn't clear my doubt regarding the upward component of the resultant of gravity and the normal force. I posted this question because I wanted some more insight into that poleward force and its relation to the bulge of the earth, which isn't emphasised in that question. kindly reopen my question.
0
This Q&A did not answer my question. The voltage of a circuit is the difference in each Coulombs potential energy at the negative pole, compared to the positive pole. At the negative pole, there's a whole wire for the electrons to pass through under the influence of the Coulombic forces; a whole wire with atoms between which they can accelerate, before transmitting their kinetic energy into the atoms. The longer the wire, the more durations of acceleration (thus, the more kinetic energy is produced). But, the longer the wire, the more resistance there is, thus, the lower the amperage. A lower amperage means less acceleration between each collision (assuming constant wire diameter). So, as the wire gets longer, there's more durations of acceleration, but the acceleration is lower. The accepted answer to this Q claims that this is the explanation. But I doubt these two factors cancel each other out such as to leave the voltage unchanged with change in wire length. That is, despite this, I still think a longer wire would mean more kinetic energy is produced. More kinetic energy means there must have been more potential energy that was transmitted into it; thus, the voltage must have been higher. But it isn't, so what gives? There's the distance factor; the longer the wire, the further away from the charged poles of the battery the electrons get. However, this would then make the voltage dependent on how close you lay the wire to the poles, which is again contrary to the assertion that the voltage only depends on the battery.
0
I'm now in the process of writing a report on a lab work i did with an Abbe refractometer. In all sources i found the working principle of this refractometer is described as such: "Light shines into a illuminating prism who's side that contacts the sample is roughed so light scatters uniformly in all direction into the sample. The sample is held between the illuminating prism and the refraction prism (RP). Rays that hit the sample/RP interface at an angle large than the critical angle of the interface suffer total internal reflection (TIR) and thus do not penetrate into the refracting prism. This creates a zone in the refracting prism where there is a shadow. The angle of that shadow depends on the refractive index (RI) of the sample, and it can be measured by adjusting the light/shadow line". Additionally, some sources explicitly say that the RI of both prisms must be larger than the RI of the sample. Critically, i did not find any source that said otherwise. If this is the case, how is it possible for there to be TIR in the sample/RP interface if the RP's RI is larger than the sample's? Furthermore, i used https://phydemo.app/ray-optics/simulator/ to simulate the apparatus. I found that a shadow zone only formed when the condition: "prism's RI > sample's RI" was met. Of course in this situation there was no TIR. In fact, forcing the situation where there is TIR, absolutely no shadow zone was created. Can someone explain to me how this actually works then? Is there something i am getting wrong? I'm almost sure there is because even manufacturer pages say it works with TIR... But still, it is very confusing!
0
Whenever I Google to try to find an actual formal statement of the first incompleteness theorem (as opposed to all the oversimplified explanations that talk about "true but unprovable theorems" rather than theorems independent of the axioms), the definitions that don't just mention something like a system with "strong enough to do arithmetic on the natural numbers" mention a system with an "a sufficiently expressive procedure" for enumerating theorems, which is reminiscent of terminology used in explanations I've seen of Turing machines, and so I thought perhaps it meant that the incompleteness theorems apply specifically to Turing complete systems. However, when I posted about this on Quora I got responses saying that Turing completeness has nothing to do with it, whereas, on the askcomputerscience subreddit, I got responses saying that yes, the type of systems the incompleteness theorems apply to are mathematically equivalent to Turing complete models of computation. So which is correct? Is a system with a "a sufficiently expressive procedure for enumerating theorems" just a Turing complete complete formal language/model of computation? If not, what exactly does that terminology mean? To be clear, I understand that the first incompleteness theorem states only that a formal language with "a sufficiently expressive procedure for enumerating theorems" must be either inconsistent or incomplete (i.e. must either imply contradictions or contain theorems that are probably true in some models of the language and provably false in such models), so I don't need an explanation of what the theorem says, so much as clarification on what types of formal languages it actually applies to. I did see this answer to a related question, which seems to imply that the types of languages/systems the incompleteness theorem applies to are systems that are "recursively axiomatized" systems, but it's not entirely clear what that means. Based on other simplified explanations I've seen of the the theorem, I guessing it refers to something like "a system sufficiently powerful to talk about itself" which would intuitively seems to match up with the concept of using Godel numbering to construct a statement like, "This theorem is unprovable", but that still seems rather vague and nonrigorous. Surely, there has to be a specific and rigorous definition/description of the sorts of systems the theorem applies to though, otherwise how would it be useful?
0
I have been using '[...]' to indicate skips in the middle of sentences, '[...].' to indicate that a single sentence has been skipped (or that a middle of sentence skip '[...]' is at the end of a sentence and so is punctuated), and '[....].' to indicate that two or more sentences have been skipped (or to indicate that an end of sentence skip '[...]' bleeds into a '[...].' or '[....].'): [...]. Because of this qualitative simplicity of negation that has returned to the abstract opposition of nothing and ceasing-to-be to being, finitude is the most obstinate of the categories of the understanding; [...] finitude is negation fixed in itself and, as such, stands in stark contrast to its affirmative. [....]. The determination of finite things does not go past their end. [....]. But since it looks clunky, I was wondering if this is correct instead: [...] Because of this qualitative simplicity of negation that has returned to the abstract opposition of nothing and ceasing-to-be to being, finitude is the most obstinate of the categories of the understanding; [...] finitude is negation fixed in itself and, as such, stands in stark contrast to its affirmative. [....] The determination of finite things does not go past their end. [....] I have not been able to find anything on the topic of skipping entire sentences within block quotes that incorporate entire paragraphs. I know it would be preferable to avoid this situation in the first place by paraphrasing or excluding parts (some style guides even say that ellipses should never be used at the start or end of quotes), but I can't do that here for multiple reasons.
0
In the past two months, I had found a wonderful pdf that went through a derivation of the determinant with calculating the area of a parallelepiped as its starting point. The document did not get into the weeds of calculating the determinant given a matrix or even focus explicitly on matrices at all; it was probably the single best description of why the determinant tells us about area I had found and I just really found the document's approach to determinants extremely useful. I'm trying to relocate this pdf to no avail and was really helping someone could help me find it. What I remember is this: it begins with calculating the area of a parallelogram as its starting point. It describes how we want the area to change when a side is scaled and when the image is sheared (I very distinctly remembering these two things being bulleted in the first page of the paper) and goes onto describe how these properties would manifest in a general "area function," that is a function that takes in the vectors as inputs and returns some type of unsigned area. It proves several facts about these functions that begin to show how the determinant "appears" from answering this question naturally. After this point, they expand their scope to signed area functions and, building upon the results in the first part, show that any function which returns the signed area of a parallelogram is simply a multiple of the determinant. The paper rounds out with a discussion about expanding these ideas to higher dimensions; it describes the problem of measuring the area of a higher dimensional parallelepiped with regards to measuring the area of its shadow. One of the more distinct parts of this description is that it begins with a hypothetical scenario where you find a floating parallelepiped outside of your window one day and cannot interact with it but wish to find its area. My attempts at googling with these bits of information I remember have not been fruitful, so any help finding this pdf would be appreciated.
0
Motivated by question Can IC engines be modeled as Carnot engines?. I am wondering whether/how Carnot theorem could be generalized to other kinds of devices performing "useful work", such as, e.g.: Motor (or generator) fed by a battery Nuclear power generators Solar cells Water wheels I think that the theorem must be generalized in at least three ways: Operating media is neither a gas nor a liquid - that is the reasoning based on isothermic adiabatic expansions might not apply. Generalizing the concept of temperature (introducing "effective temperature"?) - e.g., in case of a batter or a water wheel, we do not have two reservoirs with different temperatures to properly speak of, but rather two reservoirs with different (chemical) potential. Generalizing the concept of useful work - solar cell and water wheel are not really transferring the energy between two reservoirs - the energy already flows, and the device simply diverts a part of this energy into work. But, since the energy flows anyway, it is not clear whether/how the part of it that is diverted is useful: e.g., how is the current generated by a solar cell is more useful than the heat generated in the surface illuminated (which may be also "useful" in everyday sense.) Perhaps, there is not much left of Carnot theorem with all these generalizations, and we simply need to consider it as limited to a particular class of phenomena? If so, are there other upper boundaries on converting energy to work (that would be applicable to the devices cited in the beginning?)
0
The Earth, effectively a non-inertial frame of reference, is where Newton concluded his laws of motion. However, Newton's first law only holds in an inertial frame of reference. In the process of inventing the Newton's laws of motion, since almost all (I suppose) the experiments were done in a non-inertial frame of reference i.e. the Earth, why were people confident enough to believe that Newton's first law is true (to some extent, I am not talking about relativity etc.) in an inertial frame of reference? I am not trying to say that Newton's laws of motion are lies. I just had a logical question: since the research was done in a non-inertial frame of reference, how can we invent the laws regarding motion in an inertial frame of reference? I suppose that's because the Earth can be approximated to an inertial frame of reference as the effect of the self-rotation (which causes the Earth to be a non-inertial frame of reference, in my opinion) of the Earth on the objects on the Earth is quite small. And, therefore when people did the experiments, the uncertainty caused by the self-rotation of the Earth is too small to be significant (or, maybe they didn't even find/realise such an uncertainty!). And therefore, by inference (or guessing?), we can invent the laws of motion regarding objects in an inertial frame of reference. In theory, if we do the same experiment in a true inertial frame of reference, we will effectively get the same/similar results. (This is just my explanation, which can be very debatable!) I want to hear how other people think about this question. Thank you!
0
Let me try to illustrate what I mean. Consider e.g. a Solar radiation storm (Solar particle event) where high-energy protons are hurled at Earth from Solar flares. I've tried to illustrate my conception of this (I know the protons will typically not follow straight paths out from the flare due to the Parker spiral, but it's a simplification): So the protons get captured by the field (given sufficiently low velocities perpendicular to the field, as far as I understand it) and are then led to the poles due to their drift velocity, as they will almost always have some velocity component that's not perpendicular to the field. Now, to me it seems (and the same applies to the plasma in coronal loops as far as I can tell) that there's a current along the field lines themselves due to the drift of the protons, in the direction they're traveling. This should itself induce a magnetic field surrounding the imaginary magnetic field lines at the centers of the helical proton motion as if the magnetic field lines themselves are current-carrying wires, should it not? Something like this: Am I correct in thinking about it roughly in this manner? If so, does that mean that these new magnetic fields could potentially themselves partially trap particles (although I assume the stronger original field would overwhelm it) and induce new magnetic fields around them in turn? Is there a limit to this "fractal" process of magnetic field lines acting as currents inducing magnetic field lines acting as currents, and so on?
0
My question, right up front, is: what is the term for a modifier that behaves this way? But "this way" takes some explanation, and that is the rest of the question. I am a mathematician, and my question makes the most sense in a context where words are formally defined anyway, but you can freely substitute "foo" and "bar" for any technical jargon. A ring is defined to be a set with two operations that satisfy certain properties. Among these properties, there is not universal agreement: should, or should not, the ring be required to admit a multiplicative identity ('unit')? For a mathematician who does not require this, it is easy to indicate when they wish temporarily to impose the hypothesis: they can just refer to a "unital ring". A mathematician who will almost always consider only unital rings might decide to make that part of the definition of a ring, so that they can say "ring" where a more permissive mathematician would say "unital ring". (This lightweight controversy is discussed in the Wikipedia article.) However, this latter kind of mathematician, on making the rare encounter of a ring that does not have a unit, must either make up an entirely new term for it, or call it a "non-unital ring". This usage is almost universally understood, but somewhat puzzling: while it makes sense to understand, e.g., a "commutative ring" as being a structure that it is a ring, and also satisfies the requirement that its multiplication be commutative, there is no way that one can so interpret "non-unital ring" if a ring, by definition, has a unit. So, what's happening here is that "non-unital" is not refining "ring" by adding conditions, but changing the meaning of "ring" by dropping existing conditions. That is, you know something about a commutative ring even without cracking open the definition of "ring" (namely, that it has a commutative operation); but, to know something about a non-unital ring, you must not only crack open the definition of "ring", but recognize which among the properties one is expected to remove. I'm looking for a word describing the behavior or function of modifiers that behave like "non-unital", in the sense outlined above.
0
So, I was reading some books by Stephen King, S.D. Perry, and a couple authors I really love. I notice they'll use pronouns or certains words twice in the same sentence. When I read it, it's pleasant and doesn't sound weird in my head at all. I assumed that if something is published under an author like Stephen King or Neil Gaiman, it must be grammatically correct to use something like "he" or "her" twice in the same sentence. I'm not sure if this is why it doesn't sound strange in my head when I read it, but I've provided an example below. Here it is (bolded words are the repeated pronouns): My example: Martin stared at the ceiling above and grimaced, counting the number of bumps and flecks embedded in its expensive paint. He gave up after some number he couldn't remember, tossed to his side, and yawned. The smell of curing lacquer and stain drifted off a nearby nightstand, burning his nose. And here is another example from It, by Stephen King: When he grinned, there was a ghost of the handsome man he would become in the lines of his face. There examples in multiple Stephen King novels of names repeated in the same sentence as well. I'm using Stephen King as an example, but note that almost every published author I've read does this in wildly different genres and styles. Anyway, is using a pronoun more than once in the same sentence grammatically correct? I know sometimes things are grammatically correct, but discouraged or frowned upon, but that's not my question, here. tl:dr; Is it grammatically correct to repeat a pronoun in the same sentence in any circumstance, and if not, why is there published material that violates this rule?
0
In my limited knowledge of statistics, I am confused between the meaning of Latent Variables and Nuisance Parameters. Here is my current understanding: Sometimes the variance parameters in a regression model (or the overdispersion parameters) are considered as Nuisance Parameters because we are not directly interested in estimating them In a Gaussian Mixture Model, the weights of each Gaussian Mixture are called Latent Variables because we do not observe them directly (although this point somewhat confuses me because we dont observe any parameter directly, latent or non-latent ... all we observe is data) From an estimation perspective: We remove the nuisance parameters (via marginalization, factorization) because they can bias the estimates. I don't fully understand this, but I think it has to do with degrees of freedom, MLE produces biased variance estimates (in complex regression models, we can not a priori know the correction factor needed to remove the bias). Later, if required, we estimate the variance parameters using a more complicated form of MLE called RMLE We remove the latent variables (again via marginalization) ... because they make the optimization problem easier? (i.e. reduce the dimensionality of the function being optimized) Is this the only reason we remove latent variables? Or is there some other reason ? (ex: the estimation of latent variables is biased?) From here, it looks like these concepts are almost interchangeable: A nuisance parameter is latent and a latent variable is a nuisance. They both complicate the estimation process for similar reasons and we try to remove them through clever math tricks (e.g. marginalization). I feel I am missing something here - any one have hints?
0
My partner frequently asks me questions that, when read literally, are questions about the past, but in intent and intended response are actually conditional questions: Did you have any thoughts about dinner? Did you want to have coffee? ...where the intent is not merely to inquire about the preferences I have established prior to the time of questioning, but also about what my preference might be at the time of questioning. "Would you have any thoughts..." or "Did, or would, you have any thoughts..." might also capture intent, but awkwardly. I'm quite literal so I've had to train myself out of interpreting "did" as a past tense question of fact, but it occurred to me that it might be a regionalism or other speech pattern where there's a kind of merger going on between "did" and "would", or other shift where "did" is used conditionally in casual settings, maybe in cases where there's the possibility that the answer might have been determined in the past, but also might not have. Even in my examples above, "would" does feel a bit stuffy and formal by comparison, but I've never used "did" this way myself. Does this interpretation make sense idiomatically outside of the individual context of my partner? Does anyone know if there is any regionalism in play? My partner comes from a family that has lived in Connecticut for generations. My family is mostly from the midwest and south. Please feel free to correct me if I am misapplying "conditional" or can otherwise better describe the usage here, I will update as I can! Also, unfortunately I don't have the gift of search terms on this one, so I haven't been able to find any discussion of it elsewhere, but let me know if I can add any research or context.
0
Double-slit experiment image source: Wikipedia The double-slit experiment can be regarded as a demonstration that light and matter can display characteristics of both classically defined waves and particles. It also displays the fundamentally probabilistic nature of quantum mechanical phenomena. In a double-slit experiment using an electron beam an interference pattern is formed after experimenters record a large amount of electron detections. I have seen this answer by "anna v" which states an electron never travels through both slits only one slit per electron and the pattern formed is only a statistical probability distribution for the entire accumulation. But assuming in an experiment in which electrons travel one after the other and each electron travels only through one slit then how could the pattern on the screen be different from the one when we close one slit interchangeably and send electrons only through one slit at a time. (Actual experiments have shown patterns are different indeed) I think an electron through double-slit as a superposition of probabilities of spatial distribution. Like this picture below: But according to anna v picture that come to my mind is below (several electrons illustrated): So I have two related questions: Is the so-called wave nature of particles only a mathematical model or is there some physical nature (properties) to the probabilistic wave that passes through the double slit? Is stating whether the electron passes through both slits or it only passes through one slit just a personal opinion/interpretation that cannot be proven or disproven by observations? Edit: Evidence supporting simultaneous two (path) position: Using a Mach-Zehnder Interferometer to Illustrate Feynman's Sum Over Histories Approach to Quantum Mechanics One particle on two paths: Quantum physics is right by Vienna University of Technology Double-slits with single atoms: Selective laser excitation of beams of individual rubidium atoms by Andrew Murray, professor of atomic physics, University of Manchester, UK
0
I am working on a problem where I need to extract a connected tree of nodes based on certain attributes while optimizing for the minimum number of nodes. Some attributes of the nodes are known in advance, such as resource capacity. However, I do not know in advance how many nodes will be selected in the final tree, especially the intermediate nodes that ensure connectivity and may act as task-forwarding nodes. The goal is to formulate a mathematical optimization model with a set of constraints to select an ordered tree with the minimum number of nodes, where each node hosts specific tasks. Selecting individual nodes is straightforward, but I am struggling to write the constraints to ensure that: All selected nodes are connected. The hosted tasks follow a desired ordered sequence. For example, if nodes host tasks A, B, C, and D, task A should be in a node that comes before a node hosting task B, and so on. Multiple tasks can be placed in the same node if they do not exceed the resource capacity of the node and do not violate the ordered sequence of tasks. A node with task D may act as a root node in the tree. Most of the existing literature I found online assumes either the total number of nodes in the final tree or the total number of edges in the tree is known beforehand. In my case, it is possible that only a single node may satisfy the required attributes and host all the required tasks, or a set of constraints may be needed to satisfy the required attributes. Any hints or suggestions regarding how to formulate these constraints are highly appreciated.
0
Sorry if this equation is not phrase in precise mathematical form. I am open to suggestions to improve the explanation, and I have tried to formulate the problem as precisely as I could. I was talking to a friend--postdoc in PDEs--today and he was asking me about obtaining some Yahoo data on stock prices for his students. He explained that he had some students working on stock price data and he was looking for software packages/libraries for time series analysis--like ARIMA, GARCH, ARMA, ETS, etc. I told him that as far as I remember, most many models to stock prices or asset prices use stochastic differential equation like the Black-Scholes models. So I was asking if he actually needed like a stochastic differential equation solver, rather than a time series package. He said something that surprised me. He said that many people used to use Black-Scholes models, but implied that they are not used as much any more. He basically said that many people lost money using these models. I myself don't work in mathematical finance or in banking at all. However, I was unaware of any major critiques of stochastic differential equations and their applications. From a formal perspective, I imagine that these SDE models have a set of parameters (means, variances, etc.) that users can tune against some data using an optimization method. I am not sure what optimization would make sense, I can imagine approximate bayes or something, could work, but there are probably a million choices. So is the claim of a bad fit, mean that the error between the model predictions and the ground truth data too high, or perhaps changing over time, etc. Also, are there a different set of finance models being used as replacements for Black-Scholes?
0
I have seen that it is possible to approximate the metric in the presence of a gravitational field by the Rindler metric: Does a uniform gravitational field exist? Is there any acceleration in a uniform gravity field? Applying the principle of equivalence to an accelerated frame Rindler Coordinates and homogeneous Gravity Field Gravitational field strength and Horizon in Rindler coordinates Now as some answers in the links have pointed out, this doesn't quite describe a uniform gravitational, because the acceleration described by the coordinates depends on one of the spatial coordinates. My question is, how can we refer to this as a gravitational field at all? The Rindler metric is derived from a coordinate transformation on inertial coordinates on Minkowski spacetime, so we know a priori the Riemann curvature tensor will be exactly zero in the Rindler metric. In doing this "Rindler approximation" to the gravitational field, say, near the surface of the Earth, we started out with a nonzero Riemann curvature tensor (indicating spacetime curvature exists), and then we obtained a situation in which the curvature tensor vanishes everywhere in the region we're approximating. Doesn't this render the approximation invalid? Even if you argue that the region of approximation is small (which makes sense), there is no sense in which we can make the curvature tensor outright vanish (because the Ricci scalar, which is a contraction of the curvature tensor, is supposed to be invariant). Accompanying this strange change in the curvature, objects that "fall into the Earth" followed geodesic paths prior to the approximation, and under the new approximation, the same objects are now undergoing proper accelerations, meaning they are no longer following geodesic paths. Is there a physical interpretation or the mathematical reasoning behind this change? It seems like we are replacing spacetimes outright as opposed to approximating them. Is this understanding correct?
0
This question was inspired by the interesting discussion here: Why isn't the T in "relative" flapped? It seems like the adverb already and the two-word phrase all ready should be pronounced differently, but as far as I can tell, both sound exactly the same. For comparison, consider the other phrase/adverb pair all ready and already, where there is a difference: "The suitcases are all ready" doesn't sound the same as "The suitcases are already..." because the stress pattern is different ([all ready] as opposed to [al ready]). Additionally, the "l" sound at the end of "all" in the phrase goes on for longer than the "l" sound in the adverb, corresponding to the separation between the two words. If someone said the second sentence (fragment) out loud, it would leave the listener asking "already what?" In contrast, I wouldn't be able to hear the difference if someone incorrectly substituted the adverb altogether for the phrase all together. They both have the same sequence of stressed and unstressed syllables ([all to ge ther] vs. [al to ge ther]). More surprisingly, the adverb also keeps the aspirated "t" sound found in "together," even though that t is between a stressed and an unstressed syllable. This seems very exceptional, since that context usually requires the substitution of the "flapped t" for the "aspirated t." But, if this were the case, then "altogether" could be spelled "aldogether" without changing its pronunciation, which it can't. ("Aldogether" just sounds like I have a stuffy nose.) This also contrasts with "relative"/"reladive" like in this question, where the two pronunciations are interchangeable (the flapped t one being more common when it's said faster). "Relative" is also not an exception to the rule cited in the accepted answer, since there the t comes between two unstressed (or at least only tertiary stressed) syllables. The cited paper states that flapping the t is optional in that case. I'm interested in what the context (either historical/usage-related or some less well known phonological rules, or something else?) could be behind the pair not being differentiable by sound, and only by spelling, but none of the answers to the other question mention the word "altogether." No one seems to have asked this specific question yet, as this one is about meaning and not pronunciation. Here is a reference for the pronunciation of "altogether," confirming it always has a distinct "t" and not a flapped t/d sound: https://en.wiktionary.org/wiki/altogether. This proves that I haven't just been mishearing and mispronouncing the word the entire time!
0
Euclid based much of his geometry on a theory of magnitudes that looks roughly like this: A general theory of whole and part and how they are related in size (eg: the whole is greater than the part). A general theory of the properties of magnitudes (eg: equals added to equals are equal). A basic rule that allows one to determine in specific cases that one magnitude is equal to another (eg: all radii of a circle are equal; all right angles are equal). Euclid applied this theory to lines, angles, and figures (meaning the area). From these simple foundations, he is able to prove the equivalence of all sorts of things that are not related by the basic rule alone. I've been looking for a modern development of this theory, and though there is some interest in it (Robering), modern work on mereology and mereotopology that I've been able to find online all does things that seem to violate the spirit of Euclid's work; in particular: They treat points as parts of lines and lines as parts of the plane (Robering comments on this). Euclid did not do that. In Euclid the parts of a line are lines and the parts of a plane figure are plane figures. They assume a single universal whole of which everything is a part. This is related to the first issue. Obviously there isn't a single universal line of which all lines are a part. Similarly for angles. Also related to the first issue: the works on mereotopology make use of the concept of an interior part, which basically means, for example, the part of a line segment not including the endpoints or the plane figure excluding the boundary. This doesn't makes sense when points aren't parts of lines and lines aren't parts of plane figures. The works on mereology assume that any two objects form a whole. Euclid only considered connected wholes. They assume completed infinities. Euclid avoided those. Robering treats an angle as the infinite plane section enclosed by the rays of the sides of the angle. I assume this is to turn angles into parts of the plane so they can be part of the universal whole. They assume that there is no maximum-sized whole, but this doesn't work for angles if you treat them as Euclid does as their own kind of magnitude. Can anyone suggest a body of work that I can find online that deals with some of these issues?
0
The direction of polarization of a transverse wave is defined to be the direction perpendicular to the direction of propagation of wave or the direction of oscillation of wave, right? But in the case of Electromagnetic waves, a class of transverse waves, there are two directions of oscillations that are perpendicular to the direction of propagation of the wave, the direction in which the electric field oscillates and the direction in which the magnetic field oscillates. But while referring to the direction of polarization of an Electromagnetic wave, only the direction of oscillation of Electric field is called the direction of polarization. But why isn't defined to be the direction of Magnetic field? For this I thought like this: "For transverse waves like waves on a string, to determine the direction of polarization, just put a plane with slit in different directions and the direction perpendicular to the direction of slit in which the wave completely gets absorbed is its polarization direction." Now, if I apply the same logic to the Electromagnetic waves by replacing the plane-with-slit with a material that can interact with the electric or magnetic components of the electromagnetic field, i.e., a polarizer, then I can define exactly what the polarization direction of EM wave is. First let us introduce a polarizer that can interact with electric field. Generally these polarizers contain some long chain linear molecules placed in the same direction. If an EM wave is allowed to fall on this polarizer, with the normal of plane of polarizer being in the direction of propagation of EM wave, in different directions each time, then the electric component of EM wave gets blocked if the direction of electric component and the axis of the long chain linear molecules are parallel. Hence the polarization of EM wave is along the direction perpendicular to the axis of such long linear molecules or simply the direction of Electric field. Now if I conduct the exact experiment with such a polarizer that can interact with magnetic field then the polarization of the EM wave comes out to be the direction of oscillation of magnetic field. But when direction of polarization of EM wave is referred, only the direction of oscillation of Electric component of EM wave is considered. Why is that so? Don't the polarizers that can interact with magnetic fields exist? Or between the both directions of polarizations only the direction of Electric component of EM wave is considered conventionally because of simplicity? Or am I missing something?
0
Firstly: having a lot of difficulty figuring out how to articulate this question due to lack of general math knowledge. There are multiple questions posed below, but I feel like if I knew more they could be condensed into a single question and am hoping someone can suggest and edit to the effect of the below. Thank you in advance for your consideration and assistance on this point! Background I'm taking a course which includes descriptive statistics. In that course they describe the method of calculating covariance and provide that equation. I find myself wondering - why did they choose to define it as they did (using multiplication instead of addition between the terms in the numerator - not even sure if that description is accurate). My question? Are equations for things like covariance derived from looking at phenomenon and 'cracking the code' of how those phenomenon could be described mathematically via a proof? Or does one, at some level of mathematical skill, say: I want to model this phenomenon and I want that model to have these features and qualities to its output, therefore, I choose this particular structure to achieve that goal and then I prove that functionality through a proof? If the latter case - what progression of mathematical learning develops that skillset? Is the same skillset used in both cases? Why I ask If I understand the motivation of the creator of the covariance equation, I could compare it to my own motivation and perhaps come up with a different approach to the same problem that better fits my own goals because maybe our goals are similar but not the same... Thank you again for any advice on how to simplify this..
0
I'm trying to understand how electromagnetic radiation is created and can propagate through the void. I do understand the concept of an electromagnetic field. But I don't understand how we get from a "field" to a "wave". I'm not really interested in detailed mathematics of how this happens, rather I'm looking for a complete high-level answer that to the extent possible: Explains every step of how we get from having nothing to having an electromagnetic wave that propagates through space (e.g wifi signal) and Provides clear and intuitive justification for any point/law/fact it uses. Preferably all of this should be included within the answer but I do appreciate inclusion of helpful links and references for additional context. Below is my current understanding along with some more specific questions. To create electromagnetic radiation: You make some charge move (e.g through an antenna by creating an oscillating dipole). This moving charge creates fluctuating electric fields and magnetic fields around it All good so far as this is what you expect a charge to do, however these fields will become weaker proportional to inverse square of the distance and you would expect them to basically disappear at a distance. Electric and magnetic fields somewhat magically interact and now you have a self-perpetuating wave that doesn't fade out like the field. I don't understand this part, e.g how we get from a "field" to a "wave" and unfortunately most of the resources I tend to skip over why. I believe there should be a better explanation. For example, according to Wikipedia, Faraday/Len/Lorentz laws have to do with this. However all these laws/theories require a conductive "circuit" that we don't have in the air in the vacuum of space (I do understand how these laws explain how your antenna would receive an electromagnetic signal). Considering magnetic and electric fields/forces will act on charged particles, this raises a few questions: Does the magnetic field produced by the antenna somehow create charge in the air surrounding it? If the whole wave propagation is based on interactions between the fields and charged particles, then how can a wave propagate through the void of space when there is no charge? Thanks!
0
I'm an undergrad with an electronics repair background trying to find an explanation for one of the fundamental aspects of a transformer. Every explanation I've found of a transformer's basic operation insists that there is essentially no power loss; that is that VxI in the primary = VxI in the secondary. This suggests to me that if there was effectively no current in the secondary (i.e. it was an open circuit or there WAS no secondary) that current would more or less stop in the primary as well. My instructor adamantly insists that this is not the case, and that current in the primary is constant no matter what might be happening in the secondary which boggles my dang mind as it seems to fly in the face of every explanation I see, and I have not been able to get a clear explanation out of him. If this were the case, then a device with a transformer would consume equal power regardless of whether it was operating or not! Right? It seems to me from an intuitive and practical perspective that the EMF on the secondary should be relatively constant with respect to the voltage across the primary, and that the currents through each should be proportional at any given time, so that a changing load across the secondary should somehow influence the current in the primary, but I can't work out or find an explanation of that... mechanism. I feel like it must have to do with the secondary influencing the magnetic field in the core... Please help! I've been wondering about this for years!
0
In QFT, many mathematical issues arise. Setting aside renormalization, these deal with rigorous constructions of objects underlying QFT: i) In the canonical quantization approach, the main issue comes from trying to multiply (operator-valued) distributions. My understanding is that mathematicians have formalized some settings in which this makes sense, but you have to be very careful ii) In the path integral approach, the main issue comes from defining the path integral (both in terms of defining a sensible measure on paths, as well as making the integral well-defined despite the presence of oscillatory integrals). My main question is: Are the two issues (one for the canonical approach and the other for the path integral approach) related? If so, intuitively (from a purely mathematical perspective) how is the problem of defining products of distributions related to the problem of defining path integrals? I'm particularly curious as to whether there's some intuition to be gained from the usual proof of the equivalence (between the two approaches) in non-relativistic QM (which begin's with Schrodinger's equation, inserts of bunch of intermediate states, and removes operators one by one). In the case of non-relativistic QM, my understanding is that the canonical approach can be made fully rigorous, while the path integral approach isn't quite so (one can use Wick rotation to compute the integral using the Weiner measure, then rotate back using some analytic continuation argument, but I have read that this is only justified in some cases). Given this, do we expect that the issues of rigor aren't quite the same in the QFT land as well? Disclaimer: Apologies for any inaccuracies in my characterization of anything, as I'm still a beginner grasping with many aspects of QFT I am aware that decades of work have gone into formalizing QFT rigorously, and have addressed many of of i) and ii) for different variations of QFTs. What I'd like to understand here in particular is, if these approaches have given insight into how the different issues I outlined above are related. EDIT: I edited the original question to focus on just one question. Originally, I also asked about renormalization (which is what one of the comments addresses)
0
Would the positive and negative charges line up on either end of the wire? Or would it induce a current? Or would the wire be unaffected by the magnet? This was deleted for being a homework question. I'm not aware of any homework questions like this. I thought this up in my own head. (I drew the diagram in a program called Biorender, which I use regularly.) This IS a question about the underlying nature of physics. The question I have is this, "If simply moving a straight wire through a field causes motional emf, why then does a loop of wire require an area and a certain angle to create a current?" Why wouldn't moving a loop create a current if moving a straight wire creates a potential difference? All of the explanations I can find out there separate the two concepts from each other. "Because Lenz's law states..." How are they tied together? It's still a wire through a field. Physically, what happens to the charges when a loop moves through a field as opposed to when a straight wire moves through a field? But before I can even get to how the two situations are the same or different, I had to clarify some things about how a current can be induced in a straight wire. The following image was meant as the answer to my question, but Stack Exchange seemed to feel it went in the question. Now it may not make sense why I needed an answer when the answer is below. But that's why the answer is in the question itself in case anyone is confused about that.
0
I have been trying to learn about lattice path integrals. Unfortunately, majority of the literature on this topic is in regard to Lattice Quantum Field Theory and Lattice Quantum Chromodynamics. That is fine, however what I desire is literature on evaluating plain, standard quantum-mechanical path integrals on a discrete spatial-lattice, ideally in continuous-time. This could be a simple chain of atoms or a crystal lattice, but it would be a system where the lattice actually exists and is not just used as a regularisation tool for a continuum quantum field theory. For example, what is the "path integral" (I imagine this would instead be a summation now instead of an "integral") for a free particle on a lattice? Can it be evaluated in that it has a closed-form solution? Even if an ideal closed-form expression cannot be obtained, how far can one go? (E.g. reduce to discrete Gauss-sum or a Jacobi-theta function) In the continuum case, this is just a Gaussian functional integral, which results in a Gaussian. What would it be if space is discretised? Then what is the path integral for a harmonic oscillator on a lattice? What if you add a force term? What about a general quadratic Lagrangian? I have struggled to find mention of these problems in the literature. The standard approach is to introduce the typical time-sliced path integral derivation, and then jump to the Klein-Gordon equation. Are there any resources that do not jump to field theory and instead just investigate these lattice problems for simple quantum mechanical toy models? As a slight aside, I was wondering also if you could start from a typical continuum space path integral, for say a free particle, and add a delta-functional constraint to restrict the continuous path to lattice sites? This would enter the action then as a Lagrangian multiplier and may be evaluable. Thanks!
0
I've started to study in details quantum optics and I find difficulties in linking the concepts of coherence and correlation among fields, especially because I'm building right now a background on classical optics aven if I already have a strong background in QFT. As far as I have understood, roughly speaking, coherence is meant to be some kind of measure on how similar the properties of one or more waves are at two different spacetime points. To describe with a quantity this feature, we make use of correlation functions to build the coherence factor (I'm thinking about the second degree of coherence): the closest the factor to one, the greater coherence my wave(s) shows between the two points. Furthermore, correlation functions usually arise in photodetection experiment, stemming for the importance of coherence itself in any process regarding the measurement of EM field (I know there are alternative ways to takle detection, but I'm sticking the question to the very basics of quantum optics). Now, this means that correlation functions give me the "degree" of dependence between the photodetection probabilities at two different points. This means that they have more or less the same role that in any other QFT. But now, this seems to me conflicting with the nature of coherence itself: more specifically, in (free) QFT we have the so called "miracolous cancellations" where spacelike correlation functions vanish due to commutation relations, but in the optical theory spacelike correlation functions are at the grounds of spatial coherence measurements, and thus are usually non-zero. I don't get what am I missing, probably I'm misunderstanding some basic stuff regarding coherence itself, and I'm hoping to get enlightned about it. Thanks for the help.
0
After reading the first two answers to this question, I've become interested in understanding the concept of (co)tangent complex as a way to get some intuition about homotopical algebra, being somewhat more used to the algebro-geometric framework than to the algebro-topological one. More specifically, I'd like to understand this concept in order 'to do basic geometry - this time calculus - on a singular variety', as stated in the first answer (whatever this means), but not 'mechanically', instead trying always to keep an organizing point of view like the one described in the second answer. Also, I'd like to do so following the shortest possible path from basic algebraic geometry and basic category theory directly to the subject matter, with the smallest possible amount of detours, but comprehensively including all the needed basics. (I've studied some scheme theory and homological algebra before, including derived categories, and also ventured a little bit more deeply in the categorical world, but never dealt professionally with these topics and will have to recall a lot before being sufficiently at ease with them.) In this context, what I'm looking for is a double list of topics, one from algebraic geometry and the other from category theory, both ordered by degrees of complexity, designed to be studied in a parallel manner, showing the highest possible level of correspondence since the very beginning, and if possible accompanied with the most up-to-date literature available for this purpose. I'd be very grateful if someone would spend some time thinking about this and writing a nice answer.
0
There is a particular Twitch streamer from a video game I played, MermaidonTap. If you subscribed and follow her, not all but most of her public streams, she uses "fuck" and a lot of the word "cunt." I am a fellow American citizen, but was not born in US but in an Eastern Asian country, and am trying to learn English. I'm just wondering, is it normal for women to use the word "cunt" normally? I took university English and finally passed it on the fifth time growing up in America. So I'm trying my best to understand how the words in English come about and how certain words passed down to certain generations. As I have researched, the word, "cunt" is a derogatory term towards women and as well as another meaning as vagina. It just seems like, it's derogatory when men call women cunts. But what about women calling other people "cunts." I still find it derogatory, vulgar and I find it hard on my ears to hear words like "cunts" or "fucks" from women like her. Also, some of my teachers and some friends and my own mother, they both say that people who curse a lot and can NOT control their word usage, means they don't know any other language but vulgar words. They didn't go through college and university. They're not educated was what I heard and been told. Do you think this is also true or not? Overall, I'm just trying to understand the American English literacy. I don't mean to offend anyone but rather understand the meaning of the word being used at a person or in a public audience, like her on her public Twitch stream, MermaidonTap.
0
As far as I know, the first statement of the correspondence is between two formal theories named simply typed lambda calculus and intuitionistic propositional logic, which maps types to formulas and terms to proofs. We also have other statements for higher order logics and type theories. But it is also famous to replace the word "term" with "program" when people try to express the correspondence informally (like Wikipedia). I think that if we assume that programs and terms are same things here, then we can conclude that writing a program, which means expressing a program in a possibly Turing-complete language, is actually proving a mathematical theorem intuitionistically (without the use of law of excluded middle and other proof techniques that are banned for an intuitionist!) Is this true? I think that replacing the word "term" with "program" is misleading here! Because in any statements of the correspondence that I am aware of, we don't have a Turing-complete type theory, and I don't think that type theories are computational models like Turing machines and untyped lambda calculus. Also I think the fact that untyped lambda calculus is equivalent to Turing machine is also misleading to think that a term in a type theory is equivalent to some Turing machine, while as far as I know the equivalence between untyped lambda calculus and Turing machines is not necessarily a bijection, and even if it is, what does it have to do with simply typed lambda calculus?! All in all, don't you think that using the word "program" in the statement is wrong and misleading?
0
A textbook I'm using to refresh some basic grammar states that indirect objects can be identified by it's answering of questions such as 'to whom', 'to what' etc. (fair enough) and they always come before direct objects in a sentence (this raises questions for me). So the text would identify the pattern in: The teacher gave the students homework. as: S - TV - IO - DO but the pattern in: Tim kicked the ball to Ken. as: S - ITV - Prep. It's been a while, but I was taught the direct object received the action of the verb and the indirect object received the direct object, and also that a verb's classification of transitive or intransitive arises from how it is used in the sentence (i.e. it's not intrinsic to the word itself). So that I would have identified the second example's pattern as: S - TV - DO - IO because the preposition is receiving the DO and therefore is the indirect object. Since 'kicked' can be used with or without an object (i.e. The baby kicked.) I let it pass thinking the text and I could both be correct. But a third example from the text has me questioning how transitivity is assigned: Problems led to desperation. the text again gives the pattern as: S - ITV - Prep. But, 'Led' is almost never intransitive - not unless it's the answer to a question or given some additional context. And so this classification seems more forced to me. The text seems to be implying that the role the verb plays in the sentence depends on how you classify the thing it is acting on and that prepositions can not be indirect objects (despite receiving the direct object as well as answering the question 'to what'). Could someone please clarify, illuminate, or otherwise help me make sense of this?
0
About a year ago, I came across a really cool property of the envelope curve of a parabola that I couldn't prove. I'm posting it now for help: If we have a straight line and a circle that belong to one plane, then the enveloping curve of the parabola whose focus is a moving point on the circumference of the circle and whose guide is that straight line forms two parabolas that can be drawn with the four information: the perpendicular from the center of the circle on the line is the axis of symmetry of the two parabolas, the distance between the guides of the two parabolas is equal to the length of The diameter of the circle, the center of the circle is a common focus of the two parabolas, the straight line is midway between the two guides. If you start from a circle that does not share the line at any point, the segment will leave traces of two parabolas inside each other, as shown in the picture But if you start with a circle cutting the straight line, leave the traces of two intersecting parabolas at two points on this straight line. But if a circle starts touching the straight line, then the traces of one parabola touching the straight line will be left I was hoping to prove it myself, but unfortunately my level of proof does not allow me. Please do a complete proof that discusses the three cases, please, and thank you. Also, is this feature previously discovered or is it new, please attach a reference if it was previously discovered
0
My question probably sits more on the applied mathematics side. There is a spring mass system, for the case where the mass moves into the spring (vertically downwards) the spring experiences compression and the second order equation for a spring mass system will provide a solution. However, when the mass moves vertically upwards the lets say that the spring cannot experience tension and thus the mass moves freely without the effect of the spring. It is pretty easy to develop the differential equations for each of the two scenarios individually. However would the following procedure presented a decent manner of modelling the system as a whole: Lets say we apply a displacement as an initial condition which causes compression in the spring. Thus initially the differential equation of the spring governs the system. At each iteration the results are reviewed to check if the spring experiences tension. if the spring has tension then the initial conditions for the last iteration of the spring mass system is used as initial conditions for the differential equation which presents free motion. The above mentioned points will work in both directions, thus if the free motion differential equation is in use and the system begins to move vertically downwards the initial conditions from the last iteration will be used as initial conditions for the spring mass system. I am not looking for a coded solution for this since it can be done easily on Python and Matlab using the ode solvers. Would the presented method provide relatively accurate results for the system?
0
As a rule of thumb, vapor condensation usually happens at the interface between the system and the heat reservoir. Now, according to my analysis below, it is the only way for vapor to condense, which implies the near impossibility of condensation in the bulk of vapor. : As the temperature drops, within the vapor system, whose thermodynamic behavior can be described by a canonical ensemble, energy is favored over entropy. That is, the energy should escape from the system to the heat reservoir to create a large entropy overall. This means that the system has the tendency to evolve into the low energy configuration when the temperature drops. Thus, for such a phase transition to take place, the passage of energy from the system to the reservoir is crucial. The case for vapor is unique in that its only access to the heat reservoir is on the surface, while for other systems, such as the Ising model, it has access to heat reservoir (the phonons, for instance) within the bulk so that we can see "bubbles" of broken symmetry, domain walls, forming inside the bulk. At the interface between vapor system and the heat reservoir, the mechanism for such a transfer of energy is the conversion of kinetic energy of vapor into the phonons of the reservoir. In the bulk of vapor, however, there is no direct way for the energy to be transferred. Classically, the energy released by forming a droplet can only be carried to reservoir by vapor. This process is extremely inefficient, and its explicit mechanism is obscure. Quantum mechanically, it should be possible that the release of energy is in the form of radiation, where the "radiation background" plays the role of a heat reservoir. Clearly, these two passages of transferring energy to the reservoir should be negligible compared to the mechanism taking place at the interface. So, in any realistic cases, condensation of droplets never happens in the bulk without impurity. It remains to be checked whether such an analysis is physically well-founded and consistent with the experiment.
0
I am using ZFC as a tool to demonstrate my problematic logic. In zfc we construct a proof system for zfc in zfc (a simulation of a proof is what I mean); we will call it inner proof system. We establish that if there is a proof in this inner proof system that concludes A then A holds. Turing machines can be formalized in zfc with the notions of halting and not halting. The whole question relies upon the argument that the zfc can prove that for every halting turing machine on a certain input there exists a proof of this in the inner system (which I am not entirely sure it is provable in zfc without further assumptions). A turing machine that takes an input (and treats it like an incoding of a turing machine and also as an input) and goes through all possible proofs (in the inner proof system) that conclude that the input halts on itself or does not halt. If such a valid proof was found our machine does the opposite of the proof's conclusion (meaning that if the proof demonstrated halting the machine would enter an infinite loop and if the proof demonstrated not haulting the machine would halt). Under the assumption that zfc is consistent it holds that this turing machine does not halt on itself. This is a construction used to demonstrate the first and second incompleteness theorems in zfc for computer scientists. Assuming the consistency of zfc, the statement that the aforementioned turing machine does not halt on itself cannot be proven by zfc and thus cannot be proven by the inner proof system. Assuming the consistency of zfc, according to the completeness theorem there exists a model of zfc where the statement: "the aforementioned turing machine does not halt on itself" is true and another with this statement false. My problem is with the model in which it holds that this statement is false. This means that the relevant turing machine halts. It follows that there is a proof of that in the inner proof system. It follows that the inner proof system is inconsistent. I know there is something wrong with this logic but I cannot pinpoint it (obiously because I did not formalize this argument sufficiently). Where in this sketch of a proof does the argument fail (for example because a statement is not directly provable from zfc without certain assumptions)? Note: my backgroung is in computer science
0
"For classical (non-quantum) systems, the action is an extremum that can never be a maximum; that leaves us with a minimum or a saddle point, and both are possible." The above statement is an excerpt from the "Introduction" (preface) of the book "THE PRINCIPLE OF LEAST ACTION - History and Physics" by ALBERTO ROJO & ANTHONY BLOCH. I want to know whether the "For classical (non-quantum) systems, the action is an extremum that can never be a maximum" aspect of that statement is true because it looks pretty definitive. (definitive in the sense certain or assertive) Note: Now I know there are a lot of related questions that look like this, but not any of them looks for a direct and definitive answer for this direct and definitive question, most are descriptive questions for descriptive scenarios and most answer's given are describing particular scenarios, incomplete ones or ones that asserting irrelevance of such question's for actual path determination as we only seek stationary action not whether that is minimum, maximum or inflexion point. (This is intended as to why this should not be labelled as a duplicate, not as a judgement on other questions or their answers as they serve their intended purpose. It is however important to differentiate between the scope of this question and other similar questions. I hope the Phys.SE community will respect the original poster's judgement on the relevance and uniqueness of their own questions unless there is overwhelming evidence to say otherwise.). I have already browsed similar questions as indicated by the system and have not found any definitive question or definitive answer. This definitive question clearly expects a definitive answer, so I hope it will remain a question, not a duplicate.
0
I am an A-level physics student, and I've been taught that temperature is the average kinetic energy of a particle. So when gas particles are heated, they move faster. This makes sense as an airplane traveling faster does make the nearby air warmer when measured from the plane. Say I release a box of room temperature atmospheric pressured gas in the vacuum of space. Assuming there is no gravity, all the gas particles will be traveling in a straight line as they won't be bumping into other particles. And since space is only a few degrees above absolute zero, the gas particles will cool down after they transferred a lot of their "thermal" energy via radiation, and thus should slow down (lower temperature = lower speed). Looking at a single gas particle this breaks the conservation of momentum, as it is quite literally slowing down to nothing. So what's going on. Now imagine there is indeed gravity in space, the gas particles will eventually start traveling and be accelerated toward a source of gravity. So its kinetic energy increases and so does its temperature (higher speed = higher temperature). First of all it is heated up by nothing without any sort of heat transfer taking place, and would there be any distinction between temperature and speed? Why does this only seem to apply to gas and not solids - a fast moving car wouldn't look hotter would it (ignore friction with air). So can anyone point out where the chain of logic breaks down because it is not making any sense.
0
Many electromagnetic interactions are modeled as exchanges of a real photons: e.g. an excited electron can relax and emit a photon. Somewhere else, a photon and an electron can interact, "consuming" the photon and leaving the electron in a more excited state. Electromagnetic radiation is modeled as a flow of real photons. The flux of energy radiated through a surface over a time is the sum of all photons' energy that pass through that surface. Other electromagnetic interactions are modeled as exchanges of virtual photons: e.g. the electrostatic force between two electrons is mediated by virtual photons exchanged between them. I understand that these virtual particles are not "real" in the sense that they can't be measured directly, and are just a representation of whatever actual physics our models approximate. They "exist" transiently, in infinitude. In a vacuum field, virtual photons pop in an out of existence, with their total energy and momentum summing to zero. Can electro/magnetostatic fields be modeled as flows of virtual photons to/from on their sources? In the absence of a field, the "positive" and "negative" virtual photons cancel out in energy and momentum. In the presence of a charge, are positive virtual photons flowing one way, and negative virtual photons in the other, relative to the charge? What would the net energy and momentum flux through some surface around e.g. an electron be? Part of why I ask: radiation pressure is quantized in the sense that each individual photon imparts momentum on an object, instantaneously, yet it's difficult for me to imagine the situation for static charges where they accelerate continuously.
0
When calculating area of a hole in an irregular surface for water flow calculations, what defines that area? I need to calculate the amount of water passing through holes in surfaces in a fixed time, knowing the pressure on either side. This is mostly a straight-forward process (discounting the discharge coefficient, but that is a different matter), but the formulas depend on the area of the hole. What I know about the hole is its boundary. If the surface is planar, this is obvious enough and can be calculated via Green's theorem. But when the hole is in a non-planar surface, it becomes much more problematic. What does the area of the hole even mean? The area of the surface? The hole is exactly where the surface does not exist. Outside the hole there is some surface, which I could presumably find information about (though at a serious increase in complexity), but where the hole is, there is nothing. I cannot even be sure how the surface where the hole was cut was shaped. That information was lost with the hole. "Yeah, there was a kilometer long capped tube there! Good thing that was where the cut-out needed to be." Besides, the water doesn't know how this mythical surface was shaped either, so its behavior will not be influenced by that. Whatever definition of area is appropriate here, it cannot be dependent on the exact shape of some surface. The areas of all possible surfaces filling the hole is clearly bounded below, so there is some minimal area. Presumably this would be the best choice. That is a mathematical rather than physical question, but I want to be sure I am properly handling the physics instead just jumping on an idea.
0
What is electric current? An electric current is a flow of charged particles, such as electrons or ions, moving through an electrical conductor or space. It is defined as the net rate of flow of electric charge through a surface. How do we generate electric currents? We know charge is the property of a particle due to which it can interact with electric fields and experience electric forces. We use this property of charge to have electric currents; we connect a conductor with a potential difference (battery) so there an electric field sets up in the conductor causing the free-charged particles (electrons) to drift opposite to the direction of the electric field. Hence we get an electric current. But electrons have another property too i.e mass, which is the property of a particle by which it can experience Gravitational forces. I need help in designing a setup to produce electric current using the property of gravitational mass of electrons. So if we have a long conductor wire and a Gravitational field is switched on such that it's direction is along the length of the wire then electrons (which are free) will experience a Gravitational force and we should have a flow of charges, the electric current. But there can be some problems in this: The gravitational forces are way more weaker than the electric forces. As free electrons move away from one end of the wire, they leave this end positive, so a mean electric attractive force acts on the free electrons tending to prevent their further movement. Possible solutions and crux Electrons in a conductor are loosely bound to the nucleus and at any temperature they will have enough thermal energy so that they can be free and do continuous random motion like gas molecules in a container which is also called brownian motion. And our sir has told us that brownian motion ceases at absolute zero temperature. If we connect the wire from top to the earth then the charge vacancy can be fulfilled by charge flow from the earth. So above thought experiment suggests generating current using gravity instead of electric potential difference. Some study materials I don't thought that I will need to do it but now I felt the need to do this after an answer asking for "how one would switch on a gravitational field" and there can be more like that comments in future maybe. So here is a Wikipedia page about thought experiment
0
The way I work on homework questions, especially for analysis and topology, might be a little different (or maybe not). I would remember the questions and think of them when I run, take a shower, and during other periods when I don't need to use my brain; later I will write out the solution without much thinking. This typically works well even on harder questions that my professor expects more than two hours to solve. However, sometimes I forgot about what I thought before, and my notes for insights, which are random diagrams or phrases on a scratch paper hidden in a stack of papers, are normally either not found or incomprehensible, so I have to rethink the questions. When I actually write out the proof, it is very readable, but as an undergraduate, I have many other time commitments, so I don't have enough time to completely elaborate or type out what I'm thinking. In that case, how can I record my thoughts quickly so I will be able to reproduce them? Also, sometimes I would try different approaches, and typically most of them do not work, so how can I record these attempts and not go into loops? What is frustrating is sometimes I go back again and again on a "branch" that is seemingly close to the answer but is very unrelated and other times I spent a too short amount of time on something that is close to the solution. Is there a way to prevent this? For mathematical research, I would imagine the complexity dramatically increases. For my research project (as an undergraduate working in mathematical optimization), my professor typically introduces to me some very nice lemmas, which help me to prove the optimal goal. However, if I'm working on complicated questions by myself, how can I keep track of all the methods I tried and not worry about forgetting about them the next day? Of course, typing everything out is a solution, but how exactly can we type something that is just vague intuition in our brain instead of proofs for lemmas and theorems?
0
In machine learning sometimes we build models using hundreds of variables/features that we don't know (at least at first) if they might have a relation with the target. Usually we find that some of them do and others don't. Some of them even have a true relation that we couldn't think of at the begining. Once we build a first model, sometimes we have an idea to include a new variable that we know it has a natural relation with the target and that we didn't think of at first. Sometimes this new variable is sparse, though, what means that it's constant or null for the major part of the data. The problem then is that to use the information that the new variable has, we need to find in some node a cut point of that variable that reduces the loss_function more than all the other cut points of the other variables. However, a sparse variable usually doesn't reduce the loss_function a lot because the major part of the data ends on the same side and only a very few part of the data goes to the other side. Also, when we have that amount of variables, statistically we find cut points in other variables which are not related to the target that reduce more the loss_function for those data points in that node. Not because there is a true relation, but just because of statistics. This ends in overfit and also not being able to use the prediction capacity of our new variable. In this circumstances, what can we do to extract the value of our new variable?
0
In Einstein's original thought experiment involving "a (very long) train running along a [straight] railway embankment", of essential importance appears the prescription that "[E]very event which takes place along the [railway track] line also takes place at a particular point of the train." The constituents of the train are surely distinct from the constituents of the railway embankment and track (especially since they are supposed to have moved wrt. each other; each constituent of the train and each constituent of the track segment under consideration have separately met each other in passing). Is it consequently correct to say that each separate event involving train and embankment/track has one particular constituent of the train and one particular constituent of the track as participants in this event, such that this particular pair uniquely identifies the event ?, and that each such event has two distinct parts, namely one distinctive part attributable to the participating train constituent (which is characterized by the train constituent indicating being met, in passing, by the track constituent; perhaps with additional characterizations), and another distinctive part attributable to the participating embankment/track constituent (being vice versa foremost characterized by the track constituent indicating being met, in passing, by the train cconstituent) ? Note that the event-parts in question are not presumed to be separate from each other, or resolvable, in a geometric (spatial) sense. My question is not whether and how certain sets of (finely-resolved) distinct events may be considered and addressed as one (coarsely-resolved) event; nor whether any one actual particle (or even several) may be considered fully contained in any spacetime region of finite spatial extent. My question is rather conceptually: Whether (at all) and (if so) How to reconcile speaking of an event as "having distinguishable parts", as described above, while also speaking of an event as "a point in spacetime", or "a point of spacetime" with the understanding that: "A point has no part(s)." ?
0
I'm currently taking this college calculus course, and this exercise has stumped me. It is in German, but hopefully what it's asking is fairly clear. To summarize, the problem first asks me to prove that the preimage of an intersection of a family of sets is equal to the intersection of a preimage of a family of sets, where f: A -> B and Ui is a family of subsets of B. To do this, I just gave the general example of x being an arbitrary element of A, which means that x is an element of the preimage of the intersection, which means for all values of i f(x) is an element of Ui, which means x is an element of the intersection of the preimage of Ui. Image of my answer cause I can't figure out MathJax: Then the problem asks to show that for a family Vi, a subset of A, an image of the intersection of a family of sets is not in general equal to the intersection of an image of a family of sets. Yet, for every example formula and sets I can conjure up, I always seem to prove that the two sides are in fact equal. No amount of coaxing and complaining on Bing or ChatGPT causes them to bring an example up either - they just keep giving examples where the two sides are equal. So, is this a typo from my professor and this expression actually is generally correct, or am I missing something? What family of sets and functions would make these not equal? Thank you!
0
How is quantum entanglement different from a controlled experiment where a pineapple is smashed at a high speed to a perfectly symmetric object and measuring one of its piece in air for spin, speed, direction etc. and finding correlations with other pieces of pinapple in mid air? In quantum mechanics information cannot travel from piece A to piece B at faster than light to explain the correlation so the result is seen as unexpected (at least to people outside the science community like me) and at any time a piece can have any property but once the properties of a piece is measured the wave function will collapse and the properties of other piece or pieces becomes clear. But isn't the correlation simply due to how the piece or particle was generated? No information needs to travel because the particle is simply continuing to react to the same common generation event? If we can call it information isn't it already with all the particles or pieces of pineapples in air already? I (am a beginner, used to be interested in physics but nobody gave me answers so lost interest eventually and took a career in finance. this one was one of my questions) also view quantum mechanics as deterministic rather than probablistic but think the probablistic approach is necessary because we cannot measure all the variables with our very limited measuring and processing capabilities. For example, a roll of die by a machine is deterministic if we measure and process everything from the light fluctuation in power, air movement in real time, mass of the die, initial position, curvature, area of contact, etc. but because that is too complicated we use it as random with a probability. I feel in quantum physics we are doing the same thing as it is simply more practical, but everybody I talk to truly believe I am wrong. Mentioning this as it is related to the question, that is, the properties can be calculated without measuring in quantum mechanics if we have virtually unfathomable knowledge, measurement capability and processing power. We don't have that so in my view too probablistic quantum mechanicsm is the way science can progress, so no question about that, but is it established that quantum mechanics cannot be deterministric in reality? Just want to know where and why I'm wrong. Thank you in advance.
0
It occurred to me that the limits of possibility to the nature of the universe is it is either deterministic ie we are all at the will of natural laws that determine the outcome of events from the moment of inception and we are philosophically dust in the wind. Or is the world random and our future is uncertain and indeterminate we have freewill and there is nothing governing our future but our free choice or is it both of these scenarios happening concurrently. I am no expert, but from what I gather from quantum mechanics is that the cause of events at the quantum level is indefinite and the outcome of these uncertain and probalistic. There is the principle of uncertainty in which determining the property of a particle results in an uncertainty in determining the nature of another property of this particle is it this principle that makes it difficult to get precise measurement that is a factor in us being able to predict the outcome of events in a linear deterministic equation that gives us the unpredictable nature of quantum mechanics ie there is an equation governing the outcome of events at the quantum mechanical scale and classical scale but we are unable to feed that equation with precise data to enable us to accurately predict their outcome ,kind of like how dynamically chaotic systems are mathematically determined by there initial condition but a small perturbation or difference in initial conditions results in a different outcome. I know its that old chestnut I am not a mathematician or physicist but there can only be three possible scenarios any response will be welcome even if its disparaging.
0
The proofs presented in lectures, textbooks ect. are usually cleaned up versions that show just the necessary steps for logically proving the theorem, not the thought process that went into the proof. To give a concrete example, I'm working through an (overall quite good) MIT Open Course Ware class on real analysis. I just paused a lecture where the professor said: Now, when you write a proof, as you'll see, it's going to be magic that somehow this h does something magical. That's not exactly how you come up with proofs. How it comes up is you take an inequality that you want to mess with, you fiddle around with it, and you see that if h is given by something, then it breaks the inequality or it satisfies the inequality, which whichever one you're trying to do. And proceeded to just write the finalized proof. Because that's not the part I care about. The main thing I want to learn is the part were you do the fiddling around to come up with the proof to begin with. Verifying proofs other people made is (relatively) straightforward, and the property being proven (Q doesn't have the least upper bound property) is important but I'd be willing to take it as an assertion if I was just trying to learn about Q rather than how to do analytical proofs. I would really like to see examples of someone who is good at proofs showing their work in creating a new one including the dead ends, fiddling around ect. I have tried to teach myself this step by just going out and proving things, and have made a little bit of progress but thing I would greatly benefit from more examples of deriving proofs.
0