snippet
stringlengths
143
5.54k
label
int64
0
1
G is a given sphere in the space. For any line e that has no common point with G, define the line f as the conjugate of e with respect to G if f joins the points of tangency on the two planes tangent to G passing through e. Show that two lines of the space passing G are skew if and only if their conjugates with respect to G are skew.
1
Let's says there is a fundamental particle: That is so massive that it is a black hole by itself (Compton wavelength < Schwarzschild radius) That carries a conserved quantum number (e.g. charge of an exotic interaction) which no lighter particle carries Would it be able to emit Hawking radiation? If not, does it contradict with the classical arguments (entropy analogy, pair creation at the horizon etc.) regarding the origin of Hawking radiation?
1
I am a programmer and I am doing a camera simulation, I am stuck in a matter of how to know where arrives every ray of light after traveling through the lens and being refracted. Every point of the object gives an infinite number of rays, but in my simulation I will take five random rays to trace from every point from the object. The rays from one point of the object should also come to one point on the film. How can I know this specific point on the film for each point from the object? Hope to find some help here about this problem.
1
I'm going to start a degree course in physics next year. So far in high school I covered some physics without calculus. I know that I will start everything from the beginning at university, but I would like to prepare in some ways. Then the question is: what should I do? Should I revise what I studied? Should I go head? Can you suggest (from your experience) what should I do before starting a physics degree course? PS In case you recommend to study some material, can you suggest a book?
1
There are so many books teaching how to take derivative and integration of a function. I think I'm good enough (enough for me lol) in those parts, my problem is that I can't start solving a question and even don't know where to start (finding or making the correct function or equation, kind of calculus applications in area, volume, etc.) - what should I do? Plus I wanna know if there is a good syntax in calculus; like if I should write constants and variables in a special way (capital, small, etc.), so that my writing is more readable and clean?
1
This question is quite specific but I am not sure which verb tense I should use when committing/checking in code. If I fix a bug and checking in the code, should I write: Fixes bug on feature A Fixed bug on feature A I always use past tense because the bug was fixed before I check in but it looks a bit strange when I look at file history. Should I use one over the other and why?
1
I'm writing a short bio of my character who has made a contract with a demon and is now required to do her bidding. He's not necessarily a slave, he still has the freedom to do whatever he likes, I guess it's kind of like a DnD warlock-patron type of relationship where she occasionally has tasks for him to do and he's bound by contract to complete them. The sentence I'm writing is like "[Character] made a contract with the demon [Demon Name] and is now [word or phrase I can't think of] to her bidding." I swear there's just a single word for it (I keep thinking of words like "vulnerable" and "liable" but they're not really right), not a phrase (though if you can think of one that fits then I might end up using it if I can't figure out the word), and it feels like it's on the tip of my tongue but I can't remember it at all and it's killing me.
0
Is there a standard, formal or defacto, for what exactly a TeX distribution is and what programs/features need to be supported to be a TeX distribution? I feel like this question has had to have been asked before, and while my searching turned up a number of similar questions none got at the heart of what I was after. For context, I'm trying to understand what TeX and LaTeX are from a software point of view. I've been using TeX/LaTeX for years via the pdflatex command or indirectly via pandoc but I've never really had a good feeling for what, exactly, I'm installing when I install a TeX distribution. For example, right now on my Mac I have a pdflatex command that's located at /Library/TeX/texbin/pdflatex and the /Library/TeX/texbin/ folder has a few hundred binaries in it. So is a TeX distribution just folks deciding which of these programs to make available? Or is there more to it than that? Put another way: If I wanted to create a new nominal TeX distribution (which, trust me, I don't) what exactly would I be doing/have to do? I realize this is a pretty big question with answers that could have varying levels of detail. If the answer is RTFM that's legit -- but I'd appreciate a pointer to which part of which manual I want to be reading.
0
I'm feeling kind of stumped on this, even though I think I know the answer. If I had written this sentence, I would just rephrase it to avoid the issue, but I came across it and found myself wondering. The sentence: "All that stuff like cars and planes is releasing carbon dioxide." At first glance, it seemed wrong and I immediately thought it should be using "are" instead of "is". But then I stopped and thought about it and realized that it's likely correct, since the sentence would be "All that stuff is releasing carbon dioxide", which is correct, if I took out the examples. It just sounds really awkward and wrong the way it is. Is there any grammatical rule that would justify using "are" in this context? Would it make a difference if the list was longer, like if the sentence was "All that stuff like cars, planes, and factories is/are releasing carbon dioxide"? Or is just a correct grammatical sentence that sounds bad?
0
My question: What is Happening During Particle Collisions if Particle Location is Defined by a Wave Function? From my understanding, we can define particle location by a probabilistic wave function from Schrodinger's equation. So the particle doesn't exist in one exact spot naturally. So what actually happens then when two particles "collide" (which does seem to imply a single location of contact, at least in classical mechanics)? My guess would be it counts as being "measured" and the wave function collapses to a single location for each particle, but I'm unsure what interaction between the particles would cause it to count as a "measurement" (measurement might be the wrong word but I don't know a more generic term for this). Maybe I'm not fully grasping wave-particle duality here in this specific case. Any help would be appreciated.
0
I would like to buy a book to study multivariable calculus. Currently, the texts I have in mind are: Vector Calculus, Linear Algebra, and Differential Forms A Unified Approach by Hubbard & Hubbard Multivariable Calculus with Applications by Lax & Terrell Functions of Several Real Variables by Moskowitz & Paliogiannis I want a book that has a clear expositions of the subjects of multivariable calculus. Also, I would like a book that avoids leaving proofs as excercises to the reader, or at least that does not do it most of the time. If possible, a book that also contains multiple examples/exercises with (fully) detailed explanations/solutions to at least some of the examples/exercises. I do not mind a rigorous approach to the subject as long as the content is explained with detail. Which of the books mentioned above fits best the description? Also, if you have other books in mind, feel free to recommend them as well. Thanks in advance! Note: I have taken two proof-based calculus classes and one proof-based linear algebra class.
0
I always assumed that the word censorious meant someone or something that is given to censorship. Like if you say that a community, an organization, or a person is overly-censorious, that means they frequently or unnecessarily censor content. But I just looked up the word in my favorite dictionary and found that censorious has a different meaning than what I thought: censorious Addicted to censure and scolding; apt to blame or condemn; severe in making remarks on others, or on their writings or manners. Implying or expressing censure. So, a censorious person is not a person who is quick to censor. Too bad, because I thought that was a useful word! Is there an actual word that means "being given to excessive censorship"? Example: I might criticize a film distributor as being _____ because they often censor controversial content whenever they release a new edition of a classic film.
0
Everything I can find says that time dilation approaches infinity at the event horizon of a black hole. Black holes evaporate over a finite amount of time. Wouldn't this imply that somebody falling into a black hole would eventually just see the event horizon shrink away from them as fast as they fall into it. This seems to imply that its impossible for anything to fall past the event horizon, everything just gets very close to the event horizon and then the black hole evaporates from underneath them. Ive read through as many posts as I could find addressing this. The most direct one being this one: https://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/fall_in.html In all the posts that claim to have a definite answer I don't understand it. And all the posts I feel like I understand don't claim a definite answer.
0
In the Wikipedia page for the Ising Model it is written without citations: One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials[citation needed], as when ice melts to water or water turns to steam. My question is: Is this statement historically accurate? If so, did Democritus have an argument or explained his intuition on how exactly atoms explain sharp phase transitions? A related question, would be how this relates to a a modern statistical mechanics viewpoint and If one really needs an atomistic description: From a continuous model of matter like a classical field theory, could one arrive at a first order phase transition from first principles? I know you could just 'artificially' impose a phase transition, but I'm thinking of a procedure like taking the large-scale limit of a classical field theory and arriving at a new theory with phase transitions.
0
I am working on TeXmaker and all of the sudden this message popped up (this problem occured after closing and re-opening the program and the code is correct, there i no error inside of it). I tried to run the same code on TeXstudio but it gave me the same problem. I tried to search everywhere in the internet to see if there was a similar cases, but all of theme were useless or not exactly like mine. I get the same message also for older files and new ones. This makes me guess that this is a problem of MiKTeX since it doesn't compile neither on TexMaker and TeXstudio. I tried to use the command prompt as suggested on reddit (I wrote "pdflatex filename"), but I really don't know how to exploit this feature. Do you have any idea of what could cause this problem?
0
I am learning about EM waves and am getting confused. Here is a picture from an intro. textbook I am using. Looking at this, it is clear that both E and B are perpendicular to the direction of motion, and all is fine. However, let's say the wave was propagating at an angle theta to the z axis. In this case, you can still draw the E and B field perpendicular to the direction of motion (with E in the same plane as the direction of motion and B perpendicular to the plane of motion as in the picture), but now there would be a component of the E field that is in the direction of z, even though the E field is still perpendicular. In this case, why wouldn't the EM wave propagate with this longitude component? What am I missing here? Here's a crappy picture of what I mean: In the picture you see that the wave is propagating in a direction with a non-zero x and z component and clearly the E field also has nonzero components in the x and z direction, so wouldn't these components be "longitudinal waves" as they are propagating in the direction of motion?
0
Most explanations of the integer quantum Hall effect start out in the grand canonical ensemble, where the plateaus arise when the chemical potential (or equivalently the Fermi energy) is in the gaps between the Landau levels. However, they then usually continue with a statement that working at fixed chemical potential is actually imprecise as the system is in a canonical ensemble (i.e. fixed particle number,) and so we need to include disorder to get the plateaus. While I understand why disorder is needed if we work at fixed particle number, I don't understand why the experimental systems are assumed to be in the canonical ensemble to begin with. Don't electrons flow in/out of the material when you measure resistivity? If this is the case, why is working in the canonical ensemble justified?
0
In set theory, the notions of set and membership are considered primitive. We only specify some of the properties that we think our primitive notions have using the axioms. Usually, the very first axiom of set theory is the axiom of extensionality which specifies that two sets are equal if and only if they have the same members. My discomfort with specifying this axiom as the first is that we haven't said anything about how the notion of membership relates to the notion of set in any previous axiom. That is we haven't specified that it makes sense to say something is a member of a set. Yet we use this notion in the axiom of extensionality. Why we don't have an axiom that exclusively talks about the membership of things in a set?
0
I have been searching for an automatic method (a computer program) to evaluate any first-order logic (FOL) formula given some knowledge base. The most common approach to do this is to use PROLOG. The issue with PROLOG is that it employs a subset of FOL which, for example, restricts the use of quantifiers. When searching about this, I have learned that PROLOG uses the resolution algorithm, which requires formulae to be in Conjunctive Normal Form (CNF). I have also learned that there is a process called Skolemnization that can be used to remove the quantifiers from a formula and help convert it into CNF. Therefore, my question is the following: Can the resolution algorithm be applied to any FOL formula? By this, I mean if there exists some algorithm that takes as input any FOL formula and normalizes it (using skolemnization and other methods) into CNF so that resolution can be applied to it?
0
To "empathize" with someone means to "understand and share the feelings of another" (Oxford dictionary). In science fiction, an "empath" is "a person with the paranormal ability to perceive the mental or emotional state of another individual." (Oxford dictionary) Is there a word to describe the "opposite" (behaviour in nature, not antonym) of an "empath" - someone who can communicate in such a manner that it is very easy to empathize with them because you can easily understand or recognize the feelings behind what they are saying? More details: As pointed out above, an "empath" has the ability to perceive the mental or emotional state of another individual. I am seeking a word (if it exists) that describes the ability to communicate in such a manner as to "create a mental or emotional state" - an emotional connect - to make others more easily relate and empathise with them. (Something that most story tellers do effectively). (Some commenter thinks my question is about Sci Fi. No, I am just looking for a word to describe something in plain English. But it doesn't matter if it is an uncommon and rare word from old English or even a modern slang).
0
The second law of thermodynamics states that the entropy of the universe increases over time and this has lead to theories like the heat death of the universe and the big rip. What this means in effect is that all matter and energy has an expiration date, beyond which it gets divided accross infinite space and this is irreversable since entropy does not move backwards. The first law of thermodynamics states that energy cannot be created or destroyed. The only explanation this leaves for the universe's existence is that energy has existed since always and never had to be created. And herein lies the contradiction: If energy has been around for an infinite amount of time, why hasn't the heat death or the big rip already taken place? How come we are able to observe the universe in its current fairly organized transitional state unless A. energy was created a finite amount of time ago or B. Energy has existed since always but it can freeze or move backwards entropy-wise allowing us to observe it in its current form. Either way one of the two laws is violated. How do physicists reconcile this paradox?
0
The other day I wasn't feeling so well so I visited a doctor. He recommended that I take a break and get intravenous therapy so I did. I laid on a bed and the nurse inserted a needle into my left arm which was connected to the pack of liquid containing vitamins. I soon fell asleep. More than half an hour later I woke up. The pack was practically empty. However blood was slowly flowing out of my vein. I frantically called the nurse, who apologized for not catching it earlier. I thought the sole factor that delivers the fluid to my body was gravity, since the pack is hung in the air much higher than my arm. But given that the direction of flow reversed once the pack was emptied, my assumption is probably false. My new theory is that the vitamins are delivered because the pack's pressure is higher than my vein's when there is enough liquid in the pack. However this still doesn't explain why I started losing blood as when the two sides' pressure reaches parity all movement should stop. Since the pack's initial pressure was higher, There should be no change of direction until reaching equilibrium. Why did I lose blood and are there any precautions I can take to prevent this incident in the future?
0
Many elementary explanations of the Einstein-Lorentz transformation derive it using Einstein's special relativity postulates, combined with some thought experiments considering light sources and observers. However, I have read that these thought experiments often do not hold when subject to rigorous mathematical scrutiny, and that Einstein himself did not use them when making his arguments. Further, it is obvious that Lorentz did not use Einstein's postulates when deriving his transformation because at that time he did not trust in Einstein's postulates. Rather, from what I have read, the actually rigorous way that the transformations were derived is using more complicated math such as hyperbolic geometry. I cannot seem to find a good resource that walks through the derivation in a rigorous way; especially one that does not assume prior knowledge and explains the math being used. Can someone please either provide a rigorous derivation here, or point me to a good resource for me to find it? I am especially interested in learning the approaches of both Einstein and Lorentz.
0
I have two rectangles of different sizes side by side. I want to scale them both (each maintaining their original aspect ratio) so they each end up with the same height and together equal a specified, fixed width. I would like to find a formula that will work no matter what the sizes of the rectangles are (some may be bigger than the target width, some may be smaller - thus some will have to be scaled up and some down). I found what appears to be a very similar question here, however the only solution provided seemed to imply a universal scaling factor. That doesn't work for my situation because the rectangles need independent scaling. I am also only scaling two rectangles instead of three, and the resulting widths of my rectangles must together add up to a specific width (the other question just didn't want to exceed one). Any help would be incredibly appreciated.
0
I'm reading Carroll's GR book. I'm able to follow it for the most part, but a couple of paragraphs are a bit hard to decipher: According to the WEP, the gravitational mass of the hydrogen atom is therefore less than the sum of the masses of its constituents; the gravitational field couples to electromagnetism (which holds the atom together) in exactly the right way to make the gravitational mass come out right. What exactly does "couples to" mean? Right now that's just a vague phrase to me that implies gravitational field has something to do with EM - but what's the precise notion behind it? Sometimes a distinction is drawn between "gravitational laws of physics" and "nongravitational laws of physics," and the EEP is defined to apply only to the latter. Then the Strong Equivalence Principle (SEP) is defined to include all of the laws of physics, gravitational and otherwise. A theory that violated the SEP but not the EEP would be one in which the gravitational binding energy did not contribute equally to the inertial and gravitational mass of a body; thus, for example, test particles with appreciable self-gravity (to the extent that such a concept makes sense) could fall along different trajectories than lighter particles. I have no idea what the statement in bold means at all. Could anyone please explain this so that a layman like me could understand?
0
As we know, the recent Nobel prize was awarded for the creation of attosecond light pulses. I read this excellent answer, describing both how the pulses are created and what applications they have. I understand how the pulses are created by the addition of waves with harmonic frequencies in a classical sense. However, from a quantum mechanical point of the view, light comes as photons with quantised energy, with the energy of each photon related to its frequency. As the attosecond pulses consist of many frequencies, I wonder how they relate to photons. My guess would be that the attosecond wave describes the probability of detecting a photon at a particular location/time, but that the photon detected can have the energy corresponding to any of the constituent frequencies. Is that a correct guess, or is that too simplistic? When we detect individual photons of the attosecond pulse, what frequencies can they have?
0
First, let me acknowledge there are numerous posts on this question already. The most pertinent to my specific question is probably this one. To restate the problem: "A family has two children. Find the probability that both children are girls, given that at least one of the two is a girl who was born in winter." The solution offered in the text includes this step: "use the fact that {both girls, at least one winter girl} is the same event as {both girls, at least one winter child}". This subtle shift is necessary to reach the correct answer. But it seems to come out of nowhere, right? I wonder if the change was motivated by a desire to be able to invoke independence. Since {both girls} and {at least one is a winter girl} are clearly not independent, while {both girls} and {at least one winter child} are. Hard to get inside the author's head, but that's my best guess. What do you all think?
0
There are various threads on this site explaining the mathematical details of how, in QFT, position operators are non-relativistic. I can follow some of the math, while some of it goes over my head. But even with the parts of the math I can follow, I have a hard time relating it to anything conceptual -- that is, all the mathematical explanations I've seen have a feel similar to proofs by contradiction in that they show that it's true there are no relativistic position operators, but not why it's true. So, to be clear, what I'm looking for isn't a nonmathematical explanation necessarily, but just an explanation that feels like more than just a bunch of algebraic manipulations -- something that connects all those calculations to the actual objects we're trying to model and their properties. To clarify what specifically I'm confused about, I don't get why bringing in special relativity should mess up the notion of position at all. I understand that position is a relative property, so that if we change reference frames any position operator would be affected. But momentum and energy are also relative, and yet there doesn't seem to be any issue with defining relativistic versions of their operators. What makes position so different that we can't just define it in some given reference frame and apply the Lorentz transform as needed? Does it have something to do with the geometry of spacetime being hyperbolic?
0
I've read several threads over the past several days talking about how photons don't have wavefunctions in the same way as massive particles do because they don't have non-relativistic limits. If I understood correctly, that's because the usual position operator introduced in introductory QM courses really only applies to non-relativistic theories. The Newton-Wigner operators kept being mentioned as the closest QFT analog to the position operator from non-relativistic QM, so I've been trying to find information on them, but the relevant Wikipedia page is very sparse and vague and everything else I found was very long and technical. All I really want to know is: What's the actual definition of the Newton-Wigner operators? How does it differ from the definition of the position operator from non-relativistic QM? From what little the Wikipedia article did say, I know the Newton-Wigner operators aren't Lorentz covariant, which, if I understand correctly, means they're reference-framed dependent w.r.t. the Lorentz transform. But is that the only difference between them and analogous operator from non-relativistic QM? If so, then why does position in particular often get singled out as being different in relativistic and non-relativistic QM and QFT, when other properties, such as energy and momentum, are also reference frame dependent?
0
As a caveat, i am not a mathematician but rather a programmer with an amateur interest in patterns, fractals, sequences, data science. That said, i have been following recent developments in aperiodic tiling with interest. I've had an idea for an application for it, but I don't know enough in the field of mathematics to know if: It's obvious, been done, move on It's an interesting idea worth exploring, or It'll never work or be useful, just stop now. The thought is as follows: By definition, aperiodic monotiles don't repeat on the plane, therefore They are effectively a visual representation of an infinite, non repeating sequence. They can be computed relatively easily (e.g. see here), and the next tile can be calculated based on the position of the previous one. Therefore, you effectively have a pseudorandom number sequence, given a seed of the coordinates to start at. Is this worth exploring, or just a rabbit hole leading to a dead end?
0
Consider a sample of an ideal gas kept in a pouch of some volume. This pouch is then kept in a bigger container of volume V. As soon as we open the pouch then the gas will expand irreversibly in the container. Also consider that this expansion is adiabatic in nature and no energy flows in or out of the container. If we let the system be kept isolated for a sufficient amount of time in which it approximately achieves steady state, then can we say that in this steady state condition every molecule in the gas sample will approximately have the same speed? I thought that the entropy of the system would be maximum in this configuration as the energy is tending to be spread out equally among all molecules. Is this notion of entropy being a measure of the distribution of energy logically correct? Also if it is not correct then is there any mathematical way to find the distribution of the molecular speed of the gas sample?
0
I would like to know if there are any books on combinatorics and number theory that follow an axiomatic approach akin to that of Sierpinski's General Topology. I have found some books for both subjects that might follow this axiomatic approach but they aren't explicit about it. The books I am referring to are: Aigner's Combinatorial Theory, which one user on Amazon said follows an algebraic approach but I don't know if that is equivalent to an axiomatic approach. Landau's Elementary Number Theory, with the author having written an axiomatic book on Analysis called Foundations of Analysis, which I have a suspicion might be axiomatic. And Sierpinski's Elementary Theory of Numbers by the author I mentioned above with my suspicion being that it also follows an axiomatic approach. I have scoured the internet for such books but to no avail. I haven't found posts on stack exchange that address this question, specifically for number theory and combinatorics, either so I decided to write one myself. I really hope someone here might have found such books. More specifically Brualdi's book on combinatorics states that the multiplication principle is a consequence of the addition principle while Vinogradov's number theory book states that division with remainder is a generalization of the quotient of two integers. I would like the books to follow such an approach, stating what is essential and deriving everything else logically. I don't think I can be any more specific but I hope you get the gist of what I am saying.
0
This is probably a really common word and I'm having a moment, but it occurred to me the other day that I can't think of the verb that describes the action of sweeping a knife across a pat of butter, which has the effect of scraping butter from the pat onto the knife. It's kind of the opposite of "spreading"; i.e., the action to deposit butter from the knife to the bread/toast/whatever. "Scraping", I suppose, is close...but it doesn't feel right. Intuitively, I feel like scraping involves a fair amount of effort to remove the surface layer, which is usually hard, contrary to the properties of butter at room temperature. "Peeling" or "paring", perhaps, but again I'm not convinced: butter doesn't have a peel and paring is more of a cutting action. The artefact of the process is a "curl of butter", so maybe "curling", but I've never heard that used in this way.
0
Suppose two particles A and B collide. Consider the position vs time plot of the trajectories of A and B, shown in green and red in the following diagram. The two trajectories "meet" at coordinates (x', t') indicating collision, although according to the Pauli exclusion principle, two particles cannot be at the same position at the same time. Now, it is clear that the trajectories A and B are continuous and differentiable, indicating that both A and B have well-defined velocities throughout their trajectories. But now consider trajectory C formed by the part of the trajectory A before collision, and the part of the trajectory B after collision (at the point of collision, assume trajectory C has the value of trajectory A). Trajectory C is shown in blue: My question is this: is trajectory C continuous and differentiable at the point of collision (x', t')? That is, does it have a well-defined velocity at point (x', t')? My own thoughts on this problem so far is that trajectory C is not continuous at (x', t'), because the left-hand limit of trajectory C is the point on A's green trajectory near (x', t'), while the right-hand limit of trajectory C is the point on B's red trajectory near (x', t'). But I want to understand/formalize/prove this more rigorously.
0
I do understand that we can't experimentally verify anything we imagine about the interior of a black hole. If we were to apply what we know about the physics of the observable universe and assume that those laws remain valid on the other side of an event horizon, then are the following assumptions at least plausible? Anything that falls into a black hole might experience perpetual free-fall, because everything closer to the singularity will experience a much stronger pull than things farther away. The observer, similarly, will experience a constant acceleration away from the event horizon, so objects that cross it after the observer does will always be moving away from them, from their point of view. Objects within the event horizon of a black hole can't interact with our observable universe, but there doesn't (to me) seem to be any reason why they can't interact with each other. So they could clump together in volumes that are small enough where the tidal forces don't rip them apart. In this way, wouldn't the inside of a black hole look like a universe where everything is constantly accelerating away from each other, but in smaller scales can still bind together through all the normal interactions that we observe on "this side"? I hope this question doesn't break the rules of stackexchange, and I'm totally ready to have all of the above bullet points torn to shreds.
0
I've encountered a few possible definitions of a "connected ring" and am having some confusion relating them. The first one is defined for any commutative ring: A commutative connected ring has a spectrum which is connected in the Zariski topology. But there is also the concept of a topological ring, where the ring itself (not the spectrum) is endowed with a topology. In this case, it also seems natural to consider a topological ring to be "connected" if the the ring itself is a connected topological space (irrespective of the spectrum). My concrete questions are thus: For a commutative topological ring, is there a relationship between "connectedness" in the spectrum-sense vs in the ring topology-sense? More broadly, is there any relationship between the topologies on the two spaces? They seem unrelated to me. When I read results on ring theory, I'm often confused whether topological statements refer to the ring itself or the spectrum. Take for example the following: Every compact Hausdorff ring is totally disconnected. I'm assuming here that the ring in question is a topological ring, and "compact Hausdorff"-ness is refering to the topological properties of the ring itself. But here, is "total disconnected"-ness referring to the ring topology, or to the spectrum topology that results in the definition of a "connected ring" above? Am I getting tripped up on overloaded terminology here, or is there some deeper connection that I'm missing? Thanks for the help!
0
I understand that the state space representation is mathematically equivalent to the transfer function representation for linear systems, and that it allows us to solve the corresponding DE by finding the eigenvalues of a matrix. However, for nonlinear systems, the transfer function can only represent a linear approximation, while the state space form can represent the full system. But what's the advantage of using state space form for nonlinear systems, if we can't generally solve them by matrix methods? How does state space representation help us analyze or design nonlinear control systems any better than we could by sticking with the original DE representation? Some background: My impression was that the state space form of linear systems is essentially just syntactic sugar for the final result of transforming a nth order DE into a system of n first order DE's, and writing that system as a single matrix equation. It "hides" the derivatives under the extra parameterization variables. But for nonlinear systems, we can't just get a system of linear equations and write it as a single matrix equation that doesn't explicitly involve derivatives. So I don't see how the state space form simplifies anything for nonlinear systems.
0
In my school, I learned that when two blocks are placed on the ground with one block above the other, if a force is applied to the lower block, two opposing forces of friction act on it: one from the ground and the other from the upper block's surface. Consequently, according to Newton's third law, the upper block experiences a friction force in the forward direction. However, I have a question regarding this scenario. If the external force applied to the lower block is significantly less than the limiting friction of the ground, the lower block won't be set into motion due to the opposition from the static friction of the ground. In addition, I believe that the static friction of the upper block also plays a role in opposing the motion(as it does when the block do move). Consequently, the upper block should experience an equal and opposite reaction that sets it into motion as well. However, this doesn't seem to happen in reality. What misconception do I have in this situation?
0
In analyzing Compton scattering we consider the conservation of both energy and momentum. However, in analyzing the photoelectric effect only the conservation of energy is taken into account. In fact, if the momentum of the photon is taken into account there seems to be a violation of the law of conservation of momentum. The momentum of the photon is into the metal but the momentum of the ejected electron is out of the metal i.e. the momentum of the ejected electron has a component opposite to the momentum of the incident photon. A way out would be to argue that the process of ejection is complicated and that it is actually the atom which ejects the electron. There is, however, a problem here. Experiments with linearly polarized x-rays or ultraviolet light show that the electron is always ejected in a direction parallel to the electric vector i.e. perpendicular to the direction of the incident beam of photons. This suggests that it is indeed the photon which ejects the electron, not the atom. Is there an explanation in either quantum mechanics or classical electromagnetism or both?
0
I'm not sure if this is an appropriate question for here, so fair enough if it gets closed/down-voted. I'm self-studying mathematics, did basic real analysis (Riemann, not Lebesgue integraton) and complex analysis, and I'd like to move on to functional analysis shortly. I have two books: Muscat's Functional Analysis (Universitext), and Axler's Measure, Integration and Real Analysis (GTM). It seems to me that Muscat is rigorous but still lighter (not meant in a bad way) on the maths, but covers more topics than Axler. But Axler spends a lot of time on measure theory (of course) and does also cover the basics of Hilbert spaces and Banach spaces and linear operators on Hilbert spaces. At this moment I'm more inclined to start with Axler, since I suspect after covering Axler I'll have a stronger foundation and more easily pick up on the topics covered in Muscat but not in Axler. Would this be wise? Other recommended books welcome.
0
I was playing with numbers in my middle school math club and found a beautiful pattern. I presented my idea in front of my club mates. The teacher was impressed with my result and suggested I write a short manuscript and have it published somewhere. I thought it was a great idea so I agreed. It took over six months to get used to latex but I think it was worth it. I had some previous experience with the Lua language so I chose LuaLaTeX for my document. It was a short one page article but I was still satisfied with it. I searched for journals that had writings similar to my skill level. I sent them my LuaLaTeX file and the pdf file resulting from it via email. They reacted positively to the content but asked me to write in plain latex because they do not use LuaLaTeX. It took a few more weeks to convert to LaTeX. I have never maintained a journal so I didn't know that it wasn't as simple as just ordering the articles in the right order then gluing them together. Will I face similar barriers in high school, university and beyond if I keep using LuaLaTeX and not LaTeX? Are engines that extend beyond LaTeX such as LuaLaTeX and XeLaTeX to be avoided in environments where I do not produce all the content of the publication?
0
I encountered several times a certain type of sentences (in colloquial contexts) which were clearly grammatically incorrect but seems to be widely spread and, as a non-native English speaker, I would have liked to have more information about that. It is about the conjugation of the modal verb to be. I have heard many times the following type of sentences: "We was playing basketball" "They was eating dinner" "I were there" "He were with me" where the last two were used without any conditional of any sort. These sentences are obviously wrong in standard English (as far as I know) but still seem widely used in everyday life conversations. Therefore, my question was the following: How wrong are these grammatical structures ? How strange will I sound if I use them ? Also, is it some kind of dialect/slang and in what kind of context do we use them? Unfortunately I do not have any neat example to provide for I've heard it in an everyday life conversation with my office mates who are from Scotland. (I did not dare to ask them directly as I barely understand when they speak) If someone could provide me more information about them I would be grateful.
0
Imagine an air hockey table where there is a puck P and a rectangular slab S. Both are free to move as there is zero friction. The slab is at rest in the middle of the table. The puck is moving towards the slab and collides with it in an elastic collision. The puck is not spinning before the collision. As this is a closed system, the total momentum, angular momentum and energy of the system should be preserved. But after the collision the slab will be spinning about its centre of mass, but does that mean the puck will also be spinning in order to conserve the angular momentum of the system? I feel like the elastic (frictionless) collision won't be able to impart any spin on the puck. I understand that the puck and the slab will also change their linear momentum and energy but overall in the system these will be conserved. How would you calculate the velocity vectors and angular velocity for the slab and the puck post-collision? Assume the puck and slab are made of the same uniform-density material and the puck has radius r and the slab side lengths a and b.
0
I do not feel comfortable with such constructs that contain "one's work ...ing" as the following: "University of British Columbia marine biologist Amanda Vincent has won the prestigious Indianapolis Prize for her work protecting seahorses." "Learn more about her work protecting the rights of women and girls." No matter how I feel about it, there is no denying that it is done and it is fairly commonplace, so I am not going to say it is wrong. However, it does not sit well with me. How would you explain this phrase grammatically? Is "protecting" a present participle? If so, construct wise, "her work" will have to be the subject of "protecting," but meaning wise, it is clearly wrong; the assumed subject is she. The only explanation I can come up with is that this is an idiomatic expression. Maybe "protecting ..." is a gerund and it is an appositive to the immediately preceding "her work"? Your input will be much appreciated!
0
I know that we can consider an object as point object, if its size is negligible as compared to distance traveled by it in reasonable amount of time. But in my book Ncert there is questions which asks to determine which of the following are point objects: (a) a railway carriage moving without jerks between two stations. (b) a monkey sitting on top of a man cycling smoothly on a circular track. (c) a spinning cricket ball that turns sharply on hitting the ground. (d) a tumbling beaker that has slipped off the edge of a table It states that (a) and (b) can be considered as point objects but (c) and (d) cannot. Why cannot we consider them as point object if we do not the distance they have travelled and their size?
0
If we place a conductor between the plates of a capacitor, the conductor reaches an electrostatic equilibrium with the surrounding electric field. At this equilibrium state, the charges within the conductor have redistributed such that the electric field inside the conductor is nullified. Now, what happens if we separate this conductor into two halves, each containing the redistributed charges corresponding to one side of the conductor (meaning we have one half that is positively charged and one half that is negatively charged). We did not take the conductors out of the field. Would there be an electric field between the separated halves of the conductor if we look at the complete system (Capacitor and its field and conductors and their interaction)? My book says there is no field because it would cancel out the external field of the capacitor but I always thought that electric field lines always end on charges, which would not make a superposition possible.
0
I am working on an English-language online resource. It seems an obvious good idea to allow users to choose a version in British English or American English spelling. However, I've noticed that spell-checkers also provide options such as Canadian English, Australian English, South-African English, Jamaican English, Hong Kong English, and so on. I have always assumed that these variants are all basically British or American, with maybe a few minor details that are different. So my question is: Are any of the other variants of English spelling significantly different from British or American spelling? With significantly different, I mean that they don't just add a number of new words, like "wee" or "bairn" in Scotland, or French loan-words in Canada, but that basic English words are spelled differently, in the way that e.g. "colour/color", "analyse/analyze" and "theatre/theater" are spelled differently in the UK/USA. (To be clear: I'm only interested in how great the difference is to an outside observer, not how important it is to e.g. a Canadian person to be able to select "Canadian English" instead of having to select "British English". I'm also not interested in differences in grammar.)
0
Physicist Grigory Volovik has put forward some ideas about the universe undergoing a topological phase transition (especially in the early stages of the universe). He published a book called "The Universe in a Helium Droplet" where he explained his ideas. You can find a brief discussion about it here. In one discussion I had with Mr. Volovik, he mentioned that depending on the type of topological phase transition that could have occurred in the universe, all the fundamental symmetries of the universe (spacetime symmetries, translation symmetries, CPT invariance, internal invariances...) could be all emergent from a more fundamental state without symmetries (like in Holger Nielsen's random dynamics proposal where all symmetries in the universe would be emergent) I asked him if this was all speculation or if there was some truth behind and he replied that although we don't know if the universe actually took this "path", we know that this topological phase transition would be possible. But is this true? Would that be possible according to what we currently know about physics (although we don't know if this actually occurred at some point of the universe's history)? Or, on the contrary, we don't even know if these transitions are even possible to begin with?
0
There are examples in physics in which a simple law results from an immeasurably more complicated set of underlying interactions. Consider Hooke's law, for instance: there is a very simple equation that relates the extension of a spring to the force required to extend it further, yet the underlying physics when considered at the level of the trillions of individual electrons and ions that form the spring is of an entirely different order of complexity. Is it possible that we need to find some new theory to replace quantum mechanics which is as different to quantum mechanics as quantum mechanics is to Hooke's law, or are there any considerations that limit the additional complexity we might encounter in a more fundamental theory. For instance, our current model of reality assumes a set of fundamental particles, and there have been attempts to model the particles as vibration modes of strings. Might it be that strings are themselves composite entities composed of countless smaller parts- in the way that springs are composed of atoms- or do we have firm physical grounds to suppose that there is a fundamental limit to the divisibility of matter which prevents strings, for example, from being composit entities at a much more granular level.
0
Gravitational waves carry energy. The sticky bead argument shows that this energy can be extracted: https://en.wikipedia.org/wiki/Sticky_bead_argument But Lee Smolin points out that "In principle, nothing can screen out the force of gravity or stop the propagation of gravitational waves, so nothing can be perfectly isolated. I discovered this important point during my PhD studies. I wanted to model a box that contained gravitational waves bouncing back and forth inside, but my models kept failing, because the gravitational waves passed right through the walls". Gravitational shielding is considered to be a violation of the equivalence principle and therefore inconsistent with both Newtonian theory and general relativity, and there is no experimental evidence for it. If energy from gravitational waves can be extracted, does this mean that partial gravitational shielding exists?
0
I read this from Div, Grad, Curl and All That: The second reason for introducing the electrostatic field is more basic. It turns out that all classical electromagnetic theory can be codified in terms of four equations, called Maxwell's equations, which relate fields (electric and magnetic) to each other and to the charges and currents which produce them. Thus, electromagnetism is a field theory and the electric field ultimately plays a role and assumes an importance which far transcends its simple elementary definition as "force per unit charge". The first reason was basically about how finding the field first, then finding the net force on a charge due to the net field simplifies calculations, but I don't understand the second reason. What is this fundamental importance of an electric field that the author is referring to? (PS, I am unfamiliar with Maxwell's equations and classical field theory)
0
I am currently writing the conclusions of my bachelor's thesis on convergence spaces and there are a couple of points I would like to make, but lack the proper references to cite in order to do so. The first point I would like to make is that one of the starting points of General Topology was trying to axiomatize the notion of convergent sequences and hence the notion of a convergence space is much closer to the origins of topology. I recall reading somewhere that the first attempts of defining topological spaces were like that, but I can't find where. The second point is that the category of convergence spaces is much more adequate than Top, because it has exponential objects. I know there is a whole discussion of the importance of having cartesian closed categories of spaces, but I by myself cannot argument that, because my background on category theory is really small. Therefore, I need to cite someone who knows that for a fact and can give reasons why that is. Hopefully I am not breaking any rules by asking two different sources in the same post. I thank gratefully for any answer.
0
Crossposted on MathOverflow I am an undergraduate mathematics student with a keen interest in pursuing research in the formalization of natural languages (from a more mathematical-logical approach), yet there aren't many resources that provide an overview of this very technical field. I wish to be able to provide myself a general map to navigate this subject more clearly, so I will attempt explain my basic understanding of an overview of the fields that tackle with this subject, and I'd greatly appreciate any correction or addition to my limited insight. From my understanding, (loosely speaking) there are two prominent mathematical-logical approaches in formalizing natural languages: Categorial Grammar and Montague Semantics. While Categorial Grammar uses methods borrowed from category theory to (mainly) study the syntax of natural languages, Montague Semantics, as the name might suggest, (mainly) focuses on the semantics of natural languages by implementing methods from Lambda Calculus. In terms of subareas of each of these two fields, I have not seen much discussed in terms of the subareas of Montague Semantics; however, Categorial Grammar seems ripe with subareas (although I have heard some of the fields mentioned below are only closely related to Categorial Grammar rather than being a strict subfield of it): Combinatory Categorial Grammar Lambek Calculus Type-Logical Grammar Pre-Group Grammar Proof-Theoretic Semantics In addition to any corrections or additions, I would greatly appreciate any suggestions for resources, references or books that deal with these subjects or their prerequisites
0
It seems that there are many terms in linear algebra that have multiple names. For example, unitary and orthogonal both refer to the same general idea, a Hermitian is essentially a self-adjoint matrix, invertible and nonsingular, and there are definitely more that I can't think of off the top of my head. I've noticed that I've seen terms like orthogonal and self-adjoint in more classes/texts that feel like they consider linear algebra from a more abstract algebraic perspective, while I've seen terms like unitary and Hermitian in physics and more applied settings. I was wondering if there was some kind of history behind these terms? Why do we have multiple terms for the same thing? Is it just a coincidence that I've seen Hermitian and unitary in these kinds of settings, or was the subject simply considered with different motivations by different people studying different things? If this is the case, I would love it if anyone could suggest references for where these terms originated, and also if there are multiple motivations/perspectives for linear algebra, are there any references for the origins of these different motivations/perspectives? I recall one of my physics professors mentioning briefly that mathematicians and physicists had independently developed the same theory only to realize later that they had been working on the same thing all along. Is there anywhere I could learn more about that history? And are there fields other than linear algebra where this has happened?
0
I'm a current MA student doing research in formal semantics, which is an application of, among other things, logic and model theory to the study of the semantics of natural languages. I'd like to build up a stronger foundation in formal logic before tackling other topics / projects. I love the interplay between logic, algebra, and topology. Model theory is very interesting to me, and I'm keen to learn more on that. I'm keen on extensions of first-order logic, as well (higher order logic, modal logic, etc). I also have Formal Semantics and Logic, by van Fraassen, and I'd like that text to be more accessible to me. There's so many introductory texts to logic, and a lot of them seem to spend time on things like deductive systems and such, which I'm not very interested in. Given the somewhat eclectic list I gave above, are there any logic texts that would be a good match?
0
I have a conceptual question about graphs which I couldn't find the answer to. I am calculating some node centralities and using them as features for a machine learning problem. I am using Networkx python library. I noticed that for the degree centrality the library does the weighting of the values by the highest possible number of connections. In other words, the degree centrality of a node is defined by the number of connections the node has divided by the number of nodes in the graph minus one. Although, from graph theory literature, the degree centrality of a node is simply counting the number of edges a node has. As far as I understand. I wonder what is the implications of this weighting of the values by the library? Doesn't it completely changes the concept of the degree centrality?
0
I'm currently writing my master thesis on "Differentiable Stacks". I'm really fascinated by the idea of generalizing manifolds to include also orbifolds/leaf spaces of foliations/moduli spaces... I formally understand the construction of differentiable stacks as stacks on the category of smooth manifolds possessing a representable epimorphism. And I formally understand why they generalize smooth manifolds. What I'm having some trouble to get is: how does one come up with such a definition? I really read many sources on the topic and none of them justifies the choice of such a definition for differentiable stacks (I mean the existence of a representable epimorphism aka an atlas). This "intuition problem" is getting better as I familiarize more with this object but I feel like I'm just "getting used to it". I'd like to know how this object was born and what's the idea behind it.
0
According the classical physics, the electron should radiate energy and fall to the nucleus in a short period of time. However, this was not the case. Hence, Bohr proposed his theory, suggesting that electrons existed in specific orbits, where they did not radiate energy. These orbits had quantised or discrete energies. Moving between these orbitals meant the emission of specific amounts of energy, emission of photons. However what was so revolutionary behind this idea? It seems to me, he solved this radiation problem, by simply stating that it didn't happen, "electrons exist in specific obrits, where they don't radiate energy". Did Bohr know why this actually happened or did he just state it? Currently, I am struggling to understand what was so revolutionary about Bohr's contribution to the previous nuclear model of the atom.
0
There was an open dump yard a few miles away from where I used to live for an internship. It was not noticeable during the daytime, but once the sun sets, the dump yard reminded us of its presence through its stinking odor. The smell came every evening and left the following day as if on a schedule, and my colleagues shared the same experience. One of my friends said it might be because of the temperature, but that didn't make sense. Smells, or the gases that cause those smells, travel faster at higher temperatures, which is why food has a more intense smell when served hot, as opposed to when taken out of a fridge. That would mean that we would get the smell during the day, and not during the night, which is the exact opposite of what is happening in reality. What might be the reason then, why we can smell the dump yard at night, and not during the day? P.S: There is no activity going on in the dump yard, apart from people throwing garbage during random intervals throughout the day. It used to be a barren land, and at some point in history became the nucleation site of the garbage of an entire city.
0
We know that Carleman's condition is a sufficient condition for the determinacy of Hamburger moment problem and the Stieltje's moment problem. The first one look at measures on the real line, and the second one look at measures on the positive side of the real line. There is a third problem called Hausdorff moment problem that looks at measures on the bounded interval. The interesting thing about this problem is that if the measure that fits moments exists, it is also unique. My question: Is Carleman's condition also sufficient for the determinacy of the Hausdorff moment problem? I figure that it should be since Hausdorff moment problem looks at smaller set of distribution than the Hamburger moment problem. I looked around but I couldn't really find this anywhere. I just want to makesure that I am not missing something subtle.
0
I am simulating a simple Lennard-Jones fluid confined between two fixed walls and I am analyzing the autocorrelation function of the velocity along the direction of the confinement (normal to the walls). I observe an exponential like decay which is in line with the rough expectation, but in addition I observe periodic peaks. These peaks turn out to be the sound modes reflected back and forth between the two walls, as the time between the peaks increases when I increase the walls distance. And when I calculate the sound velocity for my system, it turns out to be equal to the walls distance divided by the time between the peaks. Now I know about the sound mode, I am not satisfied with just having the velocity autocorrelation for looking at the sound modes. I would like to visualize the mode with more geometric features such as the velocity field, as I expect rich structures. Of course the problem is that in such equilibrium simulations a snapshot of the particle velocities is just a random pattern, so some kind of averaging over the particle trajectories and velocities should involved to make the sound mode visible. Any suggestion ?
0
"Weed" (the annoying plant you don't want in your garden) and "weed" (the psychoactive drug) are treated differently grammatically. Just some example sentences "There are weeds in my garden" vs "There is weed in my garden" "There is a weed growing in this pot" vs "There is weed growing in this pot" "How many weeds are growing there" vs "How much weed is growing there" "I'm going to get rid of twice the weeds" vs "I'm going to get rid of twice the weed" "Here are two types of weeds" vs "Here are two types of weed" Basically, the two words are treated completely different grammatically. I was wondering how in particular the two words are categorized that links to their different treatment. (e.g. maybe one is a proper noun, and the other is not [I know that's not the case, but it's just an example of the kind of answer I'm looking for]). Or is this just some weird slang thing that only applies to the drug "weed"? P.S.: I swear I'm not high while asking this question >.<. The impetus was actually because an anime brought up "happa", which could either mean "leaf" or "weed" in Japanese, and I wasn't sure which they were referring to. And then I got down this line of thinking >.<.
0
I just thought of this question, and a quick wiki search did not turn up anything. So, here is the question: Rel is the category whose objects are sets, and whose morphisms are binary relations. Is there a way to differentiate between morphisms that are functions and morphisms that aren't using purely category theoretical means (that is, not using the set structure whatsoever). Followup question: If the answer turns out to be yes, let's call a generic morphism that follows the definition an f-morphism (for function). What are the f-morphisms in other, more commonly used categories, like Set, Top, Grp, Ring, etc.? What are the co-f-morphisms (i.e. morphisms defined dually to f-morphisms)? Note that I have not yet given the question serious thought, so it may be extremely difficult or extremely easy to answer.
0
I'm currently working on a variant of a non-convex low-rank matrix completion algorithm, whereby we take a uniform sample of entries in a (symmetric) matrix and look to complete said matrix. For various reasons, we're interested in trying to reduce the bandwidth of our matrix initialization for our algorithm, and our first idea was using the reverse Cuthill-McKee algorithm to do this (mostly just because it's pre-built in matlab). However, the output structure of the Cuthill-McKee algorithm provides a poor initialization for our algorithm. I'm mostly just curious if there are other bandwidth reducing algorithms that people use regularly. I did a brief literature search and found some papers using neural networks to try and learn permutations to decrease the bandwidth, which looks interesting but is more work than I think is worth for this particular problem.
0
In some linguistics papers that I have been reading, it has been argued that a false sentence has the same semantic status as a noun phrase that fails to refer. A classic example of an English noun phrase that fails to refer would be the present King of France, if uttered today. This is because, of course, France has no monarchy any more. However, one of the examples discussed by several papers is the highest prime number. Now, intuitively it seems to me, even as a lay non-mathematician, that there are obviously an infinitely large number of prime numbers. But that's just my intuition. My intuition as a schookid would have been that it would be easy to predict which numbers were going to be prime and which weren't. So much for intuition then. Do we know that there are an infinitely large number of prime numbers? Is it possible to explain how we know (or don't) to a non-mathematician like me?
0
It's commonly stated in the literature that the free distance of a convolutional code is the minimum Hamming distance between the all-zero path and any other (non-all-zero) path in the trellis originating in the zero state and ending in the zero state. (The free distance of a code is defined as the minimum Hamming distance between any two distinct codewords.) My question: Is this true in general? Or is it true only conditionally (e.g. when we assume that the code is terminated in the zero state)? If it's the latter, what's the condition for it to be true? It appears to me that a sufficient condition is that the convolutional code is a linear block code. This includes the finite length "zero-tail" codes and "direct truncation" codes. The former refers to codes which include all paths that end in the zero state; the latter means the codes allow the paths to terminate in all possible states. Am I mistaken?
0
I have a setup consisting of two lidar sensors with a known extrinsic calibration. Both lidars have messages with the same timestamps (no timedelay between them). My goal is to estimate odometry using the data from these lidars. To achieve this, I have performed a scan-matching algorithm (ICP) and obtained transformation matrices between the consecutive pointclouds. However, I'm facing difficulties in transferring the translation and rotation frames from one lidar to another to put into some filter. The lidars are mounted on a level arm and do not have the same translation due to the Euler rigid body equation. I would greatly appreciate your assistance in understanding how to appropriately transfer the translation and rotation frames between the lidars. Any insights, suggestions, or code examples would be highly valuable. Thank you in advance for your help!
0
I'd like track my LaTeX projects using git, especially when collaborating with other authors. I'd also like to track my texmf directory in a separate git repository. My desire is to add my texmf repository as a git submodule to each LaTeX project. However, I can't figure out how to tell latexmk (really, pdflatex) to see my project-local texmf. Ultimately, I want my LaTeX projects to be standalone, i.e. they depend on nothing other a TeX Live install and the files in the project directory so they can seamlessly be compiled on different machines or in a CI pipeline. How can this be done? Note: I have tried the solutions presented in similar questions, e.g. here (the TEXMFHOME environment variable seems to be ignored) and here (requires updating system texmf.cnf), to no success so far.
0
I have a relatively strong background in the theory of numerical analysis of partial differential equations (PDEs) and functional analysis, particularly applied to the numerical analysis of PDEs using the finite element method. I'm also very interested in graph theory. I was wondering if there are any theories or research areas that unify these fields and that I could explore given my background. Are there any connections between graph theory, functional analysis, and numerical analysis of PDEs that I should be aware of? I am aware that there is another question about this, but I am interested in it being in the context of PDEs and numerical methods. Also I am looking for open questions to initiate a research project, so I am seeking topics where the intersection is significant and there are open problems. Thank you in advance for your assistance.
0
In Borevich & Shafarevich's Number Theory, the authors define integral equivalence of quadratic forms as follows: Two forms of the same degree with rational coefficients are called integrally equivalent if each can be obtained from the other by a linear change of variables with rational integer coefficients. They further state a second definition In the case of forms which depend on the same number of variables, this is equivalent to saying that one of the forms can be transformed into the other by a linear change of variables with unimodular matrix. But I cannot see how to prove the equivalence of the two definitions. Specifically, it seems to me that to show the first definition implies the second, one has to prove that the two linear changes in the first definition are inverse of each other, or that both changes are nonsingular. Can anyone help? edit: Mr. Stucky's example in the comments has shown that the two changes occurred in the first definition need not be invertible. But still, I think to prove that the first definition implies the second, one has to show the existence of two nonsingular changes, as is pointed out by Stucky.
0
Here is the sentence in dispute: In humans, the femoral angle shows no correlation with femoral length. The question: why would 'femoral angle' receive a definite article, but not 'femoral length'? I feel like it does, but my co-author says no. I can't really justify it, but I feel like 'angle' somehow needs the article whereas 'length' does not. Thoughts? Thanks for all the answers. Yes, I am aware the anatomy may be esoteric, and apologies for that. In this case, the angle is measured in degrees, or fractions (in decimal) of degrees. The length is similarly measured in centimeters or fractions (in decimals) of centimeters. (And there really is only one of each on a person's leg.) So while I felt like angle requires a definite article because of the way I have always seen it used, in this case I can't logically defend the difference between angle and length.
0
In Reed-Solomon codes, the symbols of a code word contains multiple bits. Since the error correction and detection happens at the symbol level, it doesn't matter how many errors there are within the same symbol, it only counts as a single symbol error. Because of this, Reed Solomon codes are considered to be a great candidate for transmissions that are subject to burst errors. On the other hand, BCH codes are always considered to be a good candidate for random errors. But is that also true in an apples to apples comparison where you compare total number of message and parity bits (not symbols!) are the same for a Reed-Solomon code and a BCH code? Would there be any way in which the BCH would lose against the RS code in terms of error correcting capability?
0
I'm a graduate aerospace engineering who's really interested in the topic of numerical methods and in particular for structure preserving schemes. Hopefully at September I will start a phd in this subject, as a consequence of my work of thesis. However, I do feel my mathematical background is kinda poor, coming from an aerospace engineering master degree. Indeed, I'm already excited to follow some of the courses I will in my phd courses about advanced linear algebra and functional analysis. During the summer I have some spare time, and I would love to study some of the books I encountered during my work of thesis. In particular "Geometric Numerical Integration" by Hairer et al. As soon as I started reading this book I encountered some mathematical topics that I never encountered during my degree courses: Lie algebra, Lie groups (groups in general) and manifold. I would like to know what are the books you suggest to get into these topics. I saw that "Lie Groups, Lie Algebras, and Representations" by Brian C. Hall is a really good book as a starting point for this topic, but I know there's a more general introduction to lie groups which is related to manifolds. So which books do you suggest to introduce me to these topics ? Also, I would like to refresh some knowledge about the algebra courses I did in the past, but I don't really know which book I should refer to. Thank you so much for your help.
0
I was going through the derivation of a mathematical equation for the upthrust exerted on a body which is given in my book. It says that the downward pressure exerted on the upper surface is less than the upward pressure on the lower surface. Thus, there is a net pressure acting in the upward direction and therefore, a net upward force. The lateral pressure gets counterbalanced. I am able to understand that the pressure in the downward direction is due to gravity and the lateral pressure is due to the fluid's tendency to flow. However, I am not able to understand how the fluid exerts pressure in upward direction. I have worked on the problem and arrived at two different explanation:- The pressure inside a fluid is due to the collision of the particles. Since, collision is random, pressure can be considered to be equal in all directions at the same horizontal level. The fluid exerts a pressure and thus, a force on the bottom of the container and the reaction of this force exerts upward pressure. Question:- How does a static fluid exert pressure in the upward direction? Which explanation of mine is correct?
0
While I was reading the book "In Search of Schrodinger's Cat" I found an interesting excerpt on how Max Planck used Boltzmann's statistical equations to solve the Blackbody radiation problem. The book mentions that there will be very few electric oscillators at very high energy end and at the lower end, electric oscillators would not have that much energy to add up to any significance. So most will be on the middle range. But my question is why and how this distribution of electric oscillators comes into picture. I mean why can't be the this distribution skewed, so that we may have electric oscillators concentrated at high ends. More specifically how this probability distribution was derived by Boltzmann? ( I am a layman interested in Physics so please be simple in your explanation)
0
I was reading a Mathematics books and it gave the axiomatic definition of a function as being a mapping from a set called "The domaine of the function" to another set called "The codomaine of the function", and at first I thought the codomaine is the image of the domaine by the function (i.e. the set that contains and only contains the images of every element of the domaine). Turns out that's the range or image of the domaine which is only a subset of the codomaine and is equal to the codomaine only if the function is surjective. My question is : Why not define a function as a mapping from a domaine (the set A) to the set B (defined as the set containing and only containing the image of every element of the domaine)? Why the need for the Codomaine set with extra elements? Isn't the complement of the range in the Codomaine irrelevant to the function? In other words, aren't all functions surjective in the end? Sometimes they say that the codomaine is the set of possible outcomes of a function, but I don't understand what they mean by "possible" in this context.
0
I have a hard time understanding GR. I understand a lot (from a math point) about (pseudo)Riemannian manifolds, and I also learned about Einstein's elevator thought experiment. So let me elaborate: From a physics point of view, you can take the elevator and derive that light has to bend, also that there has to be gravitational time delay. So far so good. Then almost all the literature I saw, turns to the next chapters, and assumes from the previous discussion that it is clear the (pseudo)Riemannian metric is all what matters now. For me, there is a bit gap, a 'how'/'why'/'what' in between. I do not see how the metric tensor relates to accelerated reference frames. I feel like I am missing something very obvious, but what is it? Can somebody elaborate?
0
I'm trying to create a transition effect in a situation when a character suddenly finds himself falling, and his last word, which ends in '-y', is transitioning into an unintelligible scream. However, English is not my native language, and when I've tried to write it as 'Maryeeee!!' it was criticized as awkward and weird. At the same time, the counter-suggestion of simply extending the '-y' at the end and writing 'Maryyyyy!' doesn't feel right to me. It may be grammatically correct, but my goal was to create an impression of a word seamlessly transitioning into a scream, preferably making it clear that it is an unintelligible scream and not just the character's speech trailing off, and if the resulting word would be a horrible mess, it was fine for as long as I achieved the desired effect. So the question is, how should I go about it?
0
I'm working on a problem where I need to convert an undirected and unweighted graph with cycles into a tree while preserving the edge information (all the edges from the graph are preserved in the resulting tree). For the resulting tree, the height and nodes duplication should be minimized. Nodes duplication can happen because in order to preserve the edge information from the graph, we will need to break the cycles by removing the edge(s) and then make a copy of the node(s) then connect it in the tree, or in other scenarios that I haven't thought of. I don't think minimum spanning tree algorithms will work because the algorithms will lose the edge information. So far my heuristic is: For choosing the root node of the tree, pick the vertex with the highest betweenness centrality in the graph. Because that vertex likely serves as a bridge which many shortest paths flow. When breaking the cycles, remove the edge whose nodes have the least total sum of degrees. This works for some examples I crafted.example My question is, how can I verify the heuristic is indeed optimal and provide a formal proof? Or is there a better strategy? Any insights, references, or alternative approaches would be greatly appreciated!
0
I am beginning to learn chemistry/physics, and I have recently read about JJ Thompson's experiment which led to the discovery of the electron. In every source that I've read, the writers note that Thompson used an anode and a cathode to conduct electricity. In addition, the magnet supposedly had N/S ends. I did not know what a charge was, so I looked it up. But, frustratingly, I always get answers that refer to electrons and protons. That is, things that are negative have more electrons than protons, and vice versa. This gives me no idea as to how Thompson inferred that electrons are negatively charged from his experiment. My question is, can you define charge without talking about electrons and protons? Thompson didn't know electrons existed, so it seems to me that he must've had some other working definition of charge in order to determine that electrons were negatively charged. What makes an object a cathode or an anode, without referring to protons and electrons? If that question doesn't make sense, is it possible to adapt it just for the sake of understanding the experiment?
0
My understanding of a simple k-vector is that it is the wedge product of k vectors. Also, two simple k-vectors are the same, when their magnitude, attitude and orientation match. Now my question is, could I just define a simple k-vector in this way? Meaning "a simple k-vector is an equivalence class of ordered k-tuple of vectors. Two tuples are equivalent if their attitudes and orientation match, and if the parallelograms they span attain the same magnitude." Especially a reference where this is stated would be greatly appreciated! I feel like I have read something like this somewhere, but I cannot find it anymore. I am only using k-vectors for something I am writing for university and having to explain the wedge product would deviate from my topic a little. That's why I am trying to avoid this definition.
0
So I am writing a program that works with regular polygons, and in part of that I need to represent circles that are inscribed and/or circumscribed upon the polygon. As this is programming, I need to refer to the relationship between the polygon and these circles. I have searched all around, but all of the places I have found that educate on calculating these circles never refer to this relational quality. As I said, I need a way to refer to this relationship and I cannot seem to find a word that represents whether a circle has the quality of being circumscribed or inscribed upon a polygon. Does any such word even exist? This pseudocode example might help clarify what I am trying to find: var myPolygon = new Hexagon(); var myCircle = CreateCircleOnPolygon(myPolygon, CircleType.Inscribed); ^^^^^^^^^^ ............ if (myCircle.CircleType == CircleType.Inscribed) { ... } ^^^^^^^^^^ ^^^^^^^^^^ The name emphasized in the above example is what I am currently using; however, considering that there are lots of different possible parts of my code that could need a different "type" of circle, the usage of the name CircleType is a little too broad in this context. I suppose the word "quality" is what I am going for, but I thought I would reach out to the language experts to see if there is a term I am unaware of that is more appropriate.
0
I can imagine a relatively simple experimental setup whose resulting data could easily be compared with theoretical predictions: Send two identical atomic clocks into orbit and settle them at rest relative to each other. Then synchronize them. Then use thrusters attached to one clock to oscillate it, causing it to experience a series of counterbalanced accelerations (not changing it's average distance from the non-accelerated clock). Then collect data from both clocks to measure their desynchronization over time. My preliminary calculations suggest that the energies involved to reach relative velocities between the clocks that should produce measurable desynchronization in a matter of days or weeks are feasible, since atomic clocks are so accurate. I know the twin paradox isn't really a 'paradox' and that the predictions in general relativity are completely coherent. I've read a bunch of responses to related questions, explaining exactly what is predicted. But I can't find any experiments that confirm those predictions directly.
0
As I understand, for elevation mapping using InSAR, one typically requires an out-of-plane baseline to create the required phase difference between images to detect objects at height. This usually requires either multiple satellites flying in formation or waiting for a repeat pass of a single satellite. This paper, A New Single-Pass SAR Interferometry Technique with a Single-Antenna for Terrain Height Measurements, suggests that one could achieve this in a single pass (along-track interferometry), if we are able to image at a high squint angle. The idea is that a grazing-angle difference is still present in the along-track case when squint angle is high. This is not the case for broadside imaging. However apart from this paper, I was not able to find any other sources to cross-reference the viability of this approach, nor does it seem that anyone else has reproduced the results. The mathematical principle behind seem sound to me. If it works, why is this concept not being used more often, and if it doesn't actually work, where is the error in the logic?
0
I would like to ask your opinion on the point that looks simple. Consider the group of orthogonal matrices of order n over the field R of reals, equipped with the topology induced by the Euclidean norm of matrices. Let g be one of such matrix and denote by X the topological closure of the cyclic group generated by g. My question: May I find a matrix g with the property that: if the identity matrix I is the accumulation point in X of some sequence of powers of g, then the sequence is definitely trivial, that is, all the elements (except finitely many) equal I? In other words, can I exclude that, for a suitable g, the identity I is obtained as the accumulation point of a non trivial sequence in X? I don't think such g exists but I'm not be able to exclude it with a direct argument. Thank you very much for help.
0
Perhaps another way to put it is, what exactly does it mean to quantize the EM field and why is it necessary? What mathematical properties does the quantized version of the field have that the classical version doesn't and vice versa? For context, I was reading a thread about where specifically classical E&M fails and we hence need QED to get accurate predictions, and stuff like the ultraviolet catastrophe and the photoelectric effect were mentioned. I get why those things necessitated the introduction of quantum mechanics and especially modeling photons as wavefunctions. What I don't get is why we can't just use "regular" relativistic QM with Maxwell's equations and the same mathematical E and B fields (which together form the EM field) from classical E&M. What goes wrong if we try to do this and how exactly does QED fix it? Or, to put it another way, what properties does a field modeling photons as wavefunctions need to have that the classical EM field doesn't have?
0
I am new in Linear Algebra and I have encountered the concept of Solutions of linear equations. According to the textbook I use, a solution of a linear equation is a vector whose components satisfy the equation. However, I am puzzled by the choice of "vectors" specifically. Why is it not defined as a point whose coordinates are the same as the values that satisfy the linear equation? After all, a vector is more than just its components; it is also a set of points and its "components" define its magnitude and direction. A point, on the other hand, is a single entity with no additional properties. No magic. I think It's just a bit more intuitive to think of it this way instead. I can also anticipate that Vectors have a deeper meaning that relates them to linear equation systems but I can't tell what exactly. I'm going through the chapters and I'm still waiting for the "Bingo! I figured it out."
0
I have been reading the Loop Antenna section of Antenna Theory by Constantine Balanis and trying to understand how exactly a ferrite core improves the performance of a small loop antenna. Balanis writes, The radiation resistance, and in turn the antenna efficiency, can be raised by increasing the circumference of the loop. Another way to increase the radiation resistance, without increasing the electrical dimensions of the antenna, would be to insert within its circumference a ferrite core that has a tendency to increase the magnetic flux, the magnetic field, the open-circuit voltage, and in turn the radiation resistance of the loop. How exactly does increasing the magnetic flux (adding the ferrite core) increase the antenna's radiation resistance? Balanis claims the voltage increases, and presumably deduces that the radiation resistance increases likewise from the equation V=IR. However, this would only follow if we knew that the current I did not increase as the flux increased, which is not obvious to me. It also is not obvious to me why the voltage increases in the first place!
0
I've been really stumped on this particular concept. In Case A, when a bar magnet is brought towards a copper coil around a soft iron core, in accordance with Faraday's Law of Electromagnetic Induction, the the pole facing the magnet acquires a North Polarity while the opposing pole acquires a South Polarity. Now by Lenz's Law, the direction of the induced EMF must oppose the cause that produces it, but the current is in the same direction as the moving magnet. The galvanometer deflects in the same direction too. Why doesn't this obey Lenz's Law? And additionally, in Case B, by merely reversing the winding of the copper coil, the current flows in accordance with Lenz's Law and the Principle of Conservation of Energy. My question is, why does changing the winding of a coil defy Lenz's Law in Case A and then follow Lenz's law in Case B? Something such as the Principle of Conservation of Energy and Lenz's Law should apply in any case. The method of winding shouldn't matter that much right?
0
Is it correct to say that, in English, when you use the Present Simple tense in the Interrogative Negative form you are either implying the negative or just confirming the affirmative (depending on the order of "Do", "subject" and "not"), but no other possibility? Like in the scene of The Lord Of the Ring - The Return of the King, when Eowyn is talking to Aragorn just before he leaves to see the dead army in the mountain. Eowyn: "You cannot abandon the men. We need you here." Aragorn: "Why have you come?" Eowyn: "Do you not know?" In this last line, given the context, she is clearly implying that she thinks he knows she loves him, and thus, she is asking it just to confirm it (as I understood it). But then, if she had used the other form "Don't you know?", it would change the meaning, as she would appear to be asking it to really have a first idea on his feelings, not presuming anything. Then my question is: can the Interrogative Negative have only these two possibilities?
0
Have studied traditional point-set topology, but find there's a fairly large gap between the preparation typical point-set courses give you, and the level assumed in algebraic topology texts. Looking for a good introductory book on topology that uses more categorical / modern language - something that would segue smoothly into, for example, Tammo tom Dieck's text Algebraic Topology - at least covering most of the topics mentioned in the first chapter of his book (subspaces, quotients, products, sums, compactness, proper maps, paracompactness, topological groups, transformation groups). Have perused the MIT textbook 'Topology - A Categorical Approach', which looks decent, but isn't so comprehensive. Have found one text that looks good (Grundkurs Topologie by Laures and Szymik) - but unfortunately it's in German! (Anyone know of an existing English translation?) Open to recommendations. Thanks.
0
I'm studying nonlinear control systems, especially the Pontryagin's minimum principle and its applications. Throughout my studies, the authors have always defined the control systems with state variables and output variables. However, in this article Optimal Control of an SIR Model with Delay in State and Control Variables and so many others, it doesn't seem like control systems in epidemiology has output variables equations (or at least clear ones). Why is so? And is it possible for a nonlinear control system not to have output variables? I have found this answer: It is possible for a nonlinear control system not to have an explicit output variable, depending on how the system is defined and what its purpose is. In a control system, the input variable is typically the signal or information that is used to control the behavior of the system, and the output variable is the variable that is being controlled or affected by the system. However, in some control systems, the purpose may not be to directly control an output variable, but rather to achieve some other goal. Is it correct?
0
This might be a stupid question, but, why is it that gamma rays are able to penetrate almost any barrier without question? We know that gamma rays are simply high frequency waves with massive amounts of energy. However, what processes can enable it to go through layers of bonded atoms of metals and just about anything else? On another note, we know that beta particles can pass through paper but can not penetrate a thin layer of metal. What is different about the bonds of metal so that an electron can't squeeze through? I assume it is because of the tightly bonded metallic bonds and the high volume of electrons that flow throughout the metal that causes this. However, if this is the case, what would a beta particle do if it is "rejected"? Would it join the flow of electrons in the metal, would it just stay on the same side of the sheet or would it rebound in the opposite direction due to the equal, opposite force exerted on it by the sheet?
0
I am studying interacting QFT in the context of quantum fields in curved backgrounds, and I am getting some confussion about the concept of particles. To study some gravitational phenomena involving particles (e.g. Unruh effect, Hawking radiation, etc.), it is typically sufficient to deal with free fields, which are expanded in mode functions and particle/antiparticle operators (i.e. its energy eigenstates form a Fock space). This can be done, in general, because the Hamiltonian of free fields is quadratic in the field operators, and therefore one can calculate a single-particle Hamiltonian which can be diagonalized, giving rise to a band structure by means of which we describe this Fock space (eg. for the case of Dirac fermions, the vacuum state, prior to a particle-hole transformation, amounts to a filled lower band/Dirac sea). However, when one considers an interaction term, the Hamiltonian is no longer quadratic in the fields, and this band-structure cannot be obtained by direct diagonalization of the single-particle Hamiltonian (I am not even sure whether the notion of band structure remains). As a consequence, I do not understand if particles/anti-particles can only be defined when the Hamiltonian of the theory is quadratic, i.e., when the evolution of the theory preserves tha Gaussianity of the states. If this is the case, I imagine that a mean field approximation, which turns the Hamiltonian back to a quadratic one, would recover the notion of particles, is this the case?
0
I'm working on a set of beamer class slides in TeXstudio. Usually upon compile, the preview slide visible on the right will be at the position of the cursor - instead, for me, it consistently jumps two slides behind. This is reflected in the behaviour of the preview when I enable the "scrolling follows cursor" option - there, too, the preview is always two slides behind where the cursor actually is. If the cursor is on one of the first three slides, the preview stays on the first one. I get this behaviour on two different machines, so I assume it's not some super specific local bug but something more broad. It also persists independently of whether or not the document is compiled in handout mode. Any help on resolving this would be very much appreciated, as this behaviour is quite annoying.
0
I noticed that in several 'throwing' sports like the javelin throw and the shot put there have been a few cases where competitors tried to introduce a technique where they throw the projectile further by rotating their entire body (these techniques were then banned for safety reasons). As an example, there is the cartwheel shot put, where the person performs a cartwheel and rotates their body before throwing. I am not sure what advantage this has apart from that the ball is pushed over a slightly longer distance so that there is more kinetic energy. In the spinning javelin throw, instead of throwing the javelin with a regular technique, the athlete spins around and slings it out of their hand at high speed so that it flies forward. Again, I am not sure why this gives a distance benefit but I assume that it's different to the rationale behind the cartwheel shot put?
0
I am wondering if the esteemed members of this forum can help me with these questions, which have bothered me for a long time and are what have brought me to this forum. One thing I struggle to understand and work with is the use of the "editorial we." I am a copyeditor and proofreader, and I have a client who consistently likes to use the first-person plural voice for his works. I have asked my colleagues, and I got mixed, unclear answers. Here are a few examples: We will be able to feel magnificent light in our souls... Does "souls" have to agree with "our"? Should it not be "soul"? The chair or pillow may not be so comfortable, our kids may get rowdy, our spouse may need our help with this and that... Here, I have no issue with "our kids," because a person can certainly have more than one child. But should "spouse" be plural like "our" and "kids"? It is a constant issue that we must face our entire lives. "Entire" here does not seem to fit with "lives." But then "our entire life" does not seem to be correct as the singular "life" does not agree with the plural pronoun "our." I would be grateful to learn more about the proper use of the "editorial we" in this regard and to hear suggestions on how to deal with this issue. Thank you all in advance.
0
In a textbook for thermodynamics, it considers a situation where work is done to a system by an irreversible work source through a thermally insulating piston, and it states "any irreversible work source can be simulated by a reversible work source". It briefly explains the reason; what the work source does is simply to apply force to the piston, and therefore it does not matter how the force is applied, whether or not it is applied by an irreversible work source. I have a difficulty to fully convince myself with the statement, and can anyone kindly help me with this? The textbook is a non-English one, and it is not available online as an electric file. The way how it defines the work source is simply that it is any system, which is connected with the system of our interest only through a thermally insulating piston. That is, there is no heat exchange between the two system. Let me rephrase my question: Suppose that system A is interacted with system B (irreversible work source) only with a thermally insulating piston. They exchange energy only through work, but not heat. My question is if it is possible to replace system B with a reversible work source in an indistinguishable manner, i.e., any mechanical reaction that system A receives from system B remains exactly the same. Thank you so much for your time.
0
As per my understanding: Multiple fermions cannot have the same quantum state (as per Pauli exclusion principle) Multiple fermions can occupy the same physical space as long as they have different quantum states (or numbers or properties such as spin) If both these statements are true then, part of the second statement "as long as they have different quantum states (or numbers or properties such as spin)" doesn't become necessary. Because first statement implies that "multiple fermions always have different quantum states". Hence, the second statement simply becomes "Multiple fermions can always occupy the same physical space" (For a moment let's consider only fermions, their quantum state and physical space they occupy. And not other factors like electromagnetic repulsion etc) However, at multiple places on the Internet it has been stated (and seems like widely accepted) that: Multiple fermions cannot occupy the same physical space as per Pauli exclusion principle, and that is why matter structures exists in the universe. Can someone please help me trying to figure out where am I making mistake?
0
According to wikipedia, the aurora borealis is primarily caused by charged particles from the solar wind being redirected to the poles by earth's magnetic field and slamming into the nitrogen and oxygen atoms in the upper atmosphere. This slamming gives the electrons in those oxygen and nitrogen atoms enough energy to escape their electron clouds. The light comes from when electrons emit photons in order to rejoin an atom. Hence the colors, green corresponding to oxygen atom readmittance and purple to nitrogen What I am wondering is what role dissipative absorption plays in the aurora borealis. I mean we have a bunch of accelerating charged particles right? Even if speed is constant their direction is changing due to the magnetic force and since they're charged they must be emitting some kind of em radiation right? Not to mention when the actual collisons take place between electrons. Not all of this em radiation will correspond to the energy levels of nitrogen and oxygen right? So shouldn't we see more colors as all sorts of em radiation ought to be emitted? If nothing else you would expect the electron clouds of oxygen and nitrogen to oscillate in response to the emitted em radiation of the incoming solar wind right? Why do we tend to see only green and purple?
0