snippet (string, lengths 143 to 5.54k) | label (int64, 0 or 1) |
---|---|
I would like to use this template from Overleaf for a PhD dissertation at the University of Aberdeen, offline on TeXnicCenter. (This appears to be a legitimate thing, according to my supervisor.) I have looked at this; it didn't help. My main issue is that I'm not sure how to use the cls file. There might be other problems once that is sorted, so if you could explain to me how to do it, please, I would be most appreciative. I'm not sure what I can offer in terms of context. But here: I did both my MMath and my MPhil dissertations using TeXnicCenter, along with much smaller projects, so I have a fair amount of experience with it. Like I said, I think the issue is with the cls file. I believe it is something like a package, but one that I have to install myself - I don't know . . . I am likely to award a bounty to the first answer that works when I try it! | 0 |
Three non-collinear points are placed randomly inside a unit circle. Question: What is the probability that if you were to connect these points, forming a triangle, the triangle will have the center of the circle contained inside (including the periphery)? Here are two examples, A and B, illustrating what I mean: where you can see that A works and B doesn't work. What I've tried so far is illustrated in the following image: Two random points are obviously always collinear. So we can connect the two. Now let's draw a straight line, starting from each point, through the center and finishing on the circumference of the circle. The grey shaded area represents the region where the third point needs to be in order to create a triangle that works. Now, as you can see, there can be all kinds of regions depending on where the first two points are located. So I'm simply looking for a method/approach that can answer this question. If you have another approach than my example, feel free to use it. I'm not sure if this is very complicated or if there is a nice solution. | 0 |
If we are given that propositions P and Q can never be true, is it still accurate to say that P and Q are necessary and sufficient for each other, and why? I am conflicted here, as the statement P iff Q is true here and I have learned that this also means P and Q are necessary and sufficient for each other (for context, I have learned that P is necessary for Q if P must be true to conclude Q is true, and that P is sufficient for Q if Q must be true to conclude that P is true); however, it seems counterintuitive to stipulate conditions for propositions being true (by deeming other propositions as necessary or sufficient for them) if these are never relevant to the situation, as these propositions are always false anyway. I would appreciate an answer grounded in an explanation of necessity and sufficiency. | 0 |
I am not well versed in TikZ and PGFPlots, and I am hoping someone would be able to help me with developing the structural code for a parabola with its focus and directrix. I found an image of what I am assuming was created via LaTeX and was hoping to recreate this somehow. This was a perfect image of what I am looking for, because the conic sections seem to be so lost to time that I cannot find good representations of this particular topic without hand-drawing them. I found sample code for drawing a parabola, and when I attempted to recreate it, it does not let me use a fractional coefficient. If anyone has starter code for this image, that would be greatly appreciated. | 0 |
My understanding is this: the Solar surface becomes speckled with more sunspots near Solar maximum, and these spots tend to form groupings known as active regions; each spot is associated with a given magnetic polarity, and often regions will tend to have spots of both polarities that are typically linked in certain ways, but sometimes the formed regions are much more complex, especially as the polar fields are about to flip and Solar activity tends to increase. Now, my question is this, as the title says: which way does current flow in the plasma streams that form the coronal loops and filaments that are generally "rooted" in the active regions? Is there a way to determine the direction of current solely from the magnetic polarities of the spots involved, or can current flow in either direction? | 0 |
In a corona discharge, the air around a conductor locally breaks down but remains an insulator further away from the conductor. Therefore, in the case of a positively charged conductor, the free electrons in the locally ionized patch of air can enter the conductor, positively charging the air too. However, corona currents don't exceed a few micro-amps. What is the reason for this? I know that electrically charging the air increases the effective size of the conductor, lowering the peak electric field. However, can this current be increased to arbitrarily high values if the charged air sheathing the conductor is removed quickly through some other mechanism? In essence, what I am asking is whether the current limitation of corona discharge is due to the sheathing effect of the charged air or something intrinsic to the conductor-air system, such as the probability of a free electron jumping into the valence band of the conductor being low. Where can I read about the microscopic nature of corona discharge? Any good books would be appreciated. I would like to learn more about the statistical mechanics behind the free electron and valence band interaction of metals. | 0 |
Perhaps you might want to take this question as some sort of challenge. I'm really just looking for the "most efficient way" of making a table similar to that in the image directly below (in LaTeX, of course): I've attempted several different things, and ended up with a huge mess and a thousand packages in my preamble. (Here "thousand" is a hyperbole for "too much for my brain to keep track of".) Because I provided an image of what I want, I don't feel like any MWEs are necessary here. Whatever looks most like the image above, and gives the least headache, I'll mark that as the solution. Notice the days of the week are aligned to the horizontal centre of the cells, unlike the rest of the content. Note. The table I provided as a reference was made using LibreOffice Writer, and the immortal CMU Serif font. If you have a good eye, you'll have noticed that the content of the list elements is not aligned perfectly, as the second and following lines of all the list text blocks are pushed ever so slightly forward. This is neither intentional nor wanted. | 0 |
If I want to calculate the length of a string that is wrapped around a cylinder, what mathematical equations can I use? The thread is tightly wrapped around the cylinder, creating a spiral that has no gaps between consecutive loops. Moreover, each complete spiral creates one layer, and as a single layer finishes at the end, another starts to form up to the top, creating a second layer, and so on. The catch is that each consecutive layer requires a larger length of string than the one before it because the circumference enlarges. Since the yarn ball is manufactured by machine, it leaves no room for error: because the machine is in the same constant motion, the gaps between each string, as well as the gaps between each layer, are all the same, as is the tightness of each loop around the centre cylinder that holds the yarn together. So, how would you suggest I calculate the length using only external measurements and observations? | 0 |
When a thermodynamic system, like an ideal gas within a piston immersed in a heat bath, is subject to changes, such as compression or extension of the piston, the work that can be extracted from this process is maximal if the process is carried out quasi-statically, that is, at each step of the expansion the gas is allowed to relax to equilibrium. Why is the relaxation to equilibrium at each step related to more efficient energy extraction (or reversibility)? Here is another way to ask. If I make a large change to the piston, I imagine there will be an abrupt relaxation to equilibrium, and this will be related to large dissipation. But if I make many quasi-static changes, then at each change the dissipation should be small, such that at the end more energy is extracted. But the problem is that I do not see any physical picture or equation for why the sum of the very small amounts of dissipated energy during the sequence of quasi-static changes should be smaller than the amount of dissipated energy during the one abrupt change. The difference between my question and similar available questions, such as this one, Is there a quasistatic process that is not reversible?, is that I insist on pictorial or physical answers to the question. | 0 |
It's kind of confusing, but I will explain. I saw a term a while ago that described how people can disagree with one another. Instead of disagreeing because they do not like the opinion, the person will disagree because they don't like someone or something that also endorses the opinion. Example: Person A thinks that driving on the road should require a license. Person B disagrees because a politician Person B doesn't like agrees with Person A, so Person B decides to disagree. You see that the argument Person A gave is reasonable, and Person B would completely agree, but the problem is that because someone Person B doesn't like agrees with Person A, they decide to disagree. This is something very common in politics (e.g., only voting for a single political party, instead of voting depending on the politician's positions). Another thing it's similar to is contrarianism. You are disagreeing based not on the logic, but for another reason. | 0 |
How is the background noise of gravitational waves modeled? Is it a thermal model, giving a stochastic distribution of the curvature tensor (field-strength tensor) in ambient space? That is, every binary star, every orbiting planet, every orbiting black hole or neutron star -- anything that accelerates -- is emitting gravitational radiation. The grand total of all of these sums up to what looks like noise. Is there a "well-known" distribution for this noise? Some power law? Can an argument be made that this noise has a thermal profile? Are there specific equations describing this noise, and what are they? How do they scale? Should one suppose that this varies from galaxy to galaxy, and depends on the local environment? Or can one argue for some generic form that is "typical"? Say, for binary clusters? Somewhat related: what is the order of magnitude strength of this noise, compared to the instrument noise in current gravitational wave detectors? Yes, of course, it's frequency dependent, so it's a graph, but is this "natural noise" strong enough to be detectable? (Ignoring Earth-bound sources.) | 0 |
A non-spontaneous change occurs when an external effort is applied to it. Since the external effort also comes from a natural source, does this mean that there isn't truly any non-spontaneous process in nature (the universe)? Example to illustrate my point: An electrochemical process (like in a galvanic cell) can be reversed by providing an external voltage in the opposite direction whose magnitude is higher than the cell potential. This external voltage is supplied by humans. Humans are basically converting some form of energy, like the mechanical energy of rivers in a dam, into electrical energy. Also, humans are doing so because they eat food and get the energy to do so through spontaneous metabolic processes in their bodies. Also, the conversion of energy in the above process is spontaneous because humans are making use of a spontaneous process (a river flowing down from a height). So, is there a truly non-spontaneous process? | 0 |
I asked my coworker to fix something in a program. When he fixed it, he replied with, "I already fixed it." -- this wasn't intentionally misleading, but was an incorrect translation of "ya". But for a moment I was thinking... "no, you fixed it after I asked". It was a bit jarring. Having learned a decent bit of Spanish when I was younger, I paused and realized there was some confusion when translating between "ya" and "already". BUT...I really struggled to find a clear/cogent explanation to help my coworker--from my English perspective. I finally found an article in Spanish, but honestly it was crazy how many meanings of "ya" there were, many of which I hadn't realized, and it was slow going to read through it and make sense of it. My Spanish skills are rusty. So nobody else whose primary language was English was likely to realize what happened, had they seen this mistake. So, here's my question: How can I help both the ESL coworker, other coworkers, and others in general to be aware of this pitfall, and what is a kind, cogent, and English-centric way to explain it? Thanks. Secondary question: Are there blogs/references or sites where I can learn more patterns of this sort of mistranslation across many languages? As a privileged white guy from a homogeneous suburbia in the USA, I probably have a number of ways I might misunderstand cultural differences, and likely need to improve my awareness bit by bit. | 0 |
I want to use MaTeX for the figures in my plots, which I eventually would like to use in Overleaf. I create a plot, use MaTeX in it, and right-click on it to save as PDF. I then use this PDF in Overleaf. So, when I compile the LaTeX file and download the PDF, the MaTeX font looks rather shabby compared to other text on the plot (such as numbers on axes, for which I didn't use Mathematica) when I do not zoom in. The font from LaTeX can be seen in the description of the figure. The MaTeX font looks spotty, with kind of white gaps in it. The problem with this is that when I print the file onto paper, the font doesn't look good. I don't think the problem is resolution, since when I zoom into the PDF file, I get a very sharp MaTeX font. Does anyone know what's going on? | 0 |
Consider the sentence: So how can a computer think if it knows nothing of what it means to be a human being. Initially I thought that because "of" in this sentence basically means "about", the latter part of the sentence is not a relative clause, as it is not adding any additional information, and therefore is the subject of the sentence and cannot use "which". Furthermore, as the content of "to be a human being" is not known, "what" is used as opposed to something like "that". However, upon further thinking, I'm not sure if this is a relative clause or not, as technically it is adding information to the sentence by specifying what the "nothing" the computer knows is. However, none of the resources I looked at concerning relative clauses used "what" in any example sentences. In other words, I'd like to know why this sentence uses "what" and what type of clause/grammar this sentence is. | 0 |
This is a broad question, but it's well documented that GR and QM are very well tested in their own domains but conflict around black holes. Picture a neutron star slowly accreting matter until its mass is sufficient to bring about an event horizon. It resists gravity owing to the Pauli exclusion principle, and it must surely be composed of the same 'stuff' as the event horizon forms. Why do we then rely on GR and assume everything collapses to a singularity, which seems illogical in nature, when the most sensible ('we don't know yet') answer surely should be that there exists a 'black star' under the event horizon? It seems that QM is overshadowed by GR in this instance, when GR seems to give more illogical answers. As a thought experiment, if we had a very heavy neutron star and fired one photon at a time at it, I would imagine the surface begins to redshift more and more as time goes on. There would surely come a point where the minuscule deviations mean that a 'ravine' can no longer emit anything to an observer but a 'mountain' could. It would seem to be on a tipping point of being both a black hole and a neutron star, but the mountain is still supported from seemingly below the horizon that is forming. | 0 |
Is there an easy way to draw complex images in LaTeX? By this I mean: have a look at the following question. Some answers provide incredible pictures, very elaborate. In the (rudimentary) way I can create figures with tikz or pstricks, it would be impossible for me to come up with the corresponding code. This made me think: perhaps there are some tools that help write down the code. I am aware of things like tikzcd-editor or LaTeXDraw. I also know about Inkscape, although I have never used it myself. However, none of them seem to be appropriate for drawing tori or spheres. Yes, with LaTeXDraw one can draw ellipses and create things that resemble those surfaces, but the quality of these drawings is about as good as the quality of Paint. I am thinking more of some type of image recognition software that identifies the geometric object (as well as any text the figure might have) the user has drawn and produces the corresponding code. Does anything like that exist? | 0 |
I am seeking a term for what can collectively be referred to as "the leader and/or the most important and powerful roles" in the hierarchy of a society. I have an example from the YA series Warrior Cats by Erin Hunter. Warriors are the generic members of society. Meanwhile, Leader, Deputy, and Medicine Cat together are the important and powerful positions. A term for them together, as a collective, is what I seek, so I can quickly and efficiently refer to them instead of using some nickname like "the Big Three" every single time. I would like a term generally applicable to most societies, as it can be used in many contexts. I've been unable to think of any possibilities, hence I ask. Example Sentence: "The X are foundational to how our clan runs." | 0 |
Poi are tethered weights used for dancing, which often have battery powered lights in them. Clearly work is being done on the poi, first to accelerate them, then to keep them going at constant velocity despite air resistance. I'm curious whether it would be possible to power the lights via the work done by the dancer rather than needing to replace/recharge the batteries externally. Obviously the amount of energy the dancer needs to exert would go up as well, regardless of the method. One idea that clearly wouldn't be sufficient would be to use current induced by the Earth's magnetic field. Are there any methods that might produce more power? It'd be cleanest if these methods would work while the poi were spun at constant velocity, but methods relying on variations in velocity or tension in the tether would also be interesting. | 0 |
When you ask search engines or dictionaries, they don't seem to recognise the word 'orthodontry' and all point to 'orthodontics' and 'orthodontia'. I suspect 'orthodontry' is a mash-up of either of those and 'dentistry'. However, there are many businesses that use the word in their advertising and online communications. It seems to specifically refer to the practice of orthodontics. Is this an example where the dictionaries are just behind the times, and is 'orthodontry' just an example of a neologism on the rise - a new 'aluminum'? Or is there some good reason the word should really be avoided? Examples of sites using the word: https://orthodonticsinlondon.co.uk/blogs/benefits-of-orthodontry.html https://www.columbiaasia.com/indonesia/specialties/orthodontry There's more like it - none of them any kind of language authority on the subject, but I started looking into it because two colleagues (I live in Australia) mentioned it independently in chat, and upon asking, one shared an email from their orthodontist that had the word in it. | 0 |
You are a player in a game where each contestant has to choose between three boxes to win a prize. The prize is distributed uniformly between the three boxes. Each player makes a choice independent of one another. Many or none of the players could choose the right box, which is revealed at the end of each round. However, you have info that the other players don't: you know that the prize will be behind a different box from the previous round. Find the probability that you will have chosen the right box more often than the other players by the end of the game. The problem doesn't give me how many rounds per game or how many players there are, which made me first think that there was an issue with the problem, but I'm not sure. Would appreciate it if someone could please confirm or help me out if it's possible. Thanks | 0 |
I wonder what the advantages are of using the MPS-MFS (method of particular solutions combined with the method of fundamental solutions) for nonhomogeneous PDEs. In order to implement the MPS with radial basis functions (multiquadrics or compact support, doesn't matter) you still need to mesh the entire domain (like in FD) but, unlike FD, you eventually end up with either a fully populated matrix (when using MQ) or something denser anyway than in FD (when using functions with compact support). So, if anything, the MPS-MFS must be more expensive than FD, not less. The only advantage compared to FD or FEM seems to be that MPS-MFS is a meshless method in the sense that you don't need to build the connectivity matrix (which takes just a fraction of the total time in a typical simulation job anyway). Am I missing something? | 0 |
I want to self-study Real Analysis, and I want to choose between these books. This is my background: I am a high-school mathematics teacher, which means (at least in my country) that I have seen abstract algebra up to ring theory and calculus up to integration in several variables, but not things like Stokes' theorem (because the standards for teachers in my country require only some very basic ODE, so I studied ODEs instead of Stokes). This calculus class was a mixed-level class between engineering and pure mathematics. Could you tell me about the pros and cons of each of the following Real Analysis books? (Suitable or not for self-study, content quality, difficulty of the exercises, etc.) Real Mathematical Analysis, by Charles Pugh Mathematical Analysis, by Tom Apostol Principles of Mathematical Analysis, by Walter Rudin | 0 |
If we look at the majority of useful or industrial materials surrounding us, like metallic alloys, glasses, ceramics, or plastics, it is often the case that these materials went through really hard times or difficult stages of their life during synthesis and processing. For example, this could be the heating of a metal to an extreme temperature and abruptly putting it in a cold environment to quench it, thus typically resulting in good mechanical resistance. Perhaps the question is not so precise, but is there some abstract reason why good material properties (mechanical, thermal, electrical, and so on...) are typically obtained through procedures that really drive the materials out of their native solid stable states? Can this somehow be related to the difference in phase flow topology between very stable states (liquid, solid, gas) and the "non-equilibrium" states obtained through extreme conditions (like heat treatment and so on)? | 0 |
I use Beamer presentations in workshops where I set questions, have students work on the answers, and then display some sample answers. I give the students a version of the presentation without the answers, so they can read back through the presentation as needed. It seems the usual way of doing such conditional processing in LaTeX would be to define a command to mark up an answer and then provide two definitions of the command, one that includes the answers and another that omits the answers. The usual way of switching between these definitions is to use distinct top-level source files. I would prefer to avoid creating an extra top-level source file for each presentation and instead control the formatting behaviour using command line arguments, an environment variable, or such like. Then I could write a script to format the presentation twice, once with the answers and once without answers. Looking through the arguments of pdflatex, I don't see a way of passing a flag that could be used in the LaTeX source to choose between the definitions. Related questions and solutions: Passing parameters to a document has answers that solve my problem, but the question does not express the context and requirements as clearly as the question here. Conditional text lines latex document - expresses a similar requirement. The exam package - seems to require multiple top-level source files. | 0 |
I know the heading may be a bit misleading, but I can't find a better one; anyone is welcome to suggest a good one. By writing "weird function" I mean those functions that are integrable but not easy to integrate. I have come to know that uniform convergence of a series of integrable functions on some set guarantees that we can interchange the order of sum and integral. In other words, this allows us to perform term-by-term integration of an infinite series of functions. I am very curious about: can we write any weird function as the limiting function of a uniformly convergent series of simple integrable functions? If the answer is yes, then I think we can integrate those functions easily. But then after integrating we again have a series, which must again be uniformly convergent, I think. In that case it may not be possible to write it in closed form. Is my observation correct? | 0 |
I'm trying to replicate a specific script 'X' that I'm seeing in a textbook: This lines up pretty well with the computer modern script X: But I'm using the unicode-math package, and the script characters in the default font (latin modern) look nothing like the script characters in computer modern (even though it's supposed to just be a modernized version). The unicode-math documentation includes a symbol list of a few included fonts, of which the concrete math font gets the closest: But I really don't like the uniform stroke thickness - it really doesn't match with the rest of the characters on the page (since I'm otherwise using computer modern / latin modern). Does anyone know of an OpenType math font that has a similar script X that I'm looking for? | 0 |
Maybe the question is too stupid to be asked, or I do not know the technical words, but I could not find any answer to this question. Here is how I started to think about the title: First I wondered whether, if we look far enough, we could theoretically see the Big Bang. The answer was "No", due to reasons such as the opaqueness of the early universe and expansion. That is okay. However, then a second question arose: Even if we could, which direction should we look? Then I thought that if it is the furthest and the oldest, it should not matter, because in any direction we look, at the furthest and oldest, we should see the Big Bang if we could. Then, I know I made too many assumptions, but does this conclude that we are surrounded by the Big Bang and the universe is expanding not outwards but inwards? As I said, maybe the question is way too stupid, but I still want to hear some ideas/facts. Thank you. By the way, this is the first time I've posted a question; I have no experience, so sorry for any mistakes I made. | 0 |
So I'm writing a book, and I've used straight quotes (") in my LaTeX files throughout the entire project. Now whenever I compile my document in Overleaf, these, to no one's surprise, come out as no starting quote, and a straight quote at the end of what's quoted. What I wish is for all straight quotes in my tex files to actually appear as straight quotes in the generated pdf, regardless of whether they are situated at the front or end of a word. Here's an example: % In my .tex file This is a "nice quote". I want this to appear exactly like this in my document: This is a "nice quote". Is there anything I can do to override the default behaviour? I really do not want to go through my entire project, consisting of many .tex files, finding and replacing characters; all I want is simply for the straight quote symbol to appear as an actual straight quote in my text, regardless of whether it's a start-quote or an end-quote. | 0 |
Consider a person pedaling a bicycle. If we consider the system consisting of the rider and cycle as a whole and apply work-energy conservation, we can see that whatever force the rider applies on the pedal will also have an equal and opposite reaction, and as both the pedal and the foot of the rider have equal velocity, the net power cancels out. I don't know where the mechanical energy is generated in this process. I have attached a picture of a question from my engineering textbook (Engineering Mechanics by P.C. Dumir) from which I got this question. I have attached a portion of the solution as well; I am not sure why it has considered the reaction force to be acting at point B and not at the foot of the rider, due to which it has calculated the power by taking the difference of the velocities of the subsequent parts. | 0 |
My understanding is that without acceleration the "movement" of a body is a relative concept, i.e. we can choose an inertial frame of reference where the body is at rest and there is no property or experiment that can tell us that the body is in movement, because it's a meaningless question. In the same way, can I say that the movement of earth is arbitrary, just choosing a non-inertial frame of reference? The fact that I need to include fictitious forces to explain for example the movement of a Foucault pendulum, means that the earth rotation is "absolute"? The law of physics should stay the same if I choose the earth as frame of reference, but does that mean that there is nothing absolute about the movement of the earth? | 0 |
Traditionally, mathematical work is presented in a linear fashion. Books, papers and articles are single streams of text meant to be read sequentially, from beginning to end. However, mathematical content often has a not-so-linear underlying structure. Sometimes it can be imagined to be tree-like, with nodes being results and directed edges being dependencies. Question Is there a format for presenting maths that is faithful to some underlying logical structure of the work? The 'logical structure' could be defined by the author. Using digital devices, we are obviously not restricted to linear text anymore. Have you seen such an 'untraditional' format being used? Prototype Imagine a PDF-viewer that can collapse and expand certain blocks of texts, as defined by the author and with the possibility of nesting. In proofs there are often steps which are very unclear to some readers and trivial to others. These steps could get elaborated on in an expandable block -- providing the necessary details for the people who want it while maintaining reading flow and brevity for the others. Using layers in LaTeX something similar can be achieved as described in this question | 0 |
I'm writing documentation for a piece of software I worked on and I came across an odd sentence format that puzzled me. I was wondering if there might be a conclusive answer on the matter: In the event that a complex compound sentence contains two independent clauses and one dependent adjective clause, but each independent clause would require different articles to tie with the dependent adjective clause, would the best article to use be the lattermost article? Example: "create a" vs. "update your" Everyone needs an updated google account to access the document. Please create, or update, your account before the meeting. I do not have a background in English, please excuse any incorrectly used terminology! I did some googling beforehand in order to better articulate my question so it's possible I misunderstood something in my haste. | 0 |
In order to define my question, I will demonstrate what I consider an appropriate answer. My question is as follows: How do I develop fast "shortcuts" in math? What I consider a shortcut is a means to solving a problem in a fast and creative way. My own answer would be to look for patterns and model a formula to solve the statement in an appropriate and timely manner. Maybe one could also introduce something brand new to the problem, like the perfect square technique. As one can see, I am not asking how to approach the problem, but what to do to find new and fast ways to solve it. This is obviously getting back to my question. How do I develop fast "shortcuts" in math? I value the community's intelligent feedback on this question. Thank you for your time and patience with me. | 0 |
It is my understanding that there are two types of random number generation used in computer science: true random number generators that use principles of a physical property to determine their generation, and pseudo-random generators that use algorithms based on mathematics such as chaos theory. True random number generators use proposed indeterminate properties of the physical world, because of the uncertainty principle, while pseudo-random number generators employ chaos theory: non-linear deterministic equations whose outcomes are sensitive to initial conditions, yet which are deterministic, so with the same initial conditions the outcome will be the same. My question is: are the principle behind true random number generators and the principle of using chaos theory in pseudo-random number generators the same, given that both rest on a deterministic element (the Schrödinger equation in true random number generators and the deterministic equations of chaos in pseudo-random number generators)? So what's the difference? Is randomness actually certainty, or vice versa?
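To make the determinism point concrete, here is a toy chaos-based generator built on the logistic map (a deliberately simplified sketch, not how real PRNGs work; the function name and the parameter r = 3.99 are just illustrative choices):

```python
def logistic_prng(seed, n, r=3.99):
    """Toy chaos-based pseudo-random generator using the logistic map.
    Deterministic: the same seed always yields the exact same sequence."""
    x = seed
    out = []
    for _ in range(n):
        x = r * x * (1 - x)   # chaotic for r near 4
        out.append(x)
    return out

a = logistic_prng(0.2, 5)        # same seed twice ...
b = logistic_prng(0.2, 5)        # ... identical stream
c = logistic_prng(0.2000001, 5)  # tiny seed perturbation: diverging stream
```

Identical seeds reproduce the stream exactly, while a perturbed seed diverges, which is precisely the "deterministic yet sensitive" behaviour the question is about.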
I work in tech, but at some point I almost took a career path into theoretical physics. I changed my studies very late to data science and machine learning (in my last masters year); before that I was in physics and solid state physics. I am looking for research papers that could be interesting to read for a non-expert like me. I am used to reading physics research papers (mainly solid state physics and spintronics). I am looking for papers that do not have super heavy mathematical development. An example would be the famous "More is different" from Anderson. It describes very well what I'm looking for, although I'd prefer some mathematical perspective if applicable (but again, not too much of it). The idea is to read about topics like general relativity not from Wikipedia but from research papers that are either foundational or educative/pedagogic. No books, however; I don't have time for them. I am not looking to study these subjects but rather to acquire some insight and basic knowledge. For now I am looking for papers in: general relativity
In papers studying or searching for topological order (intrinsic or symmetry-protected) in various condensed matter systems (e.g. Field-tuned and zero-field fractional Chern insulators in magic angle graphene), a common refrain of motivation goes as follows: Topological physics began with the experimental discovery of the integer and fractional quantum Hall effect, for very clean two-dimensional electron gases in a large magnetic field. The large magnetic field is unfortunate, and it would be nice to get rid of it. In fact the large field is not necessary, and equally interesting physics can arise in our system due to strong interactions, time-reversal breaking, etc. But I realized I never really understood the second point: why is a large B such a problem? What applications or lines of scientific inquiry does it challenge? To what degree are these challenges insurmountable? | 0 |
My problem is as follows: I would like to use the upright fourier package symbols together with dsfont. This produces: I therefore want to scale the dsfont symbols down to the same size as the fourier letters (or, vice versa, scale fourier up). I tried using scalerel; however, for some reason pdflatex and Anki (a flashcard software) don't seem to like this. It reports no error message, but it doesn't scale the symbol (or any others I've tried, for that matter). Overleaf does scale with the same code, so it's not the code. Are there any alternatives to scalerel for scaling dsfont (or for scaling the fourier package, although this isn't preferred)? I've also tried using the mathalfa package to load dsfont (although this doesn't load), as it has a scale feature. Some notes: I'm using mathastext with nosmalldelims enabled, if this makes a difference. Any help is appreciated :)
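For reference, this is the kind of minimal document I have been testing with (the 0.9 factor is just a guess to be tuned by eye, and `\scaleobj` comes from scalerel):

```latex
\documentclass{article}
\usepackage{fourier}
\usepackage{dsfont}
\usepackage{scalerel}
% hypothetical wrapper: scale the \mathds letter down to match fourier
\newcommand{\dsR}{\scaleobj{0.9}{\mathds{R}}}
\begin{document}
$f \colon \dsR \to \dsR$
\end{document}
```

This compiles and scales on Overleaf but not locally, which is what prompted the question.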
I recently encountered a quirky situation. A student wrote a sentence, and it was much more technical than this example (it was actually for a literature review on microbiology), but this example illustrates the basic issue: John said, "The cat is on the table" with great anger. A parallel example: John said, "The cat is on the table" when the cat climbed up. I was always taught to offset complete quotations with commas, but I tend to only see examples with the signal phrase beginning or ending the quotation (e.g. "The cat is on the table," said John...). While I understand the sentences could easily be rearranged to terminate with the quotations and to improve style (e.g. With great anger, John said...), how should these be punctuated as they stand? Should there be a comma where the quotation terminates? Obviously, if the quotation were a regular clause, no comma would be used (in the first because of a prepositional phrase, in the second because of a dependent clause following an independent one). But how does the quotation affect this rule? Thank you!
Localized ferromagnetism refers to materials where the magnetic moments are primarily associated with localized atomic orbitals. Ferromagnets such as those made of iron or nickel are called itinerant because the electrons whose spins align to create the magnetic state are extended and are the same as the ones responsible for conduction. I don't understand how localized ferromagnetism can exist at all. From my understanding, a large-gap insulator will always be non-ferromagnetic, because the full valence band always has its electrons paired up and leaves no net electronic magnetic moment (while the nuclear magnetic moment is negligible for ferromagnetism). The energy favorability of spin alignment is much smaller than what is needed to overcome the large band gap, so the spins are always paired. This means ferromagnetism can only occur in a metal or a small-gap insulator, where the energy favorability of having aligned spins is larger than the energy favorability of strictly filling the spin-unpolarized states below the Fermi level (before considering the interaction of electron spins). Is my understanding of the necessary condition for ferromagnetism correct (i.e., the energy favorability of aligned electrons being larger than the band gap)? If it is correct, how can localized ferromagnetism exist at all?
I have been studying condensed matter physics, and there are some basics that are confusing me. Basically, when we find the dispersion relations of electrons in a lattice using the nearly free electron model, I do not get what the dispersion relation signifies physically. If we have one electron per orbital, I know that the band will be half-filled, which means that we have electron waves with those particular k values, but then how is the concept of Bloch wave packets incorporated? In Yang and Girvin, when the Bloch envelope is introduced, they integrate Bloch's function over all k values, from which I thought all these electron waves superpose to make a wave packet of some average momentum. But then, going forward, when considering the dispersion relation I got confused: will we still be looking at electrons in particular states individually? When we consider Bragg reflection at the boundaries of the Brillouin zone, does it mean that electrons with that crystal momentum form standing waves, all the while the other electrons with the other k values continue to propagate, and they all superpose to form a resultant wave? Or is it that when the wave packet's k value becomes the k value at the boundary, it forms a standing wave?
The potential at the surface of a conducting sphere is kQ/R, where R is the radius of the conducting sphere, and the electric field on the surface of the conducting sphere is at its maximum there. While taking a test charge from the surface to the inside, the potential neither increases nor decreases, because the electric field inside the conducting sphere is zero, so whatever potential is on the surface, it is the same inside. But my question is: why don't we need to do work on a test charge kept on the surface, against the electric field on the surface, to move the test charge to the inside of the sphere where the electric field is zero? And if we did need to do work against the electric field on the surface to move the charge inside, then the potential inside would also be different from that on the surface. Thank you! Note: I am assuming that the conducting sphere is positively charged.
Recently I have been learning about optimisation techniques and built a simple "gradient-descent brachistochrone solver thingy" to try out some methods. One thing currently still hurting the results is the spacing of the points. Obviously, if they are first generated an equal distance apart along the x-axis, when the resulting graph becomes non-linear the distances between them start to vary quite significantly. Especially between the first two points there is obviously a massive "hole". What I would like to ask is whether there is any well-established method of spacing these points by their distance from each other instead of evenly along one axis. At first this doesn't seem to be too hard a problem: just move them a bit to be at the right place for their heights. This doesn't really work, however, because when changing the "x-position" of the points their height must be re-adapted as well, leading to another distance discrepancy, and so on and so forth. As all algorithms I came up with required an ungodly number of iterations to complete, and I was unable to find the correct keywords to google this problem, it is now asked in the form of a post on this forum. Thanks, Robbe.
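For what it's worth, here is the one-pass re-spacing I have in mind, sketched with NumPy (the test curve below is made up; in the real solver this pass would have to alternate with the height optimisation, which is where my iteration count explodes):

```python
import numpy as np

def respace_by_arclength(x, y):
    """Re-place polyline nodes at (approximately) equal arc-length spacing.
    One pass only: in the optimisation loop this would be repeated, since
    the heights get re-optimised after the nodes move."""
    seg = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    s_new = np.linspace(0.0, s[-1], len(x))       # equal arc-length targets
    return np.interp(s_new, s, x), np.interp(s_new, s, y)

# made-up curve with a steep start, like a brachistochrone guess
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 3.0, 3.5, 4.0])
xn, yn = respace_by_arclength(x, y)
```

Since x and y are both linear in arc length within each original segment, the interpolated points stay exactly on the polyline; only their spacing changes.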
Now here is a potentially stupid question about something that has really been annoying me recently. In the PDF viewer panel in TeXstudio, there are two buttons in the toolbar that change your type of cursor: a magnifying glass and a blue scroll 'cross-arrow'. If you never click on either of the two buttons, your cursor is the standard (Windows) white arrow cursor. I made the mistake of clicking on the magnifying glass, and there seems to be no way of getting my nice white arrow cursor back - instead I am stuck with either of the two TeXstudio cursor types, and I find them very ugly. There is no button to choose the standard cursor, nor does it revert if you open and close the panel, or TeXstudio itself. Any help?
In order for the jars to be sterilized for pickling tomatoes, they need to be boiled. But when one of the inverted jars was standing on the rack in the pot, it started sucking in the water that was boiling there (and sucked out almost all the water). The jars were turned over so that their open side was completely submerged in the water (so that approximately the entire surface of the neck of the jar was in contact with the grid). An approximate drawing of what is happening from the side: I took the jar out, emptied the water out of it, and put it back, but after a while it filled up again. The plastic grate on which the jars stood was uneven, and had holes in it. Why does this happen? Thank you! | 0 |
According to Kepler's First Law, the orbit of a planet is an ellipse round the sun with the sun at one focus. There's an inherent asymmetry in this: instead of the sun being in the dead center, it's shifted over a little bit. In the hydrogen atom, all the orbitals of the electron are symmetric about the proton at the dead center. Why is there no similar asymmetry? You can convert the function for the position of a classical simple harmonic oscillator with respect to time into a space-dependent probability distribution, where the probability is higher at the classical turning points, where the velocity is at its lowest. The ground state of the quantum harmonic oscillator has its highest probability exactly between the classical turning points. The quantum solutions more closely match the classical probability at higher quantum numbers. I was thinking the classical "lopsidedness" of gravity could be recovered at higher quantum numbers for a Coulomb-like potential in the Schrödinger equation. But higher principal quantum numbers just enlarge the orbitals; they all remain symmetric about the proton. So that can't give you higher probabilities on an ellipse. The technique to recover classical behavior that works for the harmonic oscillator fails for Coulomb-like potentials. Are there circumstances where any asymmetry appears in the probability of the electron, in particular, concentrations of probability along an ellipse?
The Question: Boolean algebra is to classical logic as what is to relevant logic? Context: I guess this is a terminology question, so there's not much I can add, except that I've been interested in paraconsistent logic for a long time. Is the answer de Morgan algebra, or is that something else? Meta Question: Is the question well-formed? Perhaps the question ought to be something like: Boolean algebra is to classical logic as what is to paraconsistent logic? I don't know. But: If the answer to the Meta Question (MQ) is yes, then please answer the main question. If the answer to the MQ is no, then please feel free to tell me why and answer what I hope will be a clear intended question. If the answer to the MQ is nonclassical, please explain.
If I launch a ball into the sky, it will reach a distance after which it will return to the ground, transforming potential energy into kinetic energy as it falls. This is similar to what happens at galactic scales, where material (like gases) from an outflow gets expelled from the galaxy; it reaches a distance where it turns back and falls again into the galaxy, increasing its kinetic energy as it is attracted by gravity towards it. However, the presence of dark energy at galactic scales causes outflows to be less bound to their galaxy (and they could even reach a distance where the influence of gravity and dark energy are balanced, beyond which they would be expelled from the galaxy, never to return: Is there a distance from a gravitational source where the influence of gravity and dark energy are balanced out?). This can then make these outflows reach a greater distance before turning back into the galaxy. Then, as the material would travel more distance towards the galaxy, and there would be a point in this trip where the influence of dark energy would be negligible (leaving only gravitational attraction), would the infalling material have more kinetic energy in this scenario than in one without dark energy?
I finished calculus books like Thomas and am currently reading a book on advanced calculus and another one on real analysis. I noticed recently that I don't solve enough "hard problems"; I usually just solve the exercises in my books, so I figured that I need a problem book (or books) on calculus. I want a book that has many exercises on calculus topics like limits (without l'Hôpital), derivatives, integrals, sequences and series, multiple integrals, etc. I also want them to be challenging and interesting problems, but not "too hard". I consider myself at an intermediate level, as the advanced exercises in the Thomas book seem very easy to me. I also don't want "straightforward substitution questions", i.e. plugging numbers into formulas or the straightforward use of a theorem or equation, as I want to increase my problem-solving skills. I will appreciate any suggestions; thank you in advance.
There is much talk of using lasers to bring down drones. That talk is followed by talk of protecting the drones by surfacing them with mirrors. Would that work, or does light falling on a mirror impart all its energy to the mirror first before it can get re-emitted? I am aware that lasers themselves use internal mirrors to reflect the light, so it would seem that if lasers don't destroy themselves then the energy is not absorbed; or lasers are cooled, I don't know which is the case. There are two questions here really: the prime question is about the possible efficacy of protecting a drone by mirror surfacing. The more general underlying question regards the reflection of light. Is it so that all light falling on a surface must be absorbed by that surface before it can be 'reflected'? It is my belief that it must; that 'reflection' is in fact re-emission.
The textbook I'm using to study integral calculus usually assumes for its proofs that the function takes on only positive values. The author says that if we divide the x-axis into intervals, and pick the point in each interval at which the value of the function is a minimum on the interval, then we can approximate the area under the curve using inscribed rectangles. Specifically, the author says, this would be the lower sum. The upper sum is achieved by picking the point in each interval at which the value of the function is the maximum on the interval. I was thinking: wouldn't this be the opposite for negative functions? Because the upper sum would be achieved by taking the minimum on each interval, and the lower by taking the maximum on each interval. If I were to prove the same theorem for negative functions as well, is this the only difference that it makes? Or is this not the case? Because if the upper sum is always the greater numerically and the lower sum always the lesser, and if "negative area" is a thing, then the lower sum would still be the one formed by taking the minimum at each point, and the upper by taking the maximum. So I am confused about the exact definitions right now and would enjoy guidance. Thank you in advance.
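A quick numerical check (not from the textbook) of how the two definitions behave when the function is negative, approximating the inf/sup on each subinterval by dense sampling:

```python
import numpy as np

def riemann_sums(f, a, b, n):
    """Lower/upper Riemann sums using the inf/sup of f on each subinterval
    (approximated here by sampling each subinterval densely)."""
    edges = np.linspace(a, b, n + 1)
    lower = upper = 0.0
    for left, right in zip(edges[:-1], edges[1:]):
        vals = f(np.linspace(left, right, 50))
        lower += vals.min() * (right - left)
        upper += vals.max() * (right - left)
    return lower, upper

# a negative function; its true integral on [0, 1] is -1/3
lo, up = riemann_sums(lambda x: -x**2, 0.0, 1.0, 100)
```

Even though both sums come out negative, the sup-based sum still sits above the inf-based sum, with the true integral between them.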
If we spectroscopically observe a cloud of hot gas, which is on the whole not very absorbent, and which is not illuminated by a source behind it, we observe emission lines. How does this type of spectrum form? I had thought that the idea that those lines are the ones at which atomic transitions occur is true, but I don't think that's enough. Why should all the material as a whole emit like that? Why are there these electronic transitions? And then: if there is a light source behind the material, one observes a spectrum that is in a way the negative of the one I described, i.e. the lines become absorption lines. I had thought that something different must be happening in the two cases, although I don't know what. And then again: if we knew nothing about electronic transitions and only wanted to consider the macroscopic properties of the gas (which could be composed of complicated molecules, in which there are not only electronic transitions but also other phenomena), could we still justify the fact that the emission and absorption spectra are the negative of each other? Thank you for any input; complex, articulate and in-depth answers are also welcome.
Saw this on reddit: A: I'm a gun owner and I think any sort of gun sticker on a vehicle is cringe. -> B: Ditto any sort of camo, esp. grey/urban camo prints, sure go ahead and tell the world you're itching for an excuse to defend yourself with a gun while you wait for your latte -> -> C: But the hunting community uses camo! Do you think hunters are looking to kill people? Etc, where C continued to belabor the edge case. Now, to me, C misunderstood what B was saying and just continued to argue when it was clear that B didn't think that hunters were included in the group that they were talking about. It feels like C went out of their way to think that B was saying something about ALL people who wear camo -- but to me unless someone says "ALL ___" they just mean "in general the people who ____". Is there a term for "deliberately misunderstanding what someone said so that you can argue about it"? I see it all the time in forums/reddit/insta. Strawmanning doesn't seem adequate -- that's more about mischaracterizing someone's point so you can dismiss it. My best attempt at this is "reaching for outrage" but I've never heard any say that. | 0 |
In a recent introductory course on logic, we were introduced to first-order and propositional logic. The purpose of these concepts is portrayed as being a way for us to formalise the reasoning we use when we compose proofs in other branches of mathematics, but something about this bothers me: namely, that in defining these theories, we extensively used set theory and the axioms that it grants us; however, one needs first-order logic to actually describe the axioms of set theory, and so neither of these theories can truly "precede" the other. I read around a bit, and on some other question on this site, someone commented that this is fine, since, for example, we use the English language to describe the grammar rules of the English language, and we are satisfied with that, so we should be satisfied with this. But there's a reason language and mathematics are different fields of study, and I find this answer unsatisfactory. Is there some way to avoid this circular dependency in a meaningful way? Edit: from my understanding of the answers given to the proposed duplicates, the consensus is "no, we cannot". Can we do one better and in fact prove that a sufficient formalisation without this property is not possible?
I am studying the solubility of gases in liquids (flowing then into the study of oscillations of gas bubbles out of the liquid phase). The task at the moment is to familiarize myself with the laws of solubility of gases in liquids, gather material, etc. However, the only law I've discovered is Henry's law. So far I'm only studying a simple case, with water as the solvent and air as the gas (no electrolytes in the liquid, no chemical reactions or anything). But even for such a simple case, apart from Henry's law, I have found nothing. Also, I could not find any literature that could help in studying this question. I am wondering if there are other laws or equations of gas-liquid solubility for the simple air-water case, or is there nothing else besides Henry's law?
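For concreteness, the one relation I do have is Henry's law in its concentration form, with $c$ the dissolved gas concentration, $p$ the partial pressure of the gas above the liquid, and $k_{\mathrm{H}}$ a temperature-dependent constant:

```latex
c = k_{\mathrm{H}}\, p
```

Everything else I've found seems to be a variant of this (different choices of units or of the constant), which is why I'm asking whether any genuinely different laws apply.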
I have been reading recently about tension. I don't exactly understand how it works. Here are my major doubts. Here, tension is said to be acting in the opposite direction to mg. I will assume tension is not the net force across the rope, because the net force is zero due to Newton's third law. Thus, tension must simply be the force exerted on an object by another one through a rope/chain/rod, etc. In this case, since whenever an object exerts a force on another object the other object exerts an equal and opposite force, the tension here is really the reaction force to the force of gravity. However, if I pull a box along the ground with a force F, the tension along the rope is said to be in the same direction as F; but can't it be said that it exists in both directions, as both the pulling body and the mass exert equal forces on each other? What decides which direction is assigned to tension?
Hi, I was working on a problem concerning the Euclidean topology. I was doing the following exercise: The first statement seemed not to be true, because for (i) to be the basis of some topology, the intersection of any two basis elements must belong to the basis. As the intersection of two open squares parallel to the axes is not necessarily a square (possibly a rectangle), I assumed it wasn't the basis of any topology. This however didn't make sense when I looked at (ii): as far as I know, the collection of all open discs is a basis for the Euclidean topology. But the intersection of two discs is not always a disc, which would mean it isn't the basis of any topology, and that left me confused. Does anyone know where I could be wrong? Thanks in advance.
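For context, the basis criterion as I understand it (e.g. from Munkres) only requires that intersections be covered by basis elements, not that they themselves be basis elements:

```latex
\forall B_1, B_2 \in \mathcal{B},\quad
\forall x \in B_1 \cap B_2,\quad
\exists B_3 \in \mathcal{B} :\ x \in B_3 \subseteq B_1 \cap B_2 .
```

So the issue may be with the premise I used, rather than with (i) or (ii) themselves.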
Original question: If continuity is the only requirement, here is the solution: Dimension of space of continuous functions But the proofs use trigonometric functions and polynomial functions, which are not monotonic. A further question is, what if I would like to impose additional features on the functions, say, concavity? It seems that one has to search for new bases if using constructive proof because the bases in the original proof may not satisfy the additional feature. I was wondering whether there is a generic way of proof to circumvent this problem (My guess is that no matter what the feature is [as long as the set is a linear space], the set of functions with this feature on an interval is infinite-dimensional). Edit: Thanks for pointing out it is not a vector space. So I changed my question (I thought this new question is equivalent to the original question but it turned out they are not). Edit again [revoked]: I think maybe I should add another constraint that the functions are bounded, or even more stringent: the codomain is an interval...Otherwise, the answer seems to be obviously "No". | 0 |
I'm currently starting to self-study probability and statistics. A friend recommended me a book he has, but his book does not go deep into the theorems and formulas; instead it just states the equations and when to use them, along with some properties, but not the proofs behind the equations (for example, the chapter about the Poisson random variable just tells you how to use it and when, but lacks the proof of how mathematicians arrive at that complex equation). I would like a book that is rigorous and proof-based for every result in it (like Tom M. Apostol's calculus books, for example). I'm a college student and have a good calculus and linear algebra background, so something more advanced than the regular books is OK with me. Do you have any recommendations?
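As an example of the kind of derivation I'm after: the Poisson pmf the book just states can be obtained as a limit of the binomial distribution with $p = \lambda/n$ (a standard sketch, not from the book itself):

```latex
\lim_{n \to \infty} \binom{n}{k}
  \left(\frac{\lambda}{n}\right)^{\!k}
  \left(1 - \frac{\lambda}{n}\right)^{\!n-k}
  = \frac{\lambda^{k} e^{-\lambda}}{k!}
```

I'd like a book that works through steps like this rather than skipping them.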
I'm new to Kalman filters. I have a use case similar to the one-dimensional train example, but I have railroad track with switches and mergers, so it's a non-trivial topology. I would like to model the system as a graph with N nodes and E directed edges. Each edge has a length (weight) L. So the system state (e, x, v) consists of the current edge e, the distance x along the edge and the velocity v. The edge e is a discrete (categorical) variable, which does not fit the standard Kalman Gaussian methods. Can the Kalman filter be extended to this? Measurements: I do have measurements of (e, x) and the related position error err_x. There is no error associated with the measured edge e. Prediction model: We can assume a constant-speed model with zero-mean acceleration, as in the Wikipedia example. In the case of a prediction through a node with multiple outgoing edges (a rail switch), I'm not sure if I should return a multi-modal distribution. I don't have a requirement that this should run in real time, so it is possible to look ahead in the data and see which edge is taken. Using some kind of backwards smoothing is also OK. The graph is in principle cyclic, but in practice the lengths of the edges are way larger than the measurement errors.
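For reference, this is the continuous part I have so far: a minimal constant-velocity Kalman step in NumPy for the state (x, v) along a single edge. The noise values q and r are made-up placeholders, and the discrete edge e is handled entirely outside the filter (which is exactly the part I'm unsure about):

```python
import numpy as np

def kf_step(m, P, z, dt=1.0, q=0.1, r=0.5):
    """One predict+update of a 1-D constant-velocity Kalman filter.
    m: state mean (x, v); P: 2x2 covariance; z: measured position x."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                  # constant-velocity model
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])  # process noise
    H = np.array([[1.0, 0.0]])                             # we observe position only
    # predict
    m = F @ m
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + r                                    # innovation covariance
    K = P @ H.T / S                                        # Kalman gain
    m = m + (K * (z - H @ m)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return m, P

m, P = np.array([0.0, 1.0]), np.eye(2)
m, P = kf_step(m, P, z=2.0)   # prior predicts x=1; the measurement pulls it toward 2
```

My current idea for the switches would be to spawn one such filter per outgoing-edge hypothesis and let later measurements prune the wrong branches, but I don't know if that's the established approach.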
According to general relativity, if an object keeps moving and warping space and then gets lost by entering a black hole, do all of the distortions in space caused by that object get lost? Or do the distortions go back to where they began? There is an answer by "Navid" that says it is possible for the distortions that happened in space because of the object to disappear. Well, first of all, my imagination says that space will keep warping if an object keeps moving, meaning that the peak of the distortion keeps increasing; now, if there is a case where the distortions disappear, then wouldn't it follow that a new black hole would form, because there is a loss of space? By "new black hole", I mean the disappeared or lost space between the destination black hole and the beginning position of the object. My other thought is that when objects move in space, they carry the space with themselves and don't keep stretching or warping the space.
This question could be really out of the blue and might receive lots of downvotes, but it has been bugging me for quite some time, and I would appreciate your thoughts, explained simply. We know that when we do work against a natural force, we increase the potential energy of an object. Lifting a ball and putting it on the table increases its potential energy, as the work I put in is transferred to the ball's potential energy. So, to sum up, for something to gain potential energy, some work must be done externally in opposition to nature. The same happens with charges: taking a negative charge away from a positive one increases the potential energy of the negative charge. It's all clear up to this point, but if we imagine that a worm by default appeared on a mountain, how would it have potential energy? As I understand it, some external work must be done so that the work is transferred/converted into its potential energy. What energy is transferred into the worm's potential energy? I don't believe that gravitation itself gives potential energy to the worm.
Industrial printing is based on autotypical colour mixing, the simultaneous effect of subtractive and additive colour mixing. This makes it possible to render a large set of colours using only four standardized colours (CMYK). I do understand additive and subtractive colour mixing, but I struggle to understand how two dots next to each other, i.e. with no overlapping, additively mix to another colour. This is illustrated in the lower part of the following diagram, e.g. a red dot and a green dot are perceived as a yellow dot (provided they are small/far enough). I guess the answer revolves around the size of the dots, the amplitude of the resulting light wave and some properties of the human eye, but I could not find any detailed physical/mathematical relation between them so far.
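Here is the naive model I have in mind, in Python: once the dots are below the eye's resolution, the perceived colour is roughly the area-weighted average of the linear-light RGB values. The coverage fractions and the neglect of the paper background are simplifying assumptions on my part:

```python
def spatial_mix(colors, coverages):
    """Area-weighted average of linear-RGB triples.
    colors: list of (r, g, b) in linear light; coverages: area fractions
    summing to <= 1 (the remaining white paper is ignored for simplicity)."""
    return tuple(sum(c[i] * a for c, a in zip(colors, coverages))
                 for i in range(3))

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
mix = spatial_mix([red, green], [0.5, 0.5])
# → (0.5, 0.5, 0.0): a half-intensity yellow in linear RGB
```

What I'm missing is the physical justification for this averaging, i.e. how dot size and viewing distance determine when the eye stops resolving the dots and starts integrating them.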
Assertion: Energy changes the kinetic energy only, and changes in potential require an intermediate change in kinetic. Reasoning: E.g., when I throw a ball upward I do work/transfer energy; at the topmost point, there's a changed potential, which is an effect of changing its position through its kinetic energy. E.g., when I lift a weight at constant kinetic energy, I'm transferring energy to increase the kinetic, as I'm doing work against gravity; I can't change the potential without changing the kinetic. I did some research, but the only answer I could find is that an input of energy can increase both kinetic and potential energy, and I don't see how it changes the potential without changing the kinetic. This seems to make sense for a lot of situations, and I need to realize where I'm going wrong with this. My question is: Is this valid, and if it is, does it hold for all fields in physics? Can energy only change the kinetic, and require an intermediate change in kinetic to alter the potential?
This is my current understanding of convolution after having read through this blog post. The convolution operator can be thought of as an operation of linear superposition. If we have the response of a linear system to a unit impulse, the overall response to an arbitrary input signal may be constructed by taking a linear superposition of the unit impulse responses, appropriately translated and scaled. This can be done through the convolution integral. On the other hand, the convolution theorem allows us to perform the equivalent convolution operation by first taking the pointwise product of the two functions' Fourier transforms, then taking the inverse transform. In other words, the convolution operation is diagonalized in Fourier space, and acts on each Fourier component of the input signal by multiplying it by its eigenvalue, the corresponding Fourier coefficient of the unit impulse response. While I follow the logic leading up to either approach, my difficulty is in finding the connection between the two: how does it intuitively make sense that superimposing unit impulse responses according to the input signal has an effect equivalent to multiplying the Fourier spectra of the two functions?
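A quick NumPy sanity check that the two views really do coincide on discrete signals (the signals are arbitrary examples; zero-padding to length len(f)+len(g)-1 avoids circular wrap-around):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 0.5])   # "input signal"
g = np.array([0.0, 1.0, 0.5])        # "unit impulse response"

# view 1: superposition of scaled, shifted copies of the impulse response
direct = np.convolve(f, g)

# view 2: pointwise product in Fourier space, then inverse transform
n = len(f) + len(g) - 1
via_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real
```

Numerically the two agree to machine precision, but the question stands: what is the intuition for why they must?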
There are two different senses in which we use the word "attribute"; for example, I can describe someone as "blond", which is a hair color. We say "blond" is a characteristic or attribute, but it is clearly the characteristic or attribute with regard to the characteristic or attribute "hair color" - "blond" is not how they smell or how tall they are; it's one of a set of attributes that fall under the attribute "hair color". I have referred to both "blond" and "hair color" by the same word, "attribute", here, and, as far as I know, all the popular synonyms of "attribute" really are synonyms - "characteristic", "trait", "property", etc. Is there any way or term or expression to verbalize the difference between an attribute in the sense of what it is - in this case, a person really being blond - and an attribute in the sense of what "category" that attribute belongs to - in this case, blond being a hair color? To put it another way, for the sentence "For the attribute "hair color", he has the attribute "blond".", I would like to find a different word for one or both of the uses of "attribute" here, so that the words used unambiguously convey that one is the actual attribute (blond) and the other is the "category of attributes" (hair color).
When conic sections are taught in high school, the concept of a focus is introduced from the geometric perspective. Well, at least if your teacher is any good. Later, once the algebraic equation of the conic is established, we find that this special geometric point can in some way be derived from the coefficients of the expressions in the algebraic equation for the conic. This is typically derived in school by moving back and forth between arguments involving geometric observation and algebraic manipulation. From an algebraic perspective, it seems quite unintuitive to me that there is a special point off the curve which is so critical to understanding the curve's geometric properties. In a way, I see it as more of a meaningful quantity than a root. Now, my questions are: using advanced mathematics, what is the algebraic intuition behind the focus? Can we generalize this intuition to talk about foci of higher-degree curves? And are there analogues of points like the focus for higher-degree algebraic curves? | 0
The classical analogy for understanding gravity in Einsteinian physics is picturing a sort of fabric that sinks when an object rests on it. Thus, due to this curvature, objects will move towards each other because of the inherent structure of the fabric-object relationship. However, in this analogy, the objects move towards each other because gravity acts on them. In other words, the curvature itself is not enough to explain this phenomenon. Thus, how can gravity be defined as the curvature of space-time if curvature itself is not enough to have an effect on two masses? I feel like this analogy tries to explain one concept (gravity) as the union of two concepts (curvature, and a force akin to gravity, which is redundant). I mean, try doing this experiment in space: set up a fabric, add two masses and curvature, but for some reason the masses don't move towards each other, because gravity doesn't act on them! (I know technically it does, but for the sake of the thought experiment it doesn't.) | 0
I have a doubt about this question (diagram given below). The question asks to "calculate the final velocity of the block in the figure", and in the solutions the work done by the normal force exerted by the surface is not considered when applying the work-energy theorem. But according to the proof of the work-energy theorem, "the net external force times the displacement over which the force was applied is equal to the change in kinetic energy of the object" (one could use integration, taking the variable force to be constant over infinitely small time intervals). By this definition, the normal force should also be added when calculating the net external force on the object, as it also always acts in the opposite direction to gravity (with a different magnitude, obviously). So why is the normal force not considered here? Is the question/solution wrong? If it is correct, why? And what is the mathematical proof for not considering the normal force when applying the work-energy theorem? | 0
Most introductory logic textbooks that I have skimmed through in a while keep the terms 'sentence' and 'expression' undefined. I would intuitively see 'Earth is round.', 'Why didn't Harry come to the party?', 'Come here, Harry!', and 'x lives in Norway.' all as sentences. However, 'x lives in Norway' is usually not taken to be a sentence in logic because it contains a variable; rather, it gives sentences after variables have been replaced by constants. It seems to me that the term 'expression' has a broader meaning and sentences are special cases of expressions. But still, what exactly is an expression? Would 'arh ahfb hghd udh' be an expression? Would 'arh', 'udh', etc. be expressions? If 'arh ahfb hghd udh' is an expression, would it be a sentence? Is it necessary for an expression to have a meaning? All these troubles, in my opinion, stem from a lack of definitions of the terms 'sentence' and 'expression'. How exactly can we define these terms? | 0
So, as I have read and have even been taught by my teachers, the sign convention in trigonometric functions is based on the location of the respective x and y points denoting the coordinates of a particle going around a circle. Although I am fairly sure that I am right, I still want to confirm this one thing: when determining the slope of a graph, we encounter both obtuse and acute angles (the angle made by the tangent of the graph at a particular point on it with the x axis). So, is it just a coincidence that tan(a) (the angle made by the tangent) is positive in the case of acute angles and negative in the case of obtuse angles, in both this graphical sense and for tan(a) of a point on a circle? Because if a is obtuse (graph), that would mean that the quantity on the y axis is decreasing, and if it is increasing, the angle is positive. So, I just want to ask: here tan(a) just happens to be equal to the slope of the graph, right? I checked the other trigonometric functions, and sin(a) seems to be negative if the angle is obtuse in the graphical sense but positive in the circular sense. Edit: I have added a picture for clarity about my definitions of the graphical and circular senses. | 0
I want to learn calculus by myself. I searched a lot on the internet, as well as Math Stack Exchange, for suggestions for the best calculus books. I know a lot of famous books like Stewart and Thomas, but I do not like the way they are presented. I want to learn rigorous mathematics from books like Apostol, Spivak, Courant, and Serge Lang, but I am not sure if I can handle them, given the level of proofs and logic they need. I am afraid that if I buy them I will not be able to solve them because of how advanced they are, so some motivation is needed. Also, please guide me if I need some more books as prerequisites to understand Apostol, Spivak, and Courant. I am reading books like How to Solve It by George Polya; if you can suggest some books which refine logic and thinking, please suggest them to me. I am also someone who enjoys mathematics but does not want to learn just the basics. It's more like a dream that one day I will be able to read and understand books like Real Analysis by Walter Rudin. Thank you! | 0
I am trying to write down a complete/detailed definition of parity symmetry. Symmetry as a concept is different in mathematics and in physics. There are also many other concepts which differ in their use in physics and mathematics, i.e. symmetry group, discrete/continuous group, continuous and discrete symmetry, etc. I am trying to consider the following concepts and phrases for the definition: parity symmetry; parity transformation; invariant property (invariance and symmetry, while similar, also differ in use); parity symmetry group; discrete symmetry group. Then, in physics: "Parity symmetry describes the invariance of a system, and its properties, under the parity transformation, a spatial transformation, represented via the symmetry group of parity, a discrete symmetry group." Is my definition, while including the above listed phrases, accurate? Also, can someone show me the fact that there is a group for parity? I.e., elements, neutral element, inverse, etc.? In other words, what are the elements and what is the operation in the symmetry group of parity? | 0
As we know, two bodies undergo radiative heat exchange due to each emitting a spectrum of light according to its temperature (blackbody radiation). When one body is hotter than the other, it emits a higher magnitude of light at each frequency. Thus the heat flow is from the hotter body to the colder body, even though light is exchanged in both directions. My question: as the photoelectric effect showed that even a large magnitude of light of the 'wrong' frequency can't eject an electron, has the experiment been done where a large magnitude (high power in watts, i.e. not according to blackbody radiation) of low-frequency light was directed at an object, to observe whether the large amount of low-frequency light heats the object? Or is this not possible for other reasons? | 0
Suppose you are trying to sell the idea of the Yoneda embedding to a perhaps rather mixed bunch of students (so you can't presuppose too much mathematical background). Still you can say: Think of a group-as-a-category (one object, all the arrows isomorphisms). Then applying the Yoneda embedding theorem we get ... [a bit of chat] ... hey, Cayley's theorem. Think of a poset-as-a-category. Then applying the Yoneda embedding we get ... [a bit more chat] ... hey, the familiar result that a poset is isomorphic to a certain bunch of subsets of its objects (upper sets) ordered by inclusion. Those are in fact the usual textbook offerings. But what third or fourth examples of such embeddings (not requiring the fully caffeinated Lemma) might work as equally accessible? Or maybe not quite as accessible but more interesting?? | 0
(Look at the picture.) Let's assume there is a horizontal plane impacted by diagonal airflow with components coming downwards and from ahead. If we say that the airflow is fully deviated by the horizontal wing (which isn't allowed to move through space), the outgoing airflow has only a horizontal component, greater than the one it had before. So, since the wing generated a force in order to change the airflow direction, the airflow generated an opposite force on the wing. According to this theory, as long as the horizontal component of the airflow isn't reduced, the wing is pushed both upwards and forwards, even though the airflow comes from ahead (which is counterintuitive). Is that correct, or am I missing something? I would normally think the wing must be pushed backwards. | 0
I'm finally closing some gaps in my understanding of sound waves, so forgive me for the many questions. It's said that sound travels fastest in metal. The reason is that molecules are more tightly packed (more dense) in metals than in, let's say, air, so collisions with each other will be fast, as each molecule doesn't have to travel far before it collides with another one; hence sound travels faster. Though I wonder: even though it will be faster, shouldn't we have a disadvantage here, namely that the signal created by the source will be weaker than it would be in air? My logical thinking is that since the molecules are tightly packed, it will be harder to transfer energy from one particle to the next (the more tightly packed, the harder it is to vibrate at the same frequency as created by the source). So even though sound travels fast in metal, it should give us a weaker signal in the end than in air. Would this logical thinking be correct somehow? | 0
I'd like to know how parallel transport behaves for non-Levi-Civita connections and how one realizes it formally. I know that parallel transport along some piecewise-smooth curve is defined through moving along geodesics: say, in the case of a conformal connection, one transports a vector along a short geodesic close to a piece of the given curve, preserving the angle at each moment of the motion; so, for example, moving along a horizontal line in the flat realization of the Lobachevsky plane (where geodesics are semicircles and vertical lines), one finds that a vector rotates uniformly. But how does one understand parallel transport in general (for a general connection given on a principal bundle)? So, are there any axioms of parallel transport besides being an isomorphism between model spaces at close points on a smooth manifold, the "additivity" axiom (moving along one piece of a curve and then the second), and the "inverse" axiom (going in both directions)? | 0
I heard that if a dish of mercury is heated by a moving flame placed under it, the mercury will spin around - Sanderson [Ivan T. Sanderson, biologist and paranormal researcher] then goes on to make the basic observation that a circular dish of mercury revolves in a contrary manner to a naked flame circulated below it, and that it gathers speed until it exceeds the speed of revolution of said flame. Now, as in this question Why is mercury magnetic? , if an electric current is passed through the mercury in the presence of a magnetic field, the mercury will spin around, but I can't find any reference to a version with a flame. Is this true/would this theoretically be true? If so, why? Or was the person who said this just misremembering the electric current version? | 0 |
While calculating the electric potential at a point near charged bodies such as a uniform ring, hollow shell, or solid sphere, I've seen that the potential at a point is equal to: V = KQ/D(avg). Here D(avg) denotes the average distance between the point of potential measurement and the elements of the object. It is somewhat like the distance between the "centre of charge of the body" and the point of measurement. This seems to work out for the few cases I've tried. It also sometimes works for the electric field at a point by replacing D with D squared, but fails in some cases, such as with a uniform ring. My doubt: is this approach of calculating the "centre of charge" always applicable for the potential at a point? Why so? And if yes, why does it fail for the electric field strength at a point? | 0
I read several times about global warming leading to more extreme weather events, i.e. flooding, droughts, and even winter storms occurring at higher rates and with more intensity. So, higher temperature supposedly leads to an increase in the variance of the probability distribution of the weather. This is not obvious at all. Naively, one might think that climate change would just shift the temperatures by several degrees. So, is there any physical mechanism explaining the increase in the variance of the weather? I thought about the equilibrium vapor pressure of water at higher temperatures, but I did not come to useful conclusions. By the way, a standard physical model is the damped harmonic oscillator. Is there any similar model to demonstrate the effects of global warming in meteorology and climate change? I'm thinking of something like a closed box with stone islands and water under some radiation. | 0
Suppose a player wishes to move on a square lattice graph without diagonals. Vertices on this graph have a chance (fixed or time-dependent, e.g. the cumulative density of X ~ Po(t) with range n to infinity) to become a "point of interest". Upon visiting such a vertex, it will revert to a "normal" vertex, which can become a "point of interest" again. The player wishes to visit all "points of interest" while travelling through the least number of edges. The elevator algorithm seems to solve a simplified version of this problem, where the player can only travel in two directions. My guess is that, if the player starts from the center of the lattice graph, the optimal choice would be either to try to walk over all the vertices exactly once for multiple rounds (a Hamiltonian path, maybe?) or to stand still. However, this guess only works for the fixed-chance scenario. Thus, I wish to know how an optimal strategy would be derived, given that a certain number of vertices are already "points of interest", especially when vertices convert to "points of interest" with a time-dependent chance. | 0
I was recently reading about the Hafele-Keating experiment and asking how time in the plane which flew westwards could have passed faster than on the surface of the Earth if the frame of reference was here. It was then that I realised that the frame of reference in the experiment was chosen to be the center of the Earth. But now I'm asking what would have happened if one really chose the frame of reference to be on the surface of the Earth. Then the plane would have had a velocity greater than that of the inertial observer on the surface of the Earth in both directions (in one, even greater due to the Earth's rotation), and time should have passed more slowly in both situations with reference to the observer on the surface of the Earth. I'm not sure if these assumptions are correct, as I've just started to learn SRT on my own. | 0
Isobars are atoms (nuclides) of different chemical elements that have the same number of nucleons. According to the https://en.wikipedia.org/wiki/Mattauch_isobar_rule, if two adjacent elements on the periodic table have isotopes of the same mass number, one of these isotopes must be radioactive. The decay can happen by positron emission, electron capture, or beta decay. When electron capture occurs, there will be a hole in the first electron shell that will quickly be filled by an electron from a higher shell, giving off what's called a https://en.wikipedia.org/wiki/Characteristic_X-ray. Something I've realised, though, is that if the isobar with more protons is the heavier nuclide, but the mass difference is less than the characteristic X-ray of the lighter nuclide, then it would be impossible to decay via capture of an electron from the innermost shell. Are there any nuclides where this is the case? | 0
This may be a very basic question; please excuse my lack of knowledge, but I don't seem to understand the concept of anti-matter gravity. Upon research, many sources align with the conclusion that anti-matter reacts to gravity similarly to matter, i.e. that space-time warps around its mass. If we consider a particle/anti-particle pair, we would expect and conclude from this that there is no gravity (no warp in space-time), i.e. just the vacuum of space. However, if we consider them as separate particles, then their corresponding gravities, pulling space-time in the same direction, would add together (since gravity is not reversed for anti-matter). We would expect the total gravity to increase as they approach each other (greater density of mass in space-time). However, as we know, a pair would have no gravitational effects. It's hypothetical, but to clarify: consider a plane vacuum with one particle and one antiparticle beginning to collide. Disregarding their electrical attraction, they each warp space-time in the same direction due to their masses. However, after the collision (as a pair) there is no gravity. So, how would their gravitational potentials dissipate and react as they approach each other and collide? | 0
I'm trying to create a cheap concave lens effect for a class I'm doing. It seems like a convex lens starts to create a similar effect anywhere past twice its focal length. It also makes everything upside down, but that's okay. Are the effects of a convex lens a demonstrational equivalent to a concave lens once you pass double the focal length? Ignoring the upside-down aspect. This is for a hand-made kaleidoscope library program, to give you some idea of the stakes. EDIT: I set up a magnifying glass and walked some distance away from it. What I see through it is: upside down, in focus, and inclusive of a wider view than what it blocks. I can see an entire shed within the lens, while it does not block me from seeing the shed around it. | 0
Applied Force is our label for a contact force that a person exerts. When an applied force acts at an angle, it is actually a combination of two forces: normal and friction. The component of the applied force that is perpendicular to the surface is a normal force, and the component parallel to the surface is a friction force. I have two questions. First, when the applied force is at an angle, the friction and normal forces are between the surface of the object and the hand, not between the object and the ground, so can we say that the parallel component of the applied force is responsible for the acceleration of the object? (Is it okay to say that it is the friction force between our hand and the object that accelerates the object?) Second, what will happen if an applied force acts on a thread? Can we also say that the applied force is the combination of the normal and friction forces between my hands and the thread? If so, would the tension force also be called a combination of normal force and friction force? If I am wrong, please correct me. | 0
I believe this question has been asked before, but not like this. The popular answer to this question is that the slide-release action of a bow sets up vibrations in the strings, of which ultimately only the resonant frequencies survive. Plucking a guitar string sets up a transverse oscillation on the string at its resonant frequencies. But when you play a violin, you slide a bow over the strings. We know that the string first attaches to the bow, then releases once the static friction is overcome, and this process repeats again and again. But how does this process ensure periodicity? We don't hear a discontinuous noise from a violin; rather, we hear smooth continuous notes. How does the string know when to catch the bow, and when to release it? | 0
Bell's theorem seems to disprove localism, because measuring, let's say, the spin of an entangled electron seems to communicate the measurement to its partner instantaneously. But isn't another thing possible? Maybe the electrons are not communicating anything, and instead the two instruments which are measuring the electrons "know" at what angle the other instrument is measuring the other electron; that is part of the measuring process of the instrument, and thus Bell's inequality is violated but local realism is still valid. Here the two instruments and the people who are performing the experiments are in some kind of weird sync, where they cannot measure arbitrarily in any direction; instead the measurement angles are predetermined beforehand, and the instruments already know each other's measurement angles, hence the spin directions can be correlated. This would mean that the Universe is deterministic but also locally real. Is this at least theoretically possible? If it is possible, wouldn't it be a saner theory to adopt rather than throwing local realism away? Why? Why not? | 0
In "Gravitation" by Misner, Thorne and Wheeler, the authors pose the following puzzle: The metric perturbation of the wave changes the scale of distances slightly, but also correspondingly changes the scale of time. Therefore, does not any possibility of any really meaningful and measurable effect cancel out? And they also give an answer: The widened separation between the geodesics is not a local effect but a cumulative one. [...] When one investigates the separation of geodesics [...] over a large number of periods he finds a cumulative, systematic, net slow bending of the rapidly wiggling geodesics toward each other. This small, attractive acceleration is evidence in gravitation physics [...]. Is LIGO measuring only a net acceleration (or maybe a net displacement as a consequence of the net acceleration) of the mirrors, instead of a wiggling? The LIGO collaboration states the opposite. On the FAQ page (https://www.ligo.caltech.edu/page/faq) you can find: Back and forth the waves pass through (interfere with) each other as the arms themselves change length, causing light interference that ranges fully between totally destructive to totally constructive. In other words, instead of nothing coming out of the interferometer, a flicker of light appears. And how could we obtain the frequency of the gravitational wave when averaging over a large number of periods? But if I misunderstood the statements in the book "Gravitation", what is the correct solution to the puzzle stated at the beginning? | 0
Consider a simple gas (or fluid) within a box at thermal equilibrium. I manage to give a kick to one particle within the gas, such that it acquires some momentum. After some time, it should be expected that this localized energy (or momentum) input will just be distributed such that the Boltzmann distribution is recovered. But how does this recovery take place? The simplest mode of transport of this momentum disturbance could be, for example, a sound mode; that is, a sequence of kicks in the same direction as the initial kick is generated, starting from the kicked particle. But I can also imagine that after the first kick, momentum is transferred in a slow, random-walk manner. Here the momentum disturbance would be tracing some kind of random trajectory instead of a linear trajectory, as it would for a sound mode. How does one know which kinds of modes take place (roughly depending on kick strength, density, temperature and so on)? | 0
In this post (Reconstructing isomorphisms via the bijection between the corresponding posets of subobjects) I asked about the possibility of constructing an isomorphism via the order-preserving bijection between the corresponding posets of subobjects. In the case of the category of sets, one can just work with the singletons to define the bijection - I'm working with classical NBG set theory. Now, an interesting counterexample was exhibited in the aforementioned post. However, I have another question: which categories share the same property as Set (i.e. give rise to an isomorphism between the corresponding posets of subobjects)? I was thinking of toposes. Might this be a good way, or are there also counterexamples coming from toposes? Actually, there exists a paper by Barr and Diaconescu entitled "Atomic Toposes" (https://www.math.mcgill.ca/barr/papers/atom.top.pdf). Well, I think that what I'm searching for is related to the fact that the subobject lattice is atomistic, i.e. every element may be expressed as the join of its atoms. Is this property related to my problem? | 0
If something is said and people understand it, it sounds like a word to me. If people didn't understand it, I would think they would rather say, "What do you mean?", than make it a word after a definition is explained. Or are there exceptions I'm not thinking of? Dictionary or not, if people understand something, even if just locally, it's a word. Of course, there is some irony in people seemingly claiming to have a mastery of language who do not know what the word 'word' means. One example may be this David Cross bit I've heard, where he said 'irregardless' isn't a word, even though it's been used for a long time and is understood. However, it is considered nonstandard in at least some dictionaries. There may be cases where someone incorrectly thinks that a valid word isn't part of standard English even though it actually is. I would still include this in the scope of the question, because they think it's wrong, whether they are right or not. If someone is making a mistake, if it's still understood, it'd still be a word, for that moment at least. A more common example may be the use of "ain't". (Kids exclaiming 'that's not a word' out of disbelief would be an exception, but I think it's a rather different usage.) | 0
I read this fascinating paper (RG). The total collision is a well-known singularity in Newtonian mechanics: the distances become zero and, therefore, the potential becomes infinite. In a paper before the one above, Paula Reichert describes an attempt to calculate through the singularity using shape dynamics. In shape dynamics, the absolute background space is abandoned and only relative positions and especially angles are considered. Unfortunately, shape dynamics alone turns out not to be sufficient to calculate through the singularity. Then, in the paper above, they make a reference to this paper, where they calculate through the big bang singularity by adding a scalar "stiff matter" field in the Bianchi IX universe. How can one understand this scalar field? Since shape dynamics doesn't rely on an absolute space as a background, it comes to my mind that this scalar field may be interpreted as space itself, space that surrounds every single mass point as a field. Is that interpretation right or wrong? If wrong, how can one get an understanding of this scalar field? | 0
I'm attempting to develop an understanding of how equations are developed, and I wondered whether all equations started their development in the quest to document observable phenomena, or whether there are any 'purely synthetic' equations, developed from the basis of a thought in order to produce numeric results which are useful in some way. Put another way: do observations of phenomena drive the development of equations, or does a conceptual need for a tool drive their development instead? The reason I ask is that, as someone who is a novice in mathematics (arithmetic skill, elementary algebra and descriptive stats), the choice of the construction of equations / their form always leads me to wonder why a given form was chosen. In particular, as I delve further into stats, I have seen descriptions where certain equations may be adjusted based 'on one's need', which further deepens the difficulty of just using a tool blindly and wondering at what point or in what way I can develop the skill to discern how to alter an equation to better fit a desired outcome... I attempted to articulate this originally HERE, then asked about whether it made sense to start a new question HERE, but for lack of feedback in either, I decided to open this query and will post any positive results as link-backs to those two questions for the community's later use. If there is a better way to approach asking this, please let me know and I'll be happy to adjust accordingly. | 0
In a laboratory, a vessel was built which can sustain high pressure; thermostats and a pressure gauge were connected. Assuming a closed system, dry ice was introduced into the closed vessel and the temperature of the vessel was increased. The sublimation of the dry ice resulted in the formation of gaseous carbon dioxide, increasing the pressure in the vessel; the increased pressure and temperature resulted in liquid carbon dioxide formation. After more heating, the critical temperature was reached and supercritical carbon dioxide was obtained; in this state the gas and liquid phases became indistinguishable. Ice was then put upon the vessel's surface, which resulted in a decrease of temperature, and the liquid carbon dioxide appeared again. My question: if we look at a typical pressure-vs-volume diagram, it is observed that above the critical conditions it is not possible to condense the fluid by reducing the temperature, since the point will lie above the dome; then how come we see condensation of the supercritical carbon dioxide in our experiment? Note: the liquid-gas interface boundary was seen disappearing, as is typically seen in supercritical fluids; hence it's confirmed that we had supercritical carbon dioxide. | 0
I encountered a proof that the empty set is a subset of every set via this comment (Is "The empty set is a subset of any set" a convention?), which shows that it cannot be false that the empty set is a subset of every set. Without necessarily going into a proof of how the empty set is a subset of every set, I was wondering why the fact that it cannot be false that the empty set is a subset of every set shows that this is true. Could it not be the case that the concept of subsets is meaningless with regard to the empty set, and that it is not enough to show that it could not be false; that this statement could be neither true nor false, as it has no meaning in this context? Also, I would appreciate some explanation as to how this condition holds "vacuously", as far as terminology goes, as I have learned that an implication is vacuously true when its hypothesis is false. Thanks | 0
I have a very abstract and at the same time awkward question. In many formulas across physics we need to make several approximations, and often we derive a formula from a previous formula which had certain approximations in it. From error analysis we know that errors keep getting carried forward. Hypothetically, can there be a situation in the future when almost every advanced formula we use, being the result of derivations from several heavily derived formulas (by "heavily derived formulas" I mean those which are derived from highly approximated formulas), is filled with significantly high error, and our calculations are thrown off by a huge percentage of error and inaccuracy? Can error propagation in formulas and theorems be a major problem we face in the future of physics? (Please note that I am not talking about the approximations involved in the process of integration.) I am just curious to know to what level the approximations we take affect the calculations we perform. I know this is a very awkwardly framed question. If you have any suggestions for improving the language of the question or any other changes, please let me know. | 0
Per Wikipedia: natural frequency, also known as eigenfrequency, is the frequency at which a system tends to oscillate in the absence of any driving force. Let's take a wine glass as an example. The glass sits on a table and is not visibly moving; but since natural frequency exists, it must be oscillating on some minute scale. Where does that oscillation come from? Is it from the electrons and protons whizzing around inside the glass, excited by stochastic heat? Is it because protons are hitting the surface of the glass? Is it oscillating due to minor tremors in the earth, or perhaps the minute attractive forces exerted on it by all other objects? In each case there is clearly a driving force: heat, impact forces, tremors. So natural frequency must be associated with a type of oscillation that is none of the above. Where does the oscillation come from instead? It seems that for any proposed source of the oscillation, you can identify a driving force; hence such a driving force is never absent, and hence there is no natural frequency. Are all objects naturally oscillatory, with natural frequency a fundamental property of an object, like its mass? If so, can we identify this natural frequency in some way? E.g., I want to know my body's natural frequency at this moment.
My admittedly limited understanding of QM is that whether a photon is (re)transmitted through a polarising filter is a matter of probability, and that this probability is a function of the relative orientation of the photon's polarisation and that of the filter. Unfortunately, a single detection event cannot distinguish a very likely outcome from a very unlikely one. Since the same applies when the photon is absorbed, it seems that not much information, if any, can be gained, when what would be nice is to learn something about the polarisation of the photon, for example. Even if statistics are collected on a stream of successive photons, each is likely to be oriented differently relative to the filter, so not much information can be gained by examining the statistics either.
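To make the probabilistic picture concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the question) assuming the standard single-photon version of Malus's law: a photon whose polarisation makes relative angle theta with the filter axis is transmitted with probability cos²(theta).

```python
import math
import random


def transmitted(theta: float, rng: random.Random) -> bool:
    """Simulate one photon meeting an ideal polariser.

    theta: relative angle (radians) between the photon's polarisation
    and the filter axis. Transmission probability is cos^2(theta),
    i.e. Malus's law applied at the single-photon level.
    """
    return rng.random() < math.cos(theta) ** 2


if __name__ == "__main__":
    rng = random.Random(0)
    n = 100_000
    for theta in (0.0, math.pi / 6, math.pi / 3):
        hits = sum(transmitted(theta, rng) for _ in range(n))
        print(f"theta = {theta:.3f} rad: observed {hits / n:.3f}, "
              f"expected {math.cos(theta) ** 2:.3f}")
```

The point of the question shows up directly in this sketch: a single `True`/`False` return value is equally compatible with almost any theta, and only the long-run frequency over many identically prepared photons recovers cos²(theta).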