http://www.purplemath.com/learning/viewtopic.php?p=7193
## Establishing an identity: tan(x)/(tan^2(x)-1)=1/(tan(x)-cot(

Trigonometric ratios and functions, the unit circle, inverse trig functions, identities, trig graphs, etc.

tampster
Posts: 3
Joined: Sun Feb 26, 2012 4:35 am

### Establishing an identity: tan(x)/(tan^2(x)-1)=1/(tan(x)-cot(

Establish the following identity:

$\frac{\tan(x)}{\tan^2(x)\, -\, 1}\, =\, \frac{1}{\tan(x)\, -\, \cot(x)}$

My attempt was (using the left-hand side):

· Convert the tan and tan^2 to sin(x)/cos(x) and sin^2(x)/cos^2(x) respectively.

$\mbox{Step 1. }\, \frac{\frac{\sin(x)}{\cos(x)}}{\frac{\sin^2(x)}{\cos^2(x)}\, -\, 1}$

$\mbox{Step 2. }\, \left(\frac{\sin(x)}{\cos(x)}\right)\, \times \,\left(\frac{\cos^2(x)}{\sin^2(x)}\, -\, 1\right)$ (cross multiply by the flipped denominator?)

Step 3. FOIL the sin(x)/cos(x).

Step 4. Find a common denominator to get:

$\frac{\cos^2(x)\, -\, \sin^2(x)}{\sin(x) \cos(x)}$

I don't see how to simplify any further to get it to match the right-hand side.

stapel_eliz
Posts: 1628
Joined: Mon Dec 08, 2008 4:22 pm

[Quoting Steps 1 and 2 above.] Since you did not have the two original terms combined using a common denominator, you cannot flip "the" fraction underneath. Try doing the conversion first, and see where this leads:

. . . . .$\frac{\frac{\sin(x)}{\cos(x)}}{\frac{\sin^2(x)}{\cos^2(x)}\, -\, 1} \, =\, \frac{\left(\frac{\sin(x)}{\cos(x)}\right)}{\left(\frac{\sin^2(x)}{\cos^2(x)}\, -\, \frac{\cos^2(x)}{\cos^2(x)}\right)}$

...and so forth.

tampster
Posts: 3
Joined: Sun Feb 26, 2012 4:35 am

### Re: Establishing an identity: tan(x)/(tan^2(x)-1)=1/(tan(x)-

Well, I attempted the problem using your suggestion and I was able to reduce it to

$\frac{\sin(x) \cos(x)}{\sin^2(x)\, -\, \cos^2(x)}$

but I do not see any further simplification possibilities. The denominator could be changed to -cos(2x), but I don't see that helping. The denominator could be factored to get (sin(x)-cos(x))(sin(x)+cos(x)), but I don't see that helping either. PLZ HELP my HW is due tomorrow

maggiemagnet
Posts: 358
Joined: Mon Dec 08, 2008 12:32 am

### Re: Establishing an identity: tan(x)/(tan^2(x)-1)=1/(tan(x)-

Try working on the other side. Change this to sines and cosines and do the common denominator. Then turn the dividing into multiplying, and compare what you get with what you already have on the left-hand side. You can then do the proof by showing the steps from where you are on the left-hand side, going backwards through the steps on the right-hand side, until you get to the starting place on the right-hand side.

tampster
Posts: 3
Joined: Sun Feb 26, 2012 4:35 am

### SOLVED Establishing an identity: tan(x)/(tan^2(x)-1)=1/(tan(

Solved: divide the numerator and denominator of the original left-hand side by tan(x):

$\frac{\tan(x)}{\tan^2(x)\, -\, 1}\, =\, \frac{1}{\tan(x)\, -\, \frac{1}{\tan(x)}}\, =\, \frac{1}{\tan(x)\, -\, \cot(x)}$
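For reference, here is one way the suggested sines-and-cosines conversion plays out in full. (This completion is a reconstruction from the hints above, not a post from the original thread.)

$\frac{\tan(x)}{\tan^2(x)\, -\, 1}\, =\, \frac{\frac{\sin(x)}{\cos(x)}}{\frac{\sin^2(x)\, -\, \cos^2(x)}{\cos^2(x)}}\, =\, \frac{\sin(x)\cos(x)}{\sin^2(x)\, -\, \cos^2(x)}\, =\, \frac{1}{\frac{\sin^2(x)\, -\, \cos^2(x)}{\sin(x)\cos(x)}}\, =\, \frac{1}{\frac{\sin(x)}{\cos(x)}\, -\, \frac{\cos(x)}{\sin(x)}}\, =\, \frac{1}{\tan(x)\, -\, \cot(x)}$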
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664420485496521, "perplexity": 1885.336038439441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321458.47/warc/CC-MAIN-20170627152510-20170627172510-00541.warc.gz"}
http://www.kierandkelly.com/whats-driving-evolution/
What’s Driving Evolution?

According to the Second Law of Thermodynamics we live in a universe that irreversibly decays over time. But if this is indeed the case, then it begs the question: “How does Evolution’s Spontaneous and Progressive Complexity occur without some form of External Organizing Force?”

Evolution vs. Entropy

The Second Law of Thermodynamics (SLOT) is a law of physics that deals with “Spontaneous Change” (i.e. change that occurs without any external direction, change that happens all by itself…)

The SLOT is, more precisely, the law of physics that deals with how energy distributes itself within a thermal system, always moving spontaneously and irreversibly to “Thermal Equilibrium”. In everyday terms the SLOT is simply the fact that hot coffee and cold milk, if left unstirred, will spontaneously mix themselves (in both composition and temperature), and will never spontaneously un-mix. Despite the fact that this seems rather obvious and trivial behavior, the SLOT is nonetheless considered to be one of the most fundamental and important laws of physics — and the reason for this exalted status is that the SLOT is both a “Probabilistic Law” and also the “Law of Maximum Entropy”!

“Entropy” is a concept that deals with the amount of “disorder” in a system, and it is widely understood that the spontaneous gravitational pull to maximum entropy is not restricted to simple thermal systems: all systems, if left undisturbed, will spontaneously gravitate towards a state of maximum disorder — a state that would seem to be the exact opposite of Nature’s “spontaneously self-organized complexity”.

This apparent conflict between physics and natural evolution obviously begs the question: “How does Evolution manage to spontaneously generate such incredible Complexity in the face of the SLOT?” How can natural complexity spontaneously arise in a universe dominated by the SLOT and its spontaneous and irreversible pull to disorder? What exactly is the “Source” of all of Nature’s spontaneous order and complexity?

The Export of Entropy

In 1977, the Belgian chemist Ilya Prigogine won the Nobel Prize for Chemistry for his work on the “Theory of Dissipative Structures”. Prigogine’s theory suggests that complex ordered systems can indeed come into existence if these systems are open and capable of “exporting” their internal disorder to the external environment.

But while this theory would seem to go some way towards solving the paradox of how order can occur without negating the SLOT, it still does not manage to identify what fundamental forces are actually driving evolution to evermore progressive complexity. Physics has as yet offered no explanation for “evolution’s progressive arrow of time”.

As it turns out, however, the resolution of this paradox is actually quite easy. To resolve this apparent conflict between physics and natural evolution we need merely to focus on a very simple fact that has been consistently overlooked about the “probabilistic” SLOT: the fact that it relies heavily on the “Law of Large Numbers (LLN)”…

The LLN

Most people are familiar with the concept that if we toss a coin four times, we won’t necessarily get a 50/50 split of heads and tails: indeed, we could actually get 4 tails in a row. But if we toss the same coin a million times, we will almost certainly get something close to a 50/50 split. It is the LLN that ensures that one million coin tosses will produce an average of 50% heads and 50% tails.
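A quick simulation makes the contrast concrete. (A minimal sketch; the toss counts are the ones used in the text, and the script is illustrative rather than part of the original essay.)

```python
import random

def heads_fraction(n_tosses):
    """Fraction of heads in n_tosses independent fair coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# Four tosses: the split is often far from 50/50 (all tails is possible).
print([heads_fraction(4) for _ in range(5)])

# One million tosses: the LLN pins the fraction very close to 0.5.
print(heads_fraction(1_000_000))
```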
[Note: In the simplest possible mathematical terms, the reason the LLN works so well is that the number of independent tosses (i.e. 1,000,000) is significantly larger than the number of options available to each toss (i.e. 2 – heads or tails).]

The SLOT states that, left undisturbed, all systems gravitate towards the “most probable state”, a state that is referred to as “thermal equilibrium”. In reality, however, the achievement (and sustainment) of thermal equilibrium relies heavily on the number of independent elements (of the system) being significantly larger than the number of energy options available to each element, which means that the chances of any “statistical deviations” from the “most probable state” are extremely small, and consequently the system as a whole will (virtually) always exhibit uniformity.

So although on the “microscopic level” (of particle interaction) there is a lot of energetic dynamics and non-equilibrium abnormalities, these dynamics and abnormalities are normally invisible on the macro “system level” thanks to both the “Damping” and “Balancing Effects” of the LLN…

The RLLN

Our universe is fundamentally a universe of “systems”, and the probabilistic pull of equilibrium is a concept that is applicable to all fluid and fluid-like systems.

Now, in a thermal system there are billions of tiny particles which interact through collisions, but other than that we can more or less say that they behave completely independently of each other.

Systems where the parts – be they particles, elements, components, entities, agents, organizations, etc. – behave independently of each other are actually quite rare, however. Many systems are populated by adaptive elements or agents, and the behavior of these agents has a tendency to weaken the gravitational pull of equilibrium by engineering the “Reverse Law of Large Numbers (RLLN)”…

[Note: Since the LLN relies on the number of independent elements being significantly larger than the number of options available to each element, there are therefore two things that can engineer the RLLN: either the number of independent elements in the system comes down, or the number of options available to each element goes up…]

RLLN 1: Emergent Positive Feedback

In all fluid-like systems, the LLN ensures the spontaneous movement to a “global equilibrium”; however, for very small regions within these systems, there are not enough particles to ensure a “local equilibrium”. At the very lowest level within all systems, random fluctuations are undampable and occurring all the time, which means that local imbalances are constantly, and randomly, flittering in and out of existence.

Occasionally these random temporary fluctuations can be very persistent. In a thermal system this is naught but a mere statistical curiosity, but in a complex adaptive system it can easily happen that some parts within the system will begin to adapt to these persistent fluctuations; and often such adaptation can serve to amplify the imbalance even further, and in so doing, further extend the fluctuation’s duration. Thus random local fluctuations can lead to the localized emergence of positive feedback, which reduces the independence of the elements and ultimately has an unbalancing and reversing effect on the LLN.

RLLN 2: Insufficient Negative Feedback

Positive feedback, however, is not the only thing that can engineer the RLLN.
Since the LLN effectively operates like a negative feedback system (in that it dampens a system to an equilibrium), it should be no surprise that the movement away from equilibrium could also be the result of insufficient negative feedback.

So although complex fluid-like systems might gravitate towards equilibrium, many can hold themselves some distance away from equilibrium by exhibiting excessive undampable adaptation and innovation. Adaptation and innovation effectively increase element “Optionality”, and such increased optionality among the elements of the system can also engineer the RLLN…

So the reality of probability-driven dynamics in the natural world is that just as the LLN pulls a system to thermal equilibrium, so too the RLLN can hold, or drive, a system away from equilibrium.

But ultimately what is most interesting about all of this probabilistic behavior is that, while strong positive feedback in isolation can cause the emergence of self-reinforcing local segregation, and while insufficient negative feedback in isolation can cause the surfacing of incompressible innovative diversity, the most interesting stuff actually occurs at the intersection between the two…

Positive reinforcement in a system of great diversity can spontaneously produce surprisingly complex “Integrated Diversity”. So in other words, the most interesting stuff occurs with the co-emergence of diversity…

Natural Complexity

Evolution’s progressive complexity is often portrayed as spontaneous “Self-Organization”, but this is not exactly accurate. The secret sauce of evolution’s spontaneous and progressive complexity is actually spontaneous “Self-Integration”.

In the simplest possible terms, Natural Complexity emerges from the finely-tuned self-integration of co-emergent self-organized diversity; and as a consequence “the complex whole is forever becoming greater than its less complex parts”…

So there we go, Natural Complexity explained (by mathematical probability). “Easy Peasy Lemon Squeezy”…

In a universe supposedly dominated by the SLOT, what drives nature’s progressive evolution is simply the mathematical interplay of the two distinct forms of the Reverse Law of Large Numbers…

What drives evolution’s spontaneous and progressive complexity is the interplay of insufficient negative feedback and strong positive feedback; or in other words, what drives evolution is The Interplay of Random Innovation and Natural Reinforcement…

Author: Kieran D. Kelly

Experimental Computer Scientist, and Specialist in Complex Nonlinear Systems and Dynamics.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8249160647392273, "perplexity": 1296.6292800419606}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669868.3/warc/CC-MAIN-20191118232526-20191119020526-00553.warc.gz"}
http://physicspages.com/2014/04/05/riemann-tensor-symmetries/
## Riemann tensor: symmetries

Reference: Moore, Thomas A., A General Relativity Workbook, University Science Books (2013) – Chapter 19; a-b.

We can derive a few useful symmetries of the Riemann tensor by looking at its form in a locally inertial frame (LIF). At the origin of such a frame, all first derivatives of ${g_{ij}}$ are zero, which means the Christoffel symbols are all zero there. However, the second derivatives of ${g_{ij}}$ are not, in general, zero, so the derivatives of the Christoffel symbols will not, in general, be zero either. Using the definition of the Riemann tensor:

$\displaystyle R_{\; j\ell m}^{i}\equiv\partial_{\ell}\Gamma_{\; mj}^{i}-\partial_{m}\Gamma_{\;\ell j}^{i}+\Gamma_{\; mj}^{k}\Gamma_{\;\ell k}^{i}-\Gamma_{\;\ell j}^{k}\Gamma_{\; km}^{i} \ \ \ \ \ (1)$

we can write it at the origin of a LIF:

$\displaystyle R_{\; j\ell m}^{i}=\partial_{\ell}\Gamma_{\; mj}^{i}-\partial_{m}\Gamma_{\;\ell j}^{i} \ \ \ \ \ (2)$

The Christoffel symbols are

$\displaystyle \Gamma_{\; ij}^{m}=\frac{1}{2}g^{ml}\left(\partial_{j}g_{il}+\partial_{i}g_{lj}-\partial_{l}g_{ji}\right) \ \ \ \ \ (3)$

The symmetries of the Riemann tensor are easiest to write if we look at its form with all indices lowered, that is:

$\displaystyle R_{nj\ell m} = g_{nk}R_{\; j\ell m}^{k} \ \ \ \ \ (4)$

$\displaystyle = g_{nk}\left(\partial_{\ell}\Gamma_{\; mj}^{k}-\partial_{m}\Gamma_{\;\ell j}^{k}\right) \ \ \ \ \ (5)$

First, we calculate the derivative:

$\displaystyle \partial_{\ell}\Gamma_{\; mj}^{k}=\frac{1}{2}\partial_{\ell}g^{ki}\left(\partial_{j}g_{mi}+\partial_{m}g_{ij}-\partial_{i}g_{jm}\right)+\frac{1}{2}g^{ki}\left(\partial_{\ell}\partial_{j}g_{mi}+\partial_{\ell}\partial_{m}g_{ij}-\partial_{\ell}\partial_{i}g_{jm}\right) \ \ \ \ \ (6)$

At the origin of a LIF, the first term is zero since all first derivatives of ${g_{ij}}$ are zero, so we're left with

$\displaystyle \partial_{\ell}\Gamma_{\; mj}^{k}=\frac{1}{2}g^{ki}\left(\partial_{\ell}\partial_{j}g_{mi}+\partial_{\ell}\partial_{m}g_{ij}-\partial_{\ell}\partial_{i}g_{jm}\right) \ \ \ \ \ (7)$

Multiplying this by ${g_{kn}}$ and using ${g_{kn}g^{ik}=\delta_{n}^{i}}$, we have

$\displaystyle g_{kn}\partial_{\ell}\Gamma_{\; mj}^{k} = \frac{1}{2}\delta_{n}^{i}\left(\partial_{\ell}\partial_{j}g_{mi}+\partial_{\ell}\partial_{m}g_{ij}-\partial_{\ell}\partial_{i}g_{jm}\right) \ \ \ \ \ (8)$

$\displaystyle = \frac{1}{2}\left(\partial_{\ell}\partial_{j}g_{mn}+\partial_{\ell}\partial_{m}g_{nj}-\partial_{\ell}\partial_{n}g_{jm}\right) \ \ \ \ \ (9)$

By substituting indices, we can get the second term in (5):

$\displaystyle g_{nk}\partial_{m}\Gamma_{\;\ell j}^{k}=\frac{1}{2}\left(\partial_{m}\partial_{j}g_{\ell n}+\partial_{m}\partial_{\ell}g_{nj}-\partial_{m}\partial_{n}g_{j\ell}\right) \ \ \ \ \ (10)$

Subtracting (10) from (9) we see that the middle terms cancel, so we're left with

$\displaystyle \boxed{R_{nj\ell m}=\frac{1}{2}\left(\partial_{\ell}\partial_{j}g_{mn}+\partial_{m}\partial_{n}g_{j\ell}-\partial_{\ell}\partial_{n}g_{jm}-\partial_{m}\partial_{j}g_{\ell n}\right)} \ \ \ \ \ (11)$

This equation is valid only at the origin of a LIF. From this we can get some symmetry properties. First, if we interchange the first two indices ${n}$ and ${j}$ in the tensor, we see that the first and third terms on the RHS in (11) swap, as do the second and fourth, so we end up with the negative of what we started with.
That is

$\displaystyle \boxed{R_{jn\ell m}=-R_{nj\ell m}} \ \ \ \ \ (12)$

If we interchange the last two indices ${\ell}$ and ${m}$, again the first term swaps with the fourth, and the second with the third, so we get the same result:

$\displaystyle \boxed{R_{njm\ell}=-R_{nj\ell m}} \ \ \ \ \ (13)$

If we swap the first and third indices, and also the second and fourth, we get

$\displaystyle R_{\ell mnj} = \frac{1}{2}\left(\partial_{n}\partial_{m}g_{j\ell}+\partial_{j}\partial_{\ell}g_{mn}-\partial_{n}\partial_{\ell}g_{mj}-\partial_{j}\partial_{m}g_{n\ell}\right) \ \ \ \ \ (14)$

$\displaystyle = R_{nj\ell m} \ \ \ \ \ (15)$

Thus the Riemann tensor is symmetric under interchange of its first two indices with its last two:

$\displaystyle \boxed{R_{\ell mnj}=R_{nj\ell m}} \ \ \ \ \ (16)$

A final symmetry property is a bit more subtle. If we cyclically permute the last 3 indices ${j}$, ${\ell}$ and ${m}$ and add up the 3 terms, we get

$\displaystyle R_{nj\ell m}+R_{n\ell mj}+R_{nmj\ell} = \frac{1}{2}\left(\partial_{\ell}\partial_{j}g_{mn}+\partial_{m}\partial_{n}g_{j\ell}-\partial_{\ell}\partial_{n}g_{jm}-\partial_{m}\partial_{j}g_{\ell n}\right) + \frac{1}{2}\left(\partial_{m}\partial_{\ell}g_{jn}+\partial_{j}\partial_{n}g_{\ell m}-\partial_{m}\partial_{n}g_{\ell j}-\partial_{j}\partial_{\ell}g_{mn}\right) + \frac{1}{2}\left(\partial_{j}\partial_{m}g_{\ell n}+\partial_{\ell}\partial_{n}g_{mj}-\partial_{j}\partial_{n}g_{m\ell}-\partial_{\ell}\partial_{m}g_{jn}\right) \ \ \ \ \ (17)$

Using the symmetry ${g_{ij}=g_{ji}}$ and the fact that partial derivatives commute, we find that the first two terms in the first bracket cancel with the last two terms in the second bracket, the first two in the second bracket cancel with the last two in the third, and the first two in the third bracket cancel with the last two in the first, giving the result:

$\displaystyle \boxed{R_{nj\ell m}+R_{n\ell mj}+R_{nmj\ell}=0} \ \ \ \ \ (18)$

We've derived these results for the special case at the origin of a LIF. However, the origin of a LIF defines one particular event in spacetime and since all these symmetries are tensor equations, they must be true for that particular event, regardless of which coordinate system we're using. Further, in our discussion of LIFs, we showed that we could define a LIF with its origin at any point in spacetime, provided that point is locally flat (that is, that there is no singularity at that point). So the argument shows that these symmetries are true for all non-singular points in spacetime.

Incidentally, it might be confusing that we can say that these symmetries are universally valid at all points in all coordinate systems just because they are tensor equations, while we say that (11) is valid only at the origin of a LIF. The difference is that (11) is written explicitly in terms of a particular metric ${g_{ij}}$ and that metric is defined precisely so that all its first derivatives are zero at the origin of the LIF. If we wanted an equation for ${R_{nj\ell m}}$ at some other point in spacetime, we could write it in the same form, but we'd need to find a different metric ${g_{ij}}$ whose first derivatives are zero at this other point.
If we wanted to use the original metric, then since this other point is not at the origin of the original LIF, the ${\Gamma_{\; k\ell}^{j}}$ would not be zero at this point since the derivatives of ${g_{ij}}$ wouldn’t be zero there, and the expression for ${R_{nj\ell m}}$ would be more complicated in terms of the original metric.
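As a sanity check, the boxed formula (11) and the symmetries (12), (13), (16) and (18) can be verified symbolically. The sketch below is my own illustration (not from the original post): it models the metric components as unspecified smooth functions with ${g_{ij}=g_{ji}}$ and tests each identity with SymPy.

```python
import itertools
import sympy as sp

dim = 4
x = sp.symbols('x0:4')  # coordinates

# A generic symmetric metric: unspecified smooth functions with g[i][j] = g[j][i].
g = [[None] * dim for _ in range(dim)]
for i in range(dim):
    for j in range(i, dim):
        g[i][j] = g[j][i] = sp.Function(f'g{i}{j}')(*x)

def R(n, j, l, m):
    """Eq. (11): R_njlm at the origin of a LIF,
    (1/2)(d_l d_j g_mn + d_m d_n g_jl - d_l d_n g_jm - d_m d_j g_ln)."""
    return sp.Rational(1, 2) * (
        sp.diff(g[m][n], x[l], x[j]) + sp.diff(g[j][l], x[m], x[n])
        - sp.diff(g[j][m], x[l], x[n]) - sp.diff(g[l][n], x[m], x[j]))

for n, j, l, m in itertools.product(range(dim), repeat=4):
    assert sp.expand(R(j, n, l, m) + R(n, j, l, m)) == 0                  # eq. (12)
    assert sp.expand(R(n, j, m, l) + R(n, j, l, m)) == 0                  # eq. (13)
    assert sp.expand(R(l, m, n, j) - R(n, j, l, m)) == 0                  # eq. (16)
    assert sp.expand(R(n, j, l, m) + R(n, l, m, j) + R(n, m, j, l)) == 0  # eq. (18)
print("all four symmetries hold")
```

The cancellations happen automatically because SymPy canonicalizes mixed partial derivatives (they commute) and the two index orders of each metric component are the same symbol.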
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 57, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532004594802856, "perplexity": 123.1275088599374}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462232.5/warc/CC-MAIN-20150226074102-00317-ip-10-28-5-156.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/29313/michael-edenfield?tab=activity
# Michael Edenfield

reputation 5 · website kutulu.org · location Florida · age 39 · member for 2 years, 9 months · seen 7 hours ago · profile views 31

# 33 Actions

- Dec 23 · comment on "What's behind the Banach-Tarski paradox?": +1 because I can upvote for any reason I feel like?
- Oct 7 · asked "Choice of bounds for functions 'defined' as integrals using the FTC"
- Jun 11 · comment on "Use of the word 'solve'?": Oddly enough, in English we "take" derivatives but not integrals....
- Apr 7 · awarded Popular Question
- Mar 19 · comment on "Is a brute force method considered a proof?": @camel Brute force proofs, in general, are frowned upon not just because the chance of errors is much greater (which is one reason), but also because they tend to imply that there is no fundamental mathematical "reason" why the proof works; it "just so happens to work" for every possible case. IOW, there is a vague notion that a brute force proof is "a valid proof of a now provably-boring statement".
- Mar 14 · comment on "Does an equation containing infinity not equal 0 or infinity exist?": @KRyan No, you can produce a limit that shows $\infty^0$ to be anything you want; for example, $$\lim_{x\to\infty} x^{1/\ln x} = \infty^0 = e$$ This is why such expressions are called "indeterminate forms"; there's not enough information in $\infty^0$ to determine a single value for it.
- Jan 2 · comment on "Taking seats on a plane: probability that the last two persons take their proper seats": I'm confused how you got the term for case 3, since the "rules" for seating aren't recursive; in particular, if passenger #2 finds his seat empty (passenger #1 did not take it) then #2 will always sit there, so the probability that #2 sits in the correct seat is (n - 1)/n, not 1/(n - 1)... isn't it?
- Oct 12 · comment on "Is zero irrational?": @Stefan4024 Its definition varies with the branch of mathematics you're using; it would be undefined in algebra, but in calculus (where we have limits) it represents a limit of zero.
- Sep 30 · comment on "What does $d/dx$ actually mean?": @PedroTamaroff So if my question has been asked and answered on here before, I cannot find it, so I would appreciate any pointers...
- Sep 30 · comment on "What does $d/dx$ actually mean?": @PedroTamaroff I've read lots of questions/answers that talk about $d/dx$ and all of them reinforce my intuition that it's an operator, not a numeric value you can do arithmetic on...
- Sep 29 · comment on "What does $d/dx$ actually mean?": @JonasMeyer Thanks, that looks like interesting information; but I got lost by sentence three of the answer :) I assume that stuff is from linear algebra (next on my list)?
- Sep 29 · asked "What does $d/dx$ actually mean?"
- Sep 2 · comment on "$2d^2=n^2$ implies that $n$ is multiple of 2": Related to my earlier question (why this only works for prime numbers): math.stackexchange.com/q/162119/29313
- Jul 29 · comment on "Set notation confusion (Empty Sets)": @Bobby It helps (me, at least) to remember the name: the empty set is not "nothing" because it is still a set. All sets contain n other things; the empty set is a set, which contains 0 other things.
- Jun 11 · comment on "My sister absolutely refuses to learn math": The most important part of your entire answer, IMO, was that the OP needs to show some willingness to help with her immediate needs, or she will just stop asking for help.
- Jun 11 · comment on "My sister absolutely refuses to learn math": @JoelReyesNoche If I could +10000 this answer I would.
- May 26 · comment on "Does half-life mean something can never completely decay?": @DanZimm He's actually asking about the pharmacological half-life, which is slightly different from the nuclear half-life. In particular, it's far less regular and predictable :)
- May 21 · awarded Commentator
- May 21 · comment on "What is a proof?": @dkbose The only thing that really stops you from doing that on an exam is that your professor will probably fail you :) I took an MIT OCW course on discrete math where the professor said basically that: "You can use any basic rules of math that you already knew coming into this course as an axiom in your proofs, as long as you don't claim to 'already know' everything we're asking you to prove." :)
- Mar 31 · comment on "What does the notation $f\colon A\to B$ mean?": I had the same question; though the meaning was pretty obvious to me from context, I cannot figure out which "pre-req" class I should have learned this notation in. I did up through multi-dimensional calculus in college without ever seeing it, but when I started a discrete math course online it was taken for granted.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221296072006226, "perplexity": 1067.5898372504591}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115855897.0/warc/CC-MAIN-20150124161055-00028-ip-10-180-212-252.ec2.internal.warc.gz"}
https://www.meru.cloud/2018/12/05/I'm-already-Tr(R).html
I shall begin by saying that I thought about this today in my Detection and Estimation class while looking at the MSE calculations for the MMSE estimator.

Let's suppose we have a vector of discrete RVs $player$ with elements $player_{i}$, governed by the following rule: [equation lost in extraction], which means [equation lost in extraction].

Suppose we are given an $N\times N$ square matrix $R$. A square matrix has the operator Trace ($tr$), defined as the sum of the elements on its main diagonal:

$tr(R) = \sum_{i=1}^{N} R_{ii}$

Suppose that the value $tr(R)$ is part of the set of values that $player_{i}$ can assume. Now suppose that the variable $player_{A}$ assumes the value of $tr(R)$. Indeed: [equation lost in extraction]. So $player_{A}$ is Trace(R).

Let's now suppose that another random variable $player_{x}$, with $x \ne A$, also wants to assume the role of Trace(R). However, [equation lost in extraction]. Therefore, the variable $player_{x}$ cannot assume the value of Trace(R) and should proceed to choose another value, since $player_{A}$ is already Trace(R).
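For concreteness, the trace operation itself is a one-liner; the snippet below is my own illustration (the matrix size and values are arbitrary, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
R = rng.standard_normal((N, N))   # an N x N square matrix

# tr(R): the sum of the elements on the main diagonal
trace_manual = sum(R[i, i] for i in range(N))
assert np.isclose(trace_manual, np.trace(R))
print(trace_manual)
```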
{"extraction_info": {"found_math": true, "script_math_tex": 15, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545599818229675, "perplexity": 298.77882575693786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00203.warc.gz"}
http://wp.doc.ic.ac.uk/ajf/ssa/
# Constant Propagation

#### Difficulty: **

I was keen on the idea of an exercise that was about building one program from another and settled on this, as the core algorithm (constant propagation) is succinct and elegant and is certainly doable in the time.

This test is quite similar in nature to the 2015 test, so students who did their homework had a leg-up, particularly in Part I. The first two questions in Part II are also quite straightforward, but then it gets harder and this is where most students got stuck. I originally thought that Part III might be a little simpler than Part II, but it’s actually very easy to forget a crucial call to unPhi among the various refactorings. Very few students got this right, so I came to the conclusion that both the order and balance of credit were about right.

Note that the substitution algorithm hinted at in Part II leads to a much more elegant solution than the traditional ‘worklist’ algorithm; you also only need one pass over a block for each new set of constant assignments uncovered. However, all but two students opted for the ‘textbook’ approach based on work lists.

I had originally considered including copy propagation, which involves a tiny change to the code and specification for Part II. However, it leads to some unpleasant edge cases when phi functions are removed in Part III. It’s all fixable, but I decided to keep it simple. I also contemplated conditional propagation, as you can then remove dead code, but I had to be mindful of the fact that this is a first-year Haskell programming exercise, and not an exercise on compiler technology, so I dropped that too – they will see this in due course.

Part IV is included only for completeness and I did not expect anyone to work out the details of how to plant phi functions. However, the variable renaming and expression simplification problems are actually quite doable at this level. Two students made a decent start before running out of time.

This was an easy test to pass, especially as we had covered the 2015 test as a revision exercise. However, Part II requires some quite careful thought in order to understand the logic required and arrive at a beautiful solution. I’ve rated it ** because it’s easy to get stuck in Part II and most students did!

The maximum mark was 30. There were lots of 29s and 28s and there were some beautiful solutions to individual questions, some of which inspired me to revisit my own specimen solution. However, nobody seemed to find a beautiful solution to all the questions, so no one got full marks. The average mark was around 70%.
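For readers unfamiliar with the core algorithm being examined, here is a rough sketch of constant propagation over a straight-line SSA block, with one pass per newly discovered set of constants. This is my own Python illustration, not the Haskell specimen solution, and the (variable, expression) representation is invented for the example.

```python
import operator

# A block is a list of (variable, expression) assignments, each variable
# assigned exactly once (SSA-style). An expression is an int literal,
# a variable name, or a tuple (op, lhs, rhs).
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def substitute(expr, consts):
    """Replace known-constant variables in expr and fold what becomes constant."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return consts.get(expr, expr)
    op, a, b = expr
    a, b = substitute(a, consts), substitute(b, consts)
    if isinstance(a, int) and isinstance(b, int):
        return OPS[op](a, b)          # constant folding
    return (op, a, b)

def propagate(block):
    """Re-scan the block once for each new set of constants uncovered."""
    consts = {}
    changed = True
    while changed:
        changed = False
        new_block = []
        for var, expr in block:
            expr = substitute(expr, consts)
            if isinstance(expr, int) and consts.get(var) != expr:
                consts[var] = expr
                changed = True
            new_block.append((var, expr))
        block = new_block
    return block

# x = 1; y = x + 2; z = y * w   ==>   x = 1; y = 3; z = 3 * w
print(propagate([('x', 1), ('y', ('+', 'x', 2)), ('z', ('*', 'y', 'w'))]))
```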
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426191806793213, "perplexity": 504.41114585229093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00391.warc.gz"}
https://dsp.stackexchange.com/questions/59059/fft-command-line-application
# FFT command line application? [closed]

Are there any command line programs for Windows, preferably free and stand-alone, which can report the peak/strongest frequency within a given range of frequencies? I need something like this to automate finding the frequency of a calibration signal which slowly drifts.

• Hi! Sorry, you're asking for a program that fulfills your specifications – these questions are best-case borderline off-topic. You could, however, write a program yourself that does that. Can you tell us more about your signal, and the tools you're using? (also, the title stands somewhat in conflict with your question's body, so this really calls for more background on what signal you're dealing with, and how much precision with how much observation you need) – Marcus Müller Jun 22 '19 at 18:18
• I need the strongest frequency from within a range with white noise and 1 strong calibration sine signal. The added calibration signal shifts as the receiver has no TCXO, so this allows me to determine its characteristics and calibrate the other signals. From this I can determine the calibration signal's amplitude and relate it to its known power. I want to automate this as there are 576 audio files to process. 'FFT 1.0' by Lionel Loudet sidstation.loudet.org/fft-en.xhtml does FFT from the command line. I coded a script to extract the info but it takes a lot of processing time. – Petoetje59 Jun 23 '19 at 14:18
• why is the script slow? I'm not sure a standalone program would be faster, since in scripting languages, you'll typically just call an FFT function from a library (that is very fast), so I'd assume the overhead is negligible. – Marcus Müller Jun 25 '19 at 6:57
• In this script the fft.exe is the culprit - it gobbles up most of the processing time. In all it takes 732 secs to process just a 5 minute file, so visually determining the calibration frequency by just watching it in SpecLab is way faster, but still very time consuming to do. – Petoetje59 Jun 26 '19 at 12:02
• yes, but having a different program do the same will not be faster. You need to have a better way of estimating the frequency than to use the FFT, not a different program to encapsulate the FFT. – Marcus Müller Jun 26 '19 at 12:13

The calibration tone is a pure sine wave. I solved the problem without resorting to FFT - by using the "Sine fitting algorithms" (4-parameter method) described in an annex to IEEE-STD-1057/1241.

Estimating Multiple Frequencies

The Goertzel algorithm is more efficient than the DFT for a small number of frequency bins and can be easily implemented in open-source software such as Octave or Python. More info is available on Wikipedia https://en.wikipedia.org/wiki/Goertzel_algorithm including the handy rule of thumb as to when it is more efficient:

$$M \le \frac{5N_2}{6N}\log_2(N_2)$$

Estimating A Single Tone in Low SNR Conditions

For estimation of a single tone in low SNR conditions, see this paper by Rim Elasmi-Ksibi, Hichem Besbes, Roberto López-Valcarce and Sofiane Cherif: https://www.researchgate.net/publication/220228033_Frequency_estimation_of_real-valued_single-tone_in_colored_noise_using_multiple_autocorrelation_lags which extracts the estimate for $$\cos(\omega_0)$$ from the samples of the autocorrelation, where $$\omega_0 \in [0, \pi]$$.
The paper derives the estimate for $$\omega_0$$ as:

$$\cos(\hat{\omega}_0) = \frac{\sum_{k=p}^q \hat{r}_k(\hat{r}_{k-1}+\hat{r}_{k+1})}{2\sum_{k=p}^q \hat{r}_k^2}$$

where $$\hat{r}_k$$ is the unbiased autocorrelation for the observed samples y, given as:

$$\hat{r}_k = \frac{1}{N-k}\sum_{n=k+1}^N y_n y_{n-k}$$

and p and q are any integers with q > p, large enough such that the noise in the samples compared is independent. The larger the range q-p, the more processing is required but the lower the noise in the estimate (so you can make that trade). If you choose a p that is less than the lag for noise independence, then you will have no advantage from the additional processing until the lag where the noise is independent is reached. You can assess this from the autocorrelation of the noise process alone, to determine the lag at which the autocorrelation is 0. For example, with white noise the autocorrelation is 0 for any nonzero lag, meaning all noise samples are independent, in which case p can be as low as 1.

Intuitive Explanation of Autocorrelation as a Frequency Discriminator

The above gives the actual frequency estimate with all scaling accounted for, and you can trade the computational complexity against the noise of the estimate, approaching the CRLB as detailed in the referenced paper. What follows provides an intuitive understanding of how this works.

This works on the simple principle that the averaged product of a sinusoid and a phase-shifted version of the same sinusoid is scaled by the cosine of the phase, as given by the following trigonometric identity:

$$\cos(\alpha)\cos(\beta) = \frac{1}{2}\cos(\alpha+\beta) + \frac{1}{2}\cos(\alpha-\beta)$$

So when the frequency is the same and only the phase is different, the product is:

$$\cos(\omega_c t+\phi)\cos(\omega_c t) = \frac{1}{2}\cos(\phi) + \frac{1}{2}\cos(2\omega_c t + \phi)$$

If we average (low-pass filter) the above, the $$\cos(2\omega_c t+ \phi)$$ term goes to zero and we are left with $$\frac{1}{2}\cos(\phi)$$. This shows how the product is a phase detector.

When we delay and multiply (as done in the autocorrelation!), the delay produces a signal with the same frequency but a phase shift. The resulting phase measured by the phase detector (product) is the change in phase over that delay, which by definition is frequency! (Frequency is the derivative of phase.)

A common frequency discriminator topology is to delay and multiply (a frequency discriminator produces an output value that is proportional to the frequency of the input).

Each sample of the autocorrelation function is a delay and multiply with a different delay value for each. The above referenced paper is simply scaling each result back to be $$\cos(\omega)$$ and averaging to minimize the noise contribution and improve the estimate.

In the plot of the discriminator characteristic, the vertical axis crosses the horizontal axis at $$-\pi/2$$ to be at the point of maximum slope (the operating point when used as a discriminator).

This all applies to complex signals as well, in which case a complex conjugate multiply is done, as shown with the phase detector topologies comparing real signals to complex signals.
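Here is a small numerical sketch of the multiple-lag estimator quoted above. It is my own Python illustration, not code from the paper; the tone frequency, SNR and lag range are arbitrary choices, and p starts at 2 so that the noise-biased lag-0 autocorrelation never enters the sums.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
w0 = 0.3                 # true normalized angular frequency, in [0, pi]
n = np.arange(N)
y = np.cos(w0 * n) + 0.5 * rng.standard_normal(N)   # real tone in white noise

def r_hat(y, k):
    """Unbiased autocorrelation estimate at lag k."""
    return np.dot(y[k:], y[:len(y) - k]) / (len(y) - k)

p, q = 2, 50
r = {k: r_hat(y, k) for k in range(p - 1, q + 2)}
num = sum(r[k] * (r[k - 1] + r[k + 1]) for k in range(p, q + 1))
den = 2 * sum(r[k] ** 2 for k in range(p, q + 1))
w0_est = np.arccos(num / den)
print(w0, w0_est)   # the estimate should land close to 0.3
```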
This suggests, for a single complex tone, the use of either the real or imaginary output of the complex conjugate multiplication to get a similar $$\cos(\omega_0)$$ (real out, I) or $$\sin(\omega_0)$$ (imaginary out, Q) result; but with further processing a direct result of $$\omega_0$$ is obtained using:

$$\omega_0=\operatorname{atan2}(Q, I)$$

where atan2 is the 2-argument arctangent and I and Q are the real and imaginary results of the complex conjugate multiplication, suggesting how the referenced approach for a single real sinusoid can also be extended to the case of a single complex tone.

And for a single complex tone in high SNR conditions the estimate is trivial, since the normalized angular frequency is the phase change from one complex sample to the next, which is readily extracted from the complex conjugate multiplication of the two samples:

$$Ae^{j\omega_0} = y[n]\,y[n-1]^*$$

with $$\omega_0$$ extracted using the atan2 function on the real (I) and imaginary (Q) result of the product as $$\operatorname{atan2}(Q,I)$$. This ends up with the following in terms of $$y[n]=I[n]+jQ[n]$$:

$$\omega_0=\operatorname{atan2}\big(Q[n]I[n-1]-I[n]Q[n-1],\; I[n]I[n-1]+Q[n]Q[n-1]\big)$$

(And there are numerous efficient estimators for the atan2 process that can be used to further simplify this, including the iterative CORDIC rotator when cycle times to iterate are more available than multipliers and look-up tables.)

What is useful and interesting from this is that the imaginary portion of the autocorrelation function for any waveform will be proportional to the frequency offset of that waveform, which is useful for carrier recovery implementations in radio receivers! This is demonstrated in the result for the autocorrelation of a complex additive white Gaussian noise signal with a frequency offset in one direction ($$e^{j\omega_0 t}$$), plotted on a complex plane showing the real and imaginary terms of the autocorrelation.

• Hey, I fixed the formula for you. There was a "+" missing which made the numerator third order and the denominator second order. If you double the amplitude of the signal, the result has to stay the same. This formula looks like a boat load of calculations, but I am going to compare it to mine. Might take a while to get to though. – Cedron Dawg Dec 28 '19 at 21:17
• Thanks for reading - this is for low SNR where you have the option to use a boatload of computations to the extent you want to reduce noise (you select). In high SNR you can just do the multiplication of two samples delayed from each other to get $A\cos(\omega)$ and the ideal delay to use in this case is close to where the phase would be $\pi/2$ given the result has the highest slope, hence sensitivity. For complex tones with high SNR it is even easier since you can just do the complex conjugate multiplication between any two samples within $2\pi$ rotation and then use atan2/N to get $\omega$! – Dan Boschen Dec 28 '19 at 21:27
• It's even easier than that with high SNR in the time domain. $$\frac{\hat{r}_{k-1}+\hat{r}_{k+1} }{ \hat{r}_k }$$ has the same form as $$\frac{ y[k-1]+y[k+1] }{ y[k] }$$ which is known as Turner's three point formula. I've generalized this into various families of formulas in three blog articles: dsprelated.com/showarticle/1051.php, dsprelated.com/showarticle/1056.php, and dsprelated.com/showarticle/1074.php. Yes, I'm curious to compare. Boat loads of calculations don't cost anywhere near what they used to. ;-) – Cedron Dawg Dec 28 '19 at 22:21
• Check out my followup. That is with my original 3-bin formula.
My 2-bin formula is even better except when the frequency is really close to a bin; then the 3-bin formula is a little more robust. Both are exact in the noiseless case. BTW, thanks for the paper reference! – Cedron Dawg Dec 28 '19 at 23:05
• Cool - thanks Cedron. Would be cool to compare apples to apples including processing metrics as I have a use for efficient tone estimators (in fact I have an approach that is done with just adds and compares) — your approach is on DFT bins, correct? Could it work with a 2 point DFT (would be ideal), or do you exclude the DC bin because of divide-by-zero issues? When you say "first of its kind" for an exact estimate with no noise, isn't the cosine(angle) from the product exact? – Dan Boschen Dec 28 '19 at 23:21

If you have a high SNR signal and can bookend your signal with two DFT bins, I don't think you can do better than https://www.dsprelated.com/showarticle/1095.php with an implementation shown in https://www.dsprelated.com/showarticle/1284.php if accuracy is important. For a noiseless, non-integer frequency (cycles per frame), the formula works with any two bins, as it is mathematically exact. For frequencies near the middle between two bins, it is the most robust by my testing.

Here is the graphic comparison which is sparking my curiosity. The first plot, from the Elasmi-Ksibi et al. paper cited by D.B., shows the autocorrelation formula's results. There is a full write-up available as a PDF in the upper right corner under the link "Resources: Comparison of different frequency estimation algorithms (pdf)". The second plot shows the comparisons of several formulas. Notice that the green line, which is my original 3-bin real-valued formula, is the only one that hugs the line into the high SNR territory. (It is exact in the noiseless case, first of its kind.) What is more interesting though, is that it also seems to do better in the low SNR range. This may not be a valid comparison based on noise type, not sure.

• (continued from under D.B.'s, curse the comments-to-chat policy) They were really clever in the autocorrelation formulas by multiplying eq (3) by $r_k$ before summing, ensuring the denominator will be a sum of squares and thus always non-zero. The near-instantaneous article formulas work on either real or complex tones. Quite a difference to the DFT ones. The main point of these formulas is to be able to calculate the frequencies in a short duration, much shorter than a cycle. For a rapidly changing tone, this is an advantage. – Cedron Dawg Dec 29 '19 at 0:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368151783943176, "perplexity": 672.8325241845449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00089.warc.gz"}
http://www.cs.nyu.edu/pipermail/fom/2006-February/009703.html
# [FOM] Feferman's natural well-ordering problem.

Andreas Weiermann weiermann at math.uu.nl
Mon Feb 6 18:29:07 EST 2006

On Feb 1, 2006, at 10:33 PM, Bill Taylor wrote:
> This was mentioned by Andreas Weierman[n] the other day, and I can
> find no help on Googol for it.
> Can someone please explain what it is, for us?

I would like to make some additions. The well-chosen reference by Chiari contains a description of the current state of the art by a world-leading expert. A reference by Feferman which mentions the problem is:
http://math.stanford.edu/~feferman/papers/conceptualprobs.pdf

Another description is found in the second edition of Takeuti's proof theory book, in the appendix part by Feferman.

The point is that ordinal analysis is not just the calculation of an ordinal of a reasonable mathematical theory, but to provide a natural well-ordering for the proof-theoretical ordinal of the theory in question. For \epsilon_0, \Gamma_0, the Bachmann-Howard ordinal, ...(and some more)... natural presentations are well established.

Providing a convincing definition of a natural well-ordering is still the challenge. (It is not even clear whether this is a problem where a solution can be given.) My suggestion is to collect natural properties of natural well-orderings to approximate the problem.

Kind regards,
Andreas Weiermann
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869544863700867, "perplexity": 3461.099477734608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999676834/warc/CC-MAIN-20140305060756-00090-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/207104-range-rational-quadratic-function-print.html
# range of rational quadratic function

• Nov 9th 2012, 06:12 AM
Stuck Man

I want to show that the range of y is all of R.

y = (2x-1)/(2x^2-4x+1)

I made a quadratic equation and used the fact that the discriminant >= 0 for real x. Solving that gave me y >= 0.5(i-1).

• Nov 9th 2012, 06:51 AM
richard1234
Re: range of rational quadratic function

What is the meaning of $y \ge .5(i-1)$? How do you define inequalities in the complex plane?

We want to show that for $\frac{2x-1}{2x^2 - 4x + 1} = k$, no matter what our choice of k is, there is always a real solution for x. The equation becomes

$2x - 1 = 2kx^2 - 4kx + k$

$2kx^2 - (4k+2)x + (k+1) = 0$

The discriminant D is $D = (4k+2)^2 - 4(2k)(k+1) = 8k^2 + 8k + 4$, which is positive for all real k (as a quadratic in k, its own discriminant is $8^2 - 4(8)(4) = -64 < 0$ and its leading coefficient is positive). For $k = 0$ the equation is linear, $-2x + 1 = 0$, which also has a real solution, so every real value k is attained.

• Nov 9th 2012, 08:09 AM
Stuck Man
Re: range of rational quadratic function

That is the discriminant I got. I thought I had to factorise it. Is there nothing further to do?

I am finding I can't make replies using IE8. Is this a common problem with this forum?
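A quick symbolic check of the discriminant argument (my own sketch, not part of the thread):

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
# For y = k, clearing denominators gives 2k*x^2 - (4k+2)*x + (k+1) = 0.
quadratic = 2*k*x**2 - (4*k + 2)*x + (k + 1)
D = sp.discriminant(quadratic, x)
print(sp.expand(D))                         # 8*k**2 + 8*k + 4
print(sp.solveset(D <= 0, k, sp.S.Reals))   # EmptySet, so D > 0 for every real k
```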
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8951206207275391, "perplexity": 998.329651470237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188891.62/warc/CC-MAIN-20170322212948-00422-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/trigonometric-functions.113225/
# Trigonometric Functions

1. Mar 6, 2006

### funktion

I'm having some trouble with applying trigonometric functions to some real-life situations, particularly this one problem in my homework.

Andrea, a local gymnast, is doing timed bounces on a trampoline. The trampoline mat is 1 meter above ground level. When she bounces up, her feet reach a height of 3 meters above the mat, and when she bounces down her feet depress the mat by 0.5 meters. Once Andrea is in rhythm, her coach uses a stopwatch to make the following readings:

1. At the highest point the reading is 0.5 seconds.
2. At the lowest point the reading is 1.5 seconds.

Now I'm not asking anyone to do my homework as it would defeat the whole purpose of me learning, but I would certainly love some advice as to how to formulate an equation and graph for this particular problem. One such problem I have is figuring out where the graph begins on the y-axis, or in other words, how high she is when the coach starts timing her. Any help would be greatly appreciated. Thanks.

Last edited: Mar 6, 2006

2. Mar 6, 2006

### xman

To begin this problem I would suggest that you draw four horizontal lines: the bottom at ground level, one at 1 m for the base of the trampoline, one at 2 m, and one at 3 m for her highest point. Now draw a horizontal line for time, and mark off units of 0.5 corresponding to seconds. Finally, plot the given points from the coach's readings, i.e. at t = 0.5 she's at 3 m and at t = 1.5 she's at 0.5 m. Now draw a 'cosine'-like plot connecting the points and you've got a rough sketch of her motion as a function of time between the given intervals. Hope this helps, sincerely, x

3. Mar 6, 2006

### funktion

That actually helps me a lot, but rather than drawing a cosine graph, I drew a sine graph. Thanks a bunch x. I've come up with an equation, but I'm not sure how right I am; if someone could verify, that would be great.

y = 1.25sin(πx) + 1.75

The thing in the brackets is pi, sorry, but I don't know the keystroke for it.

4. Mar 6, 2006

### xman

Looks like you got it. Good job, and I'm glad I could help. Sincerely, x

$$y(x) = 1.25 \sin(\pi x)+1.75 \Rightarrow y(0.5) =3,\; y(1.5) =0.5$$

Note also that you have the period, amplitude and all, right from the equation; this is important to note if you are doing a lot of graphing with trig functions.

5. Mar 6, 2006

### funktion

Yeah, thanks a bunch x, I've figured out that question. However, I have a few other questions. This regards a ferris wheel problem I'm working with.

Ferris wheel:
Diameter: 76 m
Maximum height: 80 m
The ferris wheel has 36 carts, with each cart able to hold approximately 60 people. It rotates once every 3 minutes.

Work that I've figured out:

Equation: $$h(t)=38\cos\left(\frac{2\pi t}{3}\right)+42$$ (assuming that the ferris wheel begins rotating at maximum height)

How many seconds after the wheel starts rotating does the cart first reach 10 meters from the ground? Now when I try doing this, my answer comes up negative, which isn't possible. Can anyone help me on this?

6. Mar 6, 2006

### xman

Sorry I took so long to get back; I didn't know you had another question. Your equation of motion is correct, with your assumption that the cart starts at y(t=0) = 80 m. So you want to know when the cart will first reach 10 m from the ground.
$$10 = 38 \cos \left( \frac{2 \pi t }{3} \right) + 42 \;\Rightarrow\; t = \frac{3\pi - 3\cos^{-1} \left(\frac{16}{19} \right)}{2\pi} \approx 1.23 \text{ min}$$

This is what I got, which makes sense: if the cart starts from the top, then our time should be less than half the period, since at half the period (the bottom) the height above the ground is 4 m. Thus, starting from the top and rotating, we first reach 10 m above ground approx 1.23 min later. Hope this helps, x

Last edited: Mar 6, 2006
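A short symbolic check of this result (my own sketch, not part of the thread):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
h = 38 * sp.cos(2 * sp.pi * t / 3) + 42   # height in meters, t in minutes

# First time, starting from the top at t = 0, that the height drops to 10 m.
times = sorted(float(s) for s in sp.solve(sp.Eq(h, 10), t))
print(times[0])   # about 1.23 minutes
```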
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8179419040679932, "perplexity": 936.0011545945348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982946797.95/warc/CC-MAIN-20160823200906-00005-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.tutorialspoint.com/resistance-and-impedance-in-an-ac-circuit
# Resistance and Impedance in an AC Circuit

## Resistance in an AC Circuit

Consider a circuit containing an AC sinusoidal voltage source and an unknown passive element (K). The element K is a pure resistance only if the voltage across it and the current flowing through it are in phase with each other.

### Explanation

Let the equation for the alternating voltage be

$$\mathrm{v=V_{m}\sin\omega\:t\:\:\:...(1)}$$

As a result of this voltage, an alternating current i flows through the element. The applied voltage has to overcome only the drop across the element, i.e.

$$\mathrm{v=i\times\:k}$$

$$\mathrm{\Rightarrow\:i=\frac{v}{k}=\frac{V_{m}\sin\omega\:t}{k}\:\:\:...(2)}$$

The current is maximum when sin(ωt) = 1.

$$\mathrm{\therefore\:I_{m}=\frac{V_{m}}{K}}$$

Thus, equation (2) becomes

$$\mathrm{i=I_{m}\sin\omega\:t\:\:\:...(3)}$$

It is clear from eqns. (1) and (3) that the applied voltage and the resulting current are in phase with each other. Therefore, the unknown element is a resistance, i.e.

$$\mathrm{K=R\:\:\:...(4)}$$

## Impedance in an AC Circuit

If an AC circuit contains both resistive and reactive components, then the total opposition offered by the circuit to the flow of electric current is known as the impedance of the AC circuit. It is denoted by the letter 'Z' and measured in ohms (Ω). Mathematically, the impedance is expressed as

$$\mathrm{Impedance,Z=R+jX\:\:\:...(5)}$$

Case 1 – Impedance of a Series R-L Circuit

$$\mathrm{Z=R+jX_{L}=R+j\omega\:L\:\:\:...(6)}$$

Case 2 – Impedance of a Series R-C Circuit

$$\mathrm{Z=R+jX_{C}=R-j\frac{1}{\omega\:C}\:\:\:...(7)}$$

Case 3 – Impedance of a Series RLC Circuit

$$\mathrm{Z=R+j(X_{L}-X_{C})=R+j(\omega\:L-\frac{1}{\omega\:C})\:\:\:...(8)}$$

Case 4 – Impedance in a Parallel AC Circuit

In the case of a parallel AC circuit, the impedance is given in terms of the admittance, i.e.

$$\mathrm{Impedance=\frac{1}{Admittance}}$$

$$\mathrm{\Rightarrow\:Z=\frac{1}{Y}=\frac{1}{G+jB}\:\:\:....(9)}$$

Where,

• G is the conductance, the real part of the admittance (G = 1/R for a purely resistive branch)
• B is the susceptance, the imaginary part of the admittance (B = 1/X for a purely reactive branch)
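Since the impedance Z = R + jX is just a complex number, eqns. (6)–(9) map directly onto any language with complex arithmetic. Below is a minimal sketch in Python (not part of the original article; the component values are assumed for illustration) computing the series RLC impedance of eqn. (8) and the admittance of eqn. (9):

```python
# Series RLC impedance Z = R + j(wL - 1/(wC)) and its admittance Y = 1/Z = G + jB,
# using Python's built-in complex type.
import math
import cmath

R, L, C = 10.0, 0.1, 100e-6        # ohms, henries, farads (assumed example values)
f = 50.0                           # supply frequency in Hz
w = 2 * math.pi * f                # angular frequency in rad/s

Z = complex(R, w * L - 1 / (w * C))     # eqn. (8)
Y = 1 / Z                               # eqn. (9)

print(f"Z = {Z:.3f} ohm, |Z| = {abs(Z):.3f} ohm, "
      f"phase = {math.degrees(cmath.phase(Z)):.2f} deg")
print(f"G = {Y.real:.4f} S, B = {Y.imag:.4f} S")
```

At 50 Hz these values give wL ≈ 31.42 Ω and 1/(wC) ≈ 31.83 Ω, so the circuit is nearly at resonance and the impedance is dominated by R.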
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835408091545105, "perplexity": 1310.3479306978716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00057.warc.gz"}
https://indico.cern.ch/event/303593/
Sally Dawson

# Double Higgs Production at the LHC

WH11NE - Sunrise, Fermilab - LPC (US/Central)

Abstract: I will address the question of how far the rate for double Higgs production can deviate from the Standard Model prediction and briefly survey the prospects for experimental searches for double Higgs production at the LHC.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536815285682678, "perplexity": 2208.4954722605958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662546071.13/warc/CC-MAIN-20220522190453-20220522220453-00607.warc.gz"}
http://mathhelpforum.com/pre-calculus/58990-y-mx-b.html
# Math Help - y=mx+b??? 1. ## y=mx+b??? Okay on my homework it says: Use the graph to write the equation for the line. (there is blank graphs for all problems) 8a. (0,-2/7) this is the one that really got me... 8c. (0,-2) 8d. (0,-2) Wouldn't it be the same answer? they want me to put it in y=mx+b form. But i do not understand the double problems. Thanks for anyone that can help me. 2. Originally Posted by abb1327 Okay on my homework it says: Use the graph to write the equation for the line. (there is blank graphs for all problems) 8a. (0,-2/7) this is the one that really got me... 8c. (0,-2) 8d. (0,-2) Wouldn't it be the same answer? they want me to put it in y=mx+b form. But i do not understand the double problems. Thanks for anyone that can help me. In order to develop an equation for a line, we at least need a point and slope or two points. We have to know the direction of the line. One point will not do. So, if your graph is blank, then you cannot tell what the slope is or another point the line might go through, so we can go no further. Make sure you're not overlooking something in the text.
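A hedged illustration (not the textbook's actual problems, since the graphs are not reproduced here): suppose graph 8c showed a line through (0, −2) rising 3 units for every 1 unit right, while graph 8d showed a line through (0, −2) falling 1 unit for every 1 unit right. Both lines have b = −2, but the slopes differ, so the equations

y = 3x − 2 and y = −x − 2

are different answers. That is why 8c and 8d need not be "the same" even though they list the same point: the y-intercept (0, −2) fixes b in y = mx + b, but the graph's slope is what fixes m.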
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534377217292786, "perplexity": 713.7223993009836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894319.36/warc/CC-MAIN-20140722025814-00215-ip-10-33-131-23.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/134574-simple-probability-print.html
# Simple probability

• Mar 19th 2010, 09:21 AM
WannaBe

Simple probability

Hey there everyone. Here is a question I will be delighted to get some verification on:

A man is trying to shoot a bullet into a square marksmanship board. In every shooting attempt, the probability that he won't hit the board at all is 0.2. If he hits the board, the probability that he will hit an area that will give him points is 0.9. If he hits an area that gives him points, the probability that he will hit an area that gives 10 points is 0.2, 20 points is 0.3 and 30 points is 0.5.

A. What is the probability that in a random shooting attempt, the shooter will get 10 points?
B. Let's assume that in a specific shooting attempt, the shooter didn't get points. What is the probability that the reason for this is that he hit the board but in an area that doesn't give points?

I'll be delighted to get some help in part B... The answer to part A is 0.144 (hope I did it right...). How should I solve part B? What is the condition in it?
Thanks!

• Mar 19th 2010, 10:14 AM
u2_wa

Quote:

Originally Posted by WannaBe
[question quoted above]

Hello WannaBe:

b) The shooter did not get points. This means:
P(he did not hit the board) $= 0.2$
P(he hit the board) $= 0.8$
P(he hit but did not get points) $=0.8*0.1=0.08$
P(he did not get points) $=0.2+0.8*0.1=0.28$
P(he hit the board but in an area that does not give points | no points) $=\frac{0.08}{0.28}=\frac{2}{7}$

Hope this helps

• Mar 20th 2010, 01:11 AM
WannaBe

Actually I think you're wrong... We know that: P(he did not hit the board) = 0.2 indeed, but: P(he hit but did not get points) = 0.8*0.1 = 0.08... Hence: P(he did not get points) = 0.28 and then: P(he hit the board but in an area that did not give points) = 0.08/0.28 = 2/7 :)

Well, thanks a lot anyway :) You've verified my calculation :)

• Mar 20th 2010, 04:22 AM
u2_wa

Oh, I made a blunder. Sorry for that! The main strategy to solve it was correct!!!

• Mar 20th 2010, 06:38 AM
WannaBe

Never mind man, as I said in my previous message - your strategy gave me the verification I needed and I thank you for this...
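The tree of outcomes here is small enough to check in a few lines of code. A sketch (Python assumed; not part of the thread):

```python
# Outcome tree for one shot:
#   miss board:       0.2
#   hit board:        0.8 -> scoring area 0.9 -> {10 pts: 0.2, 20 pts: 0.3, 30 pts: 0.5}
#                     0.8 -> non-scoring area 0.1
p_miss = 0.2
p_hit = 1 - p_miss                        # 0.8

# Part A: P(exactly 10 points) = P(hit) * P(scoring | hit) * P(10 | scoring)
print(p_hit * 0.9 * 0.2)                  # 0.144

# Part B: condition on "no points" = miss OR hit-a-non-scoring-area
p_hit_no_points = p_hit * 0.1             # 0.08
p_no_points = p_miss + p_hit_no_points    # 0.28
print(p_hit_no_points / p_no_points)      # 0.2857... = 2/7
```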
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8658730387687683, "perplexity": 1327.0917414340183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123560.51/warc/CC-MAIN-20170423031203-00057-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.futurelearn.com/courses/thermodynamics/1/steps/106618
2.16

# Summary of week 2

This week, you have practiced the calculation of energy changes using energy balance equations. Many exercises on the calculation of energy changes upon temperature change were presented.

You learned how to calculate the energy difference between two states of different temperature when the heat capacities are known. When the pressure is constant, integration of Cp with respect to temperature gives the energy change upon temperature change within a single phase. This energy change under constant pressure is called sensible heat.

Since we can only calculate the energy difference between two states, the concept of a reference state, to which zero enthalpy is assigned, is particularly useful. The enthalpy of a material at a certain state is then not an absolute quantity but the enthalpy difference from the reference state. For compounds, the heat of formation is assigned for convenience in tabulation. The enthalpy changes in common chemical reactions can be expressed with the enthalpies of formation of the compounds involved by Hess's law.

In week 3, you will examine the second law of thermodynamics. While the first law of thermodynamics is intuitively comprehensible, the second law is tricky to understand. The second law is the entropy principle. There are various statements describing the second law. The second law is concerned with turning heat into work and with the quality of energy, which is the amount of useful energy that can do work. The change in quality of energy can be determined by the entropy function. You will examine the physical meaning of entropy. In week 3, we focus on the traditional interpretation of entropy and closely related topics such as heat engines, refrigerators and heat pumps.
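To make the sensible-heat calculation concrete, here is a minimal sketch (Python; the polynomial form Cp(T) = a + bT + cT² and its coefficients are assumed for illustration, not taken from the course materials) of integrating Cp from T1 to T2 at constant pressure:

```python
# Sensible heat at constant pressure: dH = integral of Cp(T) dT from T1 to T2,
# with an assumed polynomial heat capacity Cp(T) = a + b*T + c*T**2.
a, b, c = 29.0, 3.0e-3, -8.0e-7    # J/(mol K), J/(mol K^2), J/(mol K^3) (assumed)
T1, T2 = 300.0, 500.0              # K

dH = (a * (T2 - T1)
      + (b / 2) * (T2**2 - T1**2)
      + (c / 3) * (T2**3 - T1**3))
print(f"sensible heat = {dH:.0f} J/mol")   # ~6014 J/mol for these coefficients
```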
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037954807281494, "perplexity": 394.45371369144607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00043.warc.gz"}
https://www.intechopen.com/online-first/79179
Open access peer-reviewed chapter - ONLINE FIRST

# Analysis of Heat Transfer in Non-Coaxial Rotation of Newtonian Carbon Nanofluid Flow with Magnetohydrodynamics and Porosity Effects

By Wan Nura'in Nabilah Noranuar, Ahmad Qushairi Mohamad, Sharidan Shafie, Ilyas Khan, Mohd Rijal Ilias and Lim Yeou Jiann

Submitted: September 9th 2021. Reviewed: September 24th 2021. Published: November 8th 2021.

DOI: 10.5772/intechopen.100623

## Abstract

This study analyzes the heat transfer of water-based carbon nanotubes in a non-coaxially rotating flow affected by magnetohydrodynamics and porosity. Two types of CNTs are considered: single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). Partial differential equations are used to model the problem, subject to initial and moving boundary conditions. Employing dimensionless variables transforms the system of equations into ordinary differential equation form. The resulting dimensionless equations are solved analytically for closed-form temperature and velocity distributions. The obtained solutions are expressed in terms of the complementary error function. The impacts of the embedded parameters are plotted in several graphs and discussed in detail. The Nusselt number and skin friction are also evaluated. The temperature and velocity profiles are shown to satisfy the initial and boundary conditions. An increase in the CNTs' volume fraction increases both the temperature and velocity of the nanofluid as well as the rate of heat transport. SWCNTs give higher values of the Nusselt number than MWCNTs. For verification, a comparison between the present solutions and a past study is conducted, and excellent agreement is achieved.

### Keywords

• Nanofluids
• Carbon nanotubes
• Newtonian fluid
• Magnetohydrodynamics
• Heat transfer

## 1. Introduction

The growing demand in manufacturing has made heat energy transfer a significant process in industrial applications such as nuclear reactors, heat exchangers, radiators in automobiles, solar water heaters, refrigeration units and electronic cooling devices. Enhancing the heating and cooling processes in industry saves energy, reduces processing time, improves the thermal rate and increases the equipment's lifespan. Sivashanmugam [1] found that the emergence of nanofluids has improved heat transfer capabilities for industrial processes. Choi and Eastman [2] established the nanofluid by synthesizing nanoparticles in a conventional base fluid. To be specific, a nanofluid is created by suspending nano-sized particles, commonly smaller than 100 nm, in ordinary fluids such as ethylene glycol, propylene glycol, water and oils [3]. Various materials from different groups can be used as the nanoparticles, such as Al2O3 and CuO from metallic oxides, Cu, Ag and Au from metals, SiC and TiC from carbide ceramics, as well as TiO2 from semiconductors [4]. In addition, immersion of nanoparticles is a new way of enhancing the thermal conductivity of ordinary fluids, which directly improves their ability to transport heat [5]. In line with the nanofluid's contribution to many crucial applications, a number of studies have been carried out to discover the impacts of various nanofluid suspensions on flow features and heat transfer under several effects, including Sulochana et al. [6] considering CuO-water and TiO-water, Sandeep and Reddy [7] using Cu-water, and Abbas and Magdy [8] choosing Al2O3-water as their nanofluid.
Magnetohydrodynamics (MHD) refers to the resultant effect of the mutual interaction of a magnetic field and a moving, electrically conducting fluid. Its important applications, such as power generation systems, MHD energy conversion, pumps, motors and solar collectors, have drawn the attention of several researchers to MHD nanofluids in convective boundary layer flow [9]. Benos and Sarris [10] studied the impacts of MHD nanofluid flow in a horizontal cavity. Hussanan et al. [11] analyzed the transport of mass and heat for MHD nanofluid flow restricted to an accelerated plate in a porous medium; in that study, water-based oxides and non-oxides were considered as the nanofluids. Prasad et al. [12] performed work similar to [11] concerning the radiative flow of a nanofluid over a vertical moving plate. Anwar et al. [13] studied MHD nanofluid flow in a porous material with heat source/sink and radiation effects. Cao et al. [14] analyzed the heat transfer and flow regimes for a Maxwell nanofluid under an MHD effect, while Ramzan et al. [15] investigated a radiative Jeffery nanofluid and Khan et al. [16] a Casson nanofluid with Newtonian heating.

One of the greatest discoveries in the history of materials science is carbon nanotubes (CNTs), discovered by a Japanese researcher at the beginning of the 1990s. Since the discovery, owing to their unique electronic structure and mechanical characteristics, CNTs have proven to be valuable nanoparticles, especially in the field of nanotechnology. CNTs are excellent conductors, a property highly sought after in medical applications; they have been used as drug carriers and have benefited cancer therapy treatments [17]. The high thermal conductivity of CNTs has attracted significant attention from many researchers, including Xue [18], Khan et al. [19] and Saba et al. [20]. CNTs are hollow cylinders of carbon atoms that behave as metals or semiconductors. They are rolled-up tubes of graphene sheet made of hexagonal carbon rings, and they can form bundles. CNTs are classified into two types, which differ in the arrangement of the graphene cylinders: single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs). SWCNTs have a single layer [21], while MWCNTs consist of more than one layer of graphene cylinders [22]. Khalid et al. [23] studied the flow and heat transfer characteristics of a CNTs nanofluid affected by MHD and porosity effects. Acharya et al. [24] presented a comparative study of the properties of MWCNTs and SWCNTs suspended in water under an imposed magnetic field. The CNTs nanofluid flow induced by a moving plate was investigated by Anuar et al. [25], and a prominent effect of SWCNTs on heat transfer and skin friction was observed. Ebaid et al. [26] analyzed the convective boundary layer for a CNTs nanofluid under a magnetic field effect; the closed-form solution was derived using the Laplace transform method, and the findings showed that increasing the magnetic strength and the volume fraction of CNTs deteriorated the rate of heat transport. Aman et al. [27] improved heat transfer for a Maxwell CNTs nanofluid moving over a static vertical plate with constant wall temperature. The investigation of velocity slip of carbon nanotube flow with diffusion species was conducted by Hayat et al. [28]. Recently, heat transmission for water-based CNTs was discussed by Berrehal and Makinde [29], considering flow over non-parallel plates, and by Ellahi et al. [30], considering flow past a truncated wavy cone.
Inspired by the above literature, a new study is essential to explore more findings on the non-coaxial rotation of CNTs nanofluids. Therefore, the investigation of MHD non-coaxially rotating flow of a CNTs nanofluid due to free convection in a porous medium is the primary focus of the current study. Water is chosen as the base fluid in which to suspend nanoparticles of SWCNTs and MWCNTs. The exact solutions for the velocity and temperature distributions are attained by solving the problem analytically using the Laplace transform method. The results are illustrated in several graphs and tables for further analysis of the various embedded parameters.

## 2. Problem formulation

The incompressible time-dependent carbon nanofluid flow instigated by non-coaxial rotation past a vertical disk with an impulsive motion is considered as illustrated in Figure 1, where $x$ and $z$ are the Cartesian coordinates, with the $x$-axis chosen as the upward direction and the $z$-axis normal to it. The semi-infinite space $z>0$ is occupied by the nanofluid, of constant kinematic viscosity $\nu_{nf}$, composed of SWCNTs or MWCNTs suspended in water; it acts as an electrically conducting fluid flowing through a porous medium. The disk is placed vertically along the $x$-axis with forward motion, and a uniform transverse magnetic field of strength $B_0$ is applied orthogonal to it. The plane $x=0$ contains the rotation axes of both the disk and the fluid. Initially, at $t=0$, the fluid and the disk are kept at temperature $T_\infty$ and rotate about the $z$-axis with the same angular velocity $\Omega$. After time $t>0$, the fluid keeps rotating about the $z$-axis while the disk begins to move with velocity $U_0$ and rotates about an axis parallel to the $z$-axis. Both rotations have the same uniform angular velocity $\Omega$. The temperature of the disk rises to $T_w$, and the distance between the two axes of rotation is $\ell$. With the above assumptions, the usual Boussinesq approximation is applied, and the nanofluid model proposed by Tiwari and Das [52] is used to represent the problem in the governing equations, expressed as

$$\rho_{nf}\frac{\partial F}{\partial t}+\left(\rho_{nf}\Omega i+\sigma_{nf}B_0^2+\frac{\mu_{nf}}{k_1}\right)F=\mu_{nf}\frac{\partial^2 F}{\partial z^2}+(\rho\beta_T)_{nf}\,g_x\,(T-T_\infty)+\left(\rho_{nf}\Omega i+\sigma_{nf}B_0^2+\frac{\mu_{nf}}{k_1}\right)\Omega\ell, \tag{1}$$

$$(\rho C_p)_{nf}\frac{\partial T}{\partial t}=k_{nf}\frac{\partial^2 T}{\partial z^2}. \tag{2}$$

The corresponding initial and boundary conditions are

$$F(z,0)=\Omega\ell;\quad T(z,0)=T_\infty;\quad z>0,$$
$$F(0,t)=U_0;\quad T(0,t)=T_w;\quad t>0, \tag{3}$$
$$F(\infty,t)=\Omega\ell;\quad T(\infty,t)=T_\infty;\quad t>0,$$

in which $F=f+ig$ is the complex velocity, with $f$ and $g$ the (real) primary and (imaginary) secondary velocities respectively, $T$ is the temperature of the nanofluid and $U_0$ is the characteristic velocity. The following nanofluid constants are used for the dynamic viscosity $\mu_{nf}$, density $\rho_{nf}$, heat capacitance $(\rho C_p)_{nf}$, electrical conductivity $\sigma_{nf}$, thermal expansion coefficient $\beta_{T,nf}$ and thermal conductivity $k_{nf}$:

$$\mu_{nf}=\frac{\mu_f}{(1-\phi)^{2.5}},\qquad \rho_{nf}=(1-\phi)\rho_f+\phi\rho_{CNTs},\qquad (\rho C_p)_{nf}=(1-\phi)(\rho C_p)_f+\phi(\rho C_p)_{CNTs},$$
$$\frac{\sigma_{nf}}{\sigma_f}=1+\frac{3\left(\frac{\sigma_{CNTs}}{\sigma_f}-1\right)\phi}{\left(\frac{\sigma_{CNTs}}{\sigma_f}+2\right)-\left(\frac{\sigma_{CNTs}}{\sigma_f}-1\right)\phi},\qquad \beta_{T,nf}=\frac{(1-\phi)(\rho\beta_T)_f+\phi(\rho\beta_T)_{CNTs}}{\rho_{nf}}, \tag{4}$$
$$\frac{k_{nf}}{k_f}=\frac{(1-\phi)+2\phi\frac{k_{CNTs}}{k_{CNTs}-k_f}\ln\frac{k_{CNTs}+k_f}{2k_f}}{(1-\phi)+2\phi\frac{k_f}{k_{CNTs}-k_f}\ln\frac{k_{CNTs}+k_f}{2k_f}},$$

where the subscript $f$ refers to the fluid and $CNTs$ to carbon nanotubes, and $\phi$ is the solid volume fraction of the nanofluid. The constants in Eq. (4) are evaluated using the thermophysical properties in Table 1.

| Material | ρ (kg/m³) | Cp (J/(kg·K)) | k (W/(m·K)) | β×10⁵ (1/K) | σ (S/m) |
|---|---|---|---|---|---|
| Water | 997.1 | 4179 | 0.613 | 21 | 0.05 |
| SWCNTs | 2600 | 425 | 6600 | 27 | 10⁶–10⁷ |
| MWCNTs | 1600 | 796 | 3000 | 44 | 1.9×10⁴ |

### Table 1.

Thermophysical features of water, SWCNTs, and MWCNTs.

Introducing the following dimensionless variables

$$F^*=\frac{F}{\Omega\ell}-1,\qquad z^*=\sqrt{\frac{\Omega}{\nu_f}}\,z,\qquad t^*=\Omega t,\qquad T^*=\frac{T-T_\infty}{T_w-T_\infty}, \tag{5}$$

and using Eqs. (4) and (5), the governing equations in Eqs.
(1)–(3) reduce to (dropping the * notation to simplify the equations)

$$\frac{\partial F}{\partial t}+d_1F=\frac{1}{\phi_1}\frac{\partial^2F}{\partial z^2}+\phi_3\,Gr\,T, \tag{6}$$

$$\frac{\partial T}{\partial t}=\frac{1}{a_1}\frac{\partial^2T}{\partial z^2}, \tag{7}$$

and the conditions take the form

$$F(z,0)=0,\quad T(z,0)=0;\quad z>0,$$
$$F(0,t)=U-1,\quad T(0,t)=1;\quad t>0, \tag{8}$$
$$F(\infty,t)=0,\quad T(\infty,t)=0;\quad t>0,$$

where

$$d_1=i+M^2\phi_2+\frac{1}{\phi_1K},\qquad a_1=\frac{Pr\,\phi_4}{\lambda},\qquad M^2=\frac{\sigma_fB_0^2}{\Omega\rho_f},\qquad \frac{1}{K}=\frac{\nu_f}{k_1\Omega},$$
$$Pr=\frac{\nu_f(\rho C_p)_f}{k_f},\qquad Gr=\frac{g_x\beta_{T,f}(T_w-T_\infty)}{\Omega^2\ell},\qquad U=\frac{U_0}{\Omega\ell}. \tag{9}$$

Here $d_1$ and $a_1$ are constant parameters, $M$ is the magnetic parameter (magnetic field), $K$ is the porosity parameter, $Pr$ is the Prandtl number, $Gr$ is the Grashof number and $U$ is the amplitude of the disk. The other constant parameters are

$$\lambda=\frac{k_{nf}}{k_f},\qquad \phi_1=(1-\phi)^{2.5}\left[(1-\phi)+\phi\frac{\rho_{CNTs}}{\rho_f}\right],\qquad \phi_2=\frac{1+\dfrac{3\left(\frac{\sigma_{CNTs}}{\sigma_f}-1\right)\phi}{\left(\frac{\sigma_{CNTs}}{\sigma_f}+2\right)-\left(\frac{\sigma_{CNTs}}{\sigma_f}-1\right)\phi}}{(1-\phi)+\phi\frac{\rho_{CNTs}}{\rho_f}},$$
$$\phi_3=\frac{(1-\phi)+\phi\frac{(\rho\beta)_{CNTs}}{(\rho\beta)_f}}{(1-\phi)+\phi\frac{\rho_{CNTs}}{\rho_f}},\qquad \phi_4=(1-\phi)+\phi\frac{(\rho C_p)_{CNTs}}{(\rho C_p)_f}. \tag{10}$$

## 3. Exact solution

Applying the Laplace transform to the system of equations in Eqs. (6)–(8) yields

$$\frac{d^2\bar F(z,q)}{dz^2}-(\phi_1q+d_2)\,\bar F(z,q)=-d_3\,Gr\,\bar T(z,q), \tag{11}$$

$$\bar F(0,q)=(U-1)\frac{1}{q},\qquad \bar F(\infty,q)=0, \tag{12}$$

$$\frac{d^2\bar T(z,q)}{dz^2}-a_1q\,\bar T(z,q)=0, \tag{13}$$

$$\bar T(0,q)=\frac{1}{q},\qquad \bar T(\infty,q)=0. \tag{14}$$

Eqs. (11) and (13) are then solved subject to the boundary conditions (12) and (14). After some manipulation, the resulting Laplace-domain solutions take the form

$$\bar F(z,q)=\bar F_1(z,q)-\bar F_2(z,q)-\bar F_3(z,q)+\bar F_4(z,q)+\bar F_5(z,q)-\bar F_6(z,q), \tag{15}$$

$$\bar T(z,q)=\frac{1}{q}\exp\left(-z\sqrt{a_1q}\right), \tag{16}$$

where

$$\bar F_1=\frac{U}{q}\,e^{-z\sqrt{\phi_1q+d_2}},\qquad \bar F_2=\frac{1}{q}\,e^{-z\sqrt{\phi_1q+d_2}},\qquad \bar F_3=\frac{a_4}{q}\,e^{-z\sqrt{\phi_1q+d_2}},$$
$$\bar F_4=\frac{a_4}{q-a_3}\,e^{-z\sqrt{\phi_1q+d_2}},\qquad \bar F_5=\frac{a_4}{q}\,e^{-z\sqrt{a_1q}},\qquad \bar F_6=\frac{a_4}{q-a_3}\,e^{-z\sqrt{a_1q}}. \tag{17}$$

The exact solutions for the temperature and velocity are finally generated by applying the inverse Laplace transform to Eqs. (15) and (16). Hence,

$$F(z,t)=F_1(z,t)-F_2(z,t)-F_3(z,t)+F_4(z,t)+F_5(z,t)-F_6(z,t), \tag{18}$$

$$T(z,t)=\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{a_1}{t}}\right), \tag{19}$$

with

$$F_1(z,t)=\frac{U}{2}\left[e^{z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}+\sqrt{d_4t}\right)+e^{-z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}-\sqrt{d_4t}\right)\right],$$
$$F_2(z,t)=\frac{1}{2}\left[e^{z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}+\sqrt{d_4t}\right)+e^{-z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}-\sqrt{d_4t}\right)\right],$$
$$F_3(z,t)=\frac{a_4}{2}\left[e^{z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}+\sqrt{d_4t}\right)+e^{-z\sqrt{\phi_1d_4}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}-\sqrt{d_4t}\right)\right],$$
$$F_4(z,t)=\frac{a_4}{2}e^{a_3t}\left[e^{z\sqrt{\phi_1(a_3+d_4)}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}+\sqrt{(a_3+d_4)t}\right)+e^{-z\sqrt{\phi_1(a_3+d_4)}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{\phi_1}{t}}-\sqrt{(a_3+d_4)t}\right)\right], \tag{20}$$
$$F_5(z,t)=a_4\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{a_1}{t}}\right),$$
$$F_6(z,t)=\frac{a_4}{2}e^{a_3t}\left[e^{z\sqrt{a_1a_3}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{a_1}{t}}+\sqrt{a_3t}\right)+e^{-z\sqrt{a_1a_3}}\operatorname{erfc}\left(\frac{z}{2}\sqrt{\frac{a_1}{t}}-\sqrt{a_3t}\right)\right],$$

where

$$d_2=\phi_1d_1,\quad d_3=\phi_1\phi_3,\quad d_4=\frac{d_2}{\phi_1},\quad a_2=a_1-\phi_1,\quad a_3=\frac{d_2}{a_2},\quad a_4=\frac{d_3\,Gr}{a_2a_3}. \tag{21}$$

## 4. Physical quantities

In this study, the skin friction $\tau(t)$ and the Nusselt number $Nu$ for the non-coaxially rotating Newtonian nanofluid flow are also analyzed. Their dimensional forms are

$$\tau(t)=-\mu_{nf}\left.\frac{\partial F}{\partial z}\right|_{z=0}, \tag{22}$$

$$Nu=-k_{nf}\left.\frac{\partial T}{\partial z}\right|_{z=0}. \tag{23}$$

Incorporating Eqs. (22) and (23) with the nanofluid model Eq. (4), the dimensionless variables Eq. (5) and the solutions Eqs. (18) and (19), the dimensionless skin friction and Nusselt number become

$$\tau(t)=-\frac{1}{(1-\phi)^{2.5}}\left.\frac{\partial F}{\partial z}\right|_{z=0}=-\frac{1}{(1-\phi)^{2.5}}\Big[\tau_1(t)-\tau_2(t)-\tau_3(t)+\tau_4(t)-\tau_5(t)+\tau_6(t)\Big], \tag{24}$$

$$Nu=-\frac{k_{nf}}{k_f}\left.\frac{\partial T}{\partial z}\right|_{z=0}=\lambda\sqrt{\frac{a_1}{\pi t}}, \tag{25}$$

where

$$\tau_1(t)=U\sqrt{\phi_1d_4}\operatorname{erfc}\left(\sqrt{d_4t}\right)-U\sqrt{\phi_1d_4}-U\sqrt{\frac{\phi_1}{\pi t}}\,e^{-d_4t},$$
$$\tau_2(t)=\sqrt{\phi_1d_4}\operatorname{erfc}\left(\sqrt{d_4t}\right)-\sqrt{\phi_1d_4}-\sqrt{\frac{\phi_1}{\pi t}}\,e^{-d_4t},$$
$$\tau_3(t)=a_4\sqrt{\phi_1d_4}\operatorname{erfc}\left(\sqrt{d_4t}\right)-a_4\sqrt{\phi_1d_4}-a_4\sqrt{\frac{\phi_1}{\pi t}}\,e^{-d_4t},$$
$$\tau_4(t)=a_4\sqrt{\phi_1(a_3+d_4)}\,e^{a_3t}\operatorname{erfc}\left(\sqrt{(a_3+d_4)t}\right)-a_4\sqrt{\frac{\phi_1}{\pi t}}\,e^{-d_4t}-a_4\sqrt{\phi_1(a_3+d_4)}\,e^{a_3t}, \tag{26}$$
$$\tau_5(t)=a_4\sqrt{\frac{a_1}{\pi t}},$$
$$\tau_6(t)=a_4\sqrt{a_1a_3}\,e^{a_3t}\left[1-\operatorname{erfc}\left(\sqrt{a_3t}\right)\right]+a_4\sqrt{\frac{a_1}{\pi t}},$$

with $\tau^*=\tau\sqrt{\nu_f}/\left(\mu_f\Omega^{3/2}\ell\right)$.

## 5. Analysis of results

The dimensionless differential equations of the non-coaxially rotating nanofluid flow, with the associated boundary and initial conditions, are solved analytically using the Laplace transform method to obtain closed-form solutions for the heat transfer. Further analysis of the roles of the dimensionless time $t$, Grashof number $Gr$, nanoparticle volume fraction $\phi$, porosity parameter $K$, magnetic field parameter $M$ and disk amplitude $U$ on the velocity and temperature distributions, as well as on the Nusselt number and skin friction, is presented in figures and tables. The profiles are plotted with the physical parameter values $Pr=6.2$, $Gr=0.5$, $M=0.2$, $K=2.0$, $\phi=0.02$, $U=2.0$ and $t=0.2$; these values are used throughout unless a parameter is the one being varied.
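Before turning to the figures, note that the closed-form temperature solution (19) is cheap to evaluate numerically. The following sketch (Python with NumPy/SciPy assumed; no code appears in the original chapter) computes $T(z,t)=\operatorname{erfc}\left(\frac{z}{2}\sqrt{a_1/t}\right)$ for the SWCNT base case, building $a_1=Pr\,\phi_4/\lambda$ from the Table 1 properties via Eqs. (4) and (10):

```python
# Evaluate T(z, t) = erfc((z/2) * sqrt(a1/t)) for water-SWCNT, phi = 0.02.
import numpy as np
from scipy.special import erfc

Pr, phi = 6.2, 0.02
rho_f, cp_f, k_f = 997.1, 4179.0, 0.613      # water (Table 1)
rho_s, cp_s, k_s = 2600.0, 425.0, 6600.0     # SWCNTs (Table 1)

# Xue model for k_nf/k_f (Eq. 4) and heat-capacitance ratio phi4 (Eq. 10)
log_term = np.log((k_s + k_f) / (2.0 * k_f))
lam = ((1 - phi) + 2 * phi * (k_s / (k_s - k_f)) * log_term) / \
      ((1 - phi) + 2 * phi * (k_f / (k_s - k_f)) * log_term)
phi4 = (1 - phi) + phi * (rho_s * cp_s) / (rho_f * cp_f)
a1 = Pr * phi4 / lam

z = np.linspace(0.0, 2.0, 5)
for t in (0.2, 0.4):
    print(f"t = {t}:", np.round(erfc(0.5 * z * np.sqrt(a1 / t)), 4))
```

At fixed $z$, the printed temperature rises with $t$, matching the trend discussed for Figure 8 below.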
Since the problem involves a rotating nanofluid, the results are discussed by presenting the velocity profiles in real and imaginary parts, which describe the primary ($f$) and secondary ($g$) velocities, respectively. The velocity profiles are shown in Figures 2–7 and the temperature profiles in Figures 8 and 9. These profiles show that all of the obtained solutions satisfy both the boundary and initial conditions. SWCNTs and MWCNTs display an identical qualitative nature of fluid flow and heat transfer.

Figure 2 depicts the $f$ and $g$ profiles for varying values of $t$. Overall, the velocity of both the SWCNT and MWCNT suspensions rises over time: as $t$ increases, the buoyancy force becomes more effective and acts as an external source of energy for the flow, causing the fluid velocity to increase. Figure 3 illustrates the variation of the $f$ and $g$ profiles for the SWCNT and MWCNT cases under the effect of $Gr$. It is essential to note that $Gr$ approximates the ratio of the buoyancy force to the viscous force acting on the flow. An increase of $Gr$ therefore signals the domination of the buoyancy force over viscous effects, and growing $Gr$ leads to an increase in fluid velocity. On the other hand, Figure 4 discloses the response of the flow to $M$. For both the SWCNT and MWCNT cases, the figure shows that amplifying $M$ decreases the $f$ and $g$ profiles. This is because a greater value of $M$ strengthens the frictional force acting on the fluid, commonly known as the Lorentz force; consequently, the fluid encounters substantial resistance along the flow and its velocity decreases. Next, the contribution of $K$ to both the $f$ and $g$ profiles of the SWCNT and MWCNT nanofluids is displayed in Figure 5: the velocities of both suspensions increase with $K$. Porosity is closely tied to the permeability of a medium, which determines the ability of the medium to let fluid flow through it. Increasing values of $K$ make the medium more permeable, so the fluid passes through it more easily, which increases both the $f$ and $g$ profiles. Figure 6 reveals the consequences of $\phi$ for the $f$ and $g$ profiles in the SWCNT and MWCNT cases. Increasing $\phi$ results in an increase of the $f$ profiles and a fluctuating trend in the $g$ profiles. This suggests significant advantages of non-coaxial rotation of CNTs, especially in industrial and medical applications; in line with this, an analysis in cancer treatment has reported that CNTs with higher velocity have been used to reach a tumor's site. Besides, referring to Figure 7, increasing $U$ also has a positive impact on the velocity profiles for both CNT suspensions, where the velocity increases with $U$. A larger $U$ corresponds to a stronger external forcing, which enhances the thrust acting on the fluid flow; thus, the fluid velocity is elevated with increasing $U$. Furthermore, the temperature profiles $T(z,t)$ under the impacts of $t$ and $\phi$ are displayed in Figures 8 and 9. The increase of $t$ and $\phi$ raises the nanofluid temperature for both types of CNTs, accompanied by a thickening of the thermal boundary layer. Physically, adding a sufficient volume fraction $\phi$ of CNTs improves the nanofluid's thermal conductivity: the more CNTs are inserted, the higher the thermal conductivity, which unsurprisingly improves the fluid's ability to conduct heat.
Therefore, a growth of the temperature profile is exhibited for increasing $\phi$. The comparison of the physical behavior of SWCNTs and MWCNTs is clearest in the zoomed inset of each graph. Overall, Figures 2–7 reveal that the velocity profiles for the MWCNT case are higher than those for SWCNTs. This behavior agrees with the thermophysical features in Table 1: MWCNTs have a lower density, which is a key factor in the increase of the velocity profiles. Meanwhile, from Figures 8 and 9, SWCNTs have a more prominent effect on the temperature profiles, as they have the higher thermal conductivity.

Tables 2 and 3 show the results for the skin friction ($\tau_p$ and $\tau_s$) and the Nusselt number $Nu$ for various parameters in both the SWCNT and MWCNT cases. Table 2 shows that both $\tau_p$ and $\tau_s$ of SWCNTs and MWCNTs rise when the strength of $M$ is higher; the surface then produces a higher friction drag because the wall shear stress is maximized. On the contrary, as $Gr$, $K$ and $t$ increase, both suspensions report a reduction in $\tau_p$ and $\tau_s$: the increase of $Gr$, $K$ and $t$ reduces the friction between the fluid and the surface, which lets the velocity increase. Meanwhile, as $\phi$ and $U$ increase, both suspensions show growth in $\tau_p$ and a reduction in $\tau_s$. Table 3 shows that $Nu$ for both CNT cases decreases as $t$ increases. However, at high $\phi$, both SWCNTs and MWCNTs have a large $Nu$, which implies a high heat transfer rate; this effect is directly tied to the reduction of the nanofluid heat capacitance as $\phi$ increases. Overall, Table 3 shows that the SWCNT case has higher values of $Nu$ than MWCNTs, owing to its lower heat capacitance. This also signifies a better heat transfer process, which can be exploited in several engineering and industrial systems.

| t | Gr | M | K | φ | U | SWCNTs τp | SWCNTs τs | MWCNTs τp | MWCNTs τs |
|---|---|---|---|---|---|---|---|---|---|
| 0.2 | 0.5 | 0.2 | 2 | 0.02 | 2 | 1.3811 | −0.2550 | 1.3691 | −0.2523 |
| **0.4** | 0.5 | 0.2 | 2 | 0.02 | 2 | 1.0318 | −0.3492 | 1.0236 | −0.3455 |
| 0.2 | **5** | 0.2 | 2 | 0.02 | 2 | 0.6276 | −0.2705 | 0.6195 | −0.2676 |
| 0.2 | 0.5 | **3** | 2 | 0.02 | 2 | 3.2171 | −0.1596 | 3.0871 | −0.1620 |
| 0.2 | 0.5 | 0.2 | **3** | 0.02 | 2 | 1.3377 | −0.2578 | 1.3252 | −0.2552 |
| 0.2 | 0.5 | 0.2 | 2 | **0.12** | 2 | 1.6820 | −0.3138 | 1.6016 | −0.2966 |
| 0.2 | 0.5 | 0.2 | 2 | 0.02 | **3** | 2.8459 | −0.5082 | 2.8214 | −0.5030 |

### Table 2.

Values of the primary ($\tau_p$) and secondary ($\tau_s$) skin friction for SWCNTs and MWCNTs. The bold entries mark the parameter varied in each row relative to the base case in the first row, so the effect of each parameter can be compared.

| t | φ | SWCNTs Nu | MWCNTs Nu |
|---|---|---|---|
| 0.2 | 0.02 | 3.6238 | 3.5818 |
| **0.4** | 0.02 | 2.5624 | 2.5327 |
| 0.2 | **0.12** | 5.4840 | 5.3185 |

### Table 3.

Values of the Nusselt number $Nu$ for SWCNTs and MWCNTs. The bold entries mark the parameter varied relative to the base case in the first row.
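The $Nu$ column of Table 3 can be reproduced directly from Eq. (25), $Nu=\lambda\sqrt{a_1/(\pi t)}$. A self-contained sketch (Python/NumPy assumed; not part of the chapter) using the Table 1 properties:

```python
# Nu = lam * sqrt(a1 / (pi * t)), Eq. (25), for water-based SWCNTs and MWCNTs.
import numpy as np

Pr = 6.2
rho_f, cp_f, k_f = 997.1, 4179.0, 0.613                     # water
cnts = {"SWCNTs": (2600.0, 425.0, 6600.0),                  # rho, cp, k
        "MWCNTs": (1600.0, 796.0, 3000.0)}

for name, (rho_s, cp_s, k_s) in cnts.items():
    for t, phi in ((0.2, 0.02), (0.4, 0.02), (0.2, 0.12)):
        log_term = np.log((k_s + k_f) / (2.0 * k_f))
        lam = ((1 - phi) + 2 * phi * (k_s / (k_s - k_f)) * log_term) / \
              ((1 - phi) + 2 * phi * (k_f / (k_s - k_f)) * log_term)
        phi4 = (1 - phi) + phi * (rho_s * cp_s) / (rho_f * cp_f)
        a1 = Pr * phi4 / lam
        Nu = lam * np.sqrt(a1 / (np.pi * t))
        print(f"{name}: t={t}, phi={phi}, Nu={Nu:.4f}")
```

The printed values reproduce Table 3 to within rounding of the last digit (e.g. about 3.6238 for SWCNTs at t = 0.2, φ = 0.02), which serves as an independent consistency check on Eqs. (4), (9) and (25).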
The accuracy of the obtained solution is verified by comparing the solution in Eq. (18) with the solution obtained by Mohamad et al. [40] in their Eq. (53). The comparison is conducted by letting the magnetic parameter and nanoparticle volume fraction $M=\phi=0$ and the porosity parameter $K\to\infty$ in the present solution for both types of CNTs, and letting the phase angle $\omega=0$ and the amplitude of disk oscillation $U=2$ in the published work. This comparison shows that the $f$ and $g$ profiles from the present and previous works are identical to each other, as clearly presented in Figures 10 and 11, which verifies the accuracy of the obtained solution. Meanwhile, another verification is carried out by comparing the velocity profile values from the present work with numerical values obtained via the Gaver-Stehfest algorithm [53, 54]. Tables 4 and 5 show that the results for the $f$ and $g$ profiles from the exact solution in Eq. (18) and from the numerical inversion are in excellent agreement.

| z | Exact Eq. (18), SWCNTs | Exact Eq. (18), MWCNTs | Numerical Laplace Eq. (15), SWCNTs | Numerical Laplace Eq. (15), MWCNTs |
|---|---|---|---|---|
| 0 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 0.5 | 0.4165 | 0.4206 | 0.4165 | 0.4206 |
| 1.0 | 0.1089 | 0.1122 | 0.1089 | 0.1121 |
| 1.5 | 0.0171 | 0.0182 | 0.0172 | 0.0182 |
| 2.0 | 0.0016 | 0.0017 | 0.0015 | 0.0017 |

### Table 4.

Comparison of the exact and numerical solutions of the $f$ profiles for SWCNTs and MWCNTs with $t=0.2$, $Gr=0.5$, $M=0.2$, $K=2$, $\phi=0.02$, $U=2$, $Pr=6.2$.

| z | Exact Eq. (18), SWCNTs | Exact Eq. (18), MWCNTs | Numerical Laplace Eq. (15), SWCNTs | Numerical Laplace Eq. (15), MWCNTs |
|---|---|---|---|---|
| 0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 0.5 | 0.0366 | 0.0367 | 0.0366 | 0.0367 |
| 1.0 | 0.0146 | 0.0149 | 0.0146 | 0.0150 |
| 1.5 | 0.0027 | 0.0029 | 0.0027 | 0.0029 |
| 2.0 | 0.0003 | 0.0003 | 0.0003 | 0.0003 |

### Table 5.

Comparison of the exact and numerical solutions of the $g$ profiles for SWCNTs and MWCNTs with $t=0.2$, $Gr=0.5$, $M=0.2$, $K=2$, $\phi=0.02$, $U=2$, $Pr=6.2$.

## 6. Summary with conclusion

The unsteady non-coaxial rotation of water-CNTs nanofluid flow in a porous medium with an MHD effect is solved analytically for the exact solutions by applying the Laplace transform method. The temperature and velocity profiles for various parameter values with suspensions of SWCNTs and MWCNTs are plotted graphically and analyzed for their effects. From the discussion, the significant findings are:

1. Both the primary and secondary velocities for the SWCNT and MWCNT suspensions increase as $t$, $Gr$, $K$ and $U$ increase, and decrease as $M$ increases.
2. The insertion of a higher $\phi$ of SWCNTs and MWCNTs increases the primary velocity profiles, while the secondary velocity profiles show a fluctuating trend for both cases.
3. The temperature of the nanofluid increases when $\phi$ and $t$ increase, for both SWCNTs and MWCNTs.
4. MWCNTs have higher primary and secondary velocity profiles than SWCNTs because of their lower density.
5. SWCNTs have a higher temperature profile than MWCNTs owing to their higher thermal conductivity.
6. Increasing values of $t$, $Gr$ and $K$ decrease both the primary and secondary skin friction for both types of CNTs, while increasing $M$ has the opposite effect on both.
7. The Nusselt number for both CNT cases reduces as $t$ increases and grows as $\phi$ increases.
8. The findings of the present work are in accordance with the findings of Mohamad et al. [40] and with the numerical values obtained by the Gaver-Stehfest algorithm.

## Acknowledgments

The authors would like to acknowledge the Ministry of Higher Education Malaysia and the Research Management Centre-UTM, Universiti Teknologi Malaysia (UTM), for financial support through vote numbers 17J98, FRGS/1/2019/STG06/UTM/02/22 and 08G33.

## Conflict of interest

The authors declare that they have no conflicts of interest to report regarding the present study.
## Nomenclature

- βT: thermal expansion coefficient
- Cp: specific heat
- ρ: density
- σ: electrical conductivity
- μ: dynamic viscosity
- gx: acceleration due to gravity
- k: thermal conductivity
- T: temperature of nanofluid
- T∞: free stream temperature
- Tw: wall temperature
- B0: magnetic field
- k1: permeability
- U0: characteristic velocity
- Nu: Nusselt number
- τ: skin friction
- τp: primary skin friction
- τs: secondary skin friction
- F: complex velocity
- f: primary velocity
- g: secondary velocity
- ϕ: volume fraction of nanoparticles
- Ω: angular velocity
- t: time
- i: imaginary unit
- Pr: Prandtl number
- Gr: Grashof number
- K: porosity
- CNTs: carbon nanotubes
- nf: nanofluid (subscript)
- f: fluid (subscript)

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## How to cite and reference

Wan Nura'in Nabilah Noranuar, Ahmad Qushairi Mohamad, Sharidan Shafie, Ilyas Khan, Mohd Rijal Ilias and Lim Yeou Jiann (November 8th 2021). Analysis of Heat Transfer in Non-Coaxial Rotation of Newtonian Carbon Nanofluid Flow with Magnetohydrodynamics and Porosity Effects [Online First], IntechOpen, DOI: 10.5772/intechopen.100623.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436328172683716, "perplexity": 2668.6516654631423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00008.warc.gz"}
http://duken.club/what-does-the-slope-mean-math/
# What Does The Slope Mean Math

[Image gallery residue: the page consisted of eighteen captioned images about the meaning of slope, covering slope definitions (including negative slope and steepness), the slope formula, slope-intercept form y = mx + b, finding slope from a graph or from two points, slope calculators, and applied slope problems such as wheelchair-ramp construction.]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988834261894226, "perplexity": 4940.932629698535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00277.warc.gz"}
https://www.satyenkale.com/pubs/online-sparse-linear-regression/
Dean Foster, Satyen Kale, and Howard Karloff Proceedings of 29th Conference on Learning Theory (COLT), 2016 We consider the online sparse linear regression problem, which is the problem of sequentially making predictions observing only a limited number of features in each round, to minimize regret with respect to the best sparse linear regressor, where prediction accuracy is measured by square loss. We give an inefficient algorithm that obtains regret bounded by $$\tilde{O}(\sqrt{T})$$ after $$T$$ prediction rounds. We complement this result by showing that no algorithm running in polynomial time per iteration can achieve regret bounded by $$O(T^{1-\delta})$$ for any constant $$\delta > 0$$ unless $$NP \subseteq BPP$$. This computational hardness result resolves an open problem presented in COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013). This hardness result holds even if the algorithm is allowed to access more features than the best sparse linear regressor up to a logarithmic factor in the dimension.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9074408411979675, "perplexity": 464.83106026539156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00040.warc.gz"}
https://plainmath.net/post-secondary?start=20
# College math questions and answers

Recent questions in Post Secondary

Thomas Hubbard 2022-05-24 Answered

### What is the mean, median, and mode of 1, 4, 5, 6, 10, 25?

Cara Duke 2022-05-24 Answered

### What is the mean of 58, 76, 40, 35, 46, 45, 0, and 100?

wanaopatays 2022-05-24 Answered

### Prove combinatorially the recurrence $p_n(k)=p_n(k-n)+p_{n-1}(k-1)$ for all $0<n\le k$.

Recall that $p_n(k)$ counts the number of partitions of $k$ into exactly $n$ positive parts (or, alternatively, into any number of parts the largest of which has size $n$).

skottyrottenmf 2022-05-24 Answered

### Why are measures of central tendency essential to descriptive statistics?

Mauricio Hayden 2022-05-24 Answered

### Acos 90 degree matrix transformation.

I'm writing a program that transforms a matrix of points by 90°. In it, I have two vectors from which I am performing the rotation. Both vectors are normalized:

$A:\ x=\sqrt{1/3},\ y=\sqrt{1/3},\ z=-\sqrt{1/3}$
$B:\ x=\sqrt{1/3},\ y=\sqrt{1/3},\ z=\sqrt{1/3}$

As I visualize it, these two vectors are separated by 90°, but the dot product of these vectors comes out to $\frac{1}{3}$:

$$\sqrt{1/3}\cdot\sqrt{1/3}+\sqrt{1/3}\cdot\sqrt{1/3}+\sqrt{1/3}\cdot\left(-\sqrt{1/3}\right)=\frac{1}{3}+\frac{1}{3}-\frac{1}{3}=\frac{1}{3}$$

My code is then supposed to use arc-cos to come up with 90° from this number, but I believe arc-cos needs an input of 0 in order to produce a result of 90°. What am I missing here?

patzeriap0 2022-05-24 Answered

### The mean length of 6 rods is 44.2 cm. The mean length of 5 of them is 46 cm. How long is the sixth rod?

osmane5e 2022-05-24 Answered

### Bounding $A(n,d)=\max\{M : \text{there exists a code with parameters } (n,M,d)\}$

I would like to prove that this lower bound on $A(n,d)=\max\{M : \text{there exists a code with parameters } (n,M,d)\}$ (where $n$ is the length of the block code, $M$ the number of words of the code, and $d$ the minimal distance of the code) holds:

$$\frac{2^n}{\sum_{i=0}^{d-1}\binom{n}{i}}\le A(n,d)$$

To see this, I have tried to connect the inequality with the cardinality of the ball of radius $d-1$, that is $\sum_{i=0}^{d-1}\binom{n}{i}2^i$, so for sure that quantity is less than $2^n\sum_{i=0}^{d-1}\binom{n}{i}$. But I don't see if this is helping me at all... I would appreciate some guidance, help, hint,... Thanks!

Quintacj 2022-05-24 Answered

### How to solve this differential equation:

$$x\frac{dy}{dx}=y+x\frac{e^x}{e^y}\,?$$

I tried to rearrange the equation to the form $f\left(\frac{y}{x}\right)$, but I couldn't; thus I couldn't use $v=\frac{y}{x}$ to solve it.

cricafh 2022-05-24 Answered

### Let $\mathcal{g}$ be a Lie algebra and let $a,b,c\in\mathcal{g}$ be such that $ab=ba$ and $[a,b]=c\ne 0$. Let $\mathcal{h}$ be the subalgebra spanned by $a$, $b$ and $c$.
How to prove that $\mathcal{h}$ is isomorphic to the strictly upper triangular algebra $\mathcal{n}(3,F)$?

Problem: If $\mathcal{h}\cong\mathcal{n}(3,F)$ then $\exists\, a',b',c'\in\mathcal{n}(3,F)$ with $a'b'=b'a'$ and $[a',b']=c'$ as in $\mathcal{h}$. But then $c'$ must equal $0$, whereas $c\in\mathcal{h}$ is not $0$?

Nylah Burnett 2022-05-24 Answered

### Let $R$ be a commutative finite dimensional $K$-algebra over a field $K$ (for example the monoid ring of a finite monoid over a field). Assume we have $R$ in GAP. Then we can check whether $R$ is semisimple using the command RadicalOfAlgebra(R). When the value is 0, $R$ is semisimple. Thus $R$ can be written as a finite product of finite field extensions of $K$.

Question: Can we obtain those finite field extensions of $K$, or at least their number and $K$-dimensions, using GAP?

Thomas Hubbard 2022-05-24 Answered

### What is the Z-score for a 10% confidence level (i.e. 0.1 p-value)?

I want the standard answer used for including in my thesis write-up. I googled and used Excel to calculate as well, but they are all slightly different. Thanks.

Nerya Fozailov 2022-05-23

il2k3s2u7 2022-05-23 Answered

### In how many ways can we distribute 2 types of gifts?

The problem: In how many ways can we distribute 2 types of gifts, $m$ of the first kind and $n$ of the second, to $k$ kids, if there can be kids with no gifts?

From the stars and bars method I know that you can distribute $m$ objects to $k$ boxes in $\binom{m+k-1}{k-1}$ ways. So in my case I can distribute $m$ gifts to $k$ kids in $\binom{m+k-1}{k-1}$ ways, and similarly $n$ gifts in $\binom{n+k-1}{k-1}$ ways. So if we have to distribute both the $m$ and the $n$ gifts, we can first distribute the $m$ gifts in $\binom{m+k-1}{k-1}$ ways, then the $n$ gifts in $\binom{n+k-1}{k-1}$ ways, so in total we have

$$\binom{m+k-1}{k-1}\cdot\binom{n+k-1}{k-1}\quad\text{ways.}$$

Is my reasoning correct?

What about when we have to give at least 1 gift to each kid, can we do that in

$$\binom{m-1}{k-1}\cdot\binom{n+k-1}{k-1}+\binom{n-1}{k-1}\cdot\binom{m+k-1}{k-1}\quad\text{ways?}$$

Timiavawsw9 2022-05-23 Answered

### Find the limit of:

$$\lim_{x\to\frac{\pi}{3}}\frac{1-2\cos x}{\pi-3x}$$

Landyn Jimenez 2022-05-23 Answered

### How to approach this discrete graph question about trees.

A tree contains exactly one vertex of degree $d$ for each $d\in\{3,9,10,11,12\}$. Every other vertex has degree 1 or 2. How many vertices have degree 1?

I've only tried manually drawing this tree and trying to figure it out that way; however, this makes the drawing far too big to complete. I'm sure there are more efficient methods of finding the solution. Could someone please point me in the right direction!

Hailey Newton 2022-05-23 Answered

### If $\mathcal{A}$ is a commutative $C^{*}$-subalgebra of $\mathcal{B}(\mathcal{H})$, where $\mathcal{H}$ is a Hilbert space, then the weak operator closure of $\mathcal{A}$ is also commutative.

I can not prove this.
hushjelpw4 2022-05-23 Answered

### Representing a sentence with quantified statements

My approach to this question: $\exists x\,(P(x)\to R(x))$

I cannot verify if my answer is correct; any help to verify my answer would be appreciated, and if I did wrong, any help to explain why would also be appreciated.

Wayne Steele 2022-05-23 Answered

### How do you find the range of 4, 6, 3, 4, 5, 4, 7, 3?

res2bfitjq 2022-05-23 Answered

### I have a first order PDE:

$$xu_x+(x+y)u_y=1$$

With the initial condition:

I have calculated the result in Mathematica: $u(x,y)=\frac{y}{x}$, but I am trying to solve the equation myself, and I have had no luck so far. I tried the method of characteristics, but I could not get the correct result. I would appreciate any help, or maybe even the whole procedure.

groupweird40 2022-05-23 Answered

### Exercise involving DFT

The Fourier matrix is a transformation matrix where each component is defined as $F_{ab}=\omega^{ab}$, where $\omega=e^{2\pi i/n}$. The indices of the matrix range from $0$ to $n-1$ (i.e. $a,b\in\{0,\dots,n-1\}$).

As such we can write the Fourier transform of a complex vector $v$ as $\hat{v}=Fv$, which means that

$$\hat{v}_f=\sum_{a\in\{0,\dots,n-1\}}\omega^{af}v_a$$

Assume that $n$ is a power of 2. I need to prove that for all odd $c\in\{0,\dots,n-1\}$, every $d\in\{0,\dots,n-1\}$ and every complex vector $v$, if $w_b=v_{cb+d}$, then for all $f\in\{0,\dots,n-1\}$ it is the case that

$$\hat{w}_{cf}=\omega^{-fd}\,\hat{v}_f$$

I was able to prove it for $n=2$ and $n=4$, so I tried an inductive approach. This doesn't seem to be the best way to go; I am stuck at the inductive step and I don't think I can go any further, which indicates that this isn't the right approach.

Note that I am not looking for a full solution, just looking for a hint.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 70, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9156633615493774, "perplexity": 226.83106202882925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00307.warc.gz"}
https://aleksas.eu/publication/garcia-montero-2019-kjk/
# Probing the evolution of heavy-ion collisions using direct photon interferometry

### Abstract

We investigate the measurement of Hanbury Brown-Twiss (HBT) photon correlations as an experimental tool to discriminate different sources of photon enhancement, which are proposed to simultaneously reproduce the direct photon yield and the azimuthal anisotropy measured in nuclear collisions at RHIC and the LHC. To showcase this, we consider two different scenarios in which we enhance the yields from standard hydrodynamical simulations. In the first, additional photons are produced from the early pre-equilibrium stage computed from the *bottom-up* thermalization scenario. In the second, the thermal rates are enhanced close to the pseudo-critical temperature $T_c\approx 155~\text{MeV}$ using a phenomenological ansatz. We compute the correlators for relative momenta $q_o$, $q_s$ and $q_l$ for different transverse pair momenta, $K_\perp$, and find that the longitudinal correlation is the most sensitive to different photon sources. Our results also demonstrate that including anisotropic pre-equilibrium rates enhances non-Gaussianities in the correlators, which can be quantified using the kurtosis of the correlators. Finally, we study the feasibility of measuring a direct photon HBT signal in the upcoming high-luminosity LHC runs. Considering only statistical uncertainties, we find that with the projected $\sim 10^{10}$ heavy ion events a measurement of the HBT correlations for $K_{\perp} \ll 1~\text{GeV}$ is statistically significant.

Type: Publication
Phys. Rev. C 102
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787022471427917, "perplexity": 2360.8143719097748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038061562.11/warc/CC-MAIN-20210411055903-20210411085903-00325.warc.gz"}
http://clay6.com/qa/4057/find-the-equation-of-a-curve-passing-through-the-point-0-2-given-that-the-s
# Find the equation of a curve passing through the point (0,2), given that the sum of the coordinates of any point on the curve exceeds the magnitude of the slope of the tangent to the curve at that point by 5.
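The page leaves the question unanswered. Below is a sketch of the standard solution, added here for completeness; it reads the condition with the usual textbook convention that the slope term enters with a plus sign, so the magnitude bars can be dropped.

```latex
% sum of coordinates exceeds the slope by 5:
%   x + y = dy/dx + 5   =>   dy/dx - y = x - 5   (linear first-order ODE)
\[
  \mu(x) = e^{-x}, \qquad
  y\,e^{-x} = \int (x-5)\,e^{-x}\,dx = -(x-4)\,e^{-x} + C
  \;\Longrightarrow\; y = 4 - x + C e^{x}.
\]
% through (0,2): 2 = 4 + C  =>  C = -2, giving  x + y - 4 = -2e^{x}.
```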
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9315878748893738, "perplexity": 57.63968143862375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718284.75/warc/CC-MAIN-20161020183838-00290-ip-10-171-6-4.ec2.internal.warc.gz"}
https://arxiv.org/abs/astro-ph/0402498
astro-ph
# Title: High sensitivity measurements of the CMB power spectrum with the extended Very Small Array
Abstract: We present deep Ka-band ($\nu \approx 33$ GHz) observations of the CMB made with the extended Very Small Array (VSA). This configuration produces a naturally weighted synthesized FWHM beamwidth of $\sim 11$ arcmin which covers an $\ell$-range of 300 to 1500. On these scales, foreground extragalactic sources can be a major source of contamination to the CMB anisotropy. This problem has been alleviated by identifying sources at 15 GHz with the Ryle Telescope and then monitoring these sources at 33 GHz using a single-baseline interferometer co-located with the VSA. Sources with flux densities $\gtrsim 20$ mJy at 33 GHz are subtracted from the data. In addition, we calculate a statistical correction for the small residual contribution from weaker sources that are below the detection limit of the survey. The CMB power spectrum corrected for Galactic foregrounds and extragalactic point sources is presented. A total $\ell$-range of 150-1500 is achieved by combining the complete extended-array data with earlier VSA data in a compact configuration. Our resolution of $\Delta \ell \approx 60$ allows the first 3 acoustic peaks to be clearly delineated. This is achieved by using mosaiced observations in 7 regions covering a total area of 82 sq. degrees. There is good agreement with WMAP data up to $\ell=700$, where the WMAP data run out of resolution. For higher $\ell$-values out to $\ell = 1500$, the agreement in power spectrum amplitudes with other experiments is also very good, despite differences in frequency and observing technique.
Comments: 16 pages. Accepted in MNRAS (minor revisions)
Subjects: Astrophysics (astro-ph)
Journal reference: Mon.Not.Roy.Astron.Soc.353:732,2004
DOI: 10.1111/j.1365-2966.2004.08206.x
Cite as: arXiv:astro-ph/0402498 (or arXiv:astro-ph/0402498v2 for this version)
## Submission history
From: Clive Dickinson [view email]
[v1] Fri, 20 Feb 2004 18:33:54 GMT (615kb)
[v2] Sat, 10 Jul 2004 03:24:35 GMT (583kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8615219593048096, "perplexity": 3691.8565682425165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00438-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.global-sci.org/intro/article_detail/cicp/10547.html
Volume 23, Issue 3
The Competition Between the CDW and the Superconducting State in Valence Skip Compounds
Commun. Comput. Phys., 23 (2018), pp. 773-780. Published online: 2018-03
• Abstract
In some superconductors the charge density wave (CDW) state is adjacent to the superconducting state in the phase diagram. This CDW phase can be collapsed either by pressure or by chemical doping, depending on the compound. Among them, in so-called valence skip compounds, a large charge fluctuation with a large electron-phonon interaction is expected. We performed a first-principles study and investigated how the CDW gap is collapsed for several valence-skip compounds, i.e. SnX$_3$, RbTlX$_3$ (X=F,Cl,Br,I) and CsTlI$_3$. For all these compounds we found that the CDW gap is rather robust under a uniform volume change; on the contrary, the magnitude of the CDW gap strongly depends on the position of the anion. We found that this CDW gap is already collapsed at ambient pressure in SnBr$_3$, SnI$_3$ and CsTlI$_3$.
• Keywords
Valence skip, CDW, superconductivity, electronic structure, RbTlX$_3$, SnX$_3$, BaBiO$_3$.
Izumi Hase, Takashi Yanagisawa & Kenji Kawashima. (2020). The Competition Between the CDW and the Superconducting State in Valence Skip Compounds. Communications in Computational Physics. 23 (3). 773-780. doi:10.4208/cicp.OA-2017-0060
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.908571183681488, "perplexity": 3471.3386808455775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00429.warc.gz"}
http://rcd.ics.org.ru/authors/detail/2611-elena_lega
# Elena Lega
## Publications:
Guzzo M., Lega E.
The Nekhoroshev Theorem and the Observation of Long-term Diffusion in Hamiltonian Systems
2016, vol. 21, no. 6, pp. 707-719
Abstract
The long-term diffusion properties of the action variables in real analytic quasi-integrable Hamiltonian systems are a largely open problem. The Nekhoroshev theorem provides bounds on such diffusion, as well as a set of techniques, constituting its proof, which have also been used to inspect the instability of the action variables on times longer than the Nekhoroshev stability time. In particular, the separation of the motions into a superposition of a fast drift oscillation and an extremely slow diffusion along the resonances has been observed in several numerical experiments. Global diffusion, which occurs when the range of the slow diffusion largely exceeds the range of the fast drift oscillations, needs times larger than the Nekhoroshev stability times to be observed, and despite the power of modern computers, it has been detected only in a small interval of the perturbation parameter, just below the critical threshold of application of the theorem. In this paper we show through an example how sharp this phenomenon is.
Keywords: Hamiltonian systems, Nekhoroshev theorem, long-term stability, diffusion
Citation: Guzzo M., Lega E., The Nekhoroshev Theorem and the Observation of Long-term Diffusion in Hamiltonian Systems, Regular and Chaotic Dynamics, 2016, vol. 21, no. 6, pp. 707-719
DOI: 10.1134/S1560354716060101
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9385431408882141, "perplexity": 947.6475079873848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526401.41/warc/CC-MAIN-20190720004131-20190720030131-00164.warc.gz"}
http://mathoverflow.net/questions/99739/how-do-you-bound-exponent-of-x21-yp
# How do you bound the exponent of x^2+1=y^p, for p a prime exponent, using linear forms in logs?
So far I have $(x-i)(x+i)=y^p$, where the two factors are coprime, and hence $x+i=(a+ib)^p$. Now how do I get a linear form in logarithms so that I can find an upper bound on $p$?
- That is not what the link is supposed to mean; I'm not asking for a full proof of Catalan's conjecture, but consider this particular example. –  Kale Jun 15 '12 at 20:34
- Perhaps you can give some references that use the techniques you want to apply. –  Will Jagy Jun 15 '12 at 23:53
- You can look up the work of Tijdeman for this kind of argument. But why would you not want to use a known theorem? –  Felipe Voloch Jun 16 '12 at 0:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550269246101379, "perplexity": 515.7437712513115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096991.38/warc/CC-MAIN-20150627031816-00132-ip-10-179-60-89.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/115208/given-solubility-of-sodium-carbonate-find-mass-of-hydrate-sodium-carbonate-diss
Given the solubility of sodium carbonate, find the mass of hydrated sodium carbonate dissolved
I am working on a problem from my scholarship exam practice (MEXT). I am not quite sure if I got it right. The question was provided with multiple choices (a) to (f) below:
The solubility of sodium carbonate in $\pu{100 g}$ of water is $\pu{25.0 g}$ at $\pu{22 ^{\circ}C}$. How many grams of the hydrate $\ce{Na2CO3.10H2O}$ can be dissolved in $\pu{100 g}$ of water at $\pu{22 ^{\circ}C}$?
(a) $\pu{0.556 g}$; (b) $\pu{0.762 g}$; (c) $\pu{9.27 g}$; (d) $\pu{67.5 g}$; (e) $\pu{81.7 g}$; (f) $\pu{117 g}$
I honestly do not know how to solve this problem. My best guess is to first find the moles of sodium carbonate that can be dissolved in the water (from the $\pu{25.0 g}$), equate that to the moles of the hydrate (1:1), and then find the mass of the hydrate by multiplying by its molecular weight; my answer would then be $\pu{67.5 g}$. There is no answer key provided, so I would like to hear other opinions and get some advice.
To answer this kind of problem, it is best to start with an elimination approach. The solubility of sodium carbonate in $\pu{100 g}$ of water is given as $\pu{25.0 g}$ at $\pu{22 ^{\circ}C}$. From that you know the solubility of its hydrate should be more than $\pu{25.0 g}$ at $\pu{22 ^{\circ}C}$, because the mass of the hydrate includes the mass of its water of crystallization as well (as your calculation shows). Therefore you can eliminate answers (a), (b), and (c) directly, because they are less than 25 and cannot be correct. Your calculation shows the answer must in fact be greater than $\pu{67.5 g}$ (so answer (d) is incorrect as well). It must be greater because you forgot to include the additional salt that can dissolve in the 10 equivalents of water brought in by the added hydrate: the molar mass of $\ce{Na2CO3}$ is $\pu{106 g mol^{-1}}$, while the molar mass of $\ce{Na2CO3.10H2O}$ is $\pu{286 g mol^{-1}}$, so each mole of hydrate carries $\pu{180 g}$ (about $\pu{180 mL}$) of water. That is a lot, and we need to take it into account in this question.
Now only two answers remain to be considered: (e) and (f). For the calculation, suppose $x~\pu{g}$ of $\ce{Na2CO3.10H2O}$ dissolves in $\pu{100 g}$ of water at $\pu{22 ^{\circ}C}$. Thus,
$$\text{amount of }\ce{Na2CO3}\text{ in the solution} = \frac{106}{286}\,x~\pu{g}$$
$$\text{amount of }\ce{H2O}\text{ in the solution} = 100 + \frac{10 \times 18}{286}\,x~\pu{g}$$
From the given data for the anhydrous salt, we can conclude that upon saturation:
$$\text{amount of }\ce{H2O}\text{ in the solution} = 4\times \text{amount of }\ce{Na2CO3}\text{ in the solution}$$
$$\therefore\; 100 + \frac{180}{286}\,x = 4 \times \frac{106}{286}\,x$$
Solving for $x$: $(4 \times 106 - 180)\,x = 28600$, and therefore $x = \frac{28600}{244} = 117.2$. Therefore your answer is (f) ($\pu{117 g}$).
Your calculation would be good if you had not forgotten that hydrates contain water. The solubility of the anhydrous carbonate is 25 g in 100 g of water. Both forms of the carbonate, once dissolved, are the same compound, forming the same ions plus, in the hydrate's case, releasing water. So in the case of the hydrate, the carbonate:water ratio at saturation must be 1:4 as well. $M_{\ce{Na2CO3}}=106~\mathrm{g/mol}$ and $M_{\ce{Na2CO3.10H2O}}=286~\mathrm{g/mol}$. Using $x$ grams of the hydrate plus $100$ grams of water produces a solution of $x \cdot 106/286~\mathrm{g}$ of $\ce{Na2CO3}$ in $100 + x \cdot 180/286~\mathrm{g}$ of water. I will leave the rest to you, to give you the honour of some effort, as this is rather the homework class of question. See Homework.
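The final arithmetic in the first answer is easy to mis-key; a short check (an addition to the thread, using exact fractions) reproduces the 117 g result:

```python
from fractions import Fraction

M_SALT, M_WATER_10 = 106, 180          # g/mol: Na2CO3 and 10 * H2O
# saturation condition: water mass = 4 * salt mass (25 g salt per 100 g water)
#   100 + (180/286) x = 4 * (106/286) x   =>   (4*106 - 180) x = 28600
x = Fraction(28600, 4 * M_SALT - M_WATER_10)
print(x, "=", float(x), "g of Na2CO3.10H2O")   # 7150/61 ≈ 117.2 g
```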
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 45, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003586530685425, "perplexity": 480.0498195146518}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00014.warc.gz"}
https://ja.overleaf.com/articles/newtons-method-cycles/pmtgdpygfwkc
Abstract: Based on the paper Sometimes Newton's Method Cycles, we first asked ourselves whether there are any functions for which Newton's method cycles from every non-trivial initial guess. We found a way to create functions that cycle between a set number of points, for any non-trivial initial guess, when Newton's method is applied. We worked these constructions out for 2-cycles, 3-cycles and 4-cycles, and then generalized them to k-cycles. After generalizing Newton's method, we found the conditions that skew the cycles into a spiral pattern which will either converge, diverge or become a near-cycle. Once we had obtained all this information, we explored additional questions that arose from our initial exploration of Newton's method.
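The abstract does not reproduce the paper's constructions, but a well-known illustration of the phenomenon it describes is $f(x)=\operatorname{sign}(x)\sqrt{|x|}$, for which the Newton step simplifies algebraically to $x_{k+1}=-x_k$: a 2-cycle from every nonzero starting guess. This demo is an addition, not taken from the paper.

```python
import math

def f(x):
    # sign(x) * sqrt(|x|): Newton's method 2-cycles on this function
    return math.copysign(math.sqrt(abs(x)), x)

def fprime(x):
    return 1.0 / (2.0 * math.sqrt(abs(x)))

x = 1.7                       # any nonzero initial guess works
for _ in range(6):
    x = x - f(x) / fprime(x)  # algebraically: x - 2x = -x
    print(x)                  # alternates -1.7, 1.7, -1.7, ...
```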
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838822066783905, "perplexity": 1176.9509958891447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00613.warc.gz"}
https://par.nsf.gov/search/author:%22Mapelli,%20Michela%22
# Search for: All records, Creators/Authors contains: "Mapelli, Michela"
1. The detection of the binary black hole merger GW190521, with primary mass $85^{+21}_{-14}\,M_\odot$, proved the existence of black holes in the theoretically predicted pair-instability gap (${\sim}60$-$120\,M_\odot$) of their mass spectrum. Some recent studies suggest that such massive black holes could be produced by the collision of an evolved star with a carbon-oxygen core and a main-sequence star. Such a post-coalescence star could end its life avoiding the pair-instability regime, with a direct collapse of its very massive envelope. It is still not clear, however, how the collision shapes the structure of the newly produced star and how much mass is actually lost in the impact. We investigated this issue by means of hydrodynamical simulations with the smoothed particle hydrodynamics code StarSmasher, finding that a head-on collision can remove up to 12% of the initial mass of the colliding stars. This is a non-negligible percentage of the initial mass and could affect the further evolution of the stellar remnant, particularly in terms of the final mass of a possibly forming black hole. We also found that the main-sequence star can plunge down to the outer boundary of the carbon-oxygen core of the primary, changing […]
2. ABSTRACT The association of GRB170817A with a binary neutron star (BNS) merger has revealed that BNSs produce at least a fraction of short gamma-ray bursts (SGRBs). As gravitational wave (GW) detectors push their horizons, it is important to assess coupled electromagnetic (EM)/GW probabilities and maximize observational prospects. Here, we perform BNS population synthesis calculations with the code mobse, seeding the binaries in galaxies at three representative redshifts, $z = 0.01$, $0.1$, and $1$ of the Illustris TNG50 simulation. The binaries are evolved and their locations numerically tracked in the host galactic potentials until merger. Adopting the microphysics parameters of GRB170817A, we numerically compute the broad-band light curves of jets from BNS mergers, with the afterglow brightness dependent on the local medium density at the merger site. We perform Monte Carlo simulations of the resulting EM population assuming either a random viewing angle with respect to the jet, or a jet aligned with the orbital angular momentum of the binary, which biases the viewing angle probability for GW-triggered events. We find a gamma-ray detection probability of ${\sim}2\%$, $10\%$, and $40\%$ for BNSs at $z = 1$, $0.1$, and $0.01$, respectively, for the random case, rising to ${\sim}75\%$ […]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014036417007446, "perplexity": 1607.9573256629815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00593.warc.gz"}
http://math.stackexchange.com/questions/160123/totally-uni-modular-matrices
# Totally Unimodular Matrices
A matrix is totally unimodular if the determinant of every square submatrix lies in {+1, 0, -1}. My question is: "Is there a way to transform (linearly or otherwise) a general matrix into a totally unimodular matrix?" or, "Are there only certain matrices that can be transformed in such a way?" This is for an application in linear (convex) optimization. Thanks.
Edit: As a linear-based method, I was thinking of multiplying the starting matrix by a Gaussian matrix (or one drawn from some other distribution) and then using the sign() function to restrict the values to {-1, 1}, with small values probably set to 0. I mostly want to see if anyone knows any transformations for restricting values.
- What properties would you like the map to have? The zero map certainly satisfies your condition, and is even linear, but I doubt it's what you're looking for. Your question is too vague. – tomasz Jun 19 '12 at 1:07
- I agree: what properties is it supposed to satisfy? Given an integer matrix $A$, it is transformable to a diagonal integer matrix, and if we normalize each row, we get a totally unimodular matrix. But this transformation, for example, wouldn't preserve integrality of a polytope defined by $A$. What do you want to do with this? – Tim Seguine Jan 20 '13 at 14:48
Since every entry of a totally unimodular matrix is $\pm 1$ or $0$, there are only finitely many such matrices of any given size, and only countably many in all. So there is certainly no one-to-one transformation from general matrices to totally unimodular matrices.
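Since total unimodularity quantifies over every square submatrix, a brute-force checker makes the definition concrete (an addition, not from the thread; it is exponential in the matrix dimensions, so only sensible for small examples):

```python
import itertools
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """Check that every square submatrix has determinant in {-1, 0, +1}."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(abs(d) - 1.0)) > tol:
                    return False
    return True

print(is_totally_unimodular([[1, -1, 0], [0, 1, -1]]))  # True: an incidence-type matrix
print(is_totally_unimodular([[1, 1], [-1, 1]]))         # False: its 2x2 determinant is 2
```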
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.816535234451294, "perplexity": 572.9389410027696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00122-ip-10-164-35-72.ec2.internal.warc.gz"}
https://forum.azimuthproject.org/discussion/2132/p1
# Exercise 9 - Chapter 3
edited June 2018
The free category on the graph shown here: $$\tag{3.6} \textbf{3} := \textbf{Free}( [ v_1 \overset{f_1}{\rightarrow} v_2 \overset{f_2}{\rightarrow} v_3 ] )$$ It has three objects (the three vertices) and six morphisms (the six paths in the graph, including the length-zero path at each vertex). Create six names, one for each of the six morphisms in 3. Write down a six-by-six table and label the rows and columns by the six names you chose. 1. Fill out the table by writing the name of the composite in each cell. 2. Where are the identities?
• Options 1. edited May 2018 $$\begin{array}{ c l c c c c c c} hom_3 & f_1 & f_2 & \text{?} & \text{?} & \text{?} & \text{?} \\ \hline f_1 & ? & ? & ? & ? & ? & ? \\ f_2 & ? & ? & ? & ? & ? & ? \\ \text{?} & ? & ? & ? & ? & ? & ? \\ \text{?} & ? & ? & ? & ? & ? & ? \\ \text{?} & ? & ? & ? & ? & ? & ? \\ \text{?} & ? & ? & ? & ? & ? & ? \end{array}$$
• Options 2. edited May 2018 1. Define $$f_2\circ f_1:=f_3$$. Entries will be of the form $$\text{Top}\circ\text{Side}$$, or $$NA$$ if no such composition is possible. $$\begin{array}{ c l c c c c c c} \text{Hom}_\mathbf{3} & f_1 & f_2 & f_3 & \text{id}_1 & \text{id}_2 & \text{id}_3 \\ \hline f_1 & NA & f_3 & NA & NA & f_1 & NA \\ f_2 & NA & NA & NA & NA & NA & f_2 \\ f_3 & NA & NA & NA & NA & NA & f_3 \\ \text{id}_1 & f_1 & NA & f_3 & \text{id}_1 & NA & NA \\ \text{id}_2 & NA & f_2 & NA & NA & \text{id}_2 & NA \\ \text{id}_3 & NA & NA & NA & NA & NA & \text{id}_3\end{array}$$ 2. The identities sit on the diagonal: a morphism can be composed with itself only when its domain equals its codomain, and in this category the identities are the only such morphisms, with $$\text{id}\circ\text{id}=\text{id}$$.
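As a cross-check of the table above (an addition, not part of the thread), the composition rule can be encoded in a few lines and the defined composites enumerated mechanically:

```python
# objects are 1, 2, 3; each morphism is listed with its (domain, codomain)
mor = {"id1": (1, 1), "id2": (2, 2), "id3": (3, 3),
       "f1": (1, 2), "f2": (2, 3), "f3": (1, 3)}

def compose(g, f):
    """Name of g after f, or None when domain/codomain do not match."""
    if mor[f][1] != mor[g][0]:
        return None
    if f.startswith("id"):                 # g . id = g
        return g
    if g.startswith("id"):                 # id . f = f
        return f
    return "f3" if (g, f) == ("f2", "f1") else None

for g in mor:
    for f in mor:
        c = compose(g, f)
        print(f"{g} o {f} = {c}" if c else f"{g} o {f}: NA")
```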
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510875105857849, "perplexity": 934.5814059830749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00378.warc.gz"}
https://infoscience.epfl.ch/record/171600?ln=fr
## Implementing global Abelian symmetries in projected entangled-pair state algorithms
Due to the unfavorable scaling of tensor-network methods with the refinement parameter M, new approaches are necessary to improve the efficiency of numerical simulations based on such states, in particular for gapless, strongly entangled systems. In one-dimensional density matrix renormalization group methods, the use of Abelian symmetries has led to large computational gains. In higher-dimensional tensor networks, this is associated with significant technical effort and additional approximations. We explain a formalism to implement such symmetries in two-dimensional tensor-network states and present benchmark results that confirm the validity of these approximations in the context of projected entangled-pair state algorithms.
Published in: Physical Review B, 83 - Year 2011
Record created 2011-12-16, modified 2018-12-03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8067341446876526, "perplexity": 2736.5133791555936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526714.15/warc/CC-MAIN-20190720214645-20190721000645-00012.warc.gz"}
https://www.physicsread.com/latex-floor-symbol/
# How to write the floor symbol ⌊x⌋ or a function like floor() in LaTeX?
Mathematically, the floor function is denoted by the floor symbol ⌊x⌋ or by floor(x). The floor symbol looks like square brackets ⌊ ⌋ with the top portions removed. LaTeX does not have a single command for the floor symbol; you need to use a separate command for each bracket.
Symbol: floor symbol · Type: mathematics · Package: default · Command: \lfloor...\rfloor · Example: \lfloor x \rfloor → ⌊x⌋
Thus, the \lfloor and \rfloor commands produce the left and right brackets without their top portions.
\documentclass{article} \begin{document} $$\lfloor x \rfloor$$ $$\lfloor x^{2} \rfloor$$ $$\lfloor \frac{1}{x} \rfloor$$ \end{document}
Output :
LaTeX has no individual pre-defined command for the floor symbol, but you can create one yourself with \newcommand. As a result, you don't need to write the full expression for the floor symbol again and again.
\documentclass{article} \newcommand{\floor}[1]{\lfloor #1 \rfloor} \begin{document} $$\floor{x}$$ $$\floor{x^2}$$ $$\floor{\frac{1}{x}}$$ \end{document}
Output :
## Big floor symbol in LaTeX
Most of the time, mathematical expressions are passed as arguments to the floor symbol, and the size of the floor symbol should match the size of the expression. See the example below, where \lfloor and \rfloor are used around the fraction 1/x.
\documentclass{article} \begin{document} $$\lfloor \frac{1}{x} \rfloor$$ $$\lfloor \frac{1}{x^2} \rfloor$$ $$\lfloor \frac{1}{x+1} \rfloor$$ \end{document}
Output :
In this case, however, 1/x is taller than the floor symbol, which is not correct. For this, a big floor symbol of responsive size is required: use the \left and \right commands before \lfloor and \rfloor, and the floor symbol resizes to its contents.
\documentclass{article} \begin{document} $$\left \lfloor \frac{1}{x} \right \rfloor$$ $$\left \lfloor \frac{1}{x^2} \right \rfloor$$ $$\left \lfloor \frac{1}{x+1} \right \rfloor$$ \end{document}
Output :
If you need to use the big floor symbol more than once, create an independent command with \newcommand. For example
\documentclass{article} \newcommand{\floor}[1]{\left \lfloor #1 \right \rfloor} \begin{document} $$\floor{\frac{1}{x}}$$ $$\floor{\frac{1}{x^2}}$$ $$\floor{\frac{1}{x+1}}$$ \end{document}
Output :
You can also use the four types of big commands before \lfloor and \rfloor for a big floor symbol.
\documentclass{article} \begin{document} $$\Bigg \lfloor \bigg \lfloor \Big \lfloor \big \lfloor x \big \rfloor \Big \rfloor \bigg \rfloor \Bigg \rfloor$$ \end{document}
Output :
The four big commands do not resize with the shape of the expression, however. For example
\documentclass{article} \newcommand{\floor}[2]{#2\lfloor #1 #2\rfloor} \begin{document} $$\floor{x}{\big} \; \floor{x}{\Big} \; \floor{x}{\bigg} \; \floor{x}{\Bigg}$$ \end{document}
Output :
So, you have seen multiple methods for the big floor symbol, but the best practice is the \left\lfloor x \right\rfloor syntax.
## Use the mathtools package for the floor symbol in LaTeX
Instead of denoting the floor symbol with two separate bracket commands, you can create a new command using the mathtools package and denote it with a single macro.
\documentclass{article} \usepackage{mathtools} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \begin{document} $$\floor{x} \; \floor{\frac{x}{y}} \; \floor{\frac{\floor{x}}{x}}$$ $$\floor*{x} \; \floor*{\frac{x}{y}} \; \floor*{\frac{\floor*{x}}{x}}$$ \end{document}
Output :
If you look at the program above, you will see that \floor and \floor* return two different outputs. Using the \floor* command is best practice, because it sizes the floor symbol according to its argument. Also, you can pass the four types of big commands to \floor as an optional argument.
\documentclass{article} \usepackage{mathtools} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \begin{document} $$\floor[\big]{x} \; \floor[\Big]{x} \; \floor[\bigg]{x} \; \floor[\Bigg]{x}$$ \end{document}
Output :
## Define the floor() function in LaTeX
It is tempting to write a mathematical function name in a document as ordinary math text, which is not right, because math mode italicizes it like a product of variables! For example
\documentclass{article} \begin{document} $$Floor function : floor(x)$$ \end{document}
Output :
LaTeX uses pre-defined commands, or the \mathrm and \mbox commands, to typeset function names upright. However, there is no predefined command for the floor function.
\documentclass{article} \begin{document} % In math mode, use the \mathrm{} or \mbox{} command $$Floor function : \mathrm{floor}(x)$$ $$Floor function : \mbox{floor}(x)$$ \end{document}
Output :
## Floor of a fraction in LaTeX
When you pass a fraction within a floor symbol, the whole expression becomes taller. In this case, you must use the \left and \right commands before \lfloor and \rfloor; as a result, you get a responsive-size floor symbol around the fraction.
\documentclass{article} \newcommand{\floor}[1]{\left \lfloor #1 \right \rfloor} \begin{document} $$\floor{\frac{\floor{x}}{x}}$$ $$\floor{\frac{x}{y}}$$ $$\floor{\frac{x}{x+1}}$$ $$\floor{\frac{x}{\frac{1}{x+1}}}$$ \end{document}
Output :
Notice in the code above that the \frac{num..}{den..} command is used for the fraction part, with the numerator and denominator passed as arguments.
## Use another package like MnSymbol, fdsymbol, or stix
Each of these packages provides the same commands for the floor symbol. For example
\documentclass{article} \usepackage{MnSymbol,fdsymbol,stix} \begin{document} $$\lfloor x \rfloor$$ $$\left \lfloor \frac{1}{x} \right \rfloor$$ \end{document}
Output :
So, in my opinion, there is no need to load any package for this symbol, because the same commands work by default without any package.
## Ceiling symbol in LaTeX
The shape of the ceiling symbol ⌈x⌉ is the mirror image of the floor symbol, and the \lceil and \rceil commands produce its two brackets. For example
\documentclass{article} \begin{document} $$\lceil x \rceil$$ $$\left \lceil \frac{1}{x} \right \rceil$$ \end{document}
Output :
Everything you have learned about the floor symbol in this tutorial also applies to the ceiling symbol; just use \lceil and \rceil instead of \lfloor and \rfloor.
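Following the same mathtools pattern used for \floor above, a paired delimiter can be declared for the ceiling as well; this block is an addition that mirrors the earlier example:

\documentclass{article}
\usepackage{mathtools}
% starred form resizes to the argument; optional [\big]..[\Bigg] forms also work
\DeclarePairedDelimiter\ceil{\lceil}{\rceil}
\begin{document}
$$\ceil{x} \; \ceil*{\frac{1}{x}} \; \ceil[\Big]{x}$$
\end{document}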
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9689230918884277, "perplexity": 1996.4345279337326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00373.warc.gz"}
https://www.scienceforums.net/profile/143222-boson-quark/
# Boson Quark
1. ## Has the Riemann Hypothesis been proved here?
It seems Motl's comments are on an earlier (2019) claimed proof of Kabalaika. The recent claimed proof seems to take a different approach from the one described by Motl. By the way, there is a revised version. https://doi.org/10.6084/m9.figshare.13838111
Indeed. The first version was a bit complicated for me to read, but I can certainly pass judgement on the latest and much more elementary version https://doi.org/10.6084/m9.figshare.13838111. I will give it a quick read.
2. ## Has the Riemann Hypothesis been proved here?
I just came across this paper https://figshare.com/articles/preprint/Primorial_numbers_and_the_Riemann_Hypothesis/13838111, claiming to prove the Riemann Hypothesis. I'm not an expert on this subject, but the proof seems to be valid. I have also attached the file below.
Primorial numbers and the Riemann Hypothesis..pdf
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398481249809265, "perplexity": 569.0867759981224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00501.warc.gz"}
http://tex.stackexchange.com/questions/185415/declaremathoperator-and-align-environment
DeclareMathOperator and align environment
Minimal working example:
\documentclass{article} \usepackage{amsmath} \begin{document} \begin{align} &s + t &\text{foo}\\ &\sin x &\text{bar} \end{align} \end{document}
Here, the \sin (or any other operator declared with \DeclareMathOperator) is not aligned with the s; it has some spacing in front of it. I know that math operators insert some spacing so that x \sin y works as expected, but since there is nothing in front of the \sin here, I wouldn't expect that to apply. Typesetting the paragraph \noindent$s + t$\\$\sin x$ works as expected, too, so why doesn't it align correctly when used with the align environment?
Since the question already explains the cause of the space: the macro defined by \DeclareMathOperator is a \mathop with additional spacing in some situations. Here align adds an empty math ordinary atom to get correct spacing for binary or relation symbols, and a space is inserted between \mathord and \mathop. In this case the space can be avoided by putting an empty \mathopen in front of \sin; TeX does not insert a space between \mathord and \mathopen, nor between \mathopen and \mathop:
\documentclass{article} \usepackage{amsmath} \begin{document} \begin{align} &s + t &\text{foo}\\ &\mathopen{}\sin x &\text{bar} \end{align} \end{document}
- @Manuel: Close, but there is a case left, a preceding punctuation atom. Putting \mathclose in \scriptstyle helps: \begingroup\scriptstyle\mathclose{}\endgroup\mathord{<here the input>}\mathopen{} –  Heiko Oberdiek Jun 18 at 10:17
The align environment expects a relation symbol after &, as the point of alignment, so there is an implicit {} at the beginning of the second column (and all other even-numbered columns). This has the unfortunate consequence that, if a math operator follows &, a thin space is added, because of TeX's spacing rules: when a math operator follows an ordinary atom, a thin space is added, as in $3\sin x$, which results in 3<thin space>sin<thin space>x. Solution:
\documentclass{article} \usepackage{amsmath} \begin{document} \begin{align} &s + t &&\text{foo}\\ &\!\sin x &&\text{bar} \end{align} \end{document}
Since a thin space is automatically added because of the rule explained above, a \! will cancel it, because it is exactly the opposite of a thin space. Note the && before \text, so the conditions are left-aligned. Here's a set of variant environments where the automatic empty group is not added:
\documentclass{article} \usepackage{amsmath,etoolbox} \makeatletter \newcommand{\patch@align@preamble}{\patchcmd{\align@preamble}{{}}{}{}{}} \newenvironment{varalign} {\patch@align@preamble\start@align\@ne\st@rredfalse\m@ne} {\endalign} \newenvironment{varalign*} {\patch@align@preamble\start@align\@ne\st@rredtrue\m@ne} {\endalign} \newenvironment{varalignat} {\patch@align@preamble\start@align\z@\st@rredfalse} {\endalign} \newenvironment{varalignat*} {\patch@align@preamble\start@align\z@\st@rredtrue} {\endalign} \makeatother \begin{document} \begin{varalign} &s + t &&\text{foo}\\ &\sin x &&\text{bar} \end{varalign} text in between \begin{varalign*} &s + t &&\text{foo}\\ &\sin x &&\text{bar} \end{varalign*} text in between \begin{varalignat}{2} […]
- I think you should remark that \! is exactly the space you need to subtract in such a case; it's not just a "small amount of negative space". I learned it rather late. –  Manuel Jun 18 at 10:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888811707496643, "perplexity": 2814.2961182826716}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657104131.95/warc/CC-MAIN-20140914011144-00099-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://link.springer.com/article/10.1007%2Fs13366-011-0016-z
Beiträge zur Algebra und Geometrie / Contributions to Algebra and Geometry, Volume 52, Issue 1, pp 125–132
# Generalized inflection points of very general effective divisors on smooth curves
Original Paper
DOI: 10.1007/s13366-011-0016-z
Coppens, M. Beitr Algebra Geom (2011) 52: 125. doi:10.1007/s13366-011-0016-z
## Abstract
Let E be a very general effective divisor of degree d on a smooth curve C of genus g. We study inflection points on the linear systems |aE| for an integer a ≥ 1. They are called generalized inflection points of the invertible sheaf $${\mathcal{O}_C(E)}$$. If $${P\notin E}$$ is a generalized inflection point of $${\mathcal{O}_C(E)}$$, it is a normal generalized inflection point. If $${P\in E}$$, then P has minimal vanishing sequences for E.
### Keywords
Curve · Linear system · Inflection point · 14H51 · 14H55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9726662039756775, "perplexity": 1273.2553834251037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661768.23/warc/CC-MAIN-20160924173741-00186-ip-10-143-35-109.ec2.internal.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-1014/topics/Topic-20183/subtopics/Subtopic-265931/
# 5.02 The exponential function
Lesson
An exponential expression is an expression of the form $A^x$, where $A$ is a positive number and $x$ is a pronumeral. $A$ is called the base of the exponential. An exponential equation is an equation where one or both sides are exponential expressions. To solve an exponential equation, we can write both sides of the equation as exponentials with the same base, and then the indices will be equal.
#### Worked example
Solve $4^x=512$.
Think: Since $4^x$ is an exponential expression, this is an exponential equation. In order to solve this equation, we will write both sides of the equation with the same base and then equate the indices.
Do: $4^x$ is already an exponential, so how can we write $512$ as an exponential? Notice that $512$ is the result of multiplying $2$ together nine times. That is, $512=2^9$. Now we have $4^x=2^9$, so both sides are exponentials but they have different bases. Can we write $4^x$ with a base of $2$? Notice that $4=2^2$. Using index laws:
$4^x = \left(2^2\right)^x$ (since $4=2^2$)
$= 2^{2x}$ (using the rule $\left(A^m\right)^n=A^{mn}$)
Now both sides of the equation have the same base:
$4^x = 512$
$2^{2x} = 2^9$ (since $4^x=2^{2x}$ and $512=2^9$)
$2x = 9$ (since the bases are the same, the indices are equal)
$x = \frac{9}{2}$ (dividing both sides of the equation by $2$)
So the solution is $x=\frac{9}{2}$, which we can verify by substituting into the original equation.
Reflect: Notice that the original question is equivalent to asking what index we raise $4$ to in order to get $512$. We could also solve this equation by noticing that $2=\sqrt{4}=4^{\frac{1}{2}}$ and $512=256\times2=4^4\times4^{\frac{1}{2}}=4^{\frac{9}{2}}$. This gives the equation $4^x=4^{\frac{9}{2}}$, which gives the same solution as above.
Summary
An exponential expression is an expression of the form $A^x$, where $A$ is a positive number and $x$ is a pronumeral. $A$ is called the base of the exponential. An exponential equation is an equation where one or both sides are exponential expressions. To solve an exponential equation, we can write both sides of the equation as exponentials with the same base, and then the indices will be equal.
#### Practice questions
##### Question 1
Solve $3^x=3^6$.
##### Question 2
Solve $3^y=\frac{1}{27}$.
##### Question 3
Solve $9^x=\sqrt[9]{9}$.
### Outcomes
#### VCMNA339
Explore the connection between algebraic and graphical representations of relations such as simple quadratic, reciprocal, circle and exponential, using digital technology as appropriate
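As a quick numerical cross-check of the worked example (an addition to the lesson), taking logarithms of both sides gives the same answer:

```python
import math

# 4**x = 512  =>  x = log(512) / log(4)
x = math.log(512) / math.log(4)
print(x)       # 4.5, i.e. 9/2
print(4 ** x)  # 512.0 (up to floating-point rounding)
```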
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824888706207275, "perplexity": 934.9377472886058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00292.warc.gz"}
http://mathhelpforum.com/pre-calculus/154923-hyperbola-equation-help-print.html
# Hyperbola equation help
• Sep 1st 2010, 01:20 AM Zora
Hyperbola equation help
Find the equation of a hyperbola in standard form with center (-2,5), one vertex at (-2,8), and the slope of one of its asymptotes equal to 3/5.
• Sep 1st 2010, 02:27 AM sa-ri-ga-ma
The distance between the center and a vertex is a, and the equation of an asymptote is y = (a/b)x. Find a and b from the given information. So the equation of the hyperbola is $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$
• Sep 1st 2010, 12:00 PM HallsofIvy
Well, actually, no. If the center is at $(x_0, y_0)$, the distance from the center to a vertex is a, and the equation of the asymptote is y= (b/a)x, then the equation of the hyperbola is $\frac{(x- x_0)^2}{a^2}- \frac{(y- y_0)^2}{b^2}= 1$
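Neither reply finishes the computation, and both quote the horizontal-axis form even though the given vertex lies directly above the center, so the transverse axis here is vertical. For completeness, a sketch of the remaining steps (an addition to the thread):

```latex
% center (-2,5), vertex (-2,8): vertical transverse axis, a = |8 - 5| = 3
% for a vertical hyperbola the asymptote slopes are +/- a/b, so 3/b = 3/5 and b = 5
\[
  \frac{(y-5)^2}{9} - \frac{(x+2)^2}{25} = 1
\]
```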
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.941606342792511, "perplexity": 868.638341422295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00015.warc.gz"}
http://mathoverflow.net/questions/47885/trace-of-the-atiyah-class-equals-chern-class
# trace of the Atiyah class equals Chern class

In several textbooks ("The Geometry of Moduli Spaces of Sheaves" by Huybrechts and Lehn, "Calcul differentiel et classes caracteristiques..." by Angeniol and Lejeune-Jalabert) it is mentioned that the trace of the p-th Atiyah class equals the p-th Chern class or the p-th component of the Chern character. I could not find a reference where this statement is proven. Thanks for any help.

- It is more an approach to the definition of the Chern character than a fact that needs to be proven. An old reference that uses the language of twisted cochains is "The trace map and characteristic classes for coherent sheaves", by O'Brian, Toledo, and Tong, Amer. J. Math. 103 (1981), pp. 225–252 (MR 82f:32021). They use this construction in further papers to prove a Riemann-Roch theorem in Hodge cohomology. A more modern reference is Caldararu's "The Mukai pairing, I: the Hochschild structure" (arXiv:math/0308079), see also arXiv:math/0308080 and arXiv:0707.2052. Here, the Chern character is presented in the language of derived categories and Fourier-Mukai transforms. -

thanks. this is more or less what I expected. by now I found an elementary proof by Atiyah himself in Complex analytic connections in fibre bundles. Trans. Amer. Math. Soc. 85 (1957), 181–207, proposition 12. this is what I was actually looking for. but thank you anyway for the reference from Caldararu. it's a really nice paper. –  Malte Wandel Dec 6 '10 at 17:20

I might be wrong, but it seems to me that the $p$-th Atiyah class does not have any reason to agree with the usual $p$-th Chern class unless the manifold under consideration is Kähler. Namely, if $X$ is not Kähler then for a holomorphic vector bundle $E\to X$, $c_p(E)\in H^{2p}(X)$ and $at_p(E)\in H^{p}(X,\Omega^p_X)$ live in different spaces. The point is that $c_p(E)$ can be defined as the class of $tr(R^p)$, where $R$ is the curvature of an hermitian connection on $E$, while $at_p(E)$ can be defined as the class of $tr(R_{1,1}^p)$, where $R_{1,1}$ is the $(1,1)$-part of the curvature of a $(1,0)$-connection on $E$. The point is that if $X$ is Kähler then there exists an hermitian $(1,0)$-connection with curvature of type $(1,1)$. The relation between the Atiyah classes and the Chern classes can be made through the Hodge-to-de Rham spectral sequence. So, I think that the Chern classes you are talking about are not the usual (i.e. topological) ones, but the Chern classes in Hodge cohomology. Then they coincide with the Atiyah classes almost by definition (by the way, there is a very nice paper of Grothendieck on Chern classes in Hodge cohomology). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926668643951416, "perplexity": 288.13125924036007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443062.21/warc/CC-MAIN-20141017005723-00003-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/quick-question-about-oribtal-velocities.623358/
# Quick question about Orbital Velocities

1. Jul 25, 2012

### ZedCar

1. The problem statement, all variables and given/known data

I was reading in a book a little section about orbital velocities. It states: For elliptical orbits, consider a body moving from point A to point B. The total work done by the force on the body is given by W = KE (for B) - KE (for A)

2. Relevant equations

3. The attempt at a solution

Since W=Fd, does the KE simply represent the kinetic energy? Thank you.

Last edited: Jul 25, 2012

2. Jul 25, 2012

### tiny-tim

Hi ZedCar! Yes, the work energy theorem says that the change in energy equals the work done. Since the only energy that changes for an orbiting body is the kinetic energy, and the only force is gravity, that means the change in kinetic energy equals the work done by gravity. (We ignore the gravitational potential energy, since it is defined as minus the work done by a conservative force, such as gravity … and we cannot count it twice! )
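To make tiny-tim's point concrete, here is a small numerical sketch (an editorial illustration with made-up orbital numbers, not from the thread). Using the vis-viva relation $v^2 = GM(2/r - 1/a)$ for the speeds at two radii on an ellipse, the change in kinetic energy matches the work done by gravity, $W = GMm(1/r_B - 1/r_A)$:

```python
G, M, m = 6.674e-11, 5.972e24, 1000.0   # SI units; Earth-mass central body, 1000 kg satellite (assumed values)
a = 1.0e7                                # semi-major axis of the ellipse [m]
rA, rB = 1.2e7, 0.8e7                    # radii at points A and B [m]

def v2(r):
    # vis-viva: speed squared on an ellipse of semi-major axis a
    return G * M * (2.0 / r - 1.0 / a)

dKE = 0.5 * m * (v2(rB) - v2(rA))        # KE(B) - KE(A)
W   = G * M * m * (1.0 / rB - 1.0 / rA)  # work done by gravity = -(change in potential energy)
print(dKE, W)                            # the two agree; the 1/a terms cancel in the difference
```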
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.941169798374176, "perplexity": 726.6076084270569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00286.warc.gz"}
https://brilliant.org/problems/square-your-limit-problem/
# Square your limit problem

Calculus Level 4

$\large \lim \limits_{n\to \infty }\sqrt[n]{\dfrac{( 2n) ! }{n!\, n^{n}} }$

If the value of the limit above equals $$A \times e^B$$ for integers $$A$$ and $$B$$, find the value of $$A\times B$$.
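The problem page gives no solution; a standard route (an editorial sketch, not from the original page) is Stirling's formula $n! \sim \sqrt{2\pi n}\,(n/e)^n$:

```latex
\frac{(2n)!}{n!\,n^n}
\sim \frac{\sqrt{4\pi n}\,\left(\frac{2n}{e}\right)^{2n}}{\sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n} n^{n}}
= \sqrt{2}\,\left(\frac{4}{e}\right)^{n},
\qquad
\lim_{n\to\infty}\sqrt[n]{\frac{(2n)!}{n!\,n^n}} = \frac{4}{e},
```

so $A=4$, $B=-1$, and $A\times B=-4$.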
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527198433876038, "perplexity": 1519.2364312036718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825900.44/warc/CC-MAIN-20171023111450-20171023131450-00397.warc.gz"}
http://mathhelpforum.com/number-theory/11140-proving-irrationality-print.html
# Proving irrationality

• February 4th 2007, 06:18 PM lyla
Proving irrationality
Hi, Can anyone help with this one: Establish the following fact: √p is irrational for any prime p.
• February 4th 2007, 06:52 PM ThePerfectHacker

Quote:

Originally Posted by lyla
Hi, Can anyone help with this one: Establish the following fact: √p is irrational for any prime p.

Euclid's proof.
--------------
Assume $\sqrt{p}=a$ for a positive integer $a$. By definition this is equivalent to saying $p=a^2$. Now the prime decomposition of $a^2$ has an even number of prime factors because of the square, while $p$ does not. By uniqueness of factorization this is impossible.

Pythagoras' proof.
-------------------
Assume that $\sqrt{p}=\frac{n}{m}$ where $\frac{n}{m}$ is a reduced fraction, meaning $n$ and $m$ have no common factors. Then, $p=\frac{n^2}{m^2}$, so $m^2p=n^2$. The left hand side is divisible by $p$, hence so is the right hand side; that is, $p$ divides $n^2$. But then $n$ itself is divisible by $p$ by properties of prime numbers. That is, $n=pk$. Substitute: $m^2p=k^2p^2$, so $m^2=k^2p$. But then the left hand side is divisible by $p$; that is, $p$ divides $m^2$. But then $p$ divides $m$. Hence $n$ and $m$ have a common factor, contrary to assumption. (This is also a similar approach to Fermat's principle of infinite descent.)

Eisenstein proof.
-----------------
The polynomial $x^2-p\in \mathbb{Z}[x]$ satisfies the conditions of the Eisenstein irreducibility criterion for the prime $p$. Thus it is irreducible over $\mathbb{Q}$, and hence has no roots in $\mathbb{Q}$.

Rational roots proof.
----------------
The only possible rational zeros of the polynomial $x^2-p$ are $\pm 1,\pm p$. None of these work. Thus $\sqrt{p}$ is irrational.
• February 5th 2007, 03:35 AM lyla
I have to thank you very much for the help.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.875359833240509, "perplexity": 516.0883024098509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258950570.93/warc/CC-MAIN-20160723072910-00195-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/39210/solve-in-positive-integers-n-mm1/54072
# Solve in positive integers $n!=m(m+1)$

Does anybody know a solution to this problem? (Sorry, I missed one summand in the previous post.)

- $3!=2\cdot3$ ? – Joel David Hamkins Sep 18 '10 at 10:39
- I can add one more $2!=1\cdot 2$ – Alexey Ustinov Sep 18 '10 at 10:44
- Before someone posts an elementary solution, note that the abc conjecture implies finitely many solutions, since $rad(n!) = \prod_{p<n} p \sim e^n$ and $(e^n)^{1+\epsilon} < \sqrt{n!}$. – Dror Speiser Sep 18 '10 at 14:56
- If $n$ is a solution then $4n!+1$ is a square, but it looks like if $n\ne 2,3$, $4n!+1$ is not divisible by a square of a prime. – Mark Sapir Sep 18 '10 at 15:50
- This is very similar to Brocard's problem, which is unresolved... en.wikipedia.org/wiki/Brocard%27s_problem – Byron Schmuland Sep 18 '10 at 15:54

I'm pretty sure this is open. As suggested by Brocard's problem, it is interesting to investigate the Diophantine equations $$n!=P(m)$$ for polynomials $P$. You can see the paper "On polynomial-factorial diophantine equations", by D. Berend and J.E. Harmse, where they make some advances on the problem and prove that this equation has finitely many solutions for many classes of polynomials (irreducible, with an irreducible factor of large degree, or with an irreducible factor to a large power). So by their results it is known that the equation $n!=m^r(m+1)$ has finitely many solutions if $r\geq 4$. But for $r\in \{1,2,3\}$ the problem is open. -
- Let us look for a probabilistic hint. Given $n\ge3$, define $N:=[\sqrt{n!}]$, the integer part of the square root $n!$ . Then $n!\in[N^2+1,\ldots,N^2+2N]$. The answer to the question is positive if and only if $n!=N^2+N$, because $m$ has to be $N$. At first glance, the probability of this event is $1/2N$. However, we know a priori that both $n!$ and $N(N+1)$ are even. Therefore this probability is $1/N\sim(n!)^{-1/2}$. Since the series $$\sum_{n=2}^{\infty}\frac{1}{\sqrt{n!}}$$ converges, I expect that the number of solutions to this problem be finite. Actually, I checked that the answer is No for $4\le n\le 10$. Then the number of solutions with $n\ge4$ can be estimated by the series $$\sum_{n=11}^{\infty}\frac{1}{\sqrt{n!}}$$ Because this number is very small (not greater than $10^{-3}$), I bet that there does not exist a solution $n\ge4$. This is the same kind of reasoning that is used to guess that there does not exist a prime number among Fermat numbers $F_m$ with $m\ge5$. - Did you mean $N = [\sqrt{n!}]$ in the first line ? – Chris Wuthrich Sep 20 '10 at 11:30 @Chris. Yes I did. Thanks for the correction. – Denis Serre Sep 20 '10 at 12:13 The probability that N(N+1) is divisible by 3 is not 1/3, then shouldn't this also be accounted for? – Dror Speiser Sep 20 '10 at 17:04 Probabilistic arguments are clear. They are even more powerful than $abc$-conjecture. – Alexey Ustinov Sep 21 '10 at 5:50 Assuming this is an heuristic argument, I still have a problem with this when using a serie: I feel you are making some regularity assumption that you do not specify. In fact there could be very very few $n$ - yet an infinite number- ( like one every triple exponential) that will not change the serie convergence and not by much. Could you specify that "regularity" or "independence". – Jérôme JEAN-CHARLES Oct 30 '10 at 23:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791933059692383, "perplexity": 253.48394144113283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408828.55/warc/CC-MAIN-20160624155008-00060-ip-10-164-35-72.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/94805/is-the-following-product-of-q-binomial-coefficients-a-polynomial-in-q
# Is the following product of $q$-binomial coefficients a polynomial in $q$?

$$\frac{\binom{n}{j}_q\binom{n+1}{j}_q \cdots\binom{n+k-1}{j}_q}{\binom{j}{j}_q\binom{j+1}{j}_q\cdots\binom{j+k-1}{j}_q}$$

where $n,j,k$ are non-negative integers.

- What do the $q$ subscripts on the binomial coefficients mean? –  Henning Makholm Dec 28 '11 at 22:39
- It means it is a q-binomial coefficient, and here is the link to its definition mathworld.wolfram.com/q-BinomialCoefficient.html –  sophie668 Dec 28 '11 at 22:44

If $n<j$ then this is $0$, which is certainly a polynomial in $q$. Otherwise, we have $$\frac{\binom{n}{j}_q\binom{n+1}{j}_q \cdots\binom{n+k-1}{j}_q}{\binom{j}{j}_q\binom{j+1}{j}_q\cdots\binom{j+k-1}{j}_q}$$ $$= \frac{([n]_q[n-1]_q\cdots[n-j+1]_q)\cdots([n+k-1]_q[n+k-2]_q\cdots[n+k-j]_q)}{([j]_q[j-1]_q\cdots[1]_q)\cdots([j+k-1]_q[j+k-2]_q\cdots[k]_q)}$$ where $[m]_q = 1 + q + \cdots + q^{m-1}=(q-\zeta_m)(q-\zeta_m^2)\cdots(q-\zeta_m^{m-1})$ and $\zeta_m$ is a primitive $m^{th}$ root of unity. These linear factors are irreducible over $\mathbb{C}$ and since $\mathbb{C}[q]$ is a unique factorization domain we can only write $$\frac{([n]_q[n-1]_q\cdots[n-j+1]_q)\cdots([n+k-1]_q[n+k-2]_q\cdots[n+k-j]_q)}{([j]_q[j-1]_q\cdots[1]_q)\cdots([j+k-1]_q[j+k-2]_q\cdots[k]_q)}$$ as a polynomial in $q$ over $\mathbb{C}$ if the factors $(q-\zeta_m^l)$ in the denominator cancel with factors in the numerator. Note that $\zeta_{im}^i = \zeta_m$ (this ignores our choice of primitive root, but we can make our choice arbitrarily). The factor $(q-\zeta_j)$ of $[j]_q$ cancels with the factor $(q-\zeta_{ij}^i)$ of $[ij]_q$, with $ij$ chosen as the unique multiple of $j$ among $n,n-1,\ldots,n-j+1$. Similarly, the factor $(q-\zeta_j^2)$ cancels with $(q-\zeta_{ij}^{2i})$, and repeating this procedure we cancel all the factors of $[j]_q$ with factors of $[ij]_q$. We can pick a multiple $i'(j-1)$ of $j-1$ among $\{n,n-1,\ldots,n-j+1\}-\{ij\}$, and similarly a multiple of $j-2$ among $\{n,n-1,\ldots,n-j+1\}-\{ij,i'(j-1)\}$, and by repeating this procedure cancel all factors of $[j]_q[j-1]_q\cdots[1]_q$ with factors of $[n]_q[n-1]_q\cdots[n-j+1]_q$. These are easy factors to cancel, because some multiple of $j$ is among $n+1,n,\ldots,n-j+1$. But what about the factors of $[j+1]_q$? Well, we have $i'(j+1)=i'j+i'$ among $n,n-1,\ldots,n-j+2$ for some $i'$, as the gap between $i'(j+1)$ and $(i'+1)(j+1)$ is $j+1$, and so we can cancel its factors with those of $[n-j+1]_q$ or of one of $[n+1]_q,\ldots,[n-j+2]_q$. But wait, we've already used $[n-j+1]_q$ to cancel factors in the denominator! True, but if $n-j+1$ is a multiple of $j+1$ then we must have used it to cancel factors of $[\frac{j+1}{2}]_q$ or smaller, so we can still cancel at least half the factors of $[j+1]_q$, and the other half will involve roots of unity of smaller order and so can be canceled with other leftovers among $[n]_q,\ldots,[n-j+1]_q$. The same should extend to $[j]_q,[j-1]_q,\ldots,[2]_q$. Iterating this argument should allow cancellation of all the factors in the denominator.
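The claim is easy to probe experimentally. Below is a small sympy sketch (an editorial addition, not from the thread) that builds the ratio from $[m]_q=\frac{q^m-1}{q-1}$ and checks that it cancels to a polynomial in $q$:

```python
from sympy import symbols, cancel, prod

q = symbols('q')

def qint(m):
    # [m]_q = 1 + q + ... + q^(m-1) = (q^m - 1)/(q - 1)
    return cancel((q**m - 1) / (q - 1))

def qbinom(n, j):
    # q-binomial coefficient as a ratio of products of q-integers
    num = prod(qint(n - i) for i in range(j))
    den = prod(qint(i + 1) for i in range(j))
    return cancel(num / den)

def ratio(n, j, k):
    num = prod(qbinom(n + i, j) for i in range(k))
    den = prod(qbinom(j + i, j) for i in range(k))
    return cancel(num / den)

for (n, j, k) in [(4, 2, 2), (5, 3, 3), (6, 2, 4)]:
    r = ratio(n, j, k)
    print((n, j, k), r.is_polynomial(q), r)   # True in each tested case
```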
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8543401956558228, "perplexity": 110.58415894515073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066275.44/warc/CC-MAIN-20150827025426-00337-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/21677/three-non-coplanar-lines-in-the-3d-space-always-have-a-fourth-one-that-intersect
# Three non-coplanar lines in the 3D-space always have a fourth one that intersects them all?

If I have three lines $a,$ $b$ and $c$ in the euclidean 3D space, which are pairwise non-coplanar, is there always a fourth line $x$ that intersects these three lines?

- Label the three lines $\ell_1$, $\ell_2$, and $\ell_3$. They cannot intersect, for then they would be co-planar. Since lines $\ell_1$ and $\ell_2$ do not intersect, they have points where they are closest, and the line $m$ connecting those two points will be perpendicular to both $\ell_1$ and $\ell_2$. In fact, perpendicular to this line $m$ are two parallel planes: $\mathcal{P}_1$ which contains $\ell_1$ and $\mathcal{P}_2$ which contains $\ell_2$. Now pick any point $A$ of line $\ell_3$ not in either plane. The point $A$ together with the line $\ell_2$ defines a plane $\mathcal{Q}$ that contains them. This plane, since it intersects plane $\mathcal{P}_2$, must intersect the parallel plane $\mathcal{P}_1$. Moreover, the line $r$ formed by the intersection of planes $\mathcal{Q}$ and $\mathcal{P}_1$ is parallel to line $\ell_2$. Note that lines $r$ and $\ell_1$ are both on $\mathcal{P}_1$ and cannot be parallel. If they were, $\ell_2$ would also be parallel to $\ell_1$, and $\ell_1$ and $\ell_2$ would, therefore, be co-planar. So, lines $r$ and $\ell_1$ must intersect at some point $B$. Now connect points $A$ and $B$ with a line: it passes through $A$ on $\ell_3$ and through $B$ on $\ell_1$, and since it lies in the plane $\mathcal{Q}$ containing $\ell_2$ and is not parallel to $\ell_2$ (it meets $r \parallel \ell_2$ at $B$ without lying on $r$, because $A \notin \mathcal{P}_1$), it also intersects $\ell_2$. -

This proof is the easiest to understand for me. Thank you. – FUZxxl Feb 23 '11 at 16:28

The answer is yes. Three lines in general position determine a unique hyperboloid of one sheet (since they determine a ruling, or a one parameter family of lines, on the hyperboloid). But there is a second ruling of the same hyperboloid; every line in this ruling meets the original three lines. So Ross is correct, there is a one parameter family of solutions (in the shape of a hyperboloid). If I'm ever not up to my eyeballs in homework, I'll make a picture of this and post it.

NOTE: Questions of this sort can be addressed in generality using the Schubert calculus, which is roughly a part of enumerative geometry. -

I think so, but have not finished the proof. I would argue that if you pick two of the lines you can find an affine transformation to make them (x,0,0) and (0,y,1), where x and y parameterize the lines. The general line through these points is (x-xt,yt,t), where t parameterizes this line. The third line is (u+u's, v+v's, w+w's). So to get an intersection, you have three equations in four unknowns: x,y,t,s. There ought to be a one-parameter family of solutions unless something goes wrong. You have to show that non-coplanar is enough to make sure nothing goes wrong.

Added: the equations are \begin{align} x-xt&=u+u's \\ yt&=v+v's \\ t&=w+w's \end{align} where $u,u',v,v',w,w'$ are the fixed parameters of the third line. So you can pick $s$ at will, calculate $t$ from $w, w'$, calculate $y$ from $v, v'$ as long as $t \neq 0$, and calculate $x$ from $u, u'$ as long as $t \neq 1$. This gives only two values of $s$ that are not acceptable. So we have the promised one-dimensional family of solutions. -

Actually, this was not homework or anything like that. I thought about this in the context of another problem and found out that this can be false if the lines are allowed to be coplanar. But in this case I can't think of any reason why there should be no such fourth line. This is a good idea for a proof. I'm going to think about it.
– FUZxxl Feb 12 '11 at 15:44
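Ross's parameterization is easy to play with symbolically. Here is a small sympy sketch (an editorial addition, with assumed sample values for the third line) that solves the three equations, exhibiting the promised one-parameter family indexed by $s$:

```python
from sympy import symbols, solve, Rational

x, y, t, s = symbols('x y t s')

# sample third line (u,v,w) + s*(u',v',w'): values chosen only for illustration
u, v, w = 1, 2, 3
up, vp, wp = 4, 5, 6

eqs = [x - x*t - (u + up*s),   # first coordinate
       y*t - (v + vp*s),       # second coordinate
       t - (w + wp*s)]         # third coordinate

sol = solve(eqs, [x, y, t], dict=True)[0]
print(sol)  # x, y, t as rational functions of the free parameter s
print({k: e.subs(s, Rational(1, 2)) for k, e in sol.items()})  # one concrete transversal
```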
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9144440293312073, "perplexity": 172.72525353372788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00024-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/star-formation.94681/
# Star formation

1. Oct 14, 2005

### vincentm

I'm reading up on star formation, and from what I've understood so far, protogalactic clouds with density fluctuations cool and then fragment, after which they fragment again into subfragments. Now, does the density in these individual subfragments increase the temperature enough to start nuclear fusion? And what is the temperature at which fusion can start? I know that an increase in temperature alone isn't enough to start fusion. So what does happen to make the temperature increase, besides the increasing density and the collapse of the cloud?

2. Oct 14, 2005

### Labguy

A decent synopsis of the basics of protostar collapse can be found at http://www.astronomynotes.com/evolutn/s3.htm (and following pages). But it doesn't mention temperature, which is about 12 to 14 million K. Any protostar with less than ~0.079 solar masses will not have enough mass for H fusion to start, so we have a brown dwarf instead of a star. Either way, the high temperatures in a new stellar core are caused only by gravity compressing the protostar material at the center.

Last edited: Oct 14, 2005

3. Oct 14, 2005

### vincentm

Thanks labguy.

4. Oct 14, 2005

### hellfire

Fragmentation requires energy dissipation. This phase of the collapse of a cloud is called isothermal collapse. As soon as the cloud cannot cool efficiently anymore because it becomes opaque due to the high density, fragmentation stops (the Jeans mass does not decrease anymore) and the temperature increases. This phase is called adiabatic collapse and lasts until there is enough radiation pressure to stop the collapse.

5. Oct 15, 2005

### SpaceTiger Staff Emeritus

You're right, there's a density dependence for nuclear burning as well. Does this mean that it's wrong for Labguy and others to give you a temperature range for nuclear burning? Nope. Well, not for astronomy purposes anyway. The basic reason that the process occurs within a small range of temperatures is that the temperature dependence is very steep. For the proton-proton chain, for example, it goes roughly as:

$$\epsilon \propto \rho T^4$$

while another hydrogen burning process, the CNO cycle, goes

$$\epsilon \propto \rho T^{17}$$

That means that you can vary the density of the stellar interior quite a bit, but the onset of nuclear burning will still occur at roughly the same temperature. As stars move on to burn heavier elements, the temperature dependences become even steeper.

The basic reason that the collapse leads to an increase in temperature is that you're releasing gravitational potential energy. It's not too different from the reason that a dropping ball increases its speed as it approaches the ground. Gravitational potential energy gets converted into kinetic energy. In the collapsing star, it's the kinetic energy of the molecules -- and, therefore, the temperature -- that's increased.

Of course, things are not always this simple. Sometimes the energy can be released via other means (like radiating light), leaving the temperature constant as the cloud collapses. This is the "isothermal" phase that hellfire was talking about. However, the radiation can only escape as long as the material it's passing through is of low enough density that it's not absorbed. As the cloud collapses, its density increases and eventually it's capable of absorbing the light before it escapes. This then allows the temperature to rise and the cloud transitions to the "adiabatic" (constant heat) phase, again mentioned by hellfire.
These are (relatively) simple cases and you can probably imagine that real stars are much more complicated than that. Nevertheless, it's always good to get a grasp of the conceptual picture before trying to understand the details.

Last edited: Oct 15, 2005
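SpaceTiger's point about steep temperature dependence can be made quantitative with a toy calculation (an editorial illustration, not from the thread): if ignition requires a fixed energy-generation rate, then $T \propto (\epsilon_0/\rho)^{1/\nu}$, so even large density changes barely move the ignition temperature.

```python
# How much does a factor-of-10 density change shift the ignition temperature
# if epsilon ~ rho * T**nu must reach a fixed threshold?
for name, nu in [("p-p chain", 4), ("CNO cycle", 17)]:
    shift = 10 ** (1 / nu)   # T scales as rho**(-1/nu)
    print(f"{name}: T changes by a factor of {shift:.2f}")
# p-p chain: ~1.78x ; CNO cycle: ~1.15x -> ignition pinned near one temperature
```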
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029377102851868, "perplexity": 957.1881432532587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190753.92/warc/CC-MAIN-20170322212950-00125-ip-10-233-31-227.ec2.internal.warc.gz"}
https://ageconsearch.umn.edu/record/55190
### Abstract

This study identifies the effects of free trade agreements at the multilateral (WTO) and regional (FTAA and MERCOSUR-EU) levels on the bovine meat markets in several regions. In order to evaluate these effects, we used a spatial allocation model, presented as a mixed complementarity problem (MCP). The study identifies the changes in production and consumption levels, as well as the changes in producer and consumer surplus, in four possible scenarios. In general, gains to bovine meat producers in the MERCOSUR countries are expected in all scenarios, although these gains are greater when we simulate a multilateral trade liberalization, with or without the elimination of subsidies.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816385269165039, "perplexity": 3666.8467843466788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00046.warc.gz"}
https://thomaspowell.com/category/geek-humor/
## Entropy of iPhone headphone cables (an excuse to experiment with WP-LaTeX)

Why is it that iPhone earbuds seem to tangle with so many more hopeless tangles than any other cables?

$\displaystyle \frac{dS_{iPhoneCable}}{dt} \gg \frac{dS_{genericCables}}{dt}$

Of course, this whole exercise was an excuse to play with LaTeX and the WP-LaTeX plugin. I also managed to find a nice LaTeX cheat sheet in the process.

## Everything in math comes down to calculus

Everything else is a generalization. Take, for example, the formula for the area of a rectangle:

$Area = lw$

In reality, this is the result of the equation:

$\int_{0}^{w} l\, dx = lw$

Where the length of the rectangle lies along the y-axis, and the width along the x-axis.

See? Isn't that simple, and a much more accurate representation of the area of a rectangle? I thought so.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.851626992225647, "perplexity": 854.5819576748422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937193.1/warc/CC-MAIN-20180420081400-20180420101400-00639.warc.gz"}
https://nyuscholars.nyu.edu/en/publications/boosting-background-suppression-in-the-next-experiment-through-ri
# Boosting background suppression in the NEXT experiment through Richardson-Lucy deconvolution

The NEXT collaboration

Research output: Contribution to journal › Article › peer-review

## Abstract

Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of ∼ 10²⁷ yr, requiring suppressing backgrounds to < 1 count/tonne/yr. For this, any extra background rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two Bragg peak dense ionization regions at the opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of the topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ∼ 5 when reconstructing electron-positron pairs in the ²⁰⁸Tl 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterráneo de Canfranc, with 72% signal efficiency. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ∼ 10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which allows reversing the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV e⁻e⁺ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.

Original language: English (US)
Article number: 146
Journal: Journal of High Energy Physics
Volume: 2021
Issue: 7
DOI: https://doi.org/10.1007/JHEP07(2021)146
Publication status: Published - Jul 2021

## Keywords

• Dark Matter and Double Beta Decay (experiments)

## ASJC Scopus subject areas

• Nuclear and High Energy Physics
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.814614474773407, "perplexity": 3960.5624103053456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00328.warc.gz"}
https://dsp.stackexchange.com/questions/9488/calculating-filter-parameters-from-system-response
# Calculating filter parameters from system response

In trying to answer another question, I got stuck applying what I thought would be a valid way to answer it: using the system function. Here's what I did:

First, I put the recurrence from the question, Y[i] = Y[i-1] + ALPHA * ( X[i] - Y[i-1] ), into DF I form: Y[i] = ALPHA * X[i] - (ALPHA-1) * Y[i-1]

This tells me that the system function is: H(z) = ALPHA / ( 1 + (ALPHA-1) z^-1 )

Substituting z=e^(jw) and |H(z)| = .5 (or .707) should allow me to calculate ALPHA, but I only get complex values for ALPHA. What am I doing wrong?

- Note that you're solving for $|H(z)| = 0.5$. If you take the magnitude of both sides of the equation, then there won't be any complex numbers any more. – Jason R Jun 6 '13 at 16:06
- It should be $|H(e^{j\theta})|^2=0.5$, i.e. the 3dB cut-off and $z$ on the unit circle. – Matt L. Jun 6 '13 at 16:23

Just math I think. Let's substitute g = ALPHA - 1, which makes the algebra a little easier. Then you need to solve $$\left | \frac{1+g}{1+g\cdot e^{-j\omega }} \right |^{2}=0.5$$ That means $$\frac{1+g}{1+g\cdot e^{-j\omega }}\cdot \frac{1+g}{1+g\cdot e^{+j\omega }} = 0.5$$ Multiplying this out yields $$\frac{(1+g)^{2}}{1+2\cdot g\cdot \cos(\omega) + g^{2} }= 0.5$$ We can put this into standard quadratic equation form: $$g^{2}+(4-2\cdot \cos(\omega))\cdot g + 1 = 0$$ Solve, plug in the numbers, and you get $$g_{1} = -1.0744,\quad g_{2} = -0.93074$$ $$\alpha = 0.069262$$
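A quick numerical sketch of the final step (an editorial addition, not from the thread). The cutoff frequency behind the quoted numbers isn't shown above; taking $\omega \approx 0.0719$ rad/sample approximately reproduces them. The stable choice is the root with $|g|<1$, since the pole sits at $1-\alpha=-g$:

```python
import numpy as np

w = 0.0719                          # assumed cutoff [rad/sample]; not given in the thread
g1, g2 = np.roots([1.0, 4.0 - 2.0*np.cos(w), 1.0])
g = g1 if abs(g1) < 1 else g2       # keep the root inside the unit circle
alpha = 1.0 + g
print(g1, g2, alpha)                # ~ -1.074, -0.931, alpha ~ 0.069

# sanity check: |H(e^{jw})|^2 should be 0.5 at the cutoff
H = alpha / (1.0 + g * np.exp(-1j * w))
print(abs(H)**2)                    # ~ 0.5
```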
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9153450131416321, "perplexity": 734.3792717253926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481281.1/warc/CC-MAIN-20191205164243-20191205192243-00360.warc.gz"}
https://testbook.com/question-answer/in-the-circuit-shown-a-5-v-zener-diode-is-used-to--6030d9abb553490aab124cea
# In the circuit shown, a 5 V Zener diode is used to regulate the voltage across load R0. The input is an unregulated DC voltage with a minimum value of 6 V and a maximum value of 8 V. The value of RS is 6 Ω. The Zener diode has a maximum rated power dissipation of 2.5 W. Assuming the Zener diode to be ideal, the minimum value of R0 is _________ Ω.

This question was previously asked in GATE EE 2021 Official Paper.

## Answer (Detailed Solution Below)

29 - 31

## Detailed Solution

Concept: The working of the Zener regulator is explained through the four cases below (the accompanying figures are not reproduced here).

Case 1: Vi and Ro are fixed
$$I_s=\frac{{(V_i-V_z)}}{{R_s}}$$   → fixed
$$I_o=\frac{{V_o}}{{R_o}}=\frac{{V_z}}{{R_o}}$$   → fixed
Iz = Is - Io  → fixed

Case 2: Vi is variable, Ro is fixed
$$I_{smin}=\frac{{(V_{imin}-V_z)}}{{R_s}},\;\;\;\;I_{smax}=\frac{{(V_{imax}-V_z)}}{{R_s}}$$
$$I_o=\frac{{V_o}}{{R_o}}=\frac{{V_z}}{{R_o}}$$  → fixed
Izmin = Ismin - Io
Izmax = Ismax - Io

Case 3: Vi is fixed, Ro is variable
$$I_s=\frac{{(V_i-V_z)}}{{R_s}}$$  → fixed
$$I_{omax}=\frac{{V_z}}{{R_{omin}}},\;\;\;I_{omin}=\frac{{V_z}}{{R_{omax}}}$$
Izmin = Is - Iomax
Izmax = Is - Iomin

Case 4: Vi and Ro are variable
Izmin = Ismin - Iomax
Izmax = Ismax - Iomin

Calculation:

Given, Zener voltage Vz = 5 V.

For maximum power dissipation in the Zener: Power = Vz Izmax, so 2.5 = 5 × Izmax and Izmax = 0.5 A.

Since the given diode is ideal, Izmin = 0 A.

Output resistance Ro = Vz / Io. Since Vz is constant, to get the minimum value of Ro, Io must be maximum.

From the Case 4 equations, Ismin = Iomax + Izmin.

The current drawn from the source will be minimum when the input voltage is minimum. Minimum input voltage Vmin = 6 V:

$$I_{smin}=\frac{6-5}{6}=\frac{{1}}{{6}}~A=I_{omax}$$

So the minimum output resistance can be calculated as

$$R_{omin}=\frac{{5}}{{\frac{{1}}{{6}}}}=30\;\Omega$$
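The boundary cases are easy to verify numerically. This small sketch (an editorial addition) checks that $R_0 = 30\ \Omega$ keeps the Zener within its ratings over the whole input range:

```python
Vz, Rs, Pmax = 5.0, 6.0, 2.5
Iz_max = Pmax / Vz                 # 0.5 A from the power rating
Ro = 30.0

for Vi in (6.0, 8.0):              # worst cases of the unregulated input
    Is = (Vi - Vz) / Rs            # series-resistor current
    Io = Vz / Ro                   # load current
    Iz = Is - Io                   # Zener current
    print(Vi, Is, Io, Iz, 0.0 <= Iz <= Iz_max)
# Vi = 6 V gives Iz = 0 (the ideal-diode minimum); Vi = 8 V gives Iz = 1/3 A < 0.5 A
```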
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8534107208251953, "perplexity": 2725.5748788968413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00588.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/15057/measuring-in-the-computational-basis-in-the-single-qubit-gate-qft-implementation
# Measuring in the computational basis in the single qubit gate QFT implementation

I've come across this paper about a single-qubit-gate-only QFT implementation. In the paper it is claimed that measuring a qubit after applying the Hadamard gate (it isn't called a Hadamard gate in the paper, but its description matches that of the Hadamard gate) is no different from measuring before the gate. Now what I assumed is that the measurement before the gate would have been made in the $$\{|0⟩,|1⟩\}$$ basis and the measurement after the gate would have been made in the $$\{|+⟩,|-⟩\}$$ basis. However, I came across these conflicting sources about the term "Computational Basis" regarding measurement: So my questions are:

1. Does saying "measure in the computational basis" mean that in whatever basis your qubits are in you measure them according to that basis to get classical bits, or does it always refer to measuring in the {|0⟩,|1⟩} basis?

2. If it is the latter, then how does the claim hold when we know that $$H|0\rangle=\frac{1}{\sqrt 2}(|0⟩+|1⟩)$$ and $$H|1\rangle=\frac{1}{\sqrt 2}(|0⟩-|1⟩)$$, so the probabilities of measuring an outcome after H should be $$\frac{1}{2}$$ regardless of the original state of the qubit?

Measuring in the 'Computational Basis' always means to measure in the $$\{|0\rangle,|1\rangle\}^n$$ basis for $$n$$ qubits.

As for your second question, I feel that you misinterpreted the author. It is certainly true that on applying the $$H$$ gate to $$|0\rangle$$ or $$|1\rangle$$ you would get the states $$|0\rangle$$ and $$|1\rangle$$ with equal probabilities. But the underlying idea is this: let $$q$$ be a qubit in the circuit given in the paper, and suppose $$m$$ gates are applied to $$q$$. For some $$k\le m$$, observe that if only the first $$k$$ gates actually transform the state of $$q$$ (by some rotation), and for the remaining $$m-k$$ gates $$q$$ just acts as a control qubit, then the last $$m-k$$ gates cannot change the state of $$q$$. So, if the probability of obtaining $$|0\rangle$$ on measuring the qubit $$q$$ was $$p$$ before applying the last $$m-k$$ gates, then even after applying those $$m-k$$ gates, the probability of obtaining $$|0\rangle$$ on measuring $$q$$ remains $$p$$. The author then uses this fact, employing the measured outcomes to control the other gates.

An example might help. Take, for instance, the following three-qubit circuit [circuit image not reproduced: an $$H$$ on $$q_0$$, followed by CNOT gates from $$q_0$$ onto $$q_1$$ and $$q_2$$]:

In this circuit, $$q_0$$ gets rotated in the first gate and just acts as a control qubit for the next two gates. Now notice that if you were to measure $$q_0$$ just after the first gate, you would get $$|0\rangle$$ and $$|1\rangle$$ with equal probabilities. But if you measure $$q_0$$ after all the gates have been applied, the probabilities of obtaining $$|0\rangle$$ and $$|1\rangle$$ still remain the same.

Next, notice that the output of this circuit is $$|\psi\rangle = \frac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$$. So you get the states $$|000\rangle$$ and $$|111\rangle$$ with equal probabilities on measuring all the qubits of the circuit after all the gates have been applied.

Now, on the other hand, let us first measure the qubit $$q_0$$ just after $$H$$, then apply an $$X$$ gate to $$q_1$$ and $$q_2$$ only if the measured outcome of $$q_0$$ is 1, and then measure the qubits $$q_1$$ and $$q_2$$. Now, if the measurement of $$q_0$$ yields $$|0\rangle$$ then the other two gates have no effect on $$q_1$$ and $$q_2$$. So after the final measurement, the complete measured state would be $$|000\rangle$$.
Similarly, if the measurement of $$q_0$$ yields $$|1\rangle$$ then the other two gates flip the qubit states in $$q_1$$ and $$q_2$$. So after the final measurement, the complete measured state would be $$|111\rangle$$.

Now, if you repeat this procedure $$d$$ times, then for an expected $$d/2$$ of the runs you obtain $$|0\rangle$$ during the first measurement and so obtain $$|000\rangle$$ as the final measurement, and for an expected $$d/2$$ of the runs you obtain $$|1\rangle$$ during the first measurement and so obtain $$|111\rangle$$ as the final measurement.

Now, see that in both cases you obtain the same probability distribution. But in the first case we use two two-qubit gates, while in the second case we use no two-qubit gates.

So irrespective of whether you measure a qubit after all the gates in a circuit or whether you measure it just after all the rotations on it have been applied, the output probability distribution turns out to be the same. This is the idea that the author tries to convey and use in constructing the single-qubit-gate-only QFT implementation in the paper.
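Since the circuit image in the answer above does not survive extraction, this is a minimal Qiskit sketch (an editorial addition) of the described three-qubit circuit, producing $$\frac{1}{\sqrt 2}(|000\rangle + |111\rangle)$$:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)
qc.h(0)        # rotate q0 into (|0> + |1>)/sqrt(2)
qc.cx(0, 1)    # from here on, q0 only acts as a control...
qc.cx(0, 2)    # ...so its measurement statistics are already fixed after the H

print(Statevector(qc))  # amplitudes 1/sqrt(2) on |000> and |111>
```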
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 66, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9769344925880432, "perplexity": 186.68710821032496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150134.86/warc/CC-MAIN-20210724063259-20210724093259-00314.warc.gz"}
https://mathlake.com/composition-relations
# Composition of Relations

We assume that the reader is already familiar with the basic operations on binary relations such as the union or intersection of relations. Now we consider one more important operation called the composition of relations.

## Definition

Let $$A, B$$ and $$C$$ be three sets. Suppose that $$R$$ is a relation from $$A$$ to $$B,$$ and $$S$$ is a relation from $$B$$ to $$C.$$ The composition of $$R$$ and $$S,$$ denoted by $$S \circ R,$$ is the binary relation from $$A$$ to $$C$$ that relates $$a$$ to $$c$$ if and only if there is a $$b \in B$$ such that $$aRb$$ and $$bSc.$$ Formally the composition $$S \circ R$$ can be written as

$S \circ R = \left\{ {\left( {a,c} \right) \mid \exists b \in B: {aRb} \land {bSc} } \right\},$

where $$a \in A$$ and $$c \in C.$$

The composition of binary relations is associative, but not commutative. To denote the composition of relations $$R$$ and $$S,$$ some authors use the notation $$R \circ S$$ instead of $$S \circ R.$$ This is, however, inconsistent with the composition of functions where the resulting function is denoted by

$y = f\left( {g\left( x \right)} \right) = \left( {f \circ g} \right)\left( x \right).$

The composition of relations $$R$$ and $$S$$ is often thought of as their multiplication and is written as

$S \circ R = RS.$

## Powers of Binary Relations

If a relation $$R$$ is defined on a set $$A,$$ it can always be composed with itself. So, we may have

$R \circ R = {R^2},$

$R \circ R \circ R = {R^3},$

$\underbrace {R \circ R \circ \ldots \circ R}_n = {R^n}.$

## Composition of Relations in Matrix Form

Suppose the relations $$R$$ and $$S$$ are defined by their matrices $$M_R$$ and $$M_S.$$ Then the composition of relations $$S \circ R = RS$$ is represented by the matrix product of $$M_R$$ and $$M_S:$$

${M_{S \circ R}} = {M_{RS}} = {M_R} \times {M_S}.$

Recall that $$M_R$$ and $$M_S$$ are logical (Boolean) matrices consisting of the elements $$0$$ and $$1.$$ The multiplication of logical matrices is performed as usual, except Boolean arithmetic is used, which implies the following rules:

$0 + 0 = 0,\;\;1 + 0 = 0 + 1 = 1,\;\;1 + 1 = 1;$

$0 \times 0 = 0,\;\;1 \times 0 = 0 \times 1 = 0,\;\;1 \times 1 = 1.$

### Example:

Consider the sets $$A = \left\{ {a,b} \right\},$$ $$B = \left\{ {0,1,2} \right\},$$ and $$C = \left\{ {x,y} \right\}.$$ The relation $$R$$ between sets $$A$$ and $$B$$ is given by

$R = \left\{ {\left( {a,0} \right),\left( {a,2} \right),\left( {b,1} \right)} \right\}.$

The relation $$S$$ between sets $$B$$ and $$C$$ is defined as

$S = \left\{ {\left( {0,x} \right),\left( {0,y} \right),\left( {1,y} \right),\left( {2,y} \right)} \right\}.$

To determine the composition of the relations $$R$$ and $$S,$$ we represent the relations by their matrices:

${M_R} = \left[ {\begin{array}{*{20}{c}} 1&0&1\\ 0&1&0 \end{array}} \right],\;\;{M_S} = \left[ {\begin{array}{*{20}{c}} 1&1\\ 0&1\\ 0&1 \end{array}} \right].$

The matrix of the composition of relations $$M_{S \circ R}$$ is calculated as the product of matrices $$M_R$$ and $$M_S:$$

${M_{S \circ R}} = {M_R} \times {M_S} = \left[ {\begin{array}{*{20}{c}} 1&0&1\\ 0&1&0 \end{array}} \right] \times \left[ {\begin{array}{*{20}{c}} 1&1\\ 0&1\\ 0&1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {1 + 0 + 0}&{1 + 0 + 1}\\ {0 + 0 + 0}&{0 + 1 + 0} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 1&1\\ 0&1 \end{array}} \right].$

In roster form, the composition of relations $$S \circ R$$ is written as

$S \circ R = \left\{ {\left( {a,x} \right),\left( {a,y} \right),\left( {b,y} \right)} \right\}.$
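The worked example translates directly into code. This short Python sketch (an editorial addition, not part of the original article) composes the relations both as pair sets and as Boolean matrices:

```python
R = {('a', 0), ('a', 2), ('b', 1)}
S = {(0, 'x'), (0, 'y'), (1, 'y'), (2, 'y')}

# composition as pair sets: (a, c) is included whenever some b links them
SoR = {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}
print(sorted(SoR))   # [('a', 'x'), ('a', 'y'), ('b', 'y')]

# the same thing as a Boolean matrix product
A, B, C = ['a', 'b'], [0, 1, 2], ['x', 'y']
MR = [[int((a, b) in R) for b in B] for a in A]
MS = [[int((b, c) in S) for c in C] for b in B]
MSoR = [[int(any(MR[i][k] and MS[k][j] for k in range(len(B))))
         for j in range(len(C))] for i in range(len(A))]
print(MSoR)          # [[1, 1], [0, 1]]
```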
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995761513710022, "perplexity": 86.13334290050909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446709929.63/warc/CC-MAIN-20221126212945-20221127002945-00191.warc.gz"}
https://www.thejournal.club/c/paper/104119/
#### Network reconstruction via density sampling

##### Tiziano Squartini, Giulio Cimini, Andrea Gabrielli, Diego Garlaschelli

Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link densities. We then introduce our core technique for reconstructing both the topology and the link weights of the unknown network in detail. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8685478568077087, "perplexity": 623.3285814311682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358570.48/warc/CC-MAIN-20211128164634-20211128194634-00324.warc.gz"}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9509915113449097, "perplexity": 1262.379142853511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459875.44/warc/CC-MAIN-20151124205419-00333-ip-10-71-132-137.ec2.internal.warc.gz"}
http://techie-buzz.com/science/fermilab-no-particle-discovered.html
# Eureka Moment Claims Rejected: No New Particle Discovered At Fermilab

It is disappointing news for the particle physics community coming out of Fermilab, we're afraid. Fermilab has confirmed that the earlier bump seen in the data, presumed to be a new particle, is not significant enough to be considered a detection. We broke the news in emphatic fashion of a new particle discovered at Fermilab in an earlier post.

### Story So Far

We had reported that there was a bump found at about 145 GeV with a 5 GeV spread. Data acquired from proton-antiproton collisions with semi-leptonic dijet emissions showed a peak at 145 GeV. The curve was Gaussian in nature with a spread of 5 GeV on either side of the peak. Initial analysis showed that the curve had a three-sigma confidence level (more on confidence levels later). There was thus a strong possibility that a new particle was on the way, since no boson is known having a mass of 145 GeV. The new particle was named the Z' or the W' (Z-primed or W-primed) boson. The Standard Model, wildly successful in particle physics, did not predict this, and to fit this in would have required a serious rethinking of known physics. Physicists were naturally excited.

This detection was made at Fermilab at their CDF detector. Fermilab was the biggest particle accelerator until the Large Hadron Collider came onto the scene. It has been a major progressive force for particle physics over the last three decades, also serving to etch the American superiority in the particle physics arena. It is, however, expected to be closed down forever late this year. Data recovered from it over the years is still being analysed, and the analysis will continue for the next five years. One of the two detectors at Fermilab, the CDF, had detected the anomalous bump of our present interest.

### So What's Wrong?

There are two problems with the CDF data: it cannot be corroborated, and it falls outside the required confidence levels.

Problem 1: The other detector at Fermilab, named DZero, repeated the experiment, but failed to come up with any conclusive evidence of detection. The negative DZero result would definitely cast shadows over the CDF discovery. Scientists are now baffled as to how the two detectors, extremely alike, could give such widely varying results under the same experimental conditions. However, this is a very good safeguard.

Problem 2: Remember that earlier we had said something about a three-sigma confidence level? It means that the data is reliable and the chances of it being wrong are roughly one-in-a-thousand (99.9% accurate). Confidence levels measure reliability of data. For a discovery to be accepted by the scientific community, the event must have at least a five-sigma confidence level or higher, which means that doubts must reduce to less than one-in-a-million. The problem with the current bump is that it lies just below the five-sigma confidence level.

Take a look at the above graph. Never mind the mathematics and abstruse symbols. Know that the horizontal axis represents the mass of the particles and the vertical axis represents the number of particles detected. At the 145-150 GeV range (point 145 GeV on the horizontal axis), you'd have expected a curve if the previous CDF results were replicated. This is marked with the dotted curve. There is nothing there as far as DZero is concerned. The red regions represent detections, and these are in complete agreement with the Standard Model. There is no anomaly to be seen anywhere. On both counts, the bump is rejected as a new discovery.
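For readers who want to see where the sigma-to-probability figures come from, here is a minimal C sketch (our own illustration, not anything from Fermilab's analysis) that converts a significance level into a one-sided Gaussian tail probability. It shows three sigma corresponds to about 1 in 740 and five sigma to about 1 in 3.5 million; the one-in-a-thousand and one-in-a-million figures above are convenient roundings of these values.

```c
#include <stdio.h>
#include <math.h>

/* One-sided Gaussian tail probability for an n-sigma excess:
 * p = 0.5 * erfc(n / sqrt(2)). Compile with -lm. */
double tail_probability(double n_sigma) {
    return 0.5 * erfc(n_sigma / sqrt(2.0));
}

int main(void) {
    const double levels[] = {1.0, 2.0, 3.0, 5.0};
    for (int i = 0; i < 4; i++) {
        double p = tail_probability(levels[i]);
        printf("%.0f sigma: p = %.2e (about 1 in %.0f)\n",
               levels[i], p, 1.0 / p);
    }
    return 0;
}
```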
### What changes then?

Practically nothing changes. The 145 GeV particle, if discovered, would have been interesting, as the Standard Model doesn't predict it. Further, it could have provided a mechanism for particles acquiring mass without the need of the Higgs boson (essentially becoming the new 'God particle'). With it being ruled out, the Standard Model stands as it is, with the Higgs mechanism being the most favoured mechanism for mass generation. The discovery would have been exciting, but the field's exciting even without it. After all, science is like this. DZero spokesperson Stefan Soldner-Rembold put it appropriately in a Fermilab press conference: This is exactly how science works. Independent verification of any new observation is the key principle of scientific research. So very true!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345906734466553, "perplexity": 1153.415596938976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513512.31/warc/CC-MAIN-20171211125234-20171211145234-00305.warc.gz"}
http://math.stackexchange.com/questions/258934/how-to-prove-this-simple-statement-max-a-b-frac12aba-b
# How to prove this simple statement: $\max\{a,b\}=\frac{1}{2}(a+b+|a-b|)$ [duplicate]

I am trying to prove this statement: for any $a,b \in \mathbb{R}$,

$$\max\{a,b\}=\frac{1}{2}\big(a+b+|a-b|\big)$$ and $$\min\{a,b\}=\frac{1}{2}\big(a+b-|a-b|\big)$$

I am racking my brain, not knowing where and how to start. I will be thankful in tons for any guidance.

Hint: $\max(a,b) + \min(a,b)=a+b$ and $\max(a,b)-\min(a,b)=|a-b|$. Now solve for $\max(a,b)$ and $\min(a,b)$ – Thomas Andrews Dec 14 '12 at 22:17

Without loss of generality, we can assume that $a = \max(a, b)$ and $b=\min(a, b)$, as both of the expressions are symmetric. So since $a \geq b$ we have $a-b \geq 0$, thus $a-b=|a-b|$, and so $\dfrac{a+b+|a-b|}{2} = \dfrac{a+b+a-b}{2}=a=\max(a, b)$. Similarly, we have $\dfrac{a+b-|a-b|}{2}=\dfrac{a+b-(a-b)}{2} = b=\min(a, b)$.

- great, thanks a lot – doniyor Dec 14 '12 at 22:17
- @doniyor: As you say, $\frac12(a+b)$ is the arithmetic mean of $a$ and $b$, so it's the point midway between them. $\frac12|a-b|$ is half the distance between $a$ and $b$. If you start right in the middle, at $\frac12(a+b)$, and add half the distance between $a$ and $b$, you reach the right endpoint of the interval with $a$ and $b$ as endpoints; if you subtract half the distance between them, you reach the left endpoint. – Brian M. Scott Dec 14 '12 at 22:24
- @BrianM.Scott oh okay, so I am reaching the min or max through this, right? – doniyor Dec 14 '12 at 22:29
- @doniyor: Yes: the interval with $a$ and $b$ as endpoints is $$\big[\min\{a,b\},\max\{a,b\}\big]\;,$$ so if you start at the middle of the interval and move half the length of the interval to the right, you reach $\max\{a,b\}$. – Brian M. Scott Dec 14 '12 at 22:32
- @BrianM.Scott great, Brian, thank you so much! you are better than my prof in the lecture :D – doniyor Dec 14 '12 at 22:33

What is the definition of $\max\{a,b\}$? Hint: it involves two possible cases. For each of these cases, check that the right hand side gives the same answer. Job done. Repeat for $\min\{a,b\}$.

- thanks, I will try now – doniyor Dec 14 '12 at 22:13
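(Added note, not part of the original thread: the identities are also easy to check numerically. A minimal C sketch comparing the formulas against the standard library's fmax and fmin for a few sample pairs, including the tie case a = b:)

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double pairs[][2] = {{3.0, 7.0}, {-2.5, -9.0}, {4.0, 4.0}};
    for (int i = 0; i < 3; i++) {
        double a = pairs[i][0], b = pairs[i][1];
        /* the two identities from the question */
        double max_f = 0.5 * (a + b + fabs(a - b));
        double min_f = 0.5 * (a + b - fabs(a - b));
        printf("a=%5.1f b=%5.1f : formula max %5.1f (fmax %5.1f), "
               "formula min %5.1f (fmin %5.1f)\n",
               a, b, max_f, fmax(a, b), min_f, fmin(a, b));
    }
    return 0;
}
```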
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478191137313843, "perplexity": 231.56530800723513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051268601.70/warc/CC-MAIN-20160524005428-00125-ip-10-185-217-139.ec2.internal.warc.gz"}
http://becomingindiedev.blogspot.com.es/2013/05/integration-our-best-friend-part-2.html
## Thursday, May 23, 2013

### Integration: Our Best Friend (Part 2)

In the previous part of this post, we covered some mathematical foundations behind the concepts of position, velocity and acceleration. We explained that velocity is the derivative of position and that acceleration is the derivative of velocity. We also saw that derivatives represent rates of change, and therefore, velocity is the rate of change in position and acceleration the rate of change in velocity. If velocity is constant (i.e. acceleration is 0), the rate of change in position is constant, and the position changes linearly. If on the other hand acceleration is $>$ 0 (but constant), velocity grows linearly and position grows quadratically (i.e. forming a curve). Figure 1 shows different possibilities: stationary objects (no velocity), uniform motion (constant velocity) and motion with constant acceleration.

Figure 1 (source: http://cnx.org)

The problem that an integrator tries to solve is how to figure out (fast and efficiently) the velocity from the acceleration and the position from the velocity (i.e. doing the inverse of derivatives). And why? Because in most games, the acceleration on an object is the only information that we have. Actually, this is not completely true. The information that we usually have is the force applied to an object (due to a collision or a hit, for example). However, as Newton's second law states: 'The acceleration of a body is directly proportional to, and in the same direction as, the net force acting on the body, and inversely proportional to its mass'. Mathematically, this can be represented as follows:

$F = m \cdot a$     (1)

Therefore, knowing the direction and magnitude of an applied force, we can figure out the acceleration from (1):

$a = \frac{F}{m}$     (2)

At this point, we know the acceleration. We may find two different situations here, depending on whether we have constant acceleration or not.

Constant acceleration

When we have constant acceleration, we can simply move our minds back to high school times and apply the famous equations that model how position changes when there is constant acceleration, that is:

$p = p_{0} + v_{0} \cdot t + \frac{1}{2} \cdot a \cdot t^{2}$  (3)

Then, we could simply update velocity for the next iteration:

$v = v_{0} + a \cdot t$  (4)

Formulas (3) and (4) are also integration formulas, but they don't approximate: they provide the real position and velocity values. Here, in a couple of comments, they call this type of integration a ballistic or parabolic integrator, even though I haven't found any other references or entries for these names. In any case, the point is that if we know that acceleration is constant, we can and should use these formulas, because they provide accurate results. Euler's integrator, however, accumulates errors over time, and if the time step is big, this error grows faster. Euler's integration uses the following formulas:

$p = v \cdot t$ (5)

$v = a \cdot t$  (6)

Let's see how we could implement both integrators in C.
```c
#include <stdio.h>

int main(void) {
    int i, limit = 10;  /* number of time steps */
    int dt = 1;         /* time step */
    int position = 0, velocity = 0, acceleration = 10;

    printf("Using physics formulas\n");
    for (i = 1; i <= limit; i++) {
        printf("Time %d: ", i);
        /* exact update for constant acceleration: p += v*dt + a*dt^2/2 */
        position += velocity * dt + acceleration * dt * dt / 2;
        velocity += acceleration * dt;
        printf("Position %d, Velocity %d\n", position, velocity);
    }
    printf("--------------------------------\n");

    velocity = 0; position = 0;
    printf("Euler integration\n");
    for (i = 1; i <= limit; i++) {
        printf("Time %d: ", i);
        /* Euler: position is updated with the current velocity only */
        position += velocity * dt;
        velocity += acceleration * dt;
        printf(" Position %d, Velocity %d\n", position, velocity);
    }
    return 0;
}
```

If we execute the previous code and change the number of iterations, we can see that the higher the number of iterations, the higher the error is (that is, error is accumulating in Euler's integration). Also, if we reduce the time step (dt), we can see how Euler's error decreases, whereas it would increase in case of a bigger time step. If you look carefully at the formulas, all of this makes sense. Notice that the only difference between the 'accurate' integration and Euler integration is the position update. The former includes more information (basically an extra term $\frac{1}{2} \cdot a \cdot dt^{2}$). The smaller dt, the less significant is the difference with Euler's. Also note that the velocity update is exactly the same in both methods. As a conclusion, there is no problem with constant acceleration, as we can achieve exact results. This becomes a little trickier in non-constant acceleration situations.

Non-constant acceleration

Many physics engines have to deal with non-constant acceleration. For example, consider a force on an object that is proportional to the velocity of the object, that is:

$F = -v$    (7)

This could model air friction, for example. In this case, and because we always assume constant mass, we deduce from (2) that if the velocity changes, the acceleration changes. In these cases, we find a differential equation:

$a = \frac{dv}{dt} = \frac{-v}{m}$      (8)

It's an ordinary differential equation (ODE) because the equation of the velocity has the derivative of the velocity in it. Analytically integrating ODEs (i.e. with paper and pen and with exact results) is tough, but integrating them numerically is pretty straightforward. We have already seen the formulas of Euler integration (5 and 6), but let's re-write them again, this time considering a time step called $h$ (instead of $t$), using the differential notation and considering explicitly the iterations with a subscript:

$p_{n+1} = p_{n} + h \cdot \frac{dp_{n}}{dt}$     (9)

$v_{n+1} = v_{n} + h \cdot \frac{dv_{n}}{dt}$  (10)

Most integration methods follow the same pattern. They evaluate the derivative (remember, the slope) of the variable we want to figure out, and they step forward in time by a fixed amount, h, on the tangent line with that slope. In the next step, they evaluate the derivative at the new position to get a new slope, taking another time step. Chris Hecker gives a very good, graphical explanation of this process here.

Of course, Euler will have the same problems we discussed previously, but at least if we choose a sufficiently small time step (as we may find in 60 frames/second games, where the time step is approximately 0.017 seconds), results are not that bad. However, in physics-intensive games, where you need high accuracy, or in order to avoid inaccuracy problems due to drops in frame rates, you'll need other, better integration methods, and that's what I'll cover in the last post.
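To see Euler in action on the non-constant-acceleration case, here is a minimal C sketch (my own illustration; the mass, time step and initial velocity are assumed values, not from the original post) integrating the drag ODE (8) with the update rules (9) and (10). The exact solution $v(t) = v_0 \cdot e^{-t/m}$ is printed alongside for comparison:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double m = 2.0;    /* mass (assumed value)                 */
    const double h = 0.017;  /* time step, roughly one frame @60 fps */
    double v = 10.0;         /* initial velocity (assumed)           */
    double p = 0.0;          /* initial position                     */

    for (int n = 0; n <= 300; n++) {
        if (n % 60 == 0) {
            double t = n * h;
            /* exact solution of (8): v(t) = v0 * exp(-t/m) */
            printf("t=%5.2f  p=%8.4f  v=%8.4f  (exact v=%8.4f)\n",
                   t, p, v, 10.0 * exp(-t / m));
        }
        double a = -v / m;   /* acceleration depends on current velocity */
        p += h * v;          /* Euler step for position, eq. (9) */
        v += h * a;          /* Euler step for velocity, eq. (10) */
    }
    return 0;
}
```

Shrinking h brings the Euler trajectory closer to the exact exponential decay, which is exactly the time-step sensitivity discussed above.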
Concretely, I will explain Runge-Kutta order 4 and Verlet methods. I'll also explain how I dealt with all this for the game I'm working on, and I'll provide a list of references that were useful to me while writing these posts. Some of them will give you more mathematical insight. See you!

#### 2 comments:

1. Whew, our friend Runge-Kutta. What great times those were! Well, I'll pass on reading all this, and in English no less, but I see the LaTeX turned out very well ;)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9146126508712769, "perplexity": 770.4097084810788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866201.72/warc/CC-MAIN-20180524092814-20180524112814-00477.warc.gz"}
http://www.thefullwiki.org/Robust_statistics
# Robust statistics

Robust statistics provides an alternative approach to classical statistical methods. The motivation is to produce estimators that are not unduly affected by small departures from model assumptions.

## Introduction

Robust statistics seeks to provide methods that emulate classical methods, but which are not unduly affected by outliers or other small departures from model assumptions. In statistics, classical methods rely heavily on assumptions which are often not met in practice. In particular, it is often assumed that the data residuals are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical methods often have very poor performance. This can be studied empirically by examining the sampling distribution of various estimators under a mixture model, where one mixes in a small amount (1–5% is often sufficient) of contamination. For instance, one may use a mixture of 95% of a normal distribution and 5% of a normal distribution with the same mean but significantly higher standard deviation (representing the errors).

In order to quantify the robustness of a method, it is necessary to define some measures of robustness. Perhaps the most common of these are the breakdown point and the influence function, described below.

Robust parametric statistics tends to rely on replacing the normal distribution in classical methods with the t-distribution with low degrees of freedom (high kurtosis; degrees of freedom between 4 and 6 have often been found to be useful in practice) or with a mixture of two or more distributions.

## Examples of robust and non-robust statistics

Trimmed estimators and Winsorised estimators are general methods to make statistics more robust. M-estimators are a general class of robust statistics.

## Definition

There are various definitions of a "robust statistic". Strictly speaking, a robust statistic is resistant to errors in the results produced by deviations from assumptions[1] (e.g. of normality). This means that if the assumptions are only approximately met, the robust estimator will still have a reasonable efficiency and reasonably small bias, as well as being asymptotically unbiased, meaning having a bias tending towards 0 as the sample size tends towards infinity.

One of the most important cases is distributional robustness[1]. Classical statistical procedures are typically sensitive to "longtailedness" (e.g., when the distribution of the data has longer tails than the assumed normal distribution). Thus, in the context of robust statistics, distributionally robust and outlier-resistant are effectively synonymous[1]. A related topic is that of resistant statistics, which are resistant to the effect of extreme scores. Most statistics are either robust and resistant, or neither.

## Example: speed of light data

Gelman et al. in Bayesian Data Analysis (2004) consider a data set relating to speed of light measurements made by Simon Newcomb. The data sets for that book can be found via the Classic data sets page, and the book's website contains more information on the data. Although the bulk of the data look to be more or less normally distributed, there are two obvious outliers.
These outliers have a large effect on the mean, dragging it towards them, and away from the center of the bulk of the data. Thus, if the mean is intended as a measure of the location of the center of the data, it is, in a sense, biased when outliers are present. Also, the distribution of the mean is known to be asymptotically normal due to the central limit theorem. However, outliers can make the distribution of the mean non-normal even for fairly large data sets. Besides this non-normality, the mean is also inefficient in the presence of outliers and less variable measures of location are available. ### Estimation of location The plot below shows a density plot of the speed of light data, together with a rug plot (panel (a)). Also shown is a normal QQ-plot (panel (b)). The outliers are clearly visible in these plots. Panels (c) and (d) of the plot show the bootstrap distribution of the mean (c) and the 10% trimmed mean (d). The trimmed mean is a simple robust estimator of location that deletes a certain percentage of observations (10% here) from each end of the data, then computes the mean in the usual way. The analysis was performed in R and 10,000 bootstrap samples were used for each of the raw and trimmed means. The distribution of the mean is clearly much wider than that of the 10% trimmed mean (the plots are on the same scale). Also note that whereas the distribution of the trimmed mean appears to be close to normal, the distribution of the raw mean is quite skewed to the left. So, in this sample of 66 observations, only 2 outliers cause the central limit theorem to be inapplicable. Robust statistical methods, of which the trimmed mean is a simple example, seek to outperform classical statistical methods in the presence of outliers, or, more generally, when underlying parametric assumptions are not quite correct. Whilst the trimmed mean performs well relative to the mean in this example, better robust estimates are available. In fact, the mean, median and trimmed mean are all special cases of M-estimators. Details appear in the sections below. ### Estimation of scale The outliers in the speed of light data have more than just an adverse effect on the mean; the usual estimate of scale is the standard deviation, and this quantity is even more badly affected by outliers because the squares of the deviations from the mean go into the calculation, so the outliers' effects are exacerbated. The plots below show the bootstrap distributions of the standard deviation, median absolute deviation (MAD) and Qn estimator of scale (Rousseeuw and Croux, 1993). The plots are based on 10000 bootstrap samples for each estimator, and some normal random noise was added to the resampled data (smoothed bootstrap). Panel (a) shows the distribution of the standard deviation, (b) of the MAD and (c) of Qn. The distribution of standard deviation is erratic and wide, a result of the outliers. The MAD is better behaved, and Qn is a little bit more efficient than MAD. This simple example demonstrates that when outliers are present, the standard deviation cannot be recommended as an estimate of scale. ### Manual screening for outliers Traditionally, statisticians would manually screen data for outliers, and remove them, usually checking the source of the data to see if the outliers were erroneously recorded. Indeed, in the speed of light example above, it is easy to see and remove the two outliers prior to proceeding with any further analysis. 
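To make the location and scale estimators above concrete, here is a small self-contained C sketch (my own illustration; the numbers are made up, not Newcomb's data) comparing the mean, 10% trimmed mean, median, and MAD on a sample with two planted gross outliers:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

static double mean(const double *x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i];
    return s / n;
}

static double median_sorted(const double *x, int n) {
    return n % 2 ? x[n / 2] : 0.5 * (x[n / 2 - 1] + x[n / 2]);
}

int main(void) {
    /* ten "good" observations near 27, plus two gross outliers */
    double x[] = {26, 27, 27, 28, 28, 29, 27, 26, 28, 27, -44, -2};
    int n = (int)(sizeof x / sizeof x[0]);
    double dev[12];

    qsort(x, n, sizeof x[0], cmp);
    int k = (int)(0.10 * n);           /* number trimmed from each end */
    double med = median_sorted(x, n);

    /* MAD: median of absolute deviations from the median */
    for (int i = 0; i < n; i++) dev[i] = fabs(x[i] - med);
    qsort(dev, n, sizeof dev[0], cmp);

    printf("mean             = %6.2f\n", mean(x, n));
    printf("10%% trimmed mean = %6.2f\n", mean(x + k, n - 2 * k));
    printf("median           = %6.2f\n", med);
    printf("MAD              = %6.2f\n", median_sorted(dev, n));
    return 0;
}
```

With the two outliers present, the mean is dragged far below the bulk of the data, while the trimmed mean, median, and MAD barely move — the behavior described above.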
However, in modern times, data sets often consist of large numbers of variables being measured on large numbers of experimental units. Therefore, manual screening for outliers is often impractical. Outliers can often interact in such a way that they mask each other. As a simple example, consider a small univariate data set containing one modest and one large outlier. The estimated standard deviation will be grossly inflated by the large outlier. The result is that the modest outlier looks relatively normal. As soon as the large outlier is removed, the estimated standard deviation shrinks, and the modest outlier now looks unusual. This problem of masking gets worse as the complexity of the data increases. For example, in regression problems, diagnostic plots are used to identify outliers. However, it is common that once a few outliers have been removed, others become visible. The problem is even worse in higher dimensions. Robust methods provide automatic ways of detecting, downweighting (or removing), and flagging outliers, largely removing the need for manual screening. ### Variety of applications Although this article deals with general principles for univariate statistical methods, robust methods also exist for regression problems, generalized linear models, and parameter estimation of various distributions. ## Measures of robustness The basic tools used to describe and measure robustness are, the breakdown point, the influence function and the sensitivity curve. ### Breakdown point Intuitively, the breakdown point of an estimator is the proportion of incorrect observations (i.e. arbitrarily large observations) an estimator can handle before giving an arbitrarily large result. For example, given n independent random variables $(X_1,\dots,X_n)\sim\mathcal{N}(0,1)$ and the corresponding realizations $x_1,\dots,x_n$, we can use $\overline{X_n}:=\frac{X_1+\cdots+X_n}{n}$ to estimate the mean. Such an estimator has a breakdown point of 0 because we can make $\overline{x}$ arbitrarily large just by changing any of $x_1,\dots,x_n$. The higher the breakdown point of an estimator, the more robust it is. Intuitively, we can understand that a breakdown point cannot exceed 50% because if more than half of the observations are contaminated, it is not possible to distinguish between the underlying distribution and the contaminating distribution. Therefore, the maximum breakdown point is 0.5 and there are estimators which achieve such a breakdown point. For example, the median has a breakdown point of 0.5. The X% trimmed mean has breakdown point of X%, for the chosen level of X. Huber (1981) and Maronna et al. (2006) contain more details. Statistics with high breakdown points are sometimes called resistant statistics.[2] #### Example: speed of light data In the speed of light example, removing the two lowest observations causes the mean to change from 26.2 to 27.75, a change of 1.55. The estimate of scale produced by the Qn method is 6.3. Intuitively, we can divide this by the square root of the sample size to get a robust standard error, and we find this quantity to be 0.78. Thus, the change in the mean resulting from removing two outliers is approximately twice the robust standard error. The 10% trimmed mean for the speed of light data is 27.43. Removing the two lowest observations and recomputing gives 27.67. Clearly, the trimmed mean is less affected by the outliers and has a higher breakdown point. 
Notice that if we replace the lowest observation, -44, by -1000, the mean becomes 11.73, whereas the 10% trimmed mean is still 27.43. In many areas of applied statistics, it is common for data to be log-transformed to make them near symmetrical. Very small values become large negative when log-transformed, and zeroes become negatively infinite. Therefore, this example is of practical interest.

### Empirical influence function

(Figure: Tukey's biweight function)

The empirical influence function gives us an idea of how an estimator behaves when we change one point in the sample, and relies only on the data (i.e. no model assumptions). The figure shows Tukey's biweight function, which, as we will later see, is an example of what a "good" (in a sense defined later on) empirical influence function should look like. The context is the following:

1. $(\Omega,\mathcal{A},P)$ is a probability space,
2. $(\mathcal{X},\Sigma)$ is a measure space (state space),
3. Θ is a parameter space of dimension $p\in\mathbb{N}^*$,
4. (Γ,S) is a measure space,
5. $\gamma:\Theta\rightarrow\Gamma$ is a projection,
6. $\mathcal{F}(\Sigma)$ is the set of all possible distributions on Σ.

For example,

1. $(\Omega,\mathcal{A},P)$ is any probability space,
2. $(\mathcal{X},\Sigma)=(\mathbb{R},\mathcal{B})$,
3. $\Theta=\mathbb{R}\times\mathbb{R}^+$,
4. $(\Gamma,S)=(\mathbb{R},\mathcal{B})$,
5. $\gamma:\mathbb{R}\times\mathbb{R}^+\rightarrow\mathbb{R}$ is defined by γ(x,y) = x.

The definition of an empirical influence function is: let $n\in\mathbb{N}^*$, let $X_1,\dots,X_n:(\Omega, \mathcal{A})\rightarrow(\mathcal{X},\Sigma)$ be iid, and let $(x_1,\dots,x_n)$ be a sample from these variables. $T_n:(\mathcal{X}^n,\Sigma^n)\rightarrow(\Gamma,S)$ is an estimator. Let $i\in\{1,\dots,n\}$. The empirical influence function $EIF_i$ at observation i is defined by:

$EIF_i:x\in\mathcal{X}\mapsto T_n(x_1,\dots,x_{i-1},x,x_{i+1},\dots,x_n)\in\Gamma$

What this actually means is that we are replacing the i-th value in the sample by an arbitrary value and looking at the output of the estimator. This notion of influence function is analogous to other notions of influence function, such as impulse response: it measures sensitivity to the value at a point.

### Influence function and sensitivity curve

Instead of relying solely on the data, we could use the distribution of the random variables. The approach is quite different from that of the previous paragraph. What we are now trying to do is to see what happens to an estimator when we change the distribution of the data slightly: it assumes a distribution, and measures sensitivity to change in this distribution. By contrast, the empirical influence assumes a sample set, and measures sensitivity to change in the samples.

Let A be a convex subset of the set of all finite signed measures on $\mathcal{X}$. We want to estimate the parameter $\theta\in\Theta$ of a distribution F in A. Let the functional $T:A\rightarrow\Gamma$ be the asymptotic value of some estimator sequence $(T_n)_{n\in\mathbb{N}}$. We will suppose that this functional is Fisher consistent, i.e. $\forall \theta\in\Theta, T(F_\theta)=\theta$. This means that at the model F, the estimator sequence asymptotically measures the right quantity. Let G be some distribution in A. What happens when the data doesn't follow the model F exactly but another, slightly different distribution, "going towards" G? We're looking at:

$dT_{G-F}(F) = \lim_{t\rightarrow 0^+}\frac{T(tG+(1-t)F) - T(F)}{t}$,

which is the directional derivative of T at F, in the direction of G.
Let $x\in\mathcal{X}$. $\Delta_x$ is the probability measure which gives mass 1 to x. We choose $G = \Delta_x$. The influence function is then defined by:

$IF(x; T; F):=\lim_{t\rightarrow 0^+}\frac{T(t\Delta_x+(1-t)F) - T(F)}{t}.$

It describes the effect of an infinitesimal contamination at the point x on the estimate we are seeking, standardized by the mass t of the contamination (the asymptotic bias caused by contamination in the observations). For a robust estimator, we want a bounded influence function, that is, one which does not go to infinity as x becomes arbitrarily large.

## Desirable properties

Properties of an influence function which bestow it with desirable performance are:

1. Finite rejection point $\rho^*$,
2. Small gross-error sensitivity $\gamma^*$,
3. Small local-shift sensitivity $\lambda^*$.

### Rejection point

$\rho^*:=\inf_{r>0}\{r:IF(x;T;F)=0, |x|>r\}$

### Gross-error sensitivity

$\gamma^*(T;F) := \sup_{x\in\mathcal{X}}|IF(x; T ; F)|$

### Local-shift sensitivity

$\lambda^*(T;F) := \sup_{(x,y)\in\mathcal{X}^2\atop x\neq y}\left\|\frac{IF(y ; T; F) - IF(x; T ; F)}{y-x}\right\|$

This value, which looks a lot like a Lipschitz constant, represents the effect of shifting an observation slightly from x to a neighbouring point y, i.e., adding an observation at y and removing one at x.

## M-estimators

(The mathematical context of this paragraph is given in the section on empirical influence functions.)

Historically, several approaches to robust estimation were proposed, including R-estimators and L-estimators. However, M-estimators now appear to dominate the field as a result of their generality, high breakdown point, and their efficiency. See Huber (1981).

M-estimators are a generalization of maximum likelihood estimators (MLEs). What we try to do with MLEs is to maximize $\prod_{i=1}^n f(x_i)$ or, equivalently, minimize $\sum_{i=1}^n-\log f(x_i)$. In 1964, Huber proposed to generalize this to the minimization of $\sum_{i=1}^n \rho(x_i)$, where ρ is some function. MLEs are therefore a special case of M-estimators (hence the name: "Maximum likelihood type" estimators).

Minimizing $\sum_{i=1}^n \rho(x_i)$ can often be done by differentiating ρ and solving $\sum_{i=1}^n \psi(x_i) = 0$, where $\psi(x) = \frac{d\rho(x)}{dx}$ (if ρ has a derivative). Several choices of ρ and ψ have been proposed. The two figures below show four ρ functions and their corresponding ψ functions. For squared errors, ρ(x) increases at an accelerating rate, whilst for absolute errors, it increases at a constant rate. When Winsorizing is used, a mixture of these two effects is introduced: for small values of x, ρ increases at the squared rate, but once the chosen threshold is reached (1.5 in this example), the rate of increase becomes constant. Tukey's biweight (also known as bisquare) function behaves in a similar way to the squared error function at first, but for larger errors, the function tapers off.

### Properties of M-estimators

Notice that M-estimators do not necessarily relate to a probability density function. Therefore, off-the-shelf approaches to inference that arise from likelihood theory can not, in general, be used. It can be shown that M-estimators are asymptotically normally distributed, so that as long as their standard errors can be computed, an approximate approach to inference is available. Since M-estimators are normal only asymptotically, for small sample sizes it might be appropriate to use an alternative approach to inference, such as the bootstrap. However, M-estimates are not necessarily unique
(i.e. there might be more than one solution that satisfies the equations). Also, it is possible that any particular bootstrap sample can contain more outliers than the estimator's breakdown point. Therefore, some care is needed when designing bootstrap schemes.

Of course, as we saw with the speed of light example, the mean is only normally distributed asymptotically, and when outliers are present the approximation can be very poor even for quite large samples. However, classical statistical tests, including those based on the mean, are typically bounded above by the nominal size of the test. The same is not true of M-estimators, and the type I error rate can be substantially above the nominal level. These considerations do not "invalidate" M-estimation in any way. They merely make clear that some care is needed in their use, as is true of any other method of estimation.

### Influence function of an M-estimator

It can be shown that the influence function of an M-estimator T is proportional to ψ (see Huber, 1981 (and 2004), page 45), which means we can derive the properties of such an estimator (such as its rejection point, gross-error sensitivity or local-shift sensitivity) when we know its ψ function. $IF(x;T,F) = M^{-1}\psi(x,T(F))$ with the $p\times p$ matrix M given by:

$M = -\int_{\mathcal{X}}\left(\frac{\partial \psi(x,\theta)}{\partial \theta}\right)_{T(F)}dF(x)$.

### Choice of ψ and ρ

In many practical situations, the choice of the ψ function is not critical to gaining a good robust estimate, and many choices will give similar results that offer great improvements, in terms of efficiency and bias, over classical estimates in the presence of outliers (Huber, 1981). Theoretically, redescending ψ functions are to be preferred, and Tukey's biweight (also known as bisquare) function is a popular choice. Maronna et al. (2006) recommend the biweight function with efficiency at the normal set to 85%.

## Robust parametric approaches

M-estimators do not necessarily relate to a density function and so are not fully parametric. Fully parametric approaches to robust modeling and inference, both Bayesian and likelihood approaches, usually deal with heavy-tailed distributions such as Student's t-distribution. For the t-distribution with ν degrees of freedom, it can be shown that

$\psi(x) = \frac{x}{x^2 + \nu}$.

For ν = 1, the t-distribution is equivalent to the Cauchy distribution. Notice that the degrees of freedom is sometimes known as the kurtosis parameter. It is the parameter that controls how heavy the tails are. In principle, ν can be estimated from the data in the same way as any other parameter. In practice, it is common for there to be multiple local maxima when ν is allowed to vary. As such, it is common to fix ν at a value around 4 or 6. The figure below displays the ψ-function for 4 different values of ν.

#### Example: speed of light data

For the speed of light data, allowing the kurtosis parameter to vary and maximizing the likelihood, we get $\hat\mu = 27.40, \hat\sigma = 3.81, \hat\nu = 2.13.$ Fixing ν = 4 and maximizing the likelihood gives $\hat\mu = 27.49, \hat\sigma = 4.51.$

## Robust decision theory

Decision theory based on maximizing expected value or the expected utility hypothesis is sensitive to assumptions about probabilities of various outcomes, particularly if expectation is dominated by rare extreme events.
By contrast, non-probabilistic decision theories like minimax and minimax regret are independent of assumptions about the probabilities of outcomes, depending only on evaluating possible outcomes and their desirabilities. Scenario analysis and stress testing are informal non-probabilistic methods, while info-gap decision theory is a formal robust decision theory. Possibility theory and Dempster–Shafer theory are other non-probabilistic methods. Advocates of probabilistic approaches to decision theory argue that in fact all decision rules can be derived or dominated by Bayesian methods, appealing to results such as the complete class theorems, which show that all admissible decision rules are equivalent to a Bayesian decision rule with some prior distribution (possibly improper) and some utility function.

## Related concepts

A pivotal quantity is a function of data, whose underlying population distribution is a member of a parametric family, that is not dependent on the values of the parameters. An ancillary statistic is such a function that is also a statistic, meaning that it is computed in terms of the data alone. Such functions are robust to parameters in the sense that they are independent of the values of the parameters, but not robust to the model in the sense that they assume an underlying model (parametric family), and in fact such functions are often very sensitive to violations of the model assumptions. Thus test statistics, frequently constructed in terms of these so as not to be sensitive to assumptions about parameters, are still very sensitive to model assumptions.

## Key contributors

Key contributors to the field of robust statistics include Frank Hampel, Peter J. Huber and John Tukey.

## References

1. ^ a b c Peter J. Huber, Robust Statistics, Wiley, 1981 (republished in paperback, 2004), page 1.
2. ^ David B. Stephenson, Resistant statistics.

- Frank R. Hampel, Elvezio M. Ronchetti, Peter J. Rousseeuw and Werner A. Stahel, Robust Statistics: The Approach Based on Influence Functions, Wiley, 1986 (republished in paperback, 2005).
- Peter J. Huber, Robust Statistics, Wiley, 1981 (republished in paperback, 2004).
- Peter J. Rousseeuw and Annick M. Leroy, Robust Regression and Outlier Detection, Wiley, 1987 (republished in paperback, 2003).
- Ricardo Maronna, Doug Martin and Victor Yohai, Robust Statistics: Theory and Methods, Wiley, 2006.
- Andrew Gelman, John B. Carlin, Hal S. Stern and Donald B. Rubin, Bayesian Data Analysis, Chapman & Hall/CRC, 2004.
- P. J. Rousseeuw and C. Croux, "Alternatives to the Median Absolute Deviation", Journal of the American Statistical Association, 88, 1993.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 43, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301735162734985, "perplexity": 687.2955657091305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894289.49/warc/CC-MAIN-20140722025814-00132-ip-10-33-131-23.ec2.internal.warc.gz"}
http://theoilconundrum.blogspot.com/2012_01_01_archive.html
## Context/Earth

[[ Check out my Wordpress blog Context/Earth for environmental and energy topics tied together in a semantic web framework ]]

## Monday, January 23, 2012

### Thermal Diffusion and the Missing Heat

I have this documented already (in The Oil Conundrum) but let me put a new spin on it. What I will do is solve the heat equation with initial conditions and boundary conditions for a simple experiment. And then I will add two dimensions of Maximum Entropy priors.

The situation is measuring the temperature of a buried sensor situated at some distance below the surface after an impulse of thermal energy is applied. The physics solution to this problem is the heat kernel function, which is the impulse response or Green's function for that variation of the master equation. This is pure diffusion with no convection involved (heat is not sensitive to gravitational or electrical fields, so no convection). However, the diffusion coefficient involved in the solution is not known to any degree of precision. The earthen material that the heat is diffusing through is heterogeneously disordered, and all we can really guess is that it has a mean value for the diffusion coefficient. By inferring through the maximum entropy principle, we can say that the diffusion coefficient has a PDF that is exponentially distributed with a mean value D. We then work the original heat equation solution with this smeared version of D, and the kernel simplifies to an exponential solution:

$${1\over{2\sqrt{Dt}}}e^{-x/\sqrt{Dt}}$$

But we also don't know the value of x that well and have uncertainty in its value. If we give a Maximum Entropy uncertainty in that value, then the solution simplifies to

$${1\over2}{1\over{x_0+\sqrt{Dt}}}$$

where x0 is a smeared value for x. This is a valid approximation to the solution of this particular problem, and Figure 1 below is a fit to experimental data. There are two parameters to the model: an asymptotic value that is used to extrapolate a steady state value based on the initial thermal impulse, and the smearing value which generates the red line. The slightly noisy blue line is the data, and one can note the good agreement.

Figure 1: Fit of thermal dispersive diffusion model (red) to a heat impulse response (blue). Notice the long tail on the model fit.

The far field response in this case is the probability complement of the near field impulse response. In other words, what diffuses away from the source will show up at the adjacent target. By treating the system as two slabs in this way, we can give it an intuitive feel. By changing an effective scaled diffusion coefficient from small to large, we can change the tail substantially; see Figure 2. We call it effective because the stochastic smearing on D and Length makes it scale-free and we can no longer tell if the mean in D or Length is greater. We could have a huge mean for D and a small mean for Length, or vice versa, but we could not distinguish between the cases unless we have measurements at more locations.

Figure 2: Impulse response with increasing diffusion coefficient top to bottom. The term x represents time, not position.

In practice, we won't have a heat impulse as a stimulus. A much more common situation involves a step input for heat. The unit step response is the integral of the scaled impulse response. The integral shows how the heat sink target transiently draws heat from the source.
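Before looking at the limiting cases of the step response, the smeared impulse kernel above can be checked numerically. The following is a minimal C sketch (my own illustration; the mean diffusivity D0, probe distance x, and time t are assumed values) that averages the standard 1D Gaussian heat kernel over an exponential Maximum Entropy prior on D and compares the result against the closed form quoted above:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = acos(-1.0);
    const double D0 = 1.0;   /* mean diffusivity (assumed) */
    const double x  = 2.0;   /* probe distance (assumed)   */
    const double t  = 1.5;   /* elapsed time (assumed)     */

    /* Average the standard 1D Gaussian heat kernel
     *   (1/sqrt(4*pi*D*t)) * exp(-x^2/(4*D*t))
     * over the Maximum Entropy (exponential) prior
     *   p(D) = (1/D0) * exp(-D/D0). */
    double sum = 0.0;
    const double dD = 1e-4;
    for (double D = dD / 2; D < 50.0 * D0; D += dD) {
        double kernel = exp(-x * x / (4.0 * D * t)) / sqrt(4.0 * PI * D * t);
        double prior  = exp(-D / D0) / D0;
        sum += kernel * prior * dD;
    }

    /* closed-form dispersive kernel quoted in the text */
    double closed = exp(-x / sqrt(D0 * t)) / (2.0 * sqrt(D0 * t));

    printf("numerical average: %.6f\n", sum);
    printf("closed form      : %.6f\n", closed);
    return 0;
}
```

The two printed numbers agree to within the discretization error, which is the content of the marginalization argument above.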
If the effective diffusion coefficient is very small, an outlet for heat dispersal does not exist and the temperature will continue to rise. If the diffusion coefficient is zero, then the temperature will increase linearly with time, t (again, this is without a radiative response to provide an outlet).

Figure 3: Unit step response of dispersed thermal diffusion. The smaller the effective thermal diffusion coefficient, the longer the heat can stay near the source.

Eventually the response will attain a square-root growth law, indicative of a Fick's law regime, or what is often referred to as parabolic growth (somewhat of a misnomer). The larger the diffusion coefficient, the more the response will diverge from the linear growth. All this means is that the heat is dispersively diffusing to the heat sink.

Application to AGW

This has implications for the "heat in the pipeline" scenario of increasing levels of greenhouse gases and the expected warming of the planet. Since the heat content of the oceans is about 1200 times that of the atmosphere, it is expected that a significant portion of the heat will enter the oceans, where the large volume of water will act as a heat sink. This heat becomes hard to detect because of the ocean's large heat capacity, and it will take time for climate researchers to integrate the measurements before they can conclusively demonstrate that diffusion path. In the meantime, the lower atmospheric temperature may not change as much as it could, because the GHG heat gets diverted to the oceans. The heat is therefore "in the pipeline", with the ocean acting as a buffer, capturing the heat that would immediately appear in the atmosphere in the absence of such a large heat sink. The practical evidence for this is a slowing of the atmospheric temperature rise, in accordance with the slower sqrt(t) rise as compared with the linear rise in t. However, this can only go on so long, and when the ocean's heat sink provides a smaller temperature difference than the atmosphere, the excess heat will cause a more immediate temperature rise nearer the source, instead of being spread around. In terms of AGW, whenever the global temperature measurements start to show divergence from the model, it is likely due to the ocean's heat capacity. Like the atmospheric CO2, the excess heat is not "missing" but merely spread around.

EDIT: The contents of this post are discussed on The Missing Heat isn't Missing at all.

I mentioned in comments that the analogy is very close to sizing a heat sink for your computer's CPU. The heat sink works up to a point, then the fan takes over to dissipate that buffered heat via the fins. The problem is that the planet does not have a fan nor fins, but it does have an ocean as a sink. The excess heat then has nowhere left to go. Eventually the heat flow reaches a steady state, and the pipelining or buffering fails to dissipate the excess heat.

What's fittingly apropos is the unification of the two "missing" cases of climate science.

1. The "missing" CO2. Skeptics often complain about the missing CO2 in atmospheric measurements from that anticipated based on fossil fuel emissions. About 40% was missing by most accounts. This led to confusion between the ideas of residence times versus adjustment times of atmospheric CO2. As it turns out, a simple model of CO2 diffusing to sequestering sites accurately represented the long adjustment times, and the diffusion tails account for the missing 40%.
I derived this phenomenon using diffusion of trace molecules, while most climate scientists apply a range of time constants that approximate diffusion.

2. The "missing" heat. Concerns also arise about missing heat based on measurements of the average global temperature. When a TCR/ECS* ratio of 0.56 is asserted, 44% of the heat is missing. This leads to confusion about where the heat is in the pipeline. As it turns out, a simple model of thermal energy diffusing to deeper ocean sites may account for the missing 44%. In this post, I derived this using a master heat equation and uncertainty in the parameters. Isaac Held uses a different approach based on time constants.

So that is the basic idea behind modeling the missing quantities of CO2 and of heat -- just apply a mechanism of dispersed diffusion. For CO2, this is the Fokker-Planck equation and for temperature, the heat equation. By applying diffusion principles, the solution arguably comes out much more cleanly, and it will lead to better intuition as to the actual physics behind the observed behaviors.

I was alerted to this paper by Hansen et al (1985), which uses a box diffusion model. Hansen's Figure 2 looks just like my Figure 3 above. This bends over just like Hansen's does due to the diffusive square-root-of-time dependence. When superimposed, it is not quite as strong a bend, as shown in Figure 4 below.

Figure 4: Comparison against Hansen's model of diffusion

This missing heat is now clarified in my mind. In the paper, Hansen calls it "unrealized warming", which is heat entering into the ocean without raising the climate temperature substantially.

EDIT: The following figure is a guide to the eye which explains the role of the ocean in short- and long-term thermal diffusion, i.e. transient climate response. The data from BEST illustrates the atmospheric-land temperatures, which are part of the fast response to the GHG forcing function, while the GISTEMP temperature data reflects more of the ocean's slow response.

Figure 5: Transient Climate Response explanation

Figure 6: Hansen's original projection of transient climate sensitivity plotted against the GISTEMP data, which factors in ocean surface temperatures.

* TCR = Transient Climate Response
* ECS = Equilibrium Climate Sensitivity

"Somewhere around 23 x 10^22 Joules of energy over the past 40 years has gone into the top 2000m of the ocean due to the Earth's energy imbalance"

That is an amazing number. If one assumes an energy imbalance of 1 watt/m^2 and integrates this over 40 years and over the areal cross-section of the earth, that accounts for 16 x 10^22 joules. The excess energy is going somewhere and it doesn't always have to be reflected in an atmospheric temperature rise. To make an analogy, consider the following scenario. Lots of people understand how the heat sink works that is attached to the CPU inside a PC. What the sink does is combat the temperature rise caused by the electrical current being injected into the chip. That number multiplied by the supply voltage gives a power input specified in watts. Given a large enough attached heat sink, the power gets dissipated to a much larger volume before it gets a chance to translate quickly to a temperature rise inside the chip. Conceivably, with a large enough thermal conductance and a large enough mass for the heat sink, and an efficient way to transfer the heat from the chip to the sink, the process could defer the temperature rise to a great extent. That is an example of a transient thermal effect.
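Returning for a moment to the 16 x 10^22 joule figure quoted above: it is easy to verify with a few lines of C (my own sketch; the only inputs are the Earth's mean radius, the assumed 1 W/m^2 imbalance, and the length of a year):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = acos(-1.0);
    const double R = 6.371e6;      /* Earth mean radius, m            */
    const double imbalance = 1.0;  /* assumed energy imbalance, W/m^2 */
    const double years = 40.0;

    double area    = PI * R * R;   /* areal cross-section of the earth, m^2 */
    double seconds = years * 365.25 * 24.0 * 3600.0;
    double joules  = imbalance * area * seconds;

    printf("energy = %.1f x 10^22 J\n", joules / 1e22);  /* prints ~16.1 */
    return 0;
}
```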
The same thing as in the CPU analogy is happening to the earth, to an extent that we know must occur, but with some uncertainty based on the exact geometry and thermal diffusivity of the ocean and the ocean/atmospheric interface. The ocean is the heat sink and the atmosphere is the chip. The difference is that much of the input power is going directly into the ocean, and it is getting diffused into the depths. The atmosphere doesn't have to bear the brunt of the forcing function until the ocean starts to equilibrate with the atmosphere's temperature. This of course will take a long time, based on what we know about temporal thermal transients and the Fickian response of temperature due to a stimulus.

## Sunday, January 22, 2012

### The belief in Chaos

The Chief says: "Nothing just happens randomly in the Earth climate system. Randomness – or stochasticity – is merely a statistical approach to things you haven't understood yet."

One of the unsung achievements in physics, in comparison to the imagination-capturing aspects of relativity and quantum mechanics, is statistical mechanics. This will scale at many levels -- originally intended to bridge the gap between microscopic theory and macroscopic measurements, such as with the Planck response, scientists have provided statistical explanations of large coarse-grained behaviors as well (wind, ocean wave mechanics, etc). It's not that we don't understand the chaotic underpinnings, more that we don't always need to, due to the near-universal utility of the Boltzmann partition function (see the discussion on the Thermodynamics Climate Etc thread).

Many scientists consider pawning off difficulties to "Chaos" as a common crutch. This is not my original thought, as it is discussed at depth in "Science of Chaos or Chaos in Science?" by Bricmont. The issue with chaos theories is that they still have to obey some fundamental ideas of energy balance and conservation laws. Since stochastic approaches deal with probabilities, one rarely experiences problems with the fundamental bookkeeping. The basic idea with probability is that it has to integrate to unity, making it a slick tool for basic reasoning. That is why I like to use it so much for my own basic understanding of climate science (and all sorts of other things), but it unfortunately leads to heated disagreements with the chaos fans and non-linear purists, such as David Young and Chief Hydrologist. They are representative of the opposite side of the debate. You notice this when Chief states the importance of chaos theory:

"You should try to understand and accept that – along with the reality that my view has considerable support in the scientific literature. You should accept also that I am the future and you are the past. I think they should teach the 3 great ideas in 20th century physics – relativity, quantum mechanics and chaos theory. They are such fun."

There are only 4 fundamental forces in the universe: gravity, electromagnetism, and the strong and weak nuclear forces. For energy balance of the earth, all that matters is the electromagnetic force, as that is the predominant way that the earth exchanges energy with the rest of the universe. The 33 degree C warming temperature differential from the earth's gray-body default needs to be completely explained by a photonic mechanism. The suggestion is that clouds could change the climate. Unfortunately, this points in the incorrect direction for explaining the 33C difference.
Water vapor, when not condensed into droplets, acts as a strong GHG and likely does cause a significant fraction of the 33C rise. But when the water vapor starts condensing into droplets and thus forming clouds, the droplets begin to partially reflect the incoming radiation, so the sun provides even less heat to the earth. So obviously there is a push-pull effect in raising water vapor concentrations in the atmosphere.

Chief is daring us with his statement that "I am the future and you are the past". He evidently thinks that clouds are the feedback that will not be understood unless we drop down to chaos considerations. In other words, no careful statistical weighing of the warming impact of increasing water vapor concentrations against the cooling impact of cloud albedo will be explainable unless a full dynamical model is attempted and done correctly.

The divide is between whether one believes, as Chief does, that the vague "chaos theory" -- which is really shorthand for doing a complete dynamical calculation of everything, no exceptions -- is the answer, or whether the answer is one of energy balance and statistical considerations. I lean toward the latter, along with the great majority of climate scientists, as Andrew Lacis described a while ago here and in his comments. The full dynamics, as Lacis explained, are useful for understanding natural variability, and for practical applications such as weather prediction. But they are not the bottom line, as chaotic natural variability always has to obey the energy balance constraints. And the only practical way to do that is by taking a statistical view.

The bottom line is that I chuckle at much of the discussion of chaos and non-linearity when it comes to trying to understand various natural phenomena. The classic case is the simplest model of growth, described by the logistic differential equation. This is a non-linear equation with a solution described by the so-called logistic function. Huge amounts of work have gone into modeling growth using the logistic equation because of the appearance of an S-shaped curve in some empirical observations. (When it is a logistic difference equation, chaotic solutions result, but we will ignore that for this discussion.) A short numerical sketch of this deterministic equation appears at the end of this section.

Alas, there are trivial ways of deriving the same logistic function without having to assume non-linearity or chaos; instead one only has to assume disorder in the growth parameters and in the growth region. The derivation takes a few lines of math (see the TOC). Once one considers this picture, the logistic function arguably has a more pragmatic foundation based on stochastics than on non-linear determinism.

That is the essential problem of invoking chaos: it precludes (or at least masks) consideration of the much more mundane characteristics of the system. The mundane reality is that all natural behaviors are smeared out by differences in material properties/characteristics, by variation in geometrical considerations, and by thermalization contributing to entropy. The issue is that obsessives such as the Chief and others think that chaos is the hammer and that they can apply it to every problem that appears to look like a nail. Certainly, I can easily understand how the disorder in a large system can occasionally trigger tipping points or lead to stochastic resonances, but these are not revealed by analysis of any governing chaotic equations. They simply result from the disorder allowing behaviors to penetrate a wider volume of the state space.
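Here is the short numerical sketch promised above -- my own illustration, not the dispersion derivation itself. It simply shows that the non-linear logistic ODE dx/dt = r x (1 - x/K), integrated numerically, reproduces the closed-form S-shaped logistic function (all parameter values are arbitrary):

```python
import math

# Illustrative parameters for the logistic ODE dx/dt = r*x*(1 - x/K)
r, K, x0 = 0.5, 100.0, 1.0
dt, steps = 0.01, 3000

# Forward-Euler integration of the non-linear ODE
x = x0
for _ in range(steps):
    x += dt * r * x * (1.0 - x / K)

def logistic(t):
    # Closed-form solution of the same ODE (the S-shaped logistic function)
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

t_end = steps * dt
print(x, logistic(t_end))   # both approach the carrying capacity K
```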
When such disorder-driven excursions tickle the right positive feedback modes of the system, then we can observe some of the larger fluctuations. The end result is that the decadal oscillations are of the order of tenths of a degree in global average temperature. Of course I am not wedded to this thesis; it is just a pragmatic result of the stochastic and uncertainty considerations that I and a number of other people are interested in.

This is reproduced from a comment I made to Climate Etc:

I am glad that Myrrh posted this bit of pseudoscience. As I said earlier (when I thought that this thread was winding down), the actual pseudoscience is in the crackpot theories that commenters submit to this site. There is a huge amount of projection that goes on amongst the skeptical readership -- the projection is in the framing of their own scientific inadequacies onto the qualified scientists trying to understand and quantify the climate system.

Projection is a devious rhetorical strategy. It is an offensive as opposed to defensive approach. It catapults the propaganda from one of doubt on the skeptical side, to an apparent uncertainty on the mainstream science side. This adds FUD to the debate. As in politics, by attacking the strong points of your opponent's argument, you can actually make him look weaker. Everyone realizes how effective this is whenever the audience does not have the ability to discriminate nonsense from objective fact.

The key to projection is to make sure that the confidence game is played out according to script amongst the participants, both those in on the game and those unaware of what is happening. The shills in on the game have it easy -- they just have to remain silent, as it appears that no comment is condoning the arguments. The marks are the readership that gets suckered into the pseudoscience arguments, and are not sophisticated enough to be aware of the deception.

The antidote to this is to not remain silent. Call these confidence tricksters, including Myrrh, out on their game. Do it every time, because a sucker is born every minute. On the street corner, the 3-Card Monte hucksters will just move to another corner when they get called on it. Fortunately, there is no place for the tricksters to relocate on this site. Ultimately, science has the advantage, and as the objective of Climate Etc is to work out uncertainty objectively, not by confidence games, you all should know about the way to proceed.

Sorry if this bursts any bubbles, but when it comes to an aggressively nonsensical argument as that relayed by Myrrh, someone has to keep score. Then you have a kook like StephanTheDenier, who actually tries a thinly veiled psychological threat against me, by saying ("The souls of hose 600 people that did freeze to death in winter coldness, will haunt you in your sleep…"). What can I say but that threats are even more preposterous than projecting via lame theories.

----------

Outside of a sled dog, an igloo is an Eskimo's best friend. Inside of a sled dog, it's too cramped to sleep.

## Tuesday, January 17, 2012

### Wave Energy Spectrum

Ocean waves are just as disordered as the wind. We may not notice this because the scale of waves is usually smaller. In practice, the wind energy distribution relates to an open-water wave energy distribution via similar maximum entropy disorder considerations.
The following derivation assumes water deep enough that wave troughs do not touch bottom.

First, we make a maximum entropy estimate of the energy of a one-dimensional propagating wave driven by a prevailing wind direction. The mean energy of the wave is related to the wave height by the square of the height, H. This makes sense because a taller wave needs a broader base to support that height, leading to a scaled pseudo-triangular shape, as shown in Figure 1 below.

Figure 1: Total energy in a directed wave goes as the square of the height, and the macroscopic fluid properties suggest that it scales to size. This leads to a dispersive form for the wave size distribution.

Since the area of such a scaled triangle goes as H^2, the MaxEnt cumulative probability is:

$$P(H) = e^{-a H^2}$$

where a is related to the mean energy of an ensemble of waves. This relationship is empirically observed from measurements of ocean wave heights over a sufficient time period.

However, we can proceed further and try to derive the dispersion results for wave frequency, which is the much more common oceanographic measure. So we consider -- based on the energy stored in a specific wave -- the time, t, it will take to drop a height, H, via the Newton's-law (free-fall) relation:

$$t^2 \sim H$$

and since t goes as 1/f, we can create a new PDF from the height cumulative as follows:

$$p(f) df = \frac{dP(H)}{dH} \frac{dH}{df} df$$

where

$$H \sim \frac{1}{f^2}$$

$$\frac{dH}{df} \sim -\frac{1}{f^3}$$

then

$$p(f) \sim \frac{1}{f^5} e^{-\frac{c}{f^4}}$$

which is just the Pierson-Moskowitz wave spectrum that oceanographers have observed for years (developed first in 1964; variations of this include the Bretschneider and ITTC wave spectra). This concise derivation works well even though the formally correct path is to calculate an auto-correlation from p(f) and then derive a power spectrum from its Fourier transform. Still, this convenient shortcut remains useful for understanding the simple physics and probabilities involved.

As we have an interest in using this derived form for an actual potential application, we can seek out public-access stations to obtain and evaluate some real data. The following Figure 2 shows data pulled from the first region I accessed -- a pair of measuring stations located off the coast of San Diego. The default data selector picked the first day of this year, 1/1/2012, and the station server provided an averaged wave spectrum for the entire day. The red points correspond to best fits of the derived MaxEnt algorithm to the blue data set.

Figure 2: Wave energy spectra from two sites off of the San Diego coastal region. The Maximum Entropy estimate is in red.

To explore the dataset, here is a link to the interactive page: http://cdip.ucsd.edu/?nav=historic&sub=data&units=metric&tz=UTC&pub=public&map_stati=1,2,3&stn=167&stream=p1&xyrmo=201201&xitem=product25

Like the wind energy spectrum, the wave spectrum derives simply from maximum entropy conditions.

Refs
Introduction to physical oceanography, RH Stewart
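Appendix: a minimal numerical sketch of the derived spectral shape. The normalization constant 4c and the peak location (4c/5)^(1/4) follow from integrating and differentiating the expression above; the value of c is an arbitrary illustration.

```python
import numpy as np

def maxent_wave_spectrum(f, c):
    """Normalized MaxEnt / Pierson-Moskowitz-shaped pdf: 4c f^-5 exp(-c/f^4)."""
    return 4.0 * c * f**-5 * np.exp(-c / f**4)

c = 2.0e-3                        # illustrative shape parameter, Hz^4
f = np.linspace(0.02, 1.0, 5000)  # frequency grid, Hz

p = maxent_wave_spectrum(f, c)
print("integral ~", np.trapz(p, f))              # ~1, since it is a pdf
print("numeric peak:", f[np.argmax(p)])          # location of spectral peak
print("analytic peak (4c/5)^(1/4):", (4 * c / 5) ** 0.25)
```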
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425293564796448, "perplexity": 829.9544770946756}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188717.24/warc/CC-MAIN-20170322212948-00162-ip-10-233-31-227.ec2.internal.warc.gz"}
http://wiki.webpopix.org/index.php?title=Introduction_and_notation&oldid=6770
# Introduction and notation

## Different representations of the same model

The description of a model requires variables such as observations $(y_i)$, individual parameters $(\psi_i)$, population parameters $\theta$, covariates $(c_i)$, etc. Tasks to be performed (estimation, simulation, likelihood calculation, etc.) involve these variables. Algorithms used to perform these tasks can use different parameterizations, i.e., different mathematical representations of the same model. We will see that depending on the task, some mathematical representations are more suitable than others.

There exists for a modeler a natural parametrization involving a vector of individual parameters $\psi_i$ which have a physical or biological meaning (rate, volume, bioavailability, etc.). We will denote by $\psi$-representation the mathematical representation of the model which uses $\psi_i$:

$$\pyipsii(y_i , \psi_i ; \theta) = \pcyipsii(y_i | \psi_i)\ppsii( \psi_i ; \theta, c_i).$$ (1)

When there exists a transformation $h: \Rset^d \to \Rset^d$ such that $\phi_i=h(\psi_i)$ is a Gaussian vector, we can equivalently use the $\phi$-representation, which involves the transformed parameters (log-rate, log-volume, logit-bioavailability, etc.) and now represents the joint distribution of $y_i$ and $\phi_i$:

$$\pyiphii(y_i , \phi_i ; \theta, c_i) = \pcyiphii(y_i | \phi_i)\pphii( \phi_i ; \theta, c_i),$$ (2)

where $\phi_i =h(\psi_i) \sim {\cal N}( \mu(\beta,c_i) , \Omega)$ and $\theta=(\beta,\Omega)$.

There is yet another mathematical representation, which uses the vector of random effects $\eta_i$ to represent the individual parameters model:

$$\phi_i = \mu(\beta,c_i) + \eta_i ,$$

where $\eta_i \sim {\cal N}( 0 , \Omega)$. This $\eta$-representation leads to the joint distribution of $y_i$ and $\eta_i$:

$$\pyietai(y_i , \eta_i ; \theta, c_i) = \pcyietai(y_i | \eta_i;\beta,c_i)\petai( \eta_i ; \Omega).$$ (3)

We can see that the fixed effects $\beta$ now appear in the conditional distribution of the observations. This will have a strong impact on tasks such as estimation of population parameters, since a sufficient statistic for estimating $\beta$ derived from this representation will be a function of the observations $\by$, as opposed to the other representations, where the sufficient statistic is a function of the individual parameters $\bpsi$ (or equivalently, $\bphi$).

In the $\psi$-representation (1), if the model $\ppsii( \psi_i ; \theta, c_i)$ is not a regular statistical model (some components of $\psi_i$ may have no variability, or more generally $\Omega$ may not be positive definite), no sufficient statistic $S(\psi_i)$ for estimating $\theta$ exists. Thus, estimation algorithms will not use representation (1) in these cases, but another decomposition into regular statistical models.

Some examples

1. Consider the following model for continuous data with a constant error model:

$$\begin{eqnarray} y_{ij} &\sim& {\cal N}(f(t_{ij},\phi_i) ,a_i^2) \\ \phi_i &\sim& {\cal N}(\beta, \Omega) \\ a_i &\sim& p_a(\, \cdot \, ; \theta_a) . \end{eqnarray}$$

Here, the variance of the residual error is a random variable. The vector of individual parameters is $(\phi_i, a_i)$ and the vector of population parameters is $\theta=(\beta,\Omega,\theta_a)$.
Assuming that $\Omega$ is positive definite, the joint model of $y_i$, $\phi_i$ and $a_i$ can be decomposed as a product of three regular models:

$$\pyiphii(y_i , \phi_i, a_i ; \theta) = \pcyiphii(y_i | \phi_i ,a_i)\pphii( \phi_i ; \beta, \Omega)\pmacro(a_i ; \theta_a).$$

2. Assume instead that the variance of the residual error is fixed for the whole population:

$$y_{ij} \sim {\cal N}(f(t_{ij},\phi_i) ,a^2) .$$

The vector of population parameters is now $\theta=(\beta,\Omega,a)$ and the joint model of $y_i$ and $\phi_i$ can be decomposed as

$$\pyiphii(y_i , \phi_i ; \theta) = \pcyiphii(y_i | \phi_i ; a)\pphii( \phi_i ; \beta, \Omega).$$

3. Suppose that some components of $\phi_i$ have no inter-individual variability. More precisely, let $\phi_i=(\phi_i^{(1)}, \phi_i^{(0)})$ and $\beta=(\beta_1,\beta_0)$, such that

$$\begin{eqnarray} \phi_i^{(1)} &\sim& {\cal N}(\beta_1, \Omega_1) \\ \phi_i^{(0)} &=& \beta_0 , \end{eqnarray}$$

and $\Omega_1$ is positive definite. Here, $\theta=(\beta_1,\beta_0,\Omega_1,a)$ and

$$\pyiphii(y_i , \phi_i^{(1)} ; \theta) = \pcyiphii(y_i | \phi_i^{(1)} ; \beta_0, a)\pphii( \phi_i^{(1)} ; \beta_1, \Omega_1).$$

4. Assume instead that $\phi_i = (\phi_{i,1}, \phi_{i,2})$, where

$$\begin{eqnarray} \phi_{i,1} &=& \beta_1 + \omega_1\eta_i \\ \phi_{i,2} &=& \beta_2 + \omega_2\eta_i , \end{eqnarray}$$

and $\eta_i \sim {\cal N}(0,1)$. Here, the useful model is the joint distribution of $y_i$ and $\eta_i$. We can use for instance the following $\eta$-representation:

$$\pyietai(y_i , \eta_i ; \theta) = \pcyietai(y_i | \eta_i ;\theta)\petai( \eta_i),$$

where $\theta= (\beta_1,\beta_2, \omega_1,\omega_2,a)$.

## Some notation

We assume that the set of population parameters $\theta$ takes its values in $\Theta$, an open subset of $\Rset^m$. Let $f : \Theta \to \Rset$ be a twice differentiable function of $\theta$. We will denote by $\Dt{f(\theta)} = (\partial f(\theta)/\partial \theta_j, 1 \leq j \leq m)$ the gradient of $f$ (i.e., the vector of partial derivatives of $f$) and by $\DDt{f(\theta)} = (\partial^2 f(\theta)/\partial \theta_j\partial \theta_k, 1 \leq j,k \leq m)$ the Hessian of $f$ (i.e., the square matrix of second-order partial derivatives of $f$).
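To make the representations above concrete, here is a small simulation sketch. The structural model f, the log transformation h, and all numerical values are illustrative assumptions, not part of the original page; it simply generates data via the $\eta$-representation $\phi_i = \beta + \eta_i$ and maps back to the natural scale $\psi_i = h^{-1}(\phi_i)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population parameters theta = (beta, Omega, a); values are illustrative.
beta = np.log(np.array([2.0, 0.3]))   # mu for (log-volume, log-rate)
Omega = np.diag([0.1, 0.2])           # covariance of the random effects
a = 0.05                              # residual sd (constant error model)

def f(t, psi):
    V, k = psi                        # toy structural model: exponential decay
    return (1.0 / V) * np.exp(-k * t)

t = np.linspace(0.5, 12, 8)           # common design for all individuals
for i in range(5):
    eta_i = rng.multivariate_normal(np.zeros(2), Omega)   # eta-representation
    phi_i = beta + eta_i              # Gaussian transformed parameters
    psi_i = np.exp(phi_i)             # back-transform: positive natural scale
    y_i = f(t, psi_i) + a * rng.standard_normal(t.size)   # observations
    print(f"individual {i}: psi = {psi_i.round(3)}, y[0] = {y_i[0]:.3f}")
```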
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883654117584229, "perplexity": 300.23752026038596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363292.82/warc/CC-MAIN-20211206103243-20211206133243-00596.warc.gz"}
https://www.physicsforums.com/threads/tension-of-rope-by-hanging-mass.352028/
# Tension of rope by hanging mass

1. Nov 5, 2009

### yankeekd25

1. The problem statement, all variables and given/known data

A mass of 11 kg is hung on a rope of L = 2.7 meters. It is raised by 90 degrees (a quarter circle), held at rest, then released, and it falls due to gravity alone. What is the tension in the rope at the bottom of its path, in Newtons?

2. Nov 5, 2009

### Staff: Mentor

What do you think? Hint: What forces act on the mass? What is its acceleration?

3. Nov 5, 2009

### Nebozilla

Force diagrams, yay! I remember those helped me understand what was going on.

4. Nov 5, 2009

### Cryphonus

Force diagrams save lives every day. Don't forget to use energy equations for mechanics -- they are also good for your health :)
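Following the hints (energy conservation on the way down, then Newton's second law with centripetal acceleration at the bottom), here is a short sketch of the standard solution -- not from the thread itself, and assuming g = 9.81 m/s^2:

```python
m, L, g = 11.0, 2.7, 9.81   # given mass and rope length; g is assumed

# Energy conservation from the horizontal release point to the bottom:
# m*g*L = (1/2)*m*v^2  ->  v^2 = 2*g*L
v_squared = 2 * g * L

# Newton's second law at the bottom (net upward force = centripetal force):
# T - m*g = m*v^2/L  ->  T = m*g + m*v^2/L = 3*m*g
T = m * g + m * v_squared / L
print(T, 3 * m * g)   # both ~323.7 N
```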
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8472522497177124, "perplexity": 1971.646062585044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948597585.99/warc/CC-MAIN-20171217210620-20171217232620-00146.warc.gz"}
https://www.physicsforums.com/threads/order-of-groups-in-relation-to-the-first-isomorphism-theorem.271812/
# Order of groups in relation to the First Isomorphism Theorem.

1. Nov 14, 2008

### sairalouise

Given H, K general finite subgroups of G,

ord(HK) = [(ord(H))(ord(K))] / ord(H ∩ K)

I know by the first isomorphism theorem that isomorphic groups have the same order, but the left hand side of the equation is not a group, is it? I am struggling to show this.

2. Nov 14, 2008

### morphism

Yes, HK is not necessarily a group, but this is irrelevant. The identity you posted follows from an easy counting argument. The only theorem you need is Lagrange's. Here's a hint: $HK = \cup_{h \in H} hK$. So ord(HK) = ord(K) * number of distinct cosets of K of the form hK.

3. Nov 21, 2008

This doesn't help your problem, but if one of the subgroups, say H, is normal in G, then HK is a subgroup of G and (HK)/H and K/(H ∩ K) are isomorphic (this is the so-called second isomorphism theorem), from which the statement about the orders follows easily.
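A brute-force check of the identity in a small concrete case (an aside, not from the thread): S3 with two order-2 subgroups, permutations encoded as tuples. Note that here HK also illustrates the original question, since |HK| = 4 does not divide |S3| = 6, so HK is not a subgroup.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations of {0,1,2} stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

G = set(permutations(range(3)))      # the symmetric group S3
H = {(0, 1, 2), (1, 0, 2)}           # {e, (0 1)}
K = {(0, 1, 2), (0, 2, 1)}           # {e, (1 2)}

HK = {compose(h, k) for h in H for k in K}
lhs = len(HK)
rhs = len(H) * len(K) // len(H & K)  # ord(H)*ord(K)/ord(H ∩ K)
print(lhs, rhs)                      # 4 == 2*2/1
```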
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535416960716248, "perplexity": 743.9798115541532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188717.24/warc/CC-MAIN-20170322212948-00413-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/definition-of-topology.121801/
# Definition of topology

1. May 23, 2006

### jordanl122

Hi, I've been studying topology over the last semester and one thing that I was wondering about is why exactly topology is defined the way it is. For a refresher: given a set X we define a topology, T, to be a collection of subsets of X with the following 3 properties:
1) the null set and X are elements of T
2) the union of any elements of T is also in T (arbitrary, possibly infinite, unions)
3) the intersection of any elements of T is also in T (finite intersections)
I was reading some measure theory, and sigma-algebras are defined in a similar way, so I was wondering if someone could shed some light for me.
thanks,
Jordan

2. May 23, 2006

### LeonhardEuler

As I understand it (which I am only beginning to), topology is defined this way because it is an abstraction of studying open sets in a metric space. The sets that satisfy those requirements are defined to be the open sets. Defining them this way allows one to study topology even on sets without metrics. Those properties are satisfied by open sets in a metric space, so it is a true generalization.

3. May 23, 2006

### matt grime

Because that is the generalization of the metric structure on R that has worked out as the correct one in which to do analysis.

Light on what exactly? Sigma algebras differ in one significant way: they are closed under complements, thus effectively saying that you need to be closed under countable intersections as well as countable unions.
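For a finite set the three axioms can be checked directly; a small sketch (the example collection is my own, and for a finite collection, pairwise closure implies closure under all finite unions and intersections by induction):

```python
from itertools import combinations

def is_topology(X, T):
    """Check the topology axioms for a finite collection T of frozensets on X."""
    if frozenset() not in T or frozenset(X) not in T:
        return False          # axiom 1: empty set and X must belong to T
    for A, B in combinations(T, 2):
        # axioms 2 and 3: closure under unions and intersections
        if A | B not in T or A & B not in T:
            return False
    return True

X = {1, 2, 3}
T = {frozenset(), frozenset({1}), frozenset({1, 2}), frozenset(X)}
print(is_topology(X, T))      # True: a (non-discrete) topology on {1,2,3}
```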
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9672496318817139, "perplexity": 290.34147794225964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719286.6/warc/CC-MAIN-20161020183839-00006-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/143438-solve-system-equations-below-graphing-them-pencil-paper-print.html
# Solve the system of equations below by graphing them with a pencil and paper.

• May 6th 2010, 02:27 PM
daniel323
Solve the system of equations below by graphing them with a pencil and paper. Choose the correct ordered pair (b, u).
u = -b + 21
u = -2b + 30
• May 6th 2010, 02:49 PM
pickslides
Graph the two lines. The answer is the point where they intersect. (Rofl)
• May 6th 2010, 02:51 PM
daniel323
i got it how about this 1
Solve the system of equations below by graphing them with a pencil and paper. Enter your answer as an ordered pair.
y = -x + 2
y = -2x + 6
• May 6th 2010, 02:53 PM
pickslides
Exactly the same way you did the first one. Make sure you have a ruler, pencil, eraser and steady hands.
• May 6th 2010, 02:56 PM
daniel323
wat do i graph
• May 6th 2010, 02:59 PM
pickslides
Quote: Originally Posted by daniel323
wat do i graph
Each of these
$y =-x+2$
$y=-2x+6$
are straight lines; graph them both, then follow the advice given in post #2.
• May 6th 2010, 03:02 PM
daniel323
cant do it
• May 6th 2010, 03:21 PM
pickslides
What have you tried?
• May 6th 2010, 03:23 PM
daniel323
wat am i suppose to graph the 6 and 2 or the x
• May 6th 2010, 03:27 PM
Anonymous1
Quote: Originally Posted by daniel323
wat am i suppose to graph the 6 and 2 or the x
I showed you how to do this in a previous thread. I also showed you how to solve a system. Go try to figure out what is going on there, and come back with any serious questions.
• May 6th 2010, 03:35 PM
daniel323
the other problem the numbers are the same so it diffrent
• May 6th 2010, 03:42 PM
Anonymous1
Quote: Originally Posted by daniel323
the other problem the numbers are the same so it diffrent
HINT: Multiply the first equation through by -2.
• May 6th 2010, 03:45 PM
daniel323
• May 6th 2010, 03:47 PM
Anonymous1
Quote: Originally Posted by daniel323
How did you get that?
y = -x + 2
y = -2x + 6
Following the hint, we multiply the first equation through by -2 to get:
-2y = 2x - 4
y = -2x + 6
Now what do you want to do?
• May 6th 2010, 03:50 PM
daniel323
how did u get 4 in there
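As an aside (not part of the thread): the graphing answers can be checked algebraically by rewriting each system in the form Ax = c and solving:

```python
import numpy as np

# u = -b + 21 and u = -2b + 30, rewritten as b + u = 21 and 2b + u = 30
A = np.array([[1.0, 1.0], [2.0, 1.0]])
c = np.array([21.0, 30.0])
print(np.linalg.solve(A, c))    # [9. 12.]  ->  (b, u) = (9, 12)

# y = -x + 2 and y = -2x + 6, rewritten as x + y = 2 and 2x + y = 6
A2 = np.array([[1.0, 1.0], [2.0, 1.0]])
c2 = np.array([2.0, 6.0])
print(np.linalg.solve(A2, c2))  # [ 4. -2.]  ->  (x, y) = (4, -2)
```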
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083899021148682, "perplexity": 2252.75693117173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096209.79/warc/CC-MAIN-20150627031816-00103-ip-10-179-60-89.ec2.internal.warc.gz"}
https://pastebin.com/fRyvNHRv
# discrete Project Roshdy

Maco153, Dec 9th, 2019

\documentclass[12pt, a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsthm}
\usepackage{amsmath, amssymb}
\usepackage{xcolor}

\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}

\parindent=0mm
\parskip=1.9mm

\renewcommand\qedsymbol{$\blacksquare$}
\renewcommand{\restriction}{\mathord{\upharpoonright}}

\binoppenalty=\maxdimen
\relpenalty=\maxdimen

\author{Mohamed Roshdy \\ \\ The American University in Cairo}
%\email{@aucegypt.edu}
%\address{The American University in Cairo}


\begin{document}
\maketitle

\textcolor{red}{}

\textbf{Abstract.}

\textcolor{blue}{In this paper, we will begin with a summary of Cantor's life and highlights of his most significant theorems. Then we will introduce some definitions about sets and the countability of sets. After that, we will prove one of Cantor's theorems and two corollaries using the previous theorem.}

\section{Cantor's Life}

\textcolor{blue}{Georg Cantor was born in Saint Petersburg on the 3rd of March 1845 to Danish parents. His mother Marie came from a family famous for music. His father Georg Woldemar was a very successful businessman who helped his son in many situations with his great advice; however, he also tried to convince him to study engineering rather than mathematics, as he thought it was more promising. His education was not much different from the education of all the great minds: he showed early signs of excellence, which resulted in an early recognition of his talent. At first, he received private tutoring lessons in Saint Petersburg; then he attended private school in Frankfurt at the Darmstadt nonclassical school, before he got into the Wiesbaden Gymnasium in 1860. He graduated from the Realschule in Darmstadt in 1862 to begin his university studies. He actually started his university education as an engineering student: he studied engineering for two years at the Höheren Gewerbschule before switching to the Swiss Federal Polytechnic to study mathematics; then, after his father died, he transferred to the University of Berlin. In Berlin, he attended lectures by some great mathematicians. His huge interest in math affected his earliest work hugely. After receiving his Ph.D. he left Berlin to work as a Privatdozent at Halle University, under Eduard Heine, the professor of mathematics there. Some people have claimed that the roots of Cantor's groundbreaking work can be traced back to his very early post-graduation publications. As a matter of fact, one can find traces of his early work in his later work. His first paper, titled "On a theorem concerning the trigonometric series", was completed and published in March of 1870. In this paper he presented Cantor's Uniqueness Theorem, which states that every function can have at most one representation by a trigonometric series. In 1871, he strengthened his theorem by proving that it holds even if the series diverges at a finite number of points in any given interval; this result had been attempted by a handful of known mathematicians before him, but none of them could prove it to be true.
His second paper, which was published in 1872, extended the results of his first paper and strengthened them further. He proved that his theorem holds even if the trigonometric series diverges at an infinite number of points. In the same year, he provided a definition of the real numbers, which earned him a promotion to associate professor at Halle University that year. Later in the same year, Cantor met Richard Dedekind for the first time. Richard was a mathematics professor at the Technische Hochschule at Brunswick, and had previously published a paper related to the topic of real numbers. Cantor and he exchanged letters over the years. In one of these letters, in 1873, Cantor sent Dedekind a proof that the real numbers cannot be put in one-to-one correspondence with the natural numbers. Two days later, Cantor sent him a simpler proof, and Dedekind approved of it. Cantor's work in the period between 1873 and 1884 led to the creation of set theory. The origins of set theory are traced back to some of his earlier work, mainly a single paper he published in 1874 titled "On a Property of the Collection of All Real Algebraic Numbers." The paper was published just before Cantor turned 30 years old, which shows how successful he was at such a young age. His paper introduced three main points: the set of algebraic numbers is countable; in every interval [a,b] there are infinitely many numbers not included in the sequence; and therefore the set of real numbers is uncountably infinite. Cantor provided a definition of the concept of a set; his definition basically stated that a set is a collection of elements. He also provided the concept of countability, stating that a countable set has the same cardinality as some subset of the set of natural numbers, which led to the definition of a countable set: he defines a set S to be countable if there exists an injective function from S to the set of natural numbers. In 1873, Cantor proved that the set of rational numbers is countable. A year later, in 1874, Cantor proved that the set of real algebraic numbers is countable. In 1874, Cantor published his most fruitful work regarding the topic of countability: the paper proving the uncountability of the real numbers, about which he had been exchanging letters with Dedekind, as mentioned earlier. In 1891, Cantor presented his diagonal argument, which is a simpler proof of the uncountability of the set of real numbers. Cantor gave a definition of infinite sets, stating that a set A is infinite if and only if there is a one-to-one correspondence between A and a set X which is a proper subset of A. In 1878, Cantor provided a definition of cardinal numbers. Cantor also introduced the continuum hypothesis. He spent many years of his life attempting to prove the truth of his continuum hypothesis; he tried many different strategies and provided many proofs, but all of them were false. Cantor suffered his first serious mental breakdown in 1884, just after returning from a trip to Paris; most historians believe the breakdown occurred as a result of his dispute with Leopold Kronecker, who had been rejecting Cantor's work and trying to hold him back. After his 1884 hospitalization, there is no record to show that he was hospitalized again until 1899. That was the year his youngest son died, and he reportedly lost his passion for mathematics then.
In 1903, Julius König presented a paper which attempted to disprove the basic tenets of transfinite set theory. Cantor described it as a public humiliation, and it shook his belief in God, after he had been a devout Christian his whole life. He was hospitalized many times in the following two to three years. He spent the last 20 years of his life in depression and bad mental health, until he died in 1918 of a heart attack. He never left Halle University.}


\section{Cantor's Theorem}

\textcolor{red}{}

We start by defining a set.

\begin{definition}\rm
A \textit{set} is a collection of objects.
\end{definition}

\begin{definition}\rm
Let $A$ and $B$ be sets. We say that $A$ is a \textit{subset} of $B$, and write $A \subseteq B$, if \textcolor{blue}{$\forall x (x\in A \implies x\in B)$.}
\end{definition}

\begin{example}\rm The following are examples of subsets.
\textcolor{red}{}
\begin{enumerate}
  \item The set of natural numbers is a subset of the set of integers.
  \item The set of integers is a subset of the set of real numbers.
  \item The empty set is a subset of any other set.
\end{enumerate}
\end{example}

We are now ready to introduce the notion of a power set.
\begin{definition}\rm
The \textit{power set} $\mathcal{P}(S)$ of a set $S$ is \textcolor{blue}{the set of all subsets of $S$}.
\end{definition}

\begin{example}
\textcolor{blue}{Let $A=\{5,10\}$; then $\mathcal{P}(A)=\{\emptyset,\{5\},\{10\},\{5,10\}\}$. \\
Let $B=\{b\}$; then $\mathcal{P}(B)=\{\emptyset,\{b\}\}$.}
\end{example}

\begin{definition}\rm Let $A$ and $B$ be sets.
\begin{itemize}
  \item We say that the cardinality of $A$ is equal to the cardinality of $B$, and write $|A|=|B|$, if there is a \textcolor{blue}{\textit{bijection} $f \colon A \to B$}.

  \item We say that the cardinality of $A$ is less than or equal to the cardinality of $B$, and write $|A|\leq |B|$, if there is an \textcolor{blue}{\textit{injection} $f \colon A \to B$}.

  \item We say that the cardinality of $A$ is strictly less than the cardinality of $B$, and write $|A|<|B|$, if \textcolor{blue}{$|A|\leq|B|$ and $|A|\neq|B|$}.

  \item We say a set is \textit{countably infinite} if \textcolor{blue}{there is a bijection from the set of natural numbers $\mathbb{N}$ to the set}.
\end{itemize}
\end{definition}

We will now show that the set of integers is a countably infinite set.

\begin{theorem}
The set of integers $\mathbb{Z}$ is countably infinite.
\end{theorem}

\begin{proof}
\textcolor{blue}{Let the domain of the function be $\mathbb{N}$ and the codomain $\mathbb{Z}$, where $f \colon \mathbb{N} \to \mathbb{Z}$ is defined by:
\[ f(n) =
\begin{cases}
-\frac{n}{2} & \text{when } n \text{ is even}\\[2pt]
\frac{n+1}{2} & \text{when } n \text{ is odd}
\end{cases}
\]
The even branch maps injectively onto the non-positive integers and the odd branch injectively onto the positive integers, so together they form a bijection from $\mathbb{N}$ to $\mathbb{Z}$. Hence the set of integers is countably infinite.}
\end{proof}

\begin{theorem}[Cantor's Theorem]
Let $S$ be any (finite or infinite) set. Then $|S|<|\mathcal{P}(S)|$.
\end{theorem}
\begin{proof}
Let $S$ be any set.
First, we will show that $|S|\leq |\mathcal{P}(S)|$ by exhibiting an injective function $g:S\to \mathcal{P}(S)$, namely $g(x)=\{x\}$; distinct elements have distinct singletons, so every element of the domain has exactly one image and $g$ is injective.

\textcolor{red}{}

Second, we will prove that $|S|\neq |\mathcal{P}(S)|$. For the sake of contradiction, assume that there is a bijection $h:S\to \mathcal{P}(S)$.

\textcolor{blue}{Since $h$ is assumed to be bijective, it is by definition surjective. Let us define a new set
$B=\{s\in S : s\notin h(s)\}$, so $B\in\mathcal{P}(S)$, and by surjectivity $B=h(x)$ for some $x$ in the domain. But then $x\in B \iff x\notin h(x)=B$, a contradiction; so $h$ cannot be surjective.}

Therefore, we have shown that $|S| < |\mathcal{P}(S)|$.
\end{proof}


From Cantor's theorem we can deduce the following consequences.


\begin{corollary}
The power set of the natural numbers is uncountable.
\end{corollary}

\begin{proof}
\textcolor{blue}{Applying Cantor's theorem, $|\mathbb{N}|<|\mathcal{P}(\mathbb{N})|$, so there is an injection but no bijection $f\colon \mathbb{N} \to \mathcal{P}(\mathbb{N})$. For a set to be countable there must be such a bijection; therefore the power set of the natural numbers is uncountable.}
\end{proof}


\begin{corollary}
There is an infinite sequence $A_0, A_1, A_2, \ldots$ of infinite sets such that $|A_i|<|A_{i+1}|$ for all $i\in \mathbb{N}$. In other words, there is an infinite hierarchy of infinities.
\end{corollary}
\begin{proof}
\textcolor{blue}{Applying Cantor's theorem, $|S| < |\mathcal{P}(S)|$, so whenever we take the power set of a set we obtain a strictly greater cardinality. Starting from the infinite set $A_0=\mathbb{N}$ and repeatedly taking power sets, we obtain an infinite sequence with $A_n = \mathcal{P}(A_{n-1})$.\\
\\
Therefore, we have proved that there is an infinite sequence $A_0, A_1, A_2, \ldots$ of infinite sets such that $|A_i|<|A_{i+1}|$ for all $i\in \mathbb{N}$.}
\end{proof}


\begin{thebibliography}{9}

\bibitem{Veisdal}
Jorgen Veisdal. \textit{The Nature of Infinity - and Beyond}. Medium, 2018.

\end{thebibliography}

\end{document}
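As an aside outside the paste itself: a finite Python illustration of the diagonal construction used in the proof of Cantor's theorem above (the particular sets chosen for the attempted map h are arbitrary).

```python
from itertools import chain, combinations

def power_set(S):
    S = list(S)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1))]

S = {0, 1, 2}
P = power_set(S)
print(len(S), len(P))          # 3 < 8 = 2^3, as Cantor's theorem predicts

# Any attempted map h: S -> P(S) misses the "diagonal" set
# B = {s in S : s not in h(s)}, exactly as in the proof above.
h = {0: frozenset({0, 1}), 1: frozenset(), 2: frozenset({2})}
B = frozenset(s for s in S if s not in h[s])
print(B, B in h.values())      # frozenset({1}) False -> h is not surjective
```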
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9260463118553162, "perplexity": 659.8708626273354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251778168.77/warc/CC-MAIN-20200128091916-20200128121916-00495.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3756370
## Rotational kinetic energy

Okay thanks! Very helpful! Though one question: why is it that the rotation MUST be around the centre of the pole, if the ball's angular momentum is to be conserved?

There is rotation about the centre in the case we're considering, isn't there? Angular momentum is not conserved, though. Admittedly it's not a pure rotation, because the ball is also moving radially inwards. But this isn't relevant, either! When you talk about the ball's angular momentum (that is, the angular momentum of a body acted upon by an external force) you have to specify about what point you're calculating that angular momentum. You need a fixed point (certainly not, for example, the point of run-off of the string from the circumference of the pole, because that point keeps moving round the pole, and is therefore accelerating towards the centre). Exactly the same goes for torque. If you choose the same fixed point about which to calculate torque, G, and angular momentum, L, a very simple law applies: G = dL/dt.

Quote by aaaa202: "Okay thanks! Very helpful! Though one question: Why is it that the rotation MUST be around the center of pole, if the balls angular momentum is to be conserved?"

Angular momentum is always conserved for the whole system (ball + pole + Earth), no matter which axis is chosen to measure it. In this case angular momentum is transferred from the ball to the Earth via the pole: there is a torque on the pole, and therefore on the Earth. This means that if we consider only the pole and the ball, we come to the conclusion that angular momentum is not conserved: in fact the momentum has been transferred elsewhere. The ball would have a constant angular momentum if it did not exert a torque on something else. This would be the case if the centre of rotation of the string always stayed at the same point, for instance in the examples where the string is being pulled through a hole.

hmm, it's just that when you see the ball from the point of contact between string and pole, it makes a uniform circular motion. So can't you say that the angular momentum is conserved in this frame for the ball? And why does that not qualify as the ball's angular momentum being conserved, as if the rotation were around the centre of mass? :)

Michael C: Agreed. Though, of course, there are interesting cases, such as a system of two charges moving at an angle to each other. [That's not the complete system, I hear someone say.]

aaaa202: As I said, the point of contact of string and pole is accelerating. The laws of Physics need modifying somewhat for use in an accelerating (non-inertial) reference frame. That's why I'm choosing to take our torque and angular momentum about the still centre of the pole.

Quote by aaaa202: "hmm it's just that when you see the ball for the point of contact between string and pole it makes a uniform circular motion. So can't you say that the angular momentum is conserved in this frame for the ball? And why does that not qualify to the ball's angular momentum being conserved like if the rotation was around the center of mass? :)"

The point of contact is changing all the time: it's turning in a circle around the pole, so (as Philip pointed out) it's constantly accelerating. If we fix one point on the surface of the pole and measure the angular momentum around this point, we'll see that the momentum of the ball must be changing, since there is only one instant when the ball exerts no torque in this frame: the instant when the centre of rotation is at the point we have fixed.
For the rest of the time, the centre of rotation is not at the point we have fixed, so there is torque around this point.

Yes, okay, I should have realized that. But don't there exist conservation laws in non-inertial reference frames?

I expect so. Indeed, I expect we could easily find such a frame in which angular momentum is conserved for the ball. But that would not be anything to be especially pleased about. The ball's angular momentum would be conserved simply because we've chosen a special frame in which it is conserved. In this frame, things whose angular momentum we'd normally expect to be conserved won't have it conserved... The laws of Physics are usually easier in inertial frames.
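A numerical illustration of the thread's point (my own toy model, not from the thread): for a string winding onto a pole of radius R, the ball follows the involute of a circle, along which the taut string keeps the speed constant (tension is perpendicular to velocity). About the fixed pole centre, G = dL/dt holds at every instant, while L itself shrinks and is not conserved.

```python
import numpy as np

R, m, v = 0.05, 1.0, 1.0   # illustrative pole radius, mass, constant speed

def position(theta):
    # Involute of a circle; free string length is l = R*theta
    return R * np.array([np.cos(theta) + theta * np.sin(theta),
                         np.sin(theta) - theta * np.cos(theta)])

def velocity(theta):
    # Exact derivative along the path at constant speed v (theta decreasing)
    return -v * np.array([np.cos(theta), np.sin(theta)])

theta, dt = 20.0, 1e-4
Ls, taus = [], []
for _ in range(1000):
    x, w = position(theta), velocity(theta)
    Ls.append(m * (x[0] * w[1] - x[1] * w[0]))      # L about the pole centre
    u = np.array([-np.sin(theta), np.cos(theta)])   # unit vector ball -> tangent point
    F = (m * v**2 / (R * theta)) * u                # centripetal string tension
    taus.append(x[0] * F[1] - x[1] * F[0])          # torque about the pole centre
    theta += dt * (-v / (R * theta))                # winding: theta decreases

dLdt = np.diff(Ls) / dt
print("max |dL/dt - torque|:", np.max(np.abs(dLdt - np.array(taus)[:-1])))
print("L start, L end:", Ls[0], Ls[-1])   # |L| shrinks: L is not conserved
print("speed stays:", v)                  # kinetic energy is conserved
```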
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.919012725353241, "perplexity": 257.06937440454664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710313659/warc/CC-MAIN-20130516131833-00019-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/puzzling-numbers
# Puzzling Numbers

In this math worksheet, students write large numbers into a cross-number puzzle. Numbers are written in word and expanded notation formats. Students translate these into standard form for the puzzle.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956232309341431, "perplexity": 4270.119182626807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689572.75/warc/CC-MAIN-20170923070853-20170923090853-00385.warc.gz"}
https://www.physicsforums.com/threads/gamma-radiation-photon-energies-and-wavelength-question.920061/
# I Gamma radiation, photon energies and wavelength question

1. Jul 13, 2017

### girts

I hadn't thought about this from such a perspective, but today while reading Wikipedia (yes, yes, not the best source) I got confused. Now, the "eV" is said to measure the energy gained by an electron crossing a potential difference of 1 V. I assume particle physicists use this unit because it's quite handy for particle accelerators: if you have, say, 250 kV between cathode and anode and you use an electron beam, you can then say the electrons got a final energy of 250 keV, correct?

But here's the part that confuses me. Gamma radiation is among the highest-frequency EM radiation and so is ionizing. I see a chart where frequency, wavelength and the corresponding energy are given, with the energy written in eV. It shows that frequencies from the GHz region up towards the PHz and even EHz have corresponding energies of only μeV up to keV, and only from the gamma region does the energy start reaching MeV. Does that mean that photon EM radiation is not very powerful except from the X-ray to gamma region? And even then, I see X-rays have on average only about 100 keV, although I assume that's enough to damage a cell because the binding energy between an electron and the nucleus is far, far less than that between protons and neutrons inside the nucleus, correct? What is the typical binding energy between electrons and the nucleus, in eV, for organic materials like flesh?

If I, for example, accelerate a beam of electrons between a cathode and anode through a potential difference of 100 kV, do they then reach an energy equivalent to that of 100 keV X-ray photons? I have a feeling this might be the way an X-ray tube works: by accelerating electrons which are then directed into the anode, which also serves as the target, creating photon emission with an energy proportional to the applied potential difference. I read also that most of the energy of the electron beam gets absorbed in the anode and simply manifests itself as heat, yet the photons that are emitted by bremsstrahlung-type radiation from the target have energies corresponding to the PD between cathode and anode. Does this simply mean that of all the electrons in the beam (intensity), most don't produce a photon emission event and get trapped or otherwise end up as heat, but those few that get their job done then emit a photon of the corresponding energy? If my assumptions are correct, it seems like the opposite of the photoelectric effect, only with bigger losses in the conversion?

Oh, and one final question if I may: what determines the wavelength of an EM wave? I have always wondered why even such high frequencies as MHz and up to GHz have such large wavelengths. Is the visual picture of a photon flying through space and time making up-and-down motions, much like a dot riding a sine wave, a correct way of thinking about this? And then between each full up-down-up cycle there is some distance that the photon has traveled in the horizontal direction, which then becomes the wavelength?

Thank you all for taking the time to answer.

2. Jul 13, 2017

### Staff: Mentor

Right. It is also a more convenient unit in terms of its size - you don't need to talk about zeptojoules or other prefixes no one ever heard about.

Define "very powerful". In addition to the photon energy, the number of photons can matter as well.

Typically a few eV for the outermost electrons (= the electrons that matter for chemical bonds); this applies to all chemical bonds.

Sure.
It also means the maximal photon energy you can get out of this setup is about 100 keV.

Right.

I'm not sure how useful that comparison is.

wavelength * frequency = propagation speed (phase velocity, but let's skip technical details). This is true for waves in general. For light, the propagation speed is the speed of light, which is typically very fast.

That is completely wrong. A photon doesn't even have a position, and there is no up-and-down motion of anything. The wavelength is the distance between two subsequent maxima of the electric field strength, for example.

3. Jul 13, 2017

### Staff: Mentor

That's right. The amount of energy per photon is not very large until you get to very high frequencies. Typically a few eV at most.

The electric and magnetic fields at any point in space can be represented by vectors (arrows) whose magnitude (length) represents the strength of the electric or magnetic force a charged particle would feel. The direction the arrows point represents the direction of the field lines, which can be thought of as the direction of the force on a charged particle - in the case of the electric field at least. The magnetic field is a bit different, since the force acts perpendicular to the field line, but the idea is the same.

An EM wave is a fluctuation of the EM field that propagates outwards from a source. These fluctuations can be thought of as all of these vectors rapidly oscillating in direction and magnitude as the wave passes. This rapid oscillation creates an alternating force on charged particles as it passes and, for example, causes electrons in an antenna to oscillate back and forth, which can be detected by a radio receiver or other equipment attached to the antenna.

If you draw a line running perpendicular to the wavefront (so in a spherical wave the line would run straight out from the source), the tips of the field vectors will form the squiggly line you always see representing light. The wavelength of the EM wave is the distance between the nearest vectors pointing in the same direction. The frequency is how quickly these oscillations occur.

Note that a photon is not a classical particle and does not travel up and down as it moves through space. An EM wave interacts with matter in such a way as to transfer discrete "chunks" of energy to the matter. This interaction, this quantum of energy, is a photon.

4. Jul 15, 2017

### girts

Right, my fault - I had read about the photon earlier but somehow still thought about it like a classical particle. Well, to rephrase: would it then be fair to say that the photon is simply a phenomenon which we have observed and so have given a name to? The phenomenon in question being that light of a certain spectrum (EM radiation) gives off electrons of certain energies - the now-famous photoelectric experiment - and we then concluded that if there are electrons emitted with a specific energy and intensity which manages to be proportional to the incoming light, then light must also consist of discrete quanta. Only, unlike electrons, we cannot directly measure or see them, so we observe them indirectly through their interaction with a metal target, for example. Is this a correct way of approaching this question?
Now, I personally can understand the wavelength in terms of a generator rotor magnet passing by a coil: as it approaches, the induced field goes up in strength, then reaches its maximum, and then decreases as it goes away, etc. I can also understand it when a semiconductor or a spark gap switches current at a fast pace, creating a high-frequency EM wave; this can be seen, for example, in a microwave oven if something other than food is put in there and some parts of it burn fast while others stay relatively cool, because they were in between the maximum points of the EM sine wave. But it becomes harder for me to understand why, for example, a decaying nucleus emits very high-frequency photons that we call gamma rays. Since all EM waves travel at c in vacuum, is the frequency sort of an indicator of the energy of the event that created the "light" photons in the first place - so, for example, nuclear fission releases very high-frequency photons simply because the event itself releases a lot of energy?

Sorry if this seems obvious, but one more question. I can understand how a nucleus breaking apart or fusing creates particles that are ejected, like alpha particles from fusion or beta electrons from fission, or neutrons for that matter, but it is a bit harder to understand why it emits photons, since they are not originally part of the atomic structure. Is it simply a mechanism for which we have no deeper explanation than the one which states that during certain transitions in nuclear reactions a photon is created, or is there more detail to it?

5. Jul 15, 2017

### Staff: Mentor

It's a good start. We have many more observations than just the photoelectric effect that support the idea that EM radiation is quantized. See the bottom part of the first reply to this reddit question for an example.

Pretty much. Decaying nuclei release a lot of energy, which goes either into the kinetic energy of the decay products, the creation of new particles (such as neutrinos), or EM radiation. Since the energy released is much larger than in chemical reactions, nuclear reactions release radiation with much more energy.

The full picture would require an understanding of quantum electrodynamics, but we can say that we have a bunch of interacting, electrically charged particles, and when electrically charged particles interact they tend to release EM radiation. Classically this happens when a charged particle is accelerated, but the nucleus can release radiation even when no fusion or fission events occur and no particles are accelerated. Like I said, we'd need to get into quantum theory to fully explain it.
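A small sketch (an aside, not from the thread) tying together the quantities discussed here via E = hf and c = λf; it shows, for example, that a ~2 eV visible photon has a wavelength of about 620 nm while a 100 keV X-ray photon is down near 12 pm:

```python
# Convert photon energy (in eV) to frequency and wavelength.
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # joules per eV

def photon(energy_eV):
    E = energy_eV * e
    f = E / h           # E = h*f  ->  frequency in Hz
    lam = c / f         # c = lambda*f  ->  wavelength in m
    return f, lam

for E in [2.0, 100e3, 1e6]:   # visible ~2 eV, X-ray 100 keV, gamma 1 MeV
    f, lam = photon(E)
    print(f"{E:>9.0f} eV -> f = {f:.3e} Hz, wavelength = {lam:.3e} m")
```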
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8903822302818298, "perplexity": 357.39705841026256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886123312.44/warc/CC-MAIN-20170823171414-20170823191414-00371.warc.gz"}
https://saravananthirumuruganathan.wordpress.com/category/math/
## Detailed Tutorial on Markov and Chebyshev Inequalities

In this post, I plan to discuss two very simple inequalities – Markov and Chebyshev. These are topics that are covered in any elementary probability course. I plan to give some intuitive explanation of them and also try to show them from different perspectives. Also, the following discussion is closer to discrete random variables, even though most of it can be extended to continuous ones.

### Inequalities from an Adversarial Perspective

One interesting way of looking at the inequalities is from an adversarial perspective. The adversary has given you some limited information and you are expected to come up with some bound on the probability of an event. For eg, in the case of Markov inequality, all you know is that the random variable is non negative and its (finite) expected value. Based on this information, Markov inequality allows you to provide some bound on the tail probabilities. Similarly, in the case of Chebyshev inequality, you know that the random variable has a finite expected value and variance. Armed with this information, Chebyshev inequality allows you to provide some bound on the tail probabilities.

The most fascinating thing about these inequalities is that you do not have to know the probability mass function (pmf). For any arbitrary pmf satisfying some mild conditions, Markov and Chebyshev inequalities allow you to make intelligent guesses about the tail probability.

### A Tail Inequality Perspective

Another way of looking at these inequalities is this. Suppose we do not know anything about the pmf of a random variable and we are forced to make some prediction about the value it takes. If the expected value is known, a reasonable strategy is to use it. But then the actual value might deviate from our prediction. Markov and Chebyshev inequalities are very useful tools that allow us to estimate how likely or unlikely it is that the actual value deviates from our prediction. For eg, we can use Markov inequality to bound the probability that the actual value exceeds some multiple of the expected value. Similarly, using Chebyshev we can bound the probability that the difference from the mean exceeds some multiple of the standard deviation.

One thing to notice is that you really do not need the pmf of the random variable to bound the probability of the deviations. Both these inequalities allow you to make deterministic statements of probabilistic bounds without knowing much about the pmf.

### Markov Inequality

Let us first take a look at the Markov inequality. Even though the statement looks very simple, clever application of the inequality is at the heart of more powerful inequalities like Chebyshev or Chernoff. Initially, we will see the simplest version of the inequality and then we will discuss the more general version.

The basic Markov inequality states that given a random variable X that can only take non negative values,

$Pr(X \geq k E[X]) \leq \frac{1}{k}$

There are some basic things to note here. First, the term P(X >= k E(X)) is the probability that the random variable takes a value that is at least k times the expected value. This term is related to the cumulative distribution function, since it equals 1 – P(X < k E(X)). Since the variable is non negative, this bounds the deviation on one side only.
### Intuitive Explanation of Markov Inequality

Intuitively, what this means is that, given a non negative random variable X and its expected value E(X):

(1) The probability that X takes a value that is greater than twice the expected value is at most half. In other words, if you consider the pmf curve, the area under the curve for values that are beyond 2*E(X) is at most half.
(2) The probability that X takes a value that is greater than thrice the expected value is at most one third.

and so on. Let us see why that makes sense. Let X be a random variable corresponding to the scores of 100 students in an exam. The variable is clearly non negative as the lowest score is 0. Tentatively let's assume the highest value is 100 (even though we will not need it). Let us see how we can derive the bounds given by Markov inequality in this scenario. Let us also assume that the average score is 20 (must be a lousy class!). By definition, we know that the combined score of all students is 2000 (20*100).

Let us take the first claim – the probability that X takes a value greater than twice the expected value is at most half. In this example, it means the fraction of students who have a score greater than 40 (2*20) is at most 0.5. In other words, at most 50 students could have scored 40 or more. It is very clear that this must be the case. If 50 students got exactly 40 and the remaining students all got 0, then the average of the whole class is 20. Now, if even one additional student got a score greater than 40, then the total score of the 100 students becomes 2040 and the average becomes 20.4, which contradicts our original information. Note that the scores of the other students that we assumed to be 0 is an oversimplification and we can do without it. For eg, we can argue that if 50 students got 40, then the total score is at least 2000 and hence the mean is at least 20.

We can also see how the second claim is true – the probability that X takes a value greater than thrice the expected value is at most one third. If a third of the students (roughly 33.3) got 60 and the others got 0, then we get a total score of around 2000 and the average remains the same. Similarly, regardless of the scores of the other 66.6 students, we know that the mean is now at least 20.

This also makes clear why the variable must be non negative. If some of the values are negative, then we cannot claim that the mean is at least some constant C. The values that do not exceed the threshold may well be negative and hence can pull the mean below the estimated value.

Let us look at it from the other perspective: let p be the fraction of students who have a score of at least a. Then it is very clear to us that the mean is at least a*p. What Markov inequality does is turn this around. It says: if the mean is a*p, then the fraction of students with a score of at least a is at most p. That is, we know the mean here and hence use the threshold to estimate the fraction.

### Generalized Markov Inequality

The probability that the random variable takes a value greater than k*E(X) is at most 1/k. The fraction 1/k acts as some kind of a limit. Taking this further, you can observe that given an arbitrary constant a, the probability that the random variable X takes a value >= a, i.e. P(X >= a), is at most 1/a times the expected value. This gives the general version of Markov inequality.

$Pr(X \geq a) \leq \frac{1}{a} E[X]$

In the equation above, I separated the fraction 1/a because that is the only varying part.
We will later see that for Chebyshev we get a similar fraction. The proof of this inequality is straightforward. There are multiple proofs, but we will use the following one as it allows us to show Markov inequality graphically. This proof is partly taken from Mitzenmacher and Upfal's exceptional book on Randomized Algorithms.

Consider a constant a >= 0. Then define an indicator random variable I which takes the value 1 if X >= a, i.e.

$\displaystyle I = \begin{cases} 1, & \mbox{if } \mbox{ X} \geq \mbox{a} \\ 0, & \mbox{otherwise } \end{cases}$

Now we make a clever observation. We know that X is non negative, i.e. X >= 0. This means that the fraction X/a is at least 0 (and can be arbitrarily large). Also, if X < a, then X/a < 1; when X >= a, X/a >= 1. Using these facts,

$I \leq \frac{X}{a}$

If we take expectations on both sides, we get

$E[I] \leq \frac{1}{a} E[X]$

But we also know that the expectation of an indicator random variable is the probability that it takes the value 1. This means E[I] = Pr(X>=a). Putting it all together, we get the Markov inequality.

$Pr(X \geq a) \leq \frac{1}{a} E[X]$

### Even More Generalized Markov Inequality

Sometimes, it might happen that the random variable is not non-negative. In cases like this, a clever hack helps. Design a function f(x) such that f(x) is non negative. Then we can apply Markov inequality on the modified random variable f(X). The Markov inequality for this special case is:

$Pr(f(X) \geq a) \leq \frac{1}{a} E[f(X)]$

This is a very powerful technique. Careful selection of f(X) allows you to derive more powerful bounds.
(1) One of the simplest examples is f(X) = |X|, which guarantees f(X) to be non negative.
(2) Later we will show that Chebyshev inequality is nothing but Markov inequality that uses $f(X) = |X-E(X)|^2$
(3) Under some additional constraints, Chernoff inequality uses $f(X) = e^{tX}$ .

### Simple Examples

Let us consider a simple example where it provides a decent bound and one where it does not. A typical example where Markov inequality works well is when the expected value is small but the threshold to test is very large.

Example 1: Consider a coin that comes up heads with probability 0.2. Let us toss it n times. Now we can use Markov inequality to bound the probability that we got at least 80% heads. Let X be the random variable indicating the number of heads we got in n tosses. Clearly, X is non negative. Using linearity of expectation, we know that E[X] is 0.2n. We want to bound the probability P(X >= 0.8n). Using Markov inequality, we get

$P(X \geq 0.8n) \leq \frac{0.2n}{0.8n} = 0.25$

Of course we can estimate a finer value using the Binomial distribution, but the core idea here is that we do not need to know it!

Example 2: For an example where Markov inequality gives a bad result, let us take the example of a die. Let X be the face that shows up when we toss it. We know that E[X] is 7/2 = 3.5. Now let's say we want to find the probability that X >= 5. By Markov inequality,

$P(X \geq 5) \leq \frac{3.5}{5} = 0.7$

The actual answer of course is 2/6, and the bound is quite off. This becomes even more bizarre if, for example, we find P(X >= 3). By Markov inequality,

$P(X \geq 3) \leq \frac{3.5}{3} = \frac{7}{6}$

The upper bound is greater than 1! Of course, using the axioms of probability, we can cap it at 1, while the actual probability is closer to 0.66. You can play around with the coin example or the score example to find cases where Markov inequality provides really weak results.
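To see how loose or tight these bounds are in practice, here is a small Python sketch (my own, not from the original post) comparing the Markov bound with the exact probability for the die and with a simulation for the coin:

```python
import random

# Die example: X is uniform on {1,...,6}, so E[X] = 3.5.
# Markov: P(X >= a) <= E[X] / a (capped at 1, as noted above).
for a in (3, 5):
    exact = sum(1 for face in range(1, 7) if face >= a) / 6
    bound = min(1.0, 3.5 / a)
    print(f"P(X >= {a}): exact = {exact:.3f}, Markov bound = {bound:.3f}")

# Coin example: n tosses with P(heads) = 0.2; Markov bounds P(X >= 0.8n)
# by 0.25, while the true probability is astronomically small.
n, trials = 100, 20_000
hits = sum(
    sum(random.random() < 0.2 for _ in range(n)) >= 0.8 * n
    for _ in range(trials)
)
print(f"empirical P(X >= 0.8n) ~ {hits / trials:.5f} (Markov bound: 0.25)")
```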
### Tightness of Markov

The last example might have made you think that the Markov inequality is useless. On the contrary, it provided a weak bound because the amount of information we provided to it is limited. All we provided were that the variable is non negative and that the expected value is known and finite. In this section, we will show that it is indeed tight – that is, Markov inequality is already doing as much as it can.

From the previous example, we can see a case where Markov inequality is tight. If the mean of 100 students is 20 and if 50 students got a score of exactly 0, then Markov implies that at most 50 students can get a score of at least 40.

Note: I am not 100% sure if the following argument is fully valid – but at least it seems so to me 🙂

Consider a random variable X such that

$X = \displaystyle \begin{cases} k & \mbox{with probability } \frac{1}{k} \\ 0 & \mbox{else} \end{cases}$

We can estimate its expected value as

$E[X] = \frac{1}{k} \times k + \frac{k-1}{k} \times 0 = 1$

We can see that

$Pr(X \geq k E[X]) = Pr(X \geq k) = \frac{1}{k}$

This implies that the bound is actually tight! Of course, one of the reasons why it is tight is that the other value is 0 and the value the random variable takes is exactly k. This is consistent with the score example we saw above.

### Chebyshev Inequality

Chebyshev inequality is another powerful tool that we can use. In this inequality, we remove the restriction that the random variable has to be non negative. As the price, we now need to know additional information about the variable – its (finite) expected value and (finite) variance. In contrast to Markov, Chebyshev allows you to estimate the deviation of the random variable from its mean. A common use of it estimates the probability of the deviation from the mean in terms of the standard deviation.

Similar to Markov inequality, we can state two variants of Chebyshev. Let us first take a look at the simplest version. Given a random variable X with finite mean and variance, we can bound the deviation as

$P(|X-E[X]| \geq k \sigma ) \leq \frac{1}{k^2}$

There are a few interesting things to observe here:
(1) In contrast to Markov inequality, Chebyshev inequality allows you to bound the deviation on both sides of the mean.
(2) The length of the deviation is $k \sigma$ on both sides, which is usually (but not always) tighter than the bound k E[X]. Similarly, the fraction 1/k^2 is much tighter than the 1/k that we got from Markov inequality.
(3) Intuitively, if the variance of X is small, then Chebyshev inequality tells us that X is close to its expected value with high probability.
(4) Using Chebyshev inequality, we can claim that at most one fourth of the probability mass of X lies beyond 2 standard deviations of the mean.

### Generalized Chebyshev Inequality

A more general Chebyshev inequality bounds the deviation from the mean by any positive constant a. Given a positive constant a,

$Pr(|X-E[X]| \geq a) \leq \frac{1}{a^2}\;Var[X]$

### Proof of Chebyshev Inequality

The proof of this inequality is straightforward and comes from a clever application of Markov inequality. As discussed above, we select $f(x) = |X-E[X]|^2$. Using it we get

$Pr(|X-E[X]| \geq a) = Pr( (X-E[X])^2 \geq a^2)$

$Pr( (X-E[X])^2 \geq a^2) \leq \frac{1}{a^2} E[(X-E[X])^2]$

We used the Markov inequality in the second line and the fact that $Var[X] = E[(X-E[X])^2]$.

### Common Pitfalls

It is important to notice that Chebyshev provides a bound on both sides of the error.
One common mistake when applying Chebyshev is to divide the resulting probabilistic bound by 2 to get a one sided error. This is valid only if the distribution is symmetric; otherwise it will give incorrect results. You can refer to Wikipedia for one sided Chebyshev inequalities.

### Chebyshev Inequality for Higher Moments

One of the neat applications of Chebyshev inequality is to use it for higher moments. As you would have observed, in Markov inequality we used only the first moment. In the Chebyshev inequality, we use the second moment (and the first). We can adapt the proof above to get a Chebyshev inequality for higher moments. In this post, I will give a simple argument for even moments only. For the general argument (odd and even), look at this Math Overflow post.

The proof of Chebyshev for higher moments is almost exactly the same as the one above. The only observation we make is that $(X-E[X])^{2k}$ is always non negative for any k. The next observation is that $E[(X-E[X])^{2k}]$ gives the 2k-th central moment. Using the statement from Mitzenmacher and Upfal's book we get

$Pr(|X-E[X]| > t \sqrt[2k] {E[(X-E[X])^{2k}]}) \leq \frac{1}{t^{2k}}$

It should be intuitive that the more information we have, the tighter the bound is. For Markov we got 1/t as the fraction; it was 1/t^2 for the second moment and it is 1/t^{2k} for the 2k-th moment.

### Chebyshev Inequality and Confidence Intervals

Using Chebyshev inequality, we previously claimed that at most one fourth of the probability mass of X lies beyond 2 standard deviations of the mean. It is possible to turn this statement around to get a confidence interval. If at most 25% of the population is beyond 2 standard deviations away from the mean, then we can be confident that at least 75% of the population lies in the interval $(E[X]-2 \sigma, E[X]+2 \sigma)$.

More generally, we can claim that $100 (1-\frac{1}{k^2})$ percent of the population lies in the interval $(E[X]-k \sigma, E[X]+k \sigma)$. Setting k = 4, we can derive that about 94% (exactly 93.75%) of the population lies within 4 standard deviations of the mean.

### Applications of Chebyshev Inequality

We previously saw two applications of Chebyshev inequality – one to get tighter bounds using higher moments without using complex inequalities, and the other to estimate confidence intervals. There are some other cool applications that we will state without providing the proof. For proofs, refer to the Wikipedia entry on Chebyshev inequality.

(1) Using Chebyshev inequality, we can prove that the median is at most one standard deviation away from the mean.
(2) Chebyshev inequality also provides the simplest proof of the weak law of large numbers.

### Tightness of Chebyshev Inequality

Similar to Markov inequality, we can prove the tightness of Chebyshev inequality. I had fun deriving this proof and hopefully someone will find it useful. Define a random variable X as

$X = \begin{cases} \mu + C & \mbox{with probability } p \\ \mu - C & \mbox{with probability } p \\ \mu & \mbox{with probability } 1-2p \end{cases}$

$E[X] = p(\mu +C) + p(\mu -C) + (1-2p) \mu = \mu$

$Var[X] = E[(X-\mu)^2]$

$= p (\mu+C-\mu)^2 + p (\mu-C-\mu)^2 + (1-2p)(\mu-\mu)^2$

$\Rightarrow Var[X] = 2pC^2$

If we want to find the probability that the variable deviates from the mean by the constant C, the bound provided by Chebyshev is

$Pr(|X-\mu| \geq C) \leq \frac{Var[X]}{C^2} = \frac{2pC^2}{C^2}=2p$

which is exactly the probability of deviating by C – the bound is tight!
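Here is a quick numeric sanity check of this construction in Python (my own sketch; the values of μ, C and p are arbitrary):

```python
# Chebyshev tightness check: X = mu + C or mu - C with probability p each,
# and mu with probability 1 - 2p.
mu, C, p = 10.0, 3.0, 0.1
values = [mu + C, mu - C, mu]
probs = [p, p, 1 - 2 * p]

mean = sum(v * q for v, q in zip(values, probs))
var = sum((v - mean) ** 2 * q for v, q in zip(values, probs))

actual = sum(q for v, q in zip(values, probs) if abs(v - mean) >= C)
bound = var / C ** 2
print(f"mean = {mean}, variance = {var} (2pC^2 = {2 * p * C ** 2})")
print(f"P(|X - mu| >= C) = {actual}, Chebyshev bound = {bound}")  # both 0.2
```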
### Conclusion

Markov and Chebyshev inequalities are two of the simplest, yet very powerful, inequalities. Clever application of them provides very useful bounds without knowing anything about the distribution of the random variable. Markov inequality bounds the probability that a nonnegative random variable exceeds any multiple of its expected value (or any constant). Chebyshev's inequality, on the other hand, bounds the probability that a random variable deviates from its expected value by any multiple of its standard deviation. Chebyshev does not require the variable to be non negative, but needs additional information to provide a tighter bound. Both Markov and Chebyshev inequalities are tight – this means that with the information provided, the inequalities give the most information they can.

Hope this post was useful! Let me know if there is any insight I had missed!

### References

(1) Probability and Computing by Mitzenmacher and Upfal.
(2) An interactive lesson plan on Markov's inequality – an extremely good discussion on how to teach Markov inequality to students.
(3) This lecture note from Stanford – treats the inequalities from a prediction perspective.
(4) Found this interesting link from Berkeley recently.

## How to add CRAN Ubuntu Repository to your system and fixing the GPG error

R is one of the coolest languages around and I am having a lot of fun using it. It has become my preferred programming language, next only to Python. If you are also using Ubuntu, the rate at which R is updated in Ubuntu's official repositories is slightly slow. If you want to get the latest packages as soon as possible, then the best option is to add some CRAN mirror to your Ubuntu repository. This by itself is straightforward. I decided to write this post on how to solve the GPG error if you get it.

### Steps

(1) Decide on which CRAN repository you want to use. Finding the nearest one usually gives the best speed. Let's say it is http://cran.cnr.berkeley.edu/ . Append "bin/linux/ubuntu". Typically this works. You can confirm this by going to this url in the browser too.

(2) Add this to your Ubuntu repository. There are multiple ways. In the steps below, replace http://cran.cnr.berkeley.edu/bin/linux/ubuntu with your mirror.
(a) Synaptic -> Settings -> Repositories -> Other Software -> Add. In the apt line enter "deb http://cran.cnr.berkeley.edu/bin/linux/ubuntu natty/".
(b) sudo vim /etc/apt/sources.list and add "deb http://cran.cnr.berkeley.edu/bin/linux/ubuntu natty/" at the end. If you are not comfortable with vim, use gedit, but instead of sudo, use gksudo.

(3) Refresh the source repository by using Refresh in Synaptic or using "sudo apt-get update".

(4) Install R or any other package you want. If you are installing R, I suggest you install r-base-dev instead of r-base. If you are installing some R package, check if it exists with the name r-cran-* . Else, install it using the install.packages command inside R.

(5) Enjoy 🙂

### Fixing GPG Errors

When I did these steps, I got an error like the following (this occurred when I updated last month; it might be fixed now!):

GPG error: http://cran.cnr.berkeley.edu natty/ Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 51716619E084DAB9

If you get the error, enter the following commands in the terminal:

gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
gpg -a --export E084DAB9 | sudo apt-key add -

Repeat the steps above and this should fix the key error.
## Detailed discussion on NP-Completeness of Subset Sum

I recently spent some time developing notes on Subset sum – specifically the NP-Completeness part of it. I thought I would share it with the blog readers.

### Introduction

Subset sum is one of the very few arithmetic/numeric problems that we will discuss in this class. It has a lot of interesting properties and is closely related to other NP-complete problems like Knapsack. Even though Knapsack was one of the 21 problems proved to be NP-Complete by Richard Karp in his seminal paper, the formal definition he used was closer to subset sum rather than Knapsack.

Informally, given a set of numbers S and a target number t, the aim is to find a subset S' of S such that the elements in it add up to t. Even though the problem appears deceptively simple, solving it is exceedingly hard if we are not given any additional information. We will later show that it is an NP-Complete problem, so an efficient algorithm probably does not exist at all.

### Problem Definition

The decision version of the problem is: Given a set S and a target t, does there exist a subset $S^{'} \subseteq S$ such that $t = \sum_{s \in S'} s$ .

### Exponential time algorithm approaches

One thing to note is that this problem becomes polynomial if the size of S' is given. For eg, a typical interview question might look like: given an array, find two elements that add up to t. This problem is perfectly polynomial and we can come up with a straightforward $O(n^2)$ algorithm using nested for loops to solve it. (What is the running time of the best approach?) A slightly more complex problem asks for, say, 3 elements that add up to t. Again, we can come up with a naive approach of complexity $O(n^3)$. (What is the best running time?)

The catch in the general case of subset sum is that we do not know $|S^{'}|$. In the worst case $|S^{'}|$ is $O(n)$ and hence the running time of the brute force approach is approximately $n^{O(n)}$.

A slightly more efficient algorithm checks all possible $2^n$ subsets. One typical way to do this is to express all numbers from 0 to $2^{n}-1$ in binary notation and form a subset of the elements whose indexes correspond to the bit positions that are 1. For eg, if n is 4 and the current number is, say, 10 in decimal, which is 1010 in binary, then we check the subset consisting of the elements at positions 1 and 3 (counting from 0, from the right). One advantage of this approach is that it uses constant space: at each iteration, you examine a single number. But this approach will lead to a slower solution if $|S^{'}|$ is small. Consider the case where $t=S[\frac{n}{2}]$. We may have to examine around $O(2^{\frac{n}{2}})$ different subsets before reaching this solution.

A slightly different approach finds all possible sums of subsets and checks if t has occurred among them.

EXPONENTIAL-SUBSET-SUM(S, t):
    n = |S|
    $L_{0}$ = {0}
    for i in 1 to n:
        $L_{i}$ = merge-lists($L_{i-1}, L_{i-1} + S[i]$)
        if $L_{i}$ has t, return true
        remove all elements greater than t from $L_i$
    if $L_{n}$ has t, return true, else return false

This algorithm uses the notation S+x to mean ${s+x :s \in S}$ . Refer to CLRS 35.5 for a discussion of a similar algorithm for a variant of the subset sum problem.

### NP-Completeness of Subset Sum Decimal

In this section we will prove that a specific variant of Subset sum is NP-Complete. Subset sum decimal is defined very similarly to standard Subset sum, but each number in S, and also t, is encoded in decimal digits.
We can show that Subset sum decimal is in class NP by providing the subset S' as the certificate. Clearly, we can check whether the elements in S' add up to t in polynomial time.

The next step is to select another NP-Complete problem which can be reduced to Subset sum decimal. So far we have not discussed any arithmetic NP-Complete problems. The only non graph theoretic problem that we have discussed is 3SAT and we will use it for the proof. Of course, there are a multitude of other reductions, including Vertex cover, 3 dimensional matching, partition etc.

We are now given a 3SAT formula $\phi$ with n variables – $x_1, x_2,\ldots,x_n$ and m clauses – $C_1, C_2,\ldots, C_m$. Each clause $C_i$ contains exactly 3 literals. Our aim is to construct an instance of the subset sum problem $\langle S, t \rangle$ such that $\phi$ is satisfiable if and only if a solution to our instance of Subset sum decimal exists. The outline of the proof is as follows:

1. Construct a set S of unique large decimal numbers that somehow encode the constraints of $\phi$. Additionally, this operation must take polynomial time.
2. Construct an appropriate target t such that this instance of Subset sum decimal is solvable if and only if a solution to the 3SAT instance exists. Handle complications like carries in addition.
3. Devise a way to find the satisfying assignment from the subset solution and vice versa.

To simplify the proof, we make the following assumptions:

1. All the literals $x_1$ to $x_n$ are used in some clause of $\phi$.
2. No clause can contain both a literal and its complement.

As a consequence of these assumptions, we do not have any variables that are superfluous, and we do not have any clauses that get satisfied trivially. We will not duplicate the proof in the lecture notes, as a detailed sketch of the reduction is given in CLRS section 34.5.5. Instead we will focus on certain observations.

Observation 1: Construction of S and t takes polynomial time.

This is easy to see. For each variable $x_i$ we create 2 numbers. Similarly, we create two numbers for each clause $C_j$. The total number of elements in S is 2(m+n). Each number in set S, and also t, contains exactly n+m digits. Hence the total construction takes time polynomial in n+m.

Observation 2: There are no carries when elements in the subset are added to form t.

We can see that the only allowed digits in the number construction are 0, 1 and 2. The columns corresponding to variables (the leading n digits) can add up to at most 2. The columns corresponding to clauses (the trailing m digits) cannot have a sum of more than 6. This is because of two facts: (a) 3SAT has at most 3 literals in each clause; (b) a clause cannot contain both a literal and its complement. So each variable can add at most 1 to that clause column and there are at most 3 variables in a clause. Additionally, we have 1 and 2 from the slack variables. Concisely, we get at most 3 from $v_i$ or $v_i^{'}$ and 3 from $s_i$ and $s_i^{'}$. Hence we can conclude that carries do not occur at any column (digit), as the base we use is 10.

Observation 3: All numbers in S corresponding to the $x_i$s are unique.

Each variable $x_i$ creates two numbers, $v_i$ and $v_i^{'}$. The proof is in two parts: (a) First we show that if $i \neq j$ , $v_i$ and $v_j$ do not match in the leading n digits. A similar argument holds for $v_i^{'}$ and $v_j^{'}$. (b) Next, we show that $v_i$ does not equal $v_i^{'}$. This is because of our assumption that a literal and its complement do not occur in the same clause.
This means that the trailing m digits will not be equal. In conclusion, no pair of numbers in S corresponding to the $x_i$ are equal.

Observation 4: All numbers in S corresponding to the $C_i$s are unique.

Each clause $C_i$ creates two numbers, $s_i$ and $s_i^{'}$. If $i \neq j$, $s_i (s_i^{'})$ and $s_j (s_j^{'})$ do not match in the trailing m digits. Additionally, by construction, $s_i \neq s_i^{'}$ as the digit position corresponding to $C_i$ has 1 for $s_i$ and 2 for $s_i^{'}$.

Observation 5: All numbers in S are unique, i.e. S forms a set.

This can be observed from Observations 3 and 4. By construction, $v_i$ and $s_i$ do not match. A similar argument holds for $v_i^{'}$ and $s_i^{'}$.

Observation 6: The numbers corresponding to both the $x_i$ and the $C_j$ are needed for the proof.

A detailed sketch is given in CLRS. The numbers $v_i$ and $v_i^{'}$ created from $x_i$ make sure that each variable has a unique boolean assignment of 0 or 1 – otherwise the sum for that column would not match the target digit. This is due to the assumption that every variable $x_i$ has to be used in some clause $C_j$ and hence has a unique assignment. Of course, it is possible that $\phi$ has multiple satisfying assignments, but the target digit forces exactly one of them to be selected when you select the elements of the subset $S^{'}$.

The digits corresponding to clauses make sure that each clause has at least one variable that evaluates to true. This is because the slack variables corresponding to $C_i$ (i.e. $s_i,s_i^{'}$) contribute at most 3 towards the digit of t, and hence the remaining (at least) 1 has to come from the $v_j$ or $v_j^{'}$s.

So the numbers $v_i$ ensure that each $x_i$ has a unique assignment. The numbers $s_i$ ensure that each clause $C_j$ of $\phi$ is satisfied.

Observation 7: Subset sum is NP-Complete if the numbers are expressed in base $b \geq 7$.

From Observation 2, we know that the maximum possible digit resulting from the summation of elements in S is 6. This means we can reuse the proof of Subset sum decimal to prove that Subset sum is NP-Complete for any base b that is greater than 6.

Observation 8: Given S', we can find a satisfying assignment for $\phi$.

We know that any satisfying subset $S^{'}$ must include either $v_i$ or $v_i^{'}$ for all $i, 1 \leq i \leq n$. If $S^{'}$ includes $v_i$, set $x_i$ to 1. Else set it to 0.

Observation 9: Given a satisfying assignment for $\phi$, we can find S'.

This is a bit tricky and is done in two steps. More details can be found in the CLRS proof.
1. If the satisfying assignment has $x_i$, then select $v_i$. Else select $v_i^{'}$.
2. For each clause $C_j$, find how many variables in it evaluate to true under the boolean assignment. At least one variable has to be true and at most 3 variables are true.
a. If $C_j$ has only one variable that evaluates to true, then select $s_j$ and $s_j^{'}$.
b. If $C_j$ has two variables that evaluate to true, then select $s_j^{'}$.
c. If $C_j$ has three variables that evaluate to true, then select $s_j$.

Observation 10: If $\phi$ is not satisfiable, then S' cannot be found.

If $\phi$ is not satisfiable, then there exists at least one clause $C_j$ that is not satisfied. This means that for the (n+j)-th digit, the slack variables $s_j,s_j^{'}$ contribute only 3, but the corresponding digit in t is 4. Hence no S' exists.

### NP-Completeness of Subset Sum Binary

The formal definition of Subset sum binary is similar to Subset sum decimal. The only difference is that all numbers are encoded in bits.
We can notice that the above proof for Subset sum decimal holds only for numbers expressed in a base of at least 7 (from Observation 7). For bases from 1-6, the previous proof does not apply – partly due to the fact that there will be carries during addition. We need an alternate proof approach. Since we have proved Subset sum decimal NP-Complete, we can use that result to prove Subset sum binary NP-Complete.

The certificate is the subset S' given in binary. We can see that it can be verified in polynomial time and hence Subset sum binary is in NP. The next step is to reduce Subset sum decimal to Subset sum binary. First we observe that any number encoded in decimal can be converted to binary in polynomial time and vice versa. When given S and t in decimal as input, we encode them in binary and pass them to our Subset sum binary routine. The decision version of Subset sum binary returns true or false, which can be fed back directly as the result of Subset sum decimal. In the optimization version, we just convert the $S'$ returned by the Subset sum binary subroutine to decimal.

Observation 11: A decimal number can be converted to binary in polynomial time.

Assume some number n is encoded in both binary and decimal. This means, roughly, $n = 10^k = 2^{k_1}$ where k is the number of digits in the decimal representation and $k_1$ is the number of bits needed to encode it. Taking logs to base 2 on both sides,

$k \log_{2} {10} = {k_1} \implies {3.3}\, {k} \approx {k_1}$

So to express a decimal number with k digits, we need between 3k and 4k bits.

Observation 12: Subset sum is NP-Complete for any base $b \geq 2$.

The logarithms of the same number in two different bases differ by at most a constant factor, i.e.

$\log_{b_1} b_2 = \frac{\log_{b_1} n}{\log_{b_2} n}$

$\log_{b_1} b_2$ is a constant irrespective of n. So if n needs k digits in base $b_1$, then it needs at most $\frac{k}{\log_{b_1} b_2}$ digits to be represented in base $b_2$. (Verify Observation 11 using this equation!)

### NP-Completeness of Subset Sum Unary

From Observation 12, the only base left is 1, and this section handles the special case where all numbers are expressed in base 1. Subset sum unary is similar to Subset sum decimal, but with all numbers expressed in unary notation. Numbers in base 1 are said to be represented in unary. Any number k is represented as $1^k$, which is a string of k 1's.

Let us check if Subset sum unary is NP-Complete. The certificate is the subset with all elements expressed in unary. If we are given numbers in unary, then verification takes time that is polynomial in the length of the individual unary numbers. Hence Subset sum unary is in NP.

To prove Subset sum unary NP-Complete, we would have to reduce Subset sum decimal/binary to it. Superficially, it looks straightforward, and hence it seems as though Subset sum unary is NP-Complete. But the catch is that expressing a number n given in base b in unary needs time that is exponential in the size of n's representation in base b. For eg, representing a binary number n that needs k bits requires around $2^{k}$ unary digits, and $2^k$ is exponential in k. In summary, converting a number from any base to unary takes exponential time, so we cannot use our reduction technique, as the reduction is not polynomial.

### Dynamic Programming solution for Subset Sum Unary

What we showed above is that Subset sum unary is in NP, but our reduction does not establish that it is NP-Complete. Here we show that this problem admits a dynamic programming formulation (a runnable Python version of it is sketched at the end of this post).
We represent the problem as a boolean matrix A of size n × (t+1). The interpretation of the cell A[i,j] = True is that there exists a subset of $\{x_1,x_2,\ldots,x_i\}$ that sums up to j, i.e. $\exists S^{'} \subseteq \{x_1,x_2,\ldots,x_i\}$ such that $j=\sum_{s \in S'} s$. The algorithm goes as follows (the outer loop runs over the n elements and the inner loop over the sums 0..t):

SUBSET-SUM-UNARY(S, t):
    Form the matrix A
    Set A[1,0] = True
    Set A[1,j] = False for j >= 1, unless j == S[1], in which case set A[1,j] to True
    for i = 2 to n
        for j = 0 to t
            if A[i-1, j] == True
                A[i,j] = True
            else if j >= S[i] and A[i-1, j - S[i]] == True
                A[i,j] = True
            else
                A[i,j] = False

Consider the set $S=\{2,3,4,5\}$ and let t=8. The worked out DP table is given below (rows are the elements, columns are the sums 0-8):

           0  1  2  3  4  5  6  7  8
x1 = 2:    T  F  T  F  F  F  F  F  F
x2 = 3:    T  F  T  T  F  T  F  F  F
x3 = 4:    T  F  T  T  T  T  T  T  F
x4 = 5:    T  F  T  T  T  T  T  T  T

Since the entry in the last row for sum 8, A[4,8], is True, we conclude that there exists a subset of S that sums up to t (8).

### Strong and Weak NP-Complete Problems

Subset sum is interesting in the sense that its binary/decimal versions can be proved NP-Complete, while its unary version allows a polynomial looking dynamic programming solution. Looking at the dynamic programming solution carefully, the time (and space) complexity of the approach is $O(n t)$ where n = |S| and t is the target. By itself, the DP solution looks feasible and 'somehow' polynomial. But one of the reasons that Subset sum is NP-Complete is that it allows "large" numbers. If t is large, then the table A is huge and the DP approach takes a long time to complete.

Given S and t, there are two ways to define a polynomial algorithm. One uses the length of S, i.e. n, to measure algorithm complexity. From this angle, $O(n t)$ is not polynomial, because t can be huge irrespective of n. For eg, we can have a small set with 4 elements where the individual elements (and t) are of the order of, say, $O(10^{10})$. But from the perspective of the magnitude of t, this dynamic programming approach is clearly polynomial. In other words, we have two ways to anchor our polynomial – $Length[S]$ and $Magnitude[t]$.

An algorithm is called pseudo polynomial if its time complexity is bounded above by a polynomial function of the two variables $Length[S]$ and $Magnitude[t]$. Problems that admit pseudo polynomial algorithms are called weak NP-Complete problems and those that do not are called strong NP-Complete problems. For example, Subset sum is a weak NP-Complete problem but Clique is a strong NP-Complete problem. There is a lot of interesting discussion about strong/weak NP-Complete problems in both Garey and Johnson and in Kleinberg/Tardos. See the references for more details.

Observation 13: Only number theoretic problems admit pseudo polynomial algorithms.

Observation 14: Strong NP-Complete problems do not admit a pseudo polynomial time algorithm unless P=NP.

### References

1. CLRS 34.5.5 – Proof of NP-Completeness of Subset sum.
2. CLRS 35.5 – An exponential algorithm to solve a variant of the subset sum problem.
3. Garey and Johnson 4.2 – Discussion of pseudo polynomial time algorithms along with strong and weak NP-Complete problems.
4. Kleinberg and Tardos 6.4 – Discusses a variant of the DP algorithm given in the lecture notes and the concept of pseudo polynomial time algorithms. Section 8.8 has an alternate NP-Completeness proof of Subset sum using vertex cover, which you can skim through if interested.

Hope you enjoyed the discussion on the various facets of the Subset sum problem!
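As promised above, here is a runnable Python version of the table-filling DP (my own sketch; it mirrors the worked table, with 0-based indices):

```python
def subset_sum_dp(S, t):
    """A[i][j] is True iff some subset of S[0..i] sums to exactly j."""
    n = len(S)
    A = [[False] * (t + 1) for _ in range(n)]
    A[0][0] = True
    if S[0] <= t:
        A[0][S[0]] = True
    for i in range(1, n):
        for j in range(t + 1):
            # Either j is achievable without S[i], or S[i] completes it.
            A[i][j] = A[i - 1][j] or (j >= S[i] and A[i - 1][j - S[i]])
    return A[n - 1][t]

print(subset_sum_dp([2, 3, 4, 5], 8))  # True  (3 + 5 = 8, as in the table)
print(subset_sum_dp([2, 3, 4, 5], 1))  # False
```

Note that it runs in O(n·t) time and space, which is exactly the pseudo polynomial behavior discussed in the strong/weak section above.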
## Impressions on MIT OCW Calculus Revisited

Sometime early this week, I finished listening to the excellent video lectures from MIT OCW's Calculus Revisited course. I had been meaning to listen to MIT OCW's Single Variable Calculus course for quite some time, as my background in Calculus is a bit flaky. My interests are in machine learning, data mining and AI, where Calculus has a nasty habit of making surprise entries 🙂 I somehow finished my Master's using my old Calculus knowledge. I took a course on Numerical Methods which kind of exposed my weaknesses. I kept getting confused by the error approximations, which used ideas from infinite series. Other advanced ideas like multivariable optimization were also problematic for me. Once that course was over, I swore to refresh my Calculus and also learn multivariable calculus.

I started listening to MIT OCW's Single Variable Calculus lecture videos and felt two things – the course was a bit slow for my pace, and it jumped right into the mechanics without spending much time on intuitive explanations of the Calculus. In other words, I felt 18.01 was more focused on the analytic part, which emphasized proofs and derivations, whereas for my purposes an intuitive explanation of the concepts would have sufficed. In fact, I remembered almost all of the Calculus formulas from undergrad – my only problem was the lack of "sense" in how to apply them to the problems I faced (say in machine learning or some optimization).

Then I found the Calculus Revisited course from MIT OCW. It consists of a series of lectures on Calculus but assumes that students have had prior exposure to it. This assumption had some interesting consequences, and I fit the bill perfectly. I downloaded the set of videos and started listening to them. Interestingly, all the lectures were between 20-40 minutes, which allowed for maximum focus and also allowed you to listen to multiple lectures in the same day. In fact, Arlington had a heavy snow this week and my university had to be closed for the entire week. I completed around 16 lectures in 3 days and was able to finish ahead of my target date of Feb 15.

The course starts with the absolute basic ideas of sets, functions, induction and other stuff. If you are from CS and have taken discrete math, you can feel free to skip the first section. But I would suggest you still take a look, as it, in a sense, sets the stage for the entire course. Do take some time to listen to the lecture on limits (Part 1, lecture 4). Here, the discussion of limits effortlessly leads to the derivation of the formula for instantaneous speed and hence differentiation.

Part 2 forms the crux of the course and covers differentiation. Professor Herbert Gross had a beautiful way of teaching derivatives. In particular, he extensively used geometric proofs or visualizations to expound basic ideas. The way he brought out the tight relation between analysis (as in Math) and geometry was enthralling. He had a huge emphasis on geometric intuition, which helped me to "grasp" the key concepts.

Part 3 had some nice discussion on Circular functions. He joked about how teachers don't provide good motivation for learning trigonometry, which rang very true to me. He also explained some concepts that were new to me – that you do not really need triangles to define cosine and sine. Previously, I was aware of the radian concept but never put it all together.
He also explained how sine and cosine tend to come up in unexpected places – like as the solution of the differential equation for harmonic motion 🙂 He also masterfully showed the close relation between circular and hyperbolic functions, with the playful title 'What a difference a sign makes' (in Part 5).

Part 4 discussed integration and how it can be used to calculate 2 and 3 dimensional areas (and volumes). This part also had a great discussion on how differential and integral calculus are related. That being said, I was a bit dissatisfied with the discussion on the two fundamental theorems of Calculus. The discussion on the Mean Value Theorem also felt a bit rushed. I got a bit lost in the discussion on 1 dimensional arc length calculations. Maybe I should revisit the lecture notes for it when I get some free time.

Part 6 was my favorite part for two reasons – it had a discussion of infinite series and my favorite quip of the course. When discussing the non intuitiveness and the intellectual challenges posed by infinity, professor Herbert Gross playfully quips (it goes something like this): 'of course, one thing to do is to not study it. I can call it the right wing conservative educational philosophy' – Ouch 🙂 I think I mostly understood the idea of infinite series, even though there was not much explanation of "why" it works that way. I also felt the topic of Uniform Convergence was way beyond my comprehension level.

Overall, it is a great course and acts as a fast paced refresher for those who have already taken Calculus. The course slowly starts from basic pre-calculus ideas and rapidly gains speed and covers a huge list of calculus topics. I felt a few important Calculus topics were either not covered or were rushed – the first and second fundamental theorems of Calculus, the Mean Value theorem, Taylor series, L'Hospital's rule, discussion of exponents and logarithms etc. But that being said, I feel the course more than makes up for it in the way the basic ideas were covered. I had fun learning the ideas of limits, infinitesimals, intuitive ideas of differentiation/integration, geometric explanations of differentiation/integration, how the concept of inverse functions pervades Calculus etc. Prof. Herbert Gross had a jovial air around him and occasionally delved into philosophical discussions, which made listening to the lectures more interesting. He also had an extensive set of supplementary notes and a huge number of problems with solutions. I had to skip the problems to conserve time, but if you have some time, do spend some on them.

Lastly, I found that one of the lectures in the series was missing. Lecture 5 in Part 2 on Implicit Differentiation was the same as Lecture 4. I sent a mail to MIT OCW about this and got a reply saying they will fix it soon. Hopefully, it will be fixed soon.

In conclusion, this is a great, fast paced course on Calculus that emphasizes geometric intuition for the major ideas in Calculus. Listen to it if you already know Calculus and want a fast refresher! I am currently listening to the lectures on Multi Variable calculus. I do intend to listen to Single Variable Calculus again, maybe in the summer. I will put out another post on how it went 🙂
So, I spent some time looking for Youtube channels where good math lessons are taught. To my delight, I found some good channels. One thing I need to mention is that in the channels I mention here, the Math level is not very advanced – at most up to the undergraduate level. If you want really advanced stuff, check out MIT OCW or similar places. Also, since most of the efforts were voluntary, there was quite a bit of overlap in the lessons. Most of them are around 10 minutes, which is excellent, as they allow me to listen to a lesson when I feel bored and in the process refresh my basic math 🙂

Some of my favorite channels (not in any order) are:

1. Math2b Prof's channel. Has some interesting stuff on partial fractions, calculus and some geometry-ish topics.
2. Patrick JMT's channel. Has some nice and organized stuff about trigonometry and calculus.
3. MathTV's channel. Has a series of videos on algebra, calculus and other stuff.
4. Khan Academy's channel. This is probably the most popular education Youtube channel. It contains basic tutorial videos on a lot of subjects like physics, biology and math. It also has some nice videos on contemporary economic issues. Most of the videos are well packaged using playlists that will help you listen in an organized fashion. You can also check Khan Academy's website.

### Other Resources

1. Steven Strogatz's NYTimes Math article series. Steven Strogatz writes a weekly article series on Math in the NYTimes. He explains a lot of interesting stuff in Math in a simple manner. You can check out Strogatz's Opinionator blog page for more details.

### Some Tips

Most of the channels may not be very useful for grad students in their studies, but they can act as a refresher. The easiest way to follow the channels is by subscribing to them. On each of the web pages, there is a subscribe button which allows you to be notified when new videos are uploaded. Once you subscribe, you can visit your My Subscriptions page to get the videos uploaded per user. You can also add the subscription widget to your Youtube homepage.

Just to bring the topic to closure, I finally found a good tutorial on using Lagrange Multipliers with multiple constraints at An Introduction to Lagrange Multipliers. Have fun with all the Math videos 🙂

## Big Picture of Calculus

Calculus is one of the important fields to master if you want to do research in data mining or machine learning. There is a very good set of video lectures on Single Variable Calculus at MIT OCW. The video lectures are here. I had listened to the first 5-6 lectures. Since I had some grounding in calculus already, I did not have any trouble understanding them. But I felt professor David Jerison went a bit too fast without giving a deep intuition of calculus. I was a bit dissatisfied and quit watching the lectures.

When I was looking for alternate video lectures on calculus, I came across a set of 5 lectures titled "Big Picture of Calculus". It consists of recordings of Professor Gilbert Strang and focuses explicitly on giving an intuitive feel for calculus. From the lectures, it looks like it might grow into a full series of lectures on calculus, although for quite some time the lecture count has stayed constant at 5. The lectures span the most important topics in differential and integral calculus. I have talked about Gilbert Strang and his linear algebra course lectures here. The calculus lectures are also excellent. He focuses on the main topics and gives a geometric intuition.
The lectures are short (around 30 minutes) and hence are quite convenient to watch. I hope that the series will be expanded to cover other important topics too. Once you get the basic intuition, the OCW course on calculus should be easy to follow. The website for the videos is at Big Picture of Calculus. The lectures can be watched online. If you want to download them, you need to follow a convoluted procedure:

a. Go to the html page for the individual lecture. Eg, Big Picture of Calculus at http://www-math.mit.edu/~gs/video1.html .
b. View the page's HTML source. (This step was missing in my draft; you need the source to find the line below.)
c. Search for this line: <param name="FlashVars" value="configxml=somexmlfile.xml" />
d. You can download the actual file at http://www-math.mit.edu/~gs/somexmlfile.flv . In case it did not work, open the xml file at http://www-math.mit.edu/~gs/somexmlfile.xml . View the xml file's source and get the flv name at this line: <param name="flv" value="flvfilename.flv" /> . Then the file name will be http://www-math.mit.edu/~gs/flvfilename.flv

Have fun with Calculus!

## Impressions On MIT OCW Linear Algebra Course

Linear Algebra is one of the coolest and most useful math courses you can take. Basically, it deals with vectors, matrices and all the cool stuff you can do with them. Unfortunately, I did not really have a dedicated course on Linear Algebra in my undergrad. From what I hear, most of the CS people I meet (from India) also don't have this course in their undergrad. Sure, we have had some of the topics (like vectors, basic matrices, determinants, eigenvalues) split across multiple courses or in our high school, but not a single, unified course on it.

Linear algebra is useful on its own, but it becomes indispensable when your area of interest is AI, Data Mining or Machine Learning. When I took a machine learning course, I spent most of the time learning things in Linear Algebra, advanced Calculus or Linear Optimization. In hindsight, machine learning would have been an easy course if I had previously taken courses on Linear Algebra or Linear Optimization. As a concrete example, I had a hard time understanding the proof of optimality of PCA or the equivalence of different techniques for calculating PCA (eg eigenspace decomposition or SVD etc). But once I learnt all about basis, dimension, eigenspaces and eigendecomposition, QR decomposition, SVD etc (which are, btw, taught in any intro course on Linear Algebra), the whole PCA concept looked really, really simple and the proofs looked like straightforward algebraic derivations. Oh well, the benefits of hindsight 🙂

Ok, enough of my rant on the lack of Linear Algebra in undergrad. After I struggled mightily in my machine learning course, I decided that I had to master Linear Algebra before taking any more advanced courses. I spent the entire winter holidays learning Linear Algebra, as I was taking an advanced data mining course this spring. So this blog post is a discussion of my experience.

### Video Resources

Arguably the best resource to learn Linear Algebra is MIT's OCW course taught by Professor Gilbert Strang. This course is one of the most popular OCW courses and so far has had more than 1 million visits. I also searched for alternate courses, but this course wins hands down, both for its excellent teaching style and its depth. The course website is here. It contains around 35 video lectures on various topics. The lectures are available for download both from iTunes and from the Internet Archive. If you prefer YouTube, the playlist for this course is here.
### Books

The recommended book for this course is Introduction to Linear Algebra, 4th ed., by Gilbert Strang. I found the book to be quite costly, even used copies of old editions! I don't mind buying expensive books (I shell out a lot of money for data mining books, but a rant on that later), but since I was interested in Linear Algebra primarily to help me master data mining, I preferred the equivalent book Linear Algebra and Its Applications, also by Gilbert Strang. This book has very similar content to the recommended book, but I felt it was more fast paced, which suited me fine. Also, I was able to get an old copy from Amazon for 10 bucks. Sweet! My only complaint about the book is that the examples and exercises felt a bit disconnected (or should I say, I wasn't clear on the motivation?) from the topics.

If you don't want to purchase these expensive books, then there is an EXCELLENT free e-book by Professor Jim Hefferon. The book's website is here, from where you can download the e-book. I have to say, this book really blew me away. It was really intuitive, has excellent (mostly plausible) examples, and was slightly more theoretical than Strang's book, with more proofs. It also has a very helpful solution manual, and a LaTeX version of the book. Too good to be true 🙂 I felt this book covered a much more limited set of topics than Strang's course/book (hence this is a truly intro book), but whatever topic it covered, it gave a thorough treatment. Another thing I liked in the book is the exercises – most of them were excellent. And having a solution manual helped clarify a lot of things, given that I was essentially doing a self-study. Thanks Jim!

### Impressions on the Lectures

I felt, overall, the lectures were excellent. They were short (40-50 minutes). So my usual daily schedule was to listen to a lecture, read the relevant sections in the book, and solve the exercises for which the answers are available at the end of the book. All these steps took at most 2-3 hours a day. I was also taking notes in LaTeX using LyX. I have talked about using LyX previously in this blog post.

I really liked Strang's teaching style. He often emphasizes intuition, especially geometric intuition, rather than proofs. I feel that is how intro courses must be structured. Proofs are important, but not before I have a solid understanding of the topics. But I also have to say that the lectures varied in quality. Some of the lectures were exceptional, while some were not so enlightening. On the whole, I am really glad that he has made the lectures available online. They have certainly helped me learn Linear Algebra.

### Topics

If possible, see all the lectures, as almost all of them cover important topics. I did, and I have to say all of them were excellent and useful. But if you are mostly interested in applied Linear Algebra and plan to use it in Data Mining/Machine Learning, then my suggestion would be lectures 1-11, 14-22, 25, 27-29, 33. If you are interested, watch lectures 30 and 31 too. Again, a better way to learn is to take notes during lectures and to solve at least a few exercises in the book. If you have Matlab or Octave, then you can verify answers to some other exercises for which solutions are not given.

### Notes

I have taken LaTeX notes for this course, but they are a bit scattered and unorganized. Hopefully, I will organize them together and create a single PDF soon. I kind of put it on a lower priority after I noticed that Peteris Krumins's blog has a partial set of lecture notes for this course.
His lecture notes can be accessed here. As of now (Jan 30, 2010), he has put up notes for the first 5 lectures, although the frequency seems to be a bit slow. Have fun with Vectors and Matrices!!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 164, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097506761550903, "perplexity": 482.78401664034425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514570830.42/warc/CC-MAIN-20190915072355-20190915094355-00361.warc.gz"}
https://chem.libretexts.org/LibreTexts/Bellarmine_University/BU%3A_Chem_103_(Christianson)/Phase_3%3A_Atoms_and_Molecules_-_the_Underlying_Reality/7%3A_Quantum_Atomic_Theory/7.2%3A_Electromagnetic_Radiation
Skills to Develop

• To learn about the characteristics of electromagnetic waves.

Light, X-rays, infrared and microwaves are among the types of electromagnetic waves. Scientists discovered much of what we know about the structure of the atom by observing the interaction of atoms with various forms of radiant, or transmitted, energy, such as the energy associated with the visible light we detect with our eyes, the infrared radiation we feel as heat, the ultraviolet light that causes sunburn, and the x-rays that produce images of our teeth or bones. All these forms of radiant energy should be familiar to you. We begin our discussion of the development of our current atomic model by describing the properties of waves and the various forms of electromagnetic radiation.

Figure $$\PageIndex{1}$$: A Wave in Water. When a drop of water falls onto a smooth water surface, it generates a set of waves that travel outward in a circular direction.

### Properties of Waves

A wave is a periodic oscillation that transmits energy through space. Anyone who has visited a beach or dropped a stone into a puddle has observed waves traveling through water (Figure $$\PageIndex{1}$$). These waves are produced when wind, a stone, or some other disturbance, such as a passing boat, transfers energy to the water, causing the surface to oscillate up and down as the energy travels outward from its point of origin. As a wave passes a particular point on the surface of the water, anything floating there moves up and down.

Figure $$\PageIndex{2}$$: Important Properties of Waves. (a) Wavelength (λ, in meters), frequency (ν, in Hz), and amplitude are indicated on this drawing of a wave. (b) The wave with the shortest wavelength has the greatest number of wavelengths per unit time (i.e., the highest frequency). If two waves have the same frequency and speed, the one with the greater amplitude has the higher energy.

Waves have characteristic properties (Figure $$\PageIndex{2}$$). As you may have noticed in Figure $$\PageIndex{1}$$, waves are periodic, that is, they repeat regularly in both space and time. The distance between two corresponding points in a wave—between the midpoints of two peaks, for example, or two troughs—is the wavelength ($$λ$$, lowercase Greek lambda). Wavelengths are described by a unit of distance, typically meters. The frequency ($$\nu$$, lowercase Greek nu) of a wave is the number of oscillations that pass a particular point in a given period of time. The usual units are oscillations per second ($$1/s = s^{-1}$$), which in the SI system is called the hertz (Hz). It is named after German physicist Heinrich Hertz (1857–1894), a pioneer in the field of electromagnetic radiation. The amplitude, or vertical height, of a wave is defined as half the peak-to-trough height; as the amplitude of a wave with a given frequency increases, so does its energy. As you can see in Figure $$\PageIndex{2}$$, two waves can have the same amplitude but different wavelengths and vice versa. The distance traveled by a wave per unit time is its speed ($$v$$), which is typically measured in meters per second (m/s).
The speed of a wave is equal to the product of its wavelength and frequency:

\begin{align} (\text{wavelength})(\text{frequency}) &= \text{speed} \nonumber \\[5pt] \lambda \nu &=v \label{6.1.1a} \\[5pt] \left ( \dfrac{meters}{\cancel{wave}} \right )\left ( \dfrac{\cancel{\text{wave}}}{\text{second}} \right ) &=\dfrac{\text{meters}}{\text{second}} \label{6.1.1b} \end{align}

Be careful not to confuse the symbols for the speed, $$v$$, with the frequency, $$\nu$$. Different types of waves may have vastly different possible speeds and frequencies. Water waves are slow compared to sound waves, which can travel through solids, liquids, and gases. Whereas water waves may travel a few meters per second, the speed of sound in dry air at 20°C is 343.5 m/s. Ultrasonic waves, which travel at an even higher speed (>1500 m/s) and have a greater frequency, are used in such diverse applications as locating underwater objects and the medical imaging of internal organs.

Water waves transmit energy through space by the periodic oscillation of matter (the water). In contrast, energy that is transmitted, or radiated, through space in the form of periodic oscillations of electric and magnetic fields is known as electromagnetic radiation (Figure $$\PageIndex{3}$$). Some forms of electromagnetic radiation are shown in Figure $$\PageIndex{4}$$. In a vacuum, all forms of electromagnetic radiation—whether microwaves, visible light, or gamma rays—travel at the speed of light (c), which turns out to be a fundamental physical constant with a value of $$2.99792458 \times 10^8\; m/s$$ (about $$3.00 \times 10^8\; m/s$$ or $$1.86 \times 10^5\; mi/s$$). This is about a million times faster than the speed of sound.

Figure $$\PageIndex{3}$$: The Nature of Electromagnetic Radiation. All forms of electromagnetic radiation consist of perpendicular oscillating electric and magnetic fields.

Because the various kinds of electromagnetic radiation all have the same speed (c), they differ only in wavelength and frequency. As shown in Figure $$\PageIndex{4}$$ and Table $$\PageIndex{1}$$, the wavelengths of familiar electromagnetic radiation range from $$10^{1}\; m$$ for radio waves to $$10^{-12}\; m$$ for gamma rays, which are emitted by nuclear reactions. By replacing $$v$$ with $$c$$ in Equation $$\ref{6.1.1a}$$, we can show that the frequency of electromagnetic radiation is inversely proportional to its wavelength:

\begin{align} c&=\lambda \nu \\[5pt] \nu &=\dfrac{c}{\lambda } \label{6.1.2} \end{align}

For example, the frequency of radio waves is about $$10^{8}\; Hz$$, whereas the frequency of gamma rays is about $$10^{20}\; Hz$$. Visible light, which is electromagnetic radiation that can be detected by the human eye, has wavelengths between about $$7 \times 10^{-7}\; m$$ (700 nm, or $$4.3 \times 10^{14}\; Hz$$) and $$4 \times 10^{-7}\; m$$ (400 nm, or $$7.5 \times 10^{14}\; Hz$$). Note that when frequency increases, wavelength decreases; since $$c$$ is a constant, it stays the same. Similarly, when frequency decreases, the wavelength increases.

Figure $$\PageIndex{4}$$: The Electromagnetic Spectrum. (a) This diagram shows the wavelength and frequency ranges of electromagnetic radiation. The visible portion of the electromagnetic spectrum is the narrow region with wavelengths between about 400 and 700 nm. (b) When white light is passed through a prism, it is split into light of different wavelengths, whose colors correspond to the visible spectrum. Within the visible range our eyes perceive radiation of different wavelengths (or frequencies) as light of different colors, ranging from red to violet in order of decreasing wavelength.
The components of white light—a mixture of all the frequencies of visible light—can be separated by a prism, as shown in part (b) in Figure $$\PageIndex{4}$$. A similar phenomenon creates a rainbow, where water droplets suspended in the air act as tiny prisms.

Table $$\PageIndex{1}$$: Common Wavelength Units for Electromagnetic Radiation

| Unit | Symbol | Wavelength (m) | Type of Radiation |
|---|---|---|---|
| picometer | pm | $$10^{-12}$$ | gamma ray |
| angstrom | Å | $$10^{-10}$$ | x-ray |
| nanometer | nm | $$10^{-9}$$ | UV, visible |
| micrometer | μm | $$10^{-6}$$ | infrared |
| millimeter | mm | $$10^{-3}$$ | infrared |
| centimeter | cm | $$10^{-2}$$ | microwave |

As you will soon see, the energy of electromagnetic radiation is directly proportional to its frequency and inversely proportional to its wavelength:

\begin{align} E\; &\propto\; \nu \label{6.1.3} \\[5pt] & \propto\; \dfrac{1}{\lambda } \label{6.1.4} \end{align}

Whereas visible light is essentially harmless to our skin, ultraviolet light, with wavelengths of ≤ 400 nm, has enough energy to cause severe damage to our skin in the form of sunburn. Because the ozone layer of the atmosphere absorbs sunlight with wavelengths less than 350 nm, it protects us from the damaging effects of highly energetic ultraviolet radiation. The energy of electromagnetic radiation increases with increasing frequency and decreasing wavelength.

Example $$\PageIndex{1}$$: Wavelength of Radiowaves

What is the wavelength of the radio waves from an FM station broadcasting at a frequency of 101.1 MHz?

Given: frequency

Asked for: wavelength

Strategy: Substitute the value for the speed of light in meters per second into Equation $$\ref{6.1.2}$$ to calculate the wavelength in meters.

Solution: From Equation $$\ref{6.1.2}$$, we know that the product of the wavelength and the frequency is the speed of the wave, which for electromagnetic radiation is 2.998 × $$10^8$$ m/s:

\begin{align*} λν &= c \\[5pt] &= 2.998 \times 10^8 m/s \end{align*}

Thus the wavelength $$λ$$ is given by

\begin{align*} \lambda &=\dfrac{c}{\nu } \\[5pt] &=\left ( \dfrac{2.998\times 10^{8}\; m/\cancel{s}}{101.1\; \cancel{MHz}} \right )\left ( \dfrac{1\; \cancel{MHz}}{10^{6}\; \cancel{s^{-1}}} \right ) \\[5pt] &=2.965\; m \end{align*}

Exercise $$\PageIndex{1}$$

As the police officer was writing up your speeding ticket, she mentioned that she was using a state-of-the-art radar gun operating at 35.5 GHz. What is the wavelength of the radiation emitted by the radar gun?

Understanding the electronic structure of atoms requires an understanding of the properties of waves and electromagnetic radiation. A wave is a periodic oscillation by which energy is transmitted through space. All waves are periodic, repeating regularly in both space and time. Waves are characterized by several interrelated properties: wavelength ($$λ$$), the distance between successive waves; frequency ($$\nu$$), the number of waves that pass a fixed point per unit time; speed ($$v$$), the rate at which the wave propagates through space; and amplitude, the magnitude of the oscillation about the mean position. The speed of a wave is equal to the product of its wavelength and frequency. Electromagnetic radiation consists of two perpendicular waves, one electric and one magnetic, propagating at the speed of light ($$c$$). Electromagnetic radiation is radiant energy that includes radio waves, microwaves, visible light, x-rays, and gamma rays, which differ in their frequencies and wavelengths.
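For readers who like to verify the arithmetic, here is a short Python sketch (an illustration added for convenience, not part of the text above) that applies Equation $$\ref{6.1.2}$$ to both the worked example and the exercise:

```python
# lambda = c / nu for the FM station (example) and the radar gun (exercise)
c = 2.998e8                      # speed of light in m/s
for nu in (101.1e6, 35.5e9):     # frequencies in Hz
    print(nu, "Hz ->", c / nu, "m")   # 2.965 m and about 8.45e-3 m
```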
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750732183456421, "perplexity": 570.1173611204292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825098.68/warc/CC-MAIN-20181213193633-20181213215133-00320.warc.gz"}
http://clay6.com/qa/12725/which-of-the-following-can-give-both-e-and-z-products-
# Which of the following can give both $E$ and $Z$ products?

A) 2 B) 3 C) 1 and 2 D) 2 and 3

As there is only one hydrogen atom at the beta position, only one anti elimination can occur. The substrate also carries a beta hydrogen, but since the mechanism is E1, it can give both $E$ and $Z$ products.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898000121116638, "perplexity": 1739.299727375762}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00572-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.math.toronto.edu/ivrii/PDE-textbook/Chapter13/S13.5.html
## 13.5. Continuous spectrum and scattering

### Introduction

Here we discuss the idea of scattering. Basically there are two variants of Scattering Theory--non-stationary and stationary. We start from the former but then pass to the latter. We assume that there is an unperturbed operator $L_0$ and a perturbed operator $L=L_0+V$, where $V$ is a perturbation. It is always assumed that $L_0$ has only continuous spectrum (more precisely--absolutely continuous) and the same is true for $L$ (otherwise our space $\mathsf{H}$ is decomposed into the sum $\mathsf{H}=\mathsf{H}_{\mathsf{ac}}\oplus \mathsf{H}_{\mathsf{pp}}$, where $L$ acts on each of them and on $\mathsf{H}_{\mathsf{ac}}$, $\mathsf{H}_{\mathsf{pp}}$ it has absolutely continuous and pure point spectra respectively). Scattering happens only on the former.

Now let $u=e^{itL}u_0$ be a solution of the perturbed non-stationary equation. Under reasonable assumptions it behaves as $t\to \pm \infty$ as solutions of the unperturbed non-stationary equation: $$\|e^{itL}u_0- e^{itL_0}u_\pm\|\to 0\qquad \text{as }\ t\to \pm \infty \label{eq-13.5.1}$$ or, in other words, the following limits exist $$u_\pm=\lim_{t\to \pm \infty} e^{-itL_0}e^{itL}u_0. \label{eq-13.5.2}$$ Then the operators $W_\pm : u_0\mapsto u_\pm$ are called wave operators and under some restrictions they are proven to be unitary operators from $\mathsf{H}$ onto $\mathsf{H}_{\mathsf{ac}}$. Finally $S=W_+W_-^{-1}$ is called a scattering operator. Despite its theoretical transparency this construction is not very convenient, and instead one considers certain test solutions which, however, do not belong to the space $\mathsf{H}_{\mathsf{ac}}$.

### One dimensional scattering

Let us consider on $\mathsf{H}=L^2(\mathbb{R})$ the operators $L_0u:= -u_{xx}$ and $L=L_0+V(x)$. The potential $V$ is assumed to decay fast as $|x|\to \infty$ (or even to be compactly supported). Then consider a solution of $$u_t =iLu= -iu_{xx} + iV(x)u \label{eq-13.5.3}$$ of the form $e^{ik^2t}v(x,k)$; then $v(x,k)$ solves $$v_{xx}- V(x)v + k^2v=0 \label{eq-13.5.4}$$ and it behaves as $a_{\pm} e^{ikx}+ b_{\pm} e^{-ikx}$ as $x\to \pm \infty$. Consider the solution which behaves exactly as $e^{ikx}$ as $x\to -\infty$: $$v(k,x)= e^{ikx}+o(1)\qquad \text{as } x\to -\infty; \label{eq-13.5.5}$$ then $$v(k,x)= A(k)e^{ikx}+ B(k)e^{-ikx}+o(1)\qquad \text{as } x\to +\infty. \label{eq-13.5.6}$$ The complex conjugate solution then satisfies \begin{align} &\bar{v}(k,x)= e^{-ikx}+o(1)\qquad \text{as } x\to -\infty, \label{eq-13.5.7}\\ &\bar{v}(k,x)= \bar{A}(k)e^{-ikx}+ \bar{B}(k)e^{ikx}+o(1)\qquad \text{as } x\to +\infty. \label{eq-13.5.8} \end{align} Their Wronskian $W(v,\bar{v})$ must be constant (which follows from the equation for the Wronskian in ODE theory), and since $W(v,\bar{v})= W(e^{ikx},e^{-ikx})+o(1)=-2ik+o(1)$ as $x\to -\infty$ and \begin{align} &W(v,\bar{v})= W(A(k)e^{ikx}+ B(k)e^{-ikx},\label{eq-13.5.9}\\ &\bar{A}(k)e^{-ikx}+ \bar{B}(k)e^{ikx})+o(1)= -2ik \bigl( |A(k)|^2-|B(k)|^2\bigr)+o(1) \label{eq-13.5.10} \end{align} as $x\to +\infty$, we conclude that $$|A(k)|^2-|B(k)|^2 =1. \label{eq-13.5.11}$$ We interpret this as follows: the wave $A(k)e^{ikx}$ at $+\infty$ meets the potential; part of it, $e^{ikx}$, passes through to $-\infty$, and another part, $B(k)e^{-ikx}$, is reflected back to $+\infty$.
We observe that (\ref{eq-13.5.11}) means that the energies of the passed (refracted) and reflected waves together equal the energy of the original wave. We can also observe that $$A(-k)=\bar{A}(k), \qquad B(-k)=\bar{B}(k). \label{eq-13.5.12}$$ The functions $A(k)$ and $B(k)$ are scattering coefficients, and together with the eigenvalues $-k_j^2$, $$\phi_{j,xx} -V(x)\phi_j -k_j^2\phi_j=0, \qquad \phi_j\ne 0, \label{eq-13.5.13}$$ they completely define the potential $V$.

### Three dimensional scattering

Consider $-\Delta$ as the unperturbed operator and $-\Delta + V(x)$ as the perturbed one, where $V(x)$ is a smooth potential decaying fast at infinity. We ignore the possible point spectrum (which in this case will be finite and discrete). Let us consider the perturbed wave equation $$u_{tt}-\Delta u +V(x)u=0; \label{eq-13.5.14}$$ it is simpler than the Schrödinger equation. Let us consider its solution which behaves as $t\to -\infty$ as a plane wave $$u\sim u_{-\infty} = e^{ik (\boldsymbol{\omega}\cdot \mathbf{x}-t)}\qquad \text{as } t\to -\infty \label{eq-13.5.15}$$ with $\boldsymbol{\omega}\in \mathbb{S}^2$ (that means $\boldsymbol{\omega}\in \mathbb{R}^3$ and $|\boldsymbol{\omega}|=1$), $k\ge 0$.

Theorem 1. If (\ref{eq-13.5.15}) holds then $$u\sim u_{+\infty} = e^{ik (\boldsymbol{\omega}\cdot \mathbf{x}-t)}+ v(x) e^{-ikt}\qquad \text{as } t\to +\infty, \label{eq-13.5.16}$$ where the second term in the right-hand expression is an outgoing spherical wave, i.e. $v(x)$ satisfies the Helmholtz equation (9.1.19) and the Sommerfeld radiation conditions (9.1.20)--(9.1.21), and moreover $$v(x)\sim a(\boldsymbol{\theta}, \boldsymbol{\omega}; k )|x|^{-1} e^{ik|x|} \qquad\text{as } x= r\boldsymbol{\theta},\ r\to \infty,\ \boldsymbol{\theta}\in \mathbb{S}^2. \label{eq-13.5.17}$$

Sketch of Proof. Observe that $(u-u_{-\infty})_{tt}-\Delta (u-u_{-\infty})= f:=-V u$ and $(u-u_{-\infty})\sim 0$ as $t\to -\infty$; then, applying the Kirchhoff formula (9.1.21) with $0$ initial data at $t=-\infty$, we arrive at $$u-u_{-\infty}= \frac{1}{4\pi} \iiint |x-y|^{-1} f(y, t-|x-y|)\,dy \label{eq-13.5.18}$$ and one can easily prove (\ref{eq-13.5.17}) from this.

Definition 1. $a(\boldsymbol{\theta}, \boldsymbol{\omega}; k)$ is the scattering amplitude, and the operator $S(k):L^2 (\mathbb{S}^2)\to L^2 (\mathbb{S}^2)$, $$(S(k)w)(\boldsymbol{\theta})= w(\boldsymbol{\theta})+ \iint _{\mathbb{S}^2} a(\boldsymbol{\theta}, \boldsymbol{\omega}; k) w(\boldsymbol{\omega})\, d\sigma (\boldsymbol{\omega}), \label{eq-13.5.19}$$ is the scattering matrix. It is known that

Theorem 2. The scattering matrix is a unitary operator for each $k$: $$S^*(k)S(k)=S(k)S^*(k)=I. \label{eq-13.5.20}$$

Remark 1.

1. Similar results are proven when the scatterer is an obstacle rather than a potential, or both.
2. Determining the scatterer from the scattering amplitude is an important inverse scattering problem.
3. In fact, "fast decaying at infinity" means decaying faster than the Coulomb potential; for the latter the theory needs to be heavily modified.
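As a numerical illustration (our own sketch, not part of the text), one can verify (\ref{eq-13.5.11}) by integrating (\ref{eq-13.5.4}) across a sample rapidly decaying potential, starting from the asymptotics (\ref{eq-13.5.5}) on the left and reading $A(k)$, $B(k)$ off (\ref{eq-13.5.6}) on the right:

```python
import numpy as np
from scipy.integrate import solve_ivp

def scattering_coeffs(V, k, R=5.0):
    # first-order system y = (v, v'); equation (13.5.4) gives v'' = (V(x) - k^2) v
    def rhs(x, y):
        return [y[1], (V(x) - k**2) * y[0]]
    # initial data at x = -R from v = e^{ikx}
    y0 = [np.exp(-1j * k * R), 1j * k * np.exp(-1j * k * R)]
    sol = solve_ivp(rhs, [-R, R], y0, rtol=1e-10, atol=1e-12)
    v, vp = sol.y[0, -1], sol.y[1, -1]
    # match v = A e^{ikR} + B e^{-ikR} and v' = ik(A e^{ikR} - B e^{-ikR}) at x = R
    A = 0.5 * (v + vp / (1j * k)) * np.exp(-1j * k * R)
    B = 0.5 * (v - vp / (1j * k)) * np.exp(1j * k * R)
    return A, B

V = lambda x: 2.0 * np.exp(-x**2)     # a sample smooth, rapidly decaying potential
for k in (0.5, 1.0, 2.0):
    A, B = scattering_coeffs(V, k)
    print(k, abs(A)**2 - abs(B)**2)   # each value should be close to 1
```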
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 16, "x-ck12": 0, "texerror": 0, "math_score": 0.994544267654419, "perplexity": 472.1453270352084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816083.98/warc/CC-MAIN-20180225011315-20180225031315-00284.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/208045-integrals.html
# Math Help - integrals

1. ## integrals

Hey. Not quite sure if this post is in the right category! But I have an integral that I'm struggling with. It's a definite integral

2. ## Re: integrals

The best forum for this topic is the calculus forum. I would begin by rewriting the integrand as:

$\int_0^1 e^{7\ln(42)x}\,dx$
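If you want to sanity-check the closed form this leads to, namely $(42^7-1)/(7\ln 42)$, a couple of lines of Python will do (my own check, assuming the original integrand was $42^{7x}$):

```python
import math
from scipy.integrate import quad

closed_form = (42**7 - 1) / (7 * math.log(42))
numeric, _ = quad(lambda x: math.exp(7 * math.log(42) * x), 0, 1)
print(closed_form, numeric)  # the two values should agree
```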
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342950582504272, "perplexity": 3266.450419772746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663762.45/warc/CC-MAIN-20140930004103-00392-ip-10-234-18-248.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/109300/squares-of-complex-numbers
# Squares of Complex numbers

I have one problem with Complex numbers. $$(-6i)^2 = (1-6i)^2$$ This is ok?

- For the right-hand side, expand like you do with $(a+b)^2=a^2+2ab+b^2$, or better for us, $a^2+b^2+2ab$. Let $a=1$, $b=-6i$. Then $a^2+b^2=1^2+(-6i)^2= -35$, and $2ab=-12i$. So the right-hand side is $-(35+12i)$. –  André Nicolas Feb 14 '12 at 16:06 What have you tried? –  lhf Feb 14 '12 at 16:06 More generally, $(-z)^2 = (1-z)^2$ iff $z=1/2$. –  lhf Feb 14 '12 at 16:08 It is ok, when you would calculate $(-6i)^2-(1-6i)^2 \mod (1-12i)=0$. –  draks ... Feb 14 '12 at 19:03 It is not. The left-hand side is a real number but the right-hand side is not. - $$(-6i)^2 = 36i$$ or -6 * (-1) = 6 ? –  lala23 Feb 14 '12 at 16:00 @lala23, $(-6i)^2 = (-6)^2(i^2) = (36)\cdot(-1) = -36$. –  lhf Feb 14 '12 at 16:01 @lala: Again, the left one's real, but the other is not. –  J. M. Feb 14 '12 at 16:03 What we have on the left-hand side is: $(-6i)^2 = (-6)^2(i)^2 = 36\cdot(-1) = -36.$ On the right-hand side, we have $(1-6i)^2 = (1-6i)(1-6i) =$ ... $= -35 - 12i.$
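A quick way to see this (not in the original thread) is to let Python's complex type do the arithmetic:

```python
print((-6j)**2)      # (-36+0j): a real number
print((1 - 6j)**2)   # (-35-12j): not real, so the two sides cannot be equal
```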
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9558924436569214, "perplexity": 1647.9564176327297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888210.96/warc/CC-MAIN-20140722025808-00016-ip-10-33-131-23.ec2.internal.warc.gz"}
http://clay6.com/qa/14522/there-is-an-error-of-pm-0-04-cm-in-the-measurement-of-the-diameter-of-a-sph
# There is an error of $\pm 0.04\;cm$ in the measurement of the diameter of a sphere. When the radius is $10\;cm$, the percentage error in the volume of the sphere is :

$\begin {array} {1 1} (1)\;\pm 1.2 & \quad (2)\;\pm 1.0 \\ (3)\;\pm 0.8 & \quad (4)\;\pm 0.6 \end {array}$

Since $V = \frac{4}{3}\pi r^3$, the relative error is $\frac{\Delta V}{V} = 3\,\frac{\Delta r}{r}$. An error of $\pm 0.04\;cm$ in the diameter means $\Delta r = \pm 0.02\;cm$, so $\frac{\Delta V}{V} = 3 \times \frac{0.02}{10} = 0.006$, i.e. a percentage error of $(4)\;\pm 0.6$
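A short Python check of the linearized estimate against the exact volume change (added for illustration):

```python
r, dr = 10.0, 0.02                      # radius error is half the diameter error
linear = 3 * dr / r                     # linearized dV/V = 0.006, i.e. 0.6%
exact = ((r + dr)**3 - r**3) / r**3     # exact relative change in r^3
print(linear, exact)                    # 0.006 vs 0.0060012..., in close agreement
```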
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9936634302139282, "perplexity": 109.75188247943575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00198-ip-10-171-10-70.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/450403/if-the-mth-term-of-an-arithmetic-progression-is-frac1n-and-the-nth-te?answertab=oldest
# If the $m$th term of an Arithmetic Progression is $\frac{1}{n}$ and the $n$th term is… Problem : If the $m$th term of an A.P is $\frac{1}{n}$ and the $n$th term is $\frac{1}{m}$ then prove that the sum to $mn$ terms is $\frac{mn+1}{2}$ My working : Let $a$ be the first term of the progression and $d$ the common difference then: $$\tag1T_m = \frac{1}{n}= a+(m-1)d$$ $$\tag2 T_n = \frac{1}{m} = a+(n-1)d$$ Subtracting (1) from (2) and solving for $d$ we get : $d = \frac{1}{mn}$ Please suggest what to do further. Thanks - then you have to count the sum of $T_1,...,T_{mn}$ it seems, by computing $S_{mn} = \frac{T_1 + T_{mn}}{2}mn$... - Now that you have $d$ you can calculate $a$ from $(1)$ $$T_1 = a = \frac{1}{n}-(m-1)d = \frac{m}{mn}-\frac{m-1}{mn}=\frac{1}{mn}$$ Having $a$ and $d$ Apply the formula for arithmetic progression sum. $$T_{mn} = a + (mn-1)d = \frac{1}{mn} + \frac{mn-1}{mn} = 1$$ $$S_{mn} = mn\frac{T_1 + T_{mn}}{2} = mn\frac{\frac{1}{mn}+1}{2} = \frac{mn(mn+1)}{2mn} = \frac{mn+1}{2}$$ Q.E.D. - If $\,a_1,a_2,....\;$ is an arithmetic progression with common difference $\,d\,$ , we have that $$S_r:=a_1+a_2+\ldots +a_r=\frac r2\left(2a_1+(r-1)d\right)$$ $$\frac{m+n}{mn}=\frac1n+\frac1m=2a_1+(m+n-2)d=2a_1+(m+n-2)\frac1{mn}\implies$$ $$2a_1=\frac2{mn}\implies \color{red}{a_1=\frac1{mn}}\;\;,\;\;\text{and since also}\;\;\color{red}{d=\frac1{mn}}\implies$$ $$S_{mn}=\frac{mn}2\left(2a_1+(mn-1)d\right)=\frac{mn}2\left(\frac2{mn}+1-\frac1{mn}\right)=\frac12+\frac{mn}2=\frac{mn+1}2$$
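A quick numerical check of the result (not part of the original thread): pick concrete $m$, $n$, build the progression with $a=d=\frac{1}{mn}$, and compare the sum of the first $mn$ terms against $\frac{mn+1}{2}$:

```python
from fractions import Fraction

m, n = 4, 7
a = d = Fraction(1, m * n)
terms = [a + k * d for k in range(m * n)]
# the m-th term is 1/n and the n-th term is 1/m, as in the problem statement
assert terms[m - 1] == Fraction(1, n) and terms[n - 1] == Fraction(1, m)
print(sum(terms), Fraction(m * n + 1, 2))  # both print 29/2
```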
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185753464698792, "perplexity": 125.13444671172317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00348-ip-10-171-96-226.ec2.internal.warc.gz"}
https://kerodon.net/tag/01YT
# Kerodon

Corollary 5.5.8.8. Let $\operatorname{\mathcal{C}}$ be a simplicial category having the property that, for every pair of objects $X,Y \in \operatorname{\mathcal{C}}$, the simplicial set $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)_{\bullet }$ is an $\infty$-category. Let $\operatorname{\mathcal{C}}'$ denote the simplicial subcategory of $\operatorname{\mathcal{C}}$ having the same objects, with morphism simplicial sets given by $\operatorname{Hom}_{\operatorname{\mathcal{C}}'}(X,Y)_{\bullet } = \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)_{\bullet }^{\simeq }$. Then the inclusion of simplicial categories $\operatorname{\mathcal{C}}' \hookrightarrow \operatorname{\mathcal{C}}$ induces an isomorphism of $\infty$-categories $\operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}}') \simeq \operatorname{Pith}( \operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}}) )$.

Proof. Let $\sigma$ be an $n$-simplex of the homotopy coherent nerve $\operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}})$, which we identify with a simplicial functor $F: \operatorname{Path}[n]_{\bullet } \rightarrow \operatorname{\mathcal{C}}$ carrying each $i \in [n]$ to an object $C_{i} \in \operatorname{\mathcal{C}}$. If $T \subseteq [n]$ is a nonempty subset having smallest element $i$ and largest element $k$, let us write $F(T)$ for the corresponding vertex of the simplicial set $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(C_ i, C_ k)_{\bullet }$. If $S \subseteq T$ is a subset containing $i$ and $k$, let us write $F(S \subseteq T): F(T) \rightarrow F(S)$ for the corresponding edge of the simplicial set $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(C_ i, C_ k)_{\bullet }$.

Let us abuse notation by identifying $\operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}}')$ with a simplicial subset of $\operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}})$. Unwinding the definitions, we see that $\sigma$ is contained in $\operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}}')$ if and only if the following condition is satisfied:

$(1)$ For every inclusion $S \subseteq T$ of nonempty subsets of $[n]$ having the same smallest element $i$ and largest element $k$, the edge $F(S \subseteq T): F(T) \rightarrow F(S)$ is an isomorphism in the $\infty$-category $\operatorname{Hom}_{\operatorname{\mathcal{C}}}( C_ i, C_ k)_{\bullet }$.

Using the thinness criterion of Proposition 5.5.8.7, we see that $\sigma$ belongs to the pith $\operatorname{Pith}( \operatorname{N}_{\bullet }^{\operatorname{hc}}(\operatorname{\mathcal{C}}))$ if and only if the following a priori weaker condition is satisfied:

$(2)$ For every triple of elements $0 \leq i \leq j \leq k \leq n$, the edge $F( \{ i,k \} \subseteq \{ i,j,k\} ): F( \{ i, j, k\} ) \rightarrow F( \{ i, k \} )$ is an isomorphism in the $\infty$-category $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(C_ i, C_ k)_{\bullet }$.

To complete the proof, it will suffice to show that $(2) \Rightarrow (1)$. Assume that $(2)$ is satisfied, and suppose that we are given nonempty subsets $S \subseteq T$ of $[n]$ having the same smallest element $i$ and largest element $k$. We wish to show that $F(S \subseteq T)$ is an isomorphism in the $\infty$-category $\operatorname{Hom}_{\operatorname{\mathcal{C}}}( C_ i, C_ k)_{\bullet }$.
Since the collection of isomorphisms contains all identity morphisms and is closed under composition (Remark 1.3.6.3), we may assume without loss of generality that the difference $T \setminus S$ contains exactly one element $j$. Set $S_{-} = \{ s \in S: s < j \}$ and $S_{+} = \{ s \in S: s > j \}$. Let $i'$ be the largest element of $S_{-}$, and let $k'$ denote the smallest element of $S_{+}$. Unwinding the definitions, we see that the edge $F(S \subseteq T)$ is the image of $F( \{ i',k' \} \subseteq \{ i',j,k'\} )$ under the functor $\operatorname{Hom}_{\operatorname{\mathcal{C}}}( C_{i'}, C_{k'})_{\bullet } \xrightarrow { F(S_{+}) \circ \bullet \circ F(S_{-}) } \operatorname{Hom}_{\operatorname{\mathcal{C}}}( C_ i, C_ k)_{\bullet },$ and is therefore an isomorphism by virtue of assumption $(2)$. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9899300336837769, "perplexity": 56.72902340882199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00677.warc.gz"}
https://www.arxiv-vanity.com/papers/0909.3526/
# Holographic quantum liquids in 1+1 dimensions

Ling-Yan Hung and Aninda Sinha

Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada

###### Abstract:

In this paper we initiate the study of holographic quantum liquids in 1+1 dimensions. Since the Landau Fermi liquid theory breaks down in 1+1 dimensions, it is of interest to see what holographic methods have to say about similar models. For theories with a gapless branch, the Luttinger conjecture states that there is an effective description of the physics in terms of a Luttinger liquid which is specified by two parameters. The theory we consider is the defect CFT arising due to a probe D3 brane in the AdS Schwarzschild planar black hole background. We turn on a fundamental string density on the worldvolume. Unlike higher dimensional defects, a persistent dissipationless zero sound mode is found. The thermodynamic aspects of these models are considered carefully and certain subtleties with boundary terms are explained which are unique to 1+1 dimensions. Spectral functions of bosonic and fermionic fluctuations are also considered and quasinormal modes are analysed. A prescription is given to compute spectral functions when there is mixing due to the worldvolume gauge field. We comment on the Luttinger conjecture in the light of our findings.

## 1 Introduction

AdS/CFT [1] is a powerful tool in extracting information about the strongly coupled limit of a conformal field theory. The correspondence has been extended to the finite temperature limit [2]. In particular, there is recent interest in understanding the thermodynamics, transport and spectral properties of strongly coupled low dimensional systems, which are of interest in condensed matter physics [3, 4, 5, 6, 7, 8, 9, 10, 11] (and [12, 13, 14] for recent reviews). Quantum liquids in one dimension have non-Fermi liquid properties and are thought to be of some relevance to the non-Fermi liquid properties of quasi-1D metals [15] and high-$T_c$ superconductors [16].

Apart from their ubiquitous appearance and wide applications, one-dimensional Fermi liquids are of special theoretical interest in their own right. In one dimension, Fermi liquids behave very differently from their higher dimensional counterparts, which, in the weak coupling limit, can be very generally described by the Landau Fermi liquid theory (see for example a textbook introduction in [17]). The Landau Fermi liquid theory asserts that the free fermion description of the system is only mildly altered under the introduction of interactions, particularly if we are sufficiently close to the Fermi surface. Interactions between the fermions can be accounted for by introducing a renormalised mass for the fermionic particles. These fermionic excitations decay in time, but sufficiently slowly as we get closer to the Fermi surface, so that they are reasonably well defined quasi-particles within their lifetime. This picture, however, completely breaks down in one dimension. The fundamental difference stems from the different topologies of the Fermi surfaces, which, in one dimension, comprise only two distinct points in momentum space, instead of forming a continuous surface. This means that for an excitation from the ground state with a sufficiently small momentum, the energy of the excitation is completely determined.
In higher dimensions a particle-hole pair can acquire a continuous range of energies even for arbitrarily small total momentum. In 1D, this means that for small momentum change about the Fermi points the density fluctuations, which are collective fluctuations, are eigenstates of the Hamiltonian. In fact, one can assume a quasi-particle description in one dimension and calculate the particle decay rate, which diverges for arbitrarily small but finite interactions, indicating a break-down of the quasi-particle picture.

The Luttinger liquid is introduced to model the effects of interaction in a one dimensional fermion system in an analytically controlled context [18]. For a spinless fermion, and ignoring back-scattering, which can be shown to be irrelevant in the low energy limit, the model is reduced via bosonisation to a free theory that can be exactly solved, which gives us physical insight into the low energy spectrum [19, 20, 21] (and for a review see [15, 22]). The energy eigenstates, as anticipated above, are related to density fluctuations of the fermion liquid by a Bogoliubov transformation. This is a massless excitation with linear dispersion, which is one of the prominent features of the Luttinger liquid. In the more sophisticated model of the spin-half Luttinger liquid, the diagonalisation of the Hamiltonian leads to two independent eigen-bosonic fluctuations, corresponding to the spin and charge density fluctuations. They are again massless excitations which often move with different speeds, depending on the coupling. This is the famous spin-charge separation effect [21, 15, 22], which has recently been observed in experiments [23, 24].

The Luttinger conjecture states [21, 15] that any 1D model of correlated quantum particles (bosons or fermions) having a branch of gapless excitations will have as its stable low-energy fixed point the Luttinger model. The asymptotic low energy properties of the degrees of freedom of this branch will be described by an effective renormalized Luttinger model characterized by only 2 parameters: a renormalized Fermi velocity and a renormalized stiffness constant.

While successful in providing lots of physical intuition, the Luttinger liquid model involves many simplifications, without which analytic control would have been impossible. In fact lattice methods are used to extract effective parameters of the continuum model. The holographic model is thus another analytic handle towards understanding low-dimensional systems. The usefulness of these models is that the dynamic properties can be extracted more easily than with lattice methods. For example, in higher dimensions the fermion sign problem makes it next to impossible to extract useful information about fermion correlation functions using the lattice method.

The model we will study in this paper is a 1+1 dimensional defect CFT (dCFT). This is made from intersecting D3 branes and is the finite temperature generalization of [25]. The counterterms needed to compute correlation functions at zero temperature in this model were discussed in [26]. In order to be in a gapless phase, we will consider the phase described by the so-called black hole embeddings [27]. Moreover, we will turn on a finite chemical potential so that the black hole embeddings are the only physical ones [28]. Considering a defect CFT of this kind is also useful since it facilitates comparison with such theories in other dimensions.
The crucial features of the D3-D7 model with a finite chemical potential, which makes a 3+1 dimensional theory with fundamental matter, are:

• A phase transition for low baryon densities and low temperatures [28].
• In a certain momentum regime, quasiparticles become a good description for the excitations in the spectral functions [29, 30, 31]. Spectral functions typically contain several Breit-Wigner resonances. A collective mode resembling the zero-sound was found [32, 33]. All modes have a dissipative part.
• Dispersion relations for quasiparticles look quadratic in the momentum at low momenta and linear at high momenta. At higher momenta, the peaks dissolve and the notion of quasiparticles is lost. The asymptotic slope is less than unity and a notion of a limiting velocity emerges [30, 34].

These features are expected to hold even in the 2+1 dimensional dCFT [35].

The 1+1 dCFT at zero temperature and chemical potential is obtained by placing probe D3 branes in the background geometry created by $N_c$ D3 branes and considering the probe approximation $N_f\ll N_c$. The probe sees an $AdS_3\times S^1$ geometry. This background is known to preserve half the original supersymmetry and realizes a two dimensional supersymmetry algebra. The conformal symmetry group is inherited from the $AdS_3$. The massless open string degrees of freedom correspond to a pair of Yang-Mills multiplets coupled to a bifundamental hypermultiplet living at the intersection. We send $N_c$ to infinity keeping the 't Hooft coupling fixed. This leaves a single Yang-Mills multiplet coupled to the bifundamental at a 1+1 dimensional defect. We will be concerned with the case where the probe sits at the origin of the part of the AdS space transverse to its worldvolume. The modes corresponding to contracting the $S^1$ inside the $S^5$ saturate the Breitenlohner-Freedman bound for scalars in $AdS_3$ and hence are stable. The generating function for the field theory is given by the classical action of superstring theory on $AdS_5\times S^5$ coupled to a Dirac-Born-Infeld theory on $AdS_3\times S^1$. We will be considering this setup at finite temperature. Finite temperature is introduced by replacing the $AdS_5$ by the 5-dimensional AdS-Schwarzschild black hole. This trick has been used to analyse the behaviour of higher dimensional defect CFTs at finite temperature [28, 36, 37, 27, 30, 31, 29, 38, 39, 35, 40].

Part of this exercise is to see if we can probe the validity of the Luttinger conjecture using AdS/CFT. The other part is to probe differences between 1+1 dimensional theories and higher dimensional ones. To introduce a finite chemical potential and quark condensate, we look for a probe brane solution in the AdS black hole background with a non-trivial brane profile and world-volume electric field. The thermodynamics of the system is studied in detail. In the case when the probe wraps the maximal circle (in the notation used in this paper, $\chi=0$) we analytically derive the expressions for the thermodynamic quantities, where we find a heat capacity scaling as $T$ in the high temperature limit, as expected in a 1+1 d system, but as $T^2$ in the low temperature region where the baryon density becomes important. For the non-maximal case ($\chi\neq 0$) we turn to numerics. The specific heat in this case closely resembles that of the $\chi=0$ case.

We then compute holographically the Green's functions of various scalar, vector and fermionic operators by studying the world-volume fluctuations of the probe brane about the background solution. Our analysis is divided into several steps. To begin with, we study the fluctuations of the longitudinal electric field for the trivial embedding (where the brane passes straight through the horizon), very much in the spirit of [32].
We find that there is a massless (zero-sound) mode with dispersion given by the conformal result, i.e. $\omega = k/\sqrt{q}$, where $q$ is the defect dimension, equal to one in this case. We obtain this result analytically in the hydrodynamic limit, and find no dissipation. We extend the study to larger frequencies and momenta numerically, and find that the dispersion relation is unmodified, and remains dissipationless. More surprisingly, while it is by now known that these massless (sound) modes in holographic defect models disappear at finite temperatures for higher dimensional defects [41], the massless mode in our 1D defect survives at finite temperatures, with identical dispersion as in the case of zero temperature.

We then generalise the investigation to the case of mixed excitations, where the fluctuations of the electric field longitudinal to the defect are mixed with those of the embedding profile at finite momentum exchange, and in the presence of a background world-volume electric field and a non-trivial probe embedding. To deal with the mixing, we explain in detail how the quasi-normal modes and spectral functions are obtained in principle [42], and implement the procedure numerically. It is interesting to see that the massless mode mentioned above survives even in this limit without dissipation. There is also a mode corresponding to pure dissipation, which originates from the profile fluctuation even before mixing is introduced. The dispersion of this mode is given schematically by $\omega=-i(a+bk^2)$, for some constants $a$ and $b$ dependent on the background electric field (controlling the chemical potential) and the probe embedding (controlling the quark mass and condensate). The higher quasi-normal modes for bosonic excitations, however, behave roughly as the higher dimensional ones, with a dispersion relation quadratic at small $\omega$ and $k$, which approaches that of the speed of light asymptotically for large $\omega$ and $k$.

It is interesting to note that while our analytic expressions for the dispersion of the massless mode appear to be independent of the charge density $d$, the $d\to 0$ limit at zero temperature is not a smooth one. In fact when $d=0$ the fluctuations can be solved exactly analytically at zero temperature and we find that the massless mode disappears. In fact the spectral function becomes a constant, for both the vector and scalar modes. At zero $d$ and finite temperature, the scalar fluctuations can again be solved exactly and the quasi-normal modes form a discrete infinite tower of modes. The massless dissipationless mode that appears in the longitudinal electric field fluctuations, however, remains intact here.

This paper is organized as follows. In section 2, we discuss the setup and specify our conventions. In section 3, we consider the thermodynamics of our embeddings and carefully analyse the counterterms needed for the calculations. In section 4, we turn to the analysis of zero sound. In sections 5 and 6 we consider the spectral functions for bosonic and fermionic fluctuations. We conclude with a discussion in section 7. Appendix A gives some analytic results for the thermodynamics. Appendix B has a proof that when the real part of the quasinormal mode is non-zero, then the imaginary part of the mode has to be necessarily positive indicating no instabilities. Appendix C has some explicit calculations for the Green functions for the mixed modes considered in section 5.

## 2 D-brane configurations

We would like to begin with a review of the brane configuration, namely the intersecting D3 systems, considered in this note.
Its zero temperature limit has been studied in depth in [25, 26]. We have $N_c$ D3 branes and $N_f$ D3 branes intersecting over a 1+1 dimensional domain with four relatively transverse dimensions. The low energy effective field theory is given by a supersymmetric Yang-Mills theory on a 1+1 dimensional defect. In the large $N_c$ and large 't Hooft coupling limit, such that $N_f\ll N_c$, one could replace the $N_c$ branes by the AdS geometry and treat the $N_f$ D3 branes as probes in the curved background. The excitations on the probe are then dual to the low energy excitations on the 1+1 dimensional defect. Although the 1+1 dimensional defect will be the primary focus of this paper, sometimes we will draw parallels to the well studied D3-D7, 3+1 dimensional setup as well. The brane-scans for the two setups are

$$\begin{array}{l|cccccccccc} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline N_c\ \text{D3} & \times & \times & \times & \times & - & - & - & - & - & - \\ 1.\ N_f\ \text{D7} & \times & \times & \times & \times & \times & \times & \times & \times & - & - \\ 2.\ N_f\ \text{D3} & \times & \times & - & - & \times & \times & - & - & - & - \end{array} \qquad (1)$$

In the zero temperature limit both these theories are supersymmetric. We will consider the abelian case only and leave the analysis of non-abelian effects for future work. At finite temperatures, the AdS geometry is deformed to that of an asymptotically AdS black hole [43]. The metric is given by

$$ds^2=\frac{r_H^2u^2}{L^2}\Big[-\Big(1-\frac{1}{u^4}\Big)dt^2+\sum_{i=1}^{3}dx_i^2\Big]+\frac{L^2}{u^2}\Big[\Big(1-\frac{1}{u^4}\Big)^{-1}du^2+u^2d\Omega_5^2\Big], \qquad (2)$$

where the metric for the 5-sphere can be chosen to be

$$d\Omega_5^2=d\theta^2+\cos^2\theta\, d\zeta_1^2+\sin^2\theta\,\big(d\zeta_2^2+\sin^2\zeta_2\, d\zeta_3^2+\cos^2\zeta_2\, d\zeta_4^2\big). \qquad (3)$$

The temperature of the system is related to the radius of the black hole horizon by

$$T=\frac{r_H}{\pi L^2}. \qquad (4)$$

### Embedding

In the D3-D7 setup, the probe wraps $AdS_5\times S^3$, while in the D3-D3 setup the probe wraps $AdS_3\times S^1$. We will turn on a world-volume gauge field, which corresponds on the gauge theory side to having a chemical potential or a finite baryon density. In the presence of a chemical potential, it can be shown that the so-called black hole embedding is a physical embedding, which we will focus on. It is common to make a change of coordinates $\chi=\cos\theta$ in the case of D3-D7, and for D3-D3 it is convenient to pick instead $\chi=\sin\theta$. For black-hole embeddings, we require regularity at the horizon. In the literature [30], this is implemented by switching to a suitable radial coordinate and requiring that $\chi$ be regular there (in the near-horizon limit this is equivalent to requiring that the expansion of $\chi$ in powers of $(u-1)$ goes like (10) below). We also set $A_0=0$ at the horizon. The resultant probe brane action for the D3-D$p$ setup is given by

$$\frac{F_{\rm bulk}}{k_BT}=I_{\rm bulk}=T_{Dp}\int d^{p+1}\sigma\,\sqrt{\det(G+2\pi\alpha' F)}=\frac{1}{k_BT}\int d^q\sigma\, du\,\mathcal{L}_{\rm bulk}=\frac{N}{k_BT}\int d^q\sigma\, du\,(1-\chi^2)^{\frac{p-1}{4}}\sqrt{|\tilde g_{00}|\,\tilde g_{11}^{\frac{p-1}{2}}\,\tilde g_{uu}-(2\pi\alpha')^2\,\tilde g_{11}F_{0u}^2}, \qquad (5)$$

where $\tilde g$ is the induced world-volume metric; explicitly for the D3-D3 case, allowing for a non-trivial profile $\chi(u)$, we have

$$d\tilde s^2=\frac{r_H^2u^2}{L^2}\Big[-\Big(1-\frac{1}{u^4}\Big)dt^2+dx^2\Big]+\frac{L^2}{u^2}\Big[\Big(1-\frac{1}{u^4}\Big)^{-1}\Big(1+\frac{u^2(\partial_u\chi)^2}{1-\chi^2}\Big)du^2+u^2(1-\chi^2)\,d\zeta_1^2\Big]. \qquad (6)$$

Here, $F_{\mu\nu}$ are world-volume gauge field strengths. In the Euclidean signature, the integral over the time direction is between zero and $1/T$ in our units where $k_B$, the Boltzmann constant, is set to one. This factor is made explicit after the second equality in equation (5). The normalization constant is given by

$$N=N_0\,T^{q+1},\qquad N_0=\frac{2V_{p-q-1}\,N_cN_f\,\lambda^{\frac{p-3}{4}}}{(2\pi)^{(p-1)}}, \qquad (7)$$

where $q$ is the spatial dimension of the intersection domain, and $V_{p-q-1}$ is the volume of the sphere wrapped by the probe D$p$ brane. We choose the gauge $A_u=0$. The field strengths can be readily solved in terms of integration constants (8), where $d$ and $B$ are dimensionful integration constants from solving the equations of motion of $A_0$ and $A_x$ respectively. They in fact parametrise the conserved magnetic and electric charges of the solution. Note that we have absorbed a factor of $2\pi\alpha'$ in the definition of the constants $d$ and $B$. For notational simplicity we will take $2\pi\alpha'=1$, unless stated otherwise. We will concentrate on the simple case where there is a finite baryon density, corresponding to a non-trivial $d$, while leaving $B=0$.
We will concentrate on the simple case where there is a finite baryon density, corresponding to a non-trivial , while leaving . A0(u)∼{μ+(d/r2H)u2q=3,μ+dloguq=1. (9) near the boundary. The thermodynamics interpretation of various quantities will be discussed in the next section. So for the numerics we have two tunable parameters and . The near horizon expansion for is given by χ=χ0+χ1(u−1)+O((u−1)2),, (10) with for D3-D7 and for the D3-D3 setup. Here we have introduced dimensionless defined as ~d=dr3Hq=3,~d=drHq=1. (11) Typical profiles for D3-D3 at various values of are plotted in figure (2). χ(u)∼{mu+cu3q=3,mu+cloguuq=1. (12) In the supersymmetric limit is an exact solution, both for and . Hence, we will continue to use the same notation and for the D3 probe, as in the higher dimensional case. At , saturates the Brietenlohner-Freedman bound. The asymptotic behaviour presented in (12) is as expected since the conformal dimension of the dual operator , where is the mass of the scalar field ( for ), evaluates to for at . This already suggests that the two possible large behaviour are and , as we have confirmed. This however, also implies that only one conformally-invariant boundary condition is possible[44]. One could require physical fluctuations to behave as and setting the terms to zero. These, as mentioned above, correspond to the supersymmetric solutions. On the contrary, requiring the vanishing of the term at the boundary and keeping the term violates conformal invariance, since that is a scale dependent statement i.e. requires the choice of an arbitrary scale . There is therefore likewise only one natural way to incorporate an external source , and that is to require that behaves like in the AdS boundary . From the point of view of the dual field theory, this means that the expectation value of the dual operator has a logarithmic violation of conformal invariance in the presence of a source, since the two point function contains a logarithmic divergence. Considerations here apply similarly to the leading term of the gauge field in (9). We will encounter this issue again in the discussion of the thermodynamics of the system. Therefore, the roles played by and are interchanged, namely that corresponds to the quark condensate and to a source[26]. This is in contrast with the scenarios [27]. We will make further comments about the appearance of the as we proceed. ## 3 Thermodynamics In this section we consider the thermodynamics of the 1+1 defect CFT. There are some interesting differences as compared to higher dimensional defects as will become clear. These differences arise due to the terms in the asymptotics of and . We introduce the dimensionful source , which is the 1D analog of the quark mass in higher dimensional probes [27], and nq=∫dζ1∂Lbulk∂Frt=2π(2πα′NfTD3)d. (13) Since and are parameters of the dual defect CFT at , the phase diagram of the system is obtained by keeping them fixed as the temperature is varied. The ratio is therefore a handy parameter to label the dual CFT. We will find that below a certain value for the system exhibits an instability, similar to the D3-D7 case. We will be interested in values of where the system is stable. It should also be noted that while the Mermin-Wagner theorem states that at finite temperatures continuous symmetry cannot be generally broken in dimensions , the restriction is evaded in the large limit [14]. To analyse the thermodynamics of this system we begin by computing the Euclidean action on-shell. 
From the asymptotic behaviour of the fields, it is clear that the action is divergent. The counterterms needed to cancel the divergences arising from $\chi$ have been given in [26]:

$$L_1 = -\frac{N}{2}\sqrt{\gamma}, \qquad (14)$$
$$L_2 = \frac{N}{4}\log(\Lambda/r_0)\,\sqrt{\gamma}\,R_\gamma, \qquad (15)$$
$$L_4 = \frac{N}{2}\sqrt{\gamma}\,\chi^2(x,\tilde\Lambda)\Big(1-\frac{1}{\log(\Lambda/r_0)}\Big), \qquad (16)$$

where $\gamma$ is the induced metric on the boundary, $R_\gamma$ is the Ricci scalar evaluated on $\gamma$, and we have defined $\tilde\Lambda=\Lambda/r_H$. Since we are considering a flat boundary theory, $L_2$ does not contribute. It is important to note here that, since we have switched coordinates to dimensionless $u$, all functions of $u$ in the boundary limit are evaluated at $\tilde\Lambda$. On the other hand, when the cut-off appears explicitly as a coefficient in the counterterms, it appears simply as $\Lambda$, and in the case of explicit log terms, as $\log(\Lambda/r_0)$, for some arbitrary scale $r_0$. To make sense of the dependence on $r_0$, one should recall that there is a logarithmic violation of conformal invariance at non-vanishing $c$ and $d$, as discussed in the previous section. Therefore there is dependence on an arbitrary scale $r_0$, which appears inside these logs. The dependence on $r_0$ reminds us that whenever we perform a rescaling $r_0\to\lambda r_0$, corresponding to a rescaling to a different energy scale in the dual theory, a contact term (which contributes only to the finite part of the Green's function) proportional to $\log\lambda$ would appear in the action, as discussed in [45]. This will not affect the physics (e.g. spectral functions, quasi-frequencies etc) we are interested in, and we will set $r_0$ to unity in the rest of our discussions. For non-vanishing $d$, there is also a logarithmic divergence in the action of the form $d^2\log\tilde\Lambda$. This is a new divergence, appearing only in 1+1 dimensions. We will add

$$L_F=\frac{N}{2}\log\Lambda\; A_\mu A_\nu\,\gamma^{\mu\nu}\sqrt{\gamma} \qquad (17)$$

as a counterterm to remove the logarithmic divergence due to the gauge field. Note that the term preserves gauge invariance despite looking otherwise. The reason is that AdS/CFT puts restrictions on the allowed gauge transformations such that the asymptotics of $A_\mu$ are not altered. Since $A_0\sim\mu+d\log u$, the leading term at the boundary of an allowed gauge parameter could at best be of order $\log u$. The gauge variation of (17) then again vanishes at the AdS boundary. It is shown in the appendix that, in the gauge $A_u=0$, the boundary term can again be written in terms of gauge invariant variables.

What is the thermodynamic interpretation of $d$ and $m$? Firstly we observe that with a radial electric field, there is effectively a number of fundamental strings stretching along the worldvolume of the probe D3 brane. This density of strings is given by $n_q$. As a result it is natural to interpret $d$ as being proportional to a number density. This leads to identifying $\mu$ as the chemical potential (as we will see below, the physical chemical potential differs from $\mu$ by a constant in the case of a probe D3 brane; this applies also to $m$, discussed below). Similarly, in the case of the higher dimensional defects, the brane separation, which is controlled by $m$, is identified as being proportional to the quark mass. However, unlike the higher dimensional defects, for the probe D3, $m$ could be interpreted as the brane separation only in the supersymmetric theory. Moreover, from the point of view of AdS/CFT, the source of the dual operator should be identified as the coefficient of the $\log u/u$ term, i.e. $c$. As a result we will expect

$$\frac{\delta F}{\delta n_q}=\mu,\qquad \frac{\delta F}{\delta c}=m. \qquad (18)$$

This is consistent with the interpretation in [26], where $m$ is identified as a vev in the dual field theory. Thermodynamically, we then have that the Euclidean action is a function of $n_q$ and the source $c$, rather than of $n_q$ and $m$ as in the higher dimensional defect case.
Since $c$ is kept fixed and corresponds to a source in the dual defect CFT, we can identify $T=r_H/\pi$ as the temperature. The free energy can thus be interpreted as the Helmholtz free energy. The counterterms evaluate to

$$I_{ct}=\frac{N}{k_BT}\int d^q\sigma\left(-\frac{\tilde\Lambda^2}{2}+d^2\log\tilde\Lambda+\frac{\mu d}{r_H^2}+\frac12\big(m+c\log\tilde\Lambda\big)^2-mc-\frac{c^2}{2}\log\tilde\Lambda-\big(d^2-r_H^2c^2\big)\frac{\log r_H}{2r_H^2}\right),\qquad(19)$$

and we define . Note that the last term above arises precisely because of the explicit appearance of $r_H$ in the definition of the counterterms. After considering the variation of the action, this leads to

$$\delta F/N=-\big(m-c\log r_H\big)\,\delta c+\frac{\mu-d\log r_H}{r_H}\,\delta\tilde d.\qquad(20)$$

This confirms our expectation that the action is a function of $(c,\tilde d)$. It is not surprising that $m$ and $c$ appear in this particular combination. This is because the dimensionful vevs are defined via

$$\chi(r)=\frac1r\big(\bar M+\bar C\log r\big),\qquad A_t(r)=\bar\mu+d\log r,\qquad(21)$$

implying that $\bar M\propto m-c\log r_H$ and similarly $\bar\mu=\mu-d\log r_H$. It is thus consistent to have these physical quantities appearing in the variation of the free energy.

### 3.1 Specific heat

Having discussed the interpretation of the Euclidean action in the previous section, we are ready to compute the specific heat capacity.

#### 3.1.1 Case I: χ=0

For simplicity, let us begin with the massless case where $\chi=0$. The entropy is obtained by differentiating the action at constant chemical potential. The chemical potential is given by the constant term in the asymptotic expansion of the gauge potential $A_0$:

$$\bar\mu=\mu-d\log r_H=\lim_{u\to\infty}\Big[A_0(u)-A_0'(u)\,u\log u\Big]-d\log r_H,\qquad A_0(u)=\int_1^u du\,F_{u0}.\qquad(22)$$

At $\chi=0$, these are explicitly

$$F_{u0}=A_0'(u)=\frac{r_Hd}{\sqrt{r_H^2u^2+d^2}},\qquad \bar\mu=d\Big(\log2-\log\big(r_H+\sqrt{d^2+r_H^2}\big)\Big).\qquad(23)$$

Combining with the counterterms, we have

$$\frac{F}{N_0}=\frac14\Big(-2r_H\sqrt{d^2+r_H^2}+d^2(1+\log4)-2d^2\log\big(r_H+\sqrt{d^2+r_H^2}\big)\Big).\qquad(24)$$

As discussed in the previous section, at zero $\chi$ the free energy is simply a function of $r_H$ and the baryon density $d$. Therefore the entropy is given by

$$\frac{S}{N_0}=-\frac{1}{N_0}\frac{\partial F}{\partial T}\bigg|_{d}=\pi\sqrt{d^2+(\pi T)^2},\qquad(25)$$

and the heat capacity is

$$\frac{c_v}{N_0}=T\,\frac{\partial(S/N_0)}{\partial T}\bigg|_{d}=\frac{(\pi T)^2}{\sqrt{(\pi T)^2+d^2}}.\qquad(26)$$

The heat capacity is linear in the temperature at sufficiently high temperatures, as expected of a 1+1 dimensional quantum system. However, the temperature dependence becomes quadratic at low temperatures in the presence of a non-vanishing $d$. Note also that the result is unchanged if we subtract the zero temperature contribution to the free energy and the chemical potential. Note that in this limit, i.e. $\chi=0$, the entropy and subsequently the heat capacity are independent of the arbitrary scale $r_0$ if we keep the baryon density fixed.

#### 3.1.2 Case II: χ≠0

When $\chi$ is non-zero, the above calculation can be carried out numerically. However, we then have to take into account the contribution of $\chi$ and its corresponding counterterms. We follow the procedure in [27] to obtain an explicit expression for the entropy, keeping the baryon density and the source fixed, i.e.

$$\partial_{r_H}c=-\frac{c}{r_H},\qquad \partial_{r_H}\tilde d=-\frac{\tilde d}{r_H}.\qquad(27)$$

The entropy thus has five contributions,

$$S=-\pi\big(S_i+S_{ii}+S_{iii}+S_{iv}+S_v\big),\qquad(28)$$

where

$$S_i=\big(\partial_{r_H}N\big)\frac{F}{N},\qquad S_{ii}=\Big(-\frac{\tilde\Lambda}{r_H}\Big)\,\mathcal L_{\rm bulk}(u)\Big|_{\tilde\Lambda},\qquad S_{iii}=\Big(-\frac{\tilde\Lambda}{r_H}\Big)\,\partial_{\tilde\Lambda}I_{ct},$$
$$S_{iv}=\Big(-\frac{1}{r_H}\Big)\big(c\,\partial_c+\tilde d\,\partial_{\tilde d}\big)F=-\frac{N}{r_H}\Big(\tilde d\,\mu/r_H-cm+\big(\tilde d^2-c^2\big)\log r_H\Big),$$
$$S_v=-\frac{N\big(\tilde d^2-c^2\big)}{2}\,\partial_{r_H}\log r_H=-\frac{N\big(\tilde d^2-c^2\big)}{2r_H}.\qquad(29)$$

We have made use of the fact that in evaluating , and $\partial_{r_H}$ here is the explicit derivative with respect to $r_H$ in the last term of the counterterms in equation (19). After a tedious but straightforward computation, we finally have

$$S=-\frac{2F}{T}\left(1-\frac{N\Big[\big(\mu/r_H-\tilde d\log(\pi T)\big)\tilde d-c\big(m-c\log(\pi T)\big)\Big]}{2F}\right)+\frac{N\big(\tilde d^2-c^2\big)}{2T}.\qquad(30)$$

The corresponding heat capacity is given by

$$c_v=T\,\partial_TS\Big|_{M,d}=S+\frac{NT}{2}\,\partial_T\Big[\frac{T}{2}\Big(\big(\mu/r_H-\tilde d\log\pi T\big)\tilde d-\big(m-c\log\pi T\big)c\Big)\Big]-\frac{N\big(\tilde d^2-c^2\big)}{T}.\qquad(31)$$

The derivatives of $\mu$ and $m$ with respect to $T$ have to be implemented numerically.
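As a cross-check of the Case I computation, equations (24)-(26) can be verified symbolically. A small sketch using sympy, with $T=r_H/\pi$ as above (the overall normalisation of $c_v$ here is whatever follows from differentiating (25) directly):

```python
import sympy as sp

d, T = sp.symbols('d T', positive=True)
rH = sp.pi * T                                    # horizon radius, r_H = pi*T

# Free energy per N_0, eq. (24)
F = sp.Rational(1, 4) * (-2*rH*sp.sqrt(d**2 + rH**2)
                         + d**2*(1 + sp.log(4))
                         - 2*d**2*sp.log(rH + sp.sqrt(d**2 + rH**2)))

S = sp.simplify(-sp.diff(F, T))                   # entropy, eq. (25)
cv = sp.simplify(T * sp.diff(S, T))               # heat capacity, eq. (26)

print(S)                             # equals pi*sqrt(d**2 + pi**2*T**2)
print(sp.limit(cv / T, T, sp.oo))    # finite: c_v is linear in T at high T
print(sp.series(cv, T, 0, 3))        # leading term ~ T**2 at low T for d != 0
```

This reproduces the linear high-temperature and quadratic low-temperature behaviour quoted below (26).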
Typical plots of the free energy against temperature at fixed , in the stable and unstable regimes, are shown in figure 3. (Footnote 3: The dotted line is added in by hand. Our numerical results end just before the dotted line begins; that region corresponds to very small and an initial condition for at the horizon extremely close to one. It would require much more numerical accuracy to see the dotted line.) At high temperatures, both the entropy and the specific heat scale as , approaching the conformal result. Below , the heat capacity can turn negative. This allows us to determine the phase diagram of the system. It agrees with a quasi-normal mode analysis: an unstable mode that grows in time appears exactly where the heat capacity is negative. The phase diagram is shown in figure 4. (Footnote 4: At , the supersymmetric solution is stable. Note that one cannot smoothly go from a general black hole embedding to a supersymmetric solution by taking the limit. For more general , the Minkowski embedding is the only stable embedding [27].)

## 4 Sound modes

For simplicity, we would like to begin by considering the simpler embedding where $\chi=0$ all the way. The terms in the action mixing the gauge potential and the brane profile perturbations vanish in this case, so we can consider the equations of motion of the longitudinal gauge perturbations independently of the other scalar fluctuations. The calculation in the zero temperature limit mimics that in [32]. The solution there, however, cannot be applied directly to probe D3's by simply substituting the defect dimension of our case. The probe D3 case is nevertheless straightforward to solve, and one obtains a dissipationless mode with the conformal dispersion $\omega=\pm k$. At finite temperature, the near horizon solution has a singularity and requires slightly more work. It is convenient to work with the coordinate $z$, which has the correct normalisation at the boundary and avoids mistakes when we begin to expand in power series of $\omega$ and $k$. The equation of motion at zero is given by

$$\partial_z\big(F[z]\,E_x'[z]\big)+G[z]\,E_x[z]=0,\qquad(32)$$
$$F[z]=\frac{z\big(1+\tilde d^2z^2\big)^{3/2}\big(-1+z^4\big)\,\omega^2}{q^2\big(-1+z^4\big)+2\big(1+\tilde d^2z^2\big)\,\omega^2},\qquad G[z]=\frac{z^3\sqrt{1+\tilde d^2z^2}}{2\big(-1+z^4\big)}.$$

We would like to solve these equations in the hydrodynamic limit, where the frequency $\omega$ and the momentum are much smaller than any other scale, in which case we can obtain a solution perturbatively in $\omega$ and $k$. We will follow the approach in [46]. To begin with, we isolate the singularity appearing in the near horizon limit, and write

$$E_x[z]=(1-z)^{-\frac{i\omega}{4}}\big(e_0[z]+\omega\,e_1[z]+\dots\big).\qquad(33)$$

After making this substitution in equation (32) and removing the overall factor of $(1-z)^{-\frac{i\omega}{4}}$, we expand the resulting equations of motion in powers of $\omega$ and $k$. In the expansion we considered the case . For each of the $e_i$ we have to set two boundary conditions. Since the singular part of the solution has been isolated, we require that $e_0$, $e_1$ and so on are regular at the horizon. Also, by convention we choose to absorb any constants into $e_0$, and therefore require that $e_1$ vanish at the horizon. Given these conditions, we find that

$$e_0[z]=c_0,\qquad e_1[z]=c_0\left(c_1-\frac{i\big(k^2-\omega^2\big)\log[z]}{\omega^2}+\text{other terms}\right).\qquad(34)$$

The solution involves other complicated terms in $\omega$ and $k$, which are regular at the boundary and do not concern us here. The constant $c_1$ can be expressed in terms of $\omega$ and $k$ by requiring that $e_1$ vanishes at the horizon. What is important, however, is that the coefficient of the $\log$ term is given by $-ic_0(k^2-\omega^2)/\omega$, which gives a quasi-normal mode at

$$\omega=\pm k,\qquad(35)$$

in agreement with the conformal result of a sound mode in one dimension. We can in fact evaluate the spectral function explicitly.
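Numerically, the statement "quasi-normal modes sit at zeros of boundary data" translates into a standard shooting problem. A hedged sketch (the coefficient functions F and G are passed in as callables, since their exact DBI-derived form above is only schematic; SciPy's RK45 propagates complex-valued initial data):

```python
import numpy as np
from scipy.integrate import solve_ivp

def boundary_data(omega, k, F, G, eps=1e-4):
    """Integrate d/dz(F E') + G E = 0 (eq. 32) from the horizon z=1 toward
    the boundary z->0, seeding the infalling behaviour E ~ (1-z)^(-i w/4)
    of eq. (33). Zeros of the returned boundary data locate quasi-normal modes."""
    def rhs(z, y):
        E, dE = y
        # F'(z) by finite difference keeps this sketch self-contained
        dF = (F(z + eps, omega, k) - F(z - eps, omega, k)) / (2 * eps)
        return [dE, -(dF * dE + G(z, omega, k) * E) / F(z, omega, k)]

    z0 = 1.0 - eps
    E0 = eps ** (-1j * omega / 4)                       # leading infalling term
    dE0 = (1j * omega / 4) * eps ** (-1j * omega / 4 - 1)
    sol = solve_ivp(rhs, [z0, 1e-3], [E0, dE0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]
```

In 1+1 dimensions one would scan for zeros of the coefficient of the log term rather than of the constant term, per the discussion around (34) and (38).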
To lowest order in $\omega$ and $k$, the spectral function is given by

$$\frac{\mathrm{Im}\,(G_{xx})}{(2N)/(\pi^2T^2)}=\frac{\omega^2}{\omega^2-k^2}\,\mathrm{Im}\left(c_0-\frac{ic_0\big(k^2-\omega^2\big)}{\omega}\right)=\frac{\omega^3}{\big(\omega^2-k^2\big)^2},\qquad(36)$$

and similarly for the other components of the Green's functions of the currents, which are obtained by putting appropriate prefactors of $\omega$'s and $k$'s into the above expression. The real part of $G_{xx}$ approaches a finite value in the hydrodynamic limit at . This is because, as we consider higher order terms, we find

$$\lim_{\omega\to0}\mathrm{Re}\,(G_{xx})\sim\mathrm{Re}\left(\frac{1+\omega\big(H_R+iH_I\big)+\dots}{i\omega\big(1+\omega\big(K_R+iK_I\big)+\dots\big)}\right)=H_I-K_I,\qquad(37)$$

where $H$ and $K$ are overall zeroth order constants depending on $k$. We would have to solve for still higher order corrections in order to determine them. We followed this sound mode using the complete equation, without taking the hydrodynamic limit, and found that this dispersion relation remains exact beyond the hydrodynamic limit. In fact, even when $\chi$ is non-zero and mixing becomes important, as we will investigate in the following sections, this mode remains intact and dissipationless. This should be put in sharp contrast with higher dimensional defect theories obtained from higher dimensional probes (i.e. D5 or D7), where the leading term in the boundary expansion of the gauge field is given by the constant term. However, for regularity at the horizon, $e_0$ continues to be a constant in these higher dimensional theories. The constant term at the boundary is then given by

$$E_x[z]=c_0\big(1+\omega c_1+\dots\big),\qquad(38)$$

where $c_0$ is an arbitrary constant and $c_1$ depends on $\omega$ and $k$, although it is overall zeroth order in $\omega$ and $k$. The quasi-normal modes are defined by the zeroes of the constant term. It is clear, however, that these cannot begin at linear order in $\omega$, due to the presence of the constant term . This absence of a sound mode has been observed in [41], and is interpreted as the destruction of the Fermi surface at finite temperature in the strongly coupled regime. We have also looked for quasi-normal frequencies of the fluctuations in the hydrodynamic limit, for simplicity at zero and zero chemical potential. In that case the regular solution at zeroth order in is non-trivial and contains a term at the boundary whose coefficient is independent of . The quasi-normal mode occurs at the zeroes of the coefficient of the term, and therefore no higher order corrections in $\omega$ and $k$ can cancel the zeroth order contribution, if we assume . We conclude that it does not contain a sound mode with linear dispersion. While the above expression for the quasi-normal frequency appears to be independent of the quantity that controls the chemical potential, the sound mode actually disappears at zero temperature if we then also set . The limit is thus not a smooth one. The difference lies in the modified near horizon (i.e. or ) expansion of the solution. Instead of we have . We pick the plus sign for regularity, as in [32]. The equations of motion at and can in fact be solved exactly. The solution is given by

$$E_x(\omega,z)=c_1\,I_0\big(z\sqrt{k^2-\omega^2}\big)+c_2\sqrt{\pi}\,K_0\big(z\sqrt{k^2-\omega^2}\big),\qquad(39)$$

where $I_0$ and $K_0$ are the modified Bessel functions. The constants $c_1$ and $c_2$ are fixed by the boundary conditions, and they are related by

$$c_2=-\frac{ic_1}{\sqrt{\pi}}.\qquad(40)$$

Then we can expand the solution in the boundary limit, and extract the coefficient of the log term and the constant term. They are given by

$$E_x(z\to0)=i\gamma+\pi+i\log\left(\frac{i\sqrt{\omega^2-k^2}}{2}\right)+i\log(z)+\dots,\qquad(41)$$

where $\gamma$ is Euler's constant. The log term becomes independent of $\omega$ and $k$, and the massless sound mode disappears. We can also obtain the Green's function from the solution. (Footnote 5: We obtained the results at .
As we will show more explicitly in the appendix, the explicit contribution in the counterterm contributes only to the real part of the Green's function and does not affect the imaginary part, which is of more interest for our purposes.) At zero this simply gives

$$G_{xx}=\frac{2N}{\pi^2T^2}\left(-\gamma+i\pi-\log\left(\frac{i\omega}{2}\right)\right).\qquad(42)$$

A plot of the real and imaginary parts of the Green's function is shown in figure 5. The real part depends logarithmically on $\omega$, whereas the imaginary part is simply a constant. The result also agrees precisely with the high- limit of the numerical solution at finite baryon density and temperature, to which we will return later along with the numerical solutions.

## 5 Bosonic fluctuations and spectral functions

### 5.1 The diagonal modes

Consider the fluctuations (Footnote 6: Note that we have not considered fluctuations of the probe along inside the . The action of these modes includes contributions from the WZ terms [25, 47, 48], and the boundary action evaluates to zero. They are related to surface operators in the dual theory [47, 48]. The usual mesonic excitations in the dCFT are described by fluctuations of the gauge field and of the transverse 3-sphere [27]. We will not discuss these surface operators any further in this work.) of the angular position (Footnote 7: We have not considered fluctuations of the other angular directions on the transverse 3-sphere, since they are related to this mode by rotations.), as defined in section 2, of the probe D3 brane. The diagonal modes exhibit the usual quasi-particle peaks and indicate no sign of the underlying instability in the system. Here we plot some representative spectral functions for the D3-D3 setup in figure 6, since it has never been studied before. The D3-D7 case was studied for example in [30]. The first peak is located precisely at . The width of the peak increases with , and the Green's function has a singularity at precisely .

### 5.2 The mixed modes

In the presence of a chemical potential and at finite momentum, there is a mixing between the fluctuations of the longitudinal electric field and the profile $\delta\theta$. This was first pointed out in [30]. These mixed modes were subsequently studied in detail in [31] for the case of probe D7's. The Lagrangian at quadratic order can be written as

$$\mathcal L=T_1\,\delta\theta^2+T_2\,\delta\theta_x^2+T_3\,\delta\theta_t^2+T_4\,\delta\theta_u^2+T_5\,\delta\theta\,\delta\theta_u$$
$$\quad+D_1\,\delta\theta_x\,F_{xt}+D_2\,\delta\theta_u\,F_{tu}+D_3\,\delta\theta\,F_{tu}$$
$$\quad+S_1F_{xt}^2+S_2F_{tu}^2+S_3F_{xu}^2.\qquad(43)$$

To avoid cumbersome notation, we have implicitly made a change of coordinates, i.e. . As a result our 2-momenta are dimensionless quantities normalized by the temperature. Here the $T_i$, $D_i$ and $S_i$ are complicated functions of $u$ whose complete expressions we will refrain from showing here. Their near horizon and boundary behaviour will be given in the appendix. The second line contains the mixing terms, and all the mixing terms are proportional to . We will work in the gauge $A_u=0$. This leads to the constraint equation

$$\partial_t\big(D_2\,\delta\theta_u\big)+\partial_t\big(D_3\,\delta\theta\big)=2\partial_t\big(S_2\,\partial_uA_t\big)+2\partial_x\big(S_3\,\partial_uA_x\big).\qquad(44)$$

The equations of motion for $A_x$ and $A_t$ are

$$\partial_t\big(D_1\,\delta\theta_x\big)=-2\partial_t\big(S_1F_{xt}\big)+2\partial_u\big(S_3\,\partial_uA_x\big),\qquad(45)$$

$$-\partial_x\big(D_1\,\delta\theta_x\big)+\partial_u\big(D_2\,\delta\theta_u\big)+\partial_u\big(D_3\,\delta\theta\big)=2\partial_x\big(S_1F_{xt}\big)+2\partial_u\big(S_2\,\partial_uA_t\big).\qquad(46)$$

Now we write

$$A_\mu=\int\frac{d\omega\,dk}{(2\pi)^2}\,e^{-i\omega t+ikx}\,a_\mu(u,\kappa),\qquad(47)$$

$$\delta\theta=\int\frac{d\omega\,dk}{(2\pi)^2}\,e^{-i\omega' t+i\kappa' x}\,\Theta(u,\kappa).\qquad(48)$$

Since $A_\mu$ and $\delta\theta$ are real, we have

$$\bar\Theta_{-\kappa}\equiv\bar\Theta(u,-\kappa)=\Theta(u,\kappa)\equiv\Theta_\kappa,\qquad \bar a_{\mu,-\kappa}\equiv\bar a_\mu(u,-\kappa)=a_\mu(u,\kappa)\equiv a_{\mu,\kappa},\qquad(49)$$

where the bar denotes complex conjugation. The gauge invariant combination is $E_x=\omega a_x+k a_t$. Using the constraint equation we find

$$a_x'=\frac{k\Delta+2i\omega S_2E_x'}{2i\big(\omega^2S_2+k^2S_3\big)},\qquad(50)$$

or equivalently

$$a_t'=\frac{2ikS_3E_x'-\omega\Delta}{2i\big(\omega^2S_2+k^2S_3\big)},\qquad(51)$$

where

$$\Delta=-i\omega D_2\Theta'-i\omega D_3\Theta.\qquad(52)$$

Using this we find

$$D_1\,\omega k\,\Theta=-2S_1\,\omega E_x+2\,\partial_u\left(S_3\,\frac{k\Delta+2i\omega S_2E_x'}{2i\big(\omega^2S_2+k^2S_3\big)}\right).\qquad(53)$$
After some algebra, the equation of motion for $\delta\theta$ is found to be

$$2\partial_x\Big(T_2\,\delta\theta_x+\tfrac12D_1F_{xt}\Big)+2\partial_t\big(T_3\,\delta\theta_t\big)+2\partial_u\Big(T_4\,\delta\theta_u+\tfrac12T_5\,\delta\theta\Big)-2T_1\,\delta\theta-T_5\,\delta\theta_u+D_3A_t'-\partial_u\big(D_2A_t'\big)=0.\qquad(54)$$

Note that our equations of motion explicitly violate spatial and time reversal. This is a result of the presence of a background worldvolume electric field . This will have implications for the spectral functions, which we discuss in the next subsection.

#### 5.2.1 The spectral function of the mixed modes

Recall that for the mixed fluctuations we have two coupled linear second-order differential equations, and as a result there should be four independent pairs of solutions . By imposing infalling boundary conditions at the horizon for both $E_x$ and $\Theta$, we impose two constraints and are left with two independent solutions. The boundary values of $E_x$ and $\Theta$ respectively source different operators, and given these two independent solutions it is in principle possible to construct solutions with independent, arbitrary boundary values, i.e. in general

$$E_x(u)=A\,E_1(u)+B\,E_2(u),\qquad \Theta(u)=A\,\Theta_1(u)+B\,\Theta_2(u),\qquad(55)$$

which allows us to solve for $A$ and $B$ for any given pair of boundary values. The Green's function is obtained by differentiating the on-shell action evaluated at the boundary with respect to these boundary values. It is therefore apparent that while the explicit mixing terms in the on-shell action vanish at the boundary, mixing terms involving the product of the sources could still appear through the squares and products of $A$ and $B$, which generally depend on both of these boundary quantities. It is important to emphasise that the sources are only independent if we have the freedom to pick $A$ and $B$ accordingly. Given a particular boundary condition at the horizon, the boundary values of the resulting pair of solutions of the equations of motion are not independent, and therefore evaluating the action on such a particular solution and differentiating with respect to these correlated boundary values would not give the correct Green's function, as in [31]. In fact, it is apparent from the procedure taken in [31] that the diagonal elements of the spectral function (i.e. the imaginary parts of the Green's function) are not proportional to the conserved current discussed in section 5.2.2. By picking different horizon boundary values these spectral functions could turn negative, which already suggests that the procedure is pathological. The study of mixed modes is well known in the context of shear modes in R-charged backgrounds [49]. The numerical procedure needed to search for quasi-normal modes has been discussed in [50]. We will discuss this procedure, and lay out an explicit way of computing spectral functions. (Footnote 8: We thank Andrei Starinets and Sean Hartnoll for discussions of the issue and for pointing us to useful references.) To implement the procedure numerically, where we are only capable of controlling the horizon values, we construct general solutions of $E_x$ and $\Theta$ in the following manner. First, we construct two independent sets of solutions by picking two sets of horizon boundary conditions; for concreteness, say

$$E_1(u\to1)\sim(1-u)^{-\frac{i\omega}{4}}\big(1+\dots\big),\qquad \Theta_1(u\to1)\sim(1-u)^{-\frac{i\omega}{4}}\big(1+\dots\big),\qquad E_2(u\to1)\sim-$$
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966548502445221, "perplexity": 392.9410972909948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00132.warc.gz"}
https://www.physicsforums.com/threads/blrpv-a-new-susy-it-seems.772265/
# BLRPV, a new SUSY, it seems.

1. Sep 22, 2014

### arivero

Interesting paper: http://arxiv.org/abs/1409.5438 Simultaneous B and L Violation: New Signatures from RPV-SUSY

The Superpotential has terms UDD and QLD. Does it mean that a Diquark can decay to an antiquark, and a Meson can decay to a Lepton? I am not sure of the second point. The paper shows the Neutron decaying to leptons, but this is simultaneous violation of B and L, while Meson to Lepton only violates L.

2. Sep 25, 2014

### ChrisVer

When you say diquark to an antiquark, what do you mean? If I recall correctly, U and D are both singlets of SU(2), and they are in the $\textbf{3}$ repr of SU(3) - so they are quarks.. http://prntscr.com/4qbbst From M. Drees' book - since the Ubar and Dbar are written as antiquarks, I suppose U, D are quarks... but nevertheless you can change my 3 above to 3bar and "quarks" to "antiquarks" without messing up the reasoning... the main point is that it doesn't contain anything like $3 \otimes 3 \otimes \bar{3}$, which wouldn't only violate the B or L numbers, but also the local SU(3) symmetry [on which the MSSM is built].

The meson decaying to a lepton is true from the RPV terms. In fact this is already known, and that's why people imposed R-parity in the first place [to kill those terms which are SU(3)xSU(2)xU(1) invariant but violate the L, B numbers - the coupling constants would otherwise have to be unnaturally small, so you impose a symmetry U(1)_R to explain their smallness, and then break it to a discrete symmetry].

Last edited: Sep 25, 2014

3. Sep 25, 2014

### arivero

In the diagram a) on page 3, you can see a diquark-antiquark vertex, or so it seems. SU(3) is not violated. For sure UDD, seen as an interaction term, can be an SU(3) singlet. Or, from group theory, $3 \otimes 3 = 6 \oplus \bar 3$: we can see that a diquark (or a di-squark) can be in a colour anti-triplet, just as if it were an antiquark.

For SU(2) you are right that something is going on. UDD is not an SU(2) singlet if all the quarks or squarks are of the left chirality.

4. Sep 25, 2014

### ChrisVer

Yes, I understand your confusion with the diagram, but in fact that's not an antiquark there... it's a squark [scalar]. The thing above is not a bar [indicating anti-] but a tilde [indicating super-]. You can zoom in and see it for yourself; at some point it becomes pretty obvious :)

The coupling is then quark-quark-squark. If you allowed a vertex with quark-quark-antiquark then your Lagrangian would need a term with fields containing those 3 particles - to make it simple, let's say it has the fields $\mathcal{L}= q_1 q_2 \bar{q}_3$, so that's a $3 \otimes 3 \otimes \bar{3}$ term which violates the SU(3) symmetry. SU(3) permits only $3 \otimes \bar{3}$ (eg in a mass term) or $3 \otimes 3 \otimes 3$ and of course $\bar{3} \otimes \bar{3} \otimes \bar{3}$, or in general words: configurations which contain the singlet representation in their decomposition, so they can be SU(3) invariant. That's how you build the MSSM: promote your SM fields to chiral superfields in the same representations as in the SM, take SU(3)xSU(2)xU(1) and start building up your model's Lagrangian from the invariant terms. The RPV terms then naturally appear in the model (so they are singlets), as well as the rest of the MSSM interaction terms...

For SU(2): it is a singlet, since U, D and D are singlets. So you have $1 \otimes 1 \otimes 1$...
In case you have one singlet and put in a doublet, like for example the Q, you also need a 2nd doublet (like L) to keep SU(2) invariance - in fact one of them is barred, but that doesn't make a big difference for SU(2), since the fundamental and antifundamental reprs are the same: $2 \otimes 2 \otimes 1 = ( 3 \oplus 1 ) \otimes 1 = 3 \oplus 1$, which can be SU(2) invariant.

(: This business of combining representations cost me a whole question in my SUSY exam and a very good grade: I had to find those RPV terms and explain the need for R-parity afterwards with diagrams, and I messed up the fields, resulting in quantities that were not SU(3)xSU(2)xU(1) invariant... When I asked for some compassion, especially because I couldn't see why 3x3x3 contains a singlet, I was told "you shouldn't have tried SUSY if you don't know this trivial stuff" - of course I did know it, but during the exam I had forgotten. During the exam they also tried to help us by saying "think of protons and pions", but at the time it was more confusing than helpful. Well, although the mark was "bad" for me, I learned from it. )

Last edited: Sep 25, 2014

5. Sep 26, 2014

### arivero

Hey, yes, it is true, they are the singlets. I noticed it in my first reading but somehow I forgot. Here I also concede the point, but I am not so sure about it not being an antiparticle (relative to the other two), because the squark is outgoing from the vertex, while the quarks are incoming. Also, I would expect that the first vertex violates baryon number by one, while the next vertex keeps baryon number and violates lepton number by one.

If it is so, and reading the process in the diagram as going from left to right, I would say that it describes two (conjugate) quarks that collide to form a (conjugate) anti-squark, which in turn decays to a lepton plus a quark. In the first vertex, the incoming particles have baryon number (-1/3)x2 and the outgoing squark should have baryon number (+1/3). Then in the second vertex the outgoing quark should still have baryon number (+1/3) and the lepton of course has lepton number (+1). So the first vertex violates B (and B-L), the second vertex violates L (and B-L), but the four-fermion effective vertex magically preserves B-L, while violating both B and L.

Last edited: Sep 26, 2014

6. Sep 26, 2014

### ChrisVer

And what does outgoing actually mean? The squarks are scalar fields. Also, in this case, they are the propagators...

7. Sep 27, 2014

### arivero

I have seen some books that, in order to have a criterion for both internal and external lines, refer to the "flow of charge" and the "flow of momentum". Depending on whether charge flows in the same direction as momentum or against it, you name it particle or antiparticle. Here by outgoing I was implying that the momentum was flowing from left to right, describing a process of decay from Nucleon to (Meson + Lepton).

By the way, now that we have agreed that the internal line is a scalar, it is actually the same diagram you were calculating in https://www.physicsforums.com/threads/amplitude-for-fermion-fermion-yukawa-scattering.772742/, isn't it? Or a purely chiral variant of it; in the other thread the fermions were Dirac, and here I guess they are Weyl, aren't they?

8. Sep 27, 2014

### ChrisVer

It depends. I think you can combine your Weyl spinors in the MSSM Lagrangian into Dirac spinors, no? The only thing that would change is that you could then avoid referring to R or L quarks as well as to R and L squarks. But I am not really sure about that..
What I am sure about is that in the end you can sum everything up in the diagram... As for the diagram: yes, it is the same. You can apply the same Feynman rules to it and get the amplitude. The only things that can change are the parameters and the coupling constants at the vertices... the free theory propagator will remain the same: $\frac{i}{p^2 - m^2}$
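To make that last point concrete, the tree-level structure of the diagram discussed in this thread is schematically (the couplings are written here as $\lambda'$ and $\lambda''$ for the QLD and UDD vertices — a sketch of the standard scalar-exchange form, not a statement about the conventions of the paper):

$$\mathcal{M}\;\sim\;\lambda''\,\frac{i}{p^2-m_{\tilde q}^2}\,\lambda'\;\;\xrightarrow{\;p^2\ll m_{\tilde q}^2\;}\;\;-\,i\,\frac{\lambda'\lambda''}{m_{\tilde q}^2},$$

so at energies well below the squark mass the exchange collapses to a four-fermion contact interaction suppressed by $m_{\tilde q}^2$, which is one reason products of B- and L-violating couplings like $\lambda'\lambda''$ are so strongly constrained (e.g. by nucleon decay bounds).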
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8500435948371887, "perplexity": 1508.039902919605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00662.warc.gz"}
https://www.arxiv-vanity.com/papers/1806.01933/
# Explainable Neural Networks based on Additive Index Models

Joel Vaughan Email: Joel.V; Corresponding author Corporate Model Risk, Wells Fargo, USA Agus Sudjianto Corporate Model Risk, Wells Fargo, USA Erind Brahimi Corporate Model Risk, Wells Fargo, USA Jie Chen Corporate Model Risk, Wells Fargo, USA and Vijayan N. Nair Corporate Model Risk, Wells Fargo, USA June 2018

###### Abstract

Machine Learning algorithms have seen increasing use in recent years due to their flexibility in model fitting and their predictive performance. However, the complexity of the models makes them hard for the data analyst to interpret and explain without additional tools. This has led to much research into developing approaches for understanding model behavior. In this paper, we present the Explainable Neural Network (xNN), a structured neural network designed especially to learn interpretable features. Unlike fully connected neural networks, the features engineered by the xNN can be extracted from the network in a relatively straightforward manner and the results displayed. With appropriate regularization, the xNN provides a parsimonious explanation of the relationship between the features and the output. We illustrate this interpretable feature-engineering property on simulated examples.

## 1 Introduction

Neural networks (NNs) and ensemble algorithms such as Gradient Boosting Machines (GBMs) and Random Forests (RFs) have become popular in recent years due to their predictive power and flexibility in model fitting. They are especially useful with large data sets where it is difficult to do handcrafted variable selection and feature engineering. Further, in these situations, they have substantially better predictive performance compared to traditional statistical methods. Despite these advantages, there has been reluctance to fully adopt them. One of the primary barriers to widespread adoption is the "black box" nature of such models. The models are very complex and cannot be written down explicitly. It is therefore difficult for a modeler to explain the relationships between the input features and the response, or more generally to understand the model's behavior. However, the ability to interpret a model and explain its behavior is critical in certain industries, such as medicine and health care that deal with high risk, or banking and finance that are strongly regulated. For instance, in banking, regulators require that the input-output relationships are consistent with business knowledge and that the model includes key economic variables that have to be used for stress testing. These challenges have led to a lot of recent research into developing tools to "open up the black box". There are, broadly speaking, three inter-related model-based areas of research: a) global diagnostics (Sobol & Kucherenko (2009), Kucherenko (2010)); b) local diagnostics (Sundararajan et al. (2017), Ancona et al. (2018)); and c) development of approximate or surrogate models that may be easier to understand and explain; these models may be either global (Hinton et al. (2015), Bucilua et al. (2006), Tan et al. (2018)) or local (Hu et al. (2018)) in nature. There are also efforts to understand neural networks using visualization-based techniques such as those described in Kahng et al. (2017) or Olah et al. (2017). In this paper, we propose a flexible, yet inherently explainable, model.
More specifically, we describe a structured network that imposes some constraints on the network architecture and thereby provides better insight into the underlying model. We refer to it as an explainable neural network, or xNN. The structure provides a means to understand and describe the features engineered by the network in terms of linear combinations of the input features and univariate non-linear transformations.

• We use the terms "interpretable" and "explainable" interchangeably in this paper although, strictly speaking, they have different meanings. The former refers to the ability to understand and interpret the results to yourself; the latter is the ability to explain the results to someone else. So interpretability can be viewed as a precursor to explainability. But we do not make that distinction in this paper.

• Explainability by itself is not enough without also considering predictive performance. For instance, a linear model is very explainable but it is likely to have poor performance approximating a complex surface. In the simple examples considered in the paper, the xNNs have excellent predictive performance. But additional research is needed on more complex examples, and this is currently being pursued.

Feedforward neural networks typically consist of fully connected layers, i.e., the output of each node on layer $i$ is used as input for each node on layer $i+1$. By limiting the connections between nodes, we can give a feedforward neural network structure that can be exploited for different purposes. For example, Tsang et al. (2018) considered a structure to detect interactions among input features in the presence of the features' main effects. In this paper, we propose a structured neural network designed to be explainable, meaning that it is relatively easy to describe the features and nonlinear transformations learned by the network via the network structure. It is based on the concept of additive index models (Ruan & Yuan (2010), Yuan (2011)) and is related to projection pursuit and generalized additive models (Hastie & Tibshirani (1986)). The remainder of the paper is as follows. In Section 2, we review additive index models and introduce the explainable neural network architecture. In Section 4, we illustrate how the components of the xNN may be used to describe the engineered features of the input variables the network learns. Section 5 discusses several practical considerations that arise in using such networks in practice. Finally, we provide additional examples of trained xNN models in Section 6.

## 2 Additive Index Models

The formal definition of an additive index model is given in (1):

$$f(x)=g_1\big(\beta_1^Tx\big)+g_2\big(\beta_2^Tx\big)+\cdots+g_K\big(\beta_K^Tx\big),\qquad(1)$$

where the function on the LHS can be expressed as a sum of $K$ smooth functions (Ruan & Yuan, 2010). These univariate functions $g_k$ are each applied to a linear combination of the input features ($\beta_k^Tx$). The coefficients $\beta_k$ are often referred to as projection indices and the $g_k$'s are referred to as ridge functions, following Friedman & Stuetzle (1981). See also Hastie & Tibshirani (1986) for the related notion of generalized additive models. The additive index model in (1) provides a flexible framework for approximating complex functions. In fact, as shown in Diaconis & Shahshahani (1984), additive index models can approximate any multivariate function with arbitrary accuracy provided $K$, the number of ridge functions, is sufficiently large.
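As a concrete reading of equation (1), here is a minimal sketch of evaluating an additive index model in Python; the particular projections and ridge functions below are arbitrary illustrative choices, not quantities from the paper:

```python
import numpy as np

# Illustrative projection indices beta_k (one per row) and ridge functions g_k.
betas = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5]])
ridges = [np.tanh, np.square]

def additive_index_model(X):
    """f(x) = sum_k g_k(beta_k^T x), as in eq. (1)."""
    Z = X @ betas.T                           # projections, shape (n, K)
    return sum(g(Z[:, k]) for k, g in enumerate(ridges))

X = np.random.default_rng(0).normal(size=(5, 3))
print(additive_index_model(X))                # one prediction per row of X
```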
In practice, additive index models can be fit using penalized least squares methods to simultaneously fit the model and select the appropriate number of ridge functions (Ruan & Yuan (2010)). See also Yuan (2011) for a discussion of identifiability issues surrounding such models.

## 3 Explainable Neural Network Architecture (xNN)

The Explainable Neural Network provides an alternative formulation of the additive index model as a structured neural network. It also provides a direct approach for fitting the model via gradient-based training methods for neural networks. The resulting model has built-in interpretation mechanisms as well as automated feature engineering. We discuss these mechanisms in more detail in Section 4. Here, we describe the architecture of the xNN. We define a modified version of the additive index model in (1) as follows:

$$f(x)=\mu+\gamma_1h_1\big(\beta_1^Tx\big)+\gamma_2h_2\big(\beta_2^Tx\big)+\cdots+\gamma_Kh_K\big(\beta_K^Tx\big).\qquad(2)$$

Although the shift parameter $\mu$ and the scale parameters $\gamma_k$ are not identifiable, they are useful for the purposes of model fitting: selecting an appropriate number of ridge functions through regularization. The structure of an xNN is designed to explicitly learn the model given in equation (2). Figure 1 illustrates the architecture of an xNN. The input layer is fully connected to the first hidden layer (called the projection layer), which consists of $K$ nodes (one for each ridge function). The weights of the $k$th node in the first hidden layer correspond to the coefficients ($\beta_k$) of the input to the corresponding ridge function. The projection layer uses a linear activation function, to ensure that each node in this layer learns a linear combination of the input features. The output of each node in the projection layer is used as the input to exactly one subnetwork. Subnetworks are used to learn the ridge functions $h_k$. The external structure of the subnetworks is essential to the xNN. Each subnetwork must have univariate input and output, and there must be no connections between subnetworks. The internal structure of subnetworks is less critical, provided that the subnetworks have sufficient structure to learn a broad class of univariate functions. Subnetworks typically consist of multiple fully-connected layers and use nonlinear activation functions. More details are discussed in Section 5.3. The combination layer is the final hidden layer of the xNN, and consists of a single node. The inputs of the node are the univariate activations of all of the subnetworks. The weights learned correspond to the $\gamma_k$'s in equation (2), and provide a final weighting of the ridge functions. A linear activation function is used on this layer, so the output of the network as a whole is a linear combination of the ridge functions. (Note: A non-linear activation function may easily be used on the combination layer instead of a linear activation. This changes the formulation given in (2) by wrapping the LHS in a further link function, as with generalized linear models. We do not explore this generalization in detail here.) The neural network based formulation of the additive index model provides some advantages over the traditional approach in the statistics literature. First, it may be trained using mini-batch gradient-based methods, allowing the xNN formulation to be trained easily on datasets that are too large to fit in memory all at once. Further, the neural network formulation allows the xNN to take advantage of the advancements in GPU computing used to train neural networks in general.
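Before turning to the final advantage, here is a hedged PyTorch sketch of the architecture just described (the paper does not commit to a particular framework; the subnetwork sizes follow the [12, 6] example from Section 5.3):

```python
import torch
import torch.nn as nn

class Subnet(nn.Module):
    """Learns one univariate ridge function h_k; hidden sizes per Sec. 5.3."""
    def __init__(self, hidden=(12, 6)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden[0]), nn.Tanh(),
            nn.Linear(hidden[0], hidden[1]), nn.Tanh(),
            nn.Linear(hidden[1], 1),
        )

    def forward(self, z):
        return self.net(z)

class XNN(nn.Module):
    """f(x) = mu + sum_k gamma_k * h_k(beta_k^T x), eq. (2)."""
    def __init__(self, n_features, n_subnets):
        super().__init__()
        # projection layer: linear, bias-free; row k of the weight is beta_k
        self.projection = nn.Linear(n_features, n_subnets, bias=False)
        self.subnets = nn.ModuleList(Subnet() for _ in range(n_subnets))
        # combination layer: weights are the gamma_k, bias is the shift mu
        self.combination = nn.Linear(n_subnets, 1)

    def forward(self, x):
        z = self.projection(x)                                   # (batch, K)
        h = torch.cat([s(z[:, k:k + 1]) for k, s in enumerate(self.subnets)],
                      dim=1)
        return self.combination(h)                               # (batch, 1)
```

Keeping the projection layer linear and bias-free makes the learned $\beta_k$ directly readable from `projection.weight`, which is what the visualizations of Section 4 rely on.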
Finally, the neural network formulation allows for straightforward computation of partial derivatives of the function learned by the xNN. This supports the ability to carry out derivative-based analysis techniques using the xNN, without needing to rely on finite difference approximations and the difficulties that these may cause. Some techniques that may be employed are presented in Sobol & Kucherenko (2009) and Kucherenko (2010). In the next section, we illustrate how the structures built into the xNN, namely the projection layer and subnetworks, provide a mechanism to explain the function learned by such a network.

## 4 Visualization and Explainability of the xNN

We now illustrate how visualization of xNN components can be used to aid explainability. We consider a simple toy example based on the first three Legendre polynomials, shown in Figure 2 and defined in (3). These polynomials are orthogonal on the interval $[-1,1]$ and have a range of $[-1,1]$ over the same interval. The exact form of these functions is not of particular interest except for the fact that they provide distinct linear, quadratic, and cubic functions on a similar scale and are orthogonal.

$$f_1(x)=x;\qquad f_2(x)=\tfrac12\big(3x^2-1\big);\qquad f_3(x)=\tfrac12\big(5x^3-3x\big)\qquad(3)$$

We simulated five independent variables $x_1,\dots,x_5$ from a Uniform distribution on $[-1,1]$. We then generated $y$ via

$$y=f_1(x_1)+f_2(x_2)+f_3(x_3),\qquad(4)$$

where $f_1,f_2,f_3$ are the Legendre polynomials described in (3). This leaves $x_4$ and $x_5$ as noise variables. We then built an xNN model with 5 subnetworks and trained it on all five features ($x_1,\dots,x_5$). Only the strength of the $\ell_1$ penalty on the projection and output layers was tuned. The resulting xNN was used to generate the summaries that follow.

### 4.1 Visualizing Ridge Functions

Figure 3 shows the ridge functions. Row $k$ represents subnetwork $k$. The first column illustrates the univariate function learned by subnetwork $k$, scaled by $\gamma_k$. These plots illustrate the univariate, non-linear transformations learned by the xNN in training. The second column displays the values of $\beta_k$, the projection coefficients. The projection coefficients explain which combination of input features is used as input to each of the ridge functions. In this way, the plot displays the most relevant features of the network: the scaled ridge functions and the projection coefficients. In this example, we see from Figures 3 and 4 that Subnetwork 1 has learned the cubic Legendre function ($f_3$), and only $x_3$ has a non-zero coefficient in the input to this subnetwork. Subnetwork 2 has learned the quadratic function ($f_2$), and only the coefficient of $x_2$ is nonzero. Subnetwork 5 has learned the linear function ($f_1$), and only the coefficient of $x_1$ is non-zero. The other subnetworks (3 and 4) are not needed, and are set to zero by the $\ell_1$ penalty on the ridge function weights ($\gamma_k$ in (2)).

### 4.2 Visualizing Univariate Effects

The plot shown in Figure 4 illustrates the feature-centric view of the xNN, which we refer to as conditional effects. In this view, the $i$th row summarizes the xNN's treatment of the $i$th feature. In the first column, each subnetwork's treatment of feature $i$ is plotted in row $i$. Each dotted line represents one such subnetwork, while the bold, solid line represents the effect of the network as a whole on feature $i$, which is the sum of the conditional effects of the individual subnetworks. If the data have been standardized (as is typical in this case), this amounts to plotting the scaled ridge functions restricted to feature $i$.
The second column of Figure 4 shows the projection coefficient of feature $i$ for each of the subnetworks. This shows which ridge functions are used to describe the effects of feature $i$. In this particular example, we see that the only nonzero coefficient of $x_1$ is in the projection for subnetwork 5, the linear function, and that the conditional effect on $x_1$ is linear. Similarly, the only nonzero coefficient of $x_2$ appears in subnetwork 2, which learned a quadratic function. The only nonzero coefficient of $x_3$ is in subnetwork 1, which has learned the cubic function ($f_3$). The two extraneous variables, $x_4$ and $x_5$, have no non-zero coefficients, so the overall conditional effect of these variables is constant. It should be mentioned that the conditional effects plot shows some information that is redundant with the subnetwork-centric view. Nonetheless, the alternate view can be useful in understanding the role each feature plays in the predictions of the xNN model. In this toy example, the effect of each feature is represented by exactly one ridge function. In situations with more complex behavior, multiple ridge functions may be involved in representing the effect of a particular variable, and often are. Furthermore, in under-regularized networks, the effects of each variable may be modeled by the contributions of several subnetworks. This behavior is displayed in the examples in Section 6.

## 5 Practical Considerations

In this section, we consider some of the practical considerations that arise when using such models. These include a brief discussion of the difference between model recoverability and explainability, the regularization of the xNN needed to learn a parsimonious model, and the structure of the subnetworks.

### 5.1 Model Recoverability and Explainability

In practice, fitted xNN models exist on a spectrum of model recoverability while retaining a high degree of explainability. By model recoverability, we refer to the ability to recover the underlying generative mechanisms of the data, while explainability refers to the xNN's ability to provide an explanation of the mechanisms used by the network to approximate a complex multivariate function, even if these mechanisms do not faithfully recover the underlying data generating process. With proper regularization, as discussed in Section 5.2, the representation is parsimonious and straightforward to interpret. The example discussed previously in Section 4 illustrates a situation where the xNN has high model recoverability, meaning that it has clearly learned the underlying generating process. In practice, this will not always be the case, as the data-generating process may not be fully described by the additive index model. In Section 6.2, we see such an example, where the model is explainable even though it does not have high model recoverability. In practice, the user will never know on which end of the spectrum a given xNN sits. However, unlike other popular network structures (such as feedforward networks) or tree-based methods, the xNN has a built-in mechanism to describe the complex function learned by the network in the relatively simple terms of projections and univariate ridge functions, which ensures the model is explainable regardless of where it may fall on the model recoverability spectrum. Finally, note that in certain circumstances, model recoverability may not be desirable. If the data generating process is highly complex, the explainable xNN is likely to be more easily understood given its additive nature.
The xNN is especially easy to understand if it has been properly regularized.

### 5.2 Regularization and Parsimony

The overall explainability of the network can be enhanced by using an $\ell_1$ penalty on both the first and last hidden layers during training. That is, both the projection coefficients (the $\beta_k$'s) and the ridge function weights (the $\gamma_k$'s) are penalized. When the strength of the penalty is properly tuned, this produces a parsimonious model that is relatively easily explained. An $\ell_1$ penalty on the first hidden layer forces the projection vectors to have few non-zero entries, meaning that each subnetwork (and corresponding ridge function) is only applied to a small set of the variables. Similarly, an $\ell_1$ penalty on the final layer serves to force $\gamma_k$ to zero in situations where fewer subnetworks are needed in the xNN than are specified in training.

### 5.3 Subnetwork Structure

In principle, the subnetwork structure must be chosen so that each subnetwork is capable of learning a large class of univariate functions. In our experience, however, both the explainability and predictive performance of the network are not highly sensitive to the subnetwork structure. In our simulations, we have found that subnetworks consisting of two hidden layers, with structures such as [25, 10] or even [12, 6] and nonlinear activation functions (e.g. tanh), are sufficient to learn suitably flexible ridge functions when fitting the models.

### 5.4 xNN as a Surrogate Model

While the xNN architecture may be used as an explainable, predictive model built directly from data, it may also be used as a surrogate model to explain other nonparametric models, such as tree-based methods and feedforward neural networks, called the base model. Because the xNN is an explainable model, we may train an xNN using the input features and the corresponding response values predicted by the base model. We may then use the xNN to explain the relationships learned by the base model. For further discussion of surrogate models, see Hinton et al. (2015), Bucilua et al. (2006), or Tan et al. (2018). The use of more easily interpretable surrogate models to help interpret a complex machine learning model is similar to the field of computer experiments, where complicated computer simulations of physical systems are studied using well-understood statistical models, as described in Fang, Li & Sudjianto (2005) and Bastos & O'Hagan (2009). In computer experiments, the input to the computer simulation may be carefully designed to answer questions of interest using these statistical models, whereas the complex ML models are often restricted to observational data.

## 6 Simulation Examples

In this section, we illustrate the behavior of xNN networks with two simulations. In the first, data are generated from a model that follows the additive index model framework. This is an example where the trained xNN has high model recoverability, meaning it correctly recovers the data generating mechanism. The second simulation does not follow the additive index model framework, yet the trained xNN is still explainable, in the sense that the xNN still provides a clear description of the mechanisms it learns to approximate the underlying response surface.

### 6.1 Example 1: Linear Model with Multiplicative Interaction

We simulate six independent variables $x_1,\dots,x_6$ from independent Uniform distributions. We then generate $y$ via

$$y=0.5x_1+0.5x_2^2+0.5x_3x_4+0.3x_5^2+\epsilon,\quad\text{where }\epsilon\sim N(0,0.05).\qquad(5)$$

This is a linear model with a multiplicative interaction. The variable $x_6$ is left as a noise feature.
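For reference, a minimal sketch of this simulation setup (the sample size, random seed, and the $U(-1,1)$ range are illustrative assumptions; the paper does not state them):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
X = rng.uniform(-1.0, 1.0, size=(n, 6))       # x1..x6, independent Uniforms
y = (0.5 * X[:, 0] + 0.5 * X[:, 1]**2         # eq. (5); x6 (column 5) is noise
     + 0.5 * X[:, 2] * X[:, 3] + 0.3 * X[:, 4]**2
     + rng.normal(0.0, 0.05, size=n))
```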
While this model does not, at first glance, fit the additive index model framework, we note that a multiplicative interaction may be represented as the sum of two quadratic ridge functions, as shown in (6):

$$xy=c(ax+by)^2-c(ax-by)^2\quad\text{for any }a,b\text{ satisfying }ab\neq0,\ a^2+b^2=1,\text{ with }c=\frac{1}{4ab}.\qquad(6)$$

(Indeed, $c\big[(ax+by)^2-(ax-by)^2\big]=4abc\,xy=xy$.) Therefore, this model may be exactly represented by an xNN. We trained the xNN using 20 subnetworks. The network achieved a mean squared error of 0.0028 on the holdout set, close to the simulation lower bound of 0.0025. The resulting active ridge functions are illustrated in Figure 5. (By active ridge functions, we mean those functions that are not constant.) Subnetwork 9 learned a linear ridge function, and has a relatively large projection coefficient for $x_1$. Subnetworks 2, 4, 5, and 16 learned quadratic ridge functions. Based on the projection coefficients, we see that subnetworks 2 and 5 are used to represent the contributions of $x_2$ and $x_5$, respectively. Subnetworks 4 and 16 combine to represent the interaction $x_3x_4$. Both are quadratic. The two features have the same projection coefficients in subnetwork 16, while they have projection coefficients of opposite signs in subnetwork 4. This is exactly the representation of an interaction term described in equation (6). Thus, this xNN has both high model recoverability and a high degree of explainability. Figure 6 illustrates the conditional effects of the network on each of the predictors. We see, as expected, a linear conditional effect on $x_1$ and quadratic effects on $x_2$ and $x_5$. It is notable that the conditional effects plots for both $x_3$ and $x_4$ show no conditional effect. In the case of such interactions, this is expected: in this model, if we condition on, e.g., $x_4=0$, then $x_3$ will show no effect on the response. Similarly, we see no effect of $x_4$ when conditioning on $x_3=0$.

### 6.2 Example 2: Non-Linear Model

We simulate four independent variables $x_1,\dots,x_4$ from independent Uniform distributions. We then generate $y$ via

$$y=\exp(x_1)\cdot\sin(x_2)+\epsilon,\quad\text{where }\epsilon\sim N(0,0.1).\qquad(7)$$

Both $x_3$ and $x_4$ are left as noise variables. We then fit an xNN with 10 subnetworks and a subnet structure of [12, 6] with tanh activation. The network achieved a mean squared error of 0.0122 on a holdout test set, close to the simulation lower bound of 0.01. Note that this generating model does not fit the additive index model framework. In this example, the trained xNN is explainable despite having low model recoverability. Although the xNN cannot recover the data generating process, it still fits the data well and clearly explains the mechanisms it uses to do so, by displaying the projection coefficients and the learned ridge functions. Figure 7 shows two ridge functions, represented by subnetworks 2 and 5. Both subnetworks have non-zero coefficients for $x_1$ and $x_2$, although these have the same sign in Subnetwork 5 and opposite signs in Subnetwork 2. We see that the xNN approximates the simulated function with the sum of these two ridge functions, learned by subnetworks 2 and 5 respectively. Figure 8 shows the corresponding conditional effects. Note that subnetwork 3 has learned a small non-zero coefficient for one of the features; however, the corresponding ridge function is constant at zero, so it does not contribute to the output. This type of behavior may occur when the xNN is slightly under-regularized.

## 7 Conclusion

We have proposed an explainable neural network architecture, the xNN, based on the additive index model. Unlike commonly used neural network structures, the structure of the xNN describes the features it learns, via linear projections and univariate functions.
These explainability features have the attractive property of being additive in nature and straightforward to interpret. Whether the network is used as a primary model or as a surrogate for a more complex model, the xNN provides straightforward explanations of how the model uses the input features to make predictions. Future work on the xNN will study the overall predictive performance of the xNN compared to other ML models, such as GBMs and unconstrained FFNNs. We will also study the predictive performance lost when using the xNN as a surrogate model for more complex models.
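Tying the architecture of Section 3 to the regularization of Section 5.2, a hedged sketch of one training step with $\ell_1$ penalties on the projection and combination weights (this reuses the XNN class sketched earlier; the penalty strengths are placeholders, since the paper reports only that they were tuned):

```python
import torch

model = XNN(n_features=6, n_subnets=20)       # sized as in Example 1
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam_beta, lam_gamma = 1e-3, 1e-3              # placeholder penalty strengths

def train_step(x, y):
    opt.zero_grad()
    pred = model(x).squeeze(-1)
    mse = torch.mean((pred - y) ** 2)
    # Section 5.2: l1 on projections (beta_k) and combination weights (gamma_k)
    l1 = (lam_beta * model.projection.weight.abs().sum()
          + lam_gamma * model.combination.weight.abs().sum())
    (mse + l1).backward()
    opt.step()
    return float(mse)
```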
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8323702216148376, "perplexity": 707.1242460401404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00609.warc.gz"}
http://www.ck12.org/book/CK-12-Geometry-Second-Edition/r1/section/10.7/
# 10.7: Chapter 10 Review

Difficulty Level: At Grade | Created by: CK-12

Keywords, Theorems and Formulas

Perimeter: The distance around a shape; that is, the sum of all the edges of a two-dimensional figure.

Area of a Rectangle: The area of a rectangle is the product of its base (width) and height (length): $A=bh$.

Perimeter of a Rectangle: $P=2b+2h$, where $b$ is the base (or width) and $h$ is the height (or length).

Perimeter of a Square: $P=4s$

Area of a Square: $A=s^2$

Congruent Areas Postulate: If two figures are congruent, they have the same area. If a figure is composed of two or more parts that do not overlap each other, then the area of the figure is the sum of the areas of the parts.

Area of a Parallelogram: $A=bh$

Area of a Triangle: $A=\frac{1}{2}bh$ or $A=\frac{bh}{2}$

Area of a Trapezoid: The area of a trapezoid with height $h$ and bases $b_1$ and $b_2$ is $A=\frac{1}{2}h(b_1+b_2)$.

Area of a Rhombus: If the diagonals of a rhombus are $d_1$ and $d_2$, then the area is $A=\frac{1}{2}d_1d_2$.

Area of a Kite: If the diagonals of a kite are $d_1$ and $d_2$, then the area is $A=\frac{1}{2}d_1d_2$.

Area of Similar Polygons Theorem: If the scale factor of the sides of two similar polygons is $\frac{m}{n}$, then the ratio of the areas is $\left(\frac{m}{n}\right)^2$.

$\pi$: The ratio of the circumference of a circle to its diameter.

Circumference: If $d$ is the diameter or $r$ is the radius of a circle, then $C=\pi d$ or $C=2\pi r$.

Arc Length: The length of an arc, or a portion of a circle's circumference.

Arc Length Formula: length of $\widehat{AB}=\frac{m\widehat{AB}}{360^\circ}\cdot\pi d$ or $\frac{m\widehat{AB}}{360^\circ}\cdot2\pi r$

Area of a Circle: If $r$ is the radius of a circle, then $A=\pi r^2$.

Sector of a Circle: The area bounded by two radii and the arc between the endpoints of the radii.

Area of a Sector: If $r$ is the radius and $\widehat{AB}$ is the arc bounding a sector, then $A=\frac{m\widehat{AB}}{360^\circ}\cdot\pi r^2$.

Segment of a Circle: The area of a circle that is bounded by a chord and the arc with the same endpoints as the chord.

Perimeter of a Regular Polygon: If the length of a side is $s$ and there are $n$ sides in a regular polygon, then the perimeter is $P=ns$.

Apothem: A line segment drawn from the center of a regular polygon to the midpoint of one of its sides.
Area of a Regular Polygon: If there are $n$ sides with length $s$ in a regular polygon and $a$ is the apothem, then $A=\frac{1}{2}asn$ or $A=\frac{1}{2}aP$, where $P$ is the perimeter.

## Review Questions

Find the area and perimeter of the following figures. Round your answers to the nearest hundredth.

1. square
2. rectangle
3. rhombus
4. regular pentagon
5. parallelogram
6. regular dodecagon

1. triangle
2. kite
3. isosceles trapezoid
4. Find the area and circumference of a circle with radius 17.
5. Find the area and circumference of a circle with diameter 30.
6. Two similar rectangles have a scale factor $\frac{4}{3}$. If the area of the larger rectangle is $96\ \text{units}^2$, find the area of the smaller rectangle.

Find the area of the following figures. Round your answers to the nearest hundredth.

1. Find the shaded area (the figure is a rhombus).

## Texas Instruments Resources

In the CK-12 Texas Instruments Geometry FlexBook, there are graphing calculator activities designed to supplement the objectives for some of the lessons in this chapter. See http://www.ck12.org/flexr/chapter/9695.
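A quick worked example using two of the formulas from this review (the numbers are chosen arbitrarily for illustration). For a regular hexagon with side $s=4$, the apothem is $a=2\sqrt{3}$, so

$$P=ns=6\cdot4=24,\qquad A=\tfrac{1}{2}aP=\tfrac{1}{2}\cdot2\sqrt{3}\cdot24=24\sqrt{3}\approx41.57;$$

and for a sector with radius $r=6$ and a $60^\circ$ arc,

$$A=\frac{60^\circ}{360^\circ}\cdot\pi r^2=\frac{1}{6}\cdot36\pi=6\pi\approx18.85.$$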
{"extraction_info": {"found_math": true, "script_math_tex": 44, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9466343522071838, "perplexity": 2464.6158600448416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00322-ip-10-233-31-227.ec2.internal.warc.gz"}
https://paradigms.oregonstate.edu/search/?q=Electric%20Potential
##### Work By An Electric Field (Contour Map) | Small Group Activity, 30 min.

Students will estimate the work done by a given electric field. They will connect the work done to the height of a plastic surface graph of the electric potential.

##### Electric Field of Two Charged Plates | Small Group Activity, 30 min.

- Students need to understand that the surface represents the electric potential in the center of a parallel plate capacitor. Try doing the activity Electric Potential of Two Charged Plates before this activity.
- Students should know that:
  1. objects with like charge repel and objects with opposite charge attract,
  2. objects tend to move toward lower energy configurations,
  3. the potential energy of a charged particle is related to its charge: $U=qV$,
  4. the force on a charged particle is related to its charge: $\vec{F}=q\vec{E}$.

##### Electric Potential of Two Charged Plates | Small Group Activity, 30 min.

Students examine a plastic "surface" graph of the electric potential due to two charged plates (near the center of the plates) and explore the properties of the electric potential.

##### Charged Sphere | Small Group Activity, 30 min.

Students use a plastic surface representing the potential due to a charged sphere to explore the electrostatic potential, equipotential lines, and the relationship between potential and electric field.

##### Electrostatic potential of spherical shell | Computational Activity, 120 min. (Computational Physics Lab II 2022)

Students solve numerically for the potential due to a spherical shell of charge. Although this potential is straightforward to compute using Gauss's Law, it serves as a nice example for numerically integrating in spherical coordinates because the correct answer is easy to recognize.

##### Electric Field from a Rod | Homework (Static Fields 2022, 5 years)

Consider a thin charged rod of length $L$ standing along the $z$-axis with the bottom end on the $xy$-plane. The charge density $\lambda$ is constant. Find the electric field at the point $(0,0,2L)$.

##### Line Sources Using the Gradient | Homework (Static Fields 2022, 6 years)

1. Find the electric field around an infinite, uniformly charged, straight wire, starting from the following expression for the electrostatic potential: $$V(\vec r)=\frac{2\lambda}{4\pi\epsilon_0}\, \ln\left( \frac{ s_0}{s} \right)$$

##### Electric Field of a Finite Line | Homework

Consider the finite line with a uniform charge density from class.

1. Write an integral expression for the electric field at any point in space due to the finite line. In addition to your usual physics sense-making, you must include a clearly labeled figure and discuss what happens to the direction of the unit vectors as you integrate.
2. Perform the integral to find the $z$-component of the electric field. In addition to your usual physics sense-making, you must compare your result to the gradient of the electric potential we found in class. (If you want to challenge yourself, do the $s$-component as well!)

##### Electric Field Due to a Ring of Charge | Small Group Activity, 30 min. (Static Fields 2022, 8 years; Power Series Sequence (E&M); Ring Cycle Sequence)

Students work in groups of three to use Coulomb's Law $\vec{E}(\vec{r}) =\frac{1}{4\pi\epsilon_0}\int\frac{\rho(\vec{r}^{\,\prime})\left(\vec{r}-\vec{r}^{\,\prime}\right)}{\vert \vec{r}-\vec{r}^{\,\prime}\vert^3} \, d\tau^{\prime}$ to find an integral expression for the electric field, $\vec{E}(\vec{r})$, everywhere in space, due to a ring of charge. In an optional extension, students find a series expansion for $\vec{E}(\vec{r})$ either on the axis or in the plane of the ring, for either small or large values of the relevant geometric variable. Add an extra half hour or more to the time estimate for the optional extension.

##### Homework (Static Fields 2022, 6 years; Power Series Sequence (E&M))

Consider a collection of three charges arranged in a line along the $z$-axis: charges $+Q$ at $z=\pm D$ and charge $-2Q$ at $z=0$.

1. Find the electrostatic potential at a point $\vec{r}$ in the $xy$-plane at a distance $s$ from the center of the quadrupole. The formula for the electrostatic potential $V$ at a point $\vec{r}$ due to a charge $Q$ at the point $\vec{r'}$ is given by: $$V(\vec{r})=\frac{1}{4\pi\epsilon_0} \frac{Q}{\vert \vec{r}-\vec{r'}\vert}$$ Electrostatic potentials satisfy the superposition principle.
2. Assume $s\gg D$. Find the first two non-zero terms of a power series expansion to the electrostatic potential you found in the first part of this problem.

##### Equipotential Surfaces | Small Group Activity, 120 min.

Students are prompted to consider the scalar superposition of the electric potential due to multiple point charges. First a single point charge is discussed, then four positive charges, then an electric quadrupole. Students draw the equipotential curves in the plane of the charges, while also considering the 3D nature of equipotentials.

##### Electrostatic Potential Due to a Point Charge | Small White Board Question, 10 min. (Static Fields 2022, 2 years; Warm-Up)

##### A glass of water | Small Group Activity, 30 min. (Energy and Entropy 2021, 2 years)

Students generate a list of properties a glass of water might have. The class then discusses and categorizes those properties.

##### Homework

Consider the fields at a point $\vec{r}$ due to a point charge located at $\vec{r}'$.

1. Write down an expression for the electrostatic potential $V(\vec{r})$ at a point $\vec{r}$ due to a point charge located at $\vec{r}'$. (There is nothing to calculate here.)
2. Write down an expression for the electric field $\vec{E}(\vec{r})$ at a point $\vec{r}$ due to a point charge located at $\vec{r}'$. (There is nothing to calculate here.)
3. Working in rectangular coordinates, compute the gradient of $V$.
4. Write several sentences comparing your answers to the last two questions.

##### Number of Paths | Small Group Activity, 30 min.

Students discuss how many paths can be found on a map of the vector field $\vec{F}$ for which the integral $\int \vec{F}\cdot d\vec{r}$ is positive, negative, or zero; $\vec{F}$ is conservative. They do a similar activity for the vector field $\vec{G}$, which is not conservative.

##### Line Sources Using Coulomb's Law | Homework (Static Fields 2022, 6 years)

1. Find the electric field around a finite, uniformly charged, straight rod, at a point a distance $s$ straight out from the midpoint, starting from Coulomb's Law.
2. Find the electric field around an infinite, uniformly charged, straight rod, starting from the result for a finite rod.

##### Potential vs. Potential Energy | Homework (Static Fields 2022, 6 years)

In this course, two of the primary examples we will be using are the potential due to gravity and the potential due to an electric charge. Both of these forces vary like $\frac{1}{r}$, so they will have many, many similarities. Most of the calculations we do for the one case will be true for the other. But there are some extremely important differences:

1. Find the value of the electrostatic potential energy of a system consisting of a hydrogen nucleus and an electron separated by the Bohr radius. Find the value of the gravitational potential energy of the same two particles at the same radius. Use the same system of units in both cases. Compare and contrast the two answers.
2. Find the value of the electrostatic potential due to the nucleus of a hydrogen atom at the Bohr radius. Find the gravitational potential due to the nucleus at the same radius. Use the same system of units in both cases. Compare and contrast the two answers.
3. Briefly discuss at least one other fundamental difference between electromagnetic and gravitational systems. Hint: Why are we bound to the earth gravitationally, but not electromagnetically?

##### The Gradient for a Point Charge | Homework (Static Fields 2022, 6 years)

The electrostatic potential due to a point charge at the origin is given by: $$V=\frac{1}{4\pi\epsilon_0} \frac{q}{r}$$

1. Find the electric field due to a point charge at the origin as a gradient in rectangular coordinates.
2. Find the electric field due to a point charge at the origin as a gradient in spherical coordinates.
3. Find the electric field due to a point charge at the origin as a gradient in cylindrical coordinates.

##### Central Forces Introduction: Lecture Notes | Lecture, 5 min. (Central Forces 2023, 2 years)

##### Magnetic Vector Potential Due to a Spinning Charged Ring | Small Group Activity, 30 min. (Static Fields 2022, 6 years; Power Series Sequence (E&M); Ring Cycle Sequence)

Students work in groups of three to use the superposition principle $\vec{A}(\vec{r}) =\frac{\mu_0}{4\pi}\int\frac{\vec{J}(\vec{r}^{\,\prime})}{\vert \vec{r}-\vec{r}^{\,\prime}\vert}\, d\tau^{\prime}$ to find an integral expression for the magnetic vector potential, $\vec{A}(\vec{r})$, due to a spinning ring of charge. In an optional extension, students find a series expansion for $\vec{A}(\vec{r})$ either on the axis or in the plane of the ring, for either small or large values of the relevant geometric variable. Add an extra half hour or more to the time estimate for the optional extension.
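Several of the homework prompts above ask for the electric field as the negative gradient of a point-charge potential in different coordinate systems. As a quick cross-check of the rectangular-coordinate case, here is a small sketch of my own using sympy (it is not part of the course materials):

```python
import sympy as sp

x, y, z, q, eps0 = sp.symbols('x y z q epsilon_0', positive=True)

# Point-charge potential V = q / (4 pi eps0 r) in rectangular coordinates
r = sp.sqrt(x**2 + y**2 + z**2)
V = q / (4 * sp.pi * eps0 * r)

# E = -grad V, computed component by component
E = [sp.simplify(-sp.diff(V, var)) for var in (x, y, z)]
print(E)
# Each component is q*var/(4*pi*epsilon_0*(x^2+y^2+z^2)^(3/2)),
# i.e. E = q rhat / (4 pi eps0 r^2), as expected.
```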
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9250677824020386, "perplexity": 435.568817965466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00620.warc.gz"}
https://ccrma.stanford.edu/~bilbao/master/node113.html
The Waveguide Mesh

Consider the original form of the (2+1)D waveguide network, or mesh [198], operating on a rectilinear grid. Each scattering junction (parallel) is connected to four neighbors by unit-sample bidirectional delay lines. The spacing of the junctions is the same in either direction, and the time delay in each delay line is one sample (see Figure 4.19). We now index each junction (and all its associated voltages, currents, and wave quantities) by a pair of integer grid coordinates.

As in the (1+1)D case, at each parallel junction we have voltages at every port, together with the current flows in the waveguides leading east, west, north, and south, as well as the corresponding wave quantities at each port; variables with one superscript refer to the incoming waves, and those with the other marking to outgoing waves. The voltage and current waves are related by (4.67), where the coefficient appearing there is the admittance of the waveguide connected to the junction in the given direction. The junction admittance is then given by (4.68), and the scattering equation for voltage waves, from (4.15), is (4.69), applied at each of the four ports. Voltage waves are propagated along the delay lines; the case of flow waves is similar except for a sign inversion. The complete picture is shown in Figure 4.19.

Similarly to the (1+1)D case, it is possible to obtain a finite difference scheme purely in terms of the junction voltages, under the assumption that the admittances of all the waveguides in the network are identical and equal to some positive constant (see (4.56)). The resulting recursion for the junction voltages is then identical to the earlier finite difference scheme under the appropriate identification of variables.

If we now replace all the bidirectional delay lines in Figure 4.19 by the same split pair of lines shown in Figure 4.11, then we get the arrangement in Figure 4.20. We have placed the split lines such that the branches containing sign inversions are adjacent to the western and southern ports of the parallel junctions. We also introduce new junction variables at the series junctions between two horizontal half-sample waveguides and at the series junctions between two vertical delay lines, as well as all the associated wave quantities at the ports of the new series junctions. It is straightforward to show that, upon identifying the series-junction variables with the corresponding intermediate variables of the difference scheme, the mesh will calculate according to scheme (4.52) with constant coefficients, for a suitable choice of the common impedance of the delay lines. We are again at the magic time step, but the impedance has been set to be larger than the characteristic impedance of the medium. Also, notice that the speed of propagation along the delay lines is not the wave speed of the medium. Such a mesh is called a slow-wave structure [90] in the TLM literature. At this point, it is useful to compare Figures 4.20 and 4.18.

Stefan Bilbao, 2002-01-22
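For a four-port parallel junction with equal admittances, the scattering step has a simple closed form: the junction voltage is half the sum of the incoming voltage waves, and each outgoing wave is the junction voltage minus the corresponding incoming wave. Below is a minimal numpy sketch of one time step of such a rectilinear mesh (my own illustration under these standard equal-admittance assumptions; the array names are not from the text, and the boundaries are made periodic purely for brevity):

```python
import numpy as np

def scatter(vin):
    """One scattering step at every junction of a rectilinear waveguide mesh.

    vin has shape (4, Nx, Ny): incoming voltage waves at each junction's
    four ports (0=east, 1=west, 2=north, 3=south), all with equal admittance.
    """
    vj = 0.5 * vin.sum(axis=0)   # junction voltage: v_J = (2/N) * sum(v_in), N = 4
    vout = vj - vin              # outgoing wave = junction voltage - incoming wave
    return vj, vout

def propagate(vout):
    """Unit-sample delay lines: each outgoing wave arrives at the neighboring
    junction one sample later (np.roll makes the boundaries periodic)."""
    vin = np.empty_like(vout)
    vin[0] = np.roll(vout[1], -1, axis=0)  # from the east neighbor's west port
    vin[1] = np.roll(vout[0],  1, axis=0)  # from the west neighbor's east port
    vin[2] = np.roll(vout[3], -1, axis=1)  # from the north neighbor's south port
    vin[3] = np.roll(vout[2],  1, axis=1)  # from the south neighbor's north port
    return vin
```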
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313505291938782, "perplexity": 1210.2329646286353}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397873.63/warc/CC-MAIN-20160624154957-00031-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/closed-and-open-subsets.648592/
# Homework Help: Closed and open subsets

1. Oct 31, 2012

### gavbacardi

1. $\{(x,y) \in R^2 \text{ such that } 2x+y \le 2,\ x-y > 4\}$

Determine whether this subset of $R^2$ is open, closed or neither open nor closed.

2. I think this is an open subset but I am not sure how to prove it. I have rearranged the equations to give x>2, y<=-2x+1, y<x-1. I think it is open because x can get closer and closer to 2 but never equal it, and y can get closer and closer to x-1 but never equal it. I'm not sure how to prove this mathematically though? Any help or hints would be great!

2. Oct 31, 2012

### Zondrina

First things first: draw what your set looks like by rearranging your equations a bit. This will allow you to see whether or not your set is open. Remember, a set is open if every point is an interior point; that is, for any point you choose, some neighborhood around it contains only points from the set.
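As a quick numerical illustration of the interior-point criterion (a sketch of my own, not part of the thread): sample points in a small disc around a candidate point and check whether they all satisfy the defining inequalities. A point of the set lying on the boundary piece $2x+y=2$ fails the test, which is numerical evidence that the set is not open.

```python
import numpy as np

def in_set(x, y):
    # the set {(x, y) : 2x + y <= 2 and x - y > 4}
    return (2 * x + y <= 2) and (x - y > 4)

def looks_interior(x, y, eps=1e-3, trials=10_000, seed=0):
    """Monte Carlo check: do all sampled points within distance eps lie in the set?"""
    rng = np.random.default_rng(seed)
    ang = rng.uniform(0, 2 * np.pi, trials)
    rad = eps * np.sqrt(rng.uniform(0, 1, trials))
    return all(in_set(x + r * np.cos(a), y + r * np.sin(a)) for r, a in zip(rad, ang))

print(in_set(3, -4), looks_interior(3, -4))      # True False: in the set, on its boundary
print(in_set(10, -20), looks_interior(10, -20))  # True True: an interior point
```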
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318686485290527, "perplexity": 445.43726272314433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865081.23/warc/CC-MAIN-20180623132619-20180623152619-00098.warc.gz"}
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=16A60&jrnl=one&onejrnl=proc
AMS eContent Search Results

Matches for: msc=(16A60) AND publication=(proc). Results: 1 to 20 of 20 found. All of the articles below are available free of charge (PDF).

[1] A. N. Dranishnikov. On the virtual cohomological dimensions of Coxeter groups. Proc. Amer. Math. Soc. 125 (1997) 1885-1891. MR 1422863.
[2] Martin Lorenz. On the global dimension of fixed rings. Proc. Amer. Math. Soc. 106 (1989) 923-932. MR 972235.
[3] Dan Zacharia. A characterization of Artinian rings whose endomorphism rings have finite global dimension. Proc. Amer. Math. Soc. 104 (1988) 37-38. MR 958038.
[4] Robert L. Snider. Noncommutative regular local rings of dimension 3. Proc. Amer. Math. Soc. 104 (1988) 49-50. MR 958041.
[5] Ellen Kirkman and James Kuzmanovich. On the global dimension of a ring modulo its nilpotent radical. Proc. Amer. Math. Soc. 102 (1988) 25-28. MR 915709.
[6] Edward Formanek and A. H. Schofield. Groups acting on the ring of two $2\times 2$ generic matrices and a coproduct decomposition of its trace ring. Proc. Amer. Math. Soc. 95 (1985) 179-183. MR 801319.
[7] W. D. Burgess, K. R. Fuller, E. R. Voss and B. Zimmermann-Huisgen. The Cartan matrix as an indicator of finite global dimension for Artinian rings. Proc. Amer. Math. Soc. 95 (1985) 157-165. MR 801315.
[8] K. Igusa and G. Todorov. Preprojective partitions and the determinant of the Hom matrix. Proc. Amer. Math. Soc. 94 (1985) 189-197. MR 784160.
[9] Kenneth A. Brown and R. B. Warfield. Krull and global dimensions of fully bounded Noetherian rings. Proc. Amer. Math. Soc. 92 (1984) 169-174. MR 754696.
[10] K. R. Goodearl and L. W. Small. Krull versus global dimension in Noetherian P.I. rings. Proc. Amer. Math. Soc. 92 (1984) 175-178. MR 754697.
[11] George V. Wilson. Ultimately closed projective resolutions and rationality of Poincaré-Betti series. Proc. Amer. Math. Soc. 88 (1983) 221-223. MR 695246.
[12] Robert F. Damiano. The global dimension of FBN rings with enough clans. Proc. Amer. Math. Soc. 86 (1982) 25-28. MR 663859.
[13] P. F. Smith. Rings with every proper image a principal ideal ring. Proc. Amer. Math. Soc. 81 (1981) 347-352. MR 597637.
[14] Jürgen Herzog and Manfred Steurich. Two applications of change of rings theorems for Poincaré series. Proc. Amer. Math. Soc. 73 (1979) 163-168. MR 516457.
[15] M. Boratyński. A change of rings theorem and the Artin-Rees property. Proc. Amer. Math. Soc. 53 (1975) 307-310. MR 0401840.
[16] Mark Ramras. Injective dimension of quaternion orders. Proc. Amer. Math. Soc. 38 (1973) 493-498. MR 0313293.
[17] Joseph A. Wehlen. Triangular matrix algebras over Hensel rings. Proc. Amer. Math. Soc. 37 (1973) 69-74. MR 0308196.
[18] John D. Fuelberth and Mark L. Teply. A splitting ring of global dimension two. Proc. Amer. Math. Soc. 35 (1972) 317-324. MR 0306264.
[19] William R. Nico. An improved upper bound for global dimension of semigroup algebras. Proc. Amer. Math. Soc. 35 (1972) 34-36. MR 0296182.
[20] Joseph A. Wehlen. Cohomological dimension and global dimension of algebras. Proc. Amer. Math. Soc. 32 (1972) 75-80. MR 0291226.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8905119299888611, "perplexity": 1412.7194692785301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892238.78/warc/CC-MAIN-20180123191341-20180123211341-00240.warc.gz"}
https://en.wikipedia.org/wiki/Degenerate_form
# Degenerate bilinear form

In mathematics, specifically linear algebra, a degenerate bilinear form $f(x, y)$ on a vector space $V$ is a bilinear form such that the map from $V$ to $V^*$ (the dual space of $V$) given by $v \mapsto (x \mapsto f(x, v))$ is not an isomorphism. An equivalent definition when $V$ is finite-dimensional is that the form has a non-trivial kernel: there exists some non-zero $x$ in $V$ such that $f(x,y)=0$ for all $y \in V$.

## Non-degenerate forms

A nondegenerate or nonsingular form is one that is not degenerate, meaning that $v \mapsto (x \mapsto f(x, v))$ is an isomorphism; equivalently, in finite dimensions, $f$ is nondegenerate if and only if $f(x,y)=0$ for all $y \in V$ implies that $x = 0$.

## Using the determinant

If $V$ is finite-dimensional then, relative to some basis for $V$, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero, that is, if and only if the matrix is singular; accordingly, degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis.

## Related notions

There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings.

## Examples

The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map $V \to V^*$ be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold.

## Infinite dimensions

Note that in an infinite-dimensional space, we can have a bilinear form $f$ for which $v \mapsto (x \mapsto f(x, v))$ is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form $f(\phi, \psi) = \int \psi(x) \phi(x) \, dx$ is not surjective: for instance, the Dirac delta functional is in the dual space but is not of the required form. On the other hand, this bilinear form satisfies: $f(\phi, \psi) = 0$ for all $\phi$ implies that $\psi = 0$.

## Terminology

If $f$ vanishes identically on all pairs of vectors, it is said to be totally degenerate. Given any bilinear form $f$ on $V$, the set of vectors $\{ x \in V \mid f(x,y)=0 \text{ for all } y \in V \}$ forms a totally degenerate subspace of $V$. The map $f$ is nondegenerate if and only if this subspace is trivial.

Sometimes the words anisotropic, isotropic and totally isotropic are used for nondegenerate, degenerate and totally degenerate respectively, although definitions of these latter terms can be a bit ambiguous: a vector $x \in V$ such that $f(x,x)=0$ is called isotropic for the quadratic form associated with the bilinear form $f$, but such vectors can arise even if the bilinear form has no nonzero isotropic vectors.

Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular.
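As a small computational companion to the determinant criterion above (a sketch of my own, not part of the article): represent a bilinear form on $R^n$ by its matrix in some basis and test degeneracy via the determinant.

```python
import numpy as np

def is_degenerate(B, tol=1e-12):
    """The bilinear form f(x, y) = x^T B y on R^n is degenerate
    iff its matrix B (relative to any basis) is singular."""
    return abs(np.linalg.det(B)) < tol

print(is_degenerate(np.eye(3)))        # False: the standard inner product is nondegenerate

B = np.diag([1.0, 1.0, 0.0])           # kernel contains the nonzero vector (0, 0, 1)
print(is_degenerate(B))                # True
print(B @ np.array([0.0, 0.0, 1.0]))   # [0. 0. 0.]: f(x, e3) = 0 for every x
```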
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 15, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9709201455116272, "perplexity": 260.456890756881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.65/warc/CC-MAIN-20160723071028-00246-ip-10-185-27-174.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?p=4272054
# Maximum Charge on Designed Capacitor

by gvjt. Tags: capacitance, design constraint, dielectric

**gvjt** (P: 10): I'll spare you all the details, but suppose for some reason (too complicated to get into here) I needed to design a capacitor with certain fixed parameters and only one adjustable aspect, and I wanted the design to maximize the total charge the device could store. According to my calculations, changing the adjustable parameter described below does not seem to make any difference to the answer, and I'm suspicious that I am making an error, hence my post.

Here are the fixed parameters:

- the plate area A
- the distance between the plates d
- the space between is filled with two dielectrics as follows: i) Strontium Titanate (K = 300, breakdown field 8e6 V/m); ii) dry air at STP (K = 1, breakdown field 3e6 V/m)
- the Strontium Titanate is in the middle, with equal-sized air gaps before each plate
- the air gaps are required in the design and must be at least 0.001d each.

The question becomes: what fraction of d should the thickness of the Strontium Titanate be to maximize the total charge the capacitor can hold? The issue is that C increases with more Strontium Titanate, but the air gap has a higher voltage gradient, and the maximum charging voltage falls as the air gap decreases. Since Q = CV, I suspect that Q is a constant in this case, but I wanted to set up the equations to prove this, and I'm having some trouble making sense of it all.

Now don't ask why the air gap is required; that's an independent issue outside the scope of the problem and is simply a design constraint. If it could be eliminated, then clearly the entire dielectric could just be the Strontium Titanate, and then this would give the ideal situation. And this is why I'm confused, because it seems that as soon as we insist on the presence of the air gap, the amount of the other material doesn't seem to affect the total charge attainable. On the one hand, this result seems plausible because the voltage gradient is much lower in the Strontium Titanate, leaving the bulk of the charging voltage across the thin air gap and limiting the charging voltage substantially. But on the other hand it seems fishy that things would change so suddenly and drastically due to the presence of this required air gap, and that's why I'm suspicious that something is amiss.

If the air gap wasn't there, Vmax would become 8e6 * d and C would be a maximum, giving Q = CV the highest possible value. However, once the air gap is introduced, Vmax drops substantially. As the air gap is increased, Vmax increases, but C falls accordingly, and Q seems to be constant as a result.

In my attempts at working on this, I recognized that the voltage gradient (delta V / delta d) is 300 times lower in the Strontium Titanate than in the air, but that the voltage is the gradient times the thickness of the layer. So I think I did all that part correctly. -gt-

**Zondrina** (P: 371): Look at the electric flux density, D = ϵE. If there is an air gap and you assume that the air in that gap breaks down at 3 MV/m, then you get 3e6 V/m * 8.85e-12 C/Vm = 2.655e-5 C/m^2, so approximately 26 µC per square meter. That would mean that if the capacitor had an area of one m^2 it could hold a charge of 26 µC independent of the thickness of the Strontium Titanate. But that must indeed be flawed, since we know for a fact that the maximum charge increases by a lot if the width of the air gap approaches zero. The solution is simple: the breakdown strength of an air gap depends on its width. Due to the way air breaks down (electron avalanche, http://en.wikipedia.org/wiki/Electron_avalanche), the maximum field strength the air gap can withstand becomes a lot higher than 3 MV/m if the width of that gap is very small. A microscopically small air gap can easily withstand many hundreds of MV/m. This is known as Paschen's law. http://www.physics.nus.edu.sg/~L3000...%20physics.pdf

Plus, even if the air should break down and the charge jumps over to the surface of the Strontium Titanate, the capacitor would still be charged, as Walter Lewin demonstrates here with a Leyden jar: https://www.youtube.com/watch?v=E185G_JBd7U#t=37m50s. And he explains it in detail in this video: https://www.youtube.com/watch?v=MZOaVXmK5zk#t=32m55s

**gvjt** (P: 10): Let me check to make sure I'm understanding what you said: so the 3e6 V/m for air is only valid until the gap gets below some threshold, whereupon it actually gets significantly higher, and this was the fact that I was missing? Finally, when you said, "But that must indeed be flawed," what this refers to is ambiguous, so can you help me with interpreting it? I.e., was the flaw just not understanding that things become different as the gap begins to close up, or was the flaw the statement that the maximum charge (for larger gaps) is a constant? I think it was the former and not the latter, but I just want to be clear. Thanks a lot for the prompt reply! I really appreciate it! -gt-

**Zondrina** (P: 371): What I said was: the statement that the maximum charge density remains constant at 26 µC/m^2 must be flawed. The equation for calculating the maximum voltage that a capacitor can hold is V = Eds * d, where Eds is the breakdown strength of the dielectric. That equation, however, ignores the gap between the dielectric and the metal plates. But since that gap is usually microscopically small, it can safely be ignored, since its breakdown strength will be higher than that of the dielectric. And I guess that's the fact that you missed.

**gvjt** (P: 10): What I still can't tell from your reply is if the statement is flawed generally, or if you meant it was flawed only when the gap starts to become small. So let me rephrase: if the air gap is sufficiently large so that the breakdown voltage is indeed 3e6 V/m, then should the maximum charge be independent of the thickness of the Strontium Titanate, as my calculations seemed to show, or am I doing something wrong there?

So far, you've cleared up nicely for me the issue of what happens when the air gap becomes small, and why this means that things don't (as I thought) "change so suddenly and drastically." So I now understand that part of all of this. But I'm still not certain from what you explained what happens when the air gap is large. For that case, is the maximum charge the cap can hold independent of the various ratios of thickness of the two dielectric materials?

In my calculations, I calculated the breakdown voltage of the capacitor by determining what voltage across the combination of dielectrics would reach the point where it would arc through the air. Note this is higher than the thickness of the air times 3e6 V/m, because there will be some drop across the other dielectric; but since K = 300 there, the additional voltage will be proportional to the thickness of the SrTiO3, but inversely proportional to its K value, so it's not straightforward to get it right. The breakdown voltage of the device will be higher than the breakdown voltage of the air gap, however.

In any case, C is proportional to K(air) and to K(SrTiO3) and inversely proportional to d(air) and d(SrTiO3). This is the opposite relationship to that between the K's and d's and the breakdown voltage V. So as the V for the cap rises, the C falls and vice versa, so it seems from the equations that CV is constant (again ASSUMING the air gap is not approaching zero). Does this make sense? -gt-

**gvjt** (P: 10): Oh, I should add that allowing the device to have a spark arcing across the air gap is not acceptable, so this is why I calculated Vmax for the device based on whatever maximum total voltage would be just below the amount that would cause a breakdown of either dielectric. I guess it's always going to be the air in this case. So it works out to somewhat higher than 3e6 * d(air) generally.

**Zondrina** (P: 371):

> Quote by gvjt: So let me rephrase: If the air gap is sufficiently large so that the breakdown voltage is indeed 3e6 V/m, then should the maximum charge be independent of the thickness of the Strontium Titanate as my calculations seemed to show, or am I doing something wrong there?

If the air gap is large (1 mm already counts as large here), the maximum charge you can put on there without the air breaking down remains constant. I already mentioned before that D = ϵE in air, and D * Area gives you the charge of the capacitor. The distance between the plates is not even in those equations. Did you watch this video?

> Quote by gvjt: In my calculations, I calculated the breakdown voltage of the capacitor by determining what voltage across the combination of dielectrics would reach the point where it would arc through the air. [...] so it's not straightforward to get it right.

It's relatively simple (ag = air gap, st = Strontium Titanate, d = thickness):

V = Vag + Vst
Vag = Eag * dag
Vst = Est * dst
Est = Eag / 300

Now putting those equations together:

V = Eag * dag + (Eag / 300) * dst = (dag + dst / 300) * Eag
Eag = V / (dag + dst / 300)

**gvjt** (P: 10): I finally did watch the video. Walter Lewin is fantastic, and I've watched most of the Physics 8.02 video lectures that he made on that site. I always quote bits and pieces from his videos for my students. (I'm actually a math/compsci professor with an engineering background and a PhD in neuroscience, but I teach at a small university with no physics program, and have been the only one remotely qualified in our faculty to teach the single intro-to-physics course we have here.) So I've depended on Walter Lewin's lectures to help "fill in the gaps" for me.

Anyway, the equations you used above were indeed the same as mine, although I described them as "not straightforward" while you described them as "relatively simple" :) Also, I hadn't stopped to realize that the final answer depends only on the plate area, and that the maximum charge depends only on A and not on d, but I completely see that now. So, thanks to your generous dedication of time, I have a much clearer idea of what's going on here, and I really appreciate your contribution to the forum. Thanks a lot!

**Zondrina** (P: 371): Glad I could help. By the way, Richard Muller's lectures are also really great for a physics intro course.
https://www.youtube.com/watch?v=6ysbZ_j2xi0
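To make the large-gap conclusion concrete, here is a small sketch of my own (not from the thread) that evaluates Zondrina's expression for the air-gap field and the resulting plate charge per unit area at the onset of breakdown, under the simple large-gap assumption that air breaks down at 3 MV/m (i.e., ignoring the Paschen small-gap correction discussed above):

```python
eps0 = 8.854e-12        # vacuum permittivity, F/m
K_st = 300.0            # relative permittivity of SrTiO3
E_air_max = 3e6         # large-gap breakdown field of air, V/m

def max_voltage(d_air, d_st):
    """Largest applied voltage before the air gap reaches E_air_max:
    V = (d_air + d_st / K_st) * E_air, from the voltage-divider relation above."""
    return (d_air + d_st / K_st) * E_air_max

def max_charge_per_area():
    """D = eps0 * E in the air gap equals the plate charge density, so the
    maximum charge per unit area depends only on the air field at breakdown."""
    return eps0 * E_air_max

for d_st in (0.0, 1e-3, 5e-3):
    print(d_st, max_voltage(d_air=1e-3, d_st=d_st), max_charge_per_area())
# V_max grows with the SrTiO3 thickness, but the charge density stays
# ~2.66e-5 C/m^2, i.e. about 26.6 µC per square meter of plate,
# matching the thread's estimate.
```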
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375529646873474, "perplexity": 581.8564944765668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/346605/expectation-of-a-quadratic-function-of-a-matrix-variate-normal-distribution
# expectation of a quadratic function of a matrix variate normal distribution

I want to compute the following expectation term: $$E[{\bf{XA}}{{\bf{X}}^T}]$$ where $${\bf X} \in R^{M \times M}$$ and its elements are normal random variables such that $$vec\left( {\bf{X}} \right)\sim \cal N\left( {\boldsymbol \mu ,\bf \Sigma } \right)$$ Here $$\bf A$$ is a positive definite matrix of appropriate dimensions and $$vec(.)$$ is the vectorization operator. Any hint on how I can derive a nice formula?

**Answer.** $$\newcommand{\X}{\mathbf X} \newcommand{\si}{\sigma}$$ Suppose that $$A:=(a_{ij})_{i,j=1}^m$$ is an $$m\times m$$ matrix and $$\X=(X_{ij})_{i,j=1}^m$$ is a random $$m\times m$$ matrix with $$EX_{ij}=\mu_{ij}$$ and $$Cov(X_{ij},X_{kl})=\si_{ij,kl}$$. Then the $$il$$-entry of the matrix $$E\X A\X^T$$ is $$(E\X A\X^T)_{il}=\sum_{j,k}EX_{ij}a_{jk}X_{lk} =\sum_{j,k}a_{jk}(\mu_{ij}\mu_{lk}+\si_{ij,lk}) =(MAM^T)_{il}+\sum_{j,k}a_{jk}\si_{ij,lk},$$ where $$M:=(\mu_{ij})_{i,j=1}^m$$. So, $$E\X A\X^T=MAM^T+R,$$ where $$R:=(r_{il})_{i,l=1}^m$$ with $$r_{il}:=\sum_{j,k}a_{jk}\si_{ij,lk}$$.
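A quick Monte Carlo check of this formula (a sketch of my own; it assumes the column-major vec convention, so that $Cov(X_{ij}, X_{kl})$ sits at entry $[jm+i,\ km+l]$ of $\Sigma$ with 0-based indices):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3

A = rng.standard_normal((m, m))
A = A @ A.T + m * np.eye(m)        # make A positive definite, as in the question
M = rng.standard_normal((m, m))    # mean matrix, vec(M) = mu
L = rng.standard_normal((m * m, m * m))
Sigma = L @ L.T + np.eye(m * m)    # covariance of vec(X)

# Sample vec(X) ~ N(vec(M), Sigma); vec stacks columns, so X = v.reshape(m, m, order="F")
N = 200_000
V = rng.multivariate_normal(M.flatten(order="F"), Sigma, size=N)
X = V.reshape(N, m, m).transpose(0, 2, 1)      # row-major reshape + transpose = column-major unvec

mc = np.einsum('nij,jk,nlk->il', X, A, X) / N  # Monte Carlo estimate of E[X A X^T]

S4 = Sigma.reshape(m, m, m, m)                 # S4[j, i, k, l] = Cov(X_ij, X_lk)
R = np.einsum('jk,jikl->il', A, S4)            # r_il = sum_{j,k} a_jk * Cov(X_ij, X_lk)
theory = M @ A @ M.T + R

print(np.max(np.abs(mc - theory)))  # small, shrinking like 1/sqrt(N)
```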
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958808422088623, "perplexity": 52.69609473087246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400222515.48/warc/CC-MAIN-20200925053037-20200925083037-00371.warc.gz"}
http://civilservicereview.com/2015/04/how-to-solve-digit-problems-part-ii/
# How to Solve Digit Problems Part II

In the previous post, we have discussed the basics of digit problems. We have learned the decimal number system, or the number system that we use every day. In this system, each digit is multiplied by powers of 10. For instance, 871 means $(8 \times 10^2) + (7 \times 10^1) + (1 \times 10^0)$. Recall that $10^0 = 1$.

In this post, we continue this series by providing another detailed example.

**Problem**

The sum of the digits of a 2-digit number is $9$. If the digits are reversed, the new number is $45$ more than the original number. What are the numbers?

**Solution and Discussion**

If the tens digit of the number is $x$, then the ones digit is $9 - x$ (can you see why?). Since the tens digit is multiplied by $10$, the original number can be represented as $10x + (9 - x)$. Simplifying the previous expression, we have

10x - x + 9 = 9x + 9.

Now, if we reverse the number, then $9 - x$ becomes the tens digit and the ones digit becomes $x$. So, multiplying the tens digit by 10, we have $10(9 - x) + x$. Simplifying the expression, we have

90 - 10x + x = 90 - 9x.

As shown in the problem, the new number (the reversed number) is $45$ more than the original number. Therefore,

reversed number - original number = 45.

Substituting the expressions above, we have

90 - 9x - (9x + 9) = 45.

Simplifying, we have

$90 - 9x - 9x - 9 = 45$
$81 - 18x = 45$
$18x = 81 - 45$
$18x = 36$
$x = 2$.

Therefore, the tens digit of the original number is 2 and the ones digit is $9 - 2 = 7$. So, the original number is $27$ and the reversed number is $72$. Now, the problem says that the new number is $45$ more than the original number. And this is correct since $72 - 27 = 45$.
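For readers who like to verify such answers by exhaustion, here is a tiny brute-force sketch (my own illustration, not part of the original post) that searches all 2-digit numbers for the stated conditions:

```python
# Brute-force check: digit sum 9 and reversed number exceeding the original by 45
for n in range(10, 100):
    tens, ones = divmod(n, 10)
    reversed_n = 10 * ones + tens
    if tens + ones == 9 and reversed_n - n == 45:
        print(n, reversed_n)   # prints: 27 72
```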
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8006108999252319, "perplexity": 337.4062390132758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512400.59/warc/CC-MAIN-20181019124748-20181019150248-00122.warc.gz"}
https://terrytao.wordpress.com/2015/07/15/cycles-of-a-random-permutation-and-irreducible-factors-of-a-random-polynomial/
In analytic number theory, there is a well known analogy between the prime factorisation of a large integer, and the cycle decomposition of a large permutation; this analogy is central to the topic of “anatomy of the integers”, as discussed for instance in this survey article of Granville. Consider for instance the following two parallel lists of facts (stated somewhat informally). Firstly, some facts about the prime factorisation of large integers: • Every positive integer ${m}$ has a prime factorisation $\displaystyle m = p_1 p_2 \dots p_r$ into (not necessarily distinct) primes ${p_1,\dots,p_r}$, which is unique up to rearrangement. Taking logarithms, we obtain a partition $\displaystyle \log m = \log p_1 + \log p_2 + \dots + \log p_r$ of ${\log m}$. • (Prime number theorem) A randomly selected integer ${m}$ of size ${m \sim N}$ will be prime with probability ${\approx \frac{1}{\log N}}$ when ${N}$ is large. • If ${m \sim N}$ is a randomly selected large integer of size ${N}$, and ${p = p_i}$ is a randomly selected prime factor of ${m = p_1 \dots p_r}$ (with each index ${i}$ being chosen with probability ${\frac{\log p_i}{\log m}}$), then ${\log p_i}$ is approximately uniformly distributed between ${0}$ and ${\log N}$. (See Proposition 9 of this previous blog post.) • The set of real numbers ${\{ \frac{\log p_i}{\log m}: i=1,\dots,r \}}$ arising from the prime factorisation ${m = p_1 \dots p_r}$ of a large random number ${m \sim N}$ converges (away from the origin, and in a suitable weak sense) to the Poisson-Dirichlet process in the limit ${N \rightarrow \infty}$. (See the previously mentioned blog post for a definition of the Poisson-Dirichlet process, and a proof of this claim.) Now for the facts about the cycle decomposition of large permutations: • Every permutation ${\sigma \in S_n}$ has a cycle decomposition $\displaystyle \sigma = C_1 \dots C_r$ into disjoint cycles ${C_1,\dots,C_r}$, which is unique up to rearrangement, and where we count each fixed point of ${\sigma}$ as a cycle of length ${1}$. If ${|C_i|}$ is the length of the cycle ${C_i}$, we obtain a partition $\displaystyle n = |C_1| + \dots + |C_r|$ of ${n}$. • (Prime number theorem for permutations) A randomly selected permutation of ${S_n}$ will be an ${n}$-cycle with probability exactly ${1/n}$. (This was noted in this previous blog post.) • If ${\sigma}$ is a random permutation in ${S_n}$, and ${C_i}$ is a randomly selected cycle of ${\sigma}$ (with each ${i}$ being selected with probability ${|C_i|/n}$), then ${|C_i|}$ is exactly uniformly distributed on ${\{1,\dots,n\}}$. (See Proposition 8 of this blog post.) • The set of real numbers ${\{ \frac{|C_i|}{n} \}}$ arising from the cycle decomposition ${\sigma = C_1 \dots C_r}$ of a random permutation ${\sigma \in S_n}$ converges (in a suitable sense) to the Poisson-Dirichlet process in the limit ${n \rightarrow \infty}$. (Again, see this previous blog post for details.) See this previous blog post (or the aforementioned article of Granville, or the Notices article of Arratia, Barbour, and Tavaré) for further exploration of the analogy between prime factorisation of integers and cycle decomposition of permutations. There is however something unsatisfying about the analogy, in that it is not clear why there should be such a kinship between integer prime factorisation and permutation cycle decomposition. 
It turns out that the situation is clarified if one uses another fundamental analogy in number theory, namely the analogy between integers and polynomials ${P \in {\mathbf F}_q[T]}$ over a finite field ${{\mathbf F}_q}$, discussed for instance in this previous post; this is the simplest case of the more general function field analogy between number fields and function fields. Just as we restrict attention to positive integers when talking about prime factorisation, it will be reasonable to restrict attention to monic polynomials ${P}$. We then have another analogous list of facts, proven very similarly to the corresponding list of facts for the integers: • Every monic polynomial ${f \in {\mathbf F}_q[T]}$ has a factorisation $\displaystyle f = P_1 \dots P_r$ into irreducible monic polynomials ${P_1,\dots,P_r \in {\mathbf F}_q[T]}$, which is unique up to rearrangement. Taking degrees, we obtain a partition $\displaystyle \hbox{deg} f = \hbox{deg} P_1 + \dots + \hbox{deg} P_r$ of ${\hbox{deg} f}$. • (Prime number theorem for polynomials) A randomly selected monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ will be irreducible with probability ${\approx \frac{1}{n}}$ when ${q}$ is fixed and ${n}$ is large. • If ${f \in {\mathbf F}_q[T]}$ is a random monic polynomial of degree ${n}$, and ${P_i}$ is a random irreducible factor of ${f = P_1 \dots P_r}$ (with each ${i}$ selected with probability ${\hbox{deg} P_i / n}$), then ${\hbox{deg} P_i}$ is approximately uniformly distributed in ${\{1,\dots,n\}}$ when ${q}$ is fixed and ${n}$ is large. • The set of real numbers ${\{ \hbox{deg} P_i / n \}}$ arising from the factorisation ${f = P_1 \dots P_r}$ of a randomly selected polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ converges (in a suitable sense) to the Poisson-Dirichlet process when ${q}$ is fixed and ${n}$ is large. The above list of facts addressed the large ${n}$ limit of the polynomial ring ${{\mathbf F}_q[T]}$, where the order ${q}$ of the field is held fixed, but the degrees of the polynomials go to infinity. This is the limit that is most closely analogous to the integers ${{\bf Z}}$. However, there is another interesting asymptotic limit of polynomial rings to consider, namely the large ${q}$ limit where it is now the degree ${n}$ that is held fixed, but the order ${q}$ of the field goes to infinity. Actually to simplify the exposition we will use the slightly more restrictive limit where the characteristic ${p}$ of the field goes to infinity (again keeping the degree ${n}$ fixed), although all of the results proven below for the large ${p}$ limit turn out to be true as well in the large ${q}$ limit. The large ${q}$ (or large ${p}$) limit is technically a different limit than the large ${n}$ limit, but in practice the asymptotic statistics of the two limits often agree quite closely. For instance, here is the prime number theorem in the large ${q}$ limit: Theorem 1 (Prime number theorem) The probability that a random monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ is irreducible is ${\frac{1}{n}+o(1)}$ in the limit where ${n}$ is fixed and the characteristic ${p}$ goes to infinity. Proof: There are ${q^n}$ monic polynomials ${f \in {\mathbf F}_q[T]}$ of degree ${n}$. If ${f}$ is irreducible, then the ${n}$ zeroes of ${f}$ are distinct and lie in the finite field ${{\mathbf F}_{q^n}}$, but do not lie in any proper subfield of that field. 
Conversely, every element ${\alpha}$ of ${{\mathbf F}_{q^n}}$ that does not lie in a proper subfield is the root of a unique monic polynomial in ${{\mathbf F}_q[T]}$ of degree ${n}$ (the minimal polynomial of ${\alpha}$). Since the union of all the proper subfields of ${{\mathbf F}_{q^n}}$ has size ${o(q^n)}$, the total number of irreducible polynomials of degree ${n}$ is thus ${\frac{q^n - o(q^n)}{n}}$, and the claim follows. $\Box$

Remark 2 The above argument and inclusion-exclusion in fact give the well known exact formula ${\frac{1}{n} \sum_{d|n} \mu(\frac{n}{d}) q^d}$ for the number of irreducible monic polynomials of degree ${n}$.

Now we can give a precise connection between the cycle distribution of a random permutation, and (the large ${p}$ limit of) the irreducible factorisation of a polynomial, giving a (somewhat indirect, but still connected) link between permutation cycle decomposition and integer factorisation:

Theorem 3 The partition ${\{ \hbox{deg}(P_1), \dots, \hbox{deg}(P_r) \}}$ of a random monic polynomial ${f= P_1 \dots P_r\in {\mathbf F}_q[T]}$ of degree ${n}$ converges in distribution to the partition ${\{ |C_1|, \dots, |C_r|\}}$ of a random permutation ${\sigma = C_1 \dots C_r \in S_n}$ of length ${n}$, in the limit where ${n}$ is fixed and the characteristic ${p}$ goes to infinity.

We can quickly prove this theorem as follows. We first need a basic fact:

Lemma 4 (Most polynomials square-free in large ${q}$ limit) A random monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ will be square-free with probability ${1-o(1)}$ when ${n}$ is fixed and ${q}$ (or ${p}$) goes to infinity. In a similar spirit, two randomly selected monic polynomials ${f,g}$ of degree ${n,m}$ will be coprime with probability ${1-o(1)}$ if ${n,m}$ are fixed and ${q}$ or ${p}$ goes to infinity.

Proof: For any polynomial ${g}$ of degree ${m}$, the probability that ${f}$ is divisible by ${g^2}$ is at most ${1/q^{2m}}$. Summing over all polynomials of degree ${1 \leq m \leq n/2}$, and using the union bound, we see that the probability that ${f}$ is not squarefree is at most ${\sum_{1 \leq m \leq n/2} \frac{q^m}{q^{2m}} = o(1)}$, giving the first claim. For the second, observe from the first claim (and the fact that ${fg}$ has only a bounded number of factors) that ${fg}$ is squarefree with probability ${1-o(1)}$, giving the claim. $\Box$

Now we can prove the theorem. Elementary combinatorics tells us that the probability of a random permutation ${\sigma \in S_n}$ consisting of ${c_k}$ cycles of length ${k}$ for ${k=1,\dots,r}$, where ${c_k}$ are nonnegative integers with ${\sum_{k=1}^r k c_k = n}$, is precisely

$\displaystyle \frac{1}{\prod_{k=1}^r c_k! k^{c_k}},$

since there are ${\prod_{k=1}^r c_k! k^{c_k}}$ ways to write a given tuple of cycles ${C_1,\dots,C_r}$ in cycle notation in nondecreasing order of length, and ${n!}$ ways to select the labels for the cycle notation. On the other hand, by Theorem 1 (and using Lemma 4 to isolate the small number of cases involving repeated factors) the number of monic polynomials of degree ${n}$ that are the product of ${c_k}$ irreducible polynomials of degree ${k}$ is

$\displaystyle \frac{1}{\prod_{k=1}^r c_k!} \prod_{k=1}^r ( (\frac{1}{k}+o(1)) q^k )^{c_k} + o( q^n )$

which simplifies to

$\displaystyle \frac{1+o(1)}{\prod_{k=1}^r c_k! k^{c_k}} q^n,$

and the claim follows.
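Theorem 3 is easy to test numerically for small ${n}$. Here is a short sketch of such a test (my own illustration, assuming sympy's polynomial factorisation over ${\mathbf F}_p$): it compares the degree partition of random monic quartics modulo a large prime with the cycle type of random permutations in ${S_4}$.

```python
import random
from collections import Counter
import sympy as sp

x = sp.symbols('x')
n, p, trials = 4, 10007, 1000   # fixed degree, large prime characteristic
random.seed(0)

def poly_partition():
    """Degree partition of a random monic degree-n polynomial over F_p."""
    f = sp.Poly([1] + [random.randrange(p) for _ in range(n)], x, modulus=p)
    parts = []
    for g, mult in f.factor_list()[1]:
        parts += [g.degree()] * mult
    return tuple(sorted(parts))

def perm_partition():
    """Cycle-type partition of a uniformly random permutation of {0, ..., n-1}."""
    perm = list(range(n))
    random.shuffle(perm)
    seen, parts = set(), []
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            parts.append(length)
    return tuple(sorted(parts))

print(Counter(poly_partition() for _ in range(trials)))
print(Counter(perm_partition() for _ in range(trials)))
# The two empirical distributions over partitions of 4 should be close; e.g.
# the partition (4,) should appear with frequency near 1/4 in both.
```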
This was a fairly short calculation, but it still doesn’t quite explain why there is such a link between the cycle decomposition ${\sigma = C_1 \dots C_r}$ of permutations and the factorisation ${f = P_1 \dots P_r}$ of a polynomial. One immediate thought might be to try to link the multiplication structure of permutations in ${S_n}$ with the multiplication structure of polynomials; however, these structures are too dissimilar to set up a convincing analogy. For instance, the multiplication law on polynomials is abelian and non-invertible, whilst the multiplication law on ${S_n}$ is (extremely) non-abelian but invertible. Also, the multiplication of a degree ${n}$ and a degree ${m}$ polynomial is a degree ${n+m}$ polynomial, whereas the group multiplication law on permutations does not take a permutation in ${S_n}$ and a permutation in ${S_m}$ and return a permutation in ${S_{n+m}}$. I recently found (after some discussions with Ben Green) what I feel to be a satisfying conceptual (as opposed to computational) explanation of this link, which I will place below the fold.

To put cycle decomposition of permutations and factorisation of polynomials on an equal footing, we generalise the notion of a permutation ${\sigma \in S_n}$ to the notion of a partial permutation ${\sigma = (\sigma,S)}$ on a fixed (but possibly infinite) domain ${X}$, which consists of a finite non-empty subset ${S}$ of the set ${X}$, together with a bijection ${\sigma: S \rightarrow S}$ on ${S}$; I’ll call ${S}$ the support of the partial permutation. We say that a partial permutation ${\sigma}$ is of size ${n}$ if the support ${S}$ is of cardinality ${n}$, and denote this size as ${|\sigma|}$.

And now we can introduce a multiplication law on partial permutations that is much closer to that of polynomials: if two partial permutations ${\sigma, \sigma'}$ on the same domain ${X}$ have disjoint supports ${S, S'}$, then we can form their disjoint union ${\sigma \uplus \sigma'}$, supported on ${S \cup S'}$, to be the bijection on ${S \cup S'}$ that agrees with ${\sigma}$ on ${S}$ and with ${\sigma'}$ on ${S'}$. Note that this is a commutative and associative operation (where it is defined), and the disjoint union of a partial permutation of size ${n}$ and a partial permutation of size ${m}$ is a partial permutation of size ${n+m}$, so this operation is much closer in behaviour to the multiplication law on polynomials than the group law on ${S_n}$. There is the defect that the disjoint union operation is sometimes undefined (when the two partial permutations have overlapping support); but in the asymptotic regime where the size ${n}$ is fixed and the set ${X}$ is extremely large, this will be very rare (compare with Lemma 4).

Note that a partial permutation is irreducible with respect to disjoint union if and only if it is a cycle on its support, and every partial permutation ${\sigma}$ has a decomposition ${\sigma = C_1 \uplus \dots \uplus C_r}$ into such partial cycles, unique up to permutations. If one then selects some set ${{\mathcal P}}$ of partial cycles on the domain ${X}$ to serve as “generalised primes”, then one can define (in the spirit of Beurling integers) the set ${{\mathcal N}}$ of “generalised integers”, defined as those partial permutations that are the disjoint union ${\sigma = C_1 \uplus \dots \uplus C_r}$ of partial cycles in ${{\mathcal P}}$.
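The definitions above are concrete enough to code directly. Here is a minimal sketch (my own illustration; representing a partial permutation as a Python dict is not from the post):

```python
def disjoint_union(sigma, tau):
    """Disjoint union of two partial permutations, each stored as a dict
    mapping its support bijectively onto itself; undefined if supports overlap."""
    if set(sigma) & set(tau):
        raise ValueError("supports overlap: disjoint union undefined")
    return {**sigma, **tau}

def cycles(sigma):
    """Decompose a partial permutation into its partial cycles."""
    seen, result = set(), []
    for start in sigma:
        if start not in seen:
            cyc, j = {}, start
            while j not in seen:
                seen.add(j)
                cyc[j] = sigma[j]
                j = sigma[j]
            result.append(cyc)
    return result

C1 = {1: 2, 2: 1}   # a partial cycle of size 2 supported on {1, 2}
C2 = {5: 5}         # a fixed point, i.e. a partial cycle of size 1
sigma = disjoint_union(C1, C2)
print(sorted(len(c) for c in cycles(sigma)))   # [1, 2]: the partition of |sigma| = 3
```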
If one lets ${{\mathcal N}_n}$ denote the set of generalised integers of size ${n}$, one can (assuming that this set is non-empty and finite) select a partial permutation ${\sigma}$ uniformly at random from ${{\mathcal N}_n}$, and consider the partition ${\{ |C_1|, \dots, |C_r| \}}$ of ${n}$ arising from the decomposition into generalised primes.

We can now embed both the cycle decomposition for (complete) permutations and the factorisation of polynomials into this common framework. We begin with the cycle decomposition for permutations. Let ${q}$ be a large natural number, and set the domain ${X}$ to be the set ${\{1,\dots,q\}}$. We define ${{\mathcal P}_n}$ to be the set of all partial cycles on ${X}$ of size ${n}$, and let ${{\mathcal P}}$ be the union of the ${{\mathcal P}_n}$, that is to say the set of all partial cycles on ${X}$ (of arbitrary size). Then ${{\mathcal N}}$ is of course the set of all partial permutations on ${X}$, and ${{\mathcal N}_n}$ is the set of all partial permutations on ${X}$ of size ${n}$. To generate an element of ${{\mathcal N}_n}$ uniformly at random for ${1 \leq n \leq q}$, one simply has to randomly select an ${n}$-element subset ${S}$ of ${X}$, and then form a random permutation of the ${n}$ elements of ${S}$. From this, it is obvious that the partition ${\{ |C_1|, \dots, |C_r|\}}$ of ${n}$ coming from a randomly chosen element of ${{\mathcal N}_n}$ has exactly the same distribution as the partition ${\{ |C_1|, \dots, |C_r|\}}$ of ${n}$ coming from a randomly chosen element of ${S_n}$, as long as ${q}$ is at least as large as ${n}$ of course.

Now we embed the factorisation of polynomials into the same framework. The domain ${X}$ is now taken to be the algebraic closure ${\overline{{\mathbf F}_q}}$ of ${{\mathbf F}_q}$, or equivalently the direct limit of the finite fields ${{\mathbf F}_{q^n}}$ (with the obvious inclusion maps). This domain has a fundamental bijection on it, the Frobenius map ${\hbox{Frob}: x \mapsto x^q}$, which is a field automorphism that has ${{\mathbf F}_q}$ as its fixed points. We define ${{\mathcal N}}$ to be the set of partial permutations on ${X}$ formed by restricting the Frobenius map ${\hbox{Frob}}$ to a finite Frobenius-invariant set. It is easy to see that the irreducible Frobenius-invariant sets (that is to say, the orbits of ${\hbox{Frob}}$) arise from taking an element ${x}$ of ${X}$ together with all of its Galois conjugates, and so if we define ${{\mathcal P}}$ to be the set of restrictions of Frobenius to a single such Galois orbit, then ${{\mathcal N}}$ is precisely the set of generalised integers associated to the generalised primes ${{\mathcal P}}$ in the sense above.

Next, observe that every squarefree monic polynomial ${f \in {\mathbf F}_q[T]}$ of degree ${n}$ generates a generalised integer of size ${n}$, namely the restriction of the Frobenius map to the ${n}$ roots of ${f}$ (which are necessarily distinct, since ${f}$ is squarefree). This generalised integer will be a generalised prime precisely when ${f}$ is irreducible. Conversely, every generalised integer of size ${n}$ generates a squarefree monic polynomial in ${{\mathbf F}_q[T]}$, namely the product of ${T-x}$ as ${x}$ ranges over the support of the integer. This product is clearly monic, squarefree, and Frobenius-invariant, thus it lies in ${{\mathbf F}_q[T]}$. Thus we may identify ${{\mathcal N}_n}$ with the monic squarefree polynomials in ${{\mathbf F}_q[T]}$ of degree ${n}$.
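A toy example of this dictionary (my own, with ${q}$ small purely for illustration, whereas the theorems above are large-${q}$ statements): take ${q = 2}$ and ${f = T^3 + 1 = (T+1)(T^2+T+1)}$ over ${{\mathbf F}_2}$, which is squarefree. The root ${1}$ is fixed by ${\hbox{Frob}: x \mapsto x^2}$, giving a Frobenius orbit of size ${1}$ that matches the linear factor ${T+1}$; the two primitive cube roots of unity ${\omega, \omega^2 \in {\mathbf F}_4}$ are swapped by ${\hbox{Frob}}$, giving a single orbit of size ${2}$ that matches the quadratic factor ${T^2+T+1}$. Either way one reads off the same partition ${\{1,2\}}$ of ${3}$: from the factor degrees, or from the cycle structure of Frobenius on the roots.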
With this identification, the (now partially defined) multiplication operation on monic squarefree polynomials coincides exactly with the disjoint union operation on partial permutations. As such, we see that the partition ${\{ \hbox{deg} P_1, \dots, \hbox{deg} P_r \}}$ associated to a randomly chosen squarefree monic polynomial ${f = P_1\dots P_r}$ of degree ${n}$ has exactly the same distribution as the partition ${\{ |C_1|, \dots, |C_r| \}}$ associated to a randomly chosen generalised integer ${\sigma = C_1 \uplus \dots \uplus C_r}$ of size ${n}$. By Lemma 4, one can drop the condition of being squarefree while only distorting the distribution by ${o(1)}$.

Now that we have placed cycle decomposition of permutations and factorisation of polynomials into the same framework, we can explain Theorem 3 as a consequence of the following universality result for generalised prime factorisations:

Theorem 5 (Universality) Let ${{\mathcal P}, {\mathcal N}}$ be collections of generalised primes and integers respectively on a domain ${X}$, all of which depend on some asymptotic parameter ${q}$ that goes to infinity. Suppose that for any fixed ${n,m}$ and ${q}$ going to infinity, the sets ${{\mathcal N}_n, {\mathcal N}_m, {\mathcal N}_{n+m}}$ are non-empty with cardinalities obeying the asymptotic

$\displaystyle |{\mathcal N}_{n+m}| = (1+o(1)) |{\mathcal N}_n| |{\mathcal N}_m|. \ \ \ \ \ (1)$

Also, suppose that only ${o( |{\mathcal N}_n| |{\mathcal N}_m|)}$ of the pairs ${(\sigma,\sigma') \in {\mathcal N}_n \times {\mathcal N}_m}$ have overlapping supports (informally, this means that ${\sigma \uplus \sigma'}$ is defined with probability ${1-o(1)}$). Then, for fixed ${n}$ and ${q}$ going to infinity, the distribution of the partition ${\{ |C_1|, \dots, |C_r|\}}$ of a random generalised integer from ${{\mathcal N}_n}$ is universal in the limit; that is to say, the limiting distribution does not depend on the precise choice of ${X, {\mathcal P}, {\mathcal N}}$.

Note that when ${{\mathcal N}_n}$ consists of all the partial permutations of size ${n}$ on ${\{1,\dots,q\}}$ we have

$\displaystyle |{\mathcal N}_n| = \binom{q}{n} n! = (1+o(1)) q^n$

while when ${{\mathcal N}_n}$ consists of the monic squarefree polynomials of degree ${n}$ in ${{\mathbf F}_q[T]}$ then from Lemma 4 we also have

$\displaystyle |{\mathcal N}_n| = (1+o(1)) q^n$

so in both cases the first hypothesis (1) is satisfied. The second hypothesis is easy to verify in the former case and follows from Lemma 4 in the latter case. Thus, Theorem 5 gives Theorem 3 as a corollary.

Remark 6 An alternate way to interpret Theorem 3 is as an equidistribution theorem: if one randomly labels the ${n}$ zeroes of a random degree ${n}$ polynomial as ${1,\dots,n}$, then the resulting permutation on ${1,\dots,n}$ induced by the Frobenius map is asymptotically equidistributed in the large ${q}$ (or large ${p}$) limit. This is the simplest case of a much more general (and deeper) result known as the Deligne equidistribution theorem, discussed for instance in this survey of Kowalski. See also this paper of Church, Ellenberg, and Farb concerning more precise asymptotics for the number of squarefree polynomials with a given cycle decomposition of Frobenius.

It remains to prove Theorem 5. The key is to establish an abstract form of the prime number theorem in this setting.
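Before turning to the proof, hypothesis (1) can be checked by brute force in the polynomial model (my sketch, in pure Python; in fact for ${n \geq 2}$ the number of squarefree monic polynomials of degree ${n}$ over ${{\mathbf F}_q}$ is known to be exactly ${q^n - q^{n-1}}$, which makes (1) immediate). A monic polynomial ${f}$ is squarefree if and only if ${\gcd(f, f')}$ is constant:

```python
import itertools

def poly_mod(f, g, q):
    """Remainder of f modulo g over F_q (q prime); coefficients listed lowest degree first."""
    f = f[:]
    inv_lead = pow(g[-1], q - 2, q)                  # inverse of g's leading coeff (Fermat)
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:
            f.pop()
            continue
        c, shift = f[-1] * inv_lead % q, len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % q
        f.pop()
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_gcd(f, g, q):
    """Euclidean algorithm; the zero polynomial is the empty list."""
    while g:
        f, g = g, poly_mod(f, g, q)
    return f

def is_squarefree(f, q):
    deriv = [i * c % q for i, c in enumerate(f)][1:]  # formal derivative
    while deriv and deriv[-1] == 0:
        deriv.pop()
    return len(poly_gcd(f, deriv, q)) == 1            # gcd is a nonzero constant

q = 7
counts = {n: sum(is_squarefree(list(tail) + [1], q)   # monic: leading coefficient 1
                 for tail in itertools.product(range(q), repeat=n))
          for n in (2, 4)}
print(counts[2], counts[4], counts[4] / counts[2]**2)  # 42 2058 1.1666..., tending to 1 as q grows
```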
Theorem 7 (Prime number theorem) Let the hypotheses be as in Theorem 5. Then for fixed ${n}$ and ${q \rightarrow \infty}$, the density of ${{\mathcal P}_n}$ in ${{\mathcal N}_n}$ is ${\frac{1}{n}+o(1)}$. In particular, the asymptotic density ${1/n}$ is universal (it does not depend on the choice of ${X, {\mathcal P}_n, {\mathcal N}_n}$).

Proof: Let ${a_n := n |{\mathcal P}_n| / |{\mathcal N}_n|}$ (this may only be defined for ${q}$ sufficiently large depending on ${n}$); our task is to show that ${a_n = 1+o(1)}$ for each fixed ${n}$. Consider the set of pairs ${(\sigma, x)}$ where ${\sigma}$ is an element of ${{\mathcal N}_n}$ and ${x}$ is an element of the support of ${\sigma}$. Clearly, the number of such pairs is ${n |{\mathcal N}_n|}$. On the other hand, given such a pair ${(\sigma,x)}$, there is a unique factorisation ${\sigma = C \uplus \sigma'}$, where ${C}$ is the generalised prime in the decomposition of ${\sigma}$ that contains ${x}$ in its support, and ${\sigma'}$ is formed from the remaining components of ${\sigma}$. Here ${C}$ has some size ${1 \leq m \leq n}$, ${\sigma'}$ has the complementary size ${n-m}$ and has disjoint support from ${C}$, and ${x}$ has to be one of the ${m}$ elements of the support of ${C}$. Conversely, if one selects ${1 \leq m \leq n}$, then selects a generalised prime ${C \in {\mathcal P}_m}$, a generalised integer ${\sigma' \in {\mathcal N}_{n-m}}$ with disjoint support from ${C}$, and an element ${x}$ in the support of ${C}$, we recover such a pair ${(\sigma,x)}$. Using the hypotheses of Theorem 5, we thus obtain the double counting identity

$\displaystyle n |{\mathcal N}_n| = \sum_{m=1}^n \left( m |{\mathcal P}_m| |{\mathcal N}_{n-m}| - o( |{\mathcal N}_m| |{\mathcal N}_{n-m}| ) \right)$

$\displaystyle = (\sum_{m=1}^n a_m + o(1)) |{\mathcal N}_n|$

and thus ${\sum_{m=1}^n a_m = n+o(1)}$ for every fixed ${n}$, and so ${a_n = 1+o(1)}$ for fixed ${n}$ as claimed. $\Box$

Remark 8 One could cast this argument in a language more reminiscent of analytic number theory by forming generating series of ${{\mathcal N}_n}$ and ${{\mathcal P}_n}$ and treating these series as analogous to a zeta function and its log-derivative (in close analogy to what is done with Beurling primes), but we will not do so here.

We can now finish the proof of Theorem 5. To show asymptotic universality of the partition ${\{ |C_1|,\dots,|C_r|\}}$ of a random generalised integer ${\sigma \in {\mathcal N}_n}$, we may assume inductively that asymptotic universality has already been shown for all smaller choices of ${n}$. To generate a uniformly random generalised integer ${\sigma}$ of size ${n}$, we can repeat the process used to prove Theorem 7. It of course suffices to generate a uniformly random pair ${(\sigma,x)}$, where ${\sigma}$ is a generalised integer of size ${n}$ and ${x}$ is an element of the support of ${\sigma}$, since on dropping ${x}$ we would obtain a uniformly drawn ${\sigma}$. To obtain the pair ${(\sigma,x)}$, we first select ${m \in \{1,\dots,n\}}$ uniformly at random, then select a generalised prime ${C}$ randomly from ${{\mathcal P}_m}$ and a generalised integer ${\sigma'}$ randomly from ${{\mathcal N}_{n-m}}$ (independently of ${C}$ once ${m}$ is fixed). Finally, we select ${x}$ uniformly at random from the support of ${C}$, and set ${\sigma := C \uplus \sigma'}$. The pair ${(\sigma,x)}$ is certainly a pair of the required form, but this random variable is not quite uniformly distributed amongst all such pairs.
However, by repeating the calculations in the proof of Theorem 7 (and in particular relying on the conclusion ${a_m=1+o(1)}$), we see that this distribution is within ${o(1)}$ of the uniform distribution in total variation norm. Thus, the distribution of the cycle partition ${\{ |C_1|,\dots,|C_r|\}}$ of a uniformly chosen ${\sigma}$ lies within ${o(1)}$ in total variation of the distribution of the cycle partition of a ${\sigma = C \uplus \sigma'}$ chosen by the above recipe. However, the cycle partition of ${\sigma = C \uplus \sigma'}$ is simply the union (with multiplicity) of ${\{m\}}$ with the cycle partition of ${\sigma'}$. As the latter was already assumed to be asymptotically universal, we conclude that the former is also, as required.

Remark 9 The above analysis helps explain why one could not easily link permutation cycle decomposition with integer factorisation – to produce permutations here with the right asymptotics we needed both the large ${q}$ limit and the Frobenius map, both of which are available in the function field setting but not in the number field setting.
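As a closing numerical sanity check (mine, not from the post): combining the exact count from Remark 2 with ${|{\mathcal N}_n| = (1+o(1)) q^n}$, one can watch the quantity ${a_n = n |{\mathcal P}_n| / |{\mathcal N}_n|}$ in the polynomial model converge to ${1}$ (equivalently, the density of ${{\mathcal P}_n}$ in ${{\mathcal N}_n}$ converge to ${1/n}$) as ${q}$ grows, in accordance with Theorem 7. Here ${q^n}$ stands in for ${|{\mathcal N}_n|}$:

```python
def mobius(n):
    """Moebius function by trial division."""
    out, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            out = -out
        d += 1
    return -out if n > 1 else out

def irreducible_count(q, n):
    """Remark 2: number of monic irreducible polynomials of degree n over F_q."""
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

n = 5
for q in (2, 101, 10007):
    print(q, n * irreducible_count(q, n) / q**n)   # a_n: tends to 1, i.e. density ~ 1/n
```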
https://brilliant.org/problems/new-algebraic-theories/
# New algebraic theories?

Number Theory, Level 3

I. $$\large x+y = \sqrt{z^2+2xy}$$

II. $$\large x+y = \sqrt[3]{z^3+3xy(x+y)}$$

III. $$\large \left(\dfrac{x}{z}\right)^2+\left(\dfrac{y}{z}\right)^2=1$$

IV. $$\large \left(\dfrac{x}{z}\right)^4+\left(\dfrac{y}{z}\right)^4=1$$

V. $$\large \left(\dfrac{x}{z}\right)^4+\left(\dfrac{y}{z}\right)^4 = \dfrac{z^4-2x^{2}y^{2}}{z^4}$$

Which equation(s) above has/have infinitely many positive integer solutions for $$x$$, $$y$$ and $$z$$?
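A solution sketch (not part of the original problem page; it uses only squaring or cubing of both sides, which is reversible here since all quantities are positive, plus Fermat's Last Theorem for exponents 3 and 4):

- I. Squaring gives $$(x+y)^2 = z^2+2xy$$, i.e. $$x^2+y^2=z^2$$: Pythagorean triples, so infinitely many solutions.
- II. Cubing gives $$(x+y)^3 = z^3+3xy(x+y)$$, i.e. $$x^3+y^3=z^3$$: no positive solutions (Fermat, exponent 3).
- III. Clearing denominators gives $$x^2+y^2=z^2$$ again: infinitely many.
- IV. Clearing denominators gives $$x^4+y^4=z^4$$: no positive solutions (Fermat, exponent 4).
- V. Clearing denominators gives $$x^4+2x^2y^2+y^4=z^4$$, i.e. $$(x^2+y^2)^2=(z^2)^2$$, i.e. $$x^2+y^2=z^2$$: infinitely many.

So I, III and V are the ones with infinitely many positive integer solutions.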
https://lavelle.chem.ucla.edu/forum/viewtopic.php?t=17542
## HClO3 as an acid

Helena Vervaet 1N
Posts: 26
Joined: Wed Sep 21, 2016 2:59 pm

### HClO3 as an acid

I was just wondering whether or not we consider HClO3 to be a strong acid. In the list of strong acids on page 163 of the course reader, it isn't listed. However, in some of the review sessions I believe I heard some of the TAs saying that we are considering it a strong acid for this class? Just making sure!

BlakeMillar4J
Posts: 11
Joined: Wed Sep 21, 2016 2:56 pm

### Re: HClO3 as an acid

If Ka was included in the problem, I'm pretty sure it would be considered a weak acid. I do recall Lavelle saying that we only need to know the 6 or 7 strong acids from the list as being strong, and that the other acids not on the list are weak.
http://physics.stackexchange.com/questions/193739/is-there-a-relation-between-g-and-the-age-of-the-universe
# Is there a relation between $G$ and the age of the universe?

Here is a recording of Paul Dirac, talking about dimensionless constants and their significance. He gives some examples of such constants (the ratio of the electron mass to the proton mass, the fine-structure constant) and then touches upon the relative strength of the electromagnetic force compared with that of gravity.

He says the ratio of the electromagnetic force to that of gravity is $10^{39}$. The age of the universe (estimated at the time to be $18$ billion years, a figure since revised), when expressed in atomic units of time, is also $10^{39}$. He believes this is more than a coincidence, and hence he developed a theory in which $G$ and the age of the universe are related, with $G$ decreasing over time, so it is not a constant.

I also read in The Feynman Lectures about the same subject (the relation between $G$ and the age of the universe), but the only difference was that the ratio was about $10^{42}$, not $10^{39}$.

Has any progress been made in working out the relation between this constant and the age of the universe? Or has it been discredited or falsified? These two questions are related to mine, but I think they're different. (link one, link two).

- Good question. I don't know the answer myself, but I am aware that some so-called constants aren't actually constant, and I can understand why Dirac thought the force of gravity could be reducing in an expanding universe - think rubber sheets and shallower slopes. But IMHO complications arise because the expansion is not uniform, as per the raisin-cake analogy. Space expands between the galaxies, but not within. – John Duffield Jul 12 '15 at 20:29
- The biggest problem with the argument is that gravity is not actually a force, so the ratio of the electromagnetic force and the acceleration of gravity is not truly dimensionless, even though an endless amount of theoretical nonsense has been the result of starting with that false premise. – CuriousOne Jul 12 '15 at 21:48
- @CuriousOne What do you mean it's not a force? – Omar Nagib Jul 12 '15 at 22:07
- a 2014 paper focused on the fine-structure constant: Planck intermediate results. XXIV. Constraints on variation of fundamental constants – igael Jul 12 '15 at 22:13
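For concreteness (these numbers are my addition, not part of the question; they use standard values of the constants): for an electron-proton pair, the dimensionless ratio Dirac refers to is

$$\frac{F_\text{Coulomb}}{F_\text{gravity}} = \frac{e^2/4\pi\varepsilon_0}{G\,m_e m_p} \approx \frac{2.31\times10^{-28}\ \text{N m}^2}{1.02\times10^{-67}\ \text{N m}^2} \approx 2.3\times10^{39},$$

independent of the separation, since both forces scale as $1/r^2$. Feynman's $10^{42}$ figure comes from comparing the forces between two electrons instead: $e^2/(4\pi\varepsilon_0\, G m_e^2) \approx 4.2\times10^{42}$.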
https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=Iterator/Trees/Transpose
Transpose - Maple Help

Iterator[Trees]
Transpose
compute the transpose of a tree

Calling Sequence
Transpose(tree, format=fmt)

Parameters
tree - seq(rtable)
fmt - (optional) A, C, D, E, LR, P, S, Z

Options
• format = A, C, D, E, LR, P, S, Z
Specifies the format of the tree. The default is LR. See Iterator[Trees] for a description of the formats.

Description
• The Transpose command computes the transpose of a tree. The transpose of a binary tree is formed by interchanging left and right links. The transpose of a tree of another format is computed by converting it to a binary tree, interchanging, then converting back to the specified format.
• The tree parameter is the tree.

Examples
> with(Iterator:-Trees):

Generate a random tree with four internal nodes in LR format.
> L, R := Random(4, format = LR);
        L, R := [2 3 0 0], [4 0 0 0]        (1)

Compute its transpose.
> Transpose(L, R);
        [2 0 0 0], [3 0 4 0]                (2)

References
Knuth, Donald Ervin. The Art of Computer Programming, volume 4, fascicle 4: generating all trees, sec. 7.2.1.6, exercise 12, p. 33.

Compatibility
• The Iterator[Trees][Transpose] command was introduced in Maple 2016.
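For readers without Maple, here is a rough Python sketch of the underlying operation (mine, not Maple's internal representation; it shows the textbook "interchange left and right links" step on a linked binary tree):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def transpose(t: Optional[Node]) -> Optional[Node]:
    """Return the mirror image of a binary tree: swap left/right links everywhere."""
    if t is None:
        return None
    return Node(t.label, transpose(t.right), transpose(t.left))

# tiny check: a root with a single left child becomes a root with a single right child
t = Node(1, left=Node(2))
m = transpose(t)
assert m.left is None and m.right.label == 2
```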
https://socratic.org/questions/what-is-the-pattern-in-the-sequence-2-5-10-17-28-41-58-77-100
# What is the pattern in the sequence 2, 5, 10, 17, 28, 41, 58, 77, 100?

Jul 31, 2015

The difference between each term and the next is the next odd prime number.

#### Explanation:

Let ${p}_{0} = 2$, ${p}_{1} = 3$, ${p}_{2} = 5$, ${p}_{3} = 7$, ... be the prime numbers. Then ${a}_{0} = 2$ and ${a}_{i + 1} = {a}_{i} + {p}_{i + 1}$.
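A quick way to confirm the pattern (my sketch, not part of the original answer; it uses sympy's `prime(k)`, which returns the $k$-th prime with `prime(1) = 2`):

```python
from sympy import prime

seq = [2]
for i in range(1, 9):                    # in the answer's indexing: p_1 = 3, p_2 = 5, ...
    seq.append(seq[-1] + prime(i + 1))   # prime(2) = 3, prime(3) = 5, ...
print(seq)  # [2, 5, 10, 17, 28, 41, 58, 77, 100]
```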
http://math.stackexchange.com/users/4743/jon?tab=activity&sort=all&page=4
Jon

Reputation: 1,145

- Aug 10, comment on "Is there a way to pick a $k$ such that $p_0p_1 + ik$ is always a product of two primes?": $p_0p_1+(p_0p_1)k = p_0p_1 (1 + k)$, no?
- Jun 16, awarded Enthusiast
- Jun 11, accepted "Combinatorial proof that binomial coefficients are given by alternating sums of squares?"
- Jun 11, comment on "Combinatorial proof that binomial coefficients are given by alternating sums of squares?": Very nice! (characters...)
- Jun 11, asked "Combinatorial proof that binomial coefficients are given by alternating sums of squares?"
- Feb 11, comment on "If a product of relatively prime integers is an $n$th power, then each is an $n$th power": Have you considered the prime factorizations of $a$, $b$, and $c$?
- Feb 9, revised "Can someone explain how this question is reduced using basic postulates": add info on p6b
- Feb 9, answered "Can someone explain how this question is reduced using basic postulates"
- Feb 8, comment on "No prime number between number and square of number": Bertrand's postulate guarantees the existence of a prime between $n$ and $2n$ for all integers $n > 1$. Therefore there are no non-trivial examples of the phenomenon you describe.
- Jan 17, awarded Nice Answer
- Jan 16, awarded Commentator
- Jan 16, comment on "Why do we need to prove $e^{u+v} = e^ue^v$?": @bobobobo In that case, we have no a priori information about the function $e^x$, and we've got to establish its basic properties from the definition provided.
- Jan 16, answered "Why do we need to prove $e^{u+v} = e^ue^v$?"
- Jan 16, comment on "Why do we need to prove $e^{u+v} = e^ue^v$?": It's also quite common to define $\ln(a)$ as the integral $\int_1^a \frac{1}{x} \, dx$, which might be the case here.
- Jan 16, comment on "Why do we need to prove $e^{u+v} = e^ue^v$?": How is the function $e^x$ defined in the text?
- Jan 10, awarded Scholar
- Jan 10, accepted "Topic for a high school-level math elective?"
- Jan 10, comment on "Topic for a high school-level math elective?": Great info. Thanks!
- Jan 10, awarded Nice Question
- Jan 8, comment on "Topology of the power set": @t.spero: Xiaochuan is using the product topology on $2^{[0,1]}$. (The reference to Tychonoff's theorem is a clue!) More information is available on wikipedia. In concrete terms, if your base set is $S$, open sets on $\mathcal{P}(S)$ are generated by the sets $\mathcal{U}(F, G)$ for finite sets $F, G \subset S$, where $\mathcal{U}(F,G)$ is defined to be $\{U \subset S : F \subset U \text{ and } G \cap U = \emptyset\}$. (I believe I have that right, but I am also tired, so no guarantees.)
https://ask.libreoffice.org/t/delete-lines-matching-regex-programmatically/66285
# Delete lines matching regex programmatically

I'm trying to delete whole lines from a Writer document which match a pattern like '%C%c…%c%C'. I'm using a construct like this:

```
xTextDocument = (XTextDocument)xComponent;
XReplaceable xReplaceable = (XReplaceable)xTextDocument;
XReplaceDescriptor xReplaceDescriptor = xReplaceable.createReplaceDescriptor();
{
    XPropertySet replaceProps = (XPropertySet)xReplaceDescriptor;
    string cStringRegex = @"%C[\s\S]*?%c[\s\S]*?%c[\s\S]*?%C$";
    replaceProps.setPropertyValue("SearchRegularExpression", new uno.Any(true));
    xReplaceDescriptor.setSearchString(cStringRegex);
    xReplaceDescriptor.setReplaceString(string.Empty);
    xReplaceable.replaceAll(xReplaceDescriptor);
}
```

I have the following issues:

1. The code above leaves blank lines behind, and I don't want to remove any pre-existing blank lines.
2. This pattern does match multiline strings from the LibreOffice Writer UI, but doesn't seem to work programmatically.

How do I get it to solve the above conditions? [PS: I just read about balancing groups in C# Regex, but does LibreOffice have support for this?]

Wouldn't your regexp be written in a simpler way by replacing `[\s\S]` with `.`?

You don't delete lines because lines do not exist as primary objects. They are only a result of distributing text onto the sheet. Line wrapping creates the visual lines.

`$` in a regexp is a position located just before a paragraph marker. It is not the paragraph marker itself. This means the paragraph marker is not captured and can't be replaced. If your replacement results in an empty paragraph, this empty para is left behind and nothing distinguishes it from an intentional empty paragraph. But note that empty paragraphs are faulty (they are direct formatting for vertical space). Vertical space should be specified in the paragraph style attributes. When this is done properly, the document contains no empty paragraphs, and the empty paragraphs left by the replacement can then be eliminated by searching for `^$` and replacing with nothing.

I wrote `[\s\S]` because it would also capture `'\n'`, `'\r'`, etc. characters, I was told, which `'.'` wouldn't?

I understood about 'vertical space', so I need to reformat my document to use a `'\v'` style character? Also, this is very strange: `xTextDocument.getText().getString()` returns `"\r\n"` line endings.

`.` stands for "any character". It thus captures also `\n` and `\r`. But these characters never appear in a well-behaved Writer document anyway, because line ends are not encoded in the document but are dynamically generated as the document is displayed. Paragraph markers are themselves "off-text" information internally managed by Writer. Only paragraph content text is visible. Similarly, vertical spacing is off-text data associated with a paragraph.

Don't add `\v` characters lest you create a real mess making your document un-formattable. Vertical spacing is one of the attributes of paragraph styles. You can tune it in the `Indents & Spacing` tab of the style, e.g. Text Body.

I have never used `getString`, but if it returns `\r\n`, then you likely typed `Shift`+`Enter` instead of `Enter`, which created line breaks instead of paragraph ends.
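Not from the thread, but the same two-pass idea sketched in Python (PyUNO) instead of C#. The UNO calls (`createReplaceDescriptor`, `setSearchString`, `setPropertyValue("SearchRegularExpression", ...)`, `replaceAll`) are the ones already used above; `doc` is assumed to be an `XTextDocument` you already hold, e.g. `XSCRIPTCONTEXT.getDocument()` inside a document macro. Two caveats: the second pass removes every empty paragraph, pre-existing ones included (which is why the advice above is to encode vertical space in paragraph styles first), and I have only seen the "`^$` removes empty paragraphs" behaviour documented for the Find & Replace dialog, so treat its behaviour through `replaceAll` as an assumption to test.

```python
def delete_matching_paragraphs(doc, pattern):
    """Pass 1: blank out the matched spans; pass 2: drop the now-empty paragraphs."""
    for regex in (pattern, r"^$"):          # second pass targets empty paragraphs
        desc = doc.createReplaceDescriptor()
        desc.setPropertyValue("SearchRegularExpression", True)
        desc.setSearchString(regex)
        desc.setReplaceString("")
        doc.replaceAll(desc)

# e.g., inside a Writer macro:
# delete_matching_paragraphs(XSCRIPTCONTEXT.getDocument(), "%C.*?%c.*?%c.*?%C$")
```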
https://math.stackexchange.com/questions/692839/is-there-a-pigeon-hole-principle-proof
Is there a Pigeon hole principle proof

Let $a_i$, $1 \leq i \leq 5$ denote five positive real numbers such that $\sum_{i =1}^{5}a_i = 100$. Show that there exists a pair $a_i, a_j$, $i \neq j$, such that $|a_i-a_j|\leq 10$. Is there a proof using the pigeonhole principle? I think I have a proof, but it does not involve the pigeonhole principle. My proof is:

Suppose there exists no such pair $a_i,a_j$. Assume that $a_1 < a_2 < a_3 < a_4 < a_5$; then $a_1 \in (0,20]$ (as the smallest of the five, $5a_1 \leq 100$). As $a_i > a_{i-1} +10$ for all $2 \leq i \leq 5$, we have $100 = \sum_{i=1}^{5}a_i > 5a_1 +100 > 100$. Hence by contradiction the claim cannot be true.

• This is a question in the book "walk through combinatorics" in the chapter Pigeon hole principle. Feb 27 '14 at 16:22
• Your solution doesn't invoke the pigeonhole principle explicitly, but implicitly. I.e., when you say that $a_1 \in [0, 20]$. Feb 27 '14 at 17:48

Let $a_1<a_2<a_3<a_4<a_5$, then $$100=(a_5-a_4)+2(a_4-a_3)+3(a_3-a_2)+4(a_2-a_1)+5a_1$$ Apply the pigeonhole principle to the fifteen summands: one copy of $(a_5-a_4)$, 2 copies of $(a_4-a_3)$, 3 copies of $(a_3-a_2)$, 4 copies of $(a_2-a_1)$, and 5 copies of $a_1$. If every gap $a_{i+1}-a_i$ exceeded $10$, the ten gap summands alone would total more than $100$, which is impossible since all fifteen summands are positive and sum to exactly $100$. Hence some consecutive gap is at most $10$.
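Not part of the thread, but a quick numerical sanity check (a sketch; it samples uniformly from the simplex via stick-breaking): among random positive $5$-tuples summing to $100$, the smallest gap between consecutive sorted values should always come out below $10$, consistent with the argument above.

```python
import random

worst = 0.0
for _ in range(100_000):
    cuts = sorted(random.uniform(0, 100) for _ in range(4))
    pts = [0.0] + cuts + [100.0]
    a = sorted(pts[i + 1] - pts[i] for i in range(5))   # five positive reals summing to 100
    min_gap = min(a[i + 1] - a[i] for i in range(4))
    worst = max(worst, min_gap)
print(f"largest minimal gap observed: {worst:.3f}")      # always strictly below 10
```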
https://socratic.org/questions/how-do-you-find-the-asymptotes-for-y-x-3-2x-8
# How do you find the asymptotes for y=(x+3)/(2x-8)?

Dec 23, 2015

Explanation is given below

#### Explanation:

$y = \frac{x + 3}{2 x - 8}$

For this problem, vertical asymptotes are found by equating the denominator to zero and solving for $x$:

$2 x - 8 = 0$
$2 x = 8$
$x = \frac{8}{2}$
$x = 4$ is the equation of the vertical asymptote.

For the horizontal asymptote, compare the degrees of the numerator and denominator. If the degree of the numerator is the same as the degree of the denominator, the horizontal asymptote is obtained by dividing the lead coefficients of the numerator and denominator.

For example, if $y = \frac{a x + b}{c x + d}$, the vertical asymptote is found by solving for $x$ from $c x + d = 0$, and the horizontal asymptote is $y = \frac{a}{c}$, as both numerator and denominator are of degree 1 with lead coefficients $a$ and $c$ respectively.

For our problem $y = \frac{x + 3}{2 x - 8}$, the horizontal asymptote is $y = \frac{1}{2}$.

Note: If the degree of the numerator is greater than the degree of the denominator, then there is no horizontal asymptote. If the degree of the denominator is greater than the degree of the numerator, then $y = 0$ is the horizontal asymptote.
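A quick sympy confirmation of both asymptotes (my addition, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
y = (x + 3) / (2*x - 8)

print(sp.solve(sp.denom(y), x))   # [4]  -> vertical asymptote x = 4
print(sp.limit(y, x, sp.oo))      # 1/2  -> horizontal asymptote y = 1/2
```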
https://physics.stackexchange.com/questions/639937/gondolo-gelmini-change-of-variables/651463#651463
# Gondolo-Gelmini Change of Variables

In the article Cosmic abundances of stable particles: Improved analysis, P. Gondolo and G. Gelmini, Nucl. Phys. B 360 (1991), p. 145-179, they convert

$$\mathrm{d}^3p_1\,\mathrm{d}^3p_2 = 8\pi^2 p_1 E_1\, p_2 E_2\,\mathrm{d}E_1\,\mathrm{d}E_2\,\mathrm{d}\cos{\theta} \quad \text{(eq. 3.2)}$$

into

$$\mathrm{d}^3p_1\,\mathrm{d}^3p_2 = 2\pi^2 E_1 E_2\,\mathrm{d}E_+\,\mathrm{d}E_-\,\mathrm{d}s \quad \text{(eq. 3.4)}$$

with the following change of variables:

$$E_+=E_1+E_2, \quad E_-=E_1-E_2, \quad s=2m^2+2E_1E_2-2p_1p_2\cos{\theta}. \quad \text{(eq. 3.3)}$$

When I try to derive the second expression for $\mathrm{d}^3p_1\,\mathrm{d}^3p_2$ using the new variables $E_+,\ E_-,\ s$, I always get second-order differentials that in theory should vanish. I don't know how one arrives at $\mathrm{d}^3p_1\,\mathrm{d}^3p_2 = 2\pi^2 E_1E_2\,\mathrm{d}E_+\,\mathrm{d}E_-\,\mathrm{d}s$.

They also define new limits of integration due to this change of variables: $\{ E_1>m,\ E_2>m, \ |\cos{\theta}|\leq 1 \}$ changes to $\{ s\geq 4m^2,\ E_+\geq \sqrt{s}, \ |E_-|\leq\sqrt{1-4m^2/s}\sqrt{E_+^2-s} \}$ (eq. 3.5). I get the first two, but I don't know how to compute the bound on $E_-$. The limit cases give some idea of how the expression should look, but this is not enough to pin down the exact expression:

$$\mathrm{If} \quad E_+^2=s \implies E_-=0$$

$$\mathrm{If} \quad s=4m^2 \implies E_-=0$$

$$\mathrm{Then (?):} \quad |E_-| \leq \sqrt{1-\frac{4m^2}{s}}\sqrt{E_+^2-s}$$

Assume some general $A \bar{A}$ states where $\theta = \theta_{A \bar{A}}$. The bound on $E_-$ is given by $|\cos \theta_{A\bar{A}}| \leq 1$:

\begin{aligned} \cos \theta_{A \bar{A}} = & \frac{2m_A^2 + 2E_{A}E_{\bar{A}} - s}{2 |\vec{p}_A| |\vec{p}_{\bar{A}}|} \\ = & \frac{4m_A^2 + E_+^2 - E_-^2 - 2s}{4 |\vec{p}_A| |\vec{p}_{\bar{A}}|} \\ = & \frac{4m_A^2 + E_+^2 - E_-^2 - 2s}{4\sqrt{(E_A^2 - m_A^2)(E_{\bar{A}}^2 - m_A^2)}} \\ = & \frac{4m_A^2 + E_+^2 - E_-^2 - 2s}{4\sqrt{E_A^2 E_{\bar{A}}^2 - m_A^2(E_A^2 + E_{\bar{A}}^2) + m_A^4}} \\ = & \frac{4m_A^2 + E_+^2 - E_-^2 - 2s}{4\sqrt{\left(\frac{E_+^2 - E_-^2}{4} \right)^2 - m_A^2 \frac{E_+^2 + E_-^2}{2} + m_A^4}} \\ = & \frac{4m_A^2 + E_+^2 - E_-^2 - 2s}{\sqrt{(E_+^2 - E_-^2)^2 - 8m_A^2 (E_+^2 + E_-^2) + 16m_A^4}} \end{aligned}

Using Mathematica's Reduce function we get

$$E_-^2 \leq \frac{1}{s} (E_+^2 - s) (s - 4m_{A}^2).$$

For the measure, the Jacobian of the forward map $(E_1, E_2, \cos\theta) \mapsto (E_+, E_-, s)$ is

$$\frac{\partial(E_+,E_-,s)}{\partial(E_1,E_2,\cos\theta)} = \begin{pmatrix} 1 & 1 & 0\\ 1 & -1 & 0\\ \frac{\partial s}{\partial E_1} & \frac{\partial s}{\partial E_2} & -2 p_1 p_2 \end{pmatrix}.$$

Expanding the determinant along the last column, whose single nonzero entry is $-2p_1p_2$, the $\partial s/\partial E_i$ entries never enter:

$$\det\frac{\partial(E_+,E_-,s)}{\partial(E_1,E_2,\cos\theta)} = -2p_1p_2 \det\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} = 4 p_1 p_2,$$

so $\mathrm{d}E_+\,\mathrm{d}E_-\,\mathrm{d}s = 4p_1p_2\, \mathrm{d}E_1\,\mathrm{d}E_2\,\mathrm{d}\cos\theta$. Therefore

$$\mathrm{d}^3p_1\, \mathrm{d}^3p_2 = \left(4\pi p_1 E_1\, \mathrm{d}E_1\right) \left(4\pi p_2 E_2\, \mathrm{d}E_2\right) \left(\frac{1}{2}\mathrm{d}\cos{\theta}\right) = 8\pi^2 p_1 p_2 E_1 E_2\, \frac{1}{4p_1 p_2}\,\mathrm{d}E_+\, \mathrm{d}E_-\, \mathrm{d}s = 2\pi^2 E_1 E_2\,\mathrm{d}E_+\, \mathrm{d}E_-\, \mathrm{d}s.$$

I guess the integration-limits logic is good enough. If anybody has something else to add there, please point it out.
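A symbolic check of the Jacobian determinant (my own sketch, not from the thread), writing $c = \cos\theta$ and taking both masses equal to $m$ as in the question:

```python
import sympy as sp

E1, E2, c, m = sp.symbols('E1 E2 c m', positive=True)
p1 = sp.sqrt(E1**2 - m**2)
p2 = sp.sqrt(E2**2 - m**2)

Eplus, Eminus = E1 + E2, E1 - E2
s = 2*m**2 + 2*E1*E2 - 2*p1*p2*c          # eq. 3.3

J = sp.Matrix([Eplus, Eminus, s]).jacobian(sp.Matrix([E1, E2, c]))
print(sp.simplify(J.det()))               # 4*sqrt(E1**2 - m**2)*sqrt(E2**2 - m**2) = 4*p1*p2
```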