https://www.physicsforums.com/threads/unbanked-curve-problem.543022/
# Unbanked curve problem

1. Oct 22, 2011 ### physics_

1. The problem statement, all variables and given/known data

A block is hung by a string from the inside roof of a van. When the van goes straight ahead at a speed of 28 m/s, the block hangs vertically down. But when the van maintains this same speed around an unbanked curve (radius = 150 m), the block swings toward the outside of the curve. Then the string makes an angle A with the vertical. Find angle A.

2. Relevant equations

F = mv^2/r ??? FsMax = μs FN ??? μs = coefficient of static friction.

3. The attempt at a solution

Managed to find the coefficient of static friction by equating the two equations above and substituting w = mg = FN. The coefficient based on my calculations was 0.53. Didn't help me answer the question. Then I reasoned, based on a drawing which may be flawed, that tan A = FN/r, where FN is the normal force, which would lead to equating FsMax = mv^2/r. Couldn't find the mass though. If I had found the mass, I could find one of the sides of the triangle the angle is in, and since the radius is given, it would be easy to determine angle A.

Last edited: Oct 22, 2011

2. Oct 22, 2011 ### ehild

When the block is stationary with respect to the van it moves around the same circle and with the same speed as the van. Where does the centripetal force come from?

ehild

3. Oct 22, 2011 ### HallsofIvy Staff Emeritus

And what does "static friction" have to do with this? I see no mention of friction in the problem. The block is not sitting on anything to have friction with.

4. Oct 22, 2011 ### physics_

Anyone have a solution? The answer in the book is 28 degrees. I was merely guessing with the work I posted above.

Last edited: Oct 22, 2011

5. Oct 22, 2011 ### ehild

Make a drawing like the one attached, and find out what forces act on the hanging box. The resultant has to be the centripetal force.

ehild

Attached: vanbox.JPG

6. Oct 22, 2011 ### physics_

I seem to have made the drawing fairly accurately already. The problem is the reasoning to solving the problem.

7. Oct 22, 2011 ### ehild

You see the right triangle with sides mg and Fcp? What is tan(theta)?

ehild
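The resolution ehild is steering towards: the string tension T and gravity are the only forces on the block, so the horizontal component of tension supplies the centripetal force, T sin A = mv^2/r, while the vertical component balances the weight, T cos A = mg. Dividing, the unknown mass cancels and tan A = v^2/(rg). A quick numerical check of the book's answer (my own sketch, not from the thread):

```python
# tan A = v^2 / (r g): the mass cancels, so no friction coefficient
# or mass is needed at all.
import math

v = 28.0   # van speed, m/s
r = 150.0  # radius of the unbanked curve, m
g = 9.8    # gravitational acceleration, m/s^2

A = math.degrees(math.atan(v**2 / (r * g)))
print(f"A = {A:.1f} degrees")  # A = 28.1 degrees, matching the book's 28
```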
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.86373370885849, "perplexity": 1821.6464106700341}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808254.76/warc/CC-MAIN-20171124142303-20171124162303-00221.warc.gz"}
https://zbmath.org/?q=an:0779.35067
# zbMATH — the first resource for mathematics

Global solutions of systems of conservation laws by wave-front tracking. (English) Zbl 0779.35067

The author considers the Cauchy problem for a nonlinear strictly hyperbolic $$n\times n$$ system of conservation laws in one space variable $$u_t + F(u)_x = 0$$, $$x\in\mathbb{R}$$, $$t\geq 0$$; $$u(0,x)=v(x)$$ with sufficiently small total variation of $$v$$, and proves existence of weak global solutions satisfying entropy admissibility conditions. Approximate solutions are constructed based on wave-front tracking: the approximate solutions are piecewise constant functions generated by solutions of Riemann problems, starting from a piecewise constant approximation of $$v$$. It is proved that the total variation remains small and that the number of lines of discontinuity remains finite. Consequently a subsequence converges to a desired solution.

Reviewer: A. Doktor (Praha)

##### MSC:
35L65 Hyperbolic conservation laws
35A05 General existence and uniqueness theorems (PDE) (MSC2000)
35A35 Theoretical approximation in context of PDEs

##### References:
[1] A. Bressan, A contractive metric for systems of conservation laws with coinciding shock and rarefaction curves, J. Differential Equations, in press. · Zbl 0802.35095
[2] DiPerna, R., Global existence of solutions to nonlinear hyperbolic systems of conservation laws, J. Differential Equations, 20, 187-212 (1976) · Zbl 0314.58010
[3] Glimm, J., Solutions in the large for nonlinear hyperbolic systems of equations, Comm. Pure Appl. Math., 18, 95-105 (1965) · Zbl 0141.28902
[4] Lax, P., Hyperbolic systems of conservation laws, II, Comm. Pure Appl. Math., 10, 537-567 (1957) · Zbl 0081.08803
[5] Liu, T. P., The deterministic version of the Glimm scheme, Comm. Math. Phys., 57, 135-148 (1977) · Zbl 0376.35042
[6] Smoller, J., Shock waves and reaction-diffusion equations (1983), Springer-Verlag, New York · Zbl 0508.35002
[7] Rozdestvenskii, B. L.; Yanenko, N., Systems of quasilinear equations
[8] B. Temple, Systems of conservation laws with coinciding shock and rarefaction curves, in "Nonlinear Partial Differential Equations" (J. Smoller, Ed.), pp. 143-151, Contemporary Math Series, Vol. 17, Amer. Math. Soc., Providence, RI. · Zbl 0538.35050
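The piecewise-constant mechanics described in the review are easy to prototype in a toy setting. Below is a minimal sketch (mine, not the paper's construction) for the scalar Burgers equation u_t + (u^2/2)_x = 0 with decreasing piecewise-constant data, so that every Riemann problem is solved by a single shock front travelling at the Rankine-Hugoniot speed; the real method for n x n systems also needs approximate rarefaction fans and interaction estimates.

```python
# Toy wave-front tracking for Burgers' equation u_t + (u^2/2)_x = 0.
# Data is piecewise constant and decreasing, so each Riemann problem is
# solved by one shock moving at the Rankine-Hugoniot speed
# s = (u_left + u_right) / 2. The number of fronts stays finite:
# collisions merge two shocks into one.
def speeds(states):
    return [(states[i] + states[i + 1]) / 2 for i in range(len(states) - 1)]

def first_collision(positions, states):
    """Earliest time at which two neighbouring fronts meet, if any."""
    s = speeds(states)
    t_min, pair = float('inf'), None
    for i in range(len(positions) - 1):
        if s[i] > s[i + 1]:  # front i catches front i+1
            t = (positions[i + 1] - positions[i]) / (s[i] - s[i + 1])
            if t < t_min:
                t_min, pair = t, i
    return t_min, pair

def track(positions, states, t_end):
    """Advance fronts, merging colliding shocks, until t_end."""
    t = 0.0
    while True:
        dt, i = first_collision(positions, states)
        dt = min(dt, t_end - t)
        positions = [x + s * dt for x, s in zip(positions, speeds(states))]
        t += dt
        if t >= t_end or i is None:
            return positions, states
        positions.pop(i + 1)  # the two fronts now coincide
        states.pop(i + 1)     # drop the intermediate state

# Three shocks that merge into a single front by t = 1:
print(track([0.0, 1.0, 2.0], [3.0, 2.0, 1.0, 0.0], t_end=5.0))
# -> ([8.5], [3.0, 0.0])
```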
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858626484870911, "perplexity": 1392.3159328157947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00613.warc.gz"}
https://www.physicsforums.com/threads/absence-of-gravity-at-atomic-level-at-all.118327/
# Absence of gravity at atomic level at all?

1. Apr 21, 2006 ### heartless

Hello, I'm not familiar with what scientists working at accelerators say, but almost every book and documentary says that quantum mechanics doesn't cover gravity. I just gave it a quick thought: well, maybe gravity isn't there at all? Maybe the atomic level simply lacks gravity and is based upon only 3 forces: electromagnetism, the strong force and the weak force? If gravity exists there, it must be very weak, but wouldn't atoms fall into one another's gravitational fields and start circling around each other? Hey, so how is it with this gravity at atomic level? And well, maybe I'm just out of time, and everyone knows the answer except for me :)

P.S., just a quick question: what keeps the electrons tied up to an atom? Thanks for all the help :shy:

2. Apr 21, 2006 ### chroot Staff Emeritus

Gravity certainly exists at atomic scales, but its strength is negligible when compared to the other three forces.

- Warren

3. Apr 21, 2006 ### Staff: Mentor

Let's calculate the gravitational potential energy of two protons or neutrons (mass about $m = 2.7 \times 10^{-27}$ kg) in a nucleus, separated by about $r = 10^{-15}$ m. The gravitational constant is about $G = 6.7 \times 10^{-11}$ N-m^2/kg^2. The potential energy is $$U = - \frac{Gm^2}{r}$$ which gives about $-4.9 \times 10^{-49}$ J, or about $-3.0 \times 10^{-30}$ eV. To put this in perspective, the binding energy per proton or neutron in most nuclei is in the general ballpark of a few million ($10^{+6}$) eV. So, unless gravity behaves wildly differently on a nuclear scale than on a macroscopic scale, its effects are insignificant compared to those of the other forces.

Last edited: Apr 21, 2006

4. Apr 23, 2006 ### heartless

Thanks, now let me ask you another question: suppose gravity at the atomic level were stronger than the strong nuclear force. How would it affect atoms, and all the particles?

5. Apr 23, 2006 ### -Job-

You used $r = 10^{-15}$. I don't know if, from a particle perspective, that is close or not, but the closer the better. If we were to take r arbitrarily close to 0 we would get a significant value. I'm not suggesting we take the limit of the gravitational potential as r goes to 0, but I wonder why you used the value of r that you used. How small can r be, in theory? Naturally, at this small scale things would have to be extremely close in order for gravity to have a chance, but in the event that they did, it sounds like it would be hard to get them apart. What are some of the main reasons why the nuclear forces are not caused by gravity?

Last edited: Apr 23, 2006

6. Apr 26, 2006 ### dextercioby

Well, the smaller "r" gets in gravity, the more you'd have to use GR...

Daniel.

7. Apr 26, 2006 ### jtbell Staff: Mentor

Atomic nuclei have diameters in the range of $10^{-15}$ to $10^{-14}$ m, so $10^{-15}$ is a reasonable order of magnitude estimate for the separation of two "neighboring" nucleons in a nucleus.

Last edited: Apr 27, 2006

8. Apr 26, 2006 ### arivero

More than "absence of gravity", I like to think of "presence of gauge forces". The point is, suppose you put a general cut-off asking that no force between two particles can be greater than the Planck force $$F_P \propto {\hbar c \over l_P^2}$$. If we had only gravity, this cut-off happens to be milder than the Planck length cut-off, because two electrons have a gravity force $$F_G=\hbar c \left({m_e\over m_P}\right)^2 {1\over r^2}$$ so they can approach a lot without violating the Planck force. Fortunately they also happen to have an electromagnetic force $$F_E=\alpha(r)\, \hbar c\, {1\over r^2}$$ so that at distances of order the Planck length, the *electromagnetic* force is the one causing a force of order the Planck force, and then hitting the cut-off. Thus the gauge forces are needed to make the force cutoff and the distance cutoff compatible. Best said: force cut-off + gauge forces imply the length cut-off. Without gauge forces, the force cut-off and the length (area, if you prefer) cutoff are two separate hypotheses.

Last edited: Apr 26, 2006
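Post #3's arithmetic is easy to reproduce. A minimal sketch (mine, not from the thread), using the rounded constants quoted above rather than precise CODATA values:

```python
# Gravitational potential energy of two nucleons in a nucleus,
# U = -G m^2 / r, with the constants as quoted in the post above.
G = 6.7e-11    # gravitational constant, N m^2 / kg^2
m = 2.7e-27    # nucleon mass as used in the post, kg
r = 1e-15      # nucleon separation, m
J_PER_EV = 1.6e-19

U = -G * m**2 / r
print(f"U = {U:.2e} J = {U / J_PER_EV:.2e} eV")
# U = -4.88e-49 J = -3.05e-30 eV, versus ~1e+6 eV nuclear binding energies
```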
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704051613807678, "perplexity": 1502.6249161905944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863463.3/warc/CC-MAIN-20180620050428-20180620070428-00398.warc.gz"}
https://math.uiowa.edu/calendar/algebra-seminar-113
# Algebra Seminar

Speaker: Erich Jauch

Topic: An Alternating Analogue of $U(\mathfrak{gl}_n)$ and Its Representations

Abstract: The universal enveloping algebra of a Lie algebra $\mathfrak{g}$ is of utmost importance when studying representations of $\mathfrak{g}$. In 2010, V. Futorny and S. Ovsienko gave a realization of $U(\mathfrak{gl}_n)$ as a subalgebra of the ring of invariants of a certain noncommutative ring with respect to the action of $S_1\times S_2\times\cdots\times S_n$, where $S_j$ is the symmetric group on $j$ variables. With some connections to Galois theory, an interesting question is: what would the analogous object be in the ring of invariants with respect to a product of alternating groups? We will discuss such an object and some results about its representations and how they relate to representations of $U(\mathfrak{gl}_n)$. This is based on the work in my recent preprint arXiv:1907.13254.

Event Date: October 21, 2019 - 3:30pm to 4:20pm
Location: 217 MLH
Calendar Category: Seminar
Seminar Category: Algebra
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.827111005783081, "perplexity": 248.9143879151092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736057.87/warc/CC-MAIN-20200810145103-20200810175103-00073.warc.gz"}
http://mathinsight.org/applet/angled_line_or_plane
# Math Insight

### Applet: An angled line or a plane

The graph of the equation $y=3x-2$ looks like a line, which it would be if it were an equation in two dimensions, i.e., in the $xy$-plane. However, rotate the graph with the mouse to give yourself a new perspective on it. Since in this case the graph is really in three dimensions, the graph of the equation becomes a plane. The equation just happens not to depend on the variable $z$, so the plane is parallel to the $z$-axis.
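The applet itself is interactive, but the same picture is easy to reproduce offline. A small sketch assuming numpy and matplotlib (not part of the original page):

```python
# y = 3x - 2 is a line in the xy-plane, but in xyz it sweeps out a
# plane parallel to the z-axis, since the equation has no z dependence.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 20)
z = np.linspace(-2, 2, 20)
X, Z = np.meshgrid(x, z)
Y = 3 * X - 2  # the equation fixes y but leaves z completely free

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, alpha=0.5)
ax.plot(x, 3 * x - 2, zs=0, zdir='z', color='k')  # the z = 0 slice: a line
ax.set_xlabel('x'); ax.set_ylabel('y'); ax.set_zlabel('z')
plt.show()
```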
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9075852632522583, "perplexity": 164.1141312252917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423903.35/warc/CC-MAIN-20170722062617-20170722082617-00344.warc.gz"}
http://arxiv.org/abs/math/0701917
math

# Title: Proliferating parasites in dividing cells: Kimmel's branching model revisited

Authors: Vincent Bansaye (PMA)

Abstract: We consider a branching model introduced by Kimmel for cell division with parasite infection. Cells contain proliferating parasites which are shared randomly between the two daughter cells when they divide. We determine the probability that the organism recovers, meaning that the asymptotic proportion of contaminated cells vanishes. We study the tree of contaminated cells, give the asymptotic number of contaminated cells and the asymptotic proportions of contaminated cells with a given number of parasites. This depends on domains inherited from the behavior of branching processes in random environment (BPRE) and given by the bivariate means of parasite offspring. In one of these domains, the convergence of proportions holds in probability, the limit is deterministic and given by the Yaglom quasistationary distribution. Moreover, we get an interpretation of the limit of the Q-process as the size-biased quasistationary distribution.

Subjects: Probability (math.PR); Populations and Evolution (q-bio.PE)
MSC classes: 60J80, 60J85, 60K37, 92C37, 92D25, 92D30
Cite as: arXiv:math/0701917 [math.PR] (or arXiv:math/0701917v2 [math.PR] for this version)

## Submission history
From: Vincent Bansaye [view email] [via CCSD proxy]
[v1] Wed, 31 Jan 2007 10:33:43 GMT (55kb)
[v2] Sat, 28 Jun 2008 07:41:56 GMT (108kb)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454482793807983, "perplexity": 4786.915872613106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824756.90/warc/CC-MAIN-20160723071024-00261-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/165941/finding-a-splitting-field-of-x3-x-1-over-mathbbz-2
# Finding a splitting field of $x^3 + x + 1$ over $\mathbb{Z}_2$

Finding a splitting field of $x^3 + x + 1$ over $\mathbb{Z}_2$.

Ok so originally I messed around with $x^3 + x + 1$ for a bit looking for an easy way to factor it and eventually decided that the factors are probably made up of really messy nested roots. So then I tried looking at the quotient field $\mathbb{Z}_2[x]/(x^3 + x + 1)$ to see if I would get lucky and it would contain all three roots, but it doesn't. Is there a clever way to easily find this splitting field besides using the cubic formula to find the roots and then just directly adjoining them to $\mathbb{Z}_2$?

Edit: Ok it turns out I miscalculated in my quotient field; $\mathbb{Z}_2[x]/(x^3 + x + 1)$ does contain all three roots.

-

If degree is not $3$ it is $6$. – André Nicolas Jul 3 '12 at 2:08

Actually, your quotient $\mathbb{Z}_2[x]/(x^3+x+1)$ does contain at least one root: $x$. This is because $x^3+x+1\equiv 0\pmod{x^3+x+1}$. Can you find the others? – roninpro Jul 3 '12 at 2:08

The splitting field is either degree $3$ or degree $6$ over $\mathbb{Z}_2$, hence it is either $\mathbb{F}_8$ or $\mathbb{F}_{64}$. Let $\alpha$ be a root, so that $\mathbb{F}_8 = \mathbb{F}_2(\alpha)$. The elements are of the form $a+b\alpha+c\alpha^2$, with $\alpha^3=\alpha+1$. Now, the question is whether any of these elements besides $\alpha$ is a root of the original polynomial $x^3+x+1$. Note that $(\alpha^2)^3 = (\alpha^3)^2 = (\alpha+1)^2 = \alpha^2+1$, and so if we plug $\alpha^2$ into the polynomial we have $$\alpha^6 + \alpha^2 + 1 = \alpha^2+1+\alpha^2+1= 0.$$ Thus, $\alpha^2$ is also a root. So the polynomial has at least two roots in $\mathbb{F}_8$, and so splits there.

-

oh shoot I must have miscalculated in my quotient field, ok cool, thanks! – Thoth Jul 3 '12 at 2:11

If your cubic were to factor, the factorization must have at least one linear term, which corresponds to a root. It is easy to check it has no roots in $\mathbb{Z}_2$ – check them both! Plugging in $0$ and $1$ both give $1$, so neither is a root, and your polynomial is irreducible over $\mathbb{Z}_2$. A way to write the splitting field is $\mathbb{Z}_2(\alpha)$ where $\alpha$ is any one of the roots of $x^3+x+1$. This is because $\alpha^2$ and $\alpha^4= \alpha^2+\alpha$ must then also be distinct roots of $x^3+x+1$, and these 3 then comprise a full list. Note, this isn't a special trick to notice for this problem. In fields of characteristic $p$, if $\beta$ is a root of a polynomial then $\beta^p$ will automatically also be a root, due to properties of the Frobenius endomorphism. Or as roninpro pointed out in the comments, the quotient ring you considered (which is essentially the same thing as adjoining these roots) does contain at least one root, and by this same trick, all the roots.

-

I thought forming the quotient field adjoined just a single root? And that sometimes you would get lucky and the remaining roots could be formed within the quotient field as well, but that sometimes they couldn't, is this not correct? – Thoth Jul 3 '12 at 2:17

@NollieTré You're correct. But extensions of finite fields are all Galois and in particular normal. So if $K/k$ are finite fields and $f \in k[X]$ is irreducible and has a root in $K$, then it splits in $K$. Frobenius is a great thing! – Dylan Moreland Jul 3 '12 at 2:25

@NollieTré You are correct, sometimes the extra roots we find with the Frobenius trick just correspond to the same roots. But here they are distinct. You can check yourself (by carrying out the division) that here $x^2$ is also a root, since $x^6+x^2+1 = (x^3+x+1)(x^3-x-1)+2(x^2+x+1) = 0$, and similarly $x^4$ is also a root. So the quotient formed actually has all the roots. – Ragib Zaman Jul 3 '12 at 2:26

Ok interesting, so is there a canonical example of when the quotient field does not contain all the roots of the polynomial? – Thoth Jul 3 '12 at 2:29

I think it's easier to note that if $\alpha$ is a root of the polynomial then squaring gives $0^2 = (\alpha^3 + \alpha + 1)^2 = (\alpha^2)^3 + \alpha^2 + 1$. – Dylan Moreland Jul 3 '12 at 2:30

Two good answers already here, but I wanted to emphasize the usefulness of the Frobenius endomorphism. Let $f(X) = X^3 + X + 1$. If I let $\alpha$ denote the image of $X$ in $k = \mathbb F_2[X]/(f(X))$ then applying Frobenius to $0 = f(\alpha)$ gives $0 = (\alpha^3 + \alpha + 1)^2 = (\alpha^2)^3 + \alpha^2 + 1.$ Hence $\alpha^2$ is also a root of $f$, and $\alpha^2 \neq \alpha$ because $\alpha \neq 0, 1$. Since $k$ contains two roots of the cubic $f$, it contains three. To bring in slightly more technology: extensions of finite fields are Galois and in particular normal, and this implies that if you adjoin one root of an irreducible polynomial over a finite field then you've actually adjoined all of them. This is, of course, not true in general!

-

Just wanted to add here, in $\Bbb{Z}_2(\alpha)[x]$, one can factor $x^3 + x + 1 = (x + \alpha)(x^2 + \alpha x + (\alpha^2 + 1))$. Given that $\alpha^2$ is also a root (as Dylan shows above), using ordinary high-school factorization techniques the third root is easily seen to be $\alpha^2 + \alpha$ (a somewhat preferable form than $\alpha^4$). That is: $x^2 + \alpha x + (\alpha^2 + 1) = (x + \alpha^2)(x + (\alpha^2 + \alpha))$ – David Wheeler Jul 6 '12 at 10:20
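The whole discussion is small enough to verify by brute force. A quick sketch (my own, not from the thread) that builds $\mathbb{F}_8 = \mathbb{Z}_2[x]/(x^3+x+1)$ with elements encoded as 3-bit masks and finds the roots of $X^3+X+1$:

```python
# F_8 = Z_2[x]/(x^3 + x + 1); bit i of an element is the coefficient
# of x^i. We confirm the roots of X^3 + X + 1 are a, a^2 and a^2 + a.
MOD = 0b1011  # x^3 + x + 1

def mul(a, b):
    """Multiply two F_8 elements: carry-less product, then reduce."""
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for i in (4, 3):  # reduce degrees 4 and 3 using x^3 = x + 1
        if (p >> i) & 1:
            p ^= MOD << (i - 3)
    return p

def f(u):  # X^3 + X + 1 evaluated at u (note +1 is XOR in char 2)
    return mul(mul(u, u), u) ^ u ^ 1

roots = [u for u in range(8) if f(u) == 0]
print(roots)  # [2, 4, 6] -> a = 0b010, a^2 = 0b100, a^2 + a = 0b110
```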
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060187339782715, "perplexity": 186.81982532539902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701150206.8/warc/CC-MAIN-20160205193910-00182-ip-10-236-182-209.ec2.internal.warc.gz"}
https://arxiv.org/abs/1807.02069
math.DS

# Title: Singular perturbation analysis of a regularized MEMS model

Abstract: Micro-Electro Mechanical Systems (MEMS) are defined as very small structures that combine electrical and mechanical components on a common substrate. Here, the electrostatic-elastic case is considered, where an elastic membrane is allowed to deflect above a ground plate under the action of an electric potential, whose strength is proportional to a parameter $\lambda$. Such devices are commonly described by a parabolic partial differential equation that contains a singular nonlinear source term. The singularity in that term corresponds to the so-called "touchdown" phenomenon, where the membrane establishes contact with the ground plate. Touchdown is known to imply the non-existence of steady state solutions and blow-up of solutions in finite time. We study a recently proposed extension of that canonical model, where such singularities are avoided due to the introduction of a regularizing term involving a small "regularization" parameter $\varepsilon$. Methods from dynamical systems and geometric singular perturbation theory, in particular the desingularization technique known as "blow-up", allow for a precise description of steady-state solutions of the regularized model, as well as for a detailed resolution of the resulting bifurcation diagram. The interplay between the two main model parameters $\varepsilon$ and $\lambda$ is emphasized; in particular, the focus is on the singular limit as both parameters tend to zero.

Subjects: Dynamical Systems (math.DS)
MSC classes: 34B16, 34C23, 34E05, 34E15, 34L30, 35K67, 74G10
Cite as: arXiv:1807.02069 [math.DS] (or arXiv:1807.02069v1 [math.DS] for this version)

## Submission history
From: Annalisa Iuorio [view email]
[v1] Thu, 5 Jul 2018 16:03:43 UTC (305 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9080376625061035, "perplexity": 818.8537188746083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247505838.65/warc/CC-MAIN-20190221152543-20190221174543-00419.warc.gz"}
https://edwardfhughes.wordpress.com/2012/08/
# The Theorem of The Existence of Zeroes

It's time to prove the central result of elementary algebraic geometry. Mostly it's referred to as Hilbert's Nullstellensatz. This German term translates roughly to the title of this post. Indeed 'Null' means 'zero', 'Stellen' means 'places' (here, the loci where polynomials vanish) and 'Satz' means 'theorem'. But referring to it merely as an existence theorem for zeroes is inadequate. Its real power is in setting up a correspondence between algebra and geometry.

Are you sitting comfortably? Grab a glass of water (or wine if you prefer). Settle back and have a peruse of these theorems. This is your first glance into the heart of a magical subject. (In many texts these theorems are all referred to as the Nullstellensatz. I think this is both pointless and confusing, so have renamed them! If you have any comments or suggestions about these names please let me know).

Theorem 4.1 (Hilbert's Nullstellensatz) Let $J\subsetneq k[\mathbb{A}^n]$ be a proper ideal of the polynomial ring. Then $V(J)\neq \emptyset$. In other words, for every nontrivial ideal there exists a point which simultaneously zeroes all of its elements.

Theorem 4.2 (Maximal Ideal Theorem) Every maximal ideal $\mathfrak{m}\subset k[\mathbb{A}^n]$ is of the form $(x_1-a_1,\dots,x_n-a_n)$ for some $(a_1,\dots,a_n)\in \mathbb{A}^n$. In other words every maximal ideal is the ideal of some single point in affine space.

Theorem 4.3 (Correspondence Theorem) For every ideal $J\subset k[\mathbb{A}^n]$ we have $I(V(J))=\sqrt{J}$.

We'll prove all of these shortly. Before that let's have a look at some particular consequences. First note that 4.1 is manifestly false if $k$ is not algebraically closed. Consider for example $k=\mathbb{R}$ and $n=1$. Then certainly $V(x^2+1)=\emptyset$. Right then. From here on in we really must stick just to algebraically closed fields.

Despite having the famous name, 4.1 is not really immediately useful. In fact we'll see its main role is as a convenient stopping point in the proof of 4.3 from 4.2. The maximal ideal theorem is much more important. It precisely provides the converse to Theorem 3.10. But it is the correspondence theorem that is of greatest merit. As an immediate corollary of 4.3, 3.8 and 3.10 (recalling that prime and maximal ideals are radical) we have

Corollary 4.4 The maps $V,I$ as defined in 1.2 and 2.4 give rise to the following bijections

$\{\textrm{affine varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{radical ideals in } k[\mathbb{A}^n]\}$

$\{\textrm{irreducible varieties in }\mathbb{A}^n\} \leftrightarrow \{\textrm{prime ideals in } k[\mathbb{A}^n]\}$

$\{\textrm{points in }\mathbb{A}^n\} \leftrightarrow \{\textrm{maximal ideals in } k[\mathbb{A}^n]\}$

Proof We'll prove the first bijection explicitly, for it is so rarely done in the literature. The second and third bijections follow from the argument for the first and 3.8, 3.10. Let $J$ be a radical ideal in $k[\mathbb{A}^n]$. Then $V(J)$ is certainly an affine variety, so $V$ is well defined. Moreover $V$ is injective. For suppose $\exists J'$ radical with $V(J')=V(J)$. Then $I(V(J'))=I(V(J))$ and thus by 4.3 $J = J'$. It remains to prove that $V$ is surjective. Take $X$ an affine variety. Then $J'=I(X)$ is an ideal with $V(J')=X$ by Lemma 2.5. But $J'$ is not necessarily radical. Let $J=\sqrt{J'}$, a radical ideal. Then by 4.3 $I(V(J'))=J$. So $V(J) = V(I(V(J'))) = V(J') = X$ by 2.5. This completes the proof. $\blacksquare$
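For a concrete sanity check (an illustration of mine, not from the original post): in $k[x]$ take $J = (x^2)$. Then $V(J) = \{0\}$ and $I(V(J)) = (x) = \sqrt{J}$, exactly as 4.3 demands. The radical ideal $(x)$ is maximal and corresponds to the point $0$ under the third bijection of 4.4, while the non-radical ideal $(x^2)$ appears in none of the correspondences.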
We'll see in the next post that we need not restrict our attention to $\mathbb{A}^n$. In fact using the coordinate ring we can gain a similar correspondence for the subvarieties of any given variety. This will lead to an advanced introduction to the language of schemes. With these promising results on the horizon, let's get down to business.

We'll begin by recalling a definition and a theorem.

Definition 4.5 A finitely generated $k$-algebra is a ring $R$ s.t. $R \cong k[a_1,\dots,a_n]$ for some $a_i \in R$. A finite $k$-algebra is a ring $R$ s.t. $R\cong ka_1 + \dots + ka_n$.

Observe how this definition might be confusing when compared to a finitely generated $k$-module. But applying a broader notion of 'finitely generated' to both algebras and modules clears up the issue. You can check that the following definition is equivalent to those we've seen for algebras and modules. A finitely generated algebra is richer than a finitely generated module because an algebra has an extra operation – multiplication.

Definition 4.6 We say an algebra (module) $A$ is finitely generated if there exists a finite set of generators $F$ s.t. $A$ is the smallest algebra (module) containing $F$. We then say that $A$ is generated by $F$.

Theorem 4.7 Let $k$ be a general field and $A$ a finitely generated $k$-algebra. If $A$ is a field then $A$ is algebraic over $k$.

Okay I cheated a bit saying 'recall' Theorem 4.7. You probably haven't seen it anywhere before. And you might think that it's a teensy bit abstract! Nevertheless we shall see that it has immediate practical consequences. If you are itching for a proof, don't worry. We'll in fact present two. The first will be due to Zariski, and the second an idea of Noether. But before we come to those we must deduce 4.1 – 4.3 from 4.7.

Proof of 4.2 Let $m \subset k[\mathbb{A}^n]$ be a maximal ideal. Then $F = k[\mathbb{A}^n]/m$ is a field. Define the natural homomorphism $\pi: k[\mathbb{A}^n] \ni x \mapsto x+m \in F$. Note $F$ is a finitely generated $k$-algebra, generated by the $x_i+m$ certainly. Thus by 4.7 $F/k$ is an algebraic extension. But $k$ was algebraically closed. Hence $k$ is isomorphic to $F$ via $\phi : k \rightarrowtail k[\mathbb{A}^n] \xrightarrow{\pi} F$. Let $a_i = \phi^{-1}(x_i+m)$. Then $\pi(x_i - a_i) = 0$ so $x_i - a_i \in \textrm{ker}\,\pi = m$. Hence $(x_1-a_1, \dots, x_n-a_n) \subset m$. But $(x_1-a_1, \dots, x_n-a_n)$ is itself maximal by 3.10. Hence $m = (x_1-a_1, \dots, x_n-a_n)$ as required. $\blacksquare$

That was really quite easy! We just worked through the definitions, making good use of our stipulation that $k$ is algebraically closed. We'll soon see that all the algebraic content is squeezed into the proof of 4.7.

Proof of 4.1 Let $J$ be a proper ideal in the polynomial ring. Since $k[\mathbb{A}^n]$ is Noetherian, $J\subset m$ for some maximal ideal $m$. From 4.2 we know that $m=I(P)$ for some point $P\in \mathbb{A}^n$. Recall from 2.5 that $V(I(P)) = \{P\} \subset V(J)$, so $V(J) \neq \emptyset$. $\blacksquare$

The following proof is lengthier but still not difficult. Our argument uses a method known as the Rabinowitsch trick.

Proof of 4.3 Let $J\triangleleft k[\mathbb{A}^n]$ and $f\in I(V(J))$. We want to prove that $\exists N$ s.t. $f^N \in J$. We start by introducing a new variable $t$. Define an ideal $J_f \supset J$ by $J_f = (J, ft - 1) \subset k[x_1,\dots,x_n,t]$. By definition $V(J_f) = \{(P,b) \in \mathbb{A}^{n+1} : P\in V(J), \ f(P)b = 1\}$. Note that $f \in I(V(J))$ so $V(J_f) = \emptyset$. Now by 4.1 we must have that $J_f$ is improper. In other words $J_f = k[x_1,\dots, x_n, t]$. In particular $1 \in J_f$.
Since $k[x_1,\dots, x_n, t]$ is Noetherian we know that $J$ is finitely generated, by some $\{f_1,\dots,f_r\}$ say. Thus we can write $1 = \sum_{i=1}^r g_i f_i + g_0 (ft - 1)$ where $g_i\in k[x_1,\dots , x_n, t]$ (*). Let $N$ be such that $t^N$ is the highest power of $t$ appearing among the $g_i$ for $0\leq i \leq r$. Now multiplying (*) above by $f^N$ yields $f^N = \sum_{i=1}^r G_i(x_1,\dots, x_n, ft) f_i + G_0(x_1,\dots,x_n,ft)(ft-1)$ where we define $G_i = f^N g_i$. This equation is valid in $k[x_1,\dots,x_n, t]$. Consider its reduction in the ring $k[x_1,\dots,x_n,t]/(ft - 1)$. We have the congruence $f^N\equiv \sum_{i=1}^r h_i (x_1,\dots,x_n) f_i \ \textrm{mod}\ (ft-1)$ where $h_i = G_i(x_1,\dots,x_n,1)$. Now consider the map $\phi:k[x_1,\dots, x_n]\rightarrowtail k[x_1,\dots, x_n,t]\xrightarrow{\pi} k[x_1,\dots, x_n,t]/(ft-1)$. Certainly nothing in the image of the injection can possibly be in the ideal $(ft - 1)$, not having any $t$ dependence. Hence $\phi$ must be injective. But then we see that $f^N = \sum_{i=1}^r h_i(x_1,\dots, x_n) f_i$ holds in the ring $k[\mathbb{A}^n]$. Recalling that the $f_i$ generate $J$ gives the result. $\blacksquare$
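Incidentally, the Rabinowitsch trick is also a practical algorithm: $f \in \sqrt{J}$ iff $1 \in (J, 1 - tf)$, and the latter can be tested with a Gröbner basis. A small sketch assuming sympy, on a made-up example $J = (x^2, xy)$ with $f = x$:

```python
# Radical membership via the Rabinowitsch trick: f lies in rad(J)
# iff the reduced Groebner basis of (J, 1 - t*f) is {1}.
from sympy import symbols, groebner

x, y, t = symbols('x y t')

J = [x**2, x*y]   # example ideal; its radical is (x)
f = x

G = groebner(J + [1 - t*f], x, y, t, order='lex')
print(G.exprs)    # [1] -> f is in the radical of J
```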
We shall devote the rest of this post to establishing 4.7. To do so we'll need a number of lemmas. You might be unable to see the wood for the trees! If so, you can safely skim over much of this. The important exception is Noether normalisation, which we'll come to later. I'll link the ideas of our lemmas to geometrical concepts at our next meeting.

Definition 4.8 Let $A,B$ be rings with $B \subset A$. Let $a\in A$. We say that $a$ is integral over $B$ if $a$ is the root of some monic polynomial with coefficients in $B$. That is to say $\exists b_i \in B$ s.t. $a^n + b_{n-1}a^{n-1} + \dots + b_0 = 0$. If every $a \in A$ is integral over $B$ we say that $A$ is integral over $B$, or that $A$ is an integral extension of $B$.

Let's note some obvious facts. Firstly we can immediately talk about $A$ being integral over $B$ when $A,B$ are algebras with $B$ a subalgebra of $A$. Remember an algebra is still a ring! It's rather pedantic to stress this now, but hopefully it'll prevent confusion if I mix my terminology later. Secondly observe that when $A$ and $B$ are fields "integral over" means exactly the same as "algebraic over".

We'll begin by proving some results that will be of use in both our approaches. We'll see that there's a subtle interplay between finite $k$-algebras, integral extensions and fields.

Lemma 4.9 Let $F$ be a field and $R\subset F$ a subring. Suppose $F$ is an integral extension of $R$. Then $R$ is itself a field.

Proof Let $r \in R$. Then certainly $r \in F$ so $r^{-1} \in F$ since $F$ is a field. Now $r^{-1}$ is integral over $R$ so satisfies an equation $r^{-n} = b_{n-1} r^{-n+1} +\dots + b_0$ with $b_i \in R$. But now multiplying through by $r^{n-1}$ yields $r^{-1} = b_{n-1} + \dots + b_0 r^{n-1} \in R$. $\blacksquare$

Note that this isn't obvious a priori. The property that an extension is integral contains sufficient information to percolate the property of inverses down to the base ring.

Lemma 4.10 If $A$ is a finite $B$-algebra then $A$ is integral over $B$.

Proof Write $A = Ba_1 + \dots +Ba_n$. Let $x \in A$. We want to prove that $x$ satisfies some equation $x^n + b_{n-1}x^{n-1} + \dots + b_0 = 0$. We'll do so by appealing to our knowledge about determinants. For each $a_i$ we may clearly write $xa_i = \sum_{j=1}^{n} b_{ij}a_j$ for some $b_{ij} \in B$. Writing $\vec{a} = (a_1, \dots, a_n)$ and defining the matrix $(\beta)_{ij} = b_{ij}$ we can express our equation as $\beta \vec{a} = x\vec{a}$. We recognise this as an eigenvalue problem. In particular $x$ satisfies the characteristic polynomial of $\beta$, a polynomial of degree $n$ with coefficients in $B$. But this is precisely what we wanted to show. $\blacksquare$

Corollary 4.11 Let $A$ be a field and $B\subset A$ a subring. If $A$ is a finite $B$-algebra then $B$ is itself a field.

Proof Immediate from 4.9 and 4.10. $\blacksquare$

We now focus our attention on Zariski's proof of the Nullstellensatz. I take as a source Daniel Grayson's excellent exposition.

Lemma 4.12 Let $R$ be a ring and $F$ an $R$-algebra generated by $x \in F$. Suppose further that $F$ is a field. Then $\exists s \in R$ s.t. $S = R[s^{-1}]$ is a field. Moreover $x$ is algebraic over $S$.

Proof Let $R'$ be the fraction field of $R$. Now recall that $x$ is algebraic over $R'$ iff $R'[x] \supset R'(x)$. Thus $x$ is algebraic over $R'$ iff $R'[x]$ is a field. So certainly our $x$ is algebraic over $R'$, for we are given that $F$ is a field. Let $x^n + f_{n-1}x^{n-1} + \dots + f_0$ be the minimal polynomial of $x$. Now define $s\in R$ to be the common denominator of the $f_i$, so that $f_0,\dots, f_{n-1} \in R[s^{-1}] = S$. Now $x$ is integral over $S$ so $F/S$ is an integral extension. But then by 4.9 $S$ is a field, and $x$ algebraic over it. $\blacksquare$

Observe that this result is extremely close to 4.7. Indeed if we take $R$ to be a field we have $S = R$ in 4.12. The lemma then says that $R[x]$ is algebraic as a field extension of $R$. Morally this proof mostly just used definitions. The only nontrivial fact was the relationship between $R'(x)$ and $R'[x]$. Even this is not hard to show rigorously from first principles, and I leave it as an exercise for the reader.

We'll now attempt to generalise 4.12 to $R[x_1,\dots,x_n]$. The argument is essentially inductive, though quite laborious. 4.7 will be immediate once we have succeeded.

Lemma 4.13 Let $R = F[x]$ be a polynomial ring over a field $F$. Let $u\in R$. Then $R[u^{-1}]$ is not a field.

Proof By Euclid, $R$ has infinitely many prime elements. Let $p$ be a prime not dividing $u$. Suppose $\exists q \in R[u^{-1}]$ s.t. $qp = 1$. Then $q = f(u^{-1})$ where $f$ is a polynomial of degree $n$ with coefficients in $R$. Hence in particular $u^n = u^n f(u^{-1}) p$ holds in $R$, for $u^n f(u^{-1}) \in R$. Thus $p | u^n$ but $p$ is prime so $p | u$. This is a contradiction. $\blacksquare$

Corollary 4.14 Let $K$ be a field, $F\subset K$ a subfield, and $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $x$ is algebraic over $F$. Moreover $R = K$.

Proof Suppose $x$ were transcendental over $F$. Then $R=F[x]$ would be a polynomial ring, so by 4.13 $R[u^{-1}]$ couldn't be a field. Hence $x$ is algebraic over $F$, so $R$ is a field. Hence $R=R[u^{-1}]=K$. $\blacksquare$

The following fairly abstract theorem is the key to unlocking the Nullstellensatz. It's essentially a slight extension of 4.14, applying 4.12 in the process. I'd recommend skipping the proof first time, focussing instead on how it's useful for the induction of 4.16.

Theorem 4.15 Take $K$ a field, $F \subset K$ a subring, $x \in K$. Let $R = F[x]$. Suppose $\exists u\in R$ s.t. $R[u^{-1}] = K$. Then $\exists\, 0\neq s \in F$ s.t. $F[s^{-1}]$ is a field. Moreover $F[s^{-1}][x] = K$ and $x$ is algebraic over $F[s^{-1}]$.

Proof Let $L=\textrm{Frac}(F)$.
Now by 4.14 we can immediately say that $L[x]=K$, with $x$ algebraic over $L$. Now we seek our element $s$ with the desired properties. Looking back at 4.12, we might expect it to be useful. But to use 4.12 for our purposes we'll need to apply it to some $F' = F[t^{-1}]$ with $F'[x] = K$, where $t \in F$. Suppose we've found such a $t$. Then 4.12 gives us $s' \in F'$ s.t. $F'[s'^{-1}]$ is a field with $x$ algebraic over it. But now $s' = qt^{-m}$ for some $q \in F, \ m \in \mathbb{N}$. Now $F'[s'^{-1}]=F[t^{-1}][s'^{-1}]=F[(qt)^{-1}]$, so setting $s=qt$ completes the proof. (You might want to think about that last equality for a second. It's perhaps not immediately obvious). So all we need to do is find $t$. We do this using our first observation in the proof. Observe that $u^{-1}\in K=L[x]$ so we can write $u^{-1}=l_0+\dots +l_{n-1}x^{n-1}$, $l_i \in L$. Now let $t \in F$ be a common denominator for all the $l_i$. Then $u^{-1} \in F'=F[t^{-1}]$ so $F'[x]=K$ as required. $\blacksquare$

Corollary 4.16 Let $k$ be a ring, $A$ a field, finitely generated as a $k$-algebra by $x_1,\dots,x_n$. Then $\exists\, 0\neq s\in k$ s.t. $k[s^{-1}]$ is a field, with $A$ a finite algebraic extension of $k[s^{-1}]$. Trivially if $k$ is a field, then $A$ is algebraic over $k$, establishing 4.7.

Proof Apply Theorem 4.15 with $F=k[x_1,\dots,x_{n-1}]$, $x=x_n$, $u=1$ to get $s'\in F$ s.t. $A' = k[x_1,\dots,x_{n-1}][s'^{-1}]$ is a field with $x_n$ algebraic over it. But now apply 4.15 again with $F=k[x_1,\dots,x_{n-2}]$, $u = s'$ to deduce that $A''=k[x_1,\dots, x_{n-2}][s''^{-1}]$ is a field, with $A'$ algebraic over $A''$, for some $s'' \in F$. Applying the theorem a further $(n-2)$ times gives the result. $\blacksquare$

This proof of the Nullstellensatz is pleasingly direct and algebraic. However it has taken us a long way away from the geometric content of the subject. Moreover 4.13-4.15 are pretty arcane in the current setting. (I'm not sure whether they become more meaningful with a better knowledge of the subject. Do comment if you happen to know)! Our second proof sticks closer to the geometric roots. We'll introduce an important idea called Noether Normalisation along the way. For that you'll have to come back next time!

# Gravity for Beginners

Everyone knows about gravity. Every time you drop a plate or trip on the stairs you're painfully aware of it. It's responsible for the thrills and spills at theme parks and the heady hysteria of a plummeting toboggan. But gravity is not merely restricted to us small fry on Earth. It is a truly universal force, clumping mass together to form stars and planets, keeping worlds in orbit around their Suns.

Physicists call gravity a fundamental force – it cannot be explained in terms of any other interaction. The most accurate theory of gravity is Einstein's General Theory of Relativity. The presence of mass warps space and time, creating the physical effects we observe. The larger the mass, the more curved space and time become, so the greater the gravitational pull. Space and time are a rubber sheet, which a large body like the Sun distorts.

Gravity is very weak compared to other fundamental forces. This might be a bit of a surprise – after all it takes a very powerful rocket to leave Earth's orbit. But this is just because Earth is so very huge. Things on an everyday scale don't seem to be pulled together by gravity. But small magnets certainly are attracted to each other by magnetism. So magnetism is stronger than gravity.
The fact that gravity acts by changing the geometry of space and time sets it apart from all other forces. In fact our best theories of particles assume that there is some cosmic blank canvas on which interactions happen. This dichotomy and the weakness of gravity give rise to the conflicts at the heart of physics.

It's All Relative, After All

"You can't just let nature run wild." Walt Disney

Throughout history, scientists have employed principles of relativity to understand nature. In broad terms these say that different people doing the same scientific experiment should get the same answer. This stands to reason – we travel the world and experience the same laws of nature everywhere. Any differences we might think we observe can be explained by a change in our experimental conditions. Looking for polar bears in the Sahara is obvious madness.

In the early 17th century Galileo formulated a specific version of relativity that applied to physics. He said that the laws of physics are the same for everyone moving at constant speed, regardless of the speed they are going. We notice this in everyday life. If you do an egg and spoon race walking at a steady pace the egg will stay on the spoon just as if you were standing still. If you try to speed up or slow down, though, the egg will likely go crashing to the ground. This shows that for an accelerating observer the laws of physics are not the same. Newton noticed this and posited that an accelerating object feels a force that grows with its mass and acceleration. Sitting in a plane on takeoff this force pushes you backwards into your seat. Physicists call this Newton's Second Law.

Newton's Second Law can be used the opposite way round too. Let's take an example. Suppose we drop an orange in the kitchen. As it travels through the air its mass certainly stays the same and the only significant force it feels is gravity. Using Newton's Second Law we can calculate the acceleration of the orange. Now acceleration is just change in speed over time. So given any time in the future we can predict the speed of the orange. But speed is change in distance over time. So we know the distance the orange has travelled towards the ground after any given time. In particular we can say exactly how much time it'll be before the orange goes splat on the kitchen floor. One of the remarkable features of the human brain is that it can do approximations to these calculations very quickly, enabling us to catch the orange before disaster strikes.

The power of the principle of relativity is now apparent. Suppose we drop the orange in a lift, while steadily travelling upwards. We can instantly calculate how long it'll take to hit the lift floor. Indeed by the principle of relativity it must take exactly the same amount of time as it did when we were in the kitchen.

There's a hidden subtlety here. We've secretly assumed that there is some kind of universal clock, ticking away behind the scenes. In other words, everyone measures time the same, no matter how they're moving. There's also a mysterious cosmic tape measure somewhere offstage. That is, everyone agrees on distances, regardless of their motion. These hypotheses are seemingly valid for everyday life. But somehow these notions of absolute space and time are a little unsettling. It would seem that Galileo's relativity principle applies not only to physics, but also to all of space and time. Newton's ideas force the universe to exist against the fixed backdrop of graph paper.
Quite why the clock ticks and the ruler measures precisely as they do is not up for discussion. And the mysteries only deepen with Newton's theory of gravity.

A Tale of Two Forces

"That one body may act upon another at a distance through a vacuum without the mediation of anything else [...] is to me so great an absurdity that [...] no man who has [...] a competent faculty of thinking could ever fall into it." Sir Isaac Newton

Newton was arguably the first man to formulate a consistent theory of gravitation. He claimed that two masses attract each other with a force related to their masses and separation. The heavier the objects and the closer they are, the bigger the force of gravity. This description of gravity was astoundingly successful. It successively accounted for the curvature of the Earth, explained the motion of the planets around the Sun, and predicted the precise time of appearance of comets. Contemporary tests of Newton's theory returned a single verdict – it was right.

Nevertheless Newton was troubled by his theory. According to his calculations, changes in the gravitational force must be propagated instantaneously throughout the universe. Naturally he sought a mechanism for such a phenomenon. Surely something must carry this force and effect its changes. Cause and effect is a ubiquitous feature of everyday physics. Indeed Newton's Second Law says motion and force are inextricably linked. Consider the force which opens a door. It has a cause – our hand pushing against the wood – and an effect – the door swinging open. But Newton couldn't come up with an analogous explanation for gravity. He had solved the "what", but the "why" and "how" evaded him entirely.

For more than a century Newton's theories reigned supreme. It was not until the early 1800s that physicists turned their attention firmly towards another mystery – electricity. Michael Faraday led the charge with his 1821 invention of an electric motor. By placing a coil of wire in a magnetic field and connecting a battery he could make it rotate. The race was on to explain this curious phenomenon.

Faraday's work implied that electricity and magnetism were two sides of the same coin. The strange force he had observed became known, appropriately, as electromagnetism. The scientific community quickly settled on an idea. Electricity and magnetism were examples of a force field – at every point in space surrounding a magnet or current, there were invisible lines of force which affected the motion of nearby objects. All they needed now were some equations. This would allow them to predict the behaviour of currents near magnets, and verify that this revolutionary force field idea was correct. Without a firm mathematical footing the theory was worthless.

Several preeminent figures tried their hand at deriving a complete description, but in 1861 the problem remained open. In that year a young Scotsman named James Clerk Maxwell finally cracked the issue. By modifying the findings of those before him he arrived at a set of equations which completely described electromagnetism. He even went one better than Newton had with gravity. He found a mechanism for the transmission of electromagnetic energy. Surprisingly, Maxwell's equations suggested that light was an electromagnetic wave. To wit, solutions showed that electricity and magnetism could spread out in a wavelike manner. Moreover the speed of these waves was determined by a constant in his equations. This constant turned out to be very close to the expected speed of light in a vacuum.
It wasn’t a giant leap to suppose this wave was light itself. At first glance it seems like these theories solve all of physics. To learn about gravity and the motion of uncharged objects we use Newton’s theory. To predict electromagnetic phenomena we use Maxwell’s. Presumably to understand the motion of magnets under gravity we need a bit of both. But trying to get Maxwell’s equations to play nice with Galileo’s relativity throws a big spanner in the works. A Breath of Fresh Air “So the darkness shall be the light, and the stillness the dancing.” T.S. Eliot Galileo’s relativity provides a solid bedrock for Newtonian physics. It renders relative speeds completely irrelevant, allowing us to concentrate on the effects of acceleration and force. Newton’s mechanics and Galileo’s philosophy reinforce each other. In fact Newton’s equations look the same no matter what speed you are travelling at. Physicists say they are invariant for Galileo’s relativity. We might naively hope that Maxwell’s equations are invariant. Indeed a magnet behaves the same whether you are sitting still at home or running around looking for buried treasure. Currents seem unaffected by how fast you are travelling – an iPod still works on a train. It would be convenient if electromagnetism had the same equations everywhere. Physicists initially shared this hope. But it became immediately apparent that Maxwell’s equations were not invariant. A change in the speed you were travelling caused a change in the equations. Plus there was only one speed at which solutions to the equations gave the right answer! Things weren’t looking good for Galileo’s ideology. To solve this paradox, physicists made clever use of a simple concept. Since antiquity we’ve had a sense of perspective – the world looks different from another point of view. Here’s a simple example in physics. Imagine you’re running away from a statue. From your viewpoint the statue is moving backwards. From the statue’s viewpoint you are moving forwards. Both descriptions are right. They merely describe the same motion in contrary ways. In physics we give perspective a special name. Any observer has a frame of reference from which they see the world. In your frame of reference the statue moves backwards. In the statue’s frame of reference you move forwards. We’ve seen that Newton’s physics is the same in every frame of reference which moves with a constant speed. We can now rephrase our discovery about Maxwell’s equations. Physicists found that there was one frame of reference in which they were correct. Maxwell’s equations somehow prefer this frame of reference over all other ones! Any calculations in electromagnetism must be done relative to this fixed frame. But why is this frame singled out? Faced with this question, physicists spotted the chance to kill two birds with one stone. The discovery that light is a wave of electromagnetism raises an immediate question. What medium carries the light waves? We’re used to waves travelling through some definite substance. Water waves need a sea or ocean, sound waves require air, and earthquakes move through rock. Light waves must travel through a fixed mysterious fog pervading all of space. It was called aether. The aether naturally has it’s own frame of reference, just as you and I do. When we measure the speed of light in vacuum, we’re really measuring it relative to the aether. So the aether is a special reference frame for light. But light is an electromagnetic wave. 
It’s quite sensible to suggest that Maxwell’s preferred reference frame is precisely the aether! Phew, we’ve sorted it. Newtonian mechanics works with Galilean relativity because there’s nothing to specify a particular reference frame. Maxwell’s equations don’t follow relativity because light waves exist in the aether, which is a special frame of reference. Once we’ve found good evidence for the aether we’ll be home and dry. So thought physicists in the late 19th century. The gauntlet was down. Provide reliable experimental proof and win instant fame. Two ambitious men took up the challenge – Albert Michelson and Edward Morley. They reasoned that the Earth must be moving relative to the aether. Therefore from the perspective of Earth the aether must move. Just as sound moves faster when carried by a gust of wind, light must move faster when carried by a gust of aether. This prediction became known as the aether wind. By measuring the speed of light in different directions, Michelson and Morley could determine the direction and strength of the aether wind. The speed of light would be greatest when they aligned their measuring apparatus with the wind. Indeed the gusts of aether would carry light more swiftly. The speed of light would be slowest when they aligned their apparatus at right angles to the wind. The expected changes in speed were so minute that it required great ingenuity to measure them. Nevertheless by 1887 they had perfected a cunning technique. With physicists waiting eagerly for confirmation, Michelson and Morley’s experiment failed spectacularly. They measured no aether wind at all. The experiment was repeated hundreds of times in the subsequent years. The results were conclusive. It didn’t matter which orientation you chose, the speed of light was the same. This devastating realisation blew aether theory to smithereens. Try as they might, no man nor woman could paper over the cracks. Physics needed a revolution. How Time Flies “Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. That’s relativity.” Albert Einstein It is the most romanticised of fables. The task of setting physics right again fell to a little known patent clerk in Bern. The life and work of Albert Einstein has become a paradigm for genius. But yet the idea that sparked his reworking of reality was beautifully simple. For a generation, physicists had been struggling to reconcile Galileo and Maxwell. Einstein claimed that they had missed the point. Galileo and Maxwell are both right. We’ve just misunderstood the very nature of space and time. This seems a ridiculously bold assumption. It’s best appreciated by way of an analogy. Suppose you live in a rickety old house and buy a nice new chair. Placing it in your lounge you notice that it wobbles slightly. You wedge a newspaper under one leg. At first glance you’ve solved the problem. But when you sit on the chair it starts wobbling again. After a few more abortive attempts with newspapers, rags and other household items you decide to put the chair in another room. But the wobble won’t go away. Even when you sand down the legs you can’t make it stand firm. The chair and the house seem completely incompatible. One day a rogue builder turns up. He promises to fix your problem. You don’t believe he can. Neither changing the house nor the chair has made the slightest difference to you. When you return from work you are aghast to see he’s knocked the whole house down. 
He shouts up to you from a hole in the ground, “just fixing your foundations”!

More concretely Einstein said that there’s no problem with Galileo’s relativity. The laws of physics really are the same in every frame moving at constant speed. Physicists often rename this idea Einstein’s special principle of relativity even though it is Galileo’s invention! Einstein also had no beef with Maxwell’s equations. In particular they are the same in every frame. To get around the fact that Maxwell’s equations are not invariant under Galileo’s relativity, Einstein claimed that we don’t understand space and time. He claimed that there is no universal tape measure, or cosmic timepiece. Everybody is responsible for their own measurements of space and time. This clears up some of the issues that annoyed Newton.

In order to get Maxwell’s equations to be invariant when we change perspective, Einstein had to alter the foundations of physics. He postulated that each person’s measurements of space and time were different, depending on how fast they were going. This correction magically made Maxwell’s equations work with his principle of special relativity.

There’s an easy way to understand how Einstein modified space and time. We’re used to thinking that we move at a constant speed through time. Clocks, timers and watches all attest to our obsession with measuring time consistently. The feeling of time dragging on or whizzing by is merely a psychological curiosity. Before Einstein space didn’t have this privilege. We only move through space as we choose. And moving through space has no effect on moving through time.

Einstein made everything much more symmetrical. He said that we are always moving through both space and time. We can’t do anything about it. Everyone always moves through space and time at the same speed – the speed of light. To move faster through space you must move slower through time to compensate. The slower you trudge through space, the faster you whizz through time. Simple as.

Einstein simply put space and time on a more even footing. We call the whole construct spacetime. Intuitively spacetime is four dimensional. That is, you can move in four independent directions. Three of these are in space, up-down, left-right, forward-backward. One of these is in time.

You probably haven’t noticed it yet, but special relativity has some weird effects. First and foremost, the speed of light is the same however fast you are going. This is because it is a constant in Maxwell’s equations, which are the same in all frames. You can never catch up with a beam of light! This is not something we are used to from everyday life. Nevertheless it can be explained using Einstein’s special relativity.

Suppose you measure the speed of light when you are stationary. You do this by measuring the amount of time it takes for light to go a certain distance. For the sake of argument assume you get the answer 10 mph. Now imagine speeding up to 5 mph. You measure the speed of light again. Without Einstein you’d expect the result to be 5 mph. But because you’re moving faster through space you must be moving slower through time. That means it’ll take the light less time to go the same distance. In fact the warping of spacetime precisely accounts for the speed you’ve reached. The answer is again 10 mph.

Einstein’s spacetime also forces us to forget our usual notions of simultaneity. In everyday physics we can say with precision whether two events happen at the same time.
But this concept relies precisely on Newton’s absolute time. Without his convenient divine timepiece we can’t talk about exact time. We can only work with relative perspectives.

Let’s take an example. Imagine watching a Harrison Ford thriller. He’s standing on the top of a train as it rushes through the station. He positions himself precisely in the center of the train. At either end is a rapier-wielding bad guy eager to kill him. Ford is equipped with two guns which he fires simultaneously at his two nemeses. These guns fire beams of light that kill the men instantly.

In Ford’s frame both men die at the same time. The speed of light is the same in both directions and he’s equidistant from the men when he shoots. Therefore the beams hit at the same moment according to Ford. But for Ford’s sweetheart on the platform the story is different. Let’s assume she is aligned with Ford at the moment he shoots. She sees the back of the train catching up with the point where Ford took the shot. Moreover she sees the front of the train moving away from the point of firing. Now the speed of light is constant in all directions for her. Therefore she’ll see the man at the back of the train get hit before the man at the front!

Remarkably all of these strange effects have been experimentally confirmed. Special relativity and spacetime really do describe our universe. But with our present understanding it seems that making genuine measurements would be nigh on impossible. We’ve only seen examples of things we can’t measure with certainty! Thankfully there is a spacetime measurement all observers can agree on. This quantity is known as proper time.

It’s very easy to calculate the proper time between events A and B. Take a clock and put it on a spaceship. Set your spaceship moving at a constant speed so that it goes through the spacetime points corresponding to A and B. The proper time between A and B is the time that elapses on the spaceship clock between A and B.

Everyone is forced to agree on the proper time between two events. After all it only depends on the motion of the spaceship. Observers moving at different speeds will all see the same time elapsed according to the spaceship clock. The speed that they are moving has absolutely no effect on the speed of your spaceship!

You might have spotted a potential flaw. What if somebody else sets off a spaceship which takes a different route from A to B? Wouldn’t it measure a different proper time? Indeed it would. But it turns out this situation is impossible. Remember that we had to set our spaceship off at a constant speed. This means it is going in a straight line through spacetime. It’s easy to see that there’s only one possible straight line route joining any two points. (Draw a picture)!

That’s it. You now understand special relativity. In just a few paragraphs we’ve made a huge conceptual leap. Forget about absolute space and time – it’s plain wrong. Instead use Einstein’s new magic measurement of proper time. Proper time doesn’t determine conventional time or length. Rather it tells us about distances in spacetime.

Einstein had cracked the biggest problem in physics. But he wasn’t done yet. Armed with his ideas about relativity and proper time, he turned to the Holy Grail. Could he go one better than Newton? It was time to explain gravity.

The Shape Of Space

“[…] the great questions about gravitation. Does it require time? […] Has it any reference to electricity?
Or does it stand on the very foundation of matter – mass or inertia?” – James Clerk Maxwell

Physicists are always happy when they are going at constant speed. Thanks to Galileo and Einstein they don’t need to worry exactly how fast they are going. They can just observe the world and be sure that their observations are true interpretations of physics. We can all empathise with this. It’s much more pleasant going on a calm cruise across the Aegean than a rocky boat crossing the English Channel. This is because the choppy waters cause the boat to accelerate from side to side. We’re no longer travelling at a constant speed, so the laws of physics appear unusual. This can have unpleasant consequences if we don’t find our sea legs.

Given this overwhelming evidence it seems madness to alter Einstein and Galileo’s relativity. But this is exactly what Einstein did. He postulated a general principle of relativity – the laws of physics are the same in any frame of reference. That is, no matter what your speed or acceleration. To explain this brazen statement, we’ll take an example.

Imagine you go to Disneyland and queue for the Freefall Ride. In this terrifying experience you are hoisted 60 metres in the air and then dropped. As you plunge towards the ground you notice that you can’t feel your weight. For a few moments you are completely weightless! In other words there is no gravity in your accelerating frame.

Let’s try a similar thought experiment. Suppose you wake up and find yourself in a bare room with no windows or doors. You stick to the floor as you usually would on Earth. You might be forgiven for thinking that you were under the influence of terrestrial gravity. In fact you could equally be trapped in a spaceship, accelerating at precisely the correct rate to emulate the force of gravity. Just think back to Newton’s Second Law and this is obviously true.

Hopefully you’ve spotted a pattern. Acceleration and gravity are bound together. In fact there seems to be no way of unpicking the knots. Immediately Einstein’s general relativity becomes more credible. An accelerating frame is precisely a constant speed frame with gravity around it. So far so good. But we haven’t said anything about gravity that couldn’t be said about other forces. If we carry on thinking of gravity as a conventional interaction the argument becomes quite circular. We need a description of gravity that is independent of frames.

Here special relativity comes to our rescue. Remember that proper time gives us a universally agreed distance between two points. It does so by finding the length of a straight line in spacetime from one event to the other. This property characterises spacetime as flat.

To see this imagine you are a millipede, living on a piece of string. You can only move forwards or backwards. In other words you have one dimension of space. You also move through time. Therefore you exist in a two dimensional spacetime. One day you decide to make a map of all spacetime. You travel to every point in spacetime and write down the proper time it took to get there. Naturally you are very efficient, and always travel by the shortest possible route. When you return you try to make a scale drawing of all your findings on a flat sheet of paper.

Suppose your spacetime is exactly as we’d expect from special relativity. Then you have no problem making your map. Indeed you always travelled by the shortest route to each point, which is a straight line in the spacetime of special relativity.
On your flat sheet of paper, the shortest distance between two points is also a straight line. So your map will definitely work out. You can now see why special relativistic spacetime is flat.

Just for fun, let’s assume that you had trouble making your map. Just as you’ve drawn a few points on the map you realise there’s a point which doesn’t fit. Thinking you must be mistaken you try again. The same thing happens. You go out for another trek round spacetime. Exhausted you return with the same results. How immensely puzzling! Eventually you start to realise that something fundamental is wrong. What if spacetime isn’t flat? You grab a nearby bowling ball and start drawing your map on its surface. Suddenly everything adds up. The distances you measured between each point work out perfectly. Your spacetime isn’t flat, it’s spherical.

Let’s look a bit closer at why this works. When you went out measuring your spacetime you took the shortest route to every point. On the surface of a sphere this is not a straight line. Rather it is an arc of a great circle – a line of longitude, say, or the equator. If you aren’t convinced, find a globe and measure it yourself on the Earth’s surface. When you try to make a flat map, it doesn’t work. This is because the shortest distance between two points on the flat paper is a straight line. The two systems of measurement are just incompatible. We’re used to seeing this every time we look at a flat map of the world. The distances on it are all wrong because it has to be stretched and squished to fit the flat page.

Gallivanting millipedes aside, what has this got to do with reality? If we truly live in the universe of special relativity then we don’t need to worry about such complications. Everything is always flat! But Einstein realised that these curiosities were exactly the key to the treasure chest of gravity.

In his landmark paper of 1915, Einstein told us to forget gravity as a force. Instead, he claimed, gravity modifies the proper time between events. In doing so, it changes the geometry of spacetime. The flat, featureless landscape of special relativity was instantly replaced by cliffs and ravines. More precisely Einstein came up with a series of equations which described how the presence of mass changes the calculation of proper time. Large masses can warp spacetime, causing planets to orbit their suns and stars to form galaxies. Nothing is immune to the change in geometry. Even light bends around massive stars. This has been verified many times during solar eclipses.

Einstein had devised a frame independent theory of gravity which elegantly explained all gravitational phenomena. His general principle of relativity was more than vindicated; it became the title of a momentous new perspective on the cosmos. The quest to decipher Newton’s gravity was complete. In the past century, despite huge quantities of research, we have made little progress beyond Einstein’s insights. The experimental evidence for general relativity is overwhelming. Yet it resists all efforts to express it as a quantum theory. We stand at an impasse, looking desperately for a way across.

I’ll start this post by tying up some loose ends from last time. Before we get going there’s no better recommendation for uplifting listening than this marvellous recording. Hopefully it’ll help motivate and inspire you (and me) as we journey deeper into the weird and wonderful world of algebra and geometry.
I promised a proof that for algebraically closed fields $k$ every Zariski open set is dense in the Zariski topology. Quite a mouthful at this stage of a post, I admit. Basically what I’m showing is that Zariski open sets are really damn big, only in a mathematically precise way. But what of this ‘algebraically closed’ nonsense? Time for a definition.

Definition 3.1 A field $k$ is algebraically closed if every nonconstant polynomial in $k[x]$ has a root in $k$.

Let’s look at a few examples. Certainly $\mathbb{R}$ isn’t algebraically closed. Indeed the polynomial $x^2 + 1$ has no root in $\mathbb{R}$. By contrast $\mathbb{C}$ is algebraically closed, by virtue of the Fundamental Theorem of Algebra. Clearly no finite field is algebraically closed. Indeed suppose $k=\{p_1,\dots ,p_n\}$; then $(x-p_1)\dots (x-p_n) +1$ has no root in $k$. We’ll take a short detour to exhibit another large class of algebraically closed fields.

Definition 3.2 Let $k,\ l$ be fields with $k\subset l$. We say that $l$ is a field extension of $k$ and write $l/k$ for this situation. If every element of $l$ is the root of a polynomial in $k[x]$ we call $l/k$ an algebraic extension. Finally we say that the algebraic closure of $k$ is the algebraic extension $\bar{k}$ of $k$ which is itself algebraically closed. (For those with a more technical background, recall that the algebraic closure is unique up to $k$-isomorphisms, provided one is willing to apply Zorn’s Lemma).

The idea of algebraic closure gives us a pleasant way to construct algebraically closed fields. However it gives us little intuition about what these fields ‘look like’. An illustrative example is provided by the algebraic closure of the finite field of order $p^d$ for $p$ prime. We’ll write $\mathbb{F}_{p^d}$ for this field, as is common practice. It’s not too hard to prove the following

Theorem 3.3 $\overline{\mathbb{F}_{p^d}}=\bigcup_{n=1}^{\infty}\mathbb{F}_{p^{n!}}$

Proof Read this PlanetMath article for details.

Now we’ve got a little bit of an idea what algebraically closed fields might look like! In particular we’ve constructed such fields with characteristic $p$ for all $p$. From now on we shall boldly assume that for our purposes every field $k$ is algebraically closed.

I imagine that you may have an immediate objection. After all, I’ve been recommending that you use $\mathbb{R}^n$ to gain an intuition about $\mathbb{A}^n$. But we’ve just seen that $\mathbb{R}$ is not algebraically closed. Seems like we have an issue. At this point I have to wave my hands a bit. Since $\mathbb{R}^n$ is a subset of $\mathbb{C}^n$ we can recover many (all?) of the geometrical properties we want to study in $\mathbb{R}^n$ by examining them in $\mathbb{C}^n$ and projecting appropriately. Moreover since $\mathbb{C}^n$ can be identified with $\mathbb{R}^{2n}$ in the Euclidean topology, our knowledge of $\mathbb{R}^n$ is still a useful intuitive guide. However we should be aware that when we are examining affine plane curves with $k=\mathbb{C}$ they are in some sense $4$ dimensional objects – subsets of $\mathbb{C}^2$. If you can imagine $4$ dimensional space then you are a better person than I!

That’s not to say that these basic varieties are completely intractable though. By looking at projections in $\mathbb{R}^3$ and $\mathbb{R}^2$ we can gain a pretty complete geometric insight. And this will soon be complemented by our burgeoning algebraic understanding. Now that I’ve finished rambling, here’s the promised proof!
Lemma 3.4 Every nonempty Zariski open subset of $\mathbb{A}^1$ is dense.

Proof Recall that $k[x]$ is a principal ideal domain. Thus any ideal $I\subset k[x]$ may be written $I=(f)$. But $k$ is algebraically closed so $f$ splits into linear factors. In other words $I = ((x-a_1)\dots (x-a_n))$. Hence the nontrivial Zariski closed subsets of $\mathbb{A}^1$ are finite. Since $k$ is algebraically closed it is infinite, so every nonempty Zariski open subset of $\mathbb{A}^1$ is cofinite and certainly dense. $\blacksquare$

I believe that the general case is true for the complement of an irreducible variety, a concept which will be introduced next. However I haven’t been able to find a proof, so have asked here.

How do varieties split apart? This is a perfectly natural question. Indeed many objects, both in mathematics and the everyday world, are made of some fundamental building block. Understanding this ‘irreducible atom’ gives us an insight into the properties of the object itself. We’ll thus need a notion for what constitutes an ‘irreducible’ or ‘atomic’ variety.

Definition 3.5 An affine variety $X$ is called reducible if one can write $X=Y\cup Z$ with $Y,\ Z$ proper subvarieties of $X$. If $X$ is not reducible, we call it irreducible.

This seems like a good and intuitive way of defining irreducibility. But we don’t yet know that every variety can be constructed from irreducible building blocks. We’ll use the next few minutes to pursue such a theorem.

As an aside, I’d better tell you about some notational confusion that sometimes creeps in. Some authors use the term algebraic set for my term affine variety. Such books will often use the term affine variety to mean irreducible algebraic set. For the time being I’ll stick to my guns, and use the word irreducible when it’s necessary!

Before we go theorem hunting, let’s get an idea about what irreducible varieties look like by examining some examples. The ‘preschool’ example is that $V(x_1 x_2)\subset \mathbb{A}^2$ is reducible, for indeed $V(x_1 x_2) = V(x_1)\cup V(x_2)$. This is neither very interesting nor really very informative, however. A better example is the fact that $\mathbb{A}^1$ is irreducible. To see this, recall that earlier we found that the only proper subvarieties of $\mathbb{A}^1$ are finite. But $k$ is algebraically closed, so infinite. Hence we cannot write $\mathbb{A}^1$ as the union of two proper subvarieties!

What about the obvious generalization of this to $\mathbb{A}^n$? Turns out that it is indeed true, as we might expect. For the sake of formality I’ll write it up as a lemma.

Lemma 3.6 $\mathbb{A}^n$ is irreducible.

Proof Suppose $\mathbb{A}^n=X_1\cup X_2$ with $X_1,\ X_2$ proper subvarieties. Choose nonzero polynomials $f\in I(X_1)$ and $g\in I(X_2)$; then $X_1\subset V(f)$ and $X_2\subset V(g)$, so we could write $\mathbb{A}^n=V(f)\cup V(g)$. By Lemma 2.5 we know that $V(f)\cup V(g) = V((f)\cap (g))$. But $(f)\cap(g)\supset (fg)$ so $V((f)\cap(g))\subset V(fg)$ again by Lemma 2.5. Conversely if $x\in V(fg)$ then either $f(x) = 0$ or $g(x) = 0$, so $x \in V(f)\cup V(g)$. This shows that $V(f)\cup V(g)=V(fg)$. Now $V(fg)=\mathbb{A}^n$ immediately tells us $fg(x) = 0 \ \forall x\in k^n$. We’ll prove by induction on $n$ that this is impossible for $f,\ g$ nonzero, which gives the contradiction we need and shows $\mathbb{A}^n$ is irreducible. We first note that since $k$ is algebraically closed, $k$ is infinite. For $n=1$ suppose $f,\ g \neq 0$. Then $f,\ g$ are each zero at finite sets of points. Thus since $k$ is infinite, $fg$ is not the zero function, a contradiction. Now let $n>1$. Consider $f,\ g$ nonzero polynomials in $k[\mathbb{A}^n]$. Fix $x_n \in k$. Then $f,\ g$ become polynomials in $k[\mathbb{A}^{n-1}]$.
For some choice of $x_n$, $f$ and $g$ remain nonzero as polynomials in $k[\mathbb{A}^{n-1}]$ (viewing $f$ as a polynomial in $x_n$ with coefficients in $k[\mathbb{A}^{n-1}]$, only finitely many values of $x_n$ can annihilate it, and similarly for $g$; since $k$ is infinite a good choice exists). By the induction hypothesis $fg\neq 0$ as a function, a contradiction. This completes the induction. $\blacksquare$

I’ll quickly demonstrate that $\mathbb{A}^n$ is quite strange, when considered as a topological space with the Zariski topology! Indeed let $U$ and $V$ be two nonempty open subsets. Then $U\cap V\neq \emptyset$. Otherwise $\mathbb{A}^n\setminus U,\ \mathbb{A}^n\setminus V$ would be proper closed subsets (affine subvarieties) which covered $\mathbb{A}^n$, violating irreducibility. This is very much not what happens in the Euclidean topology! Similarly we now have a rigorous proof that a nonempty open subset $U$ of $\mathbb{A}^n$ is dense. Otherwise $\bar{U}$ and $\mathbb{A}^n\setminus U$ would be proper subvarieties covering $\mathbb{A}^n$.

It’s all very well looking for direct examples of irreducible varieties, but in doing so we’ve forgotten about algebra! In fact algebra gives us a big helping hand, as the following theorem shows. For completeness we first recall the definition of a prime ideal.

Definition 3.7 $\mathfrak{p}$ is a prime ideal in $R$ iff whenever $fg \in \mathfrak{p}$ we have $f\in \mathfrak{p}$ or $g \in \mathfrak{p}$. Equivalently $\mathfrak{p}$ is prime iff $R/\mathfrak{p}$ is an integral domain.

Theorem 3.8 Let $X$ be a nonempty affine variety. Then $X$ irreducible iff $I(X)$ a prime ideal.

Proof [“$\Rightarrow$“] Suppose $I(X)$ not prime. Then $\exists f,g \in k[\mathbb{A}^n]$ with $fg \in I(X)$ but $f,\ g \notin I(X)$. Let $J_1 = (I(X),f)$ and $J_2 = (I(X),g)$. Further define $X_1 = V(J_1), \ X_2 = V(J_2)$. Then $X_1,\ X_2$ are proper subvarieties of $X$: indeed $f\notin I(X)$ means there is some point of $X$ at which $f$ doesn’t vanish, and that point lies outside $X_1$; similarly for $g$ and $X_2$. On the other hand $X\subset X_1 \cup X_2$. Indeed if $P\in X$ then $fg(P)=0$ so $f(P)=0$ or $g(P)=0$ so $P \in X_1\cup X_2$. Hence $X = X_1\cup X_2$ is reducible.

[“$\Leftarrow$“] Suppose $X$ is reducible, that is $\exists X_1,\ X_2$ proper subvarieties of $X$ with $X=X_1\cup X_2$. Since $X_1$ a proper subvariety of $X$ there must exist some element $f \in I(X_1)\setminus I(X)$. Similarly we find $g\in I(X_2)\setminus I(X)$. Hence $fg(P) = 0$ for all $P$ in $X_1\cup X_2 = X$, so certainly $fg \in I(X)$. But this means that $I(X)$ is not prime. $\blacksquare$

This easy theorem is our first real taste of the power that abstract algebra lends to the study of geometry. Let’s see it in action. Recall that a nonzero principal ideal of the ring $k[\mathbb{A}^n]$ is prime iff it is generated by an irreducible polynomial. This is an easy consequence of the fact that $k[\mathbb{A}^n]$ is a UFD. Indeed a nonzero principal ideal is prime iff it is generated by a prime element. But in a UFD every prime is irreducible, and every irreducible is prime! Using the theorem we can say that every irreducible polynomial $f$ gives rise to an irreducible affine hypersurface $X$ s.t. $I(X)=(f)$. Note that we cannot get a converse to this – there’s nothing to say that $I(X)$ must be principal in general.

Does this generalise to ideals generated by several irreducible polynomials? We quickly see the answer is no. Indeed take $f = x,\ g = x^2 + y^2 -1$ in $k[\mathbb{A}^2]$. These are both clearly irreducible, but $(f,g)$ is not prime. We can see this in two ways. Algebraically $(y-1)(y+1) = y^2 - 1 = g - x\cdot f \in (f,g)$, but $y-1,\ y+1 \notin (f,g)$: every element of $(f,g)$ vanishes at both $(0,1)$ and $(0,-1)$, whereas $y-1$ doesn’t vanish at $(0,-1)$ and $y+1$ doesn’t vanish at $(0,1)$. Geometrically, recall Lemma 2.5 (3). Also note that by definition $(f,g) = (f)+(g)$. Hence $V(f,g) = V(f)\cap V(g)$. But $V(f) \cap V(g)$ is clearly just two distinct points (the intersection of the line with the circle). Hence it is reducible, and by our theorem $(f,g)$ cannot be prime. (Strictly speaking the geometric argument also needs $I(V(f,g))=(f,g)$; this does hold here, as we’ll be able to check once we’ve discussed radical ideals.)
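If you want to see this reducibility concretely on a computer, here’s a minimal sketch in Python using sympy (my own illustration – any computer algebra system would do just as well). It simply solves the system $f=g=0$ and recovers the two points:

```python
from sympy import symbols, solve

x, y = symbols('x y')

f = x                  # the line x = 0
g = x**2 + y**2 - 1    # the unit circle

# The common zeros of f and g, i.e. the points of V(f, g) = V(f) ∩ V(g).
print(solve([f, g], [x, y]))  # [(0, -1), (0, 1)] -- two points, so V(f, g) is reducible
```

Two isolated points are visibly the union of two proper subvarieties, matching the geometric argument above.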
We can also use the theorem to exhibit a more informative example of a reducible variety. Consider $W = V(X^2Y - Y^2)$. Clearly $\mathfrak{a}=(X^2Y-Y^2)$ is not prime, for $Y(X^2 - Y) \in \mathfrak{a}$ but $Y\notin \mathfrak{a}, \ X^2 - Y \notin \mathfrak{a}$. Noting that $\mathfrak{a}=(X^2-Y)\cap (Y)$ we see that geometrically $W$ is the union of the $X$-axis and the parabola $Y=X^2$, by Lemma 2.5.

Having had such success with prime ideals and irreducible varieties, we might think – what about maximal ideals? Turns out that they have a role to play too. Note that maximal ideals are automatically prime, so any varieties they generate will certainly be irreducible.

Definition 3.9 An ideal $\mathfrak{m}$ of $R$ is said to be maximal if whenever $\mathfrak{m}\subset\mathfrak{a}\subset R$ either $\mathfrak{a} = \mathfrak{m}$ or $\mathfrak{a} = R$. Equivalently $\mathfrak{m}$ is maximal iff $R/\mathfrak{m}$ is a field.

Theorem 3.10 An affine variety $X$ in $\mathbb{A}^n$ is a point iff $I(X)$ is a maximal ideal.

Proof [“$\Rightarrow$“] Let $X = \{(a_1, \dots , a_n)\}$ be a single point. Then clearly $I(X) = (X_1-a_1,\dots ,X_n-a_n)$. But $k[\mathbb{A}^n]/I(X)$ is a field. Indeed $k[\mathbb{A}^n]/I(X)$ is isomorphic to $k$ itself, via the evaluation map $X_i \mapsto a_i$. Hence $I(X)$ is maximal.

[“$\Leftarrow$“] We’ll see this next time. In fact all we need to show is that the ideals $(X_1-a_1,\dots,X_n-a_n)$ are the only maximal ideals. $\blacksquare$

Theorems 3.8 and 3.10 are a promising start to our search for a dictionary between algebra and geometry. But they are unsatisfying in two ways. Firstly they tell us nothing about the behaviour of reducible affine varieties – a very large class! Secondly it is not obvious how to use 3.8 to construct irreducible varieties in general. Indeed there is an inherent asymmetry in our knowledge at present, as I shall now demonstrate.

Given an irreducible variety $X$ we can construct its ideal $I(X)$ and be sure it is prime, by Theorem 3.8. Moreover we know by Lemma 2.5 that $V(I(X))=X$, a pleasing correspondence. However, given a prime ideal $J$ we cannot immediately say that $V(J)$ is irreducible. For in Lemma 2.5 there was nothing to say that $I(V(J))=J$, so Theorem 3.8 is useless. We clearly need to find a set of ideals for which $I(V(J))=J$ holds, and hope that prime ideals are a subset of this. It turns out that such a condition is satisfied by a class called radical ideals. Next time we shall prove this, and demonstrate that radical ideals correspond exactly to algebraic varieties. This will provide us with the basic dictionary of algebraic geometry, allowing us to proceed to deeper results.

The remainder of this post shall be devoted to radical ideals, and the promised proof of an irreducible decomposition.

Definition 3.11 Let $J$ be an ideal in a ring $R$. We define the radical of $J$ to be the ideal $\sqrt{J}=\{f\in R : f^m\in J \ \textrm{for some} \ m\in \mathbb{N}\}$. We say that $J$ is a radical ideal if $J=\sqrt{J}$. (That $\sqrt{J}$ is a genuine ideal needs proof, but this is merely a trivial check of the axioms).

At first glance this appears to be a somewhat arbitrary definition, though the nomenclature should seem sensible enough. To get a more rounded perspective let’s introduce some other concepts that will become important later.

Definition 3.12 A polynomial function or regular function on an affine variety $X$ is a map $X\rightarrow k$ which is defined by the restriction of a polynomial in $k[\mathbb{A}^n]$ to $X$.
More explicitly it is a map $f:X\rightarrow k$ with $f(P)=F(P)$ for all $P\in X$, where $F\in k[\mathbb{A}^n]$ is some polynomial. These are eminently reasonable quantities to be interested in. In many ways they are the most obvious functions to define on affine varieties. Regular functions are the analogues of smooth functions in differential geometry, or continuous functions in topology. They are the canonical maps.

It is obvious that a regular function $f$ cannot in general uniquely define the polynomial $F$ giving rise to it. In fact suppose $f(P)=F(P)=G(P) \ \forall P \in X$. Then $F-G = 0$ on $X$ so $F-G\in I(X)$. This simple observation explains the implicit claim in the following definition.

Definition 3.13 Let $X$ be an affine variety. The coordinate ring $k[X]$ is the ring $k[\mathbb{A}^n]|_X=k[\mathbb{A}^n]/I(X)$. In other words the coordinate ring is the ring of all regular functions on $X$.

This definition should also appear logical. Indeed we define the space of continuous functions in topology and the space of smooth functions in differential geometry. The coordinate ring is merely the same notion in algebraic geometry. The name ‘coordinate ring’ arises since clearly $k[X]$ is generated by the coordinate functions $x_1,\dots ,x_n$ restricted to $X$. The reason for our notation $k[x_1,\dots ,x_n]=k[\mathbb{A}^n]$ should now be obvious. Note that the coordinate ring is trivially a finitely generated $k$-algebra. The coordinate ring might seem a little useless at present. We’ll see in a later post that it has a vital role in allowing us to apply our dictionary of algebra and geometry to subvarieties. To avoid confusion we’ll stick to $k[\mathbb{A}^n]$ for the near future.

The reason for introducing coordinate rings was to link them to radical ideals. We’ll do this via two final definitions.

Definition 3.14 An element $x$ of a ring $R$ is called nilpotent if $\exists$ some positive integer $n$ s.t. $x^n=0$.

Definition 3.15 A ring $R$ is reduced if $0$ is its only nilpotent element.

Lemma 3.16 $R/I$ is reduced iff $I$ is radical.

Proof Suppose $I$ is radical, and let $x+I$ be a nilpotent element of $R/I$, i.e. $x^n + I = 0 + I$ for some $n$. Then $x^n \in I$, so by definition $x\in \sqrt{I}=I$, i.e. $x+I = 0+I$. Hence $R/I$ is reduced. Conversely suppose $R/I$ is reduced, and let $x\in R$ s.t. $x^m \in I$. Then $x^m + I = 0 + I$, so $x+I$ is nilpotent in $R/I$, whence $x+I = 0+I$, i.e. $x \in I$. Thus $I=\sqrt{I}$. $\blacksquare$

Putting this all together we immediately see that the coordinate ring $k[X]$ is a reduced, finitely generated $k$-algebra. That is, provided we assume that for an affine variety $X$, $I(X)$ is radical, which we’ll prove next time. It’s useful to quickly see that these properties characterise coordinate rings of varieties. In fact given any reduced, finitely generated $k$-algebra $A$ we can construct a variety $X$ with $k[X]=A$ as follows. Write $A=k[a_1,\dots ,a_n]$ and define a surjective homomorphism $\pi:k[\mathbb{A}^n]\rightarrow A, \ x_i\mapsto a_i$. Let $I=\textrm{ker}(\pi)$ and $X=V(I)$. By the isomorphism theorem $A \cong k[\mathbb{A}^n]/I$, so $I$ is radical since $A$ is reduced. But then by our theorem next time $X$ is an affine variety with coordinate ring $A$.

We’ve come a long way in this post, and congratulations if you’ve stayed with me through all of it! Let’s survey the landscape. In the background we have abstract algebra – systems of equations whose solutions we want to study. In the foreground are our geometrical ideas – affine varieties which represent solutions to the equations. These varieties are built out of irreducible blocks, like Lego. We can match up ideals and varieties according to various criteria: prime ideals correspond to irreducible varieties (Theorem 3.8), maximal ideals to points (Theorem 3.10), and radical ideals will correspond to affine varieties in general (next time).
We can also study maps from geometrical varieties down to the ground field using the coordinate ring. Before I go here’s the promised proof that irreducible varieties really are the building blocks we’ve been talking about.

Theorem 3.17 Every affine variety $X$ has a unique decomposition as $X_1\cup\dots\cup X_n$ up to ordering, where the $X_i$ are irreducible components and $X_i\not\subset X_j$ for $i\neq j$.

Proof (Existence) An affine variety $X$ is either irreducible or $X=Y\cup Z$ with $Y,\ Z$ proper subvarieties of $X$. We similarly may decompose $Y$ and $Z$ if they are reducible, and so on. We claim that this process stops after finitely many steps. Suppose otherwise; then $X$ contains an infinite strictly decreasing sequence of subvarieties $X\supsetneq X_1 \supsetneq X_2 \supsetneq \dots$. By Lemma 2.5 (5) & (7) we have $I(X)\subsetneq I(X_1) \subsetneq I(X_2) \subsetneq \dots$. But $k[\mathbb{A}^n]$ is a Noetherian ring by Hilbert’s Basis Theorem, and this contradicts the ascending chain condition! To satisfy the $X_i \not\subset X_j$ condition we simply remove any such $X_i$ that exist in the decomposition we’ve found.

(Uniqueness) Suppose we have another decomposition $X=Y_1\cup\dots\cup Y_m$ with $Y_i\not\subset Y_j$ for $i\neq j$. Then $X_i = X_i\cap X = \bigcup_{j=1}^{m}( X_i\cap Y_j)$. Since $X_i$ is irreducible we must have $X_i\cap Y_j = X_i$ for some $j$. In particular $X_i \subset Y_j$. But now by doing the same with the $X$ and $Y$ reversed we find $X_k$ with $X_i \subset Y_j \subset X_k$. Since $X_i\not\subset X_k$ for $i\neq k$, this forces $i=k$ and $Y_j = X_i$. But $i$ was arbitrary, so we are done. $\blacksquare$

If you’re interested in calculating some specific examples of ideals and their associated varieties have a read about Groebner Bases. This will probably become a topic for a post at some point, loosely based on the ideas in Hassett’s excellent book. This question is also worth a skim. I leave you with this enlightening MathOverflow discussion, tackling the irreducibility of polynomials in two variables. Although some of the material is a tad dense, it’s nevertheless interesting, and may be a useful future reference!

# Invariant Theory and David Hilbert

Health warning: this post is part of a more advanced series on commutative algebra. It may be a little tricky for the layman to understand!

David Hilbert was perhaps the greatest mathematician of the late 19th century. Much of his work laid the foundations for our modern study of commutative algebra. In doing so, he was sometimes said to have killed the study of invariants by solving the central problem in the field. In this post I’ll give a sketch of how he did so.

Motivated by Galois Theory we ask the following question. Given a polynomial ring $S = k[x_1,\dots,x_n]$ and a group $G$ acting on $S$ by $k$-automorphisms, what are the elements of $S$ that are invariant under the action of $G$? Following familiar notation we denote this set $S^G$ and note that it certainly forms a subalgebra of $S$.

In the late 19th century it was found that $S^G$ could be described fully by a finite set of generators for several suggestive special cases of $G$. It soon became clear that the fundamental problem of invariant theory was to find necessary and sufficient conditions for $S^G$ to be finitely generated. Hilbert’s contribution was an incredibly general sufficient condition, as we shall soon see.

To begin with we shall recall the alternative definition of a Noetherian ring.
It is a standard proof that this definition is equivalent to that which invokes the ascending chain condition on ideals. As an aside, also recall that the ascending chain condition can be restated by saying that every nonempty collection of ideals has a maximal element.

Definition A.1 A ring $R$ is Noetherian if every ideal of $R$ is finitely generated.

We shall also recall without proof Hilbert’s Basis Theorem, and draw an easy corollary.

Theorem A.2 If $R$ Noetherian then $R[x]$ Noetherian.

Corollary A.3 If $S$ is a finitely generated algebra over $R$, with $R$ Noetherian, then $S$ Noetherian.

Proof We’ll first show that any homomorphic image of $R$ is Noetherian. Let $I$ be an ideal in the image under the homomorphism $f$. Then $f^{-1}(I)$ is an ideal in $R$. Indeed if $a\in f^{-1}(I)$ and $r\in R$ then $f(ra)=f(r)f(a)\in I$, so $ra \in f^{-1}(I)$. Hence $f^{-1}(I)$ is finitely generated, so certainly $I$ is finitely generated, by the images of the generators of $f^{-1}(I)$.

Now we’ll prove the corollary. Since $S$ is a finitely generated algebra over $R$, $S$ is a homomorphic image of $R[x_1,\dots,x_n]$ for some $n$, by the obvious homomorphism that takes each $x_i$ to a generator of $S$. By Theorem A.2 and induction we know that $R[x_1,\dots,x_n]$ is Noetherian. But then by the above, $S$ is Noetherian. $\blacksquare$

Since we’re on the subject of Noetherian things, it’s probably worthwhile introducing the concept of a Noetherian module. The subsequent theorem is analogous to A.3 for general modules. This question ensures that the theorem has content.

Definition A.4 An $R$-module $M$ is Noetherian if every submodule $N$ is finitely generated, that is, if every element of $N$ can be written as an $R$-linear combination of some generators $\{f_1,\dots,f_n\}\subset N$.

Theorem A.5 If $R$ Noetherian and $M$ a finitely generated $R$-module then $M$ Noetherian.

Proof Suppose $M$ generated by $f_1,\dots,f_t$, and let $N$ be a submodule. We show $N$ finitely generated by induction on $t$. If $t=1$ then clearly the map $h:R\rightarrow M$ defined by $1\mapsto f_1$ is surjective. Then the preimage of $N$ is an ideal, just as in A.3, so is finitely generated. Hence $N$ is finitely generated by the images of the generators of $h^{-1}(N)$. (*)

Now suppose $t>1$. Consider the quotient map $h:M \to M/Rf_1$. Let $\tilde{N}$ be the image of $N$ under this map. Then by the induction hypothesis $\tilde{N}$ is finitely generated as it is a submodule of $M/Rf_1$. Let $g_1,\dots,g_s$ be elements of $N$ whose images generate $\tilde{N}$. Since $Rf_1$ is a submodule of $M$ generated by a single element, we have by (*) that its submodule $Rf_1\cap N$ is finitely generated, by $h_1,\dots,h_r$ say.

We claim that $\{g_1,\dots,g_s,h_1,\dots,h_r\}$ generate $N$. Indeed given $n \in N$, its image under $h$ is a linear combination of the images of the $g_i$. Hence subtracting the relevant linear combination of the $g_i$ from $n$ produces an element of $N \cap Rf_1$, which is precisely a linear combination of the $h_i$ by construction. This completes the induction. $\blacksquare$

We’re now ready to talk about the concrete problem that Hilbert solved using these ideas, namely the existence of finite bases for invariants. We’ll take $k$ to be a field of characteristic $0$ and $G$ to be a finite group, or one of the linear groups $\textrm{GL}_n(k),\ \textrm{SL}_n(k)$. As in our notation above, we take $S=k[x_1,\dots,x_n]$.
Suppose also we are given a group homomorphism $\phi:G \to \textrm{GL}_r(k)$, which of course can naturally be seen as the group of invertible linear transformations of the vector space $V$ over $k$ with basis $x_1,\dots,x_r$. This is in fact the definition of a representation of $G$ on the vector space $V$. As is common practice in representation theory, we view $G$ as acting on $V$ via $(g,v)\mapsto \phi(g)v$.

If $G$ is $\textrm{SL}_n(k)$ or $\textrm{GL}_n(k)$ we shall further suppose that our representation of $G$ is rational. That is, the matrices $g \in G$ act on $V$ as matrices whose entries are rational functions in the entries of $g$. (If you’re new to representation theory like me, you might want to read that sentence twice!)

We now extend the action of $g\in G$ from $V$ to the whole of $S$ by defining $(g,f)\mapsto f(g^{-1}(x_1),\dots,g^{-1}(x_r),x_{r+1},\dots,x_n)$. Thus we may view $G$ as an automorphism group of $S$. The invariants under $G$ are those polynomials left unchanged by the action of every $g \in G$, and these form a subring of $S$ which we’ll denote $S^G$.

Enough set up. To proceed to more interesting territory we’ll need to make another definition.

Definition A.6 A polynomial is called homogeneous, a homogeneous form, or merely a form, if each of its monomials with nonzero coefficient has the same total degree.

Hilbert noticed that the following totally obvious fact about $S^G$ was critically important to the theory of invariants. We may write $S^G$ as a direct sum of the vector spaces $R_i$ of homogeneous forms of degree $i$ that are invariant under $G$. We say that $S^G$ may be graded by degree and use this to motivate our next definition.

Definition A.7 A graded ring is a ring $R$ together with a direct sum decomposition as abelian groups $R = R_0 \oplus R_1 \oplus \dots$, such that $R_i R_j \subset R_{i+j}$.

This allows us to generalise our notion of homogeneous also.

Definition A.8 A homogeneous element of a graded ring $R$ is an element of one of the groups $R_i$. A homogeneous ideal of $R$ is an ideal generated by homogeneous elements. Be warned that clearly homogeneous ideals may contain many inhomogeneous elements!

It’s worth mentioning that there was no special reason for taking $\mathbb{N}$ as our indexing set for the $R_i$. We can generalise this easily to $\mathbb{Z}$, and such graded rings are often called $\mathbb{Z}$-graded rings. We won’t need this today, however.

Note that if $f \in R$ we have a unique expression for $f$ of the form $f = f_0 + f_1 + \dots + f_n$ with $f_i \in R_i$. (I have yet to convince myself why this terminates generally, any thoughts? I’ve also asked here.) We call the $f_i$ the homogeneous components of $f$.

The next definition is motivated by algebraic geometry, specifically the study of projective varieties. When we arrive at these in the main blog (probably towards the end of this month) it shall make a good deal more sense!

Definition A.9 The ideal in a graded ring $R$ generated by all forms of degree greater than $0$ is called the irrelevant ideal and denoted $R_+$.

Now we return to our earlier example. We may grade the polynomial ring $S=k[x_1,\dots,x_n]$ by degree. In other words we write $S=S_0\oplus S_1 \oplus \dots$ with $S_i$ containing all the forms (homogeneous polynomials) of degree $i$.

To see how graded rings are subtly useful, we’ll draw a surprisingly powerful lemma.

Lemma A.10 Let $I$ be a homogeneous ideal of a graded ring $R$, with $I$ generated by $f_1,\dots,f_r$. Let $f\in I$ be a homogeneous element.
Then we may write $f = \sum f_i g_i$ with $g_i$ homogeneous of degree $\textrm{deg}(f)-\textrm{deg}(f_i)$.

Proof We can certainly write $f = \sum f_i G_i$ with $G_i \in R$. Take $g_i$ to be the homogeneous components of $G_i$ of degree $\textrm{deg}(f)-\textrm{deg}(f_i)$. Then all other terms in the sum must cancel, for $f$ is homogeneous by assumption. $\blacksquare$

Now we return to our attempt to emulate Hilbert. We saw earlier that he spotted that grading $S^G$ by degree may be useful. His second observation was this: there exists a map $\phi:S\to S^G$ of $S^G$-modules s.t. (1) $\phi$ preserves degrees and (2) $\phi$ fixes every element of $S^G$. It is easy to see that this abstract concept corresponds intuitively to the condition that $S^G$ be a summand of the graded ring $S$.

This is trivial to see in the case that $G$ is a finite group. Indeed let $\phi (f) = \frac{1}{|G|}\sum_{g\in G} g(f)$. Note that we have implicitly used that $k$ has characteristic zero to ensure that the multiplicative inverse to $|G|$ exists. In the case that $G$ is a linear group acting rationally, then the technique is to replace the sum by an integral. The particulars of this are well beyond the scope of this post however!

We finally prove the following general theorem. We immediately get Hilbert’s result on the finite generation of classes of invariants by taking $R=S^G$.

Theorem A.11 Take $k$ a field and $S=k[x_1,\dots,x_n]$ a polynomial ring graded by degree. Let $R$ be a $k$-subalgebra of $S$. Suppose $R$ is a summand of $S$, in the sense described above. Then $R$ is finitely generated as a $k$-algebra.

Proof Let $I\subset R$ be the ideal of $R$ generated by all homogeneous elements of degree $> 0$. By the Basis Theorem $S$ is Noetherian, and $IS$ is an ideal of $S$, so finitely generated. By splitting each generator into its homogeneous components we may assume that $IS$ is generated by some homogeneous elements $f_1,\dots,f_s$ which we may wlog assume lie in $I$. We’ll prove that these elements precisely generate $R$ as a $k$-algebra.

Now let $R'$ be the $k$-subalgebra of $S$ generated by $f_1,\dots,f_s$ and take $f\in R$ a general homogeneous polynomial. Suppose we have shown $f\in R'$. Let $g$ be a general element of $R$. Then certainly $g\in S$ is a sum of homogeneous components. But $R$ is a summand of $S$, so applying the given map $\phi$ we see that the homogeneous components of $g$ are wlog in $R$. Thus $g\in R'$ also, and we are done.

It only remains to prove $f \in R'$, which we’ll do by induction on the degree of $f$. If $\textrm{deg}(f)=0$ then $f\in k\subset R'$. Suppose $\textrm{deg}(f)>0$ so $f\in I$. Since the $f_i$ generate $IS$ as a homogeneous ideal of $S$ we may write $f = \sum f_i g_i$ with $g_i$ homogeneous of degree $\textrm{deg}(f)-\textrm{deg}(f_i)<\textrm{deg}(f)$ by Lemma A.10. But again we may use the map $\phi$ obtained from our observation that $R$ is a summand of $S$. Indeed then $f=\sum \phi(g_i)f_i$ for $f,\ f_i \in R$. But $\phi$ preserves degrees so $\phi(g_i)$ is of lower degree than $f$. Thus by the induction hypothesis $\phi(g_i) \in R'$ and hence $f\in R'$ as required. $\blacksquare$

It’s worth noting that such an indirect proof caused quite a furore when it was originally published in the late 19th century. However the passage of time has provided us with a broader view of commutative algebra, and techniques such as this are much more acceptable to modern tastes! Nevertheless I shall finish by making explicit two facts that help to explain the success of our argument.
We’ll first remind ourselves of a useful definition of an algebra.

Definition A.12 An $R$-algebra $S$ is a ring $S$ which has the compatible structure of a module over $R$ in such a way that ring multiplication is $R$-bilinear.

It’s worth checking that this intuitive definition completely agrees with the one we provided in the Background section, as is clearly outlined on the Wikipedia page. The following provide an extension and converse to Corollary A.3 (that finitely generated algebras over fields are Noetherian) in the special case that $R$ is a graded ring.

Lemma A.13 $S=R_0\oplus R_1 \oplus \dots$ is a Noetherian graded ring iff $R_0$ is Noetherian and $S$ is a finitely generated $R_0$-algebra.

Lemma A.14 Let $S$ be a Noetherian graded ring, $R$ a summand of $S$. Then $R$ is Noetherian.

We’ll prove these both next time. Note that they certainly aren’t true in general when $S$ isn’t graded!
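To see Hilbert’s averaging map in action, here is a minimal computational sketch in Python with sympy (my own illustration, not part of the original post). It applies the Reynolds operator $\phi(f)=\frac{1}{|G|}\sum_{g\in G}g(f)$ in the simplest case, $G=S_2$ acting on $k[x,y]$ by swapping the variables, where $S^G$ is the ring of symmetric polynomials:

```python
from sympy import Rational, expand, simplify, symbols

x, y = symbols('x y')

def swap(f):
    # The nontrivial element of S_2: exchange x and y simultaneously.
    return f.subs({x: y, y: x}, simultaneous=True)

def reynolds(f):
    # phi(f) = (1/|G|) * sum of g(f) over g in G; here |G| = 2.
    # This uses that k has characteristic zero, so 1/2 exists.
    return expand(Rational(1, 2) * (f + swap(f)))

f = x**2 * y
g = reynolds(f)                        # x**2*y/2 + x*y**2/2, a symmetric polynomial
assert simplify(g - swap(g)) == 0      # phi(f) is G-invariant
assert simplify(reynolds(g) - g) == 0  # phi fixes elements of S^G, property (2)
print(g)
```

The same shape of computation works for any finite group with $|G|$ invertible in $k$; for the linear groups the sum is replaced by an integral, as noted above.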
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 873, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.921130359172821, "perplexity": 277.93976921091735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00625.warc.gz"}
http://mathhelpforum.com/algebra/118066-algebra-2-word-problem-2-a-print.html
Algebra 2 Word Problem #2 • Dec 2nd 2009, 02:42 PM kelsikels Algebra 2 Word Problem #2 1.) The speed of a stream is 5 mph. If a boat travels 82 miles downstream in the same time that it takes to travel 41 miles upstream, what is the speed of the boat in still water? THANK YOU! • Dec 2nd 2009, 02:48 PM skeeter Quote: Originally Posted by kelsikels 1.) The speed of a stream is 5 mph. If a boat travels 82 miles downstream in the same time that it takes to travel 41 miles upstream, what is the speed of the boat in still water? THANK YOU! basic idea for setting up equations is (rate)(time) = distance let $v$ = speed of the boat in mph with no current downstream ... $(v+5)t = 82$ upstream ... $(v-5)t = 41$ solve the system for $v$
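One way to finish from here (a sketch of the remaining algebra): subtracting the upstream equation from the downstream one gives $10t = 41$, so $t = 4.1$ hours, and then $v - 5 = 41/4.1 = 10$, i.e. $v = 15$ mph. A quick check with sympy, assuming you have it installed:

```python
from sympy import symbols, solve

v, t = symbols('v t', positive=True)
# downstream: (v + 5) t = 82,  upstream: (v - 5) t = 41
print(solve([(v + 5)*t - 82, (v - 5)*t - 41], [v, t]))  # [(15, 41/10)]
```

Both legs take the same 4.1 hours, consistent with the problem statement.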
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8536911606788635, "perplexity": 1250.603054794017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00121-ip-10-145-167-34.ec2.internal.warc.gz"}
http://blekko.com/wiki/Coherence_(physics)?source=672620ff
# Coherence (physics)

In physics, coherence is an ideal property of waves that enables stationary (i.e. temporally and spatially constant) interference. It contains several distinct concepts, which are limit cases that never occur in reality but allow an understanding of the physics of waves, and has become a very important concept in quantum physics. More generally, coherence describes all properties of the correlation between physical quantities of a single wave, or between several waves or wave packets.

Interference is nothing more than the addition, in the mathematical sense, of wave functions. In quantum mechanics, a single wave can interfere with itself, but this is due to its quantum behavior and is still an addition of two waves (see Young's slits experiment). This implies that constructive or destructive interferences are limit cases, and that waves can always interfere, even if the result of the addition is complicated or not remarkable. When interfering, two waves can add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave of lesser amplitude than either one (destructive interference), depending on their relative phase. Two waves are said to be coherent if they have a constant relative phase. The degree of coherence is measured by the interference visibility, a measure of how perfectly the waves can cancel due to destructive interference.

Spatial coherence describes the correlation between waves at different points in space. Temporal coherence describes the correlation or predictable relationship between waves observed at different moments in time. Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson–Morley experiment, when one of the mirrors is moved away gradually, the time for the beam to travel increases and the fringes become dull and finally are lost, showing temporal coherence. Similarly, if in Young's double slit experiment the space between the two slits is increased, the coherence dies gradually and finally the fringes disappear, showing spatial coherence.

## Introduction

Coherence was originally conceived in connection with Thomas Young's double-slit experiment in optics but is now used in any field that involves waves, such as acoustics, electrical engineering, neuroscience, and quantum mechanics. The property of coherence is the basis for commercial applications such as holography, the Sagnac gyroscope, radio antenna arrays, optical coherence tomography and telescope interferometers (astronomical optical interferometers and radio telescopes).

## Coherence and correlation

The coherence of two waves follows from how well correlated the waves are as quantified by the cross-correlation function.[1][2][3][4][5] The cross-correlation quantifies the ability to predict the value of the second wave by knowing the value of the first. As an example, consider two waves perfectly correlated for all times. At any time, if the first wave changes, the second will change in the same way. If combined they can exhibit complete constructive interference/superposition at all times, then it follows that they are perfectly coherent. As will be discussed below, the second wave need not be a separate entity. It could be the first wave at a different time or position. In this case, the measure of correlation is the autocorrelation function (sometimes called self-coherence). The degree of correlation is quantified by correlation functions.
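As a concrete illustration of coherence as normalized correlation, here is a minimal numerical sketch in Python (my own example, not from the article; the normalization follows the standard first-order degree of coherence, $|\langle E_1^* E_2\rangle|$ divided by the root of the product of the mean intensities):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)

def degree_of_coherence(e1, e2):
    # |<E1* E2>| / sqrt(<|E1|^2> <|E2|^2>): 1 for a constant relative phase, 0 if uncorrelated.
    num = np.abs(np.mean(np.conj(e1) * e2))
    den = np.sqrt(np.mean(np.abs(e1) ** 2) * np.mean(np.abs(e2) ** 2))
    return num / den

wave = np.exp(2j * np.pi * 50 * t)                     # monochromatic wave
copy = np.exp(2j * np.pi * 50 * t + 0.3j)              # same wave with a fixed phase offset
noise = np.exp(2j * np.pi * rng.uniform(size=t.size))  # a random phase at every instant

print(degree_of_coherence(wave, copy))   # ~1.0: constant relative phase, fully coherent
print(degree_of_coherence(wave, noise))  # ~0.0: random relative phase, incoherent
```

The monochromatic wave and its phase-shifted copy give a value near 1, while the random-phase signal gives a value near 0, matching the limit cases described above.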
## Examples of wave-like states

These states are unified by the fact that their behavior is described by a wave equation or some generalization thereof. In most of these systems, one can measure the wave directly. Consequently, its correlation with another wave can simply be calculated. However, in optics one cannot measure the electric field directly as it oscillates much faster than any detector's time resolution.[6] Instead, we measure the intensity of the light. Most of the concepts involving coherence which will be introduced below were developed in the field of optics and then used in other fields. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly.

## Temporal coherence

Figure 1: The amplitude of a single frequency wave as a function of time t (red) and a copy of the same wave delayed by τ (green). The coherence time of the wave is infinite since it is perfectly correlated with itself for all delays τ.

Figure 2: The amplitude of a wave whose phase drifts significantly in time τc as a function of time t (red) and a copy of the same wave delayed by 2τc (green). At any particular time t the wave can interfere perfectly with its delayed copy. But, since half the time the red and green waves are in phase and half the time out of phase, when averaged over t any interference disappears at this delay.

Temporal coherence is the measure of the average correlation between the value of a wave and itself delayed by τ, at any pair of times. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by a significant amount) is defined as the coherence time τc. At τ=0 the degree of coherence is perfect, whereas it drops significantly by delay τc. The coherence length Lc is defined as the distance the wave travels in time τc. One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below).

### The relationship between coherence time and bandwidth

It can be shown that the faster a wave decorrelates (and hence the smaller τc is) the larger the range of frequencies Δf the wave contains. Thus there is a tradeoff: $\tau_c \Delta f \approx 1$. Formally, this follows from the convolution theorem in mathematics, which relates the Fourier transform of the power spectrum (the intensity of each frequency) to its autocorrelation.

### Examples of temporal coherence

We consider four examples of temporal coherence.

• A wave containing only a single frequency (monochromatic) is perfectly correlated at all times according to the above relation. (See Figure 1)
• Conversely, a wave whose phase drifts quickly will have a short coherence time. (See Figure 2)
• Similarly, pulses (wave packets) of waves, which naturally have a broad range of frequencies, also have a short coherence time since the amplitude of the wave changes quickly. (See Figure 3)
• Finally, white light, which has a very broad range of frequencies, is a wave which varies quickly in both amplitude and phase. Since it consequently has a very short coherence time (just 10 periods or so), it is often called incoherent.

Monochromatic sources are usually lasers; such high monochromaticity implies long coherence lengths (up to hundreds of meters).
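As a rough worked example (a back-of-envelope sketch; the exact prefactors depend on the spectral line shape): taking $\tau_c \approx 1/\Delta f$ gives a coherence length $L_c = c\tau_c \approx c/\Delta f$, or equivalently $L_c \approx \lambda^2/\Delta\lambda$. A laser line narrowed to Δf ≈ 1 MHz then has Lc ≈ (3 × 10^8 m/s)/(10^6 Hz) = 300 m, while an LED with λ ≈ 600 nm and Δλ ≈ 50 nm manages only Lc ≈ (600 nm)²/(50 nm) ≈ 7 μm.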
For example, a stabilized and monomode helium–neon laser can easily produce light with coherence lengths of 300 m.[7] Not all lasers are monochromatic, however (e.g. for a mode-locked Ti-sapphire laser, Δλ ≈ 2 nm - 70 nm). LEDs are characterized by Δλ ≈ 50 nm, and tungsten filament lights exhibit Δλ ≈ 600 nm, so these sources have shorter coherence times than the most monochromatic lasers. Holography requires light with a long coherence time. In contrast, optical coherence tomography uses light with a short coherence time.

### Measurement of temporal coherence

Figure 3: The amplitude of a wavepacket whose amplitude changes significantly in time τc (red) and a copy of the same wave delayed by 2τc (green) plotted as a function of time t. At any particular time the red and green waves are uncorrelated; one oscillates while the other is constant and so there will be no interference at this delay. Another way of looking at this is the wavepackets are not overlapped in time and so at any particular time there is only one nonzero field so no interference can occur.

Figure 4: The time-averaged intensity (blue) detected at the output of an interferometer plotted as a function of delay τ for the example waves in Figures 2 and 3. As the delay is changed by half a period, the interference switches between constructive and destructive. The black lines indicate the interference envelope, which gives the degree of coherence. Although the waves in Figures 2 and 3 have different time durations, they have the same coherence time.

In optics, temporal coherence is measured in an interferometer such as the Michelson interferometer or Mach–Zehnder interferometer. In these devices, a wave is combined with a copy of itself that is delayed by time τ. A detector measures the time-averaged intensity of the light exiting the interferometer. The resulting interference visibility (e.g. see Figure 4) gives the temporal coherence at delay τ. Since for most natural light sources, the coherence time is much shorter than the time resolution of any detector, the detector itself does the time averaging. Consider the example shown in Figure 3. At a fixed delay, here 2τc, an infinitely fast detector would measure an intensity that fluctuates significantly over a time t equal to τc. In this case, to find the temporal coherence at 2τc, one would manually time-average the intensity.

## Spatial coherence

In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability for two points in space, x1 and x2, in the extent of a wave to interfere, when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only one value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference is called the coherence area, Ac. This is the relevant type of coherence for the Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes. Sometimes people also use "spatial coherence" to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.

### Examples of spatial coherence

Consider a tungsten light-bulb filament. Different points in the filament emit light independently and have no fixed phase-relationship.
In detail, at any point in time the profile of the emitted light will be distorted, and the profile changes randomly over the coherence time $\tau_c$. Since for a white-light source such as a light-bulb $\tau_c$ is small, the filament is considered a spatially incoherent source. In contrast, a radio antenna array has large spatial coherence because antennas at opposite ends of the array emit with a fixed phase-relationship. Light waves produced by a laser often have high temporal and spatial coherence (though the degree of coherence depends strongly on the exact properties of the laser). Spatial coherence of laser beams also manifests itself as speckle patterns and diffraction fringes seen at the edges of a shadow.

Holography requires temporally and spatially coherent light. Its inventor, Dennis Gabor, produced successful holograms more than ten years before lasers were invented. To produce coherent light he passed the monochromatic light from an emission line of a mercury-vapor lamp through a pinhole spatial filter.

In February 2011, Dr Andrew Truscott, leader of a research team at the ARC Centre of Excellence for Quantum-Atom Optics at Australian National University in Canberra, Australian Capital Territory, showed that helium atoms cooled to near absolute zero (a Bose–Einstein condensate state) can be made to flow and behave as a coherent beam, as occurs in a laser.[8][9]

## Spectral coherence

Figure 10: Waves of different frequencies (i.e. colors) interfere to form a pulse if they are coherent.

Figure 11: Spectrally incoherent light interferes to form continuous light with a randomly varying phase and amplitude.

Waves of different frequencies (in light these are different colours) can interfere to form a pulse if they have a fixed relative phase-relationship (see Fourier transform). Conversely, if waves of different frequencies are not coherent, then, when combined, they create a wave that is continuous in time (e.g. white light or white noise). The temporal duration of the pulse $\Delta t$ is limited by the spectral bandwidth of the light $\Delta f$ according to $\Delta f\Delta t \ge 1$, which follows from the properties of the Fourier transform and results in Küpfmüller's uncertainty principle (for quantum particles it also results in the Heisenberg uncertainty principle). If the phase depends linearly on the frequency (i.e. $\theta (f) \propto f$) then the pulse will have the minimum time duration for its bandwidth (a transform-limited pulse), otherwise it is chirped (see dispersion).

### Measurement of spectral coherence

Measurement of the spectral coherence of light requires a nonlinear optical interferometer, such as an intensity optical correlator, frequency-resolved optical gating (FROG), or spectral phase interferometry for direct electric-field reconstruction (SPIDER).

## Polarization coherence

Light also has a polarization, which is the direction in which the electric field oscillates. Unpolarized light is composed of incoherent light waves with random polarization angles. The electric field of the unpolarized light wanders in every direction and changes in phase over the coherence time of the two light waves. An absorbing polarizer rotated to any angle will always transmit half the incident intensity when averaged over time. If the electric field wanders by a smaller amount, the light will be partially polarized so that at some angle the polarizer will transmit more than half the intensity.
If a wave is combined with an orthogonally polarized copy of itself delayed by less than the coherence time, partially polarized light is created. The polarization of a light beam is represented by a vector in the Poincaré sphere. For polarized light the end of the vector lies on the surface of the sphere, whereas the vector has zero length for unpolarized light. The vector for partially polarized light lies within the sphere.

## Applications

### Holography

Coherent superpositions of optical wave fields include holography. Holographic objects are used frequently in daily life in bank notes and credit cards.

### Non-optical wave fields

Further applications concern the coherent superposition of non-optical wave fields. In quantum mechanics, for example, one considers a probability field, which is related to the wave function $\psi (\mathbf r)$ (interpretation: density of the probability amplitude). Here the applications concern, among others, the future technologies of quantum computing and the already available technology of quantum cryptography. Additionally, the problems of the following subchapter are treated.

## Quantum coherence

In quantum mechanics, all objects have wave-like properties (see de Broglie waves). For instance, in Young's double-slit experiment electrons can be used in the place of light waves. Each electron's wave-function goes through both slits, and hence has two separate split-beams that contribute to the intensity pattern on a screen. According to standard wave theory (Fresnel, Huygens), these two contributions give rise to an intensity pattern of bright bands due to constructive interference, interlaced with dark bands due to destructive interference, on a downstream screen. (Each split-beam, by itself, generates a diffraction pattern with less noticeable, more widely spaced dark and light bands.) This ability to interfere and diffract is related to coherence (classical or quantum) of the wave. The association of an electron with a wave is unique to quantum theory.

When the incident beam is represented by a quantum pure state, the split beams downstream of the two slits are represented as a superposition of the pure states representing each split beam. (This has nothing to do with two particles or Bell's inequalities relevant to an entangled state: a 2-body state, a kind of coherence between two 1-body states.) The quantum description of imperfectly coherent paths is called a mixed state. A perfectly coherent state has a density matrix (also called the "statistical operator") that is a projection onto the pure coherent state, while a mixed state is described by a classical probability distribution for the pure states that make up the mixture.

Large-scale (macroscopic) quantum coherence leads to novel phenomena, the so-called macroscopic quantum phenomena. For instance, the laser, superconductivity and superfluidity are examples of highly coherent quantum systems whose effects are evident at the macroscopic scale. The macroscopic quantum coherence (Off-Diagonal Long-Range Order, ODLRO) [Penrose & Onsager (1957), C. N. Yang (1962)] for laser light and superfluidity is related to first-order (1-body) coherence/ODLRO, while superconductivity is related to second-order coherence/ODLRO. (For fermions, such as electrons, only even orders of coherence/ODLRO are possible.) Superfluidity in liquid He4 is related to a partial Bose–Einstein condensate. Here, the condensate portion is described by a multiply-occupied single-particle state
[e.g., Cummings & Johnston (1966)].

Regarding the occurrence of quantum coherence at a macroscopic level, it is interesting to note that the classical electromagnetic field also exhibits macroscopic quantum coherence. The most obvious example is the carrier signal for radio and TV, which satisfies Glauber's quantum description of coherence.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9156023859977722, "perplexity": 643.3527386270264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122087108.30/warc/CC-MAIN-20150124175447-00008-ip-10-180-212-252.ec2.internal.warc.gz"}
https://byjus.com/mass-flow-rate-formula/
# Mass Flow Rate Formula

## Mass Flow Rate Formula

The mass flow rate is the mass of a liquid substance passing per unit time. In other words, the mass flow rate is defined as the rate of movement of liquid mass through a unit area. The mass flow depends directly on the density and velocity of the liquid and the area of cross section. It is the movement of mass per unit time. The mass flow rate is denoted by m and has units of kg/s.

The mass flow formula is given by

$m = \rho V A$

Where,

ρ = density of fluid,
V = velocity of the liquid, and
A = area of cross section

Example 1

Determine the mass flow rate of a given fluid whose density is 700 kg/m3, and whose velocity and area of cross section are 20 m/s and 20 cm2 respectively.

Solution:

Given values are,
ρ = 700 kg/m3, V = 20 m/s, and A = 20 cm2 = 0.002 m2

The mass flow rate formula is given by,
m = ρVA = 700 × 20 × 0.002 = 28 kg/s

Example 2

The density and velocity of a given fluid are 900 kg/m3 and 5 m/s. If this fluid flows through an area of 20 cm2, determine the mass flow rate.

Solution:

Given values are,
ρ = 900 kg/m3, V = 5 m/s, and A = 20 cm2 = 0.002 m2

The mass flow rate formula is given by,
m = ρVA = 900 × 5 × 0.002 = 9 kg/s
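To make the worked examples easy to check, here is a minimal sketch (my own addition, not from the original page) of the same ṁ = ρVA calculation, including the cm²-to-m² conversion the examples rely on:

```python
# Mass flow rate m_dot = rho * V * A, with the cross-section given in cm^2.
def mass_flow_rate(rho_kg_m3, v_m_s, area_cm2):
    area_m2 = area_cm2 * 1e-4           # 1 cm^2 = 1e-4 m^2
    return rho_kg_m3 * v_m_s * area_m2  # kg/s

print(mass_flow_rate(700, 20, 20))  # Example 1 -> 28.0 kg/s
print(mass_flow_rate(900, 5, 20))   # Example 2 -> 9.0 kg/s
```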
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851858615875244, "perplexity": 1572.7047383302308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00021.warc.gz"}
https://astarmathsandphysics.com/ib-physics-notes/thermal-physics/1462-evaporation.html
## Evaporation

Evaporation is the process by which a liquid becomes a gas. Evaporation may take place at any temperature, but occurs most quickly when a liquid boils. Below the boiling point, the molecules in the surface of a liquid do not have enough kinetic energy on average to leave the liquid, but a fraction of molecules do, and evaporation can still take place. Of course, it is typically the faster molecules that leave the liquid. The lower energy molecules remain as liquid; higher energy molecules may leave the liquid but be pulled back, with even higher energy molecules able to escape altogether. If the system is isolated, then the most energetic molecules remove from the liquid whatever energy they have. The molecules left behind have less energy on average, so the rate of evaporation will decrease.

In practice no system is ever isolated. The liquid will usually be at the ambient temperature, and energy may be given to the liquid from its surroundings to replace the energy taken away by the liquid that has evaporated. The net effect is that, unless the amount of liquid is replenished by other means, all the liquid will eventually evaporate.

The rate at which evaporation takes place depends on:

• The surface area of the liquid. Increased surface area means an increased rate of evaporation.
• The temperature of the liquid. Higher temperatures mean higher evaporation rates.
• Air pressure. The evaporation rate increases with decreasing atmospheric pressure.
• The amount of vapour present in the air. The net rate of evaporation is a balance between vapour molecules rejoining the liquid and liquid molecules leaving. If there is a lot of vapour present, the rate at which molecules rejoin the liquid will increase and net evaporation will decrease.
• A draught above the liquid surface will increase the evaporation rate by giving kinetic energy to molecules near the surface.
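The evaporative-cooling argument above (the fastest molecules leave, so the mean energy of the remainder drops) is easy to check numerically. Below is a rough Monte Carlo sketch of my own, assuming a Maxwell–Boltzmann speed distribution and a simple "escape speed" cutoff; both the units and the threshold are illustrative choices, not values from these notes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Maxwell-Boltzmann speeds: each velocity component is Gaussian
# (units chosen so that m = k_B * T = 1).
v = rng.normal(size=(n, 3))
speed = np.linalg.norm(v, axis=1)

v_escape = 2.5                  # illustrative escape-speed threshold
stays = speed < v_escape

ke = 0.5 * speed**2
print(f"fraction escaping:       {1 - stays.mean():.3f}")
print(f"mean KE before:          {ke.mean():.3f}")
print(f"mean KE of what remains: {ke[stays].mean():.3f}")  # lower -> cooling
```

The remaining population always has a lower mean kinetic energy than the original one, which is exactly the cooling mechanism described above.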
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9256595373153687, "perplexity": 545.7540516678519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806070.53/warc/CC-MAIN-20171120145722-20171120165722-00019.warc.gz"}
https://community.powerbi.com/t5/Desktop/Avg-Hrs-Worked/m-p/420829
Frequent Visitor

## Avg Hrs Worked

Hi! Need help please on the following calculation that I need to translate to PBI. Here's what I got in Excel. I need to be able to compute the overall avg hours worked per site and account using the formulas noted on the screenshot. Thanks for all the help!

3 REPLIES

Established Member

## Re: Avg Hrs Worked

Hello,

CountOfWeeks = COUNT(Table[wknum])
Total_hrs = SUM(Table[total hrs worked])
avg_hrs = [Total_hrs] / [CountOfWeeks]

Best regards.

Frequent Visitor

## Re: Avg Hrs Worked

Hi Floriankx! Thanks for your reply! I tried your suggestions but failed to get the correct output. For the overall avg hrs worked, I need to take the total worked hrs for the site/account, divide that by the average number of weeks the employees for that site/account worked, and then divide the answer by the distinct # of employees for that site/account. So...

site/account avg hrs worked = (total hours worked per site / avg # of weeks with schedule) / distinctcount(HC)

New Contributor

## Re: Avg Hrs Worked (accepted solution)

Hi @minapot,

Try this:

```Avg =
IF (
    HASONEVALUE ( table[emp_id] ),
    DIVIDE ( SUM ( table[total hrs worked] ), COUNT ( table[wknum] ) ),
    DIVIDE (
        DIVIDE ( COUNT ( table[wknum] ), [measure to divide] ),
        DISTINCTCOUNT ( table[emp_id] )
    )
)```

Replace `[measure to divide]` (shown in red in the original post) with the appropriate measure, since you did not mention which column to divide the count of weeks by for the average.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689180612564087, "perplexity": 4777.3576913244615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525659.27/warc/CC-MAIN-20190718145614-20190718171614-00000.warc.gz"}
https://cs.stackexchange.com/questions/12147/variations-of-omega-and-omega-infinity
# Variations of Omega and Omega infinity

Some authors define $\Omega$ in a slightly different way: let's use $\overset{\infty}{\Omega}$ (read "omega infinity") for this alternative definition. We say that $f(n) = \overset{\infty}{\Omega}(g(n))$ if there exists a positive constant $c$ such that $f(n) \geq c\cdot g(n) \geq 0$ for infinitely many integers $n$, whereas the usual $\Omega$ requires that this holds for all integers greater than a certain $n_0$.

Show that for any two functions $f(n)$ and $g(n)$ that are asymptotically nonnegative, either $f(n) = O(g(n))$ or $f(n)= \overset{\infty}{\Omega}(g(n))$ or both, whereas this is not true if we use $\Omega$ in place of $\overset{\infty}{\Omega}$.

I am trying to learn algorithms, but I am unable to prove this. Can the experts help me?

• Try to use the definitions, keeping in mind that for every property $P$, either $P$ holds for infinitely many integers, or $P$ does not hold for almost all integers. Observe that $\Omega^\infty$ is the negation of $O$. – Shaull May 20 '13 at 4:04
• See here or here. – Raphael May 20 '13 at 19:45

Hint: If $f(n) \notin \overset{\infty}{\Omega}(g(n))$ and $g(n)$ is asymptotically non-negative, then for all positive constants $c$, $f(n) \leq c \cdot g(n)$ for large enough $n$. This follows by ignoring the condition $c \cdot g(n) \geq 0$ and negating the definition of $f(n) \in \overset{\infty}{\Omega}(g(n))$.

In fact, this way you get the stronger result that either $f(n) \in \overset{\infty}{\Omega}(g(n))$ or $f(n) \in o(g(n))$ (but not both).

Further hint: You can start by showing that the negation of "$P(n)$ for infinitely many $n$" is "$\lnot P(n)$ for large enough $n$".

• I can't understand the difference between "infinitely many $n$" and "for large enough $n$". Do you know a source that can help me? – Fatemeh Karimi Jul 16 '17 at 5:37
• I suggest a good grounding in set theory and formal logic. Something holds for all large enough $n$ if there exists $n_0$ such that it holds for all $n\geq n_0$. It holds for infinitely many $n$ if the set of $n$ for which it holds is infinite. – Yuval Filmus Jul 16 '17 at 6:00

I can give you an example so that you can better understand $\overset{\infty}{\Omega}(g(n))$. Imagine a binomial heap. The insert operation is $O(\log N)$, but is it ${\Omega}(\log N)$? When the heap contains trees of ranks 4-3-2-1-0, inserting a tree with rank 0 is an ${\Omega}(\log N)$ operation, since it triggers a cascade of merges. But inserting another tree with rank 0 into the resulting heap (now a single tree of rank 5) is an $O(1)$ operation, since only pointers need to be updated and no extra merge work is necessary. This is the essential difference between ${\Omega}$ and $\overset{\infty}{\Omega}$. For example, the binomial heap insert operation is $\overset{\infty}{\Omega}(\log N)$ for the set of $n = \{1,3,7,\ldots,2^k - 1\}$. It doesn't state that the complexity is ${\Omega}(\log N)$ whenever $n \geq n_0$, but rather for some infinite set of $n$, not for all $n \geq n_0$.

• @YuvalFilmus please correct me if I'm wrong – denis631 Jan 14 '18 at 11:06
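To make the "infinitely many $n$ versus all large $n$" distinction concrete, here is a small numerical illustration of my own (not from the thread): take $f(n) = n$ for even $n$ and $f(n) = 1$ for odd $n$. Then $f(n) \in \overset{\infty}{\Omega}(n)$, witnessed by the even integers with $c = 1$, but $f(n) \notin \Omega(n)$, since arbitrarily large odd $n$ violate any bound $f(n) \geq c\,n$:

```python
# f is Omega-infinity(n): f(n) >= 1*n holds for infinitely many n (all even n),
# but f is not Omega(n): there is no n0 beyond which f(n) >= c*n holds for all n.
def f(n):
    return n if n % 2 == 0 else 1

N = 300
witnesses  = [n for n in range(1, N) if f(n) >= n]         # n = 1 and all even n
violations = [n for n in range(1, N) if f(n) < 0.01 * n]   # all odd n > 100
print("f(n) >= 1*n at:   ", witnesses[:6], "... (infinitely many)")
print("f(n) < 0.01*n at: ", violations[:6], "... (so f is not Omega(n))")
```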
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9181870818138123, "perplexity": 152.6413055077678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143805.13/warc/CC-MAIN-20200218180919-20200218210919-00172.warc.gz"}
https://yutsumura.com/use-coordinate-vectors-to-show-a-set-is-a-basis-for-the-vector-space-of-polynomials-of-degree-2-or-less/
# Use Coordinate Vectors to Show a Set is a Basis for the Vector Space of Polynomials of Degree 2 or Less

## Problem 588

Let $P_2$ be the vector space over $\R$ of all polynomials of degree $2$ or less. Let $S=\{p_1(x), p_2(x), p_3(x)\}$, where
$p_1(x)=x^2+1, \quad p_2(x)=6x^2+x+2, \quad p_3(x)=3x^2+x.$

(a) Use the basis $B=\{x^2, x, 1\}$ of $P_2$ to prove that the set $S$ is a basis for $P_2$.

(b) Find the coordinate vector of $p(x)=x^2+2x+3\in P_2$ with respect to the basis $S$.

## Solution.

### (a) Prove that the set $S$ is a basis for $P_2$.

The coordinate vector of $p_1(x)$ with respect to the basis $B=\{x^2, x, 1\}$ is given by
$[p_1(x)]_B=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$
as $p_1(x)$ can be written as a linear combination $p_1(x)=1\cdot x^2+0\cdot x+1\cdot 1$ of the basis vectors in $B$.

Similarly, we have
$[p_2(x)]_B=\begin{bmatrix} 6 \\ 1 \\ 2 \end{bmatrix} \text{ and } [p_3(x)]_B=\begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}.$

Let $T=\{[p_1(x)]_B, [p_2(x)]_B, [p_3(x)]_B\}$. Then $S$ is a basis for $P_2$ if and only if $T$ is a basis for $\R^3$. Thus it remains to show that $T$ is a basis for $\R^3$.

Consider the matrix whose column vectors are the vectors in $T$. We have
\begin{align*}
\begin{bmatrix} 1 & 6 & 3 \\ 0 & 1 & 1 \\ 1 & 2 & 0 \end{bmatrix}
\xrightarrow{R_3-R_1}
\begin{bmatrix} 1 & 6 & 3 \\ 0 & 1 & 1 \\ 0 & -4 & -3 \end{bmatrix}
\xrightarrow{\substack{R_1-6R_2\\R_3+4R_2}}
\begin{bmatrix} 1 & 0 & -3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}
\xrightarrow{\substack{R_1+3R_3\\R_2-R_3}}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\end{align*}
It follows that $T$ is linearly independent. As $T$ consists of three linearly independent vectors in $\R^3$, it is a basis for $\R^3$. Hence $S$ is a basis for $P_2$.

### (b) Find the coordinate vector of $p(x)=x^2+2x+3\in P_2$ with respect to the basis $S$.

To find the coordinate vector $[p(x)]_S$ with respect to the basis $S$, we express $p(x)$ as a linear combination of the basis vectors in $S$. Thus we want to find scalars $c_1, c_2, c_3$ such that
$p(x)=c_1p_1(x)+c_2p_2(x)+c_3p_3(x).$
Considering the coordinate vectors of both sides, this is equivalent to
$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}=c_1\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}+c_2\begin{bmatrix} 6 \\ 1 \\ 2 \end{bmatrix}+c_3\begin{bmatrix} 3 \\ 1 \\ 0 \end{bmatrix}.$
We write this equation as a matrix equation
$\begin{bmatrix} 1 & 6 & 3 \\ 0 & 1 & 1 \\ 1 & 2 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}.$
To solve this, we apply elementary row operations to the augmented matrix as follows:
\begin{align*}
\left[\begin{array}{rrr|r} 1 & 6 & 3 & 1 \\ 0 & 1 & 1 & 2 \\ 1 & 2 & 0 & 3 \end{array}\right]
\xrightarrow{R_3-R_1}
\left[\begin{array}{rrr|r} 1 & 6 & 3 & 1 \\ 0 & 1 & 1 & 2 \\ 0 & -4 & -3 & 2 \end{array}\right]
\xrightarrow{\substack{R_1-6R_2\\R_3+4R_2}}
\left[\begin{array}{rrr|r} 1 & 0 & -3 & -11 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 1 & 10 \end{array}\right]
\xrightarrow{\substack{R_1+3R_3\\R_2-R_3}}
\left[\begin{array}{rrr|r} 1 & 0 & 0 & 19 \\ 0 & 1 & 0 & -8 \\ 0 & 0 & 1 & 10 \end{array}\right].
\end{align*}
(Note that the elementary row operations are exactly the same as before.)

Hence the solution is $c_1=19$, $c_2=-8$, $c_3=10$. Thus, we have the linear combination
$p(x)=19p_1(x)-8p_2(x)+10p_3(x)$
and the coordinate vector of $p(x)$ with respect to the basis $S$ is
$[p(x)]_S=\begin{bmatrix} 19 \\ -8 \\ 10 \end{bmatrix}.$
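As a quick sanity check of the computed coordinates (my own verification sketch, not part of the original solution), one can solve the same linear system numerically:

```python
import numpy as np

# Columns are the B-coordinate vectors of p1, p2, p3.
A = np.array([[1, 6, 3],
              [0, 1, 1],
              [1, 2, 0]], dtype=float)
b = np.array([1, 2, 3], dtype=float)  # B-coordinates of p(x) = x^2 + 2x + 3

print(np.linalg.matrix_rank(A))  # 3 -> columns independent -> S is a basis
print(np.linalg.solve(A, b))     # [19. -8. 10.] -> [p(x)]_S
```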
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998343586921692, "perplexity": 68.71226853259029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866772.91/warc/CC-MAIN-20180524190036-20180524210036-00637.warc.gz"}
https://mathoverflow.net/questions/215295/calculating-homology-of-the-boundary-of-a-handlebody
Calculating Homology of the Boundary of a Handlebody

Given a manifold $M$ with boundary $W = \partial M$, I know that having a handle decomposition of $M$ allows one to compute its homology, at least in nice cases, by - for example - using the Morse homology of its critical points. Is it similarly easy to compute the homology of $W$, since the handle decomposition of $M$ provides a surgery description of $W$?

If it helps, I'm interested in a particularly simple case: $M$ is a smooth $2n$-manifold which is described by simultaneously adding some number of $n$-handles to $D^{2n}$. However, I am primarily concerned with the integer homology, rather than homology over $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Q}$.

For $n>1$, the manifold itself is determined by the linking numbers between the attaching spheres of your handles, and the framings of those handles. (For $n=1$, you have to take into account knotting and linking of the attaching circles.) The framings are in 1-1 correspondence with elements of $\pi_n(SO(n+1))$. The homology depends on the image of the framings under the map $\pi_n(SO(n+1))\to \pi_n(S^n)$. Geometrically, this means taking the linking number of your attaching sphere with a pushoff given by the first vector of the given framing. Then the homology is presented by the $(-1)^{n-1}$-symmetric matrix of linking numbers and images of these framings. The proof is an exercise using Poincaré duality and the long exact sequence of the pair $(M,W)$.

This is a standard fact in the case $n=2$, in which case the map $\pi_1(SO(2))\to \pi_1(S^1)$ is a bijection, and I think you can find it in such sources as Gompf–Stipsicz. For the general high-dimensional case, look at "Classification of (n-1)-Connected 2n-manifolds" by C.T.C. Wall, Annals of Math Vol. 75, No. 1 (1962), pp. 163-189.
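As I read the answer, the presentation can be spelled out as follows (my own paraphrase, so treat it as a hedged restatement; for $n=2$ this is the standard statement for the boundary of a 4-dimensional 2-handlebody, cf. Gompf–Stipsicz): if $A$ denotes the $k\times k$ matrix of linking numbers of the $k$ attaching spheres, with the framing coefficients on the diagonal, then the long exact sequence of the pair identifies $A$ with the map $H_n(M)\to H_n(M,W)$, and one gets

$$H_n(W) \cong \ker A, \qquad H_{n-1}(W) \cong \operatorname{coker} A = \mathbb{Z}^k / A\,\mathbb{Z}^k,$$

with the remaining reduced homology of $W$ vanishing below degree $n-1$ and determined above by Poincaré duality.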
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658253192901611, "perplexity": 145.7146533625599}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00453.warc.gz"}
https://brilliant.org/problems/hot-integral-5/
# Hot Integral-5

Calculus Level 5

Let $$\displaystyle f(x)=\int\limits_0^1 \frac{\sqrt{1-xt^2}}{\sqrt{1-t^2}}dt.$$

Then the following integral can be expressed as:

$$\displaystyle \int \frac{f(2015x^2)}{x^3}dx = -\frac{a}{bz^c}\left[\pi \left\{dz^f \;_4F_3\left(g,h,\frac{i}{k},\frac{j}{k};l,m,m;nz^o\right) -pz^q \log(r) + sz^t\log(-uz^v) + w\right\}\right] + \text{constant}$$

Evaluate $$a+b+c+d+f+g+h+i+j+k+l+m+n+o+p+q+r+s+t+u+v+w = ?$$

$$\textbf{Details and Assumptions}$$

1) $$a,b,c,d,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w$$ are all positive integers.

2) $$_4F_3(.,.,.,.;.,.,.;.)$$ is a $$\textbf{Hypergeometric function}$$.

3) Everything is in the $$\textbf{simplest form}$$ and $$r$$ is square free.

$$\blacksquare$$

This is a part of Hot Integrals
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170200824737549, "perplexity": 1464.825861508968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822145.14/warc/CC-MAIN-20171017163022-20171017183022-00682.warc.gz"}
https://www.physicsforums.com/threads/car-driving-around-a-curve.175282/
# Car driving around a curve

1. Jun 27, 2007

### canadiantiger7

1. The problem statement, all variables and given/known data

A car drives around a curve with radius 410 m at a speed of 38 m/s. The road is banked at 5.1 degrees, and the car weighs 1400 kg. What is the frictional force on the car? At what speed could you drive around this curve so that the force of friction is zero?

I tried a few equations and got different answers, but none of them correct. Any help is appreciated.

2. Relevant equations

3. The attempt at a solution

2. Jun 27, 2007

### StatusX

What exactly have you tried? You have to show some work before you'll get any help.

3. Jun 27, 2007

### canadiantiger7

I got the acceleration (v^2/R) and then plugged that in to get Fnet (m*a), but it did not work. That is what it said to do in the book though.

4. Jun 27, 2007

### StatusX

That is the net force on the car, but you need to determine where it's coming from. Gravity always acts vertically with magnitude mg. The other two forces are the normal force of the road, which is perpendicular to the surface of the road, and the force of friction, which is parallel to the surface. These must be such that a) the vertical components add to zero, since there's no vertical acceleration, and b) the horizontal components add to the net force, providing the centripetal force that keeps the car moving in a circle.

5. Mar 28, 2008

### alexander38

Hi canadiantiger7, I'm not getting what you exactly want. Please provide more details so that I can help you.

6. Mar 29, 2008

### merryjman

If you bank a curve steeply enough, the normal force will supply the needed centripetal acceleration rather than the frictional force. This is the principle at work at high speed race tracks - this way, the limited friction between the tires/road can be used strictly for accelerating and braking, and not turning as well. I would start with a free-body diagram if I were you, and then do what StatusX said above with regard to the horizontal and vertical forces. Note that the normal force does NOT point directly upward, as it would in other problems.
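Following StatusX's decomposition (vertical force balance plus horizontal centripetal balance), here is a short sketch of the resulting algebra (my own worked numbers, not posted in the thread, so treat them as a check rather than the official answer):

```python
import math

m, v, r, theta = 1400.0, 38.0, 410.0, math.radians(5.1)
g = 9.8

# Resolve N (normal force) and f (friction along the bank) from:
#   N*sin(theta) + f*cos(theta) = m*v^2/r   (horizontal, centripetal)
#   N*cos(theta) - f*sin(theta) = m*g       (vertical)
a_c = v**2 / r
f = m * (a_c * math.cos(theta) - g * math.sin(theta))
N = m * (g * math.cos(theta) + a_c * math.sin(theta))

# Friction-free speed: with f = 0 the equations reduce to tan(theta) = v^2/(r*g).
v_no_friction = math.sqrt(r * g * math.tan(theta))

print(f"friction force ~ {f:.0f} N")                   # ~3.7e3 N, up-slope side
print(f"no-friction speed ~ {v_no_friction:.1f} m/s")  # ~18.9 m/s
```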
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8313265442848206, "perplexity": 692.8727369614636}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00452-ip-10-171-6-4.ec2.internal.warc.gz"}
https://overbrace.com/bernardparent/viewtopic.php?f=95&t=1564&start=20
Fundamentals of Fluid Mechanics B

Questions and Answers

No, there are no other assumptions. These strain rates can be used for any fluid. Also, the same applies to the strain rates in cylindrical and spherical coordinates in the tables. The strain rates can be used in the general case.

03.24.20

Question by AME536A Student

Could you also give us the final answers for Assignment 5 Question 3 (d) and (e)?

Let me see if I can find them. If I do, I'll post them soon.

Question by AME536A Student

I'm still having problems solving Assignment #1 Question #2(b), the proof of Crocco's theorem. In class you gave us this hint: say $a=b-c$
$$da = d(b-c)$$
If $da=0$ and if s-s and uniform properties at some point upstream:
$$\nabla a = 0$$
Then
$$db-dc =0$$
and
$$\nabla (b-c)=0$$
$$\nabla b = \nabla c$$
I don't understand how we can use this without ending up with a temperature gradient term in the equation.

03.25.20

Another hint. For two streamlines near each other and at a certain location, one has a temperature $T$, and the other the temperature $T+dT$. Expand terms and get rid of the terms that are necessarily much smaller than the others.

Question by Student AME536B

Do we need to memorize proofs for this midterm?

The midterm is open book so you can consult your assignments, class notes, and books if you want. However, if I were you, I would stick to consulting the tables: you won't waste time and will score higher this way.

Question by Student AME536B

I was attempting to figure out the mean free path derivation question you gave us previously, on how to get the $\sqrt2$ term. What I came up with, using your tip of
$$q_{rms}=c\cdot q_{avg},$$
is
$$\overline{V_{rel}^{2}}=\overline{V_1^{2}}+\overline{V_2^{2}}$$
Is this process correct?

I don't understand where your equation (2) is coming from. You need to clarify this.

Question by Student AME536B

On the tables, the equation given for the strain rate $S_{\theta r}$ in spherical coordinates has a term of $\frac{v_{\theta}}{r}$; is that a typo? Shouldn't the shear strain equation for spherical coordinates be given by:
$$S_{r\theta} = \frac{1}{2} \left( \frac{1}{r}\frac{\partial v_r}{\partial \theta} + \frac{\partial v_{\theta}}{\partial r} \right)$$

04.06.20

The equation in the tables is correct: there is no typo.

Question by AME536A Student

For question 1. b. on Assignment 6, do you want us to derive the expression you gave for the streamfunction? Or just start from the given equation?

If it's given in the question don't derive it.
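The $\sqrt{2}$ in the mean-free-path question above comes from $\overline{V_{rel}^{2}} = \overline{V_1^{2}} + \overline{V_2^{2}} = 2\,\overline{V^{2}}$ for two independent molecules drawn from the same velocity distribution. A quick Monte Carlo sanity check of that ratio (a sketch of my own, assuming Maxwell-distributed velocities, i.e. independent Gaussian components in thermal units):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
v1 = rng.normal(size=(n, 3))   # velocity components of molecule 1
v2 = rng.normal(size=(n, 3))   # velocity components of molecule 2

mean_rel_speed = np.linalg.norm(v1 - v2, axis=1).mean()
mean_speed = np.linalg.norm(v1, axis=1).mean()
print(mean_rel_speed / mean_speed)   # ~1.414, i.e. sqrt(2)
```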
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8197869658470154, "perplexity": 427.89344568757616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371803248.90/warc/CC-MAIN-20200407152449-20200407182949-00410.warc.gz"}
https://www.answers.com/Q/How_do_you_change_50_over_45_mixed_number_to_an_improper_fraction
# How do you change 50 over 45 mixed number to an improper fraction?

###### Wiki User

50/45 is an improper fraction. 1 and 1/9 is an equivalent mixed number in lowest form.

###### Wiki User

50 over 45, or 50/45, IS an improper fraction!

###### Wiki User

It already is an improper fraction; the other form is 1 and 1/9.

## Related Questions

An improper fraction is a fraction. It can't become a proper fraction.

You do the numerator divided by the denominator, then simplify the fraction if you can. (This is how you change an improper fraction to a mixed number; you cannot change a proper fraction to a mixed number.) Here is a video tutorial https://www.youtube.com/watch?v=HPetBqWKvZM on how to change an improper fraction to a mixed number.

You change it into an improper fraction, then simplify it, then if you want to, change it back to a mixed number.

Change the mixed number to an improper fraction and then divide.

A mixed number can be converted to an improper fraction. I would convert both to improper fractions, find a common multiple (CM), calculate equivalent fractions with the CM as the denominator, add the numerators, change the improper fraction to a mixed fraction if required, and simplify the fractional part of the mixed fraction if appropriate and required.

I don't think you can, but you can change an improper fraction to a decimal.

If you are using the same number, then the improper fraction and mixed number are equal.

You change the mixed number to an improper fraction: multiply its denominator by the whole number. Then, if required, convert back to a mixed fraction.

Change it to an improper fraction and then divide it.

First you would want to change the mixed number to an improper fraction. Then you can subtract.

595/24 is an improper fraction; to change it to a mixed number: 24 19/24.

A mixed number of the form A B/C, as an improper fraction, is equal to (AC + B)/C.

43 is an integer, not an improper fraction nor a mixed number.

A mixed number can be changed into an improper fraction.

By changing the mixed number into an improper fraction.

You have to change it into an improper fraction, then divide the numerator by the denominator.

Yes, it can. In fact every mixed number has an equivalent improper fraction.

Mixed number into an improper fraction: multiply the whole number by the denominator (bottom number of the fraction), then add that number to the numerator (the top number of the fraction). And that's it! Improper fraction into a mixed number: divide the numerator by the denominator. Whatever whole number you get is your number; now for the fraction, whatever you got as the remainder when you divided goes on top of the fraction. Ex: 47/5. 5 goes into 47 nine times with a remainder of 2. That equals 9 and 2/5.

Well, you could change it into a mixed number or just leave it.
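The conversions described above are mechanical, so here is a tiny sketch (my own illustration) using Python's standard fractions module to check the 50/45 example and the 47/5 example:

```python
from fractions import Fraction

def to_mixed(fr: Fraction):
    whole, rest = divmod(fr.numerator, fr.denominator)
    return whole, Fraction(rest, fr.denominator)

def to_improper(whole: int, num: int, den: int) -> Fraction:
    return Fraction(whole * den + num, den)

print(Fraction(50, 45))            # 10/9 (already improper, just reduced)
print(to_mixed(Fraction(50, 45)))  # (1, Fraction(1, 9)) -> 1 and 1/9
print(to_improper(9, 2, 5))        # 47/5, reversing the 9 2/5 example
```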
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8691167831420898, "perplexity": 1149.8651848605923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704832583.88/warc/CC-MAIN-20210127183317-20210127213317-00142.warc.gz"}
https://www.springerprofessional.de/experiment-and-analysis-of-mechanical-properties-of-lightweight-/20034128
Published in: Open Access 01.12.2022 | Research

# Experiment and Analysis of Mechanical Properties of Lightweight Concrete Prefabricated Building Structure Beams

Authors: Yingguang Fang, Yafei Xu, Renguo Gu

## Abstract

Recent years have witnessed the widespread use of prefabricated concrete structures in buildings. This structure, however, still has some weaknesses, such as excessive weight of components, high requirements for construction equipment, difficult alignment of nodes, and poor installation accuracy. In order to handle the problems mentioned above, prefabricated components made of lightweight concrete are adopted. At the same time, such components help reduce the self-weight load of the building structure and improve its safety and economy. Nevertheless, research on, and applications of, lightweight concrete for stressed members remain rare. In this context, this paper replaces ordinary coarse aggregate with lightweight ceramsite or foam based on the C60 concrete mix ratio so as to obtain a mix ratio of C40 lightweight concrete that meets engineering standards. Ceramsite concrete beams and foamed concrete beams are then fabricated. Moreover, through three-point bending tests, this paper further explores the mechanical properties of lightweight concrete beams and plain concrete beams under normal use conditions. As demonstrated in the results, the mechanical properties of the foamed concrete beam are similar to those of the plain concrete beam. Compared to plain concrete beams, the density of foamed concrete beams was lower by 23.4%; moreover, the ductility and toughness of foamed concrete were higher by 13% and 3%, respectively. However, in comparison with the plain concrete beam, the mechanical properties of the ceramsite concrete beam show some differences, with relatively large dispersion and obvious brittle failure characteristics. Moreover, in consideration of the nonlinear deformation characteristics of reinforced concrete beams, a theoretical calculation of beam deflection is given in this paper based on the plane section assumption and the principle of virtual work. The theoretically calculated deflection values of ordinary concrete beams and foamed concrete beams are in good agreement with the experimental values under normal use conditions, verifying the rationality and effectiveness of the calculation method. The research results of this paper can be taken as a reference for similar engineering designs.

## Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## 1 Introduction

The prefabricated concrete structure is widely used in building structures. Compared with the traditional cast-in-place structure, the prefabricated concrete structure has diverse advantages, such as short construction periods, high production efficiency, less material consumption, high quality of finished products, low carbon, and environmental protection (Liu et al., 2020; Shah et al., 2021). This structure, however, also has the disadvantage of weak structural integrity (Huang et al., 2021; Savoia et al., 2017), since it is difficult to guarantee the quality of the node construction of prefabricated components.
To be specific, the narrow construction environment, high requirements for construction equipment due to the excessive weight of components, the difficulty in the alignment of nodes, and the poor installation accuracy are the key factors affecting the quality of node connections (Chen et al., 2017; Nguyen & Hong, 2020). The adoption of lightweight concrete to reduce the weight of prefabricated components helps solve the above problems. At the same time, such prefabricated components reduce the self-weight load of the building structure and improve its safety and economy.

Currently, there is a great deal of research on lightweight concrete, but lightweight concrete is mainly adopted for functional components. Many scholars have obtained lightweight concrete with significantly improved heat insulation and sound absorption by replacing the aggregates of plain concrete. In recent years, with the deepening of research, it has become promising to utilize some lightweight concretes in structural stressed members (Kozłowski & Kadela, 2018; Lee, Kang, et al., 2018; Yang et al., 2016). Many studies have shown that the types and contents of lightweight aggregate additions have a great influence on the mechanical properties (Hamidian & Shafigh, 2021; Karamloo et al., 2020; Tian et al., 2020; Vakhshouri & Nejadi, 2018). Therefore, researchers have made many efforts to find lightweight aggregate concretes that can be used in stressed components.

Although the prefabricated beam functions as one of the main load-bearing components of the prefabricated concrete structure, research on prefabricated beams made of lightweight concrete is not sufficient. Lee, Lim, et al. (2018) used lightweight foamed mortar with a 28-day compressive strength of 20 MPa to make reinforced concrete beams, and then conducted bending tests on the beams. Based on the research results, the ultimate load of the reinforced lightweight foamed mortar beam was about 8–34% lower than that of plain reinforced concrete with the same steel configuration. However, Jones and McCarthy (2005) once pointed out that most engineers and designers were unlikely to pay much attention to the structural application of foamed concrete unless the strength of foamed concrete exceeds 25 MPa. Therefore, Lim (2007) conducted bending tests on reinforced foamed concrete beams with strengths from 20 to 35 MPa, and found that both foamed concrete beams and ordinary concrete beams showed bending failure modes and similar ultimate loads. At the same time, further research is called for on beams made of foamed concrete with a compressive strength of 35 MPa or more. In addition, ceramsite concrete is a type of lightweight concrete that is expected to be used for stressed members. In recent years, some scholars have studied the bearing capacity and crack width of ceramsite concrete beams. For example, Chen, Li, et al. (2020) and Chen, Hui, et al. (2020) explored the failure mode of shale ceramsite lightweight aggregate concrete beams and the width of diagonal cracks. Moreover, Liu et al. (2021) made a detailed analysis of the bearing capacity of H-shaped steel beams with circular holes in the webs wrapped in ceramsite concrete (SBWCC) and further proposed a short-term stiffness formula.

According to the above research, it can be found that past studies on lightweight concrete mainly focused on ultimate strength and ductility.
However, flexural members, like beams, should not only have enough strength and ductility but should also meet the serviceability limit state requirements, such as crack width, vibration, and deflection (Jahami et al., 2019; Wang & Tan, 2021). Nowadays, studies on the mechanical properties of foamed concrete beams and ceramsite beams with C40 strength grade concrete under normal use conditions are rare. In particular, the mechanical differences between these two types of lightweight concrete beams and plain concrete beams, and the calculation method for the deflection of lightweight concrete beams, have rarely been explored in previous studies.

Based on the C60 concrete mix ratio, this study uses lightweight ceramsite or foam material to replace ordinary coarse aggregates so as to obtain a C40 lightweight concrete mix ratio that meets engineering standards. Ceramsite concrete beams and foamed concrete beams are then fabricated. Moreover, through three-point bending tests, this paper further explores the mechanical properties of lightweight concrete beams and plain concrete beams under normal use conditions. Then, a theoretical calculation method for the deflection of the foamed concrete beam is proposed based on the plane section assumption and the principle of virtual work. The research results of this paper will help further the understanding of the mechanical properties of lightweight concrete and promote the structural application of lightweight concrete.

## 2 Experimental Model

### 2.1 Test Materials and Mix Ratio

This paper intends to explore the mechanical properties of plain concrete, ceramsite concrete, and foamed concrete. Considering that there are many factors that affect the strength of concrete beams, such as the amount of cement, the amount of foam, the amount of ceramsite, the curing conditions, and the water–cement ratio, the compressive strength and flexural strength of the specimens of different materials should meet the minimum engineering requirements in order to better ensure that the test beams have sufficient strength for engineering purposes. This paper replaces the coarse aggregate of high-strength concrete with lightweight aggregates to obtain the mix ratio of C40 lightweight concrete beams, and then studies the mechanical properties of the different types of beams. To be specific, referring to the related literature (Chen, Hui, et al., 2020; Chen, Li, et al., 2020; Elrahman Abd, Chung, et al., 2019; Gong et al., 2018; Lee et al., 2019; Lotfy et al., 2015; Yu et al., 2013), the authors replaced the coarse aggregate of high-strength C60 concrete with light aggregates and made standard specimens for compressive and flexural strength, whose strengths were tested accordingly.

Throughout the experiment, Yuexiu brand P·II52.5R cement was used to make the lightweight concrete; S95 slag powder with an activity index greater than 95% was used as the mineral powder, and river sand with a particle size of 2.36 mm or less as the fine aggregate. The coarse aggregate adopts crushed gravel with a bulk density of 1520 kg/m³ and a particle size of 15 mm or less. In addition, ceramsite with a density of 618 kg/m³, a cylinder strength of 1.8 MPa, and a particle size of 8 mm–15 mm, together with a concentrated high-efficiency cement foaming agent, was adopted. In this paper, the types of longitudinally stressed steel bars and stirrup steel bars in the specimens are HRB500 and HPB300, respectively. The mechanical properties of these steel bars are shown in Table 1.
Table 1 Mechanical properties of the rebars.

| Specimens | fy (MPa) | fu (MPa) | Es (MPa) |
|---|---|---|---|
| HRB500 | 500 | 630 | 2 × 10^5 |
| HPB300 | 300 | 420 | 2 × 10^5 |

According to Chinese engineering requirements, the 28 d compressive strength of the standard specimens should be not less than 40 MPa, and the flexural strength not less than 4.4 MPa. After various trials and tests, a lightweight ceramsite concrete with a 28 d compressive strength characteristic value of 41 MPa and a flexural strength characteristic value of 6.62 MPa, and a foamed concrete with a 28 d compressive strength characteristic value of 41.4 MPa and a flexural strength characteristic value of 12.97 MPa, were prepared. The concrete mix ratios are shown in Table 2. The densities of plain concrete, foamed concrete, and ceramsite concrete are 2480 kg/m³, 1900 kg/m³, and 2000 kg/m³, respectively. Compared with the density of plain concrete, the densities of the two types of lightweight concrete are reduced by 23.4% and 19.4%, respectively.

Table 2 Concrete mix ratio.

| Category | Plain concrete (per m³) | Ceramsite concrete (per m³) | Foamed concrete (per m³) |
|---|---|---|---|
| Cement (kg) | 434.0 (18.1%) | 434.0 (26.1%) | 557.3 (27.9%) |
| Mineral powder (kg) | 144.7 (6.0%) | 144.7 (8.7%) | 268.3 (13.4%) |
| Sand (kg) | 700.4 (29.2%) | 700.4 (42.2%) | 990.7 (49.5%) |
| Water reducing agent (kg) | 15.0 (0.6%) | 15.0 (0.9%) | 18.6 (0.9%) |
| Water (kg) | 138.8 (5.8%) | 138.8 (8.4%) | 165.1 (8.3%) |
| Gravel (kg) | 967.2 (40.3%) | - | - |
| Ceramsite (kg) | - | 228.5 (13.8%) | - |
| Foam (L) | - | - | 516 |

### 2.2 Experimental Design

Considering that T-beams have high flexural and shear resistance (Gulec et al., 2021), the cross section of the test beam is designed as a T-section. A total of 5 simply supported beam members were designed and fabricated for the experiment, including 1 plain concrete beam (C), 2 ceramsite concrete beams (CC1 and CC2, parallel samples), and 2 foamed concrete beams (FC1 and FC2, parallel samples). Since the mid-span bending moment is the largest in the whole beam, the concrete cracking in the tension zone there is the most severe and develops fastest in the whole beam. Since a strain gauge attached at mid-span is prone to fracture, it is instead glued at the 1/4 beam length on the right so that it performs better. The thickness of the concrete protective layer is 30 mm. According to the Chinese standard, the minimum reinforcement ratio of tensile steel bars is 0.2%, and the reinforcement ratio of tensile steel bars should not exceed 2.5%. In this article, the reinforcement ratio of the tension steel bars is 1.9%, and the reinforcement ratio of the compression steel bars is 0.4%. The measuring points and reinforcement of the beam are displayed in Fig. 1.

When the preloading is completed, the formal loading is carried out, still controlled by load with a step of 50 kN. After completing each loading level, the load is held for 10 min, and then the corresponding data of each measuring point under this level of test load are recorded. After loading up to 200 kN, the loading step is changed to 25 kN so as to better control the sudden occurrence of brittle failure and observe the deformation process of the component in detail. Apart from having enough strength and ductility, beams should also meet the service limit state (Wang & Tan, 2021). In this context, this test took the service limit into consideration.
According to the relevant Chinese standards, the deflection of a flexural member during its service period shall meet a certain limit, which is 1/500 of the calculated span. Therefore, this paper selects the load termination value on the basis of the normal service limit state of the beam. 250 kN was taken as the load termination value by referring to the deflection formula recommended by the Chinese code and to the pre-experiments, which means the test ends when the load reaches 250 kN or the ultimate strength of the specimen. Fig. 2 shows the loaded state of the specimens.

## 3 Experimental Phenomena and Results

### 3.1 Experimental Phenomena

The deformation characteristics of foamed concrete and plain concrete are similar throughout the loading. When the load reaches about 100 kN, 1–3 vertical cracks can be found in the beam in the area near the loading point. As the load grows, the vertical cracks continue to increase and extend along the height direction, and the neutral axis moves upwards. As loading continues, individual oblique cracks begin to appear at both ends of the beam, and the beam's deflection increases significantly. With continued loading, the number of cracks at both ends of the beam keeps rising.

When the ceramsite concrete beam is loaded to 80 kN, the first vertical crack appears in the middle of the span, and the "click" sound of concrete cracking can be heard. With continuous loading, the number of cracks is larger than for the plain concrete beam, and the cracks develop faster. Upon further loading, an oblique crack running from the support to the loading point gradually develops. As shown in Fig. 3, when the loading reaches about 225 kN, the specimen suddenly breaks, demonstrating obvious characteristics of brittle shear failure.

### 3.2 Experimental Results

Fig. 4 shows the load–deflection curves of the specimens. It can be found that at the initial stage of loading, the deflection of each concrete beam increases linearly with the load. With the increase of the load, the beams gradually crack, the load–deflection curves of plain concrete and foamed concrete gradually deviate from a straight line, and the stiffness of the beams decreases. The load–deflection data of the two ceramsite concrete beams are scattered, with the strength significantly lower than that of plain concrete. Since the density of ceramsite (about 600 kg/m³) is much smaller than that of cement paste (about 1500 kg/m³), ceramsite tends to float up during concrete solidification, resulting in an uneven distribution of ceramsite in the concrete. This uneven distribution, the scatter in ceramsite strength, and the complexity and randomness of the ceramsite interface bonding may explain the obvious difference between the load–deflection data of CC1 and CC2.

## 4 Calculation of Beam Deflection

Under load, the section bending moment of a beam member varies along the axis, and the average stiffness or curvature of the corresponding section changes in a complicated way, which is the main difficulty in accurately calculating the deformation of reinforced concrete members. The direct bilinear method, the effective moment of inertia method, and the curvature integral method are mainly adopted for the calculation of beam deflection. For example, China's GB 50010-2002 "Concrete Structure Design" adopts the direct bilinear method to calculate the short-term stiffness of components that allow cracks.
The American standard ACI 318-99 stipulates that the stiffness after cracking be calculated with the effective moment of inertia method. In this paper, considering the nonlinear deformation of reinforced concrete beams, the principle of virtual work is used to calculate the deflection, and the related calculations are carried out in MATLAB.

### 4.1 Basic Assumptions

1) The average strain distribution conforms to the plane section assumption, that is, the average strain of the section is linearly distributed along the height.

2) There is no bond slip between the longitudinal tension bars and the concrete. The stress–strain relation of the longitudinal bars adopts an ideal elastoplastic model:

$$\sigma_{s} = \begin{cases} \varepsilon_{s} E_{s}, & \varepsilon_{s} \le \varepsilon_{y} \\ f_{y}, & \varepsilon_{y} < \varepsilon_{s} \le \varepsilon_{su} \end{cases} \quad (1)$$

where $\sigma_{s}$ is the steel stress, $E_{s}$ is the elastic modulus of the steel, $\varepsilon_{y}$ is the yield strain of the steel, and $f_{y}$ is the design value of the steel yield strength.

3) The constitutive models of plain concrete, ceramsite concrete, and foamed concrete are selected with reference to the relevant specifications, without considering the tensile contribution of the concrete. The constitutive model of plain concrete adopts the formula recommended by the relevant Chinese standards:

$$\sigma_{c} = \begin{cases} f_{c}\left[1 - \left(1 - \dfrac{\varepsilon_{c}}{\varepsilon_{0}}\right)^{n}\right], & \varepsilon_{c} \le \varepsilon_{0} \\ f_{c}, & \varepsilon_{0} < \varepsilon_{c} \le \varepsilon_{cu} \end{cases} \quad (2)$$

where $\sigma_{c}$ is the concrete stress, $f_{c}$ is the design value of the concrete compressive strength, $\varepsilon_{0}$ is the compressive strain at which the stress reaches $f_{c}$, $\varepsilon_{cu}$ is the ultimate compressive strain of the concrete in the normal section, and $n$ is a coefficient. For the C40 concrete in this paper, $\varepsilon_{0}$, $\varepsilon_{cu}$, and $n$ are 0.002, 0.0033, and 2.0, respectively.

The constitutive model of ceramsite concrete and foamed concrete is:

$$\sigma_{c} = \begin{cases} f_{c}\left[1.5\,\dfrac{\varepsilon_{c}}{\varepsilon_{0}} - 0.5\left(\dfrac{\varepsilon_{c}}{\varepsilon_{0}}\right)^{2}\right], & \varepsilon_{c} \le \varepsilon_{0} \\ f_{c}, & \varepsilon_{c} > \varepsilon_{0} \end{cases} \quad (3)$$

4) The concrete deformation is considered continuous, cracks are not modeled explicitly, and the principle of virtual work applies.
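As a minimal sketch (not the paper's MATLAB code) of how the material laws in Eqs. (1)–(3) can be evaluated for the sectional analysis that follows, the Python functions below implement the three piecewise curves; the steel constants come from Table 1, while the value of $f_c$ and the function names are our assumptions.

```python
# Piecewise material laws from Eqs. (1)-(3); a sketch, not the authors' code.

E_S = 2e5            # steel elastic modulus, MPa (Table 1)
F_Y = 500.0          # HRB500 yield strength, MPa (Table 1)
F_C = 40.0           # assumed design compressive strength, MPa (C40)
EPS_0, EPS_CU, N = 0.002, 0.0033, 2.0

def steel_stress(eps):
    """Ideal elastoplastic steel law, Eq. (1)."""
    eps_y = F_Y / E_S
    if abs(eps) <= eps_y:
        return E_S * eps
    return F_Y if eps > 0 else -F_Y

def plain_concrete_stress(eps):
    """Parabola-plateau law for plain concrete, Eq. (2)."""
    if eps <= 0:
        return 0.0                       # tensile contribution neglected
    if eps <= EPS_0:
        return F_C * (1.0 - (1.0 - eps / EPS_0) ** N)
    return F_C                           # plateau up to eps_cu

def lightweight_concrete_stress(eps):
    """Law for ceramsite/foamed concrete, Eq. (3)."""
    if eps <= 0:
        return 0.0
    if eps <= EPS_0:
        r = eps / EPS_0
        return F_C * (1.5 * r - 0.5 * r * r)
    return F_C
```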
### 4.2 Bending Analysis and Deflection Calculation of the Normal Section

According to the plane section assumption, the sectional stress and strain distributions of a properly reinforced beam in normal service are shown in Fig. 5. Assuming that the height of the compression zone of the section is $x_c$, the strain at a distance $y$ from the neutral axis follows from the plane section assumption as:

$$\varepsilon(y) = \frac{y}{\rho}, \quad (4)$$

where $\rho$ is the radius of curvature and $y$ is the coordinate measured from the neutral axis. From the force balance of the cross section, the following two equilibrium equations are obtained:

$$\sum X = 0: \quad \int_{0}^{y_{c}} \sigma_{c}(y)\, b\, \mathrm{d}y + P_{1} + T_{1} + T_{2} = 0, \quad (5)$$

$$\sum M = 0: \quad M_{u} + \int_{0}^{y_{c}} \sigma_{c}(y)\, b\, y\, \mathrm{d}y + T_{1} y_{t1} + T_{2} y_{t2} + P_{1} y_{p1} = 0, \quad (6)$$

taking the neutral axis as the origin of the ordinate, upward positive and downward negative; $y_{t1}$, $y_{t2}$, and $y_{p1}$ denote the ordinates of the resultant forces of the first-row tension bars, the second-row tension bars, and the compression bars, respectively.

From Eqs. (1)–(6), the relationship between curvature and bending moment is obtained, and the load–deflection curve is then calculated from the principle of virtual work, Eq. (7). As the load increases, the neutral axis of the concrete gradually moves up, so the effective moment of inertia $I_e$ of the section changes accordingly; it can be determined from Eq. (8). For the members considered here, the deformations caused by axial force and shear force are negligible. Based on programming in MATLAB, the theoretical deflection of each beam is computed by the numerical integration method; to ensure sufficient accuracy and computation speed, an integration step of 2 mm was selected after repeated trials.

$$\Delta = \sum \int \frac{\overline{M} M_{P}}{E I_{e}}\, \mathrm{d}s, \quad (7)$$

$$I_{e} = \frac{M_{P}\, \rho}{E}, \quad (8)$$

where $\Delta$ is the deflection; $\overline{M}$ is the bending moment produced on the virtual beam by a unit load; $M_P$ is the sectional bending moment, and $\rho(M)$ is the corresponding radius of curvature; $E$ is the elastic modulus of the reinforced concrete, and $I_e$ is the effective moment of inertia of the section.
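The following is a minimal sketch of the numerical integration in Eq. (7) for a simply supported beam with a single mid-span point load; the moment–curvature relation enters through a stand-in callable, the 2 mm step follows the paper, and the span, load, and stiffness figures are hypothetical.

```python
# Mid-span deflection by the virtual work principle, Eq. (7):
# delta = sum of M_bar * M_P / (E * I_e) ds, with E*I_e = M_P * rho per Eq. (8).

def midspan_deflection(span_m, load_kN, rho_of_M, step=0.002):
    """rho_of_M: callable giving the radius of curvature (m) for a bending
    moment (kN*m); it stands in for the sectional analysis of Eqs. (1)-(6)."""
    delta, n = 0.0, int(span_m / step)
    for i in range(n):
        s = (i + 0.5) * step              # midpoint of the slice
        x = min(s, span_m - s)            # distance to the nearest support
        M_p = 0.5 * load_kN * x           # real moment under a mid-span load
        M_bar = 0.5 * x                   # virtual moment from a unit load
        if M_p > 0.0:
            delta += M_bar * M_p / (M_p * rho_of_M(M_p)) * step  # Eq. (8)
    return delta

# Check against the elastic closed form F*L^3/(48*EI) with a constant EI:
EI = 6.7e3   # kN*m^2, hypothetical uncracked stiffness scale
print(midspan_deflection(1.66, 216.3, lambda M: EI / M))  # ~0.0031 m
```

With a constant `EI` the routine reproduces the textbook result $FL^3/48EI$; in the actual calculation, `rho_of_M` comes from solving Eqs. (5)–(6) at each moment level.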
## 5 Results and Analysis

The calculation results are shown in Fig. 6. The theoretical values for plain concrete and foamed concrete agree well with the experimental values: the absolute error is mostly within 0.2 mm, and the relative error is mostly 10%–20% during the normal service period. The cracks in these two beams remain small during normal service, so the deflection can be calculated from the virtual work principle with little error. Because of the scatter in the strength of the ceramsite itself and its uneven distribution in the concrete, the measured deflections of the ceramsite concrete beams are scattered. The cracks of the ceramsite concrete beams develop rapidly and in large numbers under load, so a large calculation error arises when the plane section assumption and the principle of virtual work are applied.

### 5.1 Verification of the Plane Section Assumption

In order to further verify whether the ceramsite concrete satisfies the plane section assumption, the concrete strain of the beam under different loads was measured with strain gages pasted on the side of the ceramsite concrete beam, and the average sectional strain was computed to obtain the average strain distribution over the section height. The degree of compliance with the plane section assumption can be judged from these curves. Fig. 7 shows the strain distribution over the ceramsite concrete section; the strain point at 150 kN and a section height of 47 mm failed to record data because of a faulty strain gage. Dotted lines are used to connect the data points. Under lower loads (50 kN, 100 kN), the strain distribution over the cross section is essentially linear, corresponding to the elastic stage. When the load reaches 150 kN, the sectional strain begins to deviate from a straight line, and beyond that it departs from the straight line markedly.

### 5.2 Bending Stiffness

In order to analyze the change in the bending stiffness of the beams throughout loading, the sectional bending stiffness is evaluated as:

$$B_{s} = \frac{F L^{3}}{48 \Delta}. \quad (9)$$

As shown in Fig. 8, the bending stiffness of the foamed and plain concrete beams decreases gradually as the load increases. The ceramsite concrete beams show comparatively large scatter because of the uneven strength of the ceramsite, its uneven distribution in the beam, and the complexity and randomness of the bond between the ceramsite interface and the binder; their bending stiffness therefore fluctuates considerably with increasing load. The trend line of the ceramsite beams' stiffness shows the overall behavior more clearly: the stiffness decreases as the load increases. The deflection of the ceramsite concrete beam increases rapidly at 100 kN, and the stiffness drops sharply; a possible reason is that slight cracks formed in the ceramsite concrete during preloading. The bending stiffness of the foamed concrete decreases with increasing load throughout the service period.

Table 3 lists the bending stiffness of the various concrete beams when they reach the normal service limit. At the service limit, the bending stiffness of the foamed concrete beam has changed by about 12% (Table 3), while its density is about 23.4% lower than that of plain concrete, showing that it is a very good lightweight concrete. The material properties and uneven distribution of the ceramsite concrete have a larger effect on the test results, so further research is needed.

Table 3 Limit bending stiffness.

| | Plain concrete | Foamed concrete | Ceramsite concrete |
| --- | --- | --- | --- |
| Deflection limit (mm) | 3.16 | 3.16 | 3.16 |
| Load at deflection limit (kN) | 216.3 | 201.2 | 222.9 |
| Bending stiffness (10^6 N·m2) | 6.52 | 5.88 | 6.75 |
| Initial bending stiffness (10^6 N·m2) | 6.78 | 6.67 | 7.31 |
| Rate of change | 3.83% | 11.84% | 7.70% |

The initial bending stiffness is the bending stiffness at 50 kN.
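A small sketch of how Eq. (9) converts load–deflection pairs into the secant stiffness plotted in Fig. 8; the span and the deflection history below are hypothetical placeholders, since neither the span nor the raw measurements appear in this excerpt.

```python
# Secant bending stiffness from Eq. (9): B_s = F * L^3 / (48 * delta).
SPAN = 1.66   # m -- hypothetical; the actual span is not given here

def bending_stiffness(load_N, deflection_m, span_m=SPAN):
    return load_N * span_m ** 3 / (48.0 * deflection_m)

# Illustrative load (kN) / deflection (mm) pairs, not measured data:
history = [(50, 0.72), (100, 1.45), (150, 2.21), (200, 3.01)]
for F_kN, d_mm in history:
    Bs = bending_stiffness(F_kN * 1e3, d_mm * 1e-3)
    print(f"F = {F_kN:3d} kN -> B_s = {Bs / 1e6:.2f} x 10^6 N*m^2")
```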
### 5.3 Failure Analysis

As shown in Table 4, the foamed concrete beam test ended when the deflection reached the service limit, whereas the ceramsite concrete beam broke suddenly with the development of a shear crack. The ceramsite concrete suffered brittle shear failure at 225 kN, demonstrating that its bearing capacity is significantly lower than that of plain concrete. In plain concrete the strength of the coarse aggregate is higher than the strength of the interface between the cement matrix and the coarse aggregate (Xiao et al., 2013), so the failure of concrete generally starts from the interface. In ceramsite concrete, however, the strength of the ceramsite is lower than that of the cement matrix and the interface, so failure of the ceramsite concrete beam starts with cracking of the coarse aggregate and proceeds until the crack penetrates the entire oblique section. These phenomena are consistent with the view proposed by Zhang and Gjvorv (1991) that aggregate strength is the main factor affecting the strength of lightweight aggregate concrete.

Table 4 Experimental results of the lightweight beams.

| | Foamed concrete beam | Ceramsite concrete beam |
| --- | --- | --- |
| μr | 1.13 | 0.94 |
| Energy absorption ratio | 1.03 | 0.92 |
| Failure mode | Service limit | Shear crack |

### 5.4 Displacement Ductility and Energy Absorption

Ductility is often used to characterize a structure's capacity to resist inelastic deformation. The literature offers a variety of models for calculating displacement ductility factors. The Park model (Gulec et al., 2021; Park, 1988) is a commonly used one; it defines the ductility factor as the ratio of the failure-point displacement to the yield-point displacement (Khatib et al., 2020). Referring to the Park model and to the service limit requirements of the beams, this paper defines the displacement ductility factor $\mu$ as the ratio of the service displacement $\Delta_u$ of the beam to the corresponding equivalent elastoplastic yield displacement $\Delta_y$ (Eq. 10). To compare the ductility of the lightweight concrete beams with that of the plain concrete beam, the relative ductility ratio $\mu_r$ is further defined in Eq. (11):

$$\mu = \frac{\Delta_{u}}{\Delta_{y}}, \quad (10)$$

$$\mu_{r} = \frac{\mu_{l}}{\mu_{p}}, \quad (11)$$

where $\mu_l$ and $\mu_p$ denote the ductility factors of the lightweight concrete beams and the plain concrete beam, respectively. The average relative ductility ratios of the two kinds of lightweight concrete beams are shown in Table 4. Compared with the plain concrete beam, the ductility of the foamed concrete beams is 1.13 times as large, so the foamed concrete beam has the higher ductility. The ductility of the ceramsite concrete beam is slightly lower than that of the plain concrete beam, which indicates that replacing the coarse aggregate with ceramsite reduces the weight of the beam but also reduces its ductility.

The energy absorption capacity of a beam reflects its resistance to inelastic deformation. It can be obtained by calculating the area under the force–displacement curve (Gulec et al., 2021), computed as the sum of the trapezoidal areas between successive points (Eq. 12):

$$\text{Energy absorption} = 0.5 \sum_{i=1}^{n-1} (d_{i+1} - d_{i})(F_{i+1} + F_{i}), \quad (12)$$

where $d_i$ is the displacement (mm), $F_i$ is the load (kN) at that displacement, and $n$ is the number of displacement points.
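A short sketch of Eqs. (10)–(12) applied to a load–displacement record; the arrays below are placeholders, not the measured curves.

```python
# Ductility factor (Eq. 10) and trapezoidal energy absorption (Eq. 12).

def ductility(delta_u, delta_y):
    return delta_u / delta_y

def energy_absorption(d, F):
    """0.5 * sum of (d[i+1] - d[i]) * (F[i+1] + F[i]) over the record."""
    return 0.5 * sum((d[i + 1] - d[i]) * (F[i + 1] + F[i])
                     for i in range(len(d) - 1))

# Placeholder record: displacement (mm) and load (kN)
d = [0.0, 0.5, 1.0, 2.0, 3.16]
F = [0.0, 45.0, 90.0, 160.0, 216.3]
print(energy_absorption(d, F), "kN*mm")        # area under the curve
print(ductility(delta_u=3.16, delta_y=2.3))    # illustrative values only
```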
The energy absorption ratio is defined as the ratio of the energy absorbed by a lightweight concrete beam to that absorbed by the plain concrete beam, so that the energy absorption of the lightweight and plain concrete beams can be compared. The average energy absorption of the lightweight concrete beams is shown in Table 4. The energy absorption capacity of the ceramsite concrete beam is the lowest of all the beams, while that of the foamed concrete beam is 3% higher than that of the plain concrete beam.

## 6 Conclusion

This paper obtains the mix ratio of lightweight concrete by replacing the coarse aggregate of high-strength concrete with lightweight ceramsite or foam, based on the C60 concrete mix ratio. Lightweight concrete beams were then fabricated and three-point bending tests carried out. From the experiments and theoretical analysis, the following conclusions can be drawn.

(1) The mechanical properties of the C40 foamed concrete beams are similar to those of the plain concrete beams. Compared with the plain concrete beams, the density of the foamed concrete was lower by 23.4%; moreover, the ductility and toughness of the foamed concrete beams were higher by 13% and 3%, respectively.

(2) Considering the nonlinear deformation characteristics of reinforced concrete beams, a theoretical method for calculating beam deflection is proposed based on the plane section assumption and the principle of virtual work. Within the normal-service deflection limits, the calculated results agree well with the measured deflections of the plain concrete and foamed concrete beams; the absolute error is mostly within 0.2 mm, and the relative error is mostly 10%–20% during the normal service period. The calculated values can therefore serve as a design reference for the deflection of foamed concrete beams in normal use.

(3) The mechanical properties of the C40 ceramsite concrete beams show comparatively large scatter. This may be caused by the scatter in the strength of the ceramsite, the uneven distribution of ceramsite in the concrete beam, and the complexity and randomness of the bond between the ceramsite interface and the binder. Close attention should therefore be paid to these points in subsequent research and design.

## Acknowledgements

This work was financially supported by the National Key Scientific Instruments and Equipment Development Projects of China (Grant No. 41827807), the Open Research Fund of the State Key Laboratory of Geomechanics and Geotechnical Engineering, Institute of Rock and Soil Mechanics, Chinese Academy of Sciences (Grant No. Z018019), the State Key Laboratory of Subtropical Building Science in China (Grant No. 2017KB16), and the Guangdong Provincial Key Laboratory of Modern Civil Engineering Technology in China (Grant No. 2021B1212040003).

## Declarations

### Competing interests

The authors declare that they have no competing interests.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

Bawab, J., Khatib, J., Jahami, A., Elkordi, A., & Ghorbel, E. (2021). Structural performance of reinforced concrete beams incorporating cathode-ray tube (CRT) glass waste. Buildings, 11(2), 67.

Chen, M., Li, Z., Wu, J., et al. (2020). Shear behaviour and diagonal crack checking of shale ceramsite lightweight aggregate concrete beams with high-strength steel bars. Construction and Building Materials, 249, 118730.

Chen, Y., Hui, Q., Zhang, H., Zhu, Z., & Wang, C. (2020). Experiment and application of ceramsite concrete used to maintain roadway in coal mine. Measurement and Control, 53, 1832–1840.

Chen, Z., Liu, J., & Yu, Y. (2017). Experimental study on interior connections in modular steel buildings. Engineering Structures, 147, 625–638.

Elrahman Abd, M., Chung, S., & Stephan, D. (2019). Effect of different expanded aggregates on the properties of lightweight concrete. Magazine of Concrete Research, 71, 95–107.

Elrahman Abd, M., Madawy El, M., Chung, S., Sikora, P., & Stephan, D. (2019). Preparation and characterization of ultra-lightweight foamed concrete incorporating lightweight aggregates. Applied Sciences, 9, 1447.

Gong, J., Zeng, W., & Zhang, W. (2018). Influence of shrinkage-reducing agent and polypropylene fiber on shrinkage of ceramsite concrete. Construction and Building Materials, 159, 155–163.

Gulec, A., Kose, M. M., & Gogus, M. T. (2021). Experimental investigation of flexural performance of T-section prefabricated cage reinforced beams with self-compacting concrete. Structures. https://doi.org/10.1016/j.istruc.2021.05.074

Hamidian, M. R., & Shafigh, P. (2021). Post-peak behaviour of composite column using a ductile lightweight aggregate concrete. International Journal of Concrete Structures and Materials. https://doi.org/10.1186/s40069-020-00453-6

Huang, W., Hu, G., Miao, X., & Fan, Z. (2021). Seismic performance analysis of a novel demountable precast concrete beam-column connection with multi-slit devices. Journal of Building Engineering, 44, 102663.

Jahami, A., Khatib, J., Baalbaki, O., & Sonebi, M. (2019). Prediction of deflection in reinforced concrete beams containing plastic waste. SSRN J. https://doi.org/10.2139/ssrn.3510113

Jones, M. R., & McCarthy, A. (2005). Preliminary views on the potential of foamed concrete as a structural material. Magazine of Concrete Research, 57(1), 21–31.

Karamloo, M., Afzali-Naniz, O., & Doostmohamadi, A. (2020). Impact of using different amounts of polyolefin macro fibers on fracture behavior, size effect, and mechanical properties of self-compacting lightweight concrete. Construction and Building Materials, 250, 118856.

Khatib, J. M., Jahami, A., Elkordi, A., Abdelgader, H., & Sonebi, M. (2020). Structural assessment of reinforced concrete beams incorporating waste plastic straws. Environments, 7(11), 96.

Kozłowski, M., & Kadela, M. (2018). Mechanical characterization of lightweight foamed concrete.
Advances in Materials Science & Engineering, 2018, 1–8.

Kyriakopoulos, P., Peltonen, S., Vayas, I., Spyrakos, C., & Leskela, M. V. (2021). Experimental and numerical investigation of the flexural behavior of shallow floor composite beams. Engineering Structures, 231, 111734.

Lee, J., Kang, S., Ha, Y., & Hong, S. (2018). Structural behavior of durable composite sandwich panels with high performance expanded polystyrene concrete. International Journal of Concrete Structures and Materials. https://doi.org/10.1186/s40069-018-0255-6

Lee, Y. L., Lim, J. H., Lim, S. K., & Tan, C. S. (2018). Flexural behaviour of reinforced lightweight foamed mortar beams and slabs. KSCE Journal of Civil Engineering, 22(8), 2880–2889.

Lee, K., Yang, K., Mun, J., & Tuan, N. (2019). Effect of sand content on the workability and mechanical properties of concrete using bottom ash and dredged soil-based artificial lightweight aggregates. International Journal of Concrete Structures and Materials. https://doi.org/10.1186/s40069-018-0306-z

Lim, H. (2007). Structural response of LWC beams in flexure. National University of Singapore.

Liu, X., Meng, K., Zhang, A., Zhu, T., & Yu, C. (2021). Bearing capacity of H-section beam wrapped with ceramsite concrete. Steel and Composite Structures, 40(5), 679–696.

Liu, Y., Ma, H., Li, Z., & Wang, W. (2020). Seismic behaviour of full-scale prefabricated RC beam–CFST column joints connected by reinforcement coupling sleeves. Structures. https://doi.org/10.1016/j.istruc.2020.10.066

Lotfy, A., Hossain, K. M. A., & Lachemi, M. (2015). Lightweight self-consolidating concrete with expanded shale aggregates, modelling and optimization. International Journal of Concrete Structures and Materials, 9, 185–206.

Nguyen, D. H., & Hong, W. K. (2020). A novel erection technique of the L-shaped precast frames utilizing laminated metal plates. Journal of Asian Architecture and Building Engineering. https://doi.org/10.1080/13467581.2020.1841644

Park, R. (1988). Ductility evaluation from laboratory and analytical testing. In Proceedings of the 9th World Conference on Earthquake Engineering, Tokyo-Kyoto, Japan, Vol. 8.

Porter, J. H., Cain, T. M., Fox, S. L., & Harvey, P. S. (2019). Influence of infill properties on flexural rigidity of 3D-printed structural members. Virtual and Physical Prototyping, 14(2), 148–159.

Savoia, M., Buratti, N., & Vincenzi, L. (2017). Damage and collapses in industrial precast buildings after the 2012 Emilia earthquake. Engineering Structures, 137, 162–180.

Shah, Y. I., Hu, Z., Yin, B. S., & Li, X. (2021). Flexural performance analysis of UHPC wet joint of prefabricated bridge deck. Arabian Journal for Science and Engineering. https://doi.org/10.1007/s13369-021-05735-z

Tian, Y., Yan, X., Zhang, M., Yang, T., & Zhang, J. (2020). Effect of the characteristics of lightweight aggregates presaturated polymer emulsion on the mechanical and damping properties of concrete. Construction and Building Materials, 253, 119154.

Vakhshouri, B., & Nejadi, S. (2018). Size effect and age factor in mechanical properties of BST lightweight concrete. Construction and Building Materials, 177, 63–71.

Wang, S., & Tan, K. H. (2021). Flexural performance of reinforced carbon nanofibers enhanced lightweight cementitious composite (CNF-LCC) beams. Engineering Structures, 238, 112221.

Xiao, J., Li, W., Corr, D. J., & Shah, S. P. (2013).
Effects of interfacial transition zones on the stress–strain behavior of modeled recycled aggregate concrete. Cement and Concrete Research, 52, 82–99.

Xie, J., Huang, L., Guo, Y., Li, Z., Fang, C., Li, L., & Wang, J. (2018). Experimental study on the compressive and flexural behaviour of recycled aggregate concrete modified with silica fume and fibres. Construction and Building Materials, 178, 612–623.

Yang, K. H., Lee, Y., & Joo, D. B. (2016). Flexural behavior of post-tensioned lightweight concrete continuous one-way slabs. International Journal of Concrete Structures and Materials, 10(4), 425–434.

Yu, X. Q., Lin, M., Geng, G. L., Li, Y. H., & Jia, L. (2013). Preparation test study on high strength lightweight concrete in engineering materials. Advanced Materials Research. https://doi.org/10.4028/www.scientific.net/AMR.648.55

Zhang, M. H., & Gjvorv, O. E. (1991). Mechanical properties of high-strength lightweight concrete. Materials Journal, 88(3), 240–247.

Title: Experiment and Analysis of Mechanical Properties of Lightweight Concrete Prefabricated Building Structure Beams
Authors: Yingguang Fang, Yafei Xu, Renguo Gu
Publication date: 01.12.2022. Publisher: Springer Singapore
Published in: International Journal of Concrete Structures and Materials, Issue 1/2022
Print ISSN: 1976-0485. Electronic ISSN: 2234-1315
DOI: https://doi.org/10.1186/s40069-021-00496-3
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113915324211121, "perplexity": 3124.300782405425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00317.warc.gz"}
https://electronics.stackexchange.com/questions/45456/transfer-function-and-bode-plot-from-poles-and-zeroes
# Transfer Function and Bode Plot from Poles and Zeroes

I've got the following Bode plot from a black box:

I've calculated the zero to be 1055 and the pole to be 67. Thus I'm using the transfer function $H(s) = (s-1055)/(s-67)$ but this gives a clearly wrong Bode plot: WolframAlpha calculation

I'm guessing my transfer function is wrong. Can anyone see how?

The plot from Alpha looks pretty close to your "black box" measurement to me. The only difference is a pre-factor. Try $\frac{67}{1055}\frac{s-1055}{s-67}$:

First, note that the transfer function you give has the pole (and the zero too) in the right half-plane, i.e., the system it describes is unstable. I suspect you meant: $H(s) = \dfrac{s + 1055}{s+67}$

However, even this is not in standard form. Putting this transfer function into standard form yields:

$H(s) = \dfrac{s + 1055}{s + 67} = \dfrac{1055}{67} \dfrac{\frac{s}{1055} + 1}{\frac{s}{67} + 1}$

So, now you see where the undesired gain has come from. Knowing (only) the pole and zero, you should guess instead:

$H(s) =\dfrac{\frac{s}{1055} + 1}{\frac{s}{67} + 1}$

Actually, you have half of a Bode plot from a black box. A Bode plot shows magnitude and phase. You cannot do the fit you're trying to do without employing phase information. More accurately, you can, but it might be difficult to interpret your results. If you had your phase, you'd have more confidence in your fit.

• I was more worried about the 24 dB gain that my transfer function had. The phase information wasn't important and I shouldn't have included it. – Sticky Oct 31 '12 at 9:18
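As a quick numerical check of the normalized transfer function suggested in the answers, here is a short scipy sketch (frequencies in rad/s, per the corner values quoted in the question):

```python
import numpy as np
from scipy import signal

# H(s) = (s/1055 + 1) / (s/67 + 1)  ==  (67/1055) * (s + 1055) / (s + 67)
H = signal.TransferFunction([1 / 1055, 1], [1 / 67, 1])

w = np.logspace(0, 5, 500)                 # rad/s
w, mag, phase = signal.bode(H, w)

print(f"low-frequency gain:  {mag[0]:6.2f} dB")   # ~0 dB
print(f"high-frequency gain: {mag[-1]:6.2f} dB")  # 20*log10(67/1055) ~ -24 dB
```

The roughly 24 dB difference between the two plateaus is exactly the pre-factor discussed above.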
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9006001353263855, "perplexity": 782.9543736837717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315811.47/warc/CC-MAIN-20190821065413-20190821091413-00134.warc.gz"}
http://tex.stackexchange.com/questions/7966/aligning-text-vertically-in-tikz
# Aligning text vertically in TikZ

I need to know the correct way of handling vertical alignment in TikZ. Let me elaborate with an example:

\documentclass{article}
\usepackage{amsmath,tikz}
\begin{document}
\begin{tikzpicture}
\node (A1) at (0,4) {($\overset{\surd}{1}$ \indent 2)};
\node (B1) at (0,3) {($\overset{\surd}{3}$ \indent 4)};
\node (A) at (2,4) {$\phi_1$} ;
\node (B) at (2,3) {$\phi_2$} ;
\draw [->, line width=1pt] (A) -- (A1) node [near start, above] { \footnotesize{R1} };
\node (A2) at (6,4) {$7^1$ \indent $8^1$ \indent $3^1$ \indent $4^1$\indent $\parallel$ 2};
\node (B2) at (6,3) {$1^3$ \indent $2^3$ \indent $5^3$ \indent $6^3$\indent $\parallel$ 4};
%fake caption
\node at (8,4.5) {fake};
\end{tikzpicture}
\end{document}

Which results in this:

I just want to know: is there any way to force the lines onto a common bottom line here, so that $\phi_1$ and the material to its left would be aligned? Should I use phantoms to do so, or is there anything better I am unaware of?

The next point is about the word "fake" in the top right corner. I want it over the "2" but I have placed it as a node. I could use \overset{fake}{\overline{2}} to have "fake" above the "2" but it pushes the line to the left, which is not desired. And as you might have guessed, I also need a line between "fake" and "2"; should I use \overline{2}?

- I assume you're using amsmath for the \overset. To get the alignment right, stick a [every node/.style = {anchor = base}] as an argument to {tikzpicture}, and to make the arrow from \phi work, you should change its target to (A-|A1.east) (this specifies the point which is at the intersection of a horizontal line from A and a vertical line through A1.east, which gives a nice flat arrow).

For your second question, I suggest you abandon your current approach and use some TikZ libraries. In particular, you should use chains to get nodes in a line easily and with any spacing you like, and positioning to put the "fake" in the right place. Once you do that, you can use a \draw command to get the line you want. I've also used scopes to simplify the code. It allows me to write braces {...} instead of \begin{scope}...\end{scope}. (If you don't know: scopes in TikZ allow you to pass optional arguments which take effect locally.)

Final code:

\documentclass{article}
\usepackage{amsmath,tikz}
\usetikzlibrary{chains, scopes, positioning}
\begin{document}
\begin{tikzpicture} [every node/.style = {anchor = base}]
\node (A1) at (0,4) {($\overset{\surd}{1}$ \quad 2)};
\node (B1) at (0,3) {($\overset{\surd}{3}$ \quad 4)};
\node (A) at (2,4) {$\phi_1$} ;
\node (B) at (2,3) {$\phi_2$} ;
\draw [->, line width=1pt] (A) -- (A-|A1.east) node [near start, above] { \footnotesize{R1} };
{ [start chain, every node/.append style = on chain, node distance = 1em]
\node at (4,4) {$7^1$};
\node {$8^1$};
\node {$3^1$};
\node {$4^1$};
\node {$\parallel$};
\node (two) {$2$};
}
{ [start chain, every node/.append style = on chain, node distance = 1em]
\node at (4,3) {$1^3$};
\node {$2^3$};
\node {$5^3$};
\node {$6^3$};
\node {$\parallel$};
\node {$4$};
}
\node (fake) [above = 1ex of two.north] {fake};
\draw (fake.south east) -- (fake.south west);
\end{tikzpicture}
\end{document}

- Thank you for your answer. I updated the code now to reflect a full TeX document. The first problem is solved, but how about the "fake" keyword and the line below it? Do you have any suggestion? – Yasser Sobhdel Dec 30 '10 at 10:44

Included now; see the second paragraph plus the updated example.
– Ryan Reich Dec 30 '10 at 11:19

@Ryan Reich: Can you please tell me how I can mark up a code block like that? – user1996 Dec 30 '10 at 11:20

@an_ant: I used the scopes library. I've edited my answer to explain. – Ryan Reich Dec 30 '10 at 11:39

@an_ant: Oh, I see. You put your code block inside a list; apparently, you need to indent it four additional spaces. You can use the preview to make sure it looks right before committing. – Ryan Reich Dec 30 '10 at 12:12

This exact problem is discussed in the pgfmanual (you can also read it by running texdoc pgf in bash) in "3 Tutorial: A Petri-Net for Hagen" (pg. 37) and in "5 Tutorial: Putting a Diagram in Chains" (pg. 56). Two placing methods discussed there are:

• 3.8 Placing Nodes Using Relative Placement - works like this. You use

\node (waiting) {};
\node (critical) [below of waiting] {};
\node (leave critical) [right of waiting] {};

to place the first node, and every following one in relative position to that first one, or to any other node you name.

• 5.3 Aligning the Nodes Using Matrices - is done this way:

\matrix[row sep=1mm,column sep=5mm] {
% First row:
& & & & \node [terminal] {+}; & \\
% Second row:
\node [nonterminal] {unsigned integer}; & \node [terminal] {.}; & \node [terminal] {digit}; & \node [terminal] {E}; & & \node [nonterminal] {unsigned integer}; \\
% Third row:
& & & & \node [terminal] {-}; & \\
};

You actually add nodes in a matrix, like you would add values in tables - members of rows and columns are therefore aligned appropriately. In combination with row sep, column sep, and leaving matrix cells empty, you can modify the spaces between particular items in the matrix.

- For this matter, I'd prefer placing nodes using coordinates and not using the matrix. About the relative placement, I don't think it would solve the baseline problem, actually, if there are imbalanced nodes (in height). I have experience of using relative coordinates, and it made me use phantoms to level that up (maybe my bad!). – Yasser Sobhdel Dec 30 '10 at 11:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381410479545593, "perplexity": 2803.4811006778555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500813887.15/warc/CC-MAIN-20140820021333-00216-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-math-topics/3497-proof.html
# Math Help - Proof

1. ## Proof

I have to prove the following:

Prove the Euclidean parallel postulate iff parallel lines are equidistant from one another.

My hint is this: Use the converse to Theorem 4.1, since it can be proved assuming neutral geometry + the Euclidean parallel postulate. "Parallel lines are equidistant from one another" means that the lengths of all perpendicular segments "dropped" from a point on one line to the other line are congruent [equal in length].

2. Originally Posted by Nichelle14
Prove the Euclidean parallel postulate iff parallel lines are equidistant from one another
Please define for me in rigorous geometric terms what it means that two lines are equidistant.

3. The best I can do for that is say that they are the same measurement apart. I don't think I can give you an answer. Maybe someone else can.

4. Observe the picture. As I understand it, the equidistance postulate states that:
AB is perpendicular to line AD
DC is perpendicular to line BC
Also, equidistance means AB = DC.
Note the triangles ABD and BDC. They satisfy Hypotenuse-Leg. Thus, they are congruent. Thus, <ADB = <BDC. Thus the parallel postulate is proved.
Note: the Hypotenuse-Leg congruence is independent of the parallel postulate; if it were not, then you could not use it.
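A compact restatement of the congruence argument in post #4; since the attached picture is not reproduced here, the exact labeling of the equal angles is an assumption:

$$AB \perp AD,\quad DC \perp BC,\quad AB = DC,\quad BD = BD \;\Rightarrow\; \triangle ABD \cong \triangle CDB \ \text{(Hypotenuse-Leg)}$$

$$\Rightarrow\; \angle ADB = \angle DBC \;\Rightarrow\; AD \parallel BC \ \text{(equal alternate interior angles)}$$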
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081746935844421, "perplexity": 1448.2193549891501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678695499/warc/CC-MAIN-20140313024455-00095-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/204923-question-bias-estimators.html
# Math Help - Question on Bias and Estimators

1. ## Question on Bias and Estimators

Hi all, this is my problem, hopefully the image is clear.

To be honest, I'm stuck at finding the bias. I understand I need to work out the expectation (though that seems difficult, so help would be welcome), but what does it mean by subtracting sigma, when sigma is unknown? Thanks

2. ## Re: Question on Bias and Estimators

Hey Mick. Since you have been given a distribution for S^2 in terms of chi-square, you want to find the expectation of S by itself. Recall that the expectation of a function of a random variable, E[g(X)], for a continuous distribution is Integral (-infinity, infinity) g(x)f(x)dx. Now let X be the chi-square variable, with S^2 = [sigma^2/(n-1)]*X. Let g(X) = SQRT(X), and given the expectation theorem you can find out what E[SQRT(S^2)] = E[S] is. Also be aware that n and sigma^2 are constants and not random variables, so E[S^2] = [sigma^2/(n-1)]*E[X] where X has the chi-square distribution with n-1 degrees of freedom. Also remember that the expectation of a known distribution always returns some number, whether it's an actual number or just an expression in terms of non-random constants.

3. ## Re: Question on Bias and Estimators

Thanks for the reply. So I'm at the integral of g(x)f(x); you've defined g(X) = SQRT(X), which I understand. But where do you take f(x) from? I guess either the normal distribution function or the inside of the square root of S?

4. ## Re: Question on Bias and Estimators

f(x) will be the chi-square distribution with (n-1) degrees of freedom, and E[S] = E[SQRT(S^2)] = SQRT(sigma^2/(n-1)) * E[SQRT(X)] where X ~ chi-square(n-1).

5. ## Re: Question on Bias and Estimators

Thanks, I'm making progress I think, albeit very slowly! But there are some parts of it I don't understand. So I'm going to post a picture of my workings, and if you could tell me what's badly wrong, or just point me to how to continue it, I'd be really grateful. It looks like it can be integrated by parts, but I'm really bad at integration, and I've never been good with the gamma distribution either.

6. ## Re: Question on Bias and Estimators

Once you get to this step, use the fact that the integral of the PDF over the whole domain is equal to 1. Now you haven't got a 1/(Gamma(k)*theta^k) in there, so you need to balance the equation by factoring in this term, but that's all there is to do to evaluate the integral. More specifically, if r = 1/(Gamma(k)*theta^k) and I is the integral term (without the constant c), then r*I = 1 if theta = 2 and k = n/2, so this means I = 1/r and then you can go from there. Also recall that you are trying to calculate E[S], where E[S] = SQRT(sigma^2/(n-1)) * E[SQRT(X)] with X ~ chi-square(n-1), so don't forget to factor in the SQRT(sigma^2/(n-1)) at the end (and the goal is to check whether E[S] = sigma for unbiasedness, or something else for biasedness).

7. ## Re: Question on Bias and Estimators

I'm sorry, I'm just confused. Can you tell me what I should be aiming for, what this integral should equal, because I think once I've got this the rest of the question should be straightforward. Thanks

8. ## Re: Question on Bias and Estimators

You want to check whether E[S] = sigma for S to be an unbiased estimator of sigma, and we know that E[S] = SQRT(sigma^2/(n-1)) * E[SQRT(X)] where X ~ chi-square with n-1 degrees of freedom. Now in your working you get E[SQRT(X)] down to the form of a gamma distribution PDF multiplied by some constant, but since you have the PDF in the integral, you can balance it by calculating the appropriate constant: c*Integral = 1, since integrating a PDF over the whole domain will always give 1.
So if you balance this you will get Integral = 1/c for some constant (hint: look at the definition of the gamma distribution to find out what the c is in your case). Then once you get the final result for E[SQRT(X)] above, substitute in and see whether you get E[S] = sigma or something else, and make a conclusion based on that.
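For reference, the gamma integral discussed in this thread works out to $E[S] = \sigma\sqrt{2/(n-1)}\;\Gamma(n/2)/\Gamma((n-1)/2)$, and a short numerical check (the sample size and sigma below are arbitrary) is:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
n, sigma, reps = 10, 2.0, 200_000

# Monte Carlo: sample S from normal data and average it
x = rng.normal(0.0, sigma, size=(reps, n))
s = x.std(axis=1, ddof=1)            # S with the 1/(n-1) convention
print("Monte Carlo E[S]:", s.mean())

# Closed form from the gamma integral:
c4 = np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))
print("Exact E[S]:", sigma * c4, " bias:", sigma * (c4 - 1.0))
```

Both agree that E[S] < sigma, i.e., S is a (slightly) biased estimator of sigma.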
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655706286430359, "perplexity": 563.1371414076036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379555.49/warc/CC-MAIN-20141119123259-00146-ip-10-235-23-156.ec2.internal.warc.gz"}
https://cs.stackexchange.com/tags/complexity-classes/hot?filter=all
# Tag Info

361 I think the Wikipedia articles $\mathsf{P}$, $\mathsf{NP}$, and $\mathsf{P}$ vs. $\mathsf{NP}$ are quite good. Still, here is what I would say: Part I, Part II [I will use remarks inside brackets to discuss some technical details which you can skip if you want.] Part I Decision Problems There are various kinds of computational problems. However in an ...

178 Part II Continued from Part I. The previous one exceeded the maximum number of letters allowed in an answer (30000) so I am breaking it in two. $\mathsf{NP}$-completeness: Universal $\mathsf{NP}$ Problems OK, so far we have discussed the class of efficiently solvable problems ($\mathsf{P}$) and the class of efficiently verifiable problems ($\mathsf{...

68 It is known that P $\subseteq$ NP $\subset$ R, where R is the set of recursive languages. Since R is countable and P is infinite (e.g. the languages $\{n\}$ for $n \in \mathbb{N}$ are in P), we get that P and NP are both countable.

40 Let's refresh the definitions. PSPACE is the class of problems that can be solved on a deterministic Turing machine with polynomial space bounds: that is, for each such problem, there is a machine that decides the problem using at most $p(n)$ tape cells when its input has length $n$, for some polynomial $p$. EXP is the class of problems that can ...

37 No, there will be absolutely no implication, for several reasons: The P vs. NP problem is about classical computation rather than quantum computation. Even if quantum computers could solve NP-hard problems in polynomial time (which we don't expect them to be able to do), it could still be the case that classical computers cannot solve them in polynomial ...

27 $k$-SUM can be solved more quickly as follows. For even $k$: Compute a sorted list $S$ of all sums of $k/2$ input elements. Check whether $S$ contains both some number $x$ and its negation $-x$. The algorithm runs in $O(n^{k/2}\log n)$ time. For odd $k$: Compute the sorted list $S$ of all sums of $(k-1)/2$ input elements. For each input element $a$, ...

26 Beyond the useful answers already mentioned, I highly recommend watching "Beyond Computation: The P vs NP Problem" by Michael Sipser. I think this video should be archived as one of the leading teaching videos in computer science! Enjoy!

17 Just elaborating on what @Lamine pointed out: when $k$ is part of the input, $k$ can be as large as $\frac{n}{2}$, in which case the number of potential clique sets is $\binom{n}{\frac{n}{2}}$, which is at least $\left( \frac{n}{\frac{n}{2}} \right)^{\frac{n}{2}}$. Hence your naive algorithm would take time $2^{\frac{n}{2}}$, which is clearly exponential in ...

17 No. It is another open problem and certainly related, but different. The complexity class co-$\mathsf{NP}$ is the set of languages whose complements are in $\mathsf{NP}$; that is, the set of decision problems for which a "no" answer has a deterministic polynomial-time verifier. So for example, consider the question "Is this SAT formula unsatisfiable?" If the answer ...

17 It is maybe easier to consider the contrapositive, that is ${\sf P}={\sf NP} \Rightarrow {\sf NP}={\sf coNP}$. So assume ${\sf P}={\sf NP}$; then for every $L\in {\sf NP}$, we have $L\in {\sf P}$, and since the languages in ${\sf P}$ are closed under complement, $\bar L\in {\sf P}$ and therefore $L\in {\sf coNP}$. For every $L\in {\sf coNP}$, we have $\...
17 No implications are known either way: classical simulation of quantum computers tells us nothing about how hard NP search problems are; fast solutions to NP search problems tell us nothing about how fast quantum computers can be simulated classically. The following scenarios are possible: $P=NP=BQP$, $P=NP\subsetneq BQP$, $P\subsetneq NP=BQP$, $P\subsetneq NP\...

16 If a problem is NP-hard, it means that there exists a class of instances of that problem which are NP-hard. It is perfectly possible for other specific classes of instances to be solvable in polynomial time. Consider for example the problem of finding a 3-coloring of a graph. It is a well-known NP-hard problem. Now imagine that its instances are restricted ...

16 All the classes you mention are classes of languages, formally, even if P and NP are often discussed in different (sloppier?) terms. Note that terminology revolving around decision problems is equivalent to formal languages; the decision is always whether a word is in the given language, i.e. the problem is to solve the word problem. What you need to do ...

16 Any problem in NP is in EXPTIME because you can either use exponential time to try all possible certificates or to enumerate all possible computation paths of a nondeterministic machine. More formally, there are two main definitions of NP. One is that a language $L$ is in NP iff there is a relation $R$ such that there is a polynomial $p$ such ...

15 First of all, the question you are asking is open, since an affirmative answer shows that $\sf NP = coNP$. In fact it is one of the most prominent open problems in computer science. If $\sf P= NP$, then the class $\sf NP$ is closed under complement since $\sf P$ is. If on the other hand $\sf P \not = NP$ then we cannot say whether $\sf NP = coNP$ or not. ...

14 It depends on what definitions you use. Sipser [1] defines $\mathrm{SPACE}(f(n))$ to be the class of languages decided by Turing machines using $O(f(n))$ cells on their work tapes for inputs of length $n$. Papadimitriou [2], on the other hand, defines it to be the class of languages decided by Turing machines using at most $f(n)$ cells on the ...

13 Let me answer your questions in order: By definition, a problem has an FPTAS if there is an algorithm which on instances of length $n$ gives a $(1+\epsilon)$-approximation and runs in time polynomial in $n$ and $1/\epsilon$, that is $O((n/\epsilon)^C)$ for some constant $C \geq 0$. A running time of $2^{1/\epsilon}$ doesn't belong to $O((n/\epsilon)^C)$ for ...

12 Vor's answer gives the standard definition. Let me try to explain the difference a bit more intuitively. Let $M$ be a bounded-error probabilistic polynomial-time algorithm for a language $L$ that answers correctly with probability at least $p\geq\frac{1}{2}+\delta$. Let $x$ be the input and $n$ the size of the input. What distinguishes an arbitrary $\...
It finds the minimum among $a_1, \dots, a_n$ (that for which, inevitably, $x_i=1$ in an optimal ...

12 The fact that P ≠ NP does not preclude the possibility that NP = co-NP, in which case NP ∩ co-NP = NP. So to further the discussion, let us assume that NP ≠ co-NP. In that case, Corollary 9 in Schöning's A uniform approach to obtain diagonal sets in complexity classes shows that there exists some language in NP – co-NP which is NP-intermediate. So NPI ...

11 $\oplus P^{\oplus P}$ denotes the class $\oplus P$ equipped with what's known as an oracle for $\oplus P$ — we say that it has been given the ability to determine whether or not a string $s$ is a member of a language $L$ contained in the class $\oplus P$ in a single operation. I see that another commenter (sdcwc) has linked to the proof of $\oplus P^{\oplus ...

11 Your problem is known as the $\text{UNIQUE-SAT}$ problem, which is $\mathsf{US}$-complete. The problem is in $\mathsf{D^p}$ but not known to be $\mathsf{D^p}$-hard under deterministic polynomial-time reductions, where the class $\mathsf{D^p} = \{ L_1 \cap \overline{L_2} \mid L_1,L_2 \in \mathsf{NP} \}$. It was shown by Papadimitriou and Yannakakis [1] that ...

11 There are very many natural complete problems for $\Pi_2^p$, and there is a survey [1] on completeness for levels of the polynomial hierarchy, containing many such problems. The paper On the complexity of min-max optimization problems and their approximation [2] contains a nice overview of "min-max problems" with several proofs of completeness. The latter ...

11 Your prof was absolutely not rigorous (i.e. completely wrong); that's why the distinction between NP and co-NP doesn't make sense with his definition. Better definition: A decision problem (that is, a problem where the result is either YES or NO and nothing else) is in NP if for every instance where the result is YES there exists a hint such that we ...

10 That looks correct to me. The difference between BPP and PP is that for BPP the probability has to be greater than $1/2$ by a constant, whereas for PP it could be $1/2+ 1/2^n$. So for BPP problems you can do probability amplification with a small number of repetitions, whereas for general PP problems you can't.

10 Only in one direction. As $\mathsf{P}=\text{co-}\mathsf{P}$, if $\mathsf{NP}\neq\text{co-}\mathsf{NP}$ then we would know that $\mathsf{P}\neq\mathsf{NP}$. However the reverse implication doesn't hold. If $\mathsf{P}\neq\mathsf{NP}$ then it's possible that either $\mathsf{NP}\neq\text{co-}\mathsf{NP}$ or $\mathsf{NP}=\text{co-}\mathsf{NP}$.

10 First of all, we don't know whether $NP=EXP$ or not. So the initial answer is "it is an open question". However, we strongly believe (and there is supporting evidence) that $NP\neq EXP$. In fact, we believe that $NP\neq PSPACE$ and that $PSPACE\neq EXP$ (that is, there is a strict containment $NP\subsetneq PSPACE \subsetneq EXP$). Since you are looking ...

10 The concept you are looking for is called enumeration complexity, which is the study of the computational complexity of enumerating (listing) all the solutions to a problem (or the members of a language/set). Enumeration algorithms can be modeled as a two-step process: a precomputation step and an enumeration phase with delay. Both of these steps have their ...

9 Third, since $\sf{L} \subseteq \sf{NC}^2$, is there an algorithm to convert any logspace algorithm into a parallel version? It can be shown (Arora and Barak textbook) given a $t(n)$-time TM $M$, that an oblivious TM $M'$ (i.e.
a TM whose head movement is independent of its input $x$) can construct a circuit $C_n$ to compute $M(x)$ where $|x| = n$. The ...
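As a sketch of the meet-in-the-middle idea from the k-SUM answer above (for k = 4): build all sums of k/2 = 2 elements, sort them, and binary-search for a value and its negation. For brevity, this version allows an element to be reused across the two halves; a complete solution also tracks index disjointness.

```python
from bisect import bisect_left
from itertools import combinations

def four_sum_exists(a):
    """Meet-in-the-middle 4-SUM sketch: O(n^2 log n) time overall."""
    half = sorted(x + y for x, y in combinations(a, 2))  # all 2-sums
    for s in half:
        i = bisect_left(half, -s)          # binary search for -s
        if i < len(half) and half[i] == -s:
            return True
    return False

print(four_sum_exists([1, 2, -3, 0, 5]))   # True: 1 + 2 + (-3) + 0 = 0
```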
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.92631995677948, "perplexity": 1201.5519873105527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572896.15/warc/CC-MAIN-20190924083200-20190924105200-00271.warc.gz"}
https://www.physicsforums.com/threads/lorentz-invariant-mass-of-electromagnetic-field.170928/
Lorentz invariant mass of electromagnetic field?

1. May 19, 2007

da_willem

A photon has mass zero by virtue of its momentum canceling its energy in

$$m^2c^4 = E^2-p^2c^2$$

But in electromagnetism a field configuration only has momentum when both a magnetic field and an electric field are present, e.g. in an electromagnetic wave. Now when there is only an electric or a magnetic field present, doesn't the field have an invariant rest mass E/c^2, with E the total energy stored in the field? Does it make any sense to think of it like that? (The problem is maybe that for e.g. a point charge this mass is infinite... so it can't be the correct picture gravitationally, right?)

2. May 19, 2007

pmb_phy

I disagree. What do you base this on? I worked out an example which gives the opposite of your conclusion. See http://www.geocities.com/physics_world/sr/mass_mag_field.htm

No. No. The mass of a point charge is finite even in the case of a point charge which has an infinite mass density.

Best wishes

Pete

3. May 20, 2007

Xezlec

Wait, are you just saying no because he used the term "mass"? I mean, relativistically, the energy stored in an electric or magnetic field certainly behaves like a mass E/c^2. It has inertia and it gravitates. Right?

4. May 20, 2007

Xezlec

Oh, wait, you're just saying he can't say for a photon that that energy is a rest mass. Of course if you are at rest relative to the photon it has no mass. That zero mass gets dilated to finite mass when the photon's speed becomes c, because at c, the mass is dilated by a factor of infinity. Which really makes no rigorous sense to say at all. But it makes intuitive sense. I wonder if that was any help?

5. May 20, 2007

da_willem

I will take a look at your website later, but for now I would like to say a few things. Of course I figured a field configuration with only an electric or magnetic field has zero momentum, because the Poynting vector vanishes! Now with zero momentum and a nonzero field energy density this would seem to imply a mass by the energy–momentum relation. I know this would make no sense physically, but the equations do appear to indicate such a (sometimes infinite) mass. What's the deal here?

6. May 20, 2007

Xezlec

Oh, I see what you're asking now: when you integrate the energy density of the E-field of a point charge, you get infinite energy. This makes no sense because it certainly doesn't behave as though it has infinite mass. Here's what a website I found says:

7. May 21, 2007

da_willem

Thanks! But is it wrong to associate a 'rest mass' with the energy of a field, e.g. in the light of its gravitational influence? If so, what's the reason, as the equations (naively) seem to indicate such a mass?

8. May 21, 2007

pmb_phy

I disagree. As my derivation demonstrates, you can have only a magnetic field in a frame S and still have a non-zero Poynting vector in a frame S' which is moving relative to S. The reason is that in S' there will be a non-vanishing E field which, when crossed with the B field in S', gives a non-vanishing Poynting vector.

Pete

9.
May 21, 2007 da_willem Right, I know that having an E or a B field is observer dependent (you do have an E field in the S' frame!), which might not cause a problem for the Lorentz invariance of the quantity $$'m'= \frac{1}{c^2} \sqrt{\frac{1}{2}\epsilon \int E^2 dV+ \frac{1}{2\mu} \int B^2 dV - \frac{c^2}{\mu} \int |\vec{E} \times \vec{B}| dV}$$ So in the S frame the change (increase mainly due to the magnetic field) in field energy is probably cancelled by the arising of field momentum. If I find the time I will try to do the calculation using your example. 10. May 25, 2007 Xezlec By my reading of that reference, I think if you apply the right normalization, you can find out if a particular field actually has any energy or not, and if so how much. The electric field of an electron itself doesn't have energy, as the article says, no one really has figured out any "nice" explanation why. It's just an exception to the usual rule about energy densities of fields. So when you have true point charges lying around, you have to do that normalization thing to get the energy calculations right. The electric field due to a continuous distribution of charge does carry energy in the usual way. And in this case, as I understand it, it is correct to say it has a rest mass. Point charges are the only weird thing, where the rules stop applying nicely. Of course there are no continous charge distributions really, but if you want to approximate... meh.
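As a side note (a sketch of mine, not from the thread), the divergence da_willem worries about is easy to see numerically: the energy in the Coulomb field outside radius R of a point charge is U(R) = q^2/(8*pi*eps0*R), so the associated 'electromagnetic mass' U/c^2 grows without bound as R shrinks.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1.602176634e-19       # elementary charge, C
c = 2.99792458e8          # speed of light, m/s

def field_energy_outside(R):
    """Energy stored in the Coulomb field outside radius R: q^2 / (8*pi*eps0*R)."""
    return q**2 / (8 * math.pi * eps0 * R)

for R in (1e-10, 1e-15, 1e-20):
    U = field_energy_outside(R)
    print(R, U, U / c**2)   # the 'electromagnetic mass' U/c^2 diverges as R -> 0
```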
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115129709243774, "perplexity": 399.8092747028155}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476990033880.51/warc/CC-MAIN-20161020190033-00024-ip-10-142-188-19.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/106293-growth-decay-question.html
# Thread: Growth/Decay Question

1. ## Growth/Decay Question

This problem uses radioactive isotopes. I need to find the amount of grams after 4000 years. I'm given:

Isotope: 14-C
Half-life (years): 5715
Initial Quantity: 15 g
Amount After 4000 Years: ?? g

Also unsure as to starting this one.

2. Originally Posted by BeSweeet

> This problem uses radioactive isotopes. I need to find the amount of grams after 4000 years. I'm given: Isotope: 14-C; Half-life (years): 5715; Initial Quantity: 15 g; Amount After 4000 Years: ?? g. Also unsure as to starting this one.

Use the following two formulae:

${\lambda} = \frac{ln2}{t_{1/2}}$

$A(t) = A_0e^{-\lambda t}$

Where:

• $\lambda$ = Decay Constant
• $t_{1/2}$ = half life
• $A(t)$ = Amount left at time t
• $A_0$ = Amount at t=0
• $t$ = time.

3. Originally Posted by e^(i*pi)

> Use the following two formulae: ${\lambda} = \frac{ln2}{t_{1/2}}$, $A(t) = A_0e^{-\lambda t}$, where $\lambda$ = Decay Constant, $t_{1/2}$ = half life, $A(t)$ = Amount left at time t, $A_0$ = Amount at t=0, $t$ = time.

Thanks for the quick reply.

$\lambda$ - 15?
$t_{1/2}$ - 5715
$A(t)$ - ?
$A_0$ - 15
$t$ - ?

Help?

4. Originally Posted by BeSweeet

> This problem uses radioactive isotopes. I need to find the amount of grams after 4000 years. I'm given: Isotope: 14-C; Half-life (years): 5715; Initial Quantity: 15 g; Amount After 4000 Years: ?? g. Also unsure as to starting this one.

Let t denote the time measured in years, A_0 the initial amount and A(t) the amount after t years. Then A(t) is given by: $A(t)=A_0 \cdot e^{k\cdot t}$ where k is a constant which you have to calculate first.

You already know: if the initial value is A_0 then $A(5715) = \frac12 A_0$. That means: $\frac12 A_0 = A_0 \cdot e^{k \cdot 5715}$. Solve for k. I've got $k \approx -0.0001212856$.

Now calculate A(4000) with $A_0 = 15\ g$.

5. Originally Posted by BeSweeet

> Thanks for the quick reply. $\lambda$ - 15? $t_{1/2}$ - 5715, $A(t)$ - ?, $A_0$ - 15, $t$ - ? Help?

$\lambda = \frac{ln2}{5715}$ (which gives the same value as k as indicated by earboth). A(t) is what you're trying to find. t = 4000.

6. I am so freaking confused right now. This may be asking a lot, but could someone find the answer, and show the steps? The best way for me, personally, to figure out how to do something, is to analyze steps that get an answer.
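For what it's worth, here is a quick numeric check (mine, not from the thread) of the two formulae quoted above:

```python
import math

half_life = 5715.0   # years
A0 = 15.0            # initial quantity, grams
t = 4000.0           # elapsed time, years

lam = math.log(2) / half_life    # decay constant from the first formula
A_t = A0 * math.exp(-lam * t)    # amount remaining after t years

print(lam)   # ~1.212856e-4, matching earboth's k up to sign
print(A_t)   # ~9.23 g remaining after 4000 years
```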
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8818871974945068, "perplexity": 3482.3770024577116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118743.41/warc/CC-MAIN-20170423031158-00272-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.atmos-chem-phys.net/17/8553/2017/
Atmospheric Chemistry and Physics — an interactive open-access journal of the European Geosciences Union

Atmos. Chem. Phys., 17, 8553-8575, 2017
https://doi.org/10.5194/acp-17-8553-2017

Research article | 13 Jul 2017

# Exploring gravity wave characteristics in 3-D using a novel S-transform technique: AIRS/Aqua measurements over the Southern Andes and Drake Passage

Corwin J. Wright1, Neil P. Hindley1,2, Lars Hoffmann3, M. Joan Alexander4, and Nicholas J. Mitchell1

• 1Centre for Space, Atmospheric and Oceanic Science, University of Bath, Bath, UK
• 2Institute for Climate and Atmospheric Science, School of Earth and Environment, University of Leeds, Leeds, UK
• 3Jülich Supercomputing Centre, Forschungszentrum Jülich, Jülich, Germany
• 4Northwest Research Associates, Boulder, CO, USA

Abstract. Gravity waves (GWs) transport momentum and energy in the atmosphere, exerting a profound influence on the global circulation. Accurately measuring them is thus vital both for understanding the atmosphere and for developing the next generation of weather forecasting and climate prediction models. However, it has proven very difficult to measure the full set of GW parameters from satellite measurements, which are the only suitable observations with global coverage. This is particularly critical at latitudes close to 60° S, where climate models significantly under-represent wave momentum fluxes. Here, we present a novel fully 3-D method for detecting and characterising GWs in the stratosphere. This method is based around a 3-D Stockwell transform, and can be applied retrospectively to existing observed data. This is the first scientific use of this spectral analysis technique. We apply our method to high-resolution 3-D atmospheric temperature data from AIRS/Aqua over the altitude range 20–60 km. Our method allows us to determine a wide range of parameters for each wave detected. These include amplitude, propagation direction, horizontal/vertical wavelength, height/direction-resolved momentum fluxes (MFs), and phase and group velocity vectors. The latter three have not previously been measured from an individual satellite instrument. We demonstrate this method over the region around the Southern Andes and Antarctic Peninsula, the largest known sources of GW MFs near the 60° S belt. Our analyses reveal the presence of strongly intermittent highly directionally focused GWs with very high momentum fluxes (∼ 80–100 mPa or more at 30 km altitude). These waves are closely associated with the mountains rather than the open ocean of the Drake Passage. Measured fluxes are directed orthogonal to both mountain ranges, consistent with an orographic source mechanism, and are largest in winter. Further, our measurements of wave group velocity vectors show clear observational evidence that these waves are strongly focused into the polar night wind jet, and thus may contribute significantly to the missing momentum at these latitudes. These results demonstrate the capabilities of our new method, which provides a powerful tool for delivering the observations required for the next generation of weather and climate models.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038821220397949, "perplexity": 3975.1850521511105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00252.warc.gz"}
https://www.varsitytutors.com/algebra_ii-help/intermediate-single-variable-algebra/polynomials?page=13
Algebra II : Polynomials

Example Questions

Example Question #74 : Factoring Polynomials

Factor:

Explanation:

To factor this, use trial and error to see what works. Since we have a  as the leading coefficient, it's helpful to remember that there's only one way to get  . Same with  as our third term--there's only one way to get  . Use these facts as you try to factor. Remember that signs matter. Therefore, your answer is: .

Example Question #75 : Factoring Polynomials

Factor the polynomial:

Explanation:

The best method for factoring this polynomial is by grouping:

Example Question #76 : Factoring Polynomials

Factor the following expression completely:

Explanation:

The expression is in quadratic form, so it can be factored as though it is quadratic. The resulting expression can be factored further, as there are two difference-of-squares quadratic expressions.

Example Question #77 : Factoring Polynomials

Factor the following polynomial:

Explanation:

In order to factor a polynomial, you have to first find the greatest common factor. In this case, each term has an x in it, so you can easily factor that out. Also, the greatest common factor for the three terms is 3. After you take both the 3 and the x out you are left with

The next step is to figure out which two numbers multiply together to equal , but also add together to equal . The factors of 52 are . The only ones that add to equal  are , and in order to equal a positive 52, and , both of the numbers need to be negative.

Example Question #78 : Factoring Polynomials

Factor .

Explanation:

We need two numbers that add to get , and multiply to get . It looks like  and  would do the trick. To double check our answer, we can use the quadratic formula. Knowing our solutions are  and , we can insert them into an equation: and set each equal to . so:

Example Question #79 : Factoring Polynomials

Factor .

Explanation:

Because of the leading  in the quadratic term of the function, it's hard to think of a clever combination of solutions, but we can just rely on the quadratic formula to help us out:

Setting each equation equal to  yields:

If we were to expand this function we would find that it's wrong by a factor of . To correct this, we multiply one of the terms by . In this case we can multiply the  term by 2, which also serves to clean up the fraction. This gives us an end result of:

Example Question #80 : Factoring Polynomials

Factor .

Explanation:

We need to find two numbers that multiply to get , but we don't have to worry about them adding at all because there's a quadratic term and a linear term. If we check, we can see that the quadratic and linear terms multiply to get , which is very convenient for us. Because the quadratic term is negative, we pair that sign with the linear term when factored:

Example Question #81 : Factoring Polynomials

Factor .

Explanation:

We can start by factoring the numerator and the denominator. Starting with the numerator, we need to find two numbers that multiply to get  and add to get . It happens that 3 and 3 work, so:

For the denominator, we need two numbers that multiply to get , and add to get . Here,  and  work, so:

If we needed to, we always could have used the quadratic formula to factor each of the above equations. We can now put the factored equations back in to our original function:

Then we can cancel the  terms.

Example Question #82 : Factoring Polynomials

Factor .

Explanation:

In the beginning, we can treat the polynomials in the numerator and denominator as separate functions. We can use the quadratic formula for each, if we can't come up with a clever guess at the solution:

Returning the solutions back into the original function gives:

We cancel the  terms and have:

Factor .
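The concrete coefficients in these examples were lost when the page was extracted, so as a hedged illustration of the technique being described (factoring via the roots the quadratic formula produces), here is a sketch with a made-up polynomial:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical example (the page's actual coefficients did not survive
# extraction): factor 2*x**2 - 5*x - 3 via its roots.
p = 2*x**2 - 5*x - 3
roots = sp.solve(sp.Eq(p, 0), x)   # the quadratic formula, symbolically
print(roots)                       # [-1/2, 3]

# Rebuild the factorization, restoring the leading coefficient of 2:
print(sp.factor(p))                # (x - 3)*(2*x + 1)
```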
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428393602371216, "perplexity": 539.1474278322244}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556133.92/warc/CC-MAIN-20210624141035-20210624171035-00453.warc.gz"}
http://mathoverflow.net/questions/110580/almost-universal-properties
# Almost universal properties

Suppose that a small category $C$ has sets of objects and arrows that carry the structure of, say, Polish spaces, in some appropriately compatible way. For example $C$ might be an internal category in some category of Polish spaces (with continuous morphisms? Borel morphisms?). Then it might make sense to study categorical limits that only hold generically (according to some chosen ideal or $\sigma$-ideal).

For example, even if $C$ does not have products, a priori one might still hope for an almost product $A \times_{\rm almost} B$ for objects $A$ and $B$ such that for, say, merely a co-meager set of triples $(C,f:C\rightarrow A,g:C\rightarrow B)$ there exists a unique map from $C$ to $A \times_{\rm almost} B$ so that the appropriate diagram commutes.

For now I'm only asking for: trenchant examples and/or pointers to the literature where anything along these lines arises.

- Doesn't this make the most sense in the category of Polish spaces with maps up to almost everywhere equivalence? – Will Sawin Oct 24 '12 at 21:02

- @Will I thought about that, but I phrased the question the way I did because I thought it would be more attractive to more people if $C$ still constituted an old-fashioned garden-variety category, albeit with some extra structure. Make $C$ internal to a non-concrete category like yours and it gets hard to talk about the source and target of specific arrows, etc. So my motivation, to repair or relax the condition of the existence of limits and colimits for specific diagrams, needs reframing. If I understand you?? – David Feldman Oct 24 '12 at 21:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8128339648246765, "perplexity": 380.5634141642939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928780.77/warc/CC-MAIN-20150521113208-00091-ip-10-180-206-219.ec2.internal.warc.gz"}
https://amp.en.depression.pp.ua/9558678/1/exponential-utility.html
# ⓘ Exponential utility

In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk is present, in which case expected utility is maximized. Formally, exponential utility is given by:

$$u(c) = \begin{cases} (1-e^{-ac})/a & a \neq 0 \\ c & a = 0 \end{cases}$$

Here $c$ is a variable that the economic decision-maker prefers more of, such as consumption, and $a$ is a constant that represents the degree of risk preference ($a > 0$ for risk aversion, $a = 0$ for risk-neutrality, or $a < 0$ for risk-seeking).
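One reason this form is convenient under risk (an illustration of mine, not part of the article): for normally distributed consumption, the certainty equivalent has the closed form $\mu - a\sigma^2/2$, which a short Monte Carlo sketch in Python can confirm:

```python
import numpy as np

def u(c, a):
    """Exponential utility, as defined above."""
    return c if a == 0 else (1.0 - np.exp(-a * c)) / a

rng = np.random.default_rng(42)
a, mu, sigma = 0.5, 10.0, 2.0
c = rng.normal(mu, sigma, size=1_000_000)   # risky consumption draws

eu = u(c, a).mean()                 # expected utility over the draws
ce = -np.log(1.0 - a * eu) / a      # invert u(.) to get the certainty equivalent
print(ce, mu - a * sigma**2 / 2)    # both come out near 9.0 for these parameters
```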
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888941049575806, "perplexity": 1638.6269583657354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363337.27/warc/CC-MAIN-20211207075308-20211207105308-00380.warc.gz"}
https://www.physicsforums.com/threads/fusion-simulation-software.865789/
# Fusion simulation software

1. Apr 6, 2016

### l0st

In a nutshell: is there any free software which can take as input a full description of a relatively large system (20-100 light atoms + photons), simulate its evolution over a period of time, and tell the probability of fusion over that time? If there's no such thing available, how close could one get to it with existing software?

2. Apr 6, 2016

### phyzguy

I don't think anybody simulates fusion at the level of atoms. It would take a huge number of atoms to simulate even a small fluid volume. I think most simulation is done using fluid dynamics codes where you treat the plasma as a fluid characterized by a composition, temperature, pressure, magnetic field, etc. You can then write the rate of fusion as a function of the fluid variables. There are several open source MHD codes that can simulate fusion plasmas. Most of these were developed for astrophysical applications. One example is Athena. This paper gives some details on simulations using fluid dynamics codes at NIF.
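To make the last point concrete, here is a rough sketch (mine, not from the thread) of how a fluid description closes the fusion rate using local fluid variables. The densities and the D-T reactivity are assumed inputs; the reactivity value is typical of tabulated data near 10 keV.

```python
# Volumetric D-T fusion rate from local fluid variables:
# rate = n_D * n_T * <sigma*v>, power density = rate * energy per reaction.
E_DT = 17.6e6 * 1.602e-19    # energy released per D-T fusion, joules
n_D = n_T = 5.0e19           # ion number densities, m^-3 (assumed values)
sigmav = 1.1e-22             # <sigma*v> in m^3/s, a typical tabulated value near 10 keV

rate = n_D * n_T * sigmav    # fusion reactions per m^3 per second
power_density = rate * E_DT  # W/m^3
print(rate, power_density)   # ~2.8e17 reactions/m^3/s, ~0.8 MW/m^3
```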
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400875568389893, "perplexity": 1174.5797540816745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825889.47/warc/CC-MAIN-20171023092524-20171023112524-00516.warc.gz"}
http://www.koreascience.or.kr/article/JAKO200833338747473.page
# ON GENERALIZED JORDAN LEFT DERIVATIONS IN RINGS

• Ashraf, Mohammad (Department of Mathematics Aligarh Muslim University)
• Ali, Shakir (Department of Mathematics Aligarh Muslim University)
• Published : 2008.05.31

#### Abstract

In this paper, we introduce the notion of a generalized left derivation on a ring R and prove that every generalized Jordan left derivation on a 2-torsion free prime ring is a generalized left derivation on R. Some related results are also obtained.

#### References

1. M. Ashraf, On left $({\theta},{\phi})$-derivations of prime rings, Arch. Math. (Brno) 41 (2005), no. 2, 157-166
2. M. Ashraf, A. Ali, and S. Ali, On Lie ideals and generalized $({\theta},{\phi})$-derivations in prime rings, Comm. Algebra 32 (2004), no. 8, 2977-2985 https://doi.org/10.1081/AGB-120039276
3. M. Ashraf and N. Rehman, On Jordan generalized derivations in rings, Math. J. Okayama Univ. 42 (2000), 7-9
4. M. Ashraf and N. Rehman, On Lie ideals and Jordan left derivations of prime rings, Arch. Math. (Brno) 36 (2000), no. 3, 201-206
5. M. Ashraf, N. Rehman, and S. Ali, On Jordan left derivations of Lie ideals in prime rings, Southeast Asian Bull. Math. 25 (2001), no. 3, 379-382 https://doi.org/10.1007/s100120100000
6. M. Ashraf, N. Rehman, and S. Ali, On Lie ideals and Jordan generalized derivations of prime rings, Indian J. Pure Appl. Math. 34 (2003), no. 2, 291-294
7. M. Bresar and J. Vukman, On left derivations and related mappings, Proc. Amer. Math. Soc. 110 (1990), no. 1, 7-16
8. W. Cortes and C. Haetinger, On Jordan generalized higher derivations in rings, Turkish J. Math. 29 (2005), no. 1, 1-10
9. Q. Deng, On Jordan left derivations, Math. J. Okayama Univ. 34 (1992), 145-147
10. I. N. Herstein, Jordan derivations of prime rings, Proc. Amer. Math. Soc. 8 (1957), 1104-1110
11. I. N. Herstein, Topics in Ring Theory, Univ. of Chicago Press, Chicago, 1969
12. B. Hvala, Generalized derivations in rings, Comm. Algebra 26 (1998), no. 4, 1147-1166 https://doi.org/10.1080/00927879808826190
13. W. Jing and S. Lu, Generalized Jordan derivations on prime rings and standard operator algebras, Taiwanese J. Math. 7 (2003), no. 4, 605-613 https://doi.org/10.11650/twjm/1500407580
14. K. W. Jun and B. D. Kim, A note on Jordan left derivations, Bull. Korean Math. Soc. 33 (1996), no. 2, 221-228
15. Y. S. Jung, Generalized Jordan triple higher derivations on prime rings, Indian J. Pure Appl. Math. 36 (2005), no. 9, 513-524
16. E. C. Posner, Derivations in prime rings, Proc. Amer. Math. Soc. 8 (1957), 1093-1100
17. J. Vukman, Jordan left derivations on semiprime rings, Math. J. Okayama Univ. 39 (1997), 1-6
18. S. M. A. Zaidi, M. Ashraf, and S. Ali, On Jordan ideals and left $({\theta},{\theta})$-derivations in prime rings, Int. J. Math. Math. Sci. 2004 (2004), no. 37-40, 1957-1964 https://doi.org/10.1155/S0161171204309075
19. B. Zalar, On centralizers of semiprime rings, Comment. Math. Univ. Carolin. 32 (1991), no. 4, 609-614

#### Cited by

1. On generalized left derivations in rings and Banach algebras vol.81, pp.3, 2011, https://doi.org/10.1007/s00010-011-0070-5
2. LEFT JORDAN DERIVATIONS ON BANACH ALGEBRAS AND RELATED MAPPINGS vol.47, pp.1, 2010, https://doi.org/10.4134/BKMS.2010.47.1.151
3. On Lie Ideals and Generalized Jordan Left Derivations of Prime Rings vol.65, pp.8, 2014, https://doi.org/10.1007/s11253-014-0855-5
4. Generalized Jordan left derivations on semiprime algebras vol.161, pp.1, 2010, https://doi.org/10.1007/s00605-009-0116-0
5. Generalized Jordan left derivations in rings with involution vol.45, pp.4, 2012, https://doi.org/10.1515/dema-2013-0420
6. Generalized left derivations acting as homomorphisms and anti-homomorphisms on Lie ideal of rings vol.22, pp.3, 2014, https://doi.org/10.1016/j.joems.2013.12.015
7. Left Derivations Characterized by Acting on Multilinear Polynomials vol.39, pp.6, 2011, https://doi.org/10.1080/00927872.2010.480960
8. Additive mappings satisfying algebraic conditions in rings vol.63, pp.2, 2014, https://doi.org/10.1007/s12215-014-0153-y
9. Some Theorems for Sigma Prime Rings with Differential Identities on Sigma Ideals vol.2013, 2013, https://doi.org/10.1155/2013/572690
10. CHARACTERIZATIONS OF REAL HYPERSURFACES OF TYPE A IN A COMPLEX SPACE FORM vol.47, pp.1, 2010, https://doi.org/10.4134/BKMS.2010.47.1.001
11. Additive mappings act as a generalized left $(\alpha, \beta)$-derivation in rings pp.2198-2759, 2018, https://doi.org/10.1007/s40574-018-0165-1
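For readers outside the area, one common set of conventions behind these terms is sketched below; papers differ in the details, so treat this as an assumption rather than the authors' exact definitions:

```latex
% Assumed conventions (details vary across the literature):
An additive map $d : R \to R$ is a \emph{left derivation} if
\[ d(xy) = x\,d(y) + y\,d(x) \quad \text{for all } x, y \in R, \]
and a \emph{Jordan left derivation} if
\[ d(x^2) = 2x\,d(x). \]
An additive map $G : R \to R$ is a \emph{generalized (Jordan) left derivation}
with associated left derivation $d$ if
\[ G(xy) = x\,G(y) + y\,d(x) \quad \bigl(\text{resp. } G(x^2) = x\,G(x) + x\,d(x)\bigr). \]
```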
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9005589485168457, "perplexity": 3526.2623328902023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738858.45/warc/CC-MAIN-20200811235207-20200812025207-00194.warc.gz"}
http://www.zazzle.com/states+calendars
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8189893364906311, "perplexity": 4434.912735365363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702437/warc/CC-MAIN-20140313024502-00076-ip-10-183-142-35.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/182541-reliability-test.html
# Math Help - Reliability of a Test

1. ## Reliability of a Test

15% of the U.S. population has math fever. If an individual has math fever, a test accurately identifies the condition 80% of the time. If an individual does not have math fever, the test accurately identifies the individual as not having math fever 90% of the time. The following list describes these outcomes:

Math Fever 15%: Pos Test 80%, Neg Test 20%
No Math Fever 85%: Pos Test 10%, Neg Test 90%

What percent of the time is the test inaccurate?
What is the probability of a false positive?
What is the probability that the individual does not have math fever given that the test is negative?

I know that the answers should be: 0.885, 0.115, 0.9623, but I'm not sure how to set up any of these.

2. What does it mean to have a false positive? You know that a test says you have the fever if you actually have the fever 80% of the time. It also says you don't have the fever when you don't actually have the fever 90% of the time. Consider now that you don't actually have the fever but it gives you a positive test anyway. You should have something like

[Not Sick] * [Positive Test] + [Sick] * [Negative Test] = [False Positive]

In other words, given how many people are sick or not sick and the chances that the test gets it wrong, you should be able to figure out its false positive rate. I got the correct answer, and from the expression above, so should you. The key here is understanding why it is set up that way. Do you see why?

3. Ahh yes, that certainly makes much more sense. How would you approach the inaccurate question and what would be the difference?

4. Think about the definition. A false positive is precisely that: how likely is a test to indicate the false of what is actually (positively) happening? Inaccuracy is how off the mark a test is. This is about the positive rate itself. Using the above algorithm, just think: how many of the population that are sick will test sick, or how many of the population that aren't sick will test not sick? The answer falls right out. As for the last question, it is a conditional probability. What do you know about that so far?

5. Thanks so much Bryan! I finally got it! (I didn't realize the last one was conditional probability--thanks!)
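A quick numeric check (mine, not from the thread). Note that the expression in post 2 computes the total error rate, 0.115; in stricter usage the joint probability of a false positive, P(positive and not sick), would be 0.085.

```python
p_fever = 0.15
sens = 0.80   # P(positive | fever)
spec = 0.90   # P(negative | no fever)

accuracy = sens * p_fever + spec * (1 - p_fever)            # 0.885
error = (1 - sens) * p_fever + (1 - spec) * (1 - p_fever)   # 0.115, post 2's expression

joint_false_pos = (1 - spec) * (1 - p_fever)                # 0.085, stricter definition

# P(no fever | negative test), by Bayes' rule:
p_neg = spec * (1 - p_fever) + (1 - sens) * p_fever
p_no_fever_given_neg = spec * (1 - p_fever) / p_neg         # ~0.9623

print(accuracy, error, joint_false_pos, p_no_fever_given_neg)
```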
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753077983856201, "perplexity": 709.3953063031621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651529/warc/CC-MAIN-20140305060731-00057-ip-10-183-142-35.ec2.internal.warc.gz"}
https://terrytao.wordpress.com/2015/10/07/sweeping-a-matrix-rotates-its-graph/?replytocom=460285
I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an ${n \times n}$ matrix ${A := (a_{ij})_{1 \leq i,j \leq n}}$ (with, say, complex entries) and an index ${1 \leq k \leq n}$, with the entry ${a_{kk}}$ non-zero, the sweep ${\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}}$ of ${A}$ at ${k}$ is the matrix given by the formulae

$\displaystyle \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{ik} := \frac{a_{ik}}{a_{kk}}$

$\displaystyle \hat a_{kj} := \frac{a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{kk} := \frac{-1}{a_{kk}}$

for all ${i,j \in \{1,\dots,n\} \backslash \{k\}}$. Thus for instance if ${k=1}$, and ${A}$ is written in block form as

$\displaystyle A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)$

for some ${1 \times n-1}$ row vector ${X}$, ${n-1 \times 1}$ column vector ${Y}$, and ${n-1 \times n-1}$ minor ${B}$, one has

$\displaystyle \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X / a_{11} \\ Y/a_{11} & B - a_{11}^{-1} YX \end{pmatrix}. \ \ \ \ \ (2)$

The inverse sweep operation ${\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}}$ is given by a nearly identical set of formulae:

$\displaystyle \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \check a_{ik} := -\frac{a_{ik}}{a_{kk}}$

$\displaystyle \check a_{kj} := -\frac{a_{kj}}{a_{kk}}$

$\displaystyle \check a_{kk} := \frac{-1}{a_{kk}}$

for all ${i,j \in \{1,\dots,n\} \backslash \{k\}}$. One can check that these operations invert each other. Actually, each sweep turns out to have order ${4}$, so that ${\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3}$: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.

Remarkably, the sweep operators all commute with each other: ${\hbox{Sweep}_k \hbox{Sweep}_l = \hbox{Sweep}_l \hbox{Sweep}_k}$. If ${1 \leq k \leq n}$ and we perform the first ${k}$ sweeps (in any order) to a matrix

$\displaystyle A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$

with ${A_{11}}$ a ${k \times k}$ minor, ${A_{12}}$ a ${k \times n-k}$ matrix, ${A_{21}}$ a ${n-k \times k}$ matrix, and ${A_{22}}$ a ${n-k \times n-k}$ matrix, one obtains the new matrix

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.$

Note the appearance of the Schur complement in the bottom right block.
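Before moving on, here is a small NumPy sketch (mine, not from the post) that implements the sweep formulas verbatim and spot-checks the claims above: commutativity, order four, and the Schur-complement block formula.

```python
import numpy as np

def sweep(A, k):
    """Sweep the square matrix A at (0-based) index k, per the formulas above."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / akk   # a_ij - a_ik a_kj / a_kk
    out[:, k] = A[:, k] / akk                    # column k: a_ik / a_kk
    out[k, :] = A[k, :] / akk                    # row k: a_kj / a_kk
    out[k, k] = -1.0 / akk
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# sweeps commute
assert np.allclose(sweep(sweep(A, 0), 1), sweep(sweep(A, 1), 0))

# each sweep has order 4
T = A
for _ in range(4):
    T = sweep(T, 2)
assert np.allclose(T, A)

# the first two sweeps produce the block with the Schur complement
S2 = sweep(sweep(A, 0), 1)
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
inv11 = np.linalg.inv(A11)
assert np.allclose(S2[:2, :2], -inv11)
assert np.allclose(S2[:2, 2:], inv11 @ A12)
assert np.allclose(S2[2:, :2], A21 @ inv11)
assert np.allclose(S2[2:, 2:], A22 - A21 @ inv11 @ A12)
```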
Thus, for instance, one can essentially invert a matrix ${A}$ by performing all ${n}$ sweeps:

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.$

If a matrix has the form

$\displaystyle A = \begin{pmatrix} B & X \\ Y & a \end{pmatrix}$

for a ${n-1 \times n-1}$ minor ${B}$, ${n-1 \times 1}$ column vector ${X}$, ${1 \times n-1}$ row vector ${Y}$, and scalar ${a}$, then performing the first ${n-1}$ sweeps gives

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} X \\ Y B^{-1} & a - Y B^{-1} X \end{pmatrix}$

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of ${A}$ can be factored as the product of the entry ${a_{kk}}$ and the determinant of the ${n-1 \times n-1}$ matrix formed from ${\hbox{Sweep}_k[A]}$ by removing the ${k^{th}}$ row and column. As a consequence, one can compute the determinant of ${A}$ fairly efficiently (so long as the sweep operations don't come close to dividing by zero) by sweeping the matrix for ${k=1,\dots,n}$ in turn, and multiplying together the ${kk^{th}}$ entry of the matrix just before the ${k^{th}}$ sweep for ${k=1,\dots,n}$ to obtain the determinant.

It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any ${n \times n}$ matrix ${A}$ creates a graph ${\hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}}$ (where we think of ${{\bf R}^n}$ as the space of column vectors). This graph is an ${n}$-dimensional subspace of ${{\bf R}^n \times {\bf R}^n}$. Conversely, most subspaces of ${{\bf R}^n \times {\bf R}^n}$ arise as graphs; there are some that fail the vertical line test, but these are a positive codimension set of counterexamples. We use ${e_1,\dots,e_n,f_1,\dots,f_n}$ to denote the standard basis of ${{\bf R}^n \times {\bf R}^n}$, with ${e_1,\dots,e_n}$ the standard basis for the first factor of ${{\bf R}^n}$ and ${f_1,\dots,f_n}$ the standard basis for the second factor. The operation of sweeping the ${k^{th}}$ entry then corresponds to a ninety degree rotation ${\hbox{Rot}_k: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n \times {\bf R}^n}$ in the ${e_k,f_k}$ plane, that sends ${f_k}$ to ${e_k}$ (and ${e_k}$ to ${-f_k}$), keeping all other basis vectors fixed: thus we have

$\displaystyle \hbox{Graph}[ \hbox{Sweep}_k[A] ] = \hbox{Rot}_k \hbox{Graph}[A]$

for generic ${n \times n}$ ${A}$ (more precisely, those ${A}$ with non-vanishing entry ${a_{kk}}$). For instance, if ${k=1}$ and ${A}$ is of the form (1), then ${\hbox{Graph}[A]}$ is the set of tuples ${(r,R,s,S) \in {\bf R} \times {\bf R}^{n-1} \times {\bf R} \times {\bf R}^{n-1}}$ obeying the equations

$\displaystyle a_{11} r + X R = s$

$\displaystyle Y r + B R = S.$

The image of ${(r,R,s,S)}$ under ${\hbox{Rot}_1}$ is ${(s, R, -r, S)}$. Since we can write the above system of equations (for ${a_{11} \neq 0}$) as

$\displaystyle \frac{-1}{a_{11}} s + \frac{X}{a_{11}} R = -r$

$\displaystyle \frac{Y}{a_{11}} s + (B - a_{11}^{-1} YX) R = S$

we see from (2) that ${\hbox{Rot}_1 \hbox{Graph}[A]}$ is the graph of ${\hbox{Sweep}_1[A]}$. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line ${y = mx}$ in the plane becomes ${y = \frac{-1}{m} x}$ after applying a ninety degree rotation.
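The ninety-degree-rotation picture can be spot-checked numerically as well. This self-contained sketch (again mine, not from the post) builds ${\hbox{Sweep}_1}$ from the block formula (2) and confirms that rotating a point of the graph lands on the graph of the swept matrix:

```python
import numpy as np

def sweep1(A):
    """Sweep at the first index, written via the block formula (2)."""
    a11 = A[0, 0]
    X = A[0:1, 1:]   # 1 x (n-1) row vector
    Y = A[1:, 0:1]   # (n-1) x 1 column vector
    B = A[1:, 1:]
    top = np.hstack([np.array([[-1.0 / a11]]), X / a11])
    bottom = np.hstack([Y / a11, B - Y @ X / a11])
    return np.vstack([top, bottom])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
y = A @ x                    # the point (x, y) lies on Graph[A]

u, v = x.copy(), y.copy()
u[0], v[0] = y[0], -x[0]     # ninety-degree rotation in the e_1, f_1 plane

assert np.allclose(v, sweep1(A) @ u)   # rotated point lies on Graph[Sweep_1[A]]
```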
It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 358, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9449103474617004, "perplexity": 182.03684065955974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540521378.25/warc/CC-MAIN-20191209173528-20191209201528-00229.warc.gz"}
https://www.math.rutgers.edu/academics/graduate-program/course-descriptions/1018-640-507-functional-analysis-i
# Course Descriptions

## 16:640:507 - Functional Analysis I

Denis Kriventsov

### Text:

Haim Brezis, "Functional Analysis, Sobolev Spaces, and Partial Differential Equations"

### Prerequisites:

16:640:501 and 16:640:502

### Description:

Sobolev spaces and the variational formulation of boundary value problems in one dimension. The Hahn-Banach theorems. Conjugate convex functions. The uniform boundedness principle and the closed graph theorem. Characterization of surjective operators. Weak topologies. Reflexive spaces. Separable spaces. Uniform convexity. L^{p} spaces. Hilbert spaces. Compact operators. Spectral decomposition of self-adjoint compact operators.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606194257736206, "perplexity": 4130.486689282585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00249.warc.gz"}
http://tdunning.blogspot.ru/2011/05/
## Sunday, May 29, 2011

### Online algorithms for boxcar-ish moving averages

One problem with exponentially weighted moving averages is that the weight on older samples decays sharply, even for very recent samples. The impulse response of such an average shows this clearly. In the graph to the right, the impulse response of the exponentially weighted average is shown by the red line. The impulse response of a different kind of moving average derived by John Canny in the early 80's is shown by the black line. The Canny filter puts higher weight on the events in the recent past, which makes it preferable when you don't want to forget things right away, but do want to forget them before long and also want an on-line algorithm. The cool thing about the Canny filter is that it is only twice as much work as a simple exponential moving average.

The idea is that if you take the difference between two exponentially weighted averages with different time constants, you can engineer things to give you an impulse response of the sort that you would like. The formula for such a difference looks like this

\begin{eqnarray*} w(x) &=& k_1 e^{-x/\alpha} - k_2 e^{-x/\beta} \end{eqnarray*}

Here $k_1$ and $k_2$ scale the two component filters in magnitude and $\alpha$ and $\beta$ are the time constants for the filters. It is nice to set the magnitude of the filter at delay $0$ to be exactly 1. We can use this to get a value for $k_2$ in terms of $k_1$

\begin{eqnarray*} w(0) &=& k_1 - k_2 = 1 \\ k_2 &=& k_1 -1 \end{eqnarray*}

Similarly, we can constrain the slope of the impulse response to be $0$ at delay $0$. This gives us $\beta$ in terms of $\alpha$

\begin{eqnarray*} w'(0) &=& {k_1 \over \alpha} - {k_2 \over \beta} = 0\\ {k_1 \over \alpha} &=& {k_1-1 \over \beta} \\ \beta &=& \alpha \frac{ k_1-1} { k_1} \end{eqnarray*}

The final result is this impulse response

\begin{eqnarray*} w(x) = k \exp \left(-{x \over \alpha}\right)-(k-1) \exp\left(-{k x\over \alpha (k-1)}\right) \end{eqnarray*}

We can do a bit better if we note that as $k \to \infty$, the shape of the impulse quickly converges to a constant shape with mean $\frac{3\alpha}{2}$ and a total volume of $2\alpha$.
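A minimal online implementation of this idea (my sketch, not code from the post): maintain two exponentially decaying accumulators and output their scaled difference. Fed a unit impulse, the output reproduces $w(n)$ above; to use it as an average rather than a weighted sum, you would also divide by the filter's total volume, roughly $2\alpha$ for large $k$.

```python
import math

class CannyFilter:
    """Online boxcar-ish average as the scaled difference of two
    exponentially weighted sums, following the construction above."""

    def __init__(self, alpha, k=10.0):
        self.k = k
        beta = alpha * (k - 1.0) / k        # second time constant
        self.d1 = math.exp(-1.0 / alpha)    # per-sample decay of the first sum
        self.d2 = math.exp(-1.0 / beta)     # per-sample decay of the second sum
        self.y1 = 0.0
        self.y2 = 0.0

    def update(self, x):
        # Two EMA-style updates per sample: twice the work of a single EMA.
        self.y1 = self.d1 * self.y1 + x
        self.y2 = self.d2 * self.y2 + x
        return self.k * self.y1 - (self.k - 1.0) * self.y2

# Impulse response check: starts at exactly 1 with a flat top, then decays.
f = CannyFilter(alpha=20.0, k=10.0)
response = [f.update(x) for x in [1.0] + [0.0] * 99]
print(response[0], max(response))   # 1.0, 1.0
```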
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682269096374512, "perplexity": 513.828526916887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608004.38/warc/CC-MAIN-20170525063740-20170525083740-00219.warc.gz"}
http://clay6.com/qa/43663/a-thin-semi-circular-ring-of-radius-r-has-a-positive-charge-q-distributed-u
# A thin semi-circular ring of radius r has a positive charge q distributed uniformly over it. The net field E at the centre O is

$(C) \large\frac{-q}{2 \pi ^2 \epsilon_0 r^2} \hat i$
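The other answer choices did not survive extraction; for context, here is a standard derivation (mine, not from the page) of the field magnitude at the centre, which matches the option shown:

```latex
% With linear charge density \lambda = q/(\pi r), by symmetry only the
% components along the symmetry axis survive the integration:
\[
E = \int_0^{\pi} \frac{\lambda r \, d\theta}{4\pi\epsilon_0 r^2}\,\sin\theta
  = \frac{\lambda}{4\pi\epsilon_0 r}\bigl[-\cos\theta\bigr]_0^{\pi}
  = \frac{\lambda}{2\pi\epsilon_0 r}
  = \frac{q}{2\pi^2 \epsilon_0 r^2},
\]
% which is the magnitude in option (C); the minus sign encodes the direction
% along \hat{i} for the orientation chosen in the original figure.
```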
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8087001442909241, "perplexity": 603.2412296285187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542002.53/warc/CC-MAIN-20161202170902-00067-ip-10-31-129-80.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-asymptotes-for-1-x-2x-2-5x-3
# How do you find the asymptotes for (1-x)/(2x^2-5x-3)?

Feb 25, 2016

Two vertical asymptotes are $2 x + 1 = 0$ and $x - 3 = 0$, i.e. $x = -\frac12$ and $x = 3$; the horizontal asymptote is $y = 0$.

#### Explanation:

To find all the asymptotes for the function $y = \frac{1-x}{2x^2-5x-3}$, let us first start with vertical asymptotes, which are found by setting the denominator equal to zero: $2x^2-5x-3 = 0$.

Let us find the factors of $2x^2-5x-3$ by splitting the middle term into $-6x$ and $x$, i.e. $2x^2-6x+x-3 = 2x(x-3)+1(x-3) = (2x+1)(x-3)$.

As the factors of the denominator are $(2x+1)$ and $(x-3)$, the two vertical asymptotes are $2x+1=0$ and $x-3=0$.

As the degree of the numerator is less than that of the denominator, the horizontal asymptote is $y = 0$, and there is no slant asymptote.
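A quick symbolic check of these conclusions (a sketch of mine, not from the page):

```python
import sympy as sp

x = sp.symbols('x')
f = (1 - x) / (2*x**2 - 5*x - 3)

# Vertical asymptotes: zeros of the denominator.
print(sp.solve(sp.Eq(sp.denom(f), 0), x))   # [-1/2, 3]

# Horizontal asymptote: behaviour at infinity.
print(sp.limit(f, x, sp.oo))                # 0, so y = 0 is the horizontal asymptote
```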
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8396652340888977, "perplexity": 829.9901368022817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583835626.56/warc/CC-MAIN-20190122095409-20190122121409-00566.warc.gz"}
https://robotics.stackexchange.com/questions/7331/position-control-for-linear-model-of-quadrotor-problem-with-tracking-task
# position control for linear model of quadrotor (problem with tracking task)

Lately, if you notice, I have posted some questions regarding position tracking for a nonlinear model. I couldn't do it. I've switched to the linear model, hoping I can do it. For the regulation problem, the position control seems to work, but once I switch to tracking, the system starts oscillating. I don't know why. I have stated what I've done below; hope someone guides me to the correct path.

The linear model of the quadrotor is provided here, which is

\begin{align} \ddot{x} &= g \theta \ \ \ \ \ \ \ \ \ \ (1)\\ \ddot{y} &= - g \phi \ \ \ \ \ \ \ \ \ \ (2)\\ \ddot{z} &= \frac{U_{1}}{m} - g \\ \ddot{\phi} &= \frac{L}{J_{x}} U_{2} \\ \ddot{\theta} &= \frac{L}{J_{y}} U_{3} \\ \ddot{\psi} &= \frac{1}{J_{z}} U_{4} \\ \end{align}

In this paper, the position control based on PD is provided. In the aforementioned paper, from (1) and (2) the desired angles $\phi^{d}$ and $\theta^{d}$ are obtained, therefore,

\begin{align} \theta^{d} &= \frac{\ddot{x}^{d}}{g} \\ \phi^{d} &= - \frac{\ddot{y}^{d}}{g} \end{align}

where

\begin{align} \ddot{x}^{d} &= Kp(x^{d} - x) + Kd( \dot{x}^{d} - \dot{x} ) \\ \ddot{y}^{d} &= Kp(y^{d} - y) + Kd( \dot{y}^{d} - \dot{y} ) \\ U_{1} &= Kp(z^{d} - z) + Kd( \dot{z}^{d} - \dot{z} ) \\ U_{2} &= Kp(\phi^{d} - \phi) + Kd( \dot{\phi}^{d} - \dot{\phi} ) \\ U_{3} &= Kp(\theta^{d} - \theta) + Kd( \dot{\theta}^{d} - \dot{\theta} ) \\ U_{4} &= Kp(\psi^{d} - \psi) + Kd( \dot{\psi}^{d} - \dot{\psi} ) \\ \end{align}

With the regulation problem, where $x^{d} = 2.5 m, \ y^{d} = 3.5 m$ and $z^{d} = 4.5 m$, the results are fine.

Now if I change the problem to the tracking one, the results are messed up. In the last paper, they state

> A saturation function is needed to ensure that the reference roll and pitch angles are within specified limits

Unfortunately, the max values for $\phi$ and $\theta$ are not stated in the paper, but since they use Euler angles, I believe $\phi$ is in the range $(-\frac{\pi}{2},\frac{\pi}{2})$ and $\theta$ in the range $[-\pi, \pi]$.

I'm using the Euler method as an ODE solver because the step size is fixed. For the derivative, the Euler method is used. This is my code:

```matlab
%######################( PD Controller & Attitude )%%%%%%%%%%%%%%%%%%%%
clear all; clc;

dt = 0.001;
t = 0;

% initial values of the system
x = 0; dx = 0; y = 0; dy = 0; z = 0; dz = 0;
Phi = 0; dPhi = 0; Theta = 0; dTheta = 0; Psi = pi/3; dPsi = 0;

% System Parameters:
m = 0.75;       % mass (Kg)
L = 0.25;       % arm length (m)
Jx = 0.019688;  % inertia seen at the rotation axis (Kg.m^2)
Jy = 0.019688;  % inertia seen at the rotation axis (Kg.m^2)
Jz = 0.039380;  % inertia seen at the rotation axis (Kg.m^2)
```matlab
g = 9.81;       % acceleration due to gravity m/s^2

errorSumX = 0;  errorSumY = 0;  errorSumZ = 0;
errorSumPhi = 0;  errorSumTheta = 0;

% Set desired position for tracking task
% (pose is assumed to be loaded beforehand, e.g. pose = load('xyTrajectory.txt');)
DesiredX = pose(:,1);
DesiredY = pose(:,2);
DesiredZ = pose(:,3);

% Set desired position for regulation task
% DesiredX(:,1) = 2.5;
% DesiredY(:,1) = 5;
% DesiredZ(:,1) = 7.2;

dDesiredX = 0;  dDesiredY = 0;  dDesiredZ = 0;
DesiredXpre = 0;  DesiredYpre = 0;  DesiredZpre = 0;

dDesiredPhi = 0;  dDesiredTheta = 0;
DesiredPhipre = 0;  DesiredThetapre = 0;

for i = 1:6000

    % torque input
    %&&&&&&&&&&&&( Ux )&&&&&&&&&&&&&&&&&&
    Kpx = 90;  Kdx = 25;  Kix = 0.0001;
    errorSumX = errorSumX + ( DesiredX(i) - x );
    % Euler Method Derivative
    dDesiredX   = ( DesiredX(i) - DesiredXpre ) / dt;
    DesiredXpre = DesiredX(i);
    Ux = Kpx*( DesiredX(i) - x ) + Kdx*( dDesiredX - dx ) + Kix*errorSumX;

    %&&&&&&&&&&&&( Uy )&&&&&&&&&&&&&&&&&&
    Kpy = 90;  Kdy = 25;  Kiy = 0.0001;
    errorSumY = errorSumY + ( DesiredY(i) - y );
    % Euler Method Derivative
    dDesiredY   = ( DesiredY(i) - DesiredYpre ) / dt;
    DesiredYpre = DesiredY(i);
    Uy = Kpy*( DesiredY(i) - y ) + Kdy*( dDesiredY - dy ) + Kiy*errorSumY;

    %&&&&&&&&&&&&( U1 )&&&&&&&&&&&&&&&&&&
    Kpz = 90;  Kdz = 25;  Kiz = 0;
    errorSumZ = errorSumZ + ( DesiredZ(i) - z );
    dDesiredZ   = ( DesiredZ(i) - DesiredZpre ) / dt;
    DesiredZpre = DesiredZ(i);
    U1 = Kpz*( DesiredZ(i) - z ) + Kdz*( dDesiredZ - dz ) + Kiz*errorSumZ;

    %#######################################################################
    % Desired Phi and Theta
    %disp('before')
    DesiredPhi   = -Uy/g;
    DesiredTheta =  Ux/g;
    %#######################################################################

    %&&&&&&&&&&&&( U2 )&&&&&&&&&&&&&&&&&&
    KpP = 20;  KdP = 5;  KiP = 0.001;
    errorSumPhi = errorSumPhi + ( DesiredPhi - Phi );
    % Euler Method Derivative
    dDesiredPhi   = ( DesiredPhi - DesiredPhipre ) / dt;
    DesiredPhipre = DesiredPhi;
    U2 = KpP*( DesiredPhi - Phi ) + KdP*( dDesiredPhi - dPhi ) + KiP*errorSumPhi;
    %--------------------------------------

    %&&&&&&&&&&&&( U3 )&&&&&&&&&&&&&&&&&&
    KpT = 90;  KdT = 10;  KiT = 0.001;
    errorSumTheta = errorSumTheta + ( DesiredTheta - Theta );
    % Euler Method Derivative
    dDesiredTheta   = ( DesiredTheta - DesiredThetapre ) / dt;
    DesiredThetapre = DesiredTheta;
    % (note: the original had KdP here, which looks like a copy-paste slip)
    U3 = KpT*( DesiredTheta - Theta ) + KdT*( dDesiredTheta - dTheta ) + KiT*errorSumTheta;
    %--------------------------------------

    %&&&&&&&&&&&&( U4 )&&&&&&&&&&&&&&&&&&
    KpS = 90;  KdS = 10;  KiS = 0;
    DesiredPsi  = 0;
    dDesiredPsi = 0;
    U4 = KpS*( DesiredPsi - Psi ) + KdS*( dDesiredPsi - dPsi );

    %###################( ODE Equations of Quadrotor )###################
    ddx = g * Theta;
    dx  = dx + ddx*dt;
    x   = x + dx*dt;
    %====================================================================
    ddy = -g * Phi;
    dy  = dy + ddy*dt;
    y   = y + dy*dt;
    %====================================================================
    ddz = (U1/m) - g;
    dz  = dz + ddz*dt;
    z   = z + dz*dt;
    %====================================================================
    ddPhi = ( L/Jx )*U2;
    dPhi  = dPhi + ddPhi*dt;
    Phi   = Phi + dPhi*dt;
    %====================================================================
    ddTheta = ( L/Jy )*U3;
    dTheta  = dTheta + ddTheta*dt;
    Theta   = Theta + dTheta*dt;
    %====================================================================
    ddPsi = (1/Jz)*U4;
    dPsi  = dPsi + ddPsi*dt;
    Psi   = Psi + dPsi*dt;
    %====================================================================

    % store the error
    ErrorX(i) = ( x - DesiredX(i) );
    ErrorY(i) = ( y - DesiredY(i) );
    ErrorZ(i) = ( z - DesiredZ(i) );
```
```matlab
    ErrorPsi(i) = ( Psi - 0 );

    X(i) = x;  Y(i) = y;  Z(i) = z;
    T(i) = t;
    t = t + dt;
end

Figure1 = figure(1);
set(Figure1,'defaulttextinterpreter','latex');
subplot(2,2,1)
plot(T, ErrorX, 'LineWidth', 2)
title('Error in $x$-axis Position (m)')
xlabel('time (sec)')
ylabel('$x_{d}(t) - x(t)$', 'LineWidth', 2)
subplot(2,2,2)
plot(T, ErrorY, 'LineWidth', 2)
title('Error in $y$-axis Position (m)')
xlabel('time (sec)')
ylabel('$y_{d}(t) - y(t)$', 'LineWidth', 2)
subplot(2,2,3)
plot(T, ErrorZ, 'LineWidth', 2)
title('Error in $z$-axis Position (m)')
xlabel('time (sec)')
ylabel('$z_{d} - z(t)$', 'LineWidth', 2)
subplot(2,2,4)
plot(T, ErrorPsi, 'LineWidth', 2)
title('Error in $\psi$ (m)')
xlabel('time (sec)')
ylabel('$\psi_{d} - \psi(t)$','FontSize',12);
grid on

Figure2 = figure(2);
set(Figure2,'units','normalized','outerposition',[0 0 1 1]);
figure(2)
plot3(X,Y,Z, 'b')
grid on
hold on
plot3(DesiredX, DesiredY, DesiredZ, 'r')
pos = get(Figure2,'Position');
set(Figure2,'PaperPositionMode','Auto','PaperUnits','Inches','PaperSize',[pos(3),pos(4)]);
print(Figure2,'output2','-dpdf','-r0');
```

For the trajectory, the code is

```matlab
clear all;
clc;

fileID = fopen('xyTrajectory.txt','w');

angle = -pi;  z = 0;  t = 0;
radius = 5;   % circle radius (the value was not shown in the post; assumed here)

for i = 1:6000
    if ( z < 2 )
        z = z + 0.1;
        x = 0;
        y = 0;
    end
    if ( z >= 2 )
        angle = angle + 0.1;
        angle = wrapToPi(angle);
        x = radius * cos(angle);
        y = radius * sin(angle);
        z = 2;
    end

    X(i) = x;  Y(i) = y;  Z(i) = z;
    fprintf(fileID,'%f \t %f \t %f\n',x, y, z);
end
fclose(fileID);

plot3(X,Y,Z)
grid on
```

• $\phi, \theta$ are inside the interval (-90, 90) deg. The whole linearization story is about the fact that the system doesn't go too far from the linearization point $\phi = 0, \theta = 0$. About the tracking problem: please try again with the following gains: P = 8, I = 0, D = 35. I had a lot of problems due to the integrating term. And be sure that the timing is right. For example, the point to be reached must not be too far from the last point. Small steps. – Dave May 23 '15 at 10:33
• And if I were you I would still implement a nonlinear controller. I was really happy with the following one: math.ucsd.edu/~mleok/pdf/LeLeMc2010_quadrotor.pdf You need just a trajectory with position, velocity, acceleration. Snap and jerk are not necessary. That was the first controller I implemented that worked "out of the box". I was using C++. If you use Matlab, then you have it really fast – Dave May 23 '15 at 10:39
• @Dave, I will try your suggestions. Is there a way to share your code in C++? My ultimate goal is to reimplement the system in C++. Also, which ODE solver did you use? Regarding the nonlinear controller, I've tried my best, but arcsin always yields an undefined number with the approach in this link researchgate.net/publication/… – CroCo May 23 '15 at 15:15
• Cont., I have seen one of your posts saying that you are implementing a backstepping controller based on the approach in the aforementioned paper; if you can help me to carry out the experiment, I will be more than happy. Thanks. – CroCo May 23 '15 at 15:18
• About timing, could you please elaborate a bit, since everything in my code is in one for loop? – CroCo May 23 '15 at 15:20

In

```matlab
for i = 1:6000
    if ( z < 2 )
        z = z + 0.1;
        x = 0;
        y = 0;
    end
    if ( z >= 2 )
        angle = angle + 0.1;
        angle = wrapToPi(angle);
        x = radius * cos(angle);
        y = radius * sin(angle);
        z = 2;
    end

    X(i) = x;  Y(i) = y;  Z(i) = z;
    fprintf(fileID,'%f \t %f \t %f\n',x, y, z);
end
```

there is a jump when the altitude reaches 2. The x coordinate goes in one single step from the value 0 to the value stored in the variable radius.
Please restructure your code so that it generates a much smoother trajectory, with more interpolated points between the position 0 and the radius. As a general rule, you must not have too much distance between two contiguous points. In the case of a quadrotor, points that are too far apart lead to very large pitch and/or roll angles (the quadrotor tries its best to reach the next point in a very small time, so it accelerates suddenly by setting a very steep angle). This causes the asin function to stop working properly, since its argument is no longer in the function's domain ($(-1,1)$).

An example could be:

```matlab
for i = 1:6000
    if ( z < 2 )
        z = z + 0.1;
        x = 0;
        y = 0;
    end
    if ( z >= 2 )
        angle = angle + 0.1;
        angle = wrapToPi(angle);
        x = ( i / 6000 ) * radius * cos(angle);
        y = ( i / 6000 ) * radius * sin(angle);
        z = 2;
    end

    X(i) = x;  Y(i) = y;  Z(i) = z;
    fprintf(fileID,'%f \t %f \t %f\n',x, y, z);
end
```

Regards

• This is useful, because it shows the error in the algorithm. It would be beneficial to also define some constraints for a "good, followable trajectory". – Gürkan Çetin Jul 23 '15 at 19:35
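As a side note on the saturation function mentioned in the question, a minimal sketch of such a clamp on the reference angles could look like this (the ±20° limit is an assumed value, not taken from the paper):

```matlab
% Clamp the reference roll/pitch before feeding them to the attitude loop.
% maxTilt is an assumed limit; tune it to your platform.
maxTilt = 20 * pi/180;
DesiredPhi   = min( max( DesiredPhi,   -maxTilt ), maxTilt );
DesiredTheta = min( max( DesiredTheta, -maxTilt ), maxTilt );
```

Clamping the references keeps the small-angle linearization valid even when the trajectory momentarily demands a large acceleration.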
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678505659103394, "perplexity": 4598.823753662116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484477.5/warc/CC-MAIN-20191206023204-20191206051204-00411.warc.gz"}
http://physics.stackexchange.com/questions/8954/what-are-the-main-differences-between-p-p-and-p-bar-p-colliders?answertab=oldest
# What are the main differences between $p p$ and $p \bar p$ colliders

I know that it is somehow related to the parton distribution functions, allowing specific reactions with gluons instead of quarks and antiquarks, but I would really appreciate more detailed answers! Thanks

-

The difference in scattering cross sections is more evident the lower the energy of the collisions (Fig 41.11). At TeV energies the probability of new-physics observations is the same for both choices of collisions.

The reason is that at low energies the fact that the proton has three quarks and the antiproton three antiquarks predominates. Quark-antiquark scattering at low energies has a much higher cross section than quark-quark scattering, due to the extra possibility of annihilation of the quarks. At low energy the gluon "sea" plays a small part. The higher the energy of the interactions, the higher the number of energetic gluons that scatter, and finally at TeV energies that is what predominates and the two cross sections converge. Thus for physics it makes no difference whether one uses protons or antiprotons as targets, as far as discovery potential goes.

There may be some technical advantage in the construction, in that in principle the antiproton and proton beams can circulate in the same magnetic configuration as mirror images, making the magnet construction circuits simpler. I guess that the need for high luminosity made the LHC a proton-proton collider, since it is more difficult to store antiprotons. I would have to research this guess.

-

But isn't using proton-proton a way to reduce $q \bar q$ reactions and favour $gg$? – gdz Apr 22 '11 at 14:35

My remark is related to Higgs production by gluon fusion – gdz Apr 22 '11 at 14:47

Within each nucleon and antinucleon, the distributions of the gluon "sea" are the same; that is why at high energies it makes no difference which one uses, since then the percentage of energy carried by the original quarks of the incoming quarks/antiquarks is small in the interactions of interest (high transverse momentum, i.e. deep inelastic) and the sea gluons predominate. – anna v Apr 23 '11 at 4:30

I would add to @anna's answer that a $p\bar{p}$ collider such as the Tevatron is CP-symmetric. This was one of the arguments for continuing the Tevatron experiments. To quote from the proposal:

> Measurements that get a special advantage from the p-pbar environment. The primary example in this category is CP-violation, which strongly limits the range of allowed models of new physics up to scales of several TeV. There are good a priori reasons to expect the existence of some non-SM CP-violating processes, and finding them is of comparable importance to addressing electroweak symmetry breaking. Precision measurements at the 1% level or better are accessible at the Tevatron due to the CP-symmetric initial state (p-pbar), and the symmetry of the detectors that allows cancellation of systematics. Some of these measurements already show tantalizing effects, like the recently published di-muon asymmetry result from the DZero experiment, showing the first indication of a deviation from the Standard Model picture of CP-violation. Other measurements are exploring a completely new field, as the recent CPV measurement with the $D^0$ mesons at CDF, yielding a substantial improvement in precision with respect to previous B-factory data. This has provided a proof of feasibility of an exciting program of precision measurement with a unique possibility to find anomalous interactions in up-type quarks.
> A non-CP-related example in this category is the forward-backward asymmetry in top quark production. Current measurements by both CDF and DZero indicate an asymmetry above the Standard Model prediction. If this persists with more data, it can be interpreted as new dynamics. This is not an easy measurement to replicate in a proton-proton environment.

http://www.fnal.gov/directorate/Tevatron/Tevatron_whitepaper.pdf

-

This is an important addition, since this difference between proton-proton and antiproton-proton collisions exists at all energies. – anna v Apr 21 '11 at 18:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455534815788269, "perplexity": 1024.0470256833032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445299.20/warc/CC-MAIN-20141017005725-00081-ip-10-16-133-185.ec2.internal.warc.gz"}
https://rd.springer.com/chapter/10.1007/978-1-4613-8156-3_5
Fourier Series pp 234-276

# Lacunary Fourier Series

• R. E. Edwards

Part of the Graduate Texts in Mathematics book series (GTM, volume 85)

## Abstract

As the name suggests, a lacunary trigonometric series is, roughly speaking, a trigonometric series $$\sum_{n \in \mathbb{Z}} c_n e^{inx}$$ in which $c_n = 0$ for all integers $n$ save perhaps those belonging to a relatively sparse subset $E$ of $\mathbb{Z}$. Examples of such series have appeared momentarily in Exercises 5.6 and 6.13. Indeed for the Cantor group ℒ, the good behaviour of a lacunary Walsh-Fourier series $$\sum_{\zeta \in \mathcal{R}} c_\zeta \zeta$$ (whose coefficients vanish outside the subset ℛ of ℒ^) has already been noted: by Exercise 14.9, if the lacunary series belongs to $C(ℒ)$ then it belongs to $A(ℒ)$; and, by 14.2.1, if it belongs to $L^p(ℒ)$ for some $p < \infty$, then it also belongs to $L^q(ℒ)$ for $q \in [p, \infty]$. In this chapter we shall be mainly concerned with lacunary Fourier series on the circle group and will deal more systematically with some (though by no means all) aspects of their curious behaviour.
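(For orientation, an addition not in the abstract: the classical instance of such a sparse subset on the circle group is a Hadamard gap sequence, $E = \{n_k\}$ with $n_{k+1}/n_k \geq q > 1$ for all $k$.)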
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9546939134597778, "perplexity": 1049.3572039894166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812579.21/warc/CC-MAIN-20180219091902-20180219111902-00574.warc.gz"}
http://www.physicsforums.com/showthread.php?t=564772
# Is it possible that Mass is the /same thing/ as curved spacetime?

by tgm1024
Tags: curved, mass, same thing, spacetime

P: 20 It's fairly easy to visualize space bending as a result of the mass of an object, and to see that the bending of space is effectively gravity. But if mass always results in bending space (how else could it hold it in this universe?), is it possible that mass and the bending of space are precisely the same thing? IOW, if I were to wave a magic wand and place a bend distortion in space, did I just create mass? It would behave as a gravitational field and pull things toward its center, no? THANKS!!!!!!!!!!!!!

P: 5,632

> IOW, if I were to wave a magic wand and place a bend distortion in space, did I just create mass?

Not a crazy idea... but no. Waving your arm and a wand causes gravitational waves - distortions in SPACETIME, but not mass. It turns out that momentum, energy and pressure all contribute to gravity - the distortion in SPACETIME. Note 'SPACETIME': if you bend time, you create gravity, too. In general relativity all of these are part of the stress-energy tensor, which is the source of the gravitational field.

P: 4 I'm fairly sure not. As far as I'm aware, the bending of space is purely a gravity thing which acts on a mass. Mass itself, however, affects how all forces (EM, gravity, strong and weak) are applied to a body. You can mimic a massive object with your magic wand and things would be gravitationally drawn towards it, but if that area of curved space-time didn't actually contain mass, it wouldn't react to external forces in the same way that a massive object would (note that I'm assuming your magic wand doesn't just use energy to make its distortion, which would just be mass in a different form).

P: 20

Quote by pdyxs:
> I'm fairly sure not. As far as I'm aware, the bending of space is purely a gravity thing which acts on a mass. …

So given my magic wand: wouldn't the external forces act on that bent region as if there were a mass there? <---- broken way of saying it, I'll rephrase. It sounds as if you both agree that there are things that can act upon a mass independently of gravity. I suppose I have to wonder about this: doesn't the fact that momentum/energy/pressure are all issues related to matter make it such that there would be momentum/energy/pressure the moment that space was bent? I believe you both, but I'm not sure that I understand how what you're saying actually flies in the face of what I'm saying.

P: 20

Quote by Naty1:
> Note 'SPACETIME': if you bend time, you create gravity, too. In general relativity all of these are part of the stress-energy tensor, which is the source of the gravitational field.

Quick note on this. In all things, it's easiest for me to view time as just another dimension; that makes more sense to me. So I'm assuming that bending space along any one of the (for now, say 4) axes will beget (or be) gravity.

P: 260 The problem with what you are asking is that it isn't well defined.
How do you propose to generate spacetime curvature without any non-zero components of the stress-energy tensor? A "magic wand" isn't a valid answer; you're asking a physics question based on the false premise that magic exists. We don't know the physics of magic, so we can't give you a well-defined answer.

P: 20

Quote by elfmotat:
> The problem with what you are asking is that it isn't well defined. How do you propose to generate spacetime curvature without any non-zero components of the stress-energy tensor? …

Entirely incorrect. It's entirely valid to use an absurd postulation as an alternative way of explaining a prior question about something real. The notion of a "magic wand" has *nothing* to do with a magic wand per se. The question is not about magic, nor is it in particular "what does a magic wand do."

Example: Suppose someone questions whether an angry cat and its ears pointing back are in a causal relationship or are two facets of precisely the same phenomenon. They would be perfectly valid in saying: "Is it possible for an angry cat to not have its ears point backward, or for a cat with its ears pushed forward to be angry? In other words, if I were to wave a magic wand and have an obviously angry cat's ears moved forward, would it cease to be angry?" It would not be appropriate then to question the nature of magic wands.

Sci Advisor PF Gold P: 5,027 There is a sense in which you can do this. Posit a metric tensor for a manifold (spacetime). Derive the Einstein tensor (no assumptions about matter needed). Now, by equality (and a few constants), you have a stress-energy tensor that can be interpreted as mass-energy, pressure, etc. If you choose an 'arbitrary' metric, you will get physically implausible stress-energy tensors (and there is an active and unresolved research effort into how to characterize physically plausible stress-energy tensors). However, you can still view this approach as instantiating the idea of geometry producing mass.

P: 96

Quote by tgm1024:
> Entirely incorrect. It's entirely valid to use an absurd postulation as an alternative way of explaining a prior question about something real. …

I interpret the question along the lines of these descriptions floating about regarding spacetime and gravity:

1) The effect of gravity is to create a "depression" in spacetime that then causes objects to roll into it, like marbles to a low spot.

2) If the low spot could be created in another way (thus far unknown), would objects still roll into that depression?

I believe the magic wand analogy was merely used to represent the unknown way of creating that low spot without an object/mass.
So, the answers thus far seem to indicate that we really don't know of another way to create that depression in the spacetime fabric, so it's an experiment we can't perform yet. As a thought experiment, so far, we are not really offering any clue as to what the outcome would be. But, logically, if the DEPRESSION in the spacetime fabric is being used as yet another analogy, and not as an actual physical description, then the question loses meaning. If the depression in the spacetime fabric is meant to be a literal physical description, or at least to describe an effect that works in that fashion, then it is implied that the effect, and not the object creating it, is what is needed to roll our marbles. IE: If the mass "creating the depression" is actually the attractive force at play, and the depression description is an analogy, then creating the depression otherwise would not attract objects. If the mass "creating the depression" was actually creating the moral equivalent of a depression, such that the objects were drawn in by the depression itself, then creating the depression alone would actually be sufficient to draw in the objects.

Physics PF Gold P: 6,058

Quote by tgm1024:
> But if mass always results in bending space (how else could it hold it in this universe?), is it possible that mass and the bending of space are precisely the same thing?

It depends on what you mean by "the same thing". If you mean "the same" as in "appears the same to our experience", then certainly not: mass is a very different thing from spacetime curvature. If you mean "the same" as in "equivalent according to the laws of physics", then it's not just "possible" that they're the same, it's actual; according to the Einstein Field Equation, spacetime curvature *is* mass (more precisely, stress-energy; as other posters have pointed out, mass is not the only component of the stress-energy tensor); the one is equal to the other.

Quote by tgm1024:
> Example: Suppose someone questions whether an angry cat and its ears pointing back are in a causal relationship or are two facets of precisely the same phenomenon. …

No, it wouldn't be appropriate, it would be obfuscating a very simple issue. The question in quotes above is a simple question about the correlation, averaged over all cats, between being angry and having ears pointed backward. The way you answer such a question is to look at the observed correlation. You will find, of course, that the correlation is high but not perfect, indicating that these two phenomena are contingently closely related but are not "the same thing". If you want to go deeper and ask "why", you investigate the biology of anger and ear behavior in cats. Talk about "magic wands" does nothing but obscure the sorts of things you actually need to look at to answer the question. Similarly, your question about mass and spacetime curvature is a simple question about the correlation between observing mass (stress-energy) and observing spacetime curvature. In this case, the correlation is perfect, indicating a stronger relationship than just being contingently linked. General relativity explains this through the Einstein Field Equation, which requires the correlation to be perfect.
Emeritus Sci Advisor P: 7,599 Given that the general mathematical representation of space-time curvature is a 4-dimensional rank-4 tensor - the so-called Riemann curvature tensor - and that mass is a scalar, I would say that they're not "the same thing".

PF Gold P: 5,027

Quote by pervect:
> Given that the general mathematical representation of space-time curvature is a 4-dimensional rank-4 tensor - the so-called Riemann curvature tensor - and that mass is a scalar, I would say that they're not "the same thing".

Agreed, but a looser interpretation of the OP is: if I change the curvature of spacetime in an appropriate way, have I necessarily changed/produced mass? In classical GR, I would answer this: yes.

P: 20

Quote by Tea Jay:
> I interpret the question along the lines of these descriptions floating about regarding spacetime and gravity: 1) The effect of gravity is to create a "depression" in spacetime that then causes objects to roll into it, like marbles to a low spot. 2) If the low spot could be created in another way (thus far unknown), would objects still roll into that depression?

Not quite. For #1 above, I'm already assuming (correctly or incorrectly) that a depression in spacetime *is* gravity. For #2 above, I'm already assuming that if the 4D low spot were created another way, it would draw objects toward its center. The question is: is bent space actually matter itself? Or perhaps: is it a misinterpretation to view matter and bent space (or matter and gravity, if you like) as separate phenomena when they are actually the same thing?

Quote by Tea Jay:
> I believe the magic wand analogy was merely used to represent the unknown way of creating that low spot without an object/mass.

Yes, that's dead on.

P: 20

Quote by PAllen:
> Agreed, but a looser interpretation of the OP is: if I change the curvature of spacetime in an appropriate way, have I necessarily changed/produced mass? In classical GR, I would answer this: yes.

If that's the case, and the "yes" is reassuring, then broadening this to all of physics: whenever "thing" A (in our case, mass/momentum/etc.) cannot exist without "effect" B, and "effect" B cannot exist without "thing" A, is it unreasonable to assume (generally) that "thing" A and "effect" B are "the same"?

I don't view this as a semantic question. I understand that you can have two views of an item. The marble is not the bend in the rubber sheet it is sitting on. But that's only in the context of All Things. If you constrain the context to the marble and the rubber sheet only (that's all there is), then yes, the bend *is* the marble.
Or, adding an item *outside* the context (magic wand), then bending the rubber sheet does [create or beget or form or require] the marble. I'm not sure that's entirely nuts.

PF Gold P: 5,027

Quote by tgm1024:
> If that's the case, and the "yes" is reassuring, then broadening this to all of physics … I'm not sure that's entirely nuts.

Your first question is philosophic to me, so my answer is more opinion than science. I would say that just because A implies the existence of B and B implies the existence of A within some physical theory, it is not necessarily useful to think of A and B as the same thing.

As to your second question, I have been trying to answer an underlying, more valid question implied by your 'poetic' description. However, I can't any longer. Please erase the 'rubber sheet' analogy from your mind. There is nothing analogous to a rubber sheet, with mass sitting on it and bending it. There is also nothing analogous to the space in which the rubber sheet sits (as is implied by this image). The best I can say in words is: there is an intrinsic geometry of spacetime (similarly to how a 2-d being living on a balloon can tell they have non-Euclidean geometry by adding up the angles of triangles). An aspect of this geometry (the Einstein tensor) can simultaneously be described as a distribution of mass/energy density and pressure/stress.

P: 20

Quote by PAllen:
> Your first question is philosophic to me, so my answer is more opinion than science. …

What's poetic? You seem angry. There's nothing mystical about, say, a 4D surface of a 5D balloon, nor the topological way of discovering its "shape". Heck, people used to scratch their heads at walking north and getting closer to each other. But that analogy I used was just another magic wand example: don't mistake me for someone thinking that the universe is a rubber sheet with stuff pushing on it. It's just an example of me replacing the causal nature of "the marble causes the bend" with an equivalence. I don't care if that's a common description for gravity or a total mistake in many videos on the subject. I just chose it because there's a "thing" and a bend. The choice of a marble and a rubber sheet should have done nothing to abort the conversation. In any case, there's enough in this thread for me to look this up further.
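PAllen's recipe (posit a metric, derive the Einstein tensor, then read the result as a stress-energy tensor via $G = 8\pi T$ in geometric units) can be tried out by machine. Below is a minimal sketch, added for illustration; it assumes MATLAB's Symbolic Math Toolbox and uses the Schwarzschild metric as the posited geometry, though any metric could be substituted:

```matlab
% Posit a metric, derive the Einstein tensor (PAllen's recipe).
syms t r th ph M real
C = [t r th ph];                 % coordinates
% Posited geometry: Schwarzschild (illustrative choice; swap in any metric)
g = diag([-(1 - 2*M/r), 1/(1 - 2*M/r), r^2, r^2*sin(th)^2]);
ginv = inv(g);
n = 4;
Gamma = cell(n,1);               % Gamma{a}(b,c) = Christoffel symbol ^a_{bc}
for a = 1:n
    Gamma{a} = sym(zeros(n));
    for b = 1:n
        for c = 1:n
            s = sym(0);
            for d = 1:n
                s = s + ginv(a,d)*( diff(g(d,b),C(c)) + diff(g(d,c),C(b)) - diff(g(b,c),C(d)) );
            end
            Gamma{a}(b,c) = simplify(s/2);
        end
    end
end
Ric = sym(zeros(n));             % Ricci tensor R_{bc}
for b = 1:n
    for c = 1:n
        s = sym(0);
        for a = 1:n
            s = s + diff(Gamma{a}(b,c),C(a)) - diff(Gamma{a}(b,a),C(c));
            for d = 1:n
                s = s + Gamma{a}(a,d)*Gamma{d}(b,c) - Gamma{a}(c,d)*Gamma{d}(b,a);
            end
        end
        Ric(b,c) = simplify(s);
    end
end
R = simplify(trace(ginv*Ric));   % Ricci scalar
G = simplify(Ric - g*R/2)        % Einstein tensor with lower indices
```

For this particular metric the Einstein tensor comes out identically zero (it is a vacuum solution); feeding in an arbitrary metric instead generally produces a non-zero, and usually physically implausible, stress-energy tensor, which is exactly PAllen's caveat above.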
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8216750025749207, "perplexity": 772.898118775115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273663.2/warc/CC-MAIN-20140728011753-00292-ip-10-146-231-18.ec2.internal.warc.gz"}
http://www.ck12.org/chemistry/Derived-Units/web/user:13IntK/Essentials-of-the-SI/
# Derived Units

Here is a guide to the base and derived units of the SI, which establishes what the difference between a base and derived unit is.
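As a concrete illustration (not from the page above): the newton is a derived unit, $1\ \text{N} = 1\ \text{kg} \cdot \text{m} \cdot \text{s}^{-2}$, built from the base units kilogram, metre, and second.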
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657603859901428, "perplexity": 4397.078930118778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300799.9/warc/CC-MAIN-20150323172140-00003-ip-10-168-14-71.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2026849/how-do-i-show-that-the-galois-group-of-an-algebraic-number-over-the-field-contai
# How do I show that the Galois group of an algebraic number over the field containing roots of unity is cyclic?

Suppose that I have a complex number $\alpha\not\in\mathbb Q$ such that $\alpha^n\in\mathbb Q$. I take the extension $\mathbb Q(\alpha)$. Suppose that $\mathbb Q(\alpha)$ is Galois. Clearly, this extension need not be cyclic. However, I believe that if I consider the extension $F$ of $\mathbb Q$ where $F$ contains all the roots of unity in $\mathbb Q(\alpha)$, then the extension $\mathbb Q(\alpha)/F$ becomes cyclic. I've seen this used in a few places, but can't seem to rigorously find why this is the case. I can see that $\alpha$ is a root of $x^n-\alpha^n$, but how do I know the minimal polynomial? I guessed that the roots of the minimal polynomial might be $\alpha\zeta_n$, where $\zeta_n$ is a primitive $n$-th root of unity, but even then I'm not sure. I'd appreciate some help. Thanks.

• You probably want to say that $F(\alpha)/F$ is cyclic, not $\mathbb Q(\alpha)/F$, because in the latter $\mathbb Q(\alpha)$ does not necessarily contain $F$. – Ravi Nov 23 '16 at 6:10
• Yes, but I think I may as well suppose $\mathbb Q(\alpha)$ is Galois over $\mathbb Q$. I've edited it. – adrija Nov 23 '16 at 6:13
• I don't think you should assume that, because it will lead to you quickly coming to the conclusion that $n=2$ from your opening sentence. – Ravi Nov 23 '16 at 6:16
• How so? I didn't say $\alpha$ is real; it may as well be a primitive $n^{th}$ root. – adrija Nov 23 '16 at 6:20

First some definitions and notations: $a\in \mathbf Q^*$, $K=\mathbf Q(\alpha)$, where $\alpha$ is a root of $g_n (X)=X^n - a$. You suppose that $K/ \mathbf Q$ is normal, and you ask whether a certain subfield $F$ of $K$, containing enough roots of unity, is such that $K/F$ is a cyclic extension. Let us consider two cases, according to the (ir)reducibility of $g_n (X)=X^n - a$. In the following, just to avoid petty trouble with $2$, all rational primes which enter the game will be odd (but this is not a mathematical restriction).

1) If $g_n (X)$ is irreducible, the normality hypothesis implies that $K$ contains the group $\mu_n$ of $n$-th roots of unity. Take $F=\mathbf Q (\mu_n)$; then $K=F(\alpha)$ is a simple Kummer extension, hence cyclic (see e.g. S. Lang's "Algebra", chap. 8, §8), of degree dividing $n$.

2) If $g_n (X)$ is reducible, a general criterion in op. cit., chap. 8, §9, asserts that there exists a prime divisor $p$ of $n$ such that $a\in \mathbf Q ^{*p}$. Let $p^r$ be the maximal power such that $a\in \mathbf Q^{*p^r}$. Then $\beta :=\alpha^{p^r}$ is a root of $g_m(X)=X^m - a$, where $n=mp^r$. If $\beta$ were a $p$-th power in $\mathbf Q(\beta)$, say $\beta=\gamma^{p^r}$, taking norms in $\mathbf Q(\beta) / \mathbf Q$ would give that $a= N(\pm\gamma) ^{p^{r+1}}$ (here we use that $p$ is odd), which contradicts the maximality of $r$. Hence, according to the same criterion op. cit., $X^{p^r} - a$ is irreducible over $\mathbf Q(\beta)$. The same argument as in 1) then shows that $K$ is cyclic over $F_p := \mathbf Q(\beta, \mu_{p^r})$, of degree dividing $p^r$. Repeat this process for all prime divisors $q$ of $n$ which share the same property as $p$ above, and take $F$ to be the compositum of all the $F_q$'s. Then $Gal(K/F)$, being the intersection of all the $Gal(K/F_q)$'s, is cyclic.
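For completeness, the cyclicity invoked in case 1) is the standard Kummer-theory argument (this addendum is not part of the original answer): every $\sigma\in Gal(K/F)$ sends $\alpha$ to $\zeta_\sigma\alpha$ for some $\zeta_\sigma\in\mu_n$, and the map

$$\sigma\mapsto \zeta_\sigma=\frac{\sigma(\alpha)}{\alpha}$$

is an injective group homomorphism $Gal(K/F)\hookrightarrow\mu_n$: it is multiplicative because $\mu_n\subset F$ is fixed by every $\sigma$, and injective because $\sigma$ is determined by $\sigma(\alpha)$. A subgroup of the cyclic group $\mu_n$ is cyclic.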
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9770938754081726, "perplexity": 91.18768028945235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00329.warc.gz"}
http://ptsymmetry.net/?p=670
## Conservation relations and anisotropic transmission resonances in one-dimensional PT-symmetric photonic heterostructures Li Ge, Y. D. Chong, A. D. Stone We analyze the optical properties of one-dimensional (1D) PT-symmetric structures of arbitrary complexity. These structures violate normal unitarity (photon flux conservation) but are shown to satisfy generalized unitarity relations, which relate the elements of the scattering matrix and lead to a conservation relation in terms of the transmittance and (left and right) reflectances. One implication of this relation is that there exist anisotropic transmission resonances in PT-symmetric systems, frequencies at which there is unit transmission and zero reflection, but only for waves incident from a single side. The spatial profile of these transmission resonances is symmetric, and they can occur even at PT-symmetry breaking points. The general conservation relations can be utilized as an experimental signature of the presence of PT-symmetry and of PT-symmetry breaking transitions. The uniqueness of PT-symmetry breaking transitions of the scattering matrix is briefly discussed by comparing to the corresponding non-Hermitian Hamiltonians. http://arxiv.org/abs/1112.5167 Optics (physics.optics)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8591693043708801, "perplexity": 1650.2875735390624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887692.13/warc/CC-MAIN-20180119010338-20180119030338-00626.warc.gz"}
https://threesquirrelsdotblog.com/2017/05/15/77/?like_comment=739&_wpnonce=e9b2aa709b
# Solution to (general) Truncated Moment Problem

We are going to solve the truncated moment problem in this post. The theorem we are going to establish is more general than the original problem itself. The following theorem is a bit abstract; you can skip to Corollary 2 to see what the truncated moment problem is and why it has a generalization in the form of Theorem 1.

Theorem 1 Suppose ${X}$ is a random transformation from a probability space ${(A,\mathcal{A},\mathop{\mathbb P})}$ to a measurable space ${(B,\mathcal{B})}$ where each singleton set of $B$ is in $\mathcal{B}$. Let each ${f_i}$ be a real valued (Borel measurable) function whose domain is ${B}$, ${i=1,\dots,m}$. Given

$\displaystyle (\mathbb{E}f_i(X))_{i=1,\dots,m},$

all of which are finite, there exists a random variable ${Y\in B}$ such that ${Y}$ takes no more than ${m+1}$ values in ${B}$, and

$\displaystyle (\mathbb{E}f_i(Y))_{i=1,\dots,m} = (\mathbb{E}f_i(X))_{i=1,\dots,m}.$

(If you are not familiar with the terms Borel measurable and measurable space, or the sigma-algebras $\mathcal{A}, \mathcal{B}$, then just ignore these. I put these terms here just to make sure that the theorem is rigorous enough.)

Let me parse the theorem for you. Essentially, the theorem says that given ${m}$ many expectations, no matter what kind of source the randomness comes from, i.e., what ${X}$ is, we can always find a finitely-valued random variable (which is ${Y}$ in the theorem) that achieves the same expectations. To get a concrete sense of what is going on, consider the following corollary of Theorem 1. It is the original truncated moment problem.

Corollary 2 (Truncated Moment Problem) For any real valued random variable ${X\in {\mathbb R}}$ with its first ${m}$ moments all finite, i.e., for all ${1\leq i\leq m}$

$\displaystyle \mathop{\mathbb E}|X|^i < \infty,$

there exists a real valued discrete random variable ${Y}$ which takes no more than ${m+1}$ values in ${{\mathbb R}}$ and whose first ${m}$ moments are the same as those of ${X}$, i.e.,

$\displaystyle (\mathbb{E}Y,\mathbb{E}(Y^2),\dots, \mathbb{E}(Y^m) )=(\mathbb{E}X,\mathbb{E}(X^2),\dots, \mathbb{E}(X^m)).$

The original truncated moment problem asks whether, given the (uncentered) moments, we can always find a finite discrete random variable that matches all the moments. It should be clear that this is a simple consequence of Theorem 1 by letting ${B={\mathbb R}}$ and ${f_i(x) = x^{i}}$, ${i=1,\dots,m}$.

There is also a multivariate version of the truncated moment problem, which can also be regarded as a special case of Theorem 1.

Corollary 3 (Truncated Moment Problem, Multivariate Version) Consider any real random vector ${X=(X_1,\dots,X_n)\in \mathbb{R}^n}$ whose moments of order up to ${k}$ are all finite, i.e.,

$\displaystyle \mathop{\mathbb E}(\Pi_{i=1}^n|X_{i}|^{\alpha_i}) <\infty$

for any ${{1\leq \sum \alpha_i\leq k}}$, where each ${\alpha_i}$ is a nonnegative integer. The total number of moments in this case is ${n+k \choose k}$. Then there is a real random vector ${Y \in \mathbb{R}^n}$ such that it takes no more than ${{n+k \choose k}+1}$ values, and

$\displaystyle (\mathop{\mathbb E}(\Pi_{i=1}^nX_{i}^{\alpha_i}))_{1\leq \sum \alpha_i\leq k} = (\mathop{\mathbb E}(\Pi_{i=1}^nY_{i}^{\alpha_i})) _{1\leq \sum \alpha_i\leq k}.$

Though the form of Theorem 1 is quite general and looks scary, it is actually a simple consequence of the following lemma and the use of convex hulls.
Lemma 4 For any convex set ${C \subseteq \mathbb{R}^k}$, and any random variable ${Z}$ which has finite mean and takes values only in ${C}$, i.e.,

$\displaystyle \mathop{\mathbb E}(Z) \in \mathbb{R}^k, \mathop{\mathbb P}(Z\in C) =1,$

we have

$\displaystyle \mathop{\mathbb E} (Z) \in C.$

The above proposition is trivially true if ${C}$ is closed or ${Z}$ takes only finitely many values. But it remains true when ${C}$ is only assumed to be convex. We will show it in this post.

We are now ready to show Theorem 1.

Proof of Theorem 1: Consider the set

$\displaystyle L = \{ (f_i(x))_{i=1,\dots,m}\mid x\in B \}.$

The convex hull of this set ${L}$ is

$\displaystyle \text{conv}(L) = \{ \sum_{j=1}^l \alpha _j a_j\mid \alpha_j \geq 0 ,\sum_{j=1}^l \alpha_j =1, a_j\in L, l \in {\mathbb N}\}.$

Now take the random variable ${Z=(f_i(X))_{i=1,\dots,m}}$, which takes values only in ${L\subset \text{conv}(L)}$; by Lemma 4 applied to the convex set ${\text{conv}(L)}$, we know that

$\displaystyle \mathop{\mathbb E} Z \in \text{conv}(L).$

Note that every element in ${\text{conv}(L)}$ has a FINITE representation in terms of the ${a_j}$'s! This means we can find ${l\in {\mathbb N}}$, ${\alpha_j\geq 0, \sum_{j = 1}^l \alpha_j =1}$ and ${a_j \in L, j=1,\dots,l}$ such that

$\displaystyle \sum_{j=1}^l \alpha_ja_j = \mathop{\mathbb E} Z = (\mathop{\mathbb E} f_i(X))_{i=1,\dots,m}.$

Since each ${a_j = (f_i(x_j))_{i=1,\dots,m}}$ for some ${x_j \in B}$, we can simply take the distribution of ${Y}$ to be

$\displaystyle \mathop{\mathbb P}(Y = x_j) = \alpha_j, \forall j =1,\dots,l.$

Finally, apply Carathéodory's theorem to conclude that ${l\leq m+1}$. $\Box$
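To make Corollary 2 concrete, here is a small worked example (added for illustration; it is not part of the original argument). Take ${X}$ uniform on ${[0,1]}$ and ${m=2}$, so ${\mathbb{E}X = 1/2}$ and ${\mathbb{E}X^2 = 1/3}$. The two-point random variable

$\displaystyle \mathop{\mathbb P}\Big(Y = \tfrac{1}{2} - \tfrac{1}{2\sqrt{3}}\Big) = \mathop{\mathbb P}\Big(Y = \tfrac{1}{2} + \tfrac{1}{2\sqrt{3}}\Big) = \tfrac{1}{2}$

matches both moments: ${\mathbb{E}Y = 1/2}$ and ${\mathbb{E}Y^2 = \tfrac14 + \tfrac{1}{12} = \tfrac13}$, using even fewer than the ${m+1=3}$ values that the corollary guarantees. (These two points are exactly the Gauss-Legendre quadrature nodes on ${[0,1]}$.)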
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 65, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8979355692863464, "perplexity": 190.73455393909228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710870.69/warc/CC-MAIN-20221201221914-20221202011914-00012.warc.gz"}
https://karagila.org/2019/in-praise-of-replacement/
Asaf Karagila

I don't have much choice...

I have often seen people complain about Replacement axioms. For example, this MathOverflow question, or this one, or that one, and also this one. This technical-looking schema of axioms states that if $$\varphi$$ defines a function on a set $$x$$, then the image of $$x$$ under that function is a set. And this axiom schema is a powerhouse! It is one of the three components that give $$\ZF$$ its power (the others being power set and infinity, of course). You'd think that people in category theory would like it: from a foundational point of view, it literally tells you that functions exist if you can define them! And category theory is all about the functions (yes, I know it's not, but I'm trying to make a point).

One of the equivalences of Replacement is the following statement: Suppose that $$\varphi(x,y,z,p)$$ is a formula such that for some parameter $$p$$, whenever $$\varphi(x,y,z,p)$$ and $$\varphi(x,y,z',p)$$ both hold we have $$z=z'$$. Namely, $$\varphi$$ (modulo the parameter $$p$$) defines the ordered pair $$(x,y)$$. Then the Cartesian product $$A\times_{\varphi,p} B=\{z\mid\exists a\in A\exists b\in B\varphi(a,b,z,p)\}$$ is a set.

In other words, Replacement is equivalent to saying "I don't care how you're coding ordered pairs, as long as it's going to satisfy the axioms of ordered pairs!". So you'd think people from non-$$\ZFC$$ foundations would be happy to have something like that, especially if they are focused on category theory (which is all about functions (yes, again, I know, I'm just trying to make a point)).

Well. It is exactly the opposite. Since the Kuratowski coding of ordered pairs is so simple, it's an easy solution to the problem. So from a foundational point of view, there's no problem anymore, and nothing else matters. Once you have one coding that lets you have ordered pairs from a set theoretic point of view, the rest is redundant. This is very much reflected in how $$\ETCS$$ is equivalent to a relatively weak set theory: bounded Zermelo.

So you might want to say, well, if category theory is not really using it, then maybe it's not that necessary. And indeed, the uses of Replacement outside of set theory are rare. Borel determinacy is one of them, and arguably it is a set theoretic statement. But, like many other posts, this too was inspired by some discussion on Math.SE. And here is a very nice example of why Replacement is important.

Theorem. If $$A$$ is a set, then $$\{A^n\mid n\in\Bbb N\}$$ exists.

One would like to prove this by saying that it is just a bunch of subsets of $$A^{<\omega}$$, which itself is a subset of $$\mathcal P(\omega\times A)$$, so by power set and very bounded separation axioms we can get that very set. But this depends on how $$A^n$$ is defined. If $$A^n$$ is the set of functions from $$n$$ to $$A$$, that's fine, the above suggestion works just fine. However, it is not uncommon to see the following inductive definition: $$A^0=\{\varnothing\}$$ and $$A^{n+1}=A^n\times A$$, or $$A^1=A$$ and $$A^{n+1}=A^n\times A$$.

Under that latter definition and using the Kuratowski definition of ordered pairs, $$A^{n+1}$$ has a strictly larger von Neumann rank compared to $$A^n$$. At least when starting with $$A=V_{\omega+1}$$, or something like that. So the von Neumann ranks of $$V_{\omega+1}^n$$ are strictly increasing under the Kuratowski definition, and so there is no set in $$V_{\omega+\omega}$$ which contains exactly the $$V_{\omega+1}^n$$'s.
But since $$V_{\omega+\omega}$$ is a model of Zermelo, that means that we cannot prove, without Replacement, that $$\{A^n\mid n\in\Bbb N\}$$ is a set for any set $$A$$.

Hey hey, wait a minute, you might claim. You might argue that you don't care that $$A\times A$$ and $$\{f\colon\{0,1\}\to A\}$$ are two different sets. They both satisfy the same properties from an abstract point of view. BUT THIS IS THE POINT! This is exactly the point! When you say that you don't want to distinguish between $$A\times A$$ and $$A^2$$, you are ostensibly replacing one object with another. You are literally appealing to Replacement!

Yes, this argument does not use a lot of Replacement, but it does use some of it. And it might just be enough to clarify its necessity.

(And I haven't even started talking about how it is equivalent to Reflection, which is so awesome on its own (foundationally speaking)...)
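For reference (an addition to the post, in one standard formulation), the schema under discussion says: for every formula $$\varphi$$,

$$\forall p\,\forall x\,\Bigl[\bigl(\forall a\in x\,\exists! b\,\varphi(a,b,p)\bigr)\rightarrow\exists y\,\forall b\,\bigl(b\in y\leftrightarrow\exists a\in x\,\varphi(a,b,p)\bigr)\Bigr].$$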
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693873286247253, "perplexity": 173.2400634481812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00398.warc.gz"}
https://socratic.org/questions/553d8d81581e2a30db7f2b83
# Question f2b83

Apr 27, 2015

The new concentration will be 1.00%.

When doing percent concentration problems, you should always start by taking a closer look at your initial solution. A mass by volume percent concentration is expressed as grams of solute, in your case sodium chloride, per 100 mL of solution. A 1% m/v solution will have 1 g of solute in 100 mL of solution:

$\text{% m/v} = \frac{\text{grams of solute}}{\text{100 mL of solution}} \cdot 100$

Your initial solution is 10.% m/v, which means that it has 10 g of sodium chloride in every 100 mL of solution. Since you have less than 100 mL of solution, you'll have less than 10 g of sodium chloride in your sample:

$\text{% m/v} = \frac{x \text{ g NaCl}}{50. \text{ mL}} \cdot 100 \quad \Rightarrow \quad x = \frac{\text{% m/v} \cdot 50}{100} = \frac{10 \cdot 50}{100} \text{ g} = 5.0 \text{ g}$

This is how much sodium chloride your initial solution contains. Now, you want to increase the volume of the solution to 500 mL. The key here is to realize that the amount of sodium chloride present will not change. Your final solution will still have 5.0 g of $NaCl$, but its volume will be bigger. The same amount of solute in a bigger volume will automatically mean a smaller concentration:

$\text{% m/v} = \frac{5.0 \text{ g}}{500 \text{ mL}} \cdot 100 = 1.00\%$

The volume is 10 times bigger, so the concentration must be ten times smaller.
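The same arithmetic can be checked in a couple of lines (a sketch; the variable names are mine):

```matlab
% Dilution: the mass of solute is conserved, so c1*V1 = c2*V2 (in %m/v).
c1 = 10;  V1 = 50;  V2 = 500;   % initial %m/v, initial volume (mL), final volume (mL)
c2 = c1 * V1 / V2               % new concentration: 1.00 %m/v
```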
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8570941090583801, "perplexity": 1108.0193737540437}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738653.47/warc/CC-MAIN-20200810072511-20200810102511-00180.warc.gz"}
http://mathonline.wikidot.com/tangent-planes-to-level-surfaces
# Tangent Planes to Level Surfaces

We have already looked at computing tangent planes to surfaces described by a two variable function $z = f(x, y)$ on the Finding a Tangent Plane on a Surface page. This method is convenient when the variable $z$ is isolated from the variables $x$ and $y$, as we can apply the following formula to obtain the tangent plane at a point $P(x_0, y_0, z_0)$:

(1)
\begin{align} \quad z - z_0 = f_x (x_0, y_0) (x - x_0) + f_y (x_0, y_0) (y - y_0) \quad \mathrm{or} \quad z - z_0 = \frac{\partial}{\partial x} f(x_0, y_0) (x - x_0) + \frac{\partial}{\partial y} f(x_0, y_0) (y - y_0) \end{align}

If $z$ is not isolated from the variables $x$ and $y$, then finding the tangent plane at a point can be messy with this method, so instead, we will view these surfaces as "level surfaces" of a three variable function.

Let $w = f(x, y, z)$ be a three variable real-valued function, and consider the equation $f(x, y, z) = k$ where $k \in \mathbb{R}$. The equation $f(x, y, z) = k$ represents a level surface corresponding to the real number $k$ (this is analogous to obtaining level curves for functions of two variables). Let $P(x_0, y_0, z_0)$ be a point on this level surface, and let $C$ be any curve that passes through $P$ and that is on the surface $S$. This curve $C$ can be parameterized as a vector-valued function $\vec{r}(t) = (x(t), y(t), z(t))$. Let $P$ correspond to $t_0$, that is $\vec{r}(t_0) = (x_0, y_0, z_0)$.

Now since the curve $C$ is on the surface $S$, we must have that $f(x(t), y(t), z(t)) = k$ for any defined $t$ for the curve $C$. Suppose that $x = x(t)$, $y = y(t)$, and $z = z(t)$ are differentiable functions. If we apply the chain rule to both sides of the equation $f(x(t), y(t), z(t)) = k$ we get that:

(2)
\begin{align} \quad \frac{\partial w}{\partial x} \frac{dx}{dt} + \frac{\partial w}{\partial y} \frac{dy}{dt} + \frac{\partial w}{\partial z} \frac{dz}{dt} = 0 \\ \quad \left ( \frac{\partial w}{\partial x}, \frac{\partial w}{\partial y}, \frac{\partial w}{\partial z} \right ) \cdot \left (\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt} \right ) = 0 \\ \quad \nabla f(x, y, z) \cdot \vec{r'}(t) = 0 \end{align}

Therefore, the gradient vector $\nabla f(x, y, z)$ and the derivative (tangent) vector $\vec{r'}(t)$ are perpendicular since their dot product is equal to $0$. Therefore, at any $t_0$, we have that:

(3)
\begin{align} \quad \nabla f(x_0, y_0, z_0) \cdot \vec{r'}(t_0) = 0 \end{align}

Since $\nabla f(x_0, y_0, z_0)$ is perpendicular to the tangent vector at point $P$ (corresponding to $t_0$) and passes through the point $P(x_0, y_0, z_0)$, then we can obtain the equation of the tangent plane at $P$ as:

(4)
\begin{align} \quad \quad \nabla f(x_0, y_0, z_0) \cdot (x - x_0, y - y_0, z - z_0) = 0 \\ \quad \quad \left ( \frac{\partial}{\partial x} f(x_0, y_0, z_0), \frac{\partial}{\partial y} f(x_0, y_0, z_0), \frac{\partial}{\partial z} f(x_0, y_0, z_0) \right ) \cdot (x - x_0, y - y_0, z - z_0) = 0 \\ \quad \quad \frac{\partial}{\partial x} f(x_0, y_0, z_0) (x - x_0) + \frac{\partial}{\partial y} f(x_0, y_0, z_0) (y - y_0) + \frac{\partial}{\partial z} f(x_0, y_0, z_0) (z - z_0) = 0 \end{align}

## Example 1

Find the equation of the tangent plane to the sphere $x^2 + y^2 + z^2 = 16$ at $(1, 2, \sqrt{11})$.

We note that the sphere $x^2 + y^2 + z^2 = 16$ is the level surface of the function $w = f(x,y,z) = x^2 + y^2 + z^2$ when $k = 16$.
The partial derivatives of $f$ are $\frac{\partial w}{\partial x} = 2x$, $\frac{\partial w}{\partial y} = 2y$, and $\frac{\partial w}{\partial z} = 2z$. Therefore we have that $\frac{\partial}{\partial x} f(1, 2, \sqrt{11}) = 2$, $\frac{\partial}{\partial y} f(1, 2, \sqrt{11}) = 4$, and $\frac{\partial}{\partial z} f(1, 2, \sqrt{11}) = 2 \sqrt{11}$. Applying these values to the formula above, we get that the equation of the tangent plane is:

(5)
\begin{align} \quad 2(x - 1) + 4(y - 2) + 2 \sqrt{11}(z - \sqrt{11}) = 0 \end{align}
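As a quick sanity check on Example 1, here is a small SymPy sketch (my own addition; it assumes SymPy is available) that recomputes the gradient and the tangent plane:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2                      # the level surface is f = 16

p0 = {x: 1, y: 2, z: sp.sqrt(11)}           # point on the surface: 1 + 4 + 11 = 16
grad = [sp.diff(f, v) for v in (x, y, z)]   # (2x, 2y, 2z)

plane = sum(g.subs(p0) * (v - p0[v]) for g, v in zip(grad, (x, y, z)))
print(sp.expand(plane))                     # 2*x + 4*y + 2*sqrt(11)*z - 32, i.e. (5) expanded
```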
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996461868286133, "perplexity": 186.87983010834589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00152-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.maplesoft.com/support/help/maple/view.aspx?path=Maplets/Elements/BoxCell&L=E
Box Cell - Maple Help

Maplets[Elements]

BoxCell: specify a cell in a box column, box layout, or box row

Calling Sequence

BoxCell(opts)

Parameters

opts - equation(s) of the form option=value where option is one of halign, hscroll, valign, value, or vscroll; specify options for the BoxCell element

Description

• The BoxCell layout element specifies an entry in a BoxColumn, BoxLayout, or BoxRow element. The contents of the cell are specified by using the value option.
• For horizontal control in a box layout, use the BoxRow element. For vertical control in a box layout, use the BoxColumn element.
• The BoxCell element features can be modified by using options. To simplify specifying options in the Maplets package, certain options and contents can be set without using an equation. The following table lists elements, symbols, and types (in the left column) and the corresponding option or content (in the right column) to which inputs of this type are, by default, assigned.

  Elements, Symbols, or Types     Assumed Option or Content
  always, as_needed, or never     hscroll and vscroll options
  left or right                   halign option
  string or symbol                value option
  top or bottom                   valign option

• A BoxCell element can contain BoxLayout, GridLayout, BorderLayout, or window body elements to specify the value option.
• A BoxCell element can be contained in a BoxColumn or BoxRow element, or Maplet element in a nested list representing a box layout.
• The following table describes the control and use of the BoxCell element options. An x in the I column indicates that the option can be initialized, that is, specified in the calling sequence (element definition). An x in the R column indicates that the option is required in the calling sequence. An x in the G column indicates that the option can be read, that is, retrieved by using the Get tool. An x in the S column indicates that the option can be written, that is, set by using the SetOption element or the Set tool.

  Option    I   R   G   S
  halign    x
  hscroll   x
  valign    x
  value     x
  vscroll   x

• The opts argument can contain one or more of the following equations that set Maplet options.

halign = left, center, right, or none
Horizontally aligns the cell when in a BoxRow. By default, the value is center. The none option can be used in combination with HorizontalGlue elements for finer control of the layout of cells in a row. For more detail, see BoxRow, and the example of this usage below.

hscroll = never, as_needed, or always
This option determines when a horizontal scroll bar appears in the box cell. By default, the value is never.

valign = top, center, bottom, or none
Vertically aligns the cell when in a BoxColumn. By default, the value is center. The none option can be used in combination with VerticalGlue elements for finer control of the layout of cells in a column. For more detail, see BoxColumn.

value = window body, BoxLayout, GridLayout, or BorderLayout element, or reference to such an element (name or string)
The Maplet element that appears in this cell.

vscroll = never, as_needed, or always
This option determines when a vertical scroll bar appears in the box cell. By default, the value is never.

Examples

A Maplet application in which element layout is controlled by using BoxCell elements.
> with(Maplets[Elements]):
> maplet := Maplet([[BoxCell("Hello", 'right')], [BoxCell(Button("Quit", Shutdown()), 'left')]]):
> Maplets[Display](maplet)

A Maplet application in which the halign=none option is used in BoxCell elements in combination with HorizontalGlue to achieve better control over the location of objects.

> maplet := Maplet(BoxLayout(BoxColumn(
>     BoxRow(BoxCell("Long text label to force alignment usage")),
>     BoxRow(BoxCell(Button("Left1", Shutdown()), 'halign' = 'none'), BoxCell(Button("Left2", Shutdown()), 'halign' = 'none'), HorizontalGlue()),
>     BoxRow(BoxCell(Button("Left", Shutdown()), 'halign' = 'none'), HorizontalGlue(), BoxCell(Button("Right", Shutdown()), 'halign' = 'none')),
>     BoxRow(HorizontalGlue(), BoxCell(Button("Right2", Shutdown()), 'halign' = 'none'), BoxCell(Button("Right1", Shutdown()), 'halign' = 'none'))))):
> Maplets[Display](maplet)

Note: This is different from the effect of using halign=left/right for the cells, as these options add space between consecutive elements with the same alignment.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8382619023323059, "perplexity": 2569.71953736907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00838.warc.gz"}
http://metamathological.blogspot.com/2013/05/maths-on-steroids-8.html
## Tuesday, 21 May 2013

### Maths on Steroids: 8

W0000! The Puzzle Hunt is over! Now I can finally get some work done. =P Well, maybe not straight-away...maybe after I watch the entire first season of Pendleton Ward's (*) Bravest Warriors.

Blogging (**) is strange: there's something disarmingly private (even diary-esque?) about writing up a post, especially given how open blogging is as a medium. Okay...okay...okay...fair enough: no-one's actually going to ever read it, but there's still something sort of...umm...faintly and intangibly Huxleyan about all this. Or not. Hmm...whatevs, - it's not like I know what I'm on about. Err...let's just do some maths.

I mentioned at the end of tute 8 that I would write up a solution to a non-standard Gram-Schmidt orthonormalisation question in the yellow book. Something like question 29:

Let $$P_2$$ be the vector space of polynomials of degree at most two with the inner product\begin{align}\langle p,q\rangle :=\int_{-1}^{1}p(x)q(x)\,\mathrm{d}x.\end{align}Obtain an orthonormal basis for $$P_2$$ from the basis $$\{1,x,x^2\}$$ using the Gram-Schmidt process.

To begin with, we just pick one of the three vectors and normalise it. Let's use the vector $$1\in P_2$$. We know that the magnitude of this vector is:\begin{align}||1||^2=\langle1,1\rangle=\int_{-1}^{1}1\cdot1\,\mathrm{d}x=2.\end{align}Therefore, the first vector in our orthogonal basis is: $$\frac{1}{\sqrt{2}}\cdot1\in P_2$$.

The second step is to take another basis vector, like $$x\in P_2$$, and to first remove the $$1$$-component from this vector, before normalising the result to form a new vector. So, consider the vector\begin{align}x-\langle x,\frac{1}{\sqrt{2}}1\rangle\cdot\frac{1}{\sqrt{2}}\cdot1=x-(\frac{1}{2}\int_{-1}^1 x\,\mathrm{d}x)\cdot1=x-0\cdot1=x.\end{align}

At this point, we've got two orthogonal vectors: $$\frac{1}{\sqrt{2}}1$$ and $$x$$, and we just need to normalise this latter vector to obtain: \begin{align}\{\frac{1}{\sqrt{2}}1,\sqrt{\frac{3}{2}}x\}.\end{align}

In this last step, we take the final basis vector $$x^2\in P_2$$ and remove the $$1$$ and $$x$$-components from this vector, before normalising the result to form a new vector. So, consider the vector\begin{align}&x^2-\langle x^2,\frac{1}{\sqrt{2}}1\rangle\cdot\frac{1}{\sqrt{2}}1-\langle x^2,\sqrt{\frac{3}{2}}x\rangle\cdot\sqrt{\frac{3}{2}}x\\=&x^2-(\frac{1}{2}\int_{-1}^1x^2\,\mathrm{d}x)\cdot1-(\frac{3}{2}\int_{-1}^1x^3\,\mathrm{d}x)\cdot x=x^2-\frac{1}{3}.\end{align}

After normalisation, we obtain the Gram-Schmidt orthonormalised basis: \begin{align}\{\frac{1}{\sqrt{2}},\sqrt{\frac{3}{2}}x,\sqrt{\frac{45}{8}}(x^2-\frac{1}{3})\}.\end{align}

W000000. Note that the answer to this problem is non-unique, because the basis that results from the Gram-Schmidt process depends a lot upon the order in which we feed the original basis into the orthonormalisation procedure. For example, shoving in the vectors in the order $$x^2,x,1$$ results in the orthonormal basis: $$\{\sqrt{\frac{5}{2}}x^2,\sqrt{\frac{3}{2}}x,\sqrt{\frac{1}{8}}(3-5x^2)\}.$$

Well, that was that. The other thing that I promised to do was to give answers to the tute 9 lab sheets. Psst: Turns out that that's going to have to wait because I left the lab sheets at uni again.  >____<

*: the creator of Adventure Time!

**: actually, when you think about it, this is pretty applicable to Twitter, Facebook, YouTube, heck - it's applicable to pretty much anything without a password on the internet.
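P.S. If you want to double-check the orthonormality numerically, here's a little SymPy snippet (my own addition, not from the yellow book) that verifies all the inner products:

```python
import sympy as sp

x = sp.symbols('x')
inner = lambda p, q: sp.integrate(p * q, (x, -1, 1))  # the inner product above

basis = [1 / sp.sqrt(2),
         sp.sqrt(sp.Rational(3, 2)) * x,
         sp.sqrt(sp.Rational(45, 8)) * (x**2 - sp.Rational(1, 3))]

for i, p in enumerate(basis):
    for j, q in enumerate(basis):
        val = sp.simplify(inner(p, q))
        assert val == (1 if i == j else 0), (i, j, val)
print("orthonormal!")  # the same check passes for the basis from the x^2, x, 1 ordering
```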
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8485283255577087, "perplexity": 1093.3115612989936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510415.29/warc/CC-MAIN-20181016072114-20181016093614-00088.warc.gz"}
https://methods.sagepub.com/Reference/encyc-of-research-design/n509.xml
# z Score

z score, also called z value, normal score, or standard score, is the standardized value of a normal random variable. The difference between the value of an observation and the mean of the distribution is usually called the deviation from the mean of the observation. The z score is then a dimensionless quantity obtained by dividing the deviation from the mean of the observation by the standard deviation of the distribution. In the following, random variables are represented by uppercase letters such as X and Z, and the specific values of the random variable are represented by the corresponding lowercase letters such as x and z.

The Normal Distribution and Standardization

The normal distribution is a family of normal random variables. A normal random variable X with ...
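The standardization itself is a one-line computation; a small Python illustration (the example numbers are mine, not from the entry):

```python
def z_score(x, mean, std):
    # the deviation from the mean, measured in units of the standard deviation
    return (x - mean) / std

print(z_score(130, 100, 15))  # 2.0: this observation lies two standard deviations above the mean
```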
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000367283821106, "perplexity": 2005.4525114651874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00010.warc.gz"}
https://www.physicsforums.com/threads/collision-question-involving-velocity-kinetic-energy-and-conservation.753136/
# Collision question involving velocity, Kinetic energy and conservation

1. May 11, 2014

### selsunblue

1. The problem statement, all variables and given/known data

A 50 gram steel ball moving on a frictionless horizontal surface at 2.0 m.s^-1 hits a stationary 20 gram ceramic ball. After the collision the ceramic ball moves off at a velocity of 2.5 m.s^-1.

(i) Calculate the velocity of the steel ball after the collision.
(ii) Calculate the total kinetic energy of the balls before the collision and again after the collision.
(iii) From your results in part (ii) has the kinetic energy been conserved? If not, where has this energy gone?

2. Relevant equations

initial momentum = final momentum

3. The attempt at a solution

For the first sub-question I got initial momentum = 50*2 = 100 and final momentum I got 50v + 2.5(20). Equating these I got 100 = 50v + 50 => v = 1 m.s^-1. Would the 20 be correct in that calculation of final momentum?

initial KE = ½ * 50 * 2^2 = 100
Final KE = ½ * 50 * v^2 + ½ * m * 2.5^2

Last edited: May 11, 2014

2. May 11, 2014

### voko

Where does "20" come from? It was not specified in the statement of the problem.

3. May 11, 2014

### selsunblue

Oh, it's the mass of the stationary ceramic ball, sorry about that, forgot to write it in I guess. Would it be 50 or 20 in this case?

4. May 11, 2014

### voko

Then your solution for the steel ball's velocity after the collision is correct.

5. May 11, 2014

### Nathanael

Yes, but it should be remembered that the results are not in standard units (joules) since you didn't make the conversion from grams to kilograms. The answers would have to be divided by 1000 to be in joules (it's a good habit to pay attention to units).
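For reference, the whole computation in SI units can be checked with a short Python sketch (values from the problem statement):

```python
m_steel, m_ceramic = 0.050, 0.020        # kg
v_steel_before = 2.0                     # m/s
v_ceramic_after = 2.5                    # m/s

# momentum conservation: m1*v1 = m1*v1' + m2*v2'
v_steel_after = (m_steel * v_steel_before - m_ceramic * v_ceramic_after) / m_steel
print(v_steel_after)                     # 1.0 m/s

ke_before = 0.5 * m_steel * v_steel_before**2
ke_after = 0.5 * m_steel * v_steel_after**2 + 0.5 * m_ceramic * v_ceramic_after**2
print(ke_before, ke_after)               # 0.100 J vs 0.0875 J: kinetic energy is not conserved
```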
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8393332958221436, "perplexity": 1156.0868702095174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647322.47/warc/CC-MAIN-20180320072255-20180320092255-00165.warc.gz"}
https://indico.cern.ch/event/732911/contributions/3152187/
DISCRETE 2018 26-30 November 2018 Europe/Vienna timezone CMS searches for pair production of charginos and top squarks in final states with two oppositely charged leptons 28 Nov 2018, 14:50 25m Johannessaal Johannessaal Austrian Academy of Sciences, Dr.-Ignaz-Seipel-Platz 2, 1010 Vienna, AUSTRIA Non-Invited Talk [6] Supersymmetry Speaker Barbara Chazin Quero (Universidad de Cantabria (ES)) Description A search for pair production of supersymmetric particles in events with two oppositely charged leptons and missing transverse momentum is reported. The data sample corresponds to an integrated luminosity of 35.9/fb of proton-proton collisions at 13 TeV collected with the CMS detector during the 2016 data taking period at the LHC. No significant deviation is observed from the predicted standard model background. The results are interpreted in terms of several simplified models for chargino and top squark pair production, assuming R-parity conservation and with the neutralino as the lightest supersymmetric particle. Content of the contribution Experiment Primary author Barbara Chazin Quero (Universidad de Cantabria (ES))
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9779683947563171, "perplexity": 4584.0228442140515}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668682.16/warc/CC-MAIN-20191115144109-20191115172109-00155.warc.gz"}
https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:transformations/x2ec2f6f830c9fb89:symmetry/v/even-and-odd-functions-tables
# Even and odd functions: Tables

## Video transcript

- [Instructor] We're told this table defines function f. All right. For every x, they give us the corresponding f of x. According to the table, is f even, odd, or neither? So pause this video and see if you can figure that out on your own. All right, now let's work on this together. So let's just remind ourselves the definition of even and odd. One definition that we can think of is that f of x, if f of x is equal to f of negative x, then we're dealing with an even function. And if f of x is equal to the negative of f of negative x, or another way of saying that, if f of negative x. If f of negative x, instead of it being equal to f of x, it's equal to negative f of x. These last two are equivalent. Then in these situations, we are dealing with an odd function. And if neither of these are true, then we're dealing with neither. So what about what's going on over here? So let's see. F of negative seven is equal to negative one. What about f of the negative of negative seven? Well, that would be f of seven. And we see f of seven here is also equal to negative one. So at least in that case and that case, if we think of x as seven, f of x is equal to f of negative x. So it works for that. It also works for negative three and three. F of three is equal to f of negative three. They're both equal to two. And you can see and you can kind of visualize in your head that we have symmetry around the y-axis. And so this looks like an even function. So I will circle that in. Let's do another example. So here, once again, the table defines function f. It's a different function f. Is this function even, odd, or neither? So pause this video and try to think about it. All right, so let's just try a few examples. So here we have f of five is equal to two. F of five is equal to two. What is f of negative five? F of negative five. Not only is it not equal to two, it would have to be equal to two if this was an even function. And it would be equal to negative two if this was an odd function, but it's neither. So we very clearly see just looking at that data point that this can neither be even, nor odd. So I would say neither or neither right over here. Let's do one more example. Once again, the table defines function f. According to the table, is it even, odd, or neither? Pause the video again. Try to answer it. All right, so actually let's just start over here. So we have f of four is equal to negative eight. What is f of negative four? And the whole idea here is I wanna say, okay, if f of x is equal to something, what is f of negative x? Well, they luckily give us f of negative four. It is equal to eight. So it looks like it's not equal to f of x. It's equal to the negative of f of x. This is equal to the negative of f of four. So on that data point alone, at least that data point satisfies it being odd. It's equal to the negative of f of x. But now let's try the other points just to make sure. So f of one is equal to five. What is f of negative one? Well, it is equal to negative five. Once again, f of negative x is equal to the negative of f of x. So that checks out. And then f of zero, well, f of zero is of course equal to zero.
But of course if you say what is the negative of f of, if you say what f of negative of zero, well, that's still f of zero. And then if you were to take the negative of zero, that's still zero. So you could view this. This is consistent still with being odd. This you could view as the negative of f of negative zero, which of course is still going to be zero. So this one is looking pretty good that it is odd.
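The table checks in the video are easy to automate; here is a small Python sketch (the classify helper is my own, and the tables are transcribed from the examples in the video):

```python
def classify(table):
    """table maps x -> f(x); assumes -x is tabulated whenever x is."""
    even = all(table[-x] == fx for x, fx in table.items())
    odd = all(table[-x] == -fx for x, fx in table.items())
    return "even" if even else "odd" if odd else "neither"

print(classify({-7: -1, -3: 2, 3: 2, 7: -1}))         # even (first example)
print(classify({-4: 8, -1: -5, 0: 0, 1: 5, 4: -8}))   # odd  (third example)
```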
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8334291577339172, "perplexity": 314.2040625588542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401614309.85/warc/CC-MAIN-20200928202758-20200928232758-00527.warc.gz"}
https://blog.plover.com/math/pi-approx-2.html
# The Universe of Discourse

Tue, 14 Feb 2006

More approximations to pi

In an earlier post I discussed the purported Biblical approximation to π, and the verses that supposedly equate it to 3. Eli Bar-Yahalom wrote in to tell me of a really fascinating related matter. He says that the word for "perimeter" is normally written "QW", but in the original, canonical text of the book of Kings, it is written "QWH", which is a peculiar (mis-)spelling. (M. Bar-Yahalom sent me the Hebrew text itself, in addition to the Romanizations I have shown, but I don't have either a Hebrew terminal or web browser handy, and in any event I don't know how to type these characters. Q here is qoph, W is vav, and H is hay.) M. Bar-Yahalom says that the canonical text also contains a footnote, which explains the peculiar "QWH" by saying that it represents "QW". The reason this is worth mentioning is that the Hebrews, like the Greeks, made their alphabet do double duty for both words and numerals. The two systems were quite similar. The Greek one went something like this:

Α 1    Κ 10    Τ 100
Β 2    Λ 20    Υ 200
Γ 3    Μ 30    Φ 300
Δ 4    Ν 40    Χ 400
Ε 5    Ξ 50    Ψ 500
Ζ 6    Ο 60    Ω 600
Η 7    Π 70
Θ 8    Ρ 80
Ι 9    Σ 90

This isn't quite right, because the Greek alphabet had more letters then, enough to take them up to 900. I think there was a "digamma" between Ε and Ζ, for example. (This is why we have F after E. The F is a descendant of the digamma. The G was put in in place of Ζ, which was later added back at the end, and the H is a descendant of Η.) But it should give the idea. If you wanted to write the number 172, you would use ΒΠΤ. Or perhaps ΤΒΠ. It didn't matter. Anyway, the Hebrew system was similar, only using the Hebrew alphabet. So here's the point: "QW" means "circumference", but it also represents the number 106. (Qoph is 100; vav is 6.) And the odd spelling, "QWH", also represents the number 111. (Hay is 5.) So the footnote could be interpreted as saying that the 106 is represented by 111, or something of the sort. Now it so happens that 111/106 is a highly accurate approximation of π/3. π/3 is 1.04719755 and 111/106 is 1.04716981. And the value cited for the perimeter, 30, is in fact accurate, if you put 111 in place of 106, by multiplying it by 111/106. It's really hard to know for sure. But if true, I wonder where the Hebrews got hold of such an accurate approximation? Archimedes pushed it as far as he could, by calculating the perimeters of 96-sided polygons that were respectively inscribed within and circumscribed around a unit circle, and so calculated that 223/71 < π < 22/7. Neither of these fractions is as good an approximation as 333/106. Thanks very much, M. Bar-Yahalom.
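The arithmetic is easy to check; here is a short Python comparison of the fractions mentioned above (my own addition):

```python
from math import pi

print(f"pi/3    = {pi / 3:.8f}")
print(f"111/106 = {111 / 106:.8f}")   # agrees with pi/3 to about 3e-5

for num, den in [(333, 106), (22, 7), (223, 71)]:
    print(f"|pi - {num}/{den}| = {abs(pi - num / den):.2e}")
# 333/106 is off by ~8.3e-5; 22/7 by ~1.3e-3; 223/71 by ~7.5e-4
```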
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344792366027832, "perplexity": 1698.837928213414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888878.44/warc/CC-MAIN-20180120023744-20180120043744-00477.warc.gz"}
http://math.stackexchange.com/questions/20809/trouble-deriving-the-harris-corner-detection
Trouble deriving the Harris Corner Detection

I just started studying a small paper about the Harris Corner Detection. The problem is I don't understand how step 7 is derived from step 6. In step 7 the expression is expanded in a way that we get a structure tensor $C$ for $x$ and $y$. If one multiplies the three matrices out again, I see that we end up with step 6 (so step 7 is correct). However I do not see, given step 6, how one can derive step 7.

-

$$\big(a^Tb\big)^2 = \big(a^Tb\big)\big(a^Tb\big) = \big(b^Ta\big)\big(a^Tb\big) = b^T\big(aa^T\big)b.$$

Didn't know this identity: $$\big(a^Tb\big)\big(a^Tb\big) = \big(b^Ta\big)\big(a^Tb\big)$$ thanks! – Nils Feb 7 '11 at 10:16

@Nils: It's merely the fact that the dot product is commutative: $a^Tb = \sum a_i b_i = b^Ta$. – Rahul Feb 7 '11 at 10:57
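A quick numerical spot-check of the identity in NumPy (my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), rng.standard_normal(5)

lhs = (a @ b) ** 2               # (a^T b)^2
rhs = b @ np.outer(a, a) @ b     # b^T (a a^T) b
print(np.isclose(lhs, rhs))      # True
```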
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9850397706031799, "perplexity": 228.06416662900793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00070-ip-10-164-35-72.ec2.internal.warc.gz"}
http://mathhelpforum.com/number-theory/122112-how-many-elements-order-2-there-product-groups.html
# Math Help - How many elements of order 2 are there in this product of groups?

1. ## How many elements of order 2 are there in this product of groups?

How many elements of order 2 are there in the following abelian group of order 16: Z2 x Z2 x Z4, where Z2 is the integers mod 2 and Z4 is the integers mod 4? How are these found?

Thanks for any help

2. Originally Posted by Siknature
How many elements of order 2 are there in the following abelian group of order 16: Z2 x Z2 x Z4, where Z2 is the integers mod 2 and Z4 is the integers mod 4? How are these found? Thanks for any help

In this case perhaps it is easier to ask how many elements are NOT of order 2: first, note that every non-unit element is of order 2 or 4. Now, if we agree to write the elements of the group in the form $(x,y,z)$, with $x,y = 0,1 \pmod 2$ and $z = 0,1,2,3 \pmod 4$, then an element has order 4 iff it has either $1$ or $3 \pmod 4$ in the last entry... can you now count them all?

Tonio

3. Originally Posted by tonio
can you now count them all?

Thanks, yes, now I think that this means the answer must be 7 (8 if we include the identity)
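A brute-force count in Python (my own check, not part of the original thread) confirms the answer of 7:

```python
from itertools import product

def order(e):
    """Order of e = (x, y, z) in Z2 x Z2 x Z4 under componentwise addition."""
    n, cur = 1, e
    while cur != (0, 0, 0):
        cur = ((cur[0] + e[0]) % 2, (cur[1] + e[1]) % 2, (cur[2] + e[2]) % 4)
        n += 1
    return n

elements = product(range(2), range(2), range(4))
print(sum(1 for e in elements if order(e) == 2))  # 7
```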
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410894274711609, "perplexity": 299.03078636532297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440646242843.97/warc/CC-MAIN-20150827033042-00057-ip-10-171-96-226.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/112317/how-can-i-find-the-expected-value-due-to-the-trials-instead-of-simply-expected-v
# How can I find the expected value due to the trials instead of simply expected value of trials of a geometric distribution?

Suppose in a game, if I win in the $j$th round, I gain $+\$2^{j-1}$ and if I don't win in the $j$th, I lose $-\$2^{j-1}$. If I lose, I will keep playing until I win. Once I win, I leave the game. Otherwise, I continue to play until 30 rounds and leave the game even if I don't win. In other words, I will just stop at the $30$th round. Each round is independent and the probability of winning in each round is $\frac{9}{13}$.

I let $X$ be a random variable of my winnings. I want to find my expected winnings. Since it works in a way that I would stop at my first win, I have a feeling that $X$ should be distributed over the Geometric Distribution. The game is either a win or a lose, so it is pretty much like a Bernoulli trial. But since I am not getting the expected number of rounds played, the Bernoulli trials cannot be just $0$ or $1$. So I thought I could modify it to become this way: $${ X }_{ j }=\left\{\begin{matrix} +2^{j-1} & if\; win\\ -2^{j-1} & if\; lose \end{matrix}\right.$$ Then, $E(X)=E(X_1+X_2+\cdots +X_{30})=E(X_1)+E(X_2)+\cdots +E(X_{30})$

However, because I thought this is a Geometric Distribution, the expected value is just $\frac{1}{p}=\frac{13}{9}$. But I don't think the expected winnings is $\frac{13}{9}$, so this must be wrong. Is what I have done correct? So, instead of the usual finding of the expected number of trials of a standard geometric distribution, how can I find the expectation of another quantity due to the trials (in this case, the expected winnings from the trials)?

Edit: What I attempted to do was to make use of an indicator function to determine the expectation. But it doesn't seem successful.

-

The ideas in your post can be used to produce a nice solution, which will be given later. But the expected length of the game, which is almost but not quite $\frac{13}{9}$, because of the cutoff at $30$, has I think not much bearing on the problem.

First Solution: Let's do some calculations. If we win on the first round, we win $1$ dollar and leave. If we lose on the first and win on the second, we lost $1$ dollar but won $2$, for a net of $1$. If we lose on the first two rounds, and win on the third, we have lost $1+2$ dollars, and won $4$, for a net of $1$. Reasoning in the same way, we can see that we either leave with $1$ dollar, or lose a whole bunch of money, namely $1+2+4+\cdots+2^{29}$, which is $2^{30}-1$. The reason that if we win, our net gain is $1$, is that if we win on the $i$-th round, we have lost $1+2+\cdots +2^{i-2}$ and won $2^{i-1}$. Since $1+2+\cdots +2^{i-2}=2^{i-1}-1$, our net gain is $1$. The probability we lose an enormous amount of money is $\left(\frac{4}{13}\right)^{30}$ ($30$ losses in a row). Now the expectation is easy to find. It is $$(1)\left(1-\left(\frac{4}{13}\right)^{30}\right)- (2^{30}-1)\left(\frac{4}{13}\right)^{30}.$$ There is a bit of cancellation. The expectation simplifies to $$1-2^{30}\left(\frac{4}{13}\right)^{30}.$$

Second Solution: We give a much more attractive solution based on the idea of your post. For $j=1$ to $30$, let $X_j$ be the amount "won" on the $j$-th trial. Then the total amount $X$ won is $X_1+\cdots+X_{30}$. Thus, by the linearity of expectation, $$E(X)=\sum_{j=1}^{30} E(X_j).$$ The $X_j$ are not independent, but that is irrelevant. Let $p=4/13$. We win $2^{j-1}$ at stage $j$ with probability $p^{j-1}(1-p)$, and lose $2^{j-1}$ with probability $p^{j-1}p$.
So $$E(X_j)=2^{j-1}p^{j-1}(1-2p).$$ Now sum, from $j=1$ to $j=30$. The sum is $$(1-2p)\frac{1-(2p)^{30}}{1-2p}, \quad\text{that is,}\quad 1-\left(\frac{8}{13}\right)^{30}.$$

-

Thanks! I think if we look at it this way, then the expectation will be $(2^{30}-1)\left(\frac{4}{13}\right)^{30}+1(1-\left(\frac{4}{13}\right)^{30})$, is this right? Does that mean that I will not be able to use the linearity method to solve this problem? – xenon Feb 23 '12 at 6:26

@xEnOn: I earlier made a sign typo, and so have you. The second part is right, but the first should have a $-$ sign in front, since we lose $2^{30}-1$. I think my post at this moment is (after a few corrections!) fairly typo-free. There is a smoother way to do it using indicator functions, but it is late, and my brain has stopped working. – André Nicolas Feb 23 '12 at 6:44

Oh yes... there should be a minus sign in front because that is a loss. Thanks! And I think what I attempted to do earlier was to use what you said as the indicator functions method. I should have used this term in my question though. I didn't know the name of the method. Thanks for telling. It will be interesting to know how I could use the indicator function way because there could be cases where the net profit doesn't cancel out so nicely to just \$1. – xenon Feb 23 '12 at 7:07

Would you have time to show how this problem could be done with an indicator function later in the day? I kept trying for many hours but still couldn't figure out how I should do it. :( Thanks for your help. – xenon Feb 23 '12 at 12:09

@xEnOn: I wrote out the much nicer solution that goes along the line you tried. Had done it before, but made an arithmetical error, was too tired to figure out what was happening. An inessential variant uses $Y_j=1$ if we win on the $j$-th trial, $-1$ if we lose, and $X=\sum 2^{j-1}Y_j$. – André Nicolas Feb 23 '12 at 19:44
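Both solutions are easy to check numerically; here is a short Python sketch (my own addition) that computes the sum of the $E(X_j)$ and the closed form:

```python
p = 4 / 13   # probability of losing a round

# second solution: sum of E(X_j) = 2**(j-1) * p**(j-1) * (1 - 2p)
direct = sum(2 ** (j - 1) * p ** (j - 1) * (1 - 2 * p) for j in range(1, 31))

# first solution, simplified: 1 - (2p)**30 = 1 - (8/13)**30
closed = 1 - (8 / 13) ** 30

print(direct, closed)   # both approximately 0.9999995
```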
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9836899638175964, "perplexity": 554.9901072871313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775080.16/warc/CC-MAIN-20141217075255-00055-ip-10-231-17-201.ec2.internal.warc.gz"}
http://pseudomonad.blogspot.com/2011/06/theory-update-85.html
## Monday, June 6, 2011 ### Theory Update 85 Back at the start of our M Theory lessons, when we were discussing twistor amplitudes, associahedra, motivic cohomology and Kepler's law, we were always thinking about ternary analogues of Stone duality. Recall that a special object in the (usual) category of topological spaces is specified by a two point lattice, because every space contains the empty set (a zero) and the whole space itself (a one). In a sense, this is the most basic arrow ($0 \rightarrow 1$) of category theory, since much mathematical industry is motivated by classical topology. The requirements of M Theory*, however, are different. Recall, once more, that if we cannot begin with space, which after all is an emergent concept, we cannot begin with the classical symmetries that act upon space, or with simple generalisations such as traditional supersymmetry. The imposition of such symmetries from the outset simply does not make any sense. The groups that physicists like, such as $SU(2)$ and $SU(3)$, are very basic categories, with one object and nice properties, so we should not fear that they will disappear into the mist, never to be recovered. There are a number of ternary analogues for the arrow, but what really is the ternary analogue of its self dual property (ie. true triality)? We have drawn triangles and cubes and globule triangles. We could draw Kan extensions. As a minimum, we expect three dualities for the sides of the ternary triangle (let us call them S, T and U), but what truly three dimensional element appears for a self ternary arrow? For a start, the Gray tensor product of Crans will be busy generating higher dimensional arrows for us. Since a classical space, properly described, ought to be an infinite dimensional category, we would like to make use of arrow generation in its definition. But M Theory* requires even more. The dualities S, T and U do not merely reflect the properties of a classical space. Already, quantum information takes precedence. We let our lone arrow stand for the noncommutative world, and look further into the nonassociative one for the meaning of ternary. And here at last, categorical weakness is forced upon us, unbid. The associahedra provide the simplest possible object (a $1$-operad) for describing nonassociativity (of alphabets). There exists a vast collection of such combinatorial structures, of higher and higher information dimension. *This term, as always, will refer here to the correct form of M Theory, and not to some offshoot of crackpot stringy physics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8160178065299988, "perplexity": 970.3982480414976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867374.98/warc/CC-MAIN-20180526073410-20180526093410-00094.warc.gz"}
http://tex.stackexchange.com/questions/12638/how-to-change-catcode-in-a-macro
# How to change catcode in a macro

I want to change the catcode in a macro, but it seems it doesn't work. Can anyone help me?

\def\A{\catcode`\|=0 |bf{test}}

|bf{test} would not work as expected.

-

Li: A tip: If you indent lines by 4 spaces, then they are marked as a code sample. You can also highlight the code and click the "code" button (with "101010" on it). – lockstep Mar 4 '11 at 9:21

This doesn't work because the | was already read as part of the argument of \def\A and therefore already has its catcode before the included \catcode is executed. You need to move the catcode change out of the macro:

\begingroup
\catcode`\|=0
\gdef\A{|bf{test}}
\endgroup

There are also other ways to do it: eTeX provides \scantokens, which re-reads its content so that the catcodes are reapplied, and there is a trick to do it using \lowercase.

Note that in this example it actually makes no difference if \ or | is used. It would if you also changed the catcode of \ to something else. If you tell us more about your exact application, more specific answers can be given. Also note that your code example would make the catcode change active for the rest of the group \A is used in, which is most likely not what you intend.

-

Thank you. I know in general, I need to use this code outside of a macro. But I wish I could write a macro which reads a stream beginning with \ (such as \test), then stores test (without the \) somewhere. So, if I use

\begingroup
\catcode`\|=0
\catcode`\\=12
|@tfor|B:=\test|input|do{\if \|B |relax |else do something|fi}

– Kuang-Li Huang Mar 4 '11 at 16:12

@Kuang-Li Huang: You can convert a macro like \test read by a macro as argument #1 into test using \expandafter\@gobble\string#1. Note that \string returns the following token (e.g. a macro) as its string representation, e.g. the macro \test as the string "\test". The \@gobble then removes the \. The \expandafter is required to expand \string before \@gobble. Alternatively you can change the \escapechar variable, which tells \string which character to place for the \. – Martin Scharrer Mar 4 '11 at 16:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503605365753174, "perplexity": 2450.461008628525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776425157.62/warc/CC-MAIN-20140707234025-00018-ip-10-180-212-248.ec2.internal.warc.gz"}
https://www.ijert.org/acoustic-echo-cancellation-using-variable-step-size-nlms-algorithms
# Acoustic Echo Cancellation using Variable Step-Size NLMS Algorithms

GOPALAIAH
Research Scholar, Dayananda Sagar College of Engineering
Bangalore, India
[email protected]

Dr. K. SURESH
Professor, Sri Darmasthala Manjunatheswara Institute of Technology
Ujire, India
[email protected]

Abstract: The purpose of a variable step-size normalized LMS filter is to solve the dilemma of fast convergence rate and low excess MSE. In the past two decades, many VSS-NLMS algorithms have been presented and have claimed that they have good convergence and tracking properties. This paper summarizes several promising algorithms and gives a performance comparison via extensive simulation. Simulation results demonstrate that Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.

Index Terms: Adaptive filters, normalized least mean square (NLMS), variable step-size NLMS, regularization parameter.

1. INTRODUCTION

Adaptive filtering algorithms have been widely employed in many signal processing applications. Among them, the normalized least mean square (NLMS) adaptive filter is the most popular due to its simplicity. The stability of the basic NLMS is controlled by a fixed step-size constant µ, which also governs the rate of convergence, the speed of tracking, and the amount of steady-state excess mean-square error (MSE). In practice, the NLMS is usually implemented by adding to the squared norm of the input vector a small positive number ε, commonly called the regularization parameter. For the basic NLMS algorithm, the role of ε is to prevent the associated denominator from getting too close to zero, so as to keep the filter from divergence. Since the performance of the ε-NLMS is affected by the overall step-size parameter, the regularization parameter has an effect on convergence properties and the excess MSE as well; i.e., a too big ε may slow down the adaptation of the filter in certain applications. There are conflicting objectives between fast convergence and low excess MSE for an NLMS with a fixed regularization parameter.

In the past two decades, many variable step-size NLMS (VSS-NLMS) algorithms have been proposed to solve this dilemma of the conventional NLMS. For example, Kwong used the power of the instantaneous error to introduce a variable step-size LMS (VSSLMS) filter [6]. This VSSLMS has a larger step size when the error is large, and a smaller step size when the error is small. Later Aboulnasr pointed out that the VSSLMS algorithm is fairly sensitive to the accompanying noise, and presented a modified VSSLMS (MVSS) algorithm to alleviate the influence of uncorrelated disturbance. The step-size update of MVSS is adjusted by utilizing an estimate of the autocorrelation of errors at adjacent time samples. Recently Shin, Sayed, and Song used the norm of the filter coefficient error vector as a criterion for the optimal variable step size, and proposed a variable step-size affine projection algorithm (VS-APA), and a variable step-size NLMS (VS-NLMS) as well. Lately Benesty proposed a nonparametric VSS NLMS algorithm (NPVSS), which does not need to tune as many parameters as many other variable step-size algorithms do.

Another type of VSS algorithm has a time-varying regularization parameter, which is fixed in the conventional ε-NLMS filters. By making the regularization parameter gradient-adaptive, Mandic presented a generalized normalized gradient descent (GNGD) algorithm.
Mandic claimed that the GNGD adapts its learning rate according to the dynamics of the input signals, and that its performance is bounded from below by the performance of the NLMS. Very recently, Mandic introduced another scheme with a hybrid filter structure to further improve the steady-state misadjustment of the GNGD. Choi, Shin, and Song then proposed a robust regularized NLMS (RR-NLMS) filter, which uses a normalized gradient to update the regularization parameter. While most variable step-size algorithms need to tune several parameters for better performance, we recently presented an almost tuning-free generalized square-error-regularized NLMS algorithm (GSER). Our GSER exhibits very good performance with fast convergence and quick tracking.

The purpose of this paper is to provide a fair comparison among these VSS algorithms. In Section II, we summarize the algorithms. Section III illustrates the simulation results. Conclusions are given in Section IV.

2. VARIABLE STEP-SIZE ALGORITHMS

In this section, we summarize several variable step-size adaptive filtering algorithms, including the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms. Let d(n) be the desired response signal of the adaptive filter,

d(n) = xᵀ(n)h(n) + v(n) , (1)

where h(n) denotes the coefficient vector of the unknown system with length M,

h(n) = [h₀(n), h₁(n), ..., h_{M-1}(n)]ᵀ , (2)

x(n) is the input vector,

x(n) = [x(n), x(n-1), ..., x(n-M+1)]ᵀ , (3)

and v(n) is the system noise that is independent of x(n). Let the adaptive filter have the same structure and the same order as the unknown system. Denoting the coefficient vector of the filter at iteration n as w(n), we express the a priori estimation error as

e(n) = d(n) - xᵀ(n)w(n) . (4)

A. VSSLMS algorithm

Kwong used the squared instantaneous a priori estimation error to update the step size as

µ(n+1) = αµ(n) + γe²(n) , (5)

where 0 < α < 1, γ > 0, and µ(n+1) is restricted to some pre-decided interval [µ_min, µ_max]. The filter coefficient vector update recursion is given by

w(n+1) = w(n) + µ(n)e(n)x(n) . (6)

B. MVSS algorithm

Aboulnasr utilized an estimate of the autocorrelation of e(n) at adjacent time samples to update the variable step size as

µ(n+1) = αµ(n) + γp²(n) , (7)

where

p(n) = βp(n-1) + (1-β)e(n)e(n-1) , (8)

and 0 < β < 1.

C. VS-APA and VS-NLMS

Shin, Sayed, and Song proposed a variable step-size affine projection algorithm (VS-APA), which employed an error vector, instead of the scalar error used in the VSSLMS [6] and MVSS, to adjust the variable step size. The coefficient vector update recursion is given by

w(n+1) = w(n) + µ(n)X(n)(Xᵀ(n)X(n) + δ₁I)⁻¹e(n) , (9)

where δ₁ is a small positive number, I is a unit matrix of size K × K, X(n) is an M × K input matrix defined as

X(n) = [x(n), x(n-1), ..., x(n-K+1)] , (10)

and

e(n) = [e(n), e(n-1), ..., e(n-K+1)]ᵀ . (11)

The variable step size µ(n) is obtained by

µ(n) = µ_max ‖p(n)‖² / (‖p(n)‖² + δ₂) , (12)

where δ₂ is a positive number proportional to K, µ_max < 2, and p(n) is an M × 1 vector recursively given by

p(n) = αp(n-1) + (1-α)X(n)(Xᵀ(n)X(n) + δ₁I)⁻¹e(n) . (13)

A variable step-size NLMS (VS-NLMS) was obtained as a special case of the VS-APA by choosing K = 1.

D. NPVSS algorithm

Benesty argued that many variable step-size algorithms may not work reliably because they need to set several parameters which are not easy to tune in practice, and proposed a nonparametric variable step-size NLMS algorithm (NPVSS). The filter coefficient vector update recursion is given as that of (6), and the variable step size is updated as

µ(n) = (1 / (δ₃ + ‖x(n)‖²)) · (1 - σ_v / (δ₄ + σ_e(n))) , (14)

where δ₃ and δ₄ are positive numbers, σ_v² is the power of the system noise, and the power of the error signal is estimated as

σ_e²(n) = λσ_e²(n-1) + (1-λ)e²(n) . (15)

E. GNGD algorithm

The GNGD belongs to the family of time-varying regularized VSS algorithms. The filter coefficient vector is updated as

w(n+1) = w(n) + µ_c e(n)x(n) / (‖x(n)‖² + ε(n)) , (16)

where µ_c is a fixed step size, and the regularization parameter ε(n) is recursively calculated as

ε(n+1) = ε(n) - ρµ_c e(n)e(n-1)xᵀ(n)x(n-1) / (‖x(n-1)‖² + ε(n-1))² , (17)

where ρ is an adaptation parameter that needs tuning, and the initial value ε(0) has to be set as well.

F. RR-NLMS algorithm

Choi's RR-NLMS algorithm is a modified version of the GNGD. The regularization parameter is updated as

ε(n+1) = max{ ε(n) - ρ sgn( e(n)e(n-1)xᵀ(n)x(n-1) ), ε_min } , (18)

where sgn(·) represents the sign function and ε_min is a parameter that needs tuning.

G. GSER algorithm

The GSER updates w(n) with the fixed regularization parameter of the ε-NLMS replaced by a scaled estimate of the power of the error signal, where the scale factor is a positive parameter that makes the filter more general, and the power of the error signal is estimated as in (15).
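To make the update recursions above concrete, the following Python sketch implements the plain ε-NLMS and Kwong's VSSLMS of (5)-(6) on a toy identification problem. It is an illustration only; the test signals, echo path, and parameter values are arbitrary choices for this sketch, not the simulation code of this paper:

```python
import numpy as np

def eps_nlms(x, d, M, mu=0.5, eps=1e-3):
    """Plain epsilon-NLMS: fixed normalized step size, fixed regularization."""
    w, e = np.zeros(M), np.zeros(len(x))
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]            # x(n) = [x(n), ..., x(n-M+1)]^T, eq. (3)
        e[n] = d[n] - w @ xn                     # a priori error, eq. (4)
        w = w + mu * e[n] * xn / (xn @ xn + eps)
    return w, e

def vsslms(x, d, M, alpha=0.99, gamma=5e-5, mu_min=1e-4, mu_max=0.01):
    """Kwong's VSSLMS, eqs. (5)-(6): step size driven by the squared error."""
    w, e = np.zeros(M), np.zeros(len(x))
    mu = mu_max
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ xn
        w = w + mu * e[n] * xn                   # eq. (6)
        mu = float(np.clip(alpha * mu + gamma * e[n]**2, mu_min, mu_max))  # eq. (5)
    return w, e

# Toy identification run (echo path and signals made up for illustration).
rng = np.random.default_rng(0)
M = 32
h = rng.standard_normal(M) / 8               # unknown "echo path"
x = rng.standard_normal(20000)               # white Gaussian input
d = np.convolve(x, h)[:len(x)] + 0.03 * rng.standard_normal(len(x))

for algo in (eps_nlms, vsslms):
    w, _ = algo(x, d, M)
    nsce = np.sum((h - w) ** 2) / np.sum(h ** 2)
    print(algo.__name__, f"NSCE = {10 * np.log10(nsce):.1f} dB")
```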
C. VS-APA and VS-NLMS

Shin, Sayed, and Song proposed a variable step-size affine projection algorithm (VS-APA), which employs an error vector, instead of the scalar error used in VSSLMS [6] and MVSS, to adjust the variable step size. The coefficient vector update recursion is given by

w(n+1) = w(n) + μ(n) X(n) (X^T(n) X(n) + δ₁ I)⁻¹ e(n),  (9)

where δ₁ is a small positive number, I is a unit matrix of size K × K, X(n) is an M × K input matrix defined as

X(n) = [x(n), x(n−1), …, x(n−K+1)],  (10)

and

e(n) = [e(n), e(n−1), …, e(n−K+1)]^T.  (11)

The variable step size μ(n) is obtained by

μ(n) = μ_max ‖p(n)‖² / (‖p(n)‖² + δ₂),  (12)

where δ₂ is a positive number proportional to K, μ_max < 2, and p(n) is an M × 1 vector recursively given by

p(n) = α p(n−1) + (1 − α) X(n) (X^T(n) X(n) + δ₁ I)⁻¹ e(n).  (13)

A variable step-size NLMS (VS-NLMS) is obtained as a special case of VS-APA by choosing K = 1.

D. NPVSS algorithm

Benesty argued that many variable step-size algorithms may not work reliably because they need to set several parameters that are not easy to tune in practice, and proposed a nonparametric variable step-size NLMS algorithm (NPVSS). The filter coefficient vector update recursion is given as in (6), and the variable step size is computed from two positive numbers δ₃ and δ₄, the power σ_v² of the system noise, and a recursive estimate of the power of the error signal.

E. GNGD algorithm

The GNGD belongs to the family of time-varying regularized VSS algorithms. Its variable step size is formed from a fixed step size c together with a gradient-adaptive regularization parameter ε(n); the recursion for ε(n) involves an adaptation parameter ρ that needs tuning, and the initial value ε(0) has to be set as well.

F. RR-NLMS algorithm

Choi's RR-NLMS algorithm is a modified version of GNGD. The regularization parameter is updated with a normalized gradient, where sgn(·) represents the sign function and ε_min is a parameter that needs tuning.

G. GSER algorithm

The GSER updates w(n) with a square-error-regularized NLMS recursion involving a positive parameter that makes the filter more general, together with a recursive estimate of the power of the error signal.

3. SIMULATION RESULTS

In this section, we present the comparison results of several experiments with the VSSLMS, MVSS, VS-APA, VS-NLMS, NPVSS, GNGD, RR-NLMS, and GSER algorithms. The adaptive filter is used to identify a 128-tap acoustic echo system h_o(n). We have used the normalized squared coefficient error (NSCE),

NSCE(n) = 10 log₁₀( ‖h_o − w(n)‖² / ‖h_o‖² ),

to evaluate the performance of the algorithms. We have run extensive simulations, and the results are reasonably consistent. In this section, we show some simulation results with the following parameter setup: α = β = 0.99, γ = 5×10⁻⁵, δ₁ = 0.1, δ₂ = 10⁻⁴, δ₃ = 20, δ₄ = 10⁻³, ε_min = 10⁻⁴, μ_min = 10⁻³, μ_max = 1, c = 1, ρ = 0.15, and K = 4. We assume that the power of the system noise, σ_v², is available for the NPVSS algorithm.

A. Time-Invariant System

The reference input, x(n), is either a zero-mean, unit-variance white Gaussian signal or a second-order AR process. The power of the echo system is about 1. An independent white Gaussian signal with zero mean and variance 0.001 is added to the system output. Figures 1 and 2 are the results for white Gaussian signal input and AR process input, respectively. The NSCE curves are ensemble averages over 20 independent runs. As can be seen, VS-NLMS has the worst performance. GNGD and RR-NLMS have similar convergence speed in the early period; GNGD exhibits very limited performance in the later phase, while RR-NLMS keeps adapting toward a lower NSCE. However, we notice that RR-NLMS is out-performed by the rest of the algorithms in this category. VSSLMS and MVSS have the same performance in the simulation. VS-APA, NPVSS and GSER are among the best group, with fast convergence speed and low NSCE.

B. Time-Varying System

Tracking a time-varying system is an important issue in adaptive signal processing. We compare RR-NLMS, VS-APA, NPVSS and GSER in a scenario where the acoustic echo system h_o(n) is changed to its negative value at sample 35,000. The additive zero-mean white Gaussian noise, v(n), has variance either 0.01 or 0.001. Figures 3 and 4 are the results for white Gaussian signal input with 30-dB signal-to-noise ratio (SNR) and 20-dB SNR, respectively.
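As a concrete, self-contained illustration of this evaluation setup, here is a sketch of an NSCE measurement around a plain regularized NLMS identification loop (the echo path, the signals, and the NLMS step size below are synthetic assumptions for illustration, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 128                                        # echo-path length, as in the paper
h_o = rng.standard_normal(M)
h_o /= np.linalg.norm(h_o)                     # unit-power echo system
x = rng.standard_normal(20_000)                # white Gaussian reference input
v = np.sqrt(0.001) * rng.standard_normal(x.size)   # system noise (30-dB SNR)

w = np.zeros(M)
mu, delta = 0.5, 1e-4
nsce_db = []
for n in range(M - 1, x.size):
    xn = x[n - M + 1:n + 1][::-1]              # regressor [x(n), ..., x(n-M+1)]
    d = h_o @ xn + v[n]
    e = d - w @ xn
    w = w + mu * e * xn / (xn @ xn + delta)    # regularized NLMS update
    nsce_db.append(10 * np.log10(np.sum((h_o - w) ** 2) / np.sum(h_o ** 2)))
print(nsce_db[-1])                             # steady-state NSCE in dB
```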
All algorithms have fast tracking performance. RR-NLMS has the worst NSCE. VS-APA achieves the lowest NSCE when the SNR is 30 dB; however, in the 20-dB case the NSCE of VS-APA is 5 dB worse than that of NPVSS and GSER. Notice that VS-APA exhibits a slow convergence rate. NPVSS has a slight NSCE advantage over GSER in the 20-dB SNR case; it should be noted that NPVSS assumes σ_v² is available in the simulation. Figures 5 and 6 are the results for AR process input with 30-dB SNR and 20-dB SNR, respectively. RR-NLMS has the worst NSCE and shows slower tracking behavior compared to the white Gaussian input case. VS-APA still has problems in the low-SNR situation: its NSCE is 10 dB worse than that of its competing algorithms. GSER has the fastest tracking and convergence speed in the 30-dB SNR case.

4. CONCLUSIONS

Many variable step-size NLMS algorithms have been proposed to achieve fast convergence rate, rapid tracking, and low misalignment in the past two decades. This paper summarized several promising algorithms and presented a performance comparison by means of extensive simulation. According to the simulation, Benesty's NPVSS and our GSER have the best performance in both time-invariant and time-varying systems.

REFERENCES

1. T. Aboulnasr and K. Mayyas, "A robust variable step-size LMS-type algorithm: analysis and simulations," IEEE Transactions on Signal Processing, Vol. 45, No. 3, pp. 631-639, March 1997.
2. M. T. Akhtar, M. Abe, and M. Kawamata, "A new variable step size LMS algorithm-based method for improved online secondary path modeling in active noise control systems," IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 2, pp. 720-726, March 2006.
3. J. Benesty et al., "A nonparametric VSS NLMS algorithm," IEEE Signal Processing Letters, Vol. 13, No. 10, pp. 581-584, Oct. 2006.
4. Y. S. Choi, H. C. Shin, and W. J. Song, "Robust regularization for normalized LMS algorithms," IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 53, No. 8, pp. 627-631, Aug. 2006.
5. Y. S. Choi, H. C. Shin, and W. J. Song, "Adaptive regularization matrix for affine projection algorithm," IEEE Transactions on Circuits and Systems II, Express Briefs, Vol. 54, No. 12, pp. 1087-1091, Dec. 2007.
6. R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," IEEE Transactions on Signal Processing, Vol. 40, pp. 1633-1642, July 1992.
7. J. Lee, H. C. Huang, and Y. N. Yang, "The generalized square-error-regularized LMS algorithm," Proceedings of WCECS 2008, pp. 157-160, Oct. 2008.
8. D. P. Mandic et al., "Collaborative adaptive learning using hybrid filters," Proceedings of 2007 IEEE ICASSP, pp. III-921-924, April 2007.
9. D. P. Mandic, "A generalized normalized gradient descent algorithm," IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 115-118, Feb. 2004.
10. H. C. Shin, A. H. Sayed, and W. J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Processing Letters, Vol. 11, No. 2, pp. 132-135, Feb. 2004.
11. J. M. Valin and I. B. Collings, "Interference-normalized least mean square algorithm," IEEE Signal Processing Letters, Vol. 14, No. 12, pp. 988-991, Dec. 2007.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9036243557929993, "perplexity": 2675.8119987524337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00428.warc.gz"}
https://jayryablon.wordpress.com/2013/02/
# Lab Notes for a Scientific Revolution (Physics)

## February 22, 2013

### Two New Papers: Grand Unified SU(8) Gauge Theory Based on Baryons which are Yang-Mills Magnetic Monopoles . . . and . . . Predicting the Neutron and Proton Masses Based on Baryons which are Yang-Mills Magnetic Monopoles and Koide Mass Triplets

I have not had the chance to make my readers aware of two recent papers. The first is at Grand Unified SU(8) Gauge Theory Based on Baryons which are Yang-Mills Magnetic Monopoles; it has been accepted for publication by the Journal of Modern Physics and will appear in their April 2013 "Special Issue on High Energy Physics." The second is at Predicting the Neutron and Proton Masses Based on Baryons which are Yang-Mills Magnetic Monopoles and Koide Mass Triplets and is presently under review.

The latter paper on the neutron and proton masses fulfills a goal that I have had for 42 years, which I have spoken about previously in the blog, of finding a way to predict the proton and neutron masses based on the masses of the fermions, specifically, the electron and the up and down quark (and, as you will see, the Fermi vev). Between this latter paper and my earlier paper at Predicting the Binding Energies of the 1s Nuclides with High Precision, Based on Baryons which are Yang-Mills Magnetic Monopoles, I have made six distinct, independent predictions with accuracy ranging from parts in 10,000 for the neutron plus proton mass sum, to an exact relationship for the proton minus neutron mass difference, parts per 100,000 for the 3He binding energy, parts per million for the 3H and 4He binding energies, and parts per ten million for the 2H binding energy (based on the proton minus neutron mass difference being made exact). I have also proposed in the binding energies paper a new approach to nuclear fusion, known as "resonant fusion," in which one bathes hydrogen in gamma radiation at certain specified frequencies that should catalyze the fusion process.

In addition, the neutron and proton mass paper appears to also provide a seventh prediction, for part of the determinant of the CKM generational mixing matrix. And the GUT paper establishes the theoretical foundation for exactly three fermion generations and the observed mixing patterns, answering Rabi's question "who ordered this?".

All of this, in turn, is based on my foundational paper Why Baryons Are Yang-Mills Magnetic Monopoles. Taken together, these four papers place nuclear physics on a new foundation, with empirical support from multiple independent data points. The odds against six independent parts-per-10^6 concurrences being mere coincidence are one in 10^36, and I now actually have about ten independent data points of very tight empirical support. If you want to start learning nuclear physics as it will be taught around the world in another decade, this is where you need to start.

Best to all,

Jay
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106796383857727, "perplexity": 1345.5142632911427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607314.32/warc/CC-MAIN-20200122161553-20200122190553-00526.warc.gz"}
http://paperity.org/p/77586818/loop-corrections-to-the-antibrane-potential
# Loop corrections to the antibrane potential

Journal of High Energy Physics, Jul 2016

Antibranes provide some of the most generic ways to uplift Anti-de Sitter flux compactifications to de Sitter, and there is a growing body of evidence that antibranes placed in long warped throats such as the Klebanov-Strassler warped deformed conifold solution have a brane-brane-repelling tachyon. This tachyon was first found in the regime of parameters in which the backreaction of the antibranes is large, and its existence was inferred from a highly nontrivial cancellation of certain terms in the inter-brane potential. We use a brane effective action approach, similar to that proposed by Michel, Mintun, Polchinski, Puhm and Saad in [29], to analyze antibranes in Klebanov-Strassler when their backreaction is small, and find a regime of parameters where all perturbative contributions to the action can be computed explicitly. We find that the cancellation found at strong coupling is also present in the weak-coupling regime, and we establish its existence to all loops. Our calculation indicates that the spectrum of the antibrane worldvolume theory is not gapped, and may generically have a tachyon. Hence uplifting mechanisms involving antibranes remain questionable even when backreaction is small.

PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP07%282016%29132.pdf

Iosif Bena, Johan Blåbäck, David Turton. Loop corrections to the antibrane potential, Journal of High Energy Physics, 2016, 132, DOI: 10.1007/JHEP07(2016)132
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8751664161682129, "perplexity": 1889.86100885665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893530.89/warc/CC-MAIN-20180124070239-20180124090239-00620.warc.gz"}
https://www.queryxchange.com/q/20_367742/jointly-sufficient-statistics-of-a-multi-parameter-exponential-family/
# Jointly sufficient statistics of a multi-parameter exponential family

by Xiaomi, Last Updated September 20, 2018 02:19 AM

Let $f_X$ be a joint density function that comes from an $s$-parameter exponential family with sufficient statistics $(T_1, T_2, \dots, T_s)$ so that the density $f_X$ can be expressed as $$f_{X|\theta}(x) = h(x) \exp \left(\sum_{i=1}^s T_i(x)\eta_i(\theta) - A(\theta) \right)$$ I have two questions:

1. Expressed in this form, is it correct to say the statistics $T_1,\dots,T_s$ are jointly sufficient, as opposed to independently sufficient?

2. Based on them being jointly sufficient, is it correct to say any unbiased estimator $\tau (X)$ such that $E[\tau(X)|T_i,T_j] = \theta_k$, for some $i,j,k$, must be UMVUE of $\theta_k \in \theta$?

I'm trying to understand the difference between being sufficient, and jointly sufficient
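For a standard illustration of what "jointly" means here (an added example; the choice of the Normal family is mine, not the asker's): for an i.i.d. sample $X_1,\dots,X_n$ from $N(\mu,\sigma^2)$, the pair $T_1 = \sum_i X_i$, $T_2 = \sum_i X_i^2$ is jointly sufficient for $(\mu,\sigma^2)$, since the factorization theorem applies to the pair taken together, while neither $T_1$ nor $T_2$ on its own is sufficient for the two-parameter family.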
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8056963682174683, "perplexity": 361.7940409156897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00314.warc.gz"}
https://proofwiki.org/wiki/Definition:Coset_Space
# Definition:Coset Space

## Definition

Let $G$ be a group, and let $H$ be a subgroup of $G$.

### Left Coset Space

The left coset space (of $G$ modulo $H$) is the quotient set of $G$ by left congruence modulo $H$, denoted $G / H^l$. It is the set of all the left cosets of $H$ in $G$.

### Right Coset Space

The right coset space (of $G$ modulo $H$) is the quotient set of $G$ by right congruence modulo $H$, denoted $G / H^r$. It is the set of all the right cosets of $H$ in $G$.

### Note

If we are (as is usual) concerned at a particular time with only the left or the right coset space, then the superscript is usually dropped and the notation $G / H$ is used for both the left and right coset space. If, in addition, $H$ is a normal subgroup of $G$, then $G / H^l = G / H^r$ and the notation $G / H$ is then unambiguous anyway.

## Also known as

Some sources call this the left quotient set and right quotient set respectively.

Some sources use:

- $G \mathrel \backslash H$ for $G / H^l$
- $G / H$ for $G / H^r$

This notation is rarely encountered, and can be a source of confusion.
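## Example

To make the definition concrete: take $G = (\mathbb Z, +)$ and $H = 2 \mathbb Z$, the subgroup of even integers. There are exactly two cosets, $2 \mathbb Z$ and $1 + 2 \mathbb Z$, so the coset space is $G / H = \{2 \mathbb Z, 1 + 2 \mathbb Z\}$. Since $\mathbb Z$ is abelian, $H$ is normal, the left and right coset spaces coincide, and the notation $G / H$ is unambiguous.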
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9949989914894104, "perplexity": 231.92025854318305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540517156.63/warc/CC-MAIN-20191209013904-20191209041904-00544.warc.gz"}
http://mathhelpforum.com/calculus/127603-integration-parts-print.html
# Integration by parts

• February 7th 2010, 07:17 AM
confusedgirl
Integration by parts
Integrate (x^3 dx) / square root of (1-x^2)
i have to integrate it by parts. i already tried it but my answer was 0=0
is that possible?

• February 7th 2010, 07:22 AM
Henryt999
how?
Quote: Originally Posted by confusedgirl — Integrate (x^3 dx) / square root of (1-x^2). i have to integrate it by parts. i already tried it but my answer was 0=0. is that possible?
Curious how did you end up with 0?

• February 7th 2010, 07:27 AM
confusedgirl
let u = x^3, du = 3x^2 dx
dv = dx / sq rt of (1-x^2), v = arcsin x
= x^3 arcsin x - integral of 3x^2 arcsin x dx
u = arcsin x, du = dx / sq rt of (1-x^2)
dv = 3x^2 dx, v = x^3
= x^3 arcsin x - x^3 arcsin x + integral of x^3 dx / sq rt (1-x^2)
0=0

• February 7th 2010, 08:29 AM
Pulock2009
First find the integral of sin^-1(x), then go on integrating, keeping x^3 as the part to differentiate every time.

• February 7th 2010, 08:34 AM
Soroban
Hello, confusedgirl!
Watch very carefully . . .

Quote: $I \;=\;\int\frac{x^3\,dx}{\sqrt{1-x^2}}$

We have: $I \;=\;\int(x^2)\left[x(1-x^2)^{-\frac{1}{2}}dx\right]$

$\text{By parts: }\;\begin{array}{ccccccc}u &=& x^2 && dv &=& x(1-x^2)^{-\frac{1}{2}}dx \\ du &=& 2x\,dx && v &=& -(1-x^2)^{\frac{1}{2}} \end{array}$

Then: $I \;=\;-x^2(1-x^2)^{\frac{1}{2}} + \int 2x(1-x^2)^{\frac{1}{2}}dx$

For the second integral, use "normal" substitution.

We have: $J \;=\;\int (1-x^2)^{\frac{1}{2}}(2x\,dx)$

Let: $u \,=\,1-x^2\quad\Rightarrow\quad du \,=\,-2x\,dx \quad\Rightarrow\quad 2x\,dx \,=\,-du$

Substitute: $J \;=\;\int u^{\frac{1}{2}}(-du) \;=\;-\int u^{\frac{1}{2}}\,du \:=\:-\tfrac{2}{3}u^{\frac{3}{2}} + C \;=\;-\tfrac{2}{3}(1-x^2)^{\frac{3}{2}} + C$

Therefore: $I \;=\;-x^2(1-x^2)^{\frac{1}{2}} - \tfrac{2}{3}(1-x^2)^{\frac{3}{2}} + C$

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

The answer is often simplified beyond all recognition. Here's how they do it . . .

We have: $-x^2(1-x^2)^{\frac{1}{2}} - \tfrac{2}{3}(1-x^2)^{\frac{3}{2}}$

Factor: $-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}\bigg[3x^2 + 2(1-x^2)\bigg]$ . . . . hope you followed that!

$=\;-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}\bigg[3x^2 + 2 - 2x^2\bigg]$

$=\;-\tfrac{1}{3}(1-x^2)^{\frac{1}{2}}(x^2+2)$ . . . . ta-DAA!

• February 7th 2010, 10:15 AM
hi confusedgirl,

$\int{\frac{x^3}{\sqrt{1-x^2}}}dx$

$y=1-x^2\ \Rightarrow\ x^2=1-y$

$\frac{dy}{dx}=-2x\ \Rightarrow\ dy=-2xdx\ \Rightarrow\ x^3dx=-\frac{1}{2}x^2dy=\frac{1}{2}(y-1)dy$

$\frac{1}{2}\int{\frac{(y-1)}{\sqrt{y}}}dy=\frac{1}{2}\int{\left(\sqrt{y}-\frac{1}{\sqrt{y}}\right)}dy=\frac{1}{2}\int{\left(y^{\frac{1}{2}}-y^{-\frac{1}{2}}\right)}dy$

This can then be integrated using the power rule. To integrate by parts...

$\frac{1}{2}\int{\frac{(y-1)}{\sqrt{y}}}dy$

$u=y-1,\ du=dy$

$dv=y^{-\frac{1}{2}}\,dy\ \Rightarrow\ v=\int{y^{-\frac{1}{2}}}dy=2\sqrt{y}$

$\frac{1}{2}\int{u}\,dv=\frac{1}{2}\left(uv-\int{v}\,du\right)=\frac{1}{2}\left((y-1)2\sqrt{y}-\int{2\sqrt{y}}\,dy\right)$

$\frac{1}{2}\left(-x^2(2)\sqrt{1-x^2}-\frac{4}{3}\left(1-x^2\right)\sqrt{1-x^2}\right)+C$

$=-x^2\sqrt{1-x^2}-\frac{2}{3}\left(1-x^2\right)\sqrt{1-x^2}+C=\sqrt{1-x^2}\left(-x^2-\frac{2}{3}+\frac{2}{3}x^2\right)+C$

$=-\frac{1}{3}\sqrt{1-x^2}\left(x^2+2\right)+C$

• February 8th 2010, 03:17 AM
confusedgirl
thanks :)
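As a quick machine check of the closed form derived above (a SymPy sketch, not part of the original thread; it verifies by differentiation, so it does not depend on which antiderivative SymPy itself would produce):

```python
import sympy as sp

x = sp.symbols('x')
integrand = x**3 / sp.sqrt(1 - x**2)
closed_form = -sp.Rational(1, 3) * sp.sqrt(1 - x**2) * (x**2 + 2)

# Differentiating the proposed antiderivative should give back the integrand.
print(sp.simplify(sp.diff(closed_form, x) - integrand))   # expected: 0
```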
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9866877794265747, "perplexity": 3990.418118390348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453553.36/warc/CC-MAIN-20151124205413-00210-ip-10-71-132-137.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3513066/matrix-exponential-is-continuous
# Matrix exponential is continuous

I want to prove that the function $\exp\colon M_n(\mathbb{C})\to \mathrm{GL}_n(\mathbb{C})$ is continuous under the standard matrix norm $\lVert A\rVert=\sup_{\lVert x\rVert=1}\lVert Ax\rVert.$ Wikipedia says that it follows from the inequality $\lVert e^{X+Y}-e^X\rVert\leqslant \lVert Y\rVert e^{\lVert X\rVert}e^{\lVert Y\rVert},$ and I understand why, but I don't quite follow how to get this inequality. Could somebody explain that?

Let $$p:M_n(\Bbb C )^k\to M_n(\Bbb C ),\quad (X_1,X_2,\ldots ,X_k)\mapsto X_1\cdot X_2\cdots X_k\tag1$$ be the ordered product of a vector of matrices, and let $$c_X:\{X,Y\}^k\to \{0,\ldots ,k\}\tag2$$ be the function that counts the number of coordinates of a vector in $\{X,Y\}^k$ that are equal to $X$. Then we have that $$(X+Y)^k=\sum_{v\in \{X,Y\}^k}p(v)=\sum_{j=0}^k\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)\tag3$$ And if $X$ and $Y$ commute then $$\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)=\binom{k}{j}X^jY^{k-j}\tag4$$ Then from $\mathrm{(3)}$ we have that \begin{align*} \|(X+Y)^k-X^k\|&= \left\|\sum_{j=0}^k\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)-X^k\right\|\\ &=\left\|\sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}p(v)\right\|\\ &\leqslant \sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}\|p(v)\|\\ &\leqslant \sum_{j=0}^{k-1}\sum_{\substack{v\in\{X,Y\}^k\\c_X(v)=j}}\|X\|^j\|Y\|^{k-j}\\ &=\sum_{j=0}^{k-1}\binom{k}{j}\|X\|^j\|Y\|^{k-j}\\ &=\sum_{j=0}^{k-1}\binom{k}{j}\|X\|^j\|Y\|^{k-j}+\|X\|^k-\|X\|^k\\ &=(\|X\|+\|Y\|)^k-\|X\|^k\tag5 \end{align*} where in the second inequality we used implicitly the inequality $\|AB\|\leqslant \|A\|\|B\|$ for any square matrices $A$ and $B$. Then you have that \begin{align*} \|e^{X+Y}-e^X\|&=\left\|\sum_{k\geqslant 0}\frac{(X+Y)^k-X^k}{k!}\right\|\\ &\leqslant \sum_{k\geqslant 0}\frac{\|(X+Y)^k-X^k\|}{k!}\\ &\leqslant \sum_{k\geqslant 0}\frac{(\|X\|+\|Y\|)^k-\|X\|^k}{k!}\\ &=e^{\|X\|+\|Y\|}-e^{\|X\|}\\ &=e^{\|X\|}(e^{\|Y\|}-1)\tag6 \end{align*} And $$e^c-1\leqslant ce^c\iff \sum_{k\geqslant 0}\frac{c^{k+1}}{(k+1)!}\leqslant \sum_{k\geqslant 0}\frac{c^{k+1}}{k!}\tag7$$ clearly holds for $c\geqslant 0$. Then $\mathrm{(6)}$ and $\mathrm{(7)}$ prove your inequality.
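For readers who want a quick numerical sanity check of the final inequality, here is a sketch (the random test matrices and the use of the spectral norm are my choices and are not part of the proof):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
for _ in range(100):
    X = rng.standard_normal((4, 4))
    Y = 0.1 * rng.standard_normal((4, 4))
    lhs = np.linalg.norm(expm(X + Y) - expm(X), 2)   # operator (spectral) norm
    nX, nY = np.linalg.norm(X, 2), np.linalg.norm(Y, 2)
    assert lhs <= nY * np.exp(nX) * np.exp(nY) + 1e-9
print("inequality held on all trials")
```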
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959403872489929, "perplexity": 232.2506234979935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410284.51/warc/CC-MAIN-20200530165307-20200530195307-00346.warc.gz"}
http://math.stackexchange.com/questions/40533/what-is-the-relation
# What is the relation?

The function $x^2 = y$ bounds two areas $A$ and $B$:

$A$ is further bounded by the line $x= a$, $a\gt 0$. $A$ rotates around the $x$-axis, which gives Volume $A = Va$.

$B$ is bounded by the line $y=b$, $b\gt 0$. $B$ rotates around the $y$-axis, which gives Volume $B = Vb$.

What is the relation between $a$ and $b$ when $Vb = Va$?

I have come to the solution that:

$Vb = (\pi b^2 )/ 2$

$Va = (\pi a^5) / 5$

so the relation between them is: $2.5b^2 = a^5$

Is that the final solution or is it more?

-

If your calculations are correct this is what you should have found.

-

It's homework, right? – Tau May 21 '11 at 20:10
I was thinking maybe a simplification of my answer? – aka May 21 '11 at 20:29
No i've got my final exam on monday, so was thinking of training on old exams and other diverse stuff. – aka May 21 '11 at 20:31
Unless stated in the question that you have to produce some special answer, your answer should be accepted. Good luck on your test. – Tau May 21 '11 at 22:01
No it just says, what are the relations between a and b. when Va=Vb! Thank you Tau! – aka May 22 '11 at 14:41

Without a drawing or a more detailed description, I cannot be certain. But under the reasonable interpretation of what you wrote, your conclusion is absolutely correct. Maybe, since $a$ and $b$ are positive, it might be slightly better to say that $$b=a^2\sqrt{\frac{2a}{5}}$$

-

Who did you get your solution? – aka May 21 '11 at 20:29
@aka: I am not sure of the meaning of your question. I drew the parabola, decided what the regions $A$ and $B$ were supposed to be, calculated the volumes of the solids, in each case by slicing, so for $A$ I integrated with respect to $x$ and for $B$ with respect to $y$. The integrations were trivial, as you discovered. – André Nicolas May 21 '11 at 20:47
The solution given above by user6312 is a simplification of your solution, in terms of expressing the relation b as a function of a: multiply both sides of your solution by 2/5 (the reciprocal of 2.5 = 5/2), then take the square root of both sides. – amWhy May 21 '11 at 20:50
I'm sorry i mean, how* ! – aka May 22 '11 at 14:31
@aka: I tried to describe it briefly in a comment above. I drew the diagram, and then calculated as explained in the very detailed explanation by @Arturo Magidin. I quickly obtained the expressions you wrote down, so took it for granted that you had done it the same way, so that a detailed working out of the details was not needed by you. Wrote down an alternate formula expressing $b$ in terms of $a$, in case you met this sort of question in a multiple choice test. – André Nicolas May 22 '11 at 15:05

If we take the region bounded by the $y$-axis, the $x$-axis, the line $x=a$ (with $a\gt 0$), and the parabola $y=x^2$, and rotate it about the $x$-axis, the volume of the resulting solid of revolution is easily computed (using, for example, discs perpendicular to the $x$-axis) to be $$\text{Volume A} = \int_0^a \pi(x^2)^2\,dx = \frac{\pi}{5}x^5\Bigm|_0^a = \frac{\pi a^5}{5}.$$ If the region bounded by the $y$-axis, the $x$ axis, the line $y=b$ (with $b\gt 0$), and the parabola $y=x^2$ is revolved around the $y$-axis, then using discs perpendicular to the $y$-axis we obtain the volume to be: $$\text{Volume B} = \int_0^b \pi (\sqrt{y})^2\,dy = \frac{\pi}{2}y^2\Bigm|_0^b = \frac{\pi b^2}{2}.$$ So your computations are correct there.
If the two volumes are the same, then we must have $$\text{Volume A} = \frac{\pi a^5}{5} = \frac{\pi b^2}{2} = \text{Volume B};$$ there are many ways to express this: you can solve for one of $a$ or $b$ in terms of the other: $$b = \sqrt{\frac{2a^5}{5}} = a^{5/2}\sqrt{\frac{2}{5}},$$ or, if you want to express $a$ in terms of $b$ instead, $$a = \sqrt[5]{\frac{5}{2}b^2} = b^{2/5}\sqrt[5]{\frac{5}{2}}.$$ Or you can simply express this relation by saying, say, $$2a^5 = 5b^2.$$ Note. If $a\lt 0$, then the volume of $A$ can be computed the same way, but the integral would go from $a$ to $0$, so that the volume would be $-\frac{\pi a^5}{5}$; to account for both possibilities, both $a\gt 0$ and $a\lt 0$, you can simply write that the volume is $\frac{\pi|a|^5}{5}$. For solid $B$, however, it makes no sense to talk about $b\lt 0$, because then we don't have a finite area "enclosed" by the curves in question.

-

I did exactly the same calculations, was just a bit confused by the simplification you made in the last steps... but i think it is as you say:
2a^5 = 5b^2, so 2a^5/5 = b^2; then take the square root and we get sqroot((2a^5)/5) = b, i.e. b = a^2 * sqroot(2a/5).
If we want a: a^5 = 5b^2/2, so a = 5root(5b^2/2) = b^(2/5) * 5root(5/2). – aka May 22 '11 at 14:39
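A short machine check of the computation (a SymPy sketch; the variable names are mine):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
Va = sp.integrate(sp.pi * (x**2)**2, (x, 0, a))        # discs about the x-axis
Vb = sp.integrate(sp.pi * (sp.sqrt(y))**2, (y, 0, b))  # discs about the y-axis
print(Va, Vb)                       # pi*a**5/5 and pi*b**2/2
print(sp.solve(sp.Eq(Va, Vb), b))   # equivalent to b = a**2*sqrt(2*a/5)
```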
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9330353736877441, "perplexity": 229.7314450797586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051036499.80/warc/CC-MAIN-20160524005036-00070-ip-10-185-217-139.ec2.internal.warc.gz"}
https://study.com/academy/answer/a-guitarist-is-tuning-a-guitar-she-hears-a-low-beat-every-4-0-seconds-what-is-the-difference-between-the-correct-frequency-and-measured-frequency-of-the-guitar-string.html
# A guitarist is tuning a guitar. She hears a low beat every 4.0 seconds. What is the difference...

## Question:

A guitarist is tuning a guitar. She hears a low beat every 4.0 seconds. What is the difference between the correct frequency and measured frequency of the guitar string?

## The Beat Frequency

If two wave sources with slightly differing frequencies $\nu_1$ and $\nu_2$ generate waves at the same time and these waves superpose, then an interference effect in time will occur. The intensity is found to oscillate in time with a frequency $\nu$ called the beat frequency. It is given by

$$\nu = \pm (\nu_1 - \nu_2). \qquad (1)$$

This can be utilized for tuning musical instruments with the help of a known reference frequency. By sounding the two frequencies together and adjusting the instrument until the beats disappear, the correct frequency is attained.

The guitarist hears one beat every 4 seconds. Therefore the beat frequency is $\frac{1}{4}\ \text{Hz}$. Since the beat frequency is the difference between the two different frequencies that are sounded together, it follows that the correct frequency and the measured frequency differ by 0.25 Hz.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643551707267761, "perplexity": 2007.4090563112165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00511.warc.gz"}
https://physicshelpforum.com/threads/intensity-of-a-wave-in-electromagnetism-compared-to-quantum-mechanical-waves.12353/
# Intensity of a wave in electromagnetism compared to quantum mechanical waves

#### fergal (Dec 2016)

Hi, why is it that in electromagnetism we can compute the intensity of a wave by taking the square of the amplitude but we can't do the same for quantum mechanical waves? My best guess was that it's because of the complex numbers in quantum but I'm really unsure. Any help?? Thanks

#### HallsofIvy (Aug 2010)

Perhaps you have these backwards. The amplitude of an electromagnetic wave (or any wave) is the intensity - there is no need to square. The difference with "quantum mechanical waves" is that the probability a particle exists at a given point is the square of the amplitude at that point.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9726634621620178, "perplexity": 527.8656197263098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810617.95/warc/CC-MAIN-20200408041431-20200408071931-00462.warc.gz"}
https://www.jiskha.com/questions/1477473/f-x-varies-inversely-with-x-and-f-x-10-when-x-20-what-is-the-inverse-variation
# Algebra

f(x) varies inversely with x and f(x) = –10 when x = 20. What is the inverse variation equation?

1. well, f(x) = k/x
so, plug in your numbers to find k.

## Similar Questions

1. ### variation
Can you please check my answers? Thanxs! Write an equation that expresses the relationship. Use k as the constant of variation. 20. f varies jointly as b and the square of c. -I got: f=kbc^2 22. r varies jointly as the square of s
2. ### MATH
if p varies inversely as the square of q and p=8 when q=4, find q when p=32
3. ### Precalculus
m varies directly as the cube of n and inversely as g
4. ### Mathematics
M varies directly as n and inversely as p. if M=3, when n=2, and p=1, find M in terms of n and p
1. ### math
p varies directly as the square of q and inversely as r. when p=36, q=3 and r=4, calculate q when p=200 and r=2
2. ### Maths
X varies directly as the product of u and v and inversely as their sum. If x=3 when u=3 and v=1, what is the value of x if u=3 and v=3
3. ### math
If y varies inversely as x, and y = 12 when x = 6, what is K, the variation constant? A. 1⁄3 C. 72 B. 2 D. 144
4. ### math
If p varies inversely with q, and p=2 when q=1, find the equation that relates p and q.
1. ### Inverse Variation
Help me please.. Explain also :( 1. E is inversely proportional to Z and Z = 4 when E = 6. 2. P varies inversely as Q and Q = 2/3 when P = 1/2. 3. R is inversely proportional to the square of I and I = 25 when R = 100. 4. F varies
2. ### math
Suppose f varies inversely with g and that f=20 when g=4. What is the value of f when g=10?
3. ### Math
A Number P Varies Directly As q And Partly Inversely As Q ^ 2, Given That P=11 When q=2 and p=25.16 when q=5. calculate the value of p when q=7
4. ### mathematics
a varies directly as the cube of b and inversely as the product of c and d
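Carrying the hint at the top of the page through: since f(x) = k/x, substituting f(x) = –10 at x = 20 gives k = f(x)·x = (–10)(20) = –200, so the inverse variation equation is f(x) = –200/x.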
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.926445484161377, "perplexity": 2234.391881226718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154277.15/warc/CC-MAIN-20210801221329-20210802011329-00081.warc.gz"}
http://mathhelpforum.com/algebra/98309-expanding-expressions-print.html
# Expanding expressions

• August 16th 2009, 11:41 PM
fvaras89
Expanding expressions
Im not too sure how to expand these two expressions
1/ (x+2y)(x-2y)^2
2/ (1-x)(1+x+x^2+x^3)
The small numbers r squared and cubed.. not sure how to use it here (Worried)
Would be good if you can put steps also so i can understand where it comes from..(Happy) Thanks

• August 17th 2009, 12:01 AM
songoku
Hi fvaras89
Example
To type square : x^2
To type cube : x^3
or you can learn it here : http://www.mathhelpforum.com/math-he...-tutorial.html
1. $(x+1)^2 = (x+1) * (x+1) = x^2 + x + x + 1 = x^2 + 2x + 1$
2. $(x+1)\cdot(x^2+3x+5) = x\cdot x^2 + x\cdot 3x + x\cdot 5 + 1\cdot x^2 + 1\cdot 3x + 1\cdot 5 = x^3 + 4x^2 + 8x + 5$

• August 17th 2009, 12:08 AM
fvaras89
Thanks, but im really still not sure what to do.. any more hints? (Crying)

• August 17th 2009, 12:20 AM
songoku
Hi fvaras89
Can you expand (x+5)^2 ? :)

• August 17th 2009, 12:59 AM
fvaras89
Is that how you would expand it? x^2 + 25x + 25

• August 17th 2009, 01:02 AM
songoku
Hi fvaras89
Almost right. How can you get 25x ? :)

• August 17th 2009, 01:04 AM
fvaras89
oh woops hahah.. so would it be x^2 + 25?

• August 17th 2009, 01:17 AM
songoku
Hi fvaras89
Still no. :)
$(x+a) \cdot (y+b) = x \cdot y + x \cdot b + a \cdot y + a \cdot b$

• August 18th 2009, 12:10 AM
jgv115
Do you have a text book that you are working from? I doubt your teacher would give you a task like this if it's not clear.
I'll use songoku's question: $(x+5)^2$
Since the power is 2, it's multiplying itself. For example $2^2$ = $2 * 2$. Now we do the same for this: $(x+5)(x+5)$
Now we multiply the first number/letter of the first bracket by the first number/letter of the second bracket: $x * x = x^2$
Now we multiply the first letter of the first bracket by the second number/letter: $x * 5 = 5x$
Now we multiply the second letter/number by the first number of the second bracket: $5 * x = 5x$
Now the last step... yep that's right, the second number/letter of the first bracket by the second number/letter of the second bracket: $5 * 5 = 25$
Do you get the pattern? Now put them together: $x^2 + 5x + 5x + 25$
Simplify: $x^2 + 10x + 25$
Now that you've learnt how to do it properly, I'll tell you the shortcut for when you have 2 brackets multiplying each other and they are IDENTICAL. They have to be identical.
The formula $a^2 + 2ab + b^2$ can be used. For example: $(x+5)^2$, where x will be "a" and 5 will be "b". Now just substitute:
$a^2 + 2ab + b^2$
$x^2 + 2*x*5 + 5^2$
$x^2 + 10x + 25$
Was that the same answer as before? It sure was. Do you understand better now?

• May 5th 2010, 08:37 AM
NatalieB
help i dont understand
6(x+4)
7(3x-9)
x(x+7)
as soon as thanks

• May 5th 2010, 10:30 AM
Wilmer
WHAT are you asked to do?

• May 5th 2010, 10:39 AM
NatalieB
Sorry i am new to this... i am asked to expand these algebraic expressions
5(x+5)
9(8x-9)
x(x+8)
and i really dont understand them (Wondering) so please could you help thanks Natalie B (Rofl)

• May 5th 2010, 12:31 PM
Wilmer
Quote: Originally Posted by NatalieB — i am asked to expand these algebraic expressions 5(x+5) 9(8x-9) x(x+8)
Means multiply what's inside brackets by what's outside.
Take 5(x+5)
5 times x = 5x
5 times 5 = 25
So answer is 5x + 25
Let's see you do the other 2: GO Natalie GO (Rock)

• May 6th 2010, 08:10 AM
NatalieB
Okayy so is this correct ...?
9(8x-9)
9 times 8x = 72x
9 times 9 = 81
so its 72x-81
P.S how do you add the quote in at the top of the page ???
(Rofl)(Thinking)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8184224367141724, "perplexity": 4324.304010906481}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054976/warc/CC-MAIN-20131204131734-00095-ip-10-33-133-15.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/275611/rotating-co-ordinates-in-3d?answertab=votes
# Rotating co-ordinates in 3D Suppose I have 3 axes, $x$, $y$, and $z$ such that $x$ is horizontal, $y$ is vertical, and $z$ goes in/out of the computer screen where $+$ve values stick out and $-$ve values are sunken in. Suppose I have a spherical co-ordinate system where $r$ is the radius from the origin $(x, y, z) = (0, 0, 0)$, $\theta$ is the rotation about the $x$ axis and $\phi$ is the rotation about the $y$ axis, such that $(r,\theta,\phi)=(1,0,0) \mapsto (x, y, z) = (0, 0, 1)$. Assume that I rotate about $y$ first, then about $x$. I have a number of points, each of which has an associated $r, \theta, \phi$, and a world $\Theta,\Phi$. Imagine the points exist in space and we're looking at them from a camera, and depending on how the world $\Theta,\Phi$ changes, the camera is looking at the origin from a different point. Basically, I'm trying to render some objects on a computer in a roughly spherical arrangement, and allow the user to rotate the view. Sort of like those animated, interactive keyword clouds you sometimes see on the Internet where you move the mouse and the keywords move around in a spherical arrangement. First I calculate each point's ($p_i$) original $(x_i, y_i, z_i)$ like so: \begin{align} x_i &= r_i * sin(\phi_i) * cos(\theta_i) \\ y_i &= r_i * sin(\theta_i) \\ z_i &= r_i * cos(\phi_i) * cos(\theta_i) \end{align} Then, I rotate the original $(x_i,y_i,z_i)$ by the world $\Theta$ first about the $y$ axis: \begin{align} x_i &:= cos(\Phi) * x_i - sin(\Phi) * z_i \\ y_i &:= y_i \\ z_i &:= sin(\Phi) * x_i + cos(\Phi) * z_i \end{align} Then about the $x$ axis: \begin{align} x_i &:= x_i \\ y_i &:= cos(\Theta) * y_i + sin(\Theta) * z_i \\ z_i &:= cos(\Theta) * z_i - sin(\Theta) * y_i \end{align} I then render each point $p_i$ at co-ordinates $(x_i, y_i, z_i)$. The problem is, as long as I only apply one of the two world rotations - either $\Theta$ or $\Phi$, everything is rendered correctly. As soon as I apply both rotations together, as the interactive image is displayed with $\Theta$ and $\Phi$ changing, the objects get distorted, as though they're being sheared into a plane, and then back to their original spherical arrangement again. I'm not entirely sure what I'm doing wrong. - your $r,\theta,\phi$ seem to be different from the standard math usage. Perhaps if you link to or add a figure it could help. – Maesumi Jan 11 '13 at 4:03 @Maesumi $x$ is left/right, $y$ is up/down, $z$ is in/out. $\theta$ is rotation about the $x$ axis, and $\phi$ is rotation about the $y$ axis. $y$-rotation is applied first, then $x$-rotation. I don't have a figure illustrating this, but it shouldn't be too complicated? – user1002358 Jan 11 '13 at 4:20 I am not sure if this is a typo but on your second set of equations you have $x_i$ in terms of $x_i$ and $z_i$, while $z_i$ is in terms of $x_i$ and $y_i$ instead of $z_i$. – Maesumi Jan 11 '13 at 4:45 @Maesumi Thanks! You're right - but it was just a typo in my question. My code had the correct equations. – user1002358 Jan 11 '13 at 4:58 That is what I thought. Now in your second paragraph you say $\theta$ is rotation angle about $x$ axis. This is very confusing to me. So let's make sure we are talking about the same thing. Your first set of equations look like spherical coordinate formulas. Except for your formula for $y_i$. Are you using spherical coordinates? e.g. as in here. If so let me know how you relabel the first picture there to get yours. – Maesumi Jan 11 '13 at 5:25
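One implementation detail worth flagging about update rules written as in-place assignments (this is an added observation, not something stated in the thread): if the three assignments of each rotation are executed sequentially, the $z_i$ line reads the already-overwritten $x_i$ (and the $x$-axis rotation similarly reuses the new $y_i$), which produces exactly the kind of shearing distortion described. A sketch that computes each rotation from the pre-rotation values, using the same sign conventions as the question's equations (function and variable names are mine):

```python
import math

def rotate_y(x, y, z, phi):
    """Rotate (x, y, z) about the y-axis by phi, using the pre-rotation values."""
    cos_p, sin_p = math.cos(phi), math.sin(phi)
    return (cos_p * x - sin_p * z, y, sin_p * x + cos_p * z)

def rotate_x(x, y, z, theta):
    """Rotate (x, y, z) about the x-axis by theta, using the pre-rotation values."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (x, cos_t * y + sin_t * z, cos_t * z - sin_t * y)

def world_rotate(x, y, z, Theta, Phi):
    """World rotation: about y first, then about x, as in the question."""
    x, y, z = rotate_y(x, y, z, Phi)
    return rotate_x(x, y, z, Theta)
```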
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9888754487037659, "perplexity": 504.54096042159483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459875.44/warc/CC-MAIN-20151124205419-00138-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/right-around-5th
# Right Around

In this estimation worksheet, 5th graders study the possible answers and write a dividend that makes each estimated quotient true. Students complete 15 fill-in-the-blank questions, choosing their answers from the box of dividends provided at the bottom of the page.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368930220603943, "perplexity": 1591.6666322759072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685850.32/warc/CC-MAIN-20170919145852-20170919165852-00643.warc.gz"}
https://math.stackexchange.com/questions/71000/certain-permutations-of-the-set-of-all-pythagorean-triples
# Certain permutations of the set of all Pythagorean triples The fact that the set of all primitive Pythagorean triples naturally has the structure of a ternary rooted tree may have first been published in 1970: http://www.jstor.org/stable/3613860 I learned of it from Joe Roberts' book Elementary number theory: A problem oriented approach, published in 1977 by MIT Press. In 2005 it was published again by Robert C. Alperin: http://www.math.sjsu.edu/~alperin/pt.pdf He seems to say in a footnote on the first page that he didn't know that someone had discovered this before him until a referee pointed it out. That surprised me. Maybe because of Roberts' book, I had thought this was known to all who care about such things. If we identify a Pythagorean triple with a rational point (or should I say a set of eight points?) on the circle of unit radius centered at $0$ in the complex plane, then we can say the set of all nodes in that tree is permuted by any linear fractional transformation that leaves the circle invariant. The function $f(z) = (az+b)/(bz+a)$, where $a$ and $b$ are real, leaves the circle invariant and fixes $1$ and $-1$. There are also LFTs that leave the circle invariant and don't fix those two points. Are there any interesting relationships between the geometry of the LFT's action on the circle and that of the permutation of the nodes in the tree? (Would it be too far-fetched if this reminds me of the recent discovery by Ruedi Suter of rotational symmetries in some well-behaved subsets of Young's lattice?) • As beautifully written (literally) as the book by Joe Roberts is, the fact that something is mentioned there doesn't mean all who care about number theory are supposed to know it. It's not the universal bible of number theory. – KCd Oct 9 '11 at 17:54 • That the primitive Pythagorean triples are generated from (3,4,5) by the action of 3 integral matrices was found before 1970. It is in the paper "Pytagoreiska triangular" by B. Berggren, which appeared in Tidskrift för elementär matematik, fysik och kemi 17 (1934), 129--139. But I haven't been able to see a copy of this article directly. – KCd Oct 9 '11 at 18:07 • @KCd: Of course, no one would think that because it's in that book, it's universally known. Nonetheless, having read it in that book (without remembering where one read it) might be the cause of an impression that it's universally known. (The cause of an impression, as opposed to information from which the impression is inferred.) – Michael Hardy Oct 9 '11 at 21:57 • It appears that maybe the fact that each Pythagorean triple shows up eight times on the circle might mean the answer is not as pleasant as I had hoped. However, I've found that by applying inverse matrices, you can find each Pythagorean triple occurring many times in the tree as well. – Michael Hardy Oct 10 '11 at 19:35 • I have created a new Wikipedia article titled Tree of primitive Pythagorean triples. – Michael Hardy Oct 12 '11 at 15:28
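To experiment with the tree structure concretely, here is a small generator (a sketch added for illustration; the three matrices below are the classical ones from the Berggren/Hall construction that the cited papers describe, and the variable names are mine):

```python
import numpy as np

# The three matrices that send a primitive triple (a, b, c) to its three children.
A = np.array([[ 1, -2, 2], [ 2, -1, 2], [ 2, -2, 3]])
B = np.array([[ 1,  2, 2], [ 2,  1, 2], [ 2,  2, 3]])
C = np.array([[-1,  2, 2], [-2,  1, 2], [-2,  2, 3]])

def children(triple):
    t = np.array(triple)
    return [tuple(int(v) for v in M @ t) for M in (A, B, C)]

print(children((3, 4, 5)))
# [(5, 12, 13), (21, 20, 29), (15, 8, 17)]
```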
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8436899185180664, "perplexity": 374.4701141105651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517639.17/warc/CC-MAIN-20190418121317-20190418143317-00252.warc.gz"}
http://wiki.stat.ucla.edu/socr/index.php?title=SOCR_EduMaterials_ModelerActivities_NormalBetaModelFit
# SOCR EduMaterials ModelerActivities NormalBetaModelFit

## SOCR Educational Materials - Activities - SOCR Normal and Beta Distribution Model Fit Activity

### Summary

This activity describes the process of SOCR model fitting in the case of using Normal or Beta distribution models. Model fitting is the process of determining the parameters for an analytical model in such a way that we obtain optimal parameter estimates according to some criterion. There are many strategies for parameter estimation. The differences between most of these are the underlying cost-functions and the optimization strategies applied to maximize/minimize the cost-function.

### Goals

The aims of this activity are to:

- motivate the need for (analytical) modeling of natural processes
- illustrate how to use the SOCR Modeler to fit models to real data
- present applications of model fitting

### Background & Motivation

Suppose we are given the sequence of numbers {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} and asked to find the best (Continuous) Uniform Distribution that fits that data. In this case, there are two parameters that need to be estimated - the minimum (m) and the maximum (M) of the data. These parameters determine exactly the support (domain) of the continuous distribution, and we can explicitly write the density for the (best fit) continuous uniform distribution as:

$f(x) = \frac{1}{M-m}$, for $m \le x \le M$, and $f(x) = 0$, for $x \notin [m:M]$.

Having this model distribution, we can use its analytical form, f(x), to compute probabilities of events, critical functional values and, in general, do inference on the native process without acquiring additional data. Hence, a good strategy for model fitting is extremely useful in data analysis and statistical inference. Of course, any inference based on models is only going to be as good as the data and the optimization strategy used to generate the model.

Let's look at another motivational example. This time, suppose we have recorded the following (sample) measurements from some process: {1.2, 1.4, 1.7, 3.4, 1.5, 1.1, 1.7, 3.5, 2.5}. Taking a bin-size of 1, we can easily calculate the frequency histogram for this sample, {6, 1, 2}, as there are 6 observations in the interval [1:2), 1 measurement in the interval [2:3) and 2 measurements in the interval [3:4). We can now ask about the best Beta distribution model fit to the histogram of the data!

Most of the time when we study natural processes using probability distributions, it makes sense to fit distribution models to the frequency histogram of a sample, not the actual sample. This is because our general goals are to model the behavior of the native process, understand its distribution and quantify likelihoods of various events of interest (e.g., in terms of the example above, we may be interested in the probability of observing an outcome in the interval [1.50:2.15) or the chance that an observation exceeds 2.8).

### Exercises

#### Exercise 1

Let's first solve the challenge we presented in the background section, where we calculated the frequency histogram for a sample to be {6, 1, 2}. Go to the SOCR Modeler and click on the Data tab. Paste in the two columns of data. Column 1 is {1, 2, 3} - these are the ranges of the sample values and correspond to measurements in the intervals [1:2), [2:3) and [3:4). The second column represents the actual frequency counts of measurements within each of these 3 histogram bins - these are the values {6, 1, 2}. Now press the Graphs tab. You should see an image like the one below.
Then choose Beta_Fit_Modeler from the drop-down list of models in the top-left and click the "Estimate Parameters" check-box, also on the top-left. The graph now shows you the best Beta distribution model fit to the frequency histogram {6, 1, 2}. Click the Results tab to see the actual estimates of the two parameters of the corresponding Beta distribution (Left Parameter = 0.0446428571428572; Right Parameter = 0.11607142857142871; Left Limit = 1.0; Right Limit = 3.0).

You can also see how the (general) Beta distribution degenerates to this shape by going to SOCR Distributions, selecting the (Generalized) Beta Distribution from the top-left and setting the 4 parameters to the 4 values we computed above. Notice how the shape of the Beta distribution changes with each change of the parameters. This is also a good demonstration of why we did the distribution model fitting to the frequency histogram in the first place - precisely to obtain an analytic model for studying the general process without acquiring more data.

Notice how we can compute the odds (probability) of any event of interest once we have an analytical model for the distribution of the process. For example, this figure depicts the probability that a random observation from this process exceeds 2.8 (the right limit). This probability is computed to be 0.756.

#### Exercise 2

Go to the SOCR Modeler, select the Graphs tab, and click the "Scale Up" check-box. Then select Normal_Model_Fit from the drop-down list of models and begin clicking on the graph panel. The latter allows you to construct a histogram of interest manually. Notice that you are not entering random measurements, but rather the frequency counts from which the histogram is constructed. Try to make the histogram bins form a unimodal, bell-shaped and symmetric graph. Observe that as you click, new histogram bins will appear and the model fit will update. Now click the "Estimate Parameters" check-box on the top-left and see the best-fit Normal curve appear superimposed on the manually constructed histogram. Under the Results tab you can find the maximum likelihood estimates for the mean and the standard deviation of the best Normal distribution fit to this specific frequency histogram.
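For readers who want to reproduce this kind of fit outside the applet, the sketch below shows one way to do it in Python with SciPy. This is only a rough analogue, not the SOCR estimator: fixing the support to [1, 4] is an assumption made here for numerical stability, and SOCR's histogram-based fit may return different parameter values.

```python
# A minimal sketch of a generalized Beta model fit in SciPy (not the SOCR
# estimator; the fixed support [1, 4] is an assumption of this sketch).
from scipy import stats

sample = [1.2, 1.4, 1.7, 3.4, 1.5, 1.1, 1.7, 3.5, 2.5]

# Four-parameter (generalized) Beta: shapes a, b plus location and scale.
a, b, loc, scale = stats.beta.fit(sample, floc=1.0, fscale=3.0)
print("shape parameters:", a, b)

# Probability that a new observation exceeds 2.8 under the fitted model.
print("P(X > 2.8) =", stats.beta.sf(2.8, a, b, loc=loc, scale=scale))
```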
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945499062538147, "perplexity": 570.430598655033}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00465.warc.gz"}
http://etna.mcs.kent.edu/volumes/2001-2010/vol16/abstract.php?vol=16&pages=165-185
## Analysis of two-dimensional FETI-DP preconditioners by the standard additive Schwarz framework

Susanne C. Brenner

### Abstract

FETI-DP preconditioners for two-dimensional elliptic boundary value problems with heterogeneous coefficients are analyzed by the standard additive Schwarz framework. It is shown that the condition number of the preconditioned system for both second order and fourth order problems is bounded by $C\big(1+\ln(H/h)\big)^2$, where $H$ is the maximum of the diameters of the subdomains, $h$ is the mesh size of a quasiuniform triangulation, and the positive constant $C$ is independent of $h$, $H$, the number of subdomains and the coefficients of the boundary value problems on the subdomains. The sharpness of the bound for second order problems is also established.

Full Text (PDF) [264 KB]

### Key words

FETI-DP, additive Schwarz, domain decomposition, heterogeneous coefficients.

### AMS subject classifications

65N55, 65N30.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.951435387134552, "perplexity": 465.03203524478425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210362.19/warc/CC-MAIN-20180815220136-20180816000136-00312.warc.gz"}
http://mathhelpforum.com/calculus/9770-limit.html
1. The Limit

Let $f(x)=5x^2+5x+4$ and let $g(h)=\frac{f(2+h)-f(2)}{h}$

Determine each of the following:

A. g(1)
B. g(0.1)
C. g(0.01)

You will notice that the values that you entered are getting closer and closer to a number $L$. This number is called the limit of $g(h)$ as $h$ approaches $0$ and is also called the derivative of f(x) at the point when x=2. Enter the value of the number $L$ ____?_____.

Thanks for all your help, as I am currently struggling with some of these new concepts.

2. Hello, qbkr21!

Let $f(x)=5x^2+5x+4$ and let $g(h)=\frac{f(2+h)-f(2)}{h}$
Determine each of the following: . $A.\;g(1)\qquad B.\;g(0.1)\qquad C.\;g(0.01)$
You will notice that the values that you entered are getting closer and closer to a number $L$. This number is called the limit of $g(h)$ as $h \to 0$ and is also called the derivative of $f(x)$ at the point when $x=2.$ Enter the value of the number $L$ ____?_____.

First, we'll determine $g(h)$ . . . and do all the algebra first.

There are three stages to $g(h):$
. . (1) Find $f(2 + h)$ . . . and simplify.
. . (2) Subtract $f(2)$ . . . and simplify.
. . (3) Divide by $h$ . . . and simplify.

(1) $f(2+h) \:=\:5(2+h)^2 + 5(2+h) + 4$
. . . . . . . . . $= \:20 + 20h + 5h^2 + 10 + 5h + 4$
. . . . . . . . . $= \:5h^2 + 25h + 34$

(2) $f(2) \:=\:5(2^2) + 5(2) + 4\:=\:34$

Subtract: . $f(2+h) - f(2) \;=\;(5h^2 + 25h + 34) - 34 \;=\;5h^2 + 25h$

(3) Divide by $h:\;\;\frac{f(2+h) - f(2)}{h} \;= \;\frac{5h^2 + 25h}{h}$

. . .Factor and simplify: . $\frac{5\!\!\not{h}(h + 5)}{\not{h}}\;=\;5(h+5)$

There! . . . $g(h) \:=\:5(h+5)$

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Now we can crank out the answers.

$\begin{array}{cccccc} A. & g(1) & = & 5(1 + 5) & = & 30 \\ B. & g(0.1) & = & 5(0.1 + 5) & = & 25.5 \\ C. & g(0.01) & = & 5(0.01 + 5) & = & 25.05 \end{array}$

As $h\to0$ (gets smaller and smaller), $g(h)$ approaches 25.

And that is the limit they're asking for: . $L \:=\:25$

3. Maybe this will help. (I am still not complete. But hopefully soon I will have everything made that I want to.)

4. Thanks so much PerfectHacker you are the man!

5. Maybe you should check out PH's calc tutorial. Anyway, take the derivative of your polynomial. You get $f'(x)=10x+5$. When x=2, we have $f'(2)=L$.

Now, the concepts you're using are the nuts and bolts of the derivative.

Using $\frac{f(2+h)-f(2)}{h}$:

$\frac{5(2+h)^{2}+5(2+h)+4-(5(2)^{2}+5(2)+4)}{h}$

$=5(h+5)$

Now, if you enter in h=1, you get 30.
h=0.1, you get 25.5
h=0.01, you get 25.05
and so on.

Actually, if you let h approach 0, you are converging on this aforementioned number L. See what it is? Does it jibe with what we got by taking the derivative (the easy way) mentioned at the top?

$\lim_{h\rightarrow{0}}5(h+5)=L$

The closer h gets to 0, the closer you get to the slope (derivative) at that point.
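A quick numeric check of this limit (a small sketch of my own, not part of the original thread) takes only a few lines of Python:

```python
# Numerically evaluate g(h) = (f(2+h) - f(2)) / h for shrinking h.
def f(x):
    return 5 * x**2 + 5 * x + 4

def g(h):
    return (f(2 + h) - f(2)) / h

for h in [1, 0.1, 0.01, 0.001]:
    print(h, g(h))  # 30, 25.5, 25.05, 25.005 -> approaches L = 25
```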
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9775016903877258, "perplexity": 448.3893595128623}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707773051/warc/CC-MAIN-20130516123613-00060-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/215706-inequalities-help-print.html
Inequalities help

• March 26th 2013, 07:33 PM
calmo11
1 Attachment(s)
Inequalities help
I need help with these two inequalities.
Attachment 27702
For c) I can get to x >= 5, but I know that it is true when x < 0 too and I don't know how to show that algebraically.
For d) I can get to x > 3/2, but I also know that it is true when x < 1 and can't show it. Please help!
• March 26th 2013, 08:44 PM
Soroban
Re: Inequalities help
Hello, calmo11!

Quote:

$(c)\;\;e^{x^2} \:\ge\:e^{5x}$

We have: . $x^2 \:\ge\:5x \quad\Rightarrow\quad x^2 - 5x \:\ge\:0$
. . . . . . . . $x(x-5)\:\ge\:0$

It says: the product of two numbers is positive.
This is true if both factors are positive or both factors are negative.
We must consider these two cases.

Both positive:
. . $x \,\ge\, 0 \:\text{ and }\:x-5\,\ge\,0 \quad\Rightarrow\quad x \,\ge\,5$
. . Both are true if $x\,\ge\,5$

Both negative:
. . $x\,\le\,0\:\text{ and }\:x-5 \,\le\,0 \quad\Rightarrow\quad x\,\le\,5$
. . Both are true if $x\,\le\,0$

Answer: . $(x\,\ge\,5)\,\text{ or }\,(x\,\le\,0)$

Quote:

$(d)\;\;|4x-5| \:>\:1$

The absolute inequality gives us two statements:
. . $[1]\;4x-5 \,>\,1\;\text{ or }\;[2]\;4x-5\,<\,-1$

Solve them separately:
. . $[1]\;4x - 5 \,>\,1 \quad\Rightarrow\quad 4x\,>\,6 \quad\Rightarrow\quad x \,>\,\tfrac{3}{2}$
. . $[2]\;4x-5 \,<\,-1 \quad\Rightarrow\quad 4x \,<\,4 \quad\Rightarrow\quad x \,<\,1$

Answer: . $(x\,<\,1)\:\text{ or }\:(x\,>\,\tfrac{3}{2})$
• March 26th 2013, 09:30 PM
calmo11
Re: Inequalities help
You are a genius! Why can't you teach me my uni course!
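As a side note, both answers can be double-checked mechanically, for instance with SymPy (a sketch of my own, not part of the thread; it reduces (c) to the same polynomial inequality Soroban used, since $e^t$ is increasing):

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)

# (c) e^(x^2) >= e^(5x) reduces to x^2 >= 5x because e^t is increasing
print(solve_univariate_inequality(x**2 >= 5*x, x))   # (x <= 0) | (x >= 5)

# (d) |4x - 5| > 1 splits into the two linear cases from the post
print(solve_univariate_inequality(4*x - 5 > 1, x))   # x > 3/2
print(solve_univariate_inequality(4*x - 5 < -1, x))  # x < 1
```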
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9584217667579651, "perplexity": 2064.8274348431096}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096780.24/warc/CC-MAIN-20150627031816-00234-ip-10-179-60-89.ec2.internal.warc.gz"}
http://jeremykun.com/tag/random-variables/
# Martingales and the Optional Stopping Theorem

This is a guest post by my colleague Adam Lelkes.

The goal of this primer is to introduce an important and beautiful tool from probability theory, a model of fair betting games called martingales. In this post I will assume that the reader is familiar with the basics of probability theory. For those that need to refresh their knowledge, Jeremy's excellent primers (1, 2) are a good place to start.

## The Geometric Distribution and the ABRACADABRA Problem

Before we start playing with martingales, let's start with an easy exercise. Consider the following experiment: we throw an ordinary die repeatedly until the first time a six appears. How many throws will this take in expectation? The reader might recognize immediately that this exercise can be easily solved using the basic properties of the geometric distribution, which models this experiment exactly. We have independent trials, every trial succeeding with some fixed probability $p$. If $X$ denotes the number of trials needed to get the first success, then clearly $\Pr(X = k) = (1-p)^{k-1} p$ (since first we need $k-1$ failures which occur independently with probability $1-p$, then we need one success which happens with probability $p$). Thus the expected value of $X$ is

$\displaystyle E(X) = \sum_{k=1}^\infty k P(X = k) = \sum_{k=1}^\infty k (1-p)^{k-1} p = \frac1p$

by basic calculus. In particular, if success is defined as getting a six, then $p=1/6$, thus the expected time is $1/p=6$.

Now let us move on to a somewhat similar, but more interesting and difficult problem, the ABRACADABRA problem. Here we need two things for our experiment, a monkey and a typewriter. The monkey is asked to start bashing random keys on a typewriter. For simplicity's sake, we assume that the typewriter has exactly 26 keys corresponding to the 26 letters of the English alphabet and the monkey hits each key with equal probability. There is a famous theorem in probability, the infinite monkey theorem, that states that given infinite time, our monkey will almost surely type the complete works of William Shakespeare. Unfortunately, according to astronomers the sun will begin to die in a few billion years, and the expected time we need to wait until a monkey types the complete works of William Shakespeare is orders of magnitude longer, so it is not feasible to use monkeys to produce works of literature.

So let's scale down our goals, and let's just wait until our monkey types the word ABRACADABRA. What is the expected time we need to wait until this happens?

The reader's first idea might be to use the geometric distribution again. ABRACADABRA is eleven letters long, the probability of getting one letter right is $\frac{1}{26}$, thus the probability of a random eleven-letter word being ABRACADABRA is exactly $\left(\frac{1}{26}\right)^{11}$. So if typing 11 letters is one trial, the expected number of trials is

$\displaystyle \frac1{\left(\frac{1}{26}\right)^{11}}=26^{11}$

which means $11\cdot 26^{11}$ keystrokes, right?

Well, not exactly. The problem is that we broke up our random string into eleven-letter blocks and waited until one block was ABRACADABRA. However, this word can start in the middle of a block. In other words, we considered a string a success only if the starting position of the word ABRACADABRA was divisible by 11. For example, FRZUNWRQXKLABRACADABRA would be recognized as a success by this model but the same would not be true for AABRACADABRA.
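Before moving on, it is easy to sanity-check the geometric-distribution answer by simulation (a small sketch of mine, not from the original post):

```python
import random

# Estimate the expected number of die rolls until the first six;
# the geometric distribution says it should be 1/p = 6.
def rolls_until_six():
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 6:
            return n

trials = 100000
print(sum(rolls_until_six() for _ in range(trials)) / trials)  # close to 6
```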
However, it is at least clear from this observation that $11\cdot 26^{11}$ is a strict upper bound for the expected waiting time. To find the exact solution, we need one very clever idea, which is the following:

## Let's Open a Casino!

Do I mean that abandoning our monkey and typewriter and investing our time and money in a casino is a better idea, at least in financial terms? This might indeed be the case, but here we will use a casino to determine the expected wait time for the ABRACADABRA problem. Unfortunately we won't make any money along the way (in expectation) since our casino will be a fair one.

Let's do the following thought experiment: let's open a casino next to our typewriter. Before each keystroke, a new gambler comes to our casino and bets \$1 that the next letter will be A. If he loses, he goes home disappointed. If he wins, he bets all the money he won on the event that the next letter will be B. Again, if he loses, he goes home disappointed. (This won't wreak havoc on his financial situation, though, as he only loses \$1 of his own money.) If he wins again, he bets all the money on the event that the next letter will be R, and so on.

If a gambler wins, how much does he win? We said that the casino would be fair, i.e. the expected outcome should be zero. That means that if the gambler bets \$1, he should receive \$26 if he wins, since the probability of getting the next letter right is exactly $\frac{1}{26}$ (thus the expected value of the change in the gambler's fortune is $\frac{25}{26}\cdot (-1) + \frac{1}{26}\cdot (+25) = 0$).

Let's keep playing this game until the word ABRACADABRA first appears and let's denote the number of keystrokes up to this time as $T$. As soon as we see this word, we close our casino. How much was the revenue of our casino then? Remember that before each keystroke, a new gambler comes in and bets \$1, and if he wins, he will only bet the money he has received so far, so our revenue will be exactly $T$ dollars.

How much will we have to pay for the winners? Note that the only winners in the last round are the players who bet on A. How many of them are there? There is one that just came in before the last keystroke and this was his first bet. He wins \$26. There was one who came three keystrokes earlier and he made four successful bets (ABRA). He wins $26^4$ dollars. Finally there is the luckiest gambler who went through the whole ABRACADABRA sequence; his prize will be $26^{11}$ dollars. Thus our casino will have to give out $26^{11}+26^4+26$ dollars in total, which is just under the price of 200,000 WhatsApp acquisitions.

Now we will make one crucial observation: even at the time when we close the casino, the casino is fair! Thus in expectation our expenses will be equal to our income. Our income is $T$ dollars, the expected value of our expenses is $26^{11}+26^4+26$ dollars, thus $E(T)=26^{11}+26^4+26$. A beautiful solution, isn't it? So if our monkey types at 150 characters per minute on average, we will have to wait around 47 million years until we see ABRACADABRA. Oh well.

## Time to be More Formal

After giving an intuitive outline of the solution, it is time to formalize the concepts that we used, to translate our fairy tales into mathematics. The mathematical model of the fair casino is called a martingale, named after a class of betting strategies that enjoyed popularity in 18th century France. The gambler's fortune (or the casino's, depending on our viewpoint) can be modeled with a sequence of random variables.
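Simulating ABRACADABRA directly is hopeless ($26^{11}$ keystrokes on average), but the same overlap argument can be checked on a toy instance. The sketch below (my own, with a scaled-down word and alphabet) uses the word ABA over a two-letter alphabet, for which the casino argument predicts $E(T) = 2^3 + 2 = 10$:

```python
import random

# Type random letters until the given word appears; return the number typed.
def time_until(word, alphabet="AB"):
    typed = ""
    while not typed.endswith(word):
        typed += random.choice(alphabet)
    return len(typed)

trials = 100000
print(sum(time_until("ABA") for _ in range(trials)) / trials)  # close to 10
```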
$X_0$ will denote the gambler's fortune before the game starts, $X_1$ the fortune after one round and so on. Such a sequence of random variables is called a stochastic process. We will require the expected value of the gambler's fortune to be always finite.

How can we formalize the fairness of the game? Fairness means that the gambler's fortune does not change in expectation, i.e. the expected value of $X_n$, given $X_1, X_2, \ldots, X_{n-1}$, is the same as $X_{n-1}$. This can be written as

$E(X_n | X_1, X_2, \ldots, X_{n-1}) = X_{n-1}$

or, equivalently,

$E(X_n - X_{n-1} | X_1, X_2, \ldots, X_{n-1}) = 0.$

The reader might be less comfortable with the first formulation. What does it mean, after all, that the conditional expected value of a random variable is another random variable? Shouldn't the expected value be a number? The answer is that in order to have solid theoretical foundations for the definition of a martingale, we need a more sophisticated notion of conditional expectations. Such sophistication involves measure theory, which is outside the scope of this post. We will instead naively accept the definition above, and the reader can look up all the formal details in any serious probability text (such as [1]).

Clearly the fair casino we constructed for the ABRACADABRA exercise is an example of a martingale. Another example is the simple symmetric random walk on the number line: we start at 0, toss a coin in each step, and move one step in the positive or negative direction based on the outcome of our coin toss.

## The Optional Stopping Theorem

Remember that we closed our casino as soon as the word ABRACADABRA appeared and we claimed that our casino was also fair at that time. In mathematical language, the closed casino is called a stopped martingale. The stopped martingale is constructed as follows: we wait until our martingale $X$ exhibits a certain behaviour (e.g. the word ABRACADABRA is typed by the monkey), and we define a new martingale $X'$ as follows: let $X'_n = X_n$ if $n < T$ and $X'_n = X_T$ if $n \ge T$, where $T$ denotes the stopping time, i.e. the time at which the desired event occurs. Notice that $T$ itself is a random variable.

We require our stopping time $T$ to depend only on the past, i.e. that at any time we should be able to decide whether the event that we are waiting for has already happened or not (without looking into the future). This is a very reasonable requirement. If we could look into the future, we could obviously cheat by closing our casino just before some gambler would win a huge prize.

We said that the expected wealth of the casino at the stopping time is the same as the initial wealth. This is guaranteed by Doob's optional stopping theorem, which states that under certain conditions, the expected value of a martingale at the stopping time is equal to its expected initial value.

Theorem: (Doob's optional stopping theorem) Let $X_n$ be a martingale stopped at step $T$, and suppose one of the following three conditions holds:

1. The stopping time $T$ is almost surely bounded by some constant;
2. The stopping time $T$ is almost surely finite and every step of the stopped martingale $X_n$ is almost surely bounded by some constant; or
3. The expected stopping time $E(T)$ is finite and the absolute values of the martingale increments $|X_n-X_{n-1}|$ are almost surely bounded by a constant.

Then $E(X_T) = E(X_0).$

We omit the proof because it requires measure theory, but the interested reader can see it in these notes.
For applications, (1) and (2) are the trivial cases. In the ABRACADABRA problem, the third condition holds: the expected stopping time is finite (in fact, we showed using the geometric distribution that it is less than $26^{12}$) and the absolute value of a martingale increment is either 1 or a net payoff which is bounded by $26^{11}+26^4+26$. This shows that our solution is indeed correct.

## Gambler's Ruin

Another famous application of martingales is the gambler's ruin problem. This problem models the following game: there are two players, the first player has $a$ dollars, the second player has $b$ dollars. In each round they toss a coin and the loser gives one dollar to the winner. The game ends when one of the players runs out of money. There are two obvious questions: (1) what is the probability that the first player wins and (2) how long will the game take in expectation?

Let $X_n$ denote the change in the second player's fortune, and set $X_0 = 0$. Let $T_k$ denote the first time $s$ when $X_s = k$. Then our first question can be formalized as trying to determine $\Pr(T_{-b} < T_a)$. Let $t = \min \{ T_{-b}, T_a\}$. Clearly $t$ is a stopping time. By the optional stopping theorem we have that

$\displaystyle 0=E(X_0)=E(X_t)=-b\Pr(T_{-b} < T_a)+a(1-\Pr(T_{-b} < T_a))$

thus $\Pr(T_{-b} < T_a)=\frac{a}{a+b}$.

I would like to ask the reader to try to answer the second question. It is a little bit trickier than the first one, though, so here is a hint: $X_n^2-n$ is also a martingale (prove it), and applying the optional stopping theorem to it leads to the answer.

## A Randomized Algorithm for 2-SAT

The reader is probably familiar with 3-SAT, the first problem shown to be NP-complete. Recall that 3-SAT is the following problem: given a boolean formula in conjunctive normal form with at most three literals in each clause, decide whether there is a satisfying truth assignment. It is natural to ask if or why 3 is special, i.e. why don't we work with $k$-SAT for some $k \ne 3$ instead? Clearly the hardness of the problem is monotone increasing in $k$ since $k$-SAT is a special case of $(k+1)$-SAT. On the other hand, SAT (without any bound on the number of literals per clause) is clearly in NP, thus 3-SAT is just as hard as $k$-SAT for any $k>3$. So the only question is: what can we say about 2-SAT?

It turns out that 2-SAT is easier than satisfiability in general: 2-SAT is in P. There are many algorithms for solving 2-SAT. Here is one deterministic algorithm: associate a graph to the 2-SAT instance such that there is one vertex for each variable and each negated variable, and the literals $x$ and $y$ are connected by a directed edge if there is a clause $(\bar x \lor y)$. Recall that $\bar x \lor y$ is equivalent to $x \implies y$, so the edges show the implications between the variables. Clearly the 2-SAT instance is not satisfiable if there is a variable $x$ such that there are directed paths $x \to \bar x$ and $\bar x \to x$ (since $x \Leftrightarrow \bar x$ is always false). It can be shown that this is not only a sufficient but also a necessary condition for unsatisfiability, hence the 2-SAT instance is satisfiable if and only if there is no such path. If there are directed paths from one vertex of a graph to another and vice versa then they are said to belong to the same strongly connected component. There are several graph algorithms for finding strongly connected components of directed graphs; the most well-known algorithms are all based on depth-first search.
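To make the deterministic algorithm concrete, here is a sketch of the SCC-based check using networkx (the clause encoding with signed integers is my own convention, not something from the post):

```python
import networkx as nx

def two_sat_satisfiable(clauses):
    """clauses: list of pairs of nonzero ints; -i stands for "not x_i"."""
    g = nx.DiGraph()
    for (a, b) in clauses:
        # (a or b) is equivalent to (not a => b) and (not b => a)
        g.add_edge(-a, b)
        g.add_edge(-b, a)
    # Unsatisfiable exactly when some x and not-x share a strongly
    # connected component, i.e. each implies the other.
    for component in nx.strongly_connected_components(g):
        if any(-literal in component for literal in component):
            return False
    return True

print(two_sat_satisfiable([(1, 2), (-1, 2), (-2, 1)]))  # True
print(two_sat_satisfiable([(1, 1), (-1, -1)]))          # False
```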
Now we give a very simple randomized algorithm for 2-SAT (due to Christos Papadimitriou in a '91 paper): start with an arbitrary truth assignment and while there are unsatisfied clauses, pick one and flip the truth value of a random literal in it. Stop after $O(n^2)$ rounds, where $n$ denotes the number of variables. Clearly if the formula is not satisfiable then nothing can go wrong: we will never find a satisfying truth assignment. If the formula is satisfiable, we want to argue that with high probability we will find a satisfying truth assignment in $O(n^2)$ steps.

The idea of the proof is the following: fix an arbitrary satisfying truth assignment and consider the Hamming distance of our current assignment from it. The Hamming distance of two truth assignments (or in general, of two binary vectors) is the number of coordinates in which they differ. Since we flip one bit in every step, this Hamming distance changes by $\pm 1$ in every round. It is also easy to see that in every step the distance is at least as likely to be decreased as to be increased (since we pick an unsatisfied clause, which means at least one of the two literals in the clause differs in value from the satisfying assignment).

Thus this is an unfair "gambler's ruin" problem where the gambler's fortune is the Hamming distance from the solution, and it decreases with probability at least $\frac{1}{2}$. Such a stochastic process is called a supermartingale, and this is arguably a better model for real-life casinos. (If we flip the inequality, the stochastic process we get is called a submartingale.) Also, in this case the gambler's fortune (the Hamming distance) cannot increase beyond $n$.

We can also think of this process as a random walk on the set of integers: we start at some number and in each round we make one step to the left or to the right with some probability. If we use random walk terminology, 0 is called an absorbing barrier since we stop the process when we reach 0. The number $n$, on the other hand, is called a reflecting barrier: we cannot reach $n+1$, and whenever we get close we always bounce back.

There is an equivalent version of the optional stopping theorem for supermartingales and submartingales, where the conditions are the same but the consequence holds with an inequality instead of equality. It follows from the optional stopping theorem that the gambler will be ruined (i.e. a satisfying truth assignment will be found) in $O(n^2)$ steps with high probability.

[1] For a reference on stochastic processes and martingales, see the text of Durrett.

# Optimism in the Face of Uncertainty: the UCB1 Algorithm

The software world is always atwitter with predictions on the next big piece of technology. And a lot of chatter focuses on what venture capitalists express interest in. As an investor, how do you pick a good company to invest in? Do you notice quirky names like "Kaggle" and "Meebo," require deep technical abilities, or value a charismatic sales pitch?

This author personally believes we're not thinking as big as we should be when it comes to innovation in software engineering and computer science, and that as a society we should value big pushes forward much more than we do. But making safe investments is almost always at odds with innovation. And so every venture capitalist faces the following question. When do you focus investment in those companies that have proven to succeed, and when do you explore new options for growth?
A successful venture capitalist must strike a fine balance between this kind of exploration and exploitation. Explore too much and you won't make enough profit to sustain yourself. Narrow your view too much and you will miss out on opportunities whose return surpasses any of your current prospects. In life and in business there is no correct answer on what to do, partly because we just don't have a good understanding of how the world works (or markets, or people, or the weather). In mathematics, however, we can meticulously craft settings that have solid answers. In this post we'll describe one such scenario, the so-called multi-armed bandit problem, and a simple algorithm called UCB1 which performs close to optimally. Then, in a future post, we'll analyze the algorithm on some real world data.

As usual, all of the code used in the making of this post is available for download on this blog's Github page.

## Multi-Armed Bandits

The multi-armed bandit scenario is simple to describe, and it boils the exploration-exploitation tradeoff down to its purest form.

Suppose you have a set of $K$ actions labeled by the integers $\left \{ 1, 2, \dots, K \right \}$. We call these actions in the abstract, but in our minds they're slot machines. We can then play a game where, in each round, we choose an action (a slot machine to play), and we observe the resulting payout. Over many rounds, we might explore the machines by trying some at random. Assuming the machines are not identical, we naturally play machines that seem to pay off well more frequently to try to maximize our total winnings.

This is the most general description of the game we could possibly give, and every bandit learning problem has these two components: actions and rewards. But in order to get to a concrete problem that we can reason about, we need to specify more details. Bandit learning is a large tree of variations and this is the point at which the field ramifies. We presently care about two of the main branches.

How are the rewards produced? There are many ways that the rewards could work. One nice option is to have the rewards for action $i$ be drawn from a fixed distribution $D_i$ (a different reward distribution for each action), and have the draws be independent across rounds and across actions. This is called the stochastic setting and it's what we'll use in this post. Just to pique the reader's interest, here's the alternative: instead of having the rewards be chosen randomly, have them be adversarial. That is, imagine a casino owner knows your algorithm and your internal beliefs about which machines are best at any given time. He then fixes the payoffs of the slot machines in advance of each round to screw you up! This sounds dismal, because the casino owner could just make all the machines pay nothing every round. But we can actually design good algorithms for this case, though "good" will mean something different than absolute winnings. And so we must ask:

How do we measure success? In both the stochastic and the adversarial setting, we're going to have a hard time coming up with any theorems about the performance of an algorithm if we care about how much absolute reward is produced. There's nothing to stop the distributions from having terrible expected payouts, and nothing to stop the casino owner from intentionally giving us no payout. Indeed, the problem lies in our measurement of success. A better measurement, which we can apply to both the stochastic and adversarial settings, is the notion of regret.
We'll give the definition for the stochastic case, and investigate the adversarial case in a future post.

Definition: Given a player algorithm $A$ and a set of actions $\left \{1, 2, \dots, K \right \}$, the cumulative regret of $A$ in rounds $1, \dots, T$ is the difference between the expected reward of the best action (the action with the highest expected payout) and the expected reward of $A$ for the first $T$ rounds.

We'll add some more notation shortly to rephrase this definition in symbols, but the idea is clear: we're competing against the best action. Had we known it ahead of time, we would have just played it every single round. Our notion of success is not in how well we do absolutely, but in how well we do relative to what is feasible.

## Notation

Let's go ahead and draw up some notation. As before the actions are labeled by integers $\left \{ 1, \dots, K \right \}$. The reward of action $i$ is a $[0,1]$-valued random variable $X_i$ distributed according to an unknown distribution and possessing an unknown expected value $\mu_i$. The game progresses in rounds $t = 1, 2, \dots$ so that in each round we have different random variables $X_{i,t}$ for the reward of action $i$ in round $t$ (in particular, $X_{i,t}$ and $X_{i,s}$ are identically distributed). The $X_{i,t}$ are independent as both $t$ and $i$ vary, although when $i$ varies the distribution changes.

So if we were to play action 2 over and over for $T$ rounds, then the total payoff would be the random variable $G_2(T) = \sum_{t=1}^T X_{2,t}$. But by independence across rounds and the linearity of expectation, the expected payoff is just $\mu_2 T$. So we can describe the best action as the action with the highest expected payoff. Define

$\displaystyle \mu^* = \max_{1 \leq i \leq K} \mu_i$

We call the action which achieves the maximum $i^*$.

A policy is a randomized algorithm $A$ which picks an action in each round based on the history of chosen actions and observed rewards so far. Define $I_t$ to be the action played by $A$ in round $t$ and $P_i(n)$ to be the number of times we've played action $i$ in rounds $1 \leq t \leq n$. These are both random variables. Then the cumulative payoff for the algorithm $A$ over the first $T$ rounds, denoted $G_A(T)$, is just

$\displaystyle G_A(T) = \sum_{t=1}^T X_{I_t, t}$

and its expected value is simply

$\displaystyle \mathbb{E}(G_A(T)) = \mu_1 \mathbb{E}(P_1(T)) + \dots + \mu_K \mathbb{E}(P_K(T)).$

Here the expectation is taken over all random choices made by the policy and over the distributions of rewards, and indeed both of these can affect how many times a machine is played.

Now the cumulative regret of a policy $A$ after the first $T$ steps, denoted $R_A(T)$, can be written as

$\displaystyle R_A(T) = G_{i^*}(T) - G_A(T)$

And the goal of the policy designer for this bandit problem is to minimize the expected cumulative regret, which by linearity of expectation is $\mathbb{E}(R_A(T)) = \mu^*T - \mathbb{E}(G_A(T))$.

Before we continue, we should note that there are theorems concerning lower bounds for expected cumulative regret. Specifically, for this problem it is known that no algorithm can guarantee an expected cumulative regret better than $\Omega(\sqrt{KT})$. It is also known that there are algorithms that guarantee no worse than $O(\sqrt{KT})$ expected regret. The algorithm we'll see in the next section, however, only guarantees $O(\sqrt{KT \log T})$. We present it on this blog because of its simplicity and ubiquity in the field.
## The UCB1 Algorithm

The policy we examine is called UCB1, and it can be summed up by the principle of optimism in the face of uncertainty. That is, despite our lack of knowledge in what actions are best we will construct an optimistic guess as to how good the expected payoff of each action is, and pick the action with the highest guess. If our guess is wrong, then our optimistic guess will quickly decrease and we'll be compelled to switch to a different action. But if we pick well, we'll be able to exploit that action and incur little regret. In this way we balance exploration and exploitation.

The formalism is a bit more detailed than this, because we'll need to ensure that we don't rule out good actions that fare poorly early on. Our "optimism" comes in the form of an upper confidence bound (hence the acronym UCB). Specifically, we want to know with high probability that the true expected payoff of an action $\mu_i$ is less than our prescribed upper bound. One general (distribution independent) way to do that is to use the Chernoff-Hoeffding inequality.

As a reminder, suppose $Y_1, \dots, Y_n$ are independent random variables whose values lie in $[0,1]$ and whose expected values are $\mu_i$. Call $Y = \frac{1}{n}\sum_{i}Y_i$ and $\mu = \mathbb{E}(Y) = \frac{1}{n} \sum_{i} \mu_i$. Then the Chernoff-Hoeffding inequality gives an exponential upper bound on the probability that the value of $Y$ deviates from its mean. Specifically,

$\displaystyle \textup{P}(Y + a < \mu) \leq e^{-2na^2}$

For us, the $Y_i$ will be the payoff variables for a single action $j$ in the rounds for which we choose action $j$. Then the variable $Y$ is just the empirical average payoff for action $j$ over all the times we've tried it. Moreover, $a$ is our one-sided upper bound (and as a lower bound, sometimes). We can then solve this equation for $a$ to find an upper bound big enough to be confident that we're within $a$ of the true mean.

Indeed, if we call $n_j$ the number of times we played action $j$ thus far, then $n = n_j$ in the equation above, and using $a = a(j,T) = \sqrt{2 \log(T) / n_j}$ we get that $\textup{P}(Y > \mu + a) \leq T^{-4}$ (indeed, $e^{-2 n_j \cdot 2\log(T)/n_j} = e^{-4 \log T} = T^{-4}$), which converges to zero very quickly as the number of rounds played grows. We'll see this pop up again in the algorithm's analysis below. But before that note two things. First, assuming we don't play an action $j$, its upper bound $a$ grows in the number of rounds. This means that we never permanently rule out an action no matter how poorly it performs. If we get extremely unlucky with the optimal action, we will eventually be convinced to try it again. Second, the probability that our upper bound is wrong decreases in the number of rounds independently of how many times we've played the action. That is because our upper bound $a(j, T)$ is getting bigger for actions we haven't played; any round in which we play an action $j$, it must be that $a(j, T+1) = a(j,T)$, although the empirical mean will likely change.

With these two facts in mind, we can formally state the algorithm and intuitively understand why it should work.

UCB1:
Play each of the $K$ actions once, giving initial values for empirical mean payoffs $\overline{x}_i$ of each action $i$.
For each round $t = K, K+1, \dots$:
   Let $n_j$ represent the number of times action $j$ was played so far.
   Play the action $j$ maximizing $\overline{x}_j + \sqrt{2 \log t / n_j}$.
   Observe the reward $X_{j,t}$ and update the empirical mean for the chosen action.

And that's it.
Note that we're being super stateful here: the empirical means $\overline{x}_j$ change over time, and we'll leave this update implicit throughout the rest of our discussion (sorry, functional programmers, but the notation is horrendous otherwise).

Before we implement and test this algorithm, let's go ahead and prove that it achieves nearly optimal regret. The reader uninterested in mathematical details should skip the proof, but the discussion of the theorem itself is important. If one wants to use this algorithm in real life, one needs to understand the guarantees it provides in order to adequately quantify the risk involved in using it.

Theorem: Suppose that UCB1 is run on the bandit game with $K$ actions, each of whose reward distribution $X_{i,t}$ has values in $[0,1]$. Then its expected cumulative regret after $T$ rounds is at most $O(\sqrt{KT \log T})$.

Actually, we'll prove a more specific theorem. Let $\Delta_i$ be the difference $\mu^* - \mu_i$, where $\mu^*$ is the expected payoff of the best action, and let $\Delta$ be the minimal nonzero $\Delta_i$. That is, $\Delta_i$ represents how suboptimal an action is and $\Delta$ is the suboptimality of the second best action. These constants are called problem-dependent constants. The theorem we'll actually prove is:

Theorem: Suppose UCB1 is run as above. Then its expected cumulative regret $\mathbb{E}(R_{\textup{UCB1}}(T))$ is at most

$\displaystyle 8 \sum_{i : \mu_i < \mu^*} \frac{\log T}{\Delta_i} + \left ( 1 + \frac{\pi^2}{3} \right ) \left ( \sum_{j=1}^K \Delta_j \right )$

Okay, this looks like one nasty puppy, but it's actually not that bad. The first term of the sum signifies that we expect to play any suboptimal machine about a logarithmic number of times, roughly scaled by how hard it is to distinguish from the optimal machine. That is, if $\Delta_i$ is small we will require more tries to know that action $i$ is suboptimal, and hence we will incur more regret. The second term represents a small constant number (the $1 + \pi^2 / 3$ part) that caps the number of times we'll play suboptimal machines in excess of the first term due to unlikely events occurring. So the first term is like our expected losses, and the second is our risk.

But note that this is a worst-case bound on the regret. We're not saying we will achieve this much regret, or anywhere near it, but that UCB1 simply cannot do worse than this. Our hope is that in practice UCB1 performs much better.

Before we prove the theorem, let's see how to derive the $O(\sqrt{KT \log T})$ bound mentioned above. This will require familiarity with multivariable calculus, but such things must be endured like ripping off a band-aid. First consider the regret as a function $R(\Delta_1, \dots, \Delta_K)$ (excluding of course $\Delta^*$), and let's look at the worst case bound by maximizing it. In particular, we're just finding the problem with the parameters which screw our bound as badly as possible. The gradient of the regret function is given by

$\displaystyle \frac{\partial R}{\partial \Delta_i} = - \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3}$

and it's zero if and only if for each $i$, $\Delta_i = \sqrt{\frac{8 \log T}{1 + \pi^2/3}} = O(\sqrt{\log T})$. However this is a minimum of the regret bound (the Hessian is diagonal and all its eigenvalues are positive). Plugging in the $\Delta_i = O(\sqrt{\log T})$ (which are all the same) gives a total bound of $O(K \sqrt{\log T})$. If we look at the only possible endpoint (the $\Delta_i = 1$), then we get a local maximum of $O(K \sqrt{\log T})$.
But this isn't the $O(\sqrt{KT \log T})$ we promised, what gives? Well, this upper bound grows arbitrarily large as the $\Delta_i$ go to zero. But at the same time, if all the $\Delta_i$ are small, then we shouldn't be incurring much regret because we'll be picking actions that are close to optimal!

Indeed, if we assume for simplicity that all the $\Delta_i = \Delta$ are the same, then another trivial regret bound is $\Delta T$ (why?). The true regret is hence the minimum of this regret bound and the UCB1 regret bound: as the UCB1 bound degrades we will eventually switch to the simpler bound. That will be a non-differentiable switch (and hence a critical point) and it occurs at $\Delta = O(\sqrt{(K \log T) / T})$. Hence the regret bound at the switch is $\Delta T = O(\sqrt{KT \log T})$, as desired.

## Proving the Worst-Case Regret Bound

Proof. The proof works by finding a bound on $P_i(T)$, the expected number of times UCB chooses an action up to round $T$. Using the $\Delta$ notation, the regret is then just $\sum_i \Delta_i \mathbb{E}(P_i(T))$, and bounding the $P_i$'s will bound the regret.

Recall the notation for our upper bound $a(j, T) = \sqrt{2 \log T / P_j(T)}$ and let's loosen it a bit to $a(y, T) = \sqrt{2 \log T / y}$ so that we're allowed to "pretend" an action has been played $y$ times. Recall further that the random variable $I_t$ has as its value the index of the machine chosen. We denote by $\chi(E)$ the indicator random variable for the event $E$. And remember that we use an asterisk to denote a quantity associated with the optimal action (e.g., $\overline{x}^*$ is the empirical mean of the optimal action).

Indeed for any action $i$, the only way we know how to write down $P_i(T)$ is as

$\displaystyle P_i(T) = 1 + \sum_{t=K}^T \chi(I_t = i)$

The 1 is from the initialization where we play each action once, and the sum is the trivial thing where we just count the number of rounds in which we pick action $i$. Now we're just going to pull some number $m-1$ of plays out of that summation, keep it variable, and try to optimize over it. Since we might play the action fewer than $m$ times overall, this requires an inequality.

$P_i(T) \leq m + \sum_{t=K}^T \chi(I_t = i \textup{ and } P_i(t-1) \geq m)$

These indicator functions should be read as sentences: we're just saying that we're picking action $i$ in round $t$ and we've already played $i$ at least $m$ times.

Now we're going to focus on the inside of the summation, and come up with an event that happens at least as frequently as this one to get an upper bound. Specifically, saying that we've picked action $i$ in round $t$ means that the upper bound for action $i$ exceeds the upper bound for every other action. In particular, this means its upper bound exceeds the upper bound of the best action (and $i$ might coincide with the best action, but that's fine). In notation this event is

$\displaystyle \overline{x}_i + a(P_i(t), t-1) \geq \overline{x}^* + a(P^*(T), t-1)$

Denote the upper bound $\overline{x}_i + a(i,t)$ for action $i$ in round $t$ by $U_i(t)$. Since this event must occur every time we pick action $i$ (though not necessarily vice versa), we have

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi(U_i(t-1) \geq U^*(t-1) \textup{ and } P_i(t-1) \geq m)$

We'll do this process again but with a slightly more complicated event.
If the upper bound of action $i$ exceeds that of the optimal machine, it is also the case that the maximum upper bound for action $i$ we've seen after the first $m$ trials exceeds the minimum upper bound we've seen on the optimal machine (ever). But on round $t$ we don't know how many times we've played the optimal machine, nor do we even know how many times we've played machine $i$ (except that it's more than $m$). So we try all possibilities and look at minima and maxima. This is a pretty crude approximation, but it will allow us to write things in a nicer form.

Denote by $\overline{x}_{i,s}$ the random variable for the empirical mean after playing action $i$ a total of $s$ times, and $\overline{x}^*_s$ the corresponding quantity for the optimal machine. Realizing everything in notation, the above argument proves that

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \chi \left ( \max_{m \leq s < t} \overline{x}_{i,s} + a(s, t-1) \geq \min_{0 < s' < t} \overline{x}^*_{s'} + a(s', t-1) \right )$

Indeed, at each $t$ for which the max is greater than the min, there will be at least one pair $s,s'$ for which the values of the quantities inside the max/min will satisfy the inequality. And so, even worse, we can just count the number of pairs $s, s'$ for which it happens. That is, we can expand the event above into the double sum which is at least as large:

$\displaystyle P_i(T) \leq m + \sum_{t=K}^T \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t-1) \geq \overline{x}^*_{s'} + a(s', t-1) \right )$

We can make one other odd inequality by increasing the sum to go from $t=1$ to $\infty$. This will become clear later, but it means we can replace $t-1$ with $t$ and thus have

$\displaystyle P_i(T) \leq m + \sum_{t=1}^\infty \sum_{s = m}^{t-1} \sum_{s' = 1}^{t-1} \chi \left ( \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \right )$

Now that we've slogged through this mess of inequalities, we can actually get to the heart of the argument. Suppose that this event actually happens, that $\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)$. Then what can we say? Well, consider the following three events:

(1) $\displaystyle \overline{x}^*_{s'} \leq \mu^* - a(s', t)$

(2) $\displaystyle \overline{x}_{i,s} \geq \mu_i + a(s, t)$

(3) $\displaystyle \mu^* < \mu_i + 2a(s, t)$

In words, (1) is the event that the empirical mean of the optimal action is less than the lower confidence bound. By our Chernoff bound argument earlier, this happens with probability $t^{-4}$. Likewise, (2) is the event that the empirical mean payoff of action $i$ is larger than the upper confidence bound, which also occurs with probability $t^{-4}$. We will see momentarily that (3) is impossible for a well-chosen $m$ (which is why we left it variable), but in any case the claim is that one of these three events must occur. For if they are all false, we have

$\displaystyle \begin{matrix} \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) & > & \mu^* - a(s',t) + a(s',t) = \mu^* \\ \textup{assumed} & (1) \textup{ is false} & \\ \end{matrix}$

and

$\begin{matrix} \mu_i + 2a(s,t) & > & \overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t) \\ & (2) \textup{ is false} & \textup{assumed} \\ \end{matrix}$

But putting these two inequalities together gives us precisely that (3) is true:

$\mu^* < \mu_i + 2a(s,t)$

This proves the claim.
By the union bound, the probability that at least one of these events happens is $2t^{-4}$ plus whatever the probability of (3) being true is. But as we said, we'll pick $m$ to make (3) always false. Indeed $m$ depends on which action $i$ is being played, and if $s \geq m > 8 \log T / \Delta_i^2$ then $2a(s,t) \leq \Delta_i$, and by the definition of $\Delta_i$ we have

$\mu^* - \mu_i - 2a(s,t) \geq \mu^* - \mu_i - \Delta_i = 0.$

Now we can finally piece everything together. The expected value of an event is just its probability of occurring, and so

$\displaystyle \begin{aligned} \mathbb{E}(P_i(T)) & \leq m + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t \textup{P}(\overline{x}_{i,s} + a(s, t) \geq \overline{x}^*_{s'} + a(s', t)) \\ & \leq \left \lceil \frac{8 \log T}{\Delta_i^2} \right \rceil + \sum_{t=1}^\infty \sum_{s=m}^t \sum_{s' = 1}^t 2t^{-4} \\ & \leq \frac{8 \log T}{\Delta_i^2} + 1 + \sum_{t=1}^\infty \sum_{s=1}^t \sum_{s' = 1}^t 2t^{-4} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + 2 \sum_{t=1}^\infty t^{-2} \\ & = \frac{8 \log T}{\Delta_i^2} + 1 + \frac{\pi^2}{3} \\ \end{aligned}$

The second line is the Chernoff bound we argued above, the third and fourth lines are relatively obvious algebraic manipulations, and the last equality uses the classic solution to Basel's problem. Plugging this upper bound into the regret formula we gave in the first paragraph of the proof establishes the bound and proves the theorem. $\square$

## Implementation and an Experiment

The algorithm is about as simple to write in code as it is in pseudocode. The confidence bound is trivial to implement (though note we index from zero):

```python
import math

def upperBound(step, numPlays):
    return math.sqrt(2 * math.log(step + 1) / numPlays)
```

And the full algorithm is quite short as well. We define a function ucb1, which accepts as input the number of actions and a function reward which accepts as input the index of the action and the time step, and draws from the appropriate reward distribution. Then implementing ucb1 is simply a matter of keeping track of empirical averages and an argmax. We implement the function as a Python generator, so one can observe the steps of the algorithm and keep track of the confidence bounds and the cumulative regret.

```python
def ucb1(numActions, reward):
    payoffSums = [0] * numActions
    numPlays = [1] * numActions
    ucbs = [0] * numActions

    # initialize empirical sums by playing each action once
    for t in range(numActions):
        payoffSums[t] = reward(t, t)
        yield t, payoffSums[t], ucbs

    t = numActions

    while True:
        # compute the upper confidence bound for each action and pick the max
        ucbs = [payoffSums[i] / numPlays[i] + upperBound(t, numPlays[i])
                for i in range(numActions)]
        action = max(range(numActions), key=lambda i: ucbs[i])
        theReward = reward(action, t)
        numPlays[action] += 1
        payoffSums[action] += theReward
        yield action, theReward, ucbs

        t = t + 1
```

The heart of the algorithm is the second part, where we compute the upper confidence bounds and pick the action maximizing its bound.

We tested this algorithm on synthetic data. There were ten actions and a million rounds, and the reward distributions for each action were uniform from $[0,1]$, biased by $1/k$ for some $5 \leq k \leq 15$. The regret and theoretical regret bound are given in the graph below.

The regret of ucb1 run on a simple example. The blue curve is the cumulative regret of the algorithm after a given number of steps. The green curve is the theoretical upper bound on the regret. Note that both curves are logarithmic, and that the actual regret is quite a lot smaller than the theoretical regret.
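Here is a small usage sketch of the generator above (the Bernoulli reward setup is my own, simpler than the biased-uniform experiment described in the post; it assumes the upperBound and ucb1 definitions from the preceding blocks):

```python
import random

means = [0.2, 0.5, 0.8]  # hypothetical Bernoulli arms

def reward(action, t):
    return 1.0 if random.random() < means[action] else 0.0

gen = ucb1(len(means), reward)
chosen = [next(gen)[0] for _ in range(10000)]

# Pseudo-regret: mu* times T, minus the sum of the means of the chosen arms.
pseudo_regret = max(means) * len(chosen) - sum(means[a] for a in chosen)
print("pseudo-regret after 10000 rounds:", pseudo_regret)
```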
The code used to produce the example and image is available on this blog's Github page.

## Next Time

One interesting assumption that UCB1 makes in order to do its magic is that the payoffs are stochastic and independent across rounds. Next time we'll look at an algorithm that assumes the payoffs are instead adversarial, as we described earlier. Surprisingly, in the adversarial case we can do about as well as the stochastic case. Then, we'll experiment with the two algorithms on a real-world application. Until then!

# Probabilistic Bounds — A Primer

Probabilistic arguments are a key tool for the analysis of algorithms in machine learning theory and probability theory. They also assume a prominent role in the analysis of randomized and streaming algorithms, where one imposes a restriction on the amount of storage space an algorithm is allowed to use for its computations (usually sublinear in the size of the input).

While a whole host of probabilistic arguments are used, one theorem in particular (or family of theorems) is ubiquitous: the Chernoff bound. In its simplest form, the Chernoff bound gives an exponential bound on the deviation of sums of random variables from their expected value.

This is perhaps most important to algorithm analysis in the following mindset. Say we have a program whose output is a random variable $X$. Moreover suppose that the expected value of $X$ is the correct output of the algorithm. Then we can run the algorithm multiple times and take a median (or some sort of average) across all runs. The probability that the algorithm gives a wildly incorrect answer is the probability that more than half of the runs give values which are wildly far from their expected value. Chernoff's bound ensures this will happen with small probability.

So this post is dedicated to presenting the main versions of the Chernoff bound that are used in learning theory and randomized algorithms. Unfortunately the proof of the Chernoff bound in its full glory is beyond the scope of this blog. However, we will give short proofs of weaker, simpler bounds as a straightforward application of this blog's previous work laying down the theory.

If the reader has not yet intuited it, this post will rely heavily on the mathematical formalisms of probability theory. We will assume our reader is familiar with the material from our first probability theory primer, and it certainly wouldn't hurt to have read our conditional probability theory primer, though we won't use conditional probability directly. We will refrain from using measure-theoretic probability theory entirely (some day my colleagues in analysis will like me, but not today).

## Two Easy Bounds of Markov and Chebyshev

The first bound we'll investigate is almost trivial in nature, but comes in handy. Suppose we have a random variable $X$ which is non-negative (as a function). Markov's inequality is the statement that, for any constant $a > 0$,

$\displaystyle \textup{P}(X \geq a) \leq \frac{\textup{E}(X)}{a}$

In words, the probability that $X$ grows larger than some fixed constant is bounded by a quantity that is inversely proportional to the constant.

The proof is quite simple. Let $\chi_a$ be the indicator random variable for the event that $X \geq a$ ($\chi_a = 1$ when $X \geq a$ and zero otherwise). As with all indicator random variables, the expected value of $\chi_a$ is the probability that the event happens (if this is mysterious, use the definition of expected value).
So $\textup{E}(\chi_a) = \textup{P}(X \geq a)$, and linearity of expectation allows us to include a factor of $a$:

$\textup{E}(a \chi_a) = a \textup{P}(X \geq a)$

The rest of the proof is simply the observation that $\textup{E}(a \chi_a) \leq \textup{E}(X)$. Indeed, as random variables we have the inequality $a \chi_a \leq X$. Whenever $X < a$, the former is equal to zero while the latter is nonnegative. And whenever $X \geq a$, the former is precisely $a$ while the latter is by assumption at least $a$. It follows that $\textup{E}(a \chi_a) \leq \textup{E}(X)$.

This last point is a simple property of expectation we omitted from our first primer. It usually goes by monotonicity of expectation, and we prove it here. First, if $X \geq 0$ then $\textup{E}(X) \geq 0$ (this is trivial). Second, if $0 \leq X \leq Y$, then define a new random variable $Z = Y-X$. Since $Z \geq 0$ and using linearity of expectation, it must be that $\textup{E}(Z) = \textup{E}(Y) - \textup{E}(X) \geq 0$. Hence $\textup{E}(X) \leq \textup{E}(Y)$. Note that we do require that $X$ has a finite expected value for this argument to work, but if this is not the case then Markov’s inequality is nonsensical anyway.

Markov’s inequality by itself is not particularly impressive or useful. For example, if $X$ is the number of heads in a hundred coin flips, Markov’s inequality tells us that the probability of getting at least 99 heads is at most 50/99, which is about 1/2. Shocking. We know that the true probability is much closer to $2^{-100}$, so Markov’s inequality is a bust.

However, it does give us a more useful bound as a corollary. This bound is known as Chebyshev’s inequality, and its use is sometimes referred to as the second moment method because it gives a bound based on the variance of a random variable (instead of the expected value, the “first moment”).

The statement is as follows.

Chebyshev’s Inequality: Let $X$ be a random variable with finite expected value and positive variance. Then we can bound the probability that $X$ deviates from its expected value by a quantity that is proportional to the variance of $X$. In particular, for any $\lambda > 0$,

$\displaystyle \textup{P}(|X - \textup{E}(X)| \geq \lambda) \leq \frac{\textup{Var}(X)}{\lambda^2}$

And without any additional assumptions on $X$, this bound is sharp.

Proof. The proof is a simple application of Markov’s inequality. Let $Y = (X - \textup{E}(X))^2$, so that $\textup{E}(Y) = \textup{Var}(X)$. Then by Markov’s inequality

$\textup{P}(Y \geq \lambda^2) \leq \frac{\textup{E}(Y)}{\lambda^2}$

Since $Y$ is nonnegative, $|X - \textup{E}(X)| = \sqrt{Y}$, and $\textup{P}(Y \geq \lambda^2) = \textup{P}(|X - \textup{E}(X)| \geq \lambda)$. The theorem is proved. $\square$

Chebyshev’s inequality shows up in so many different places (and usually in rather dry, technical bits) that it’s difficult to give a good example application. Here is one that shows up somewhat often. Say $X$ is a nonnegative integer-valued random variable, and we want to argue about when $X = 0$ versus when $X > 0$, given that we know $\textup{E}(X)$. No matter how large $\textup{E}(X)$ is, it can still be possible that $\textup{P}(X = 0)$ is arbitrarily close to 1. As a colorful example, let $X$ be the number of alien lifeforms discovered in the next ten years. We might argue that $\textup{E}(X)$ can be arbitrarily large: if some unexpected scientific and technological breakthroughs occur tomorrow, we could discover an unbounded number of lifeforms.
On the other hand, we are very likely not to discover any, and probability theory allows for such a random variable to exist.

If we know everything about $\textup{Var}(X)$, however, we can get more informed bounds.

Theorem: If $\textup{E}(X) \neq 0$, then $\displaystyle \textup{P}(X = 0) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$.

Proof. Simply choose $\lambda = \textup{E}(X)$ and apply Chebyshev’s inequality.

$\displaystyle \textup{P}(X = 0) \leq \textup{P}(|X - \textup{E}(X)| \geq \textup{E}(X)) \leq \frac{\textup{Var}(X)}{\textup{E}(X)^2}$

The first inequality follows from the fact that the event $\left \{ X = 0 \right \}$ is contained in the event $\left \{ |X - \textup{E}(X)| \geq \textup{E}(X) \right \}$: indeed, if $X = 0$ then $|X - \textup{E}(X)| = \textup{E}(X)$. $\square$

This theorem says more. If we know that $\textup{Var}(X)$ is significantly smaller than $\textup{E}(X)^2$, then $X > 0$ is more certain to occur. More precisely, and more computationally minded, suppose we have a sequence of random variables $X_n$ so that $\textup{E}(X_n) \to \infty$ as $n \to \infty$. Then the theorem says that if $\textup{Var}(X_n) = o(\textup{E}(X_n)^2)$, then $\textup{P}(X_n > 0) \to 1$. Remembering one of our very early primers on asymptotic notation, $f = o(g)$ means that $f$ grows asymptotically slower than $g$, and in terms of the fraction $\textup{Var}(X) / \textup{E}(X)^2$, this means that the denominator dominates the fraction so that the whole thing tends to zero.

## The Chernoff Bound

The Chernoff bound takes advantage of an additional hypothesis: our random variable is a sum of independent coin flips. We can use this to get exponential bounds on the deviation of the sum. More rigorously,

Theorem: Let $X_1 , \dots, X_n$ be independent random $\left \{ 0,1 \right \}$-valued variables, and let $X = \sum X_i$. Suppose that $\mu = \textup{E}(X)$. Then for any $\lambda > 0$, the probability that $X$ exceeds its mean by a multiplicative factor of $1 + \lambda$ is bounded from above:

$\displaystyle \textup{P}(X > (1+\lambda)\mu) \leq \frac{e^{\lambda \mu}}{(1+\lambda)^{(1+\lambda)\mu}}$

The proof is beyond the scope of this post, but we point the interested reader to these lecture notes.

We can apply the Chernoff bound in an easy example. Say all $X_i$ are fair coin flips, and we’re interested in the probability of getting more than 3/4 of the coins heads. Here $\mu = n/2$ and $\lambda = 1/2$, so the probability is bounded from above by

$\displaystyle \left ( \frac{e}{(3/2)^3} \right )^{n/4} \approx (0.95)^n$

So as the number of coin flips grows, the probability of seeing such an occurrence diminishes exponentially quickly to zero. This is important because if we want to test to see if, say, the coins are biased toward flipping heads, we can simply run an experiment with $n$ sufficiently large. If we observe that more than 3/4 of the flips give heads, then we proclaim the coins are biased and we can be assured we are correct with high probability. Of course, after seeing 3/4 or more heads we’d be really confident that the coin is biased.

A more realistic approach is to define some $\varepsilon$ that is small enough so as to say, “if some event occurs whose probability is smaller than $\varepsilon$, then I call shenanigans.” Then decide how many coins and what bound one would need to make the bad event have probability approximately $\varepsilon$.
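As a rough illustration of that last step (this snippet is my own, not from the original post; the threshold and search strategy are arbitrary assumptions), one can take the Chernoff bound computed above for seeing more than $3/4$ heads and increase $n$ until the bound drops below a chosen $\varepsilon$:

```python
import math

def chernoffBound(n):
    # bound on P(more than 3n/4 heads in n fair flips): mu = n/2, lambda = 1/2
    return (math.e / (3 / 2.0) ** 3) ** (n / 4.0)

def flipsNeeded(epsilon):
    # smallest n making the bad event less likely than epsilon
    n = 1
    while chernoffBound(n) > epsilon:
        n += 1
    return n

print(flipsNeeded(0.01))
```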
Finding this balance is one of the more difficult aspects of probabilistic algorithms, and as we’ll see later all of these quantities are left as variables and the correct values are discovered in the course of the proof.

## Chernoff-Hoeffding Inequality

The Hoeffding inequality (named after the statistician Wassily Hoeffding) is a variant of the Chernoff bound, but often the bounds are collectively known as Chernoff-Hoeffding inequalities. The form that Hoeffding is known for can be thought of as a simplification and a slight generalization of Chernoff’s bound above.

Theorem: Let $X_1, \dots, X_n$ be independent random variables whose values are within some range $[a,b]$. Call $\mu_i = \textup{E}(X_i)$, $X = \sum_i X_i$, and $\mu = \textup{E}(X) = \sum_i \mu_i$. Then for all $t > 0$,

$\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2t^2 / (n(b-a)^2)}$

For example, if we are interested in the sum of $n$ rolls of a fair six-sided die, then the probability that we deviate from $(7/2)n$ by more than $5 \sqrt{n \log n}$ is bounded by $2e^{-2 \log n} = 2/n^2$. Supposing we want to know how many rolls we need so that the probability of deviating that much is at most 0.01, we just do the algebra:

$2n^{-2} < 0.01$
$n^2 > 200$
$n > \sqrt{200} \approx 14$

So with 15 rolls we can be (at least 99%) confident that the sum of the rolls will lie between 20 and 85. It’s not the best possible bound we could come up with, because we’re completely ignoring the known structure on dice rolls (that they follow a uniform distribution!). The benefit is that it’s a quick and easy bound that works for any kind of random variable with that expected value.

Another version of this theorem concerns the average of the $X_i$, and is only a minor modification of the above.

Theorem: If $X_1, \dots, X_n$ are as above, and $X = \frac{1}{n} \sum_i X_i$, with $\mu = \frac{1}{n}(\sum_i \mu_i)$, then for all $t > 0$, we get the following bound

$\displaystyle \textup{P}(|X - \mu| > t) \leq 2e^{-2nt^2/(b-a)^2}$

The only difference here is the extra factor of $n$ in the exponent. So the deviation is exponential both in the amount of deviation ($t^2$), and in the number of trials.

This theorem comes up very often in learning theory, in particular to prove that Boosting works. Mathematicians will joke about how all theorems in learning theory are just applications of Chernoff-Hoeffding-type bounds. We’ll of course be seeing it again as we investigate boosting and the PAC-learning model in future posts, so we’ll see the theorems applied to their fullest extent then.

Until next time!

# Probability Theory — A Primer

It is a wonder that we have yet to officially write about probability theory on this blog. Probability theory underlies a huge portion of artificial intelligence, machine learning, and statistics, and a number of our future posts will rely on the ideas and terminology we lay out in this post. Our first formal theory of machine learning will be deeply ingrained in probability theory, we will derive and analyze probabilistic learning algorithms, and our entire treatment of mathematical finance will be framed in terms of random variables.

And so it’s about time we got to the bottom of probability theory. In this post, we will begin with a naive version of probability theory. That is, everything will be finite and framed in terms of naive set theory without the aid of measure theory. This has the benefit of making the analysis and definitions simple.
The downside is that we are restricted in what kinds of probability we are allowed to speak of. For instance, we aren’t allowed to work with probabilities defined on all real numbers. But for the majority of our purposes on this blog, this treatment will be enough. Indeed, most programming applications restrict infinite problems to finite subproblems or approximations (although in their analysis we often appeal to the infinite).

We should make a quick disclaimer before we get into the thick of things: this primer is not meant to connect probability theory to the real world. Indeed, to do so would be decidedly unmathematical. We are primarily concerned with the mathematical formalisms involved in the theory of probability, and we will leave the philosophical concerns and applications to future posts. The point of this primer is simply to lay down the terminology and basic results needed to discuss such topics to begin with.

So let us begin with probability spaces and random variables.

## Finite Probability Spaces

We begin by defining probability as a set with an associated function. The intuitive idea is that the set consists of the outcomes of some experiment, and the function gives the probability of each event happening. For example, a set $\left \{ 0,1 \right \}$ might represent heads and tails outcomes of a coin flip, while the function assigns a probability of one half (or some other numbers) to the outcomes. As usual, this is just intuition and not rigorous mathematics. And so the following definition will lay out the necessary condition for this probability to make sense.

Definition: A finite set $\Omega$ equipped with a function $f: \Omega \to [0,1]$ is a probability space if the function $f$ satisfies the property

$\displaystyle \sum_{\omega \in \Omega} f(\omega) = 1$

That is, the sum of all the values of $f$ must be 1. Sometimes the set $\Omega$ is called the sample space, and the act of choosing an element of $\Omega$ according to the probabilities given by $f$ is called drawing an example. The function $f$ is usually called the probability mass function.

Despite being part of our first definition, the probability mass function is relatively useless except to build what follows, because we don’t really care about the probability of a single outcome as much as we do the probability of an event.

Definition: An event $E \subset \Omega$ is a subset of a sample space.

For instance, suppose our probability space is $\Omega = \left \{ 1, 2, 3, 4, 5, 6 \right \}$ and $f$ is defined by setting $f(x) = 1/6$ for all $x \in \Omega$ (here the “experiment” is rolling a single die). Then we are likely interested in more exquisite kinds of outcomes; instead of asking the probability that the outcome is 4, we might ask what is the probability that the outcome is even? This event would be the subset $\left \{ 2, 4, 6 \right \}$, and if any of these are the outcome of the experiment, the event is said to occur. In this case we would expect the probability of the die roll being even to be 1/2 (but we have not yet formalized why this is the case).

As a quick exercise, the reader should formulate a two-dice experiment in terms of sets. What would the probability space consist of as a set? What would the probability mass function look like? What are some interesting events one might consider (if playing a game of craps)?

Of course, we want to extend the probability mass function $f$ (which is only defined on single outcomes) to all possible events of our probability space.
That is, we want to define a probability measure $\textup{P}: 2^\Omega \to \mathbb{R}$, where $2^\Omega$ denotes the set of all subsets of $\Omega$. The example of a die roll guides our intuition: the probability of any event should be the sum of the probabilities of the outcomes contained in it. i.e. we define

$\displaystyle \textup{P}(E) = \sum_{e \in E} f(e)$

where by convention the empty sum has value zero. Note that the function $\textup{P}$ is often denoted $\textup{Pr}$.

So for example, the coin flip experiment can’t have zero probability for both of the two outcomes 0 and 1; the probabilities of all outcomes must sum to 1. More coherently: $\textup{P}(\Omega) = \sum_{\omega \in \Omega} f(\omega) = 1$ by the defining property of a probability space. And so if there are only two outcomes of the experiment, then they must have probabilities $p$ and $1-p$ for some $p$. Such a probability space is often called a Bernoulli trial.

Now that the function $\textup{P}$ is defined on all events, we can simplify our notation considerably. Because the probability mass function $f$ uniquely determines $\textup{P}$ and because $\textup{P}$ contains all information about $f$ in it ($\textup{P}(\left \{ \omega \right \}) = f(\omega)$), we may speak of $\textup{P}$ as the probability measure of $\Omega$, and leave $f$ out of the picture. Of course, when we define a probability measure, we will allow ourselves to just define the probability mass function, and the definition of $\textup{P}$ is understood as above.

There are some other quick properties we can state or prove about probability measures: $\textup{P}(\left \{ \right \}) = 0$ by convention, if $E, F$ are disjoint then $\textup{P}(E \cup F) = \textup{P}(E) + \textup{P}(F)$, and if $E \subset F \subset \Omega$ then $\textup{P}(E) \leq \textup{P}(F)$. The proofs of these facts are trivial, but a good exercise for the uncomfortable reader to work out.

## Random Variables

The next definition is crucial to the entire theory. In general, we want to investigate many different kinds of random quantities on the same probability space. For instance, suppose we have the experiment of rolling two dice. The probability space would be

$\displaystyle \Omega = \left \{ (1,1), (1,2), (1,3), \dots, (6,4), (6,5), (6,6) \right \}$

where the probability measure is defined uniformly by setting all single outcomes to have probability 1/36. Now this probability space is very general, but rarely are we interested only in its events. If this probability space were interpreted as part of a game of craps, we would likely be more interested in the sum of the two dice than the actual numbers on the dice. In fact, we are really more interested in the payoff determined by our roll.

Sums of numbers on dice are certainly predictable, but a payoff can conceivably be any function of the outcomes. In particular, it should be a function of $\Omega$ because all of the randomness inherent in the game comes from the generation of an output in $\Omega$ (otherwise we would define a different probability space to begin with). And of course, we can compare these two different quantities (the amount of money and the sum of the two dice) within the framework of the same probability space.

This “quantity” we speak of goes by the name of a random variable.

Definition: A random variable $X$ is a real-valued function $X: \Omega \to \mathbb{R}$ on the sample space.

So for example the random variable for the sum of the two dice would be $X(a,b) = a+b$.
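To make these definitions concrete, here is a small sketch (in Python, my own addition and not part of the original primer) of the two-dice probability space, its uniform mass function, the induced probability measure, and the sum-of-dice random variable:

```python
from fractions import Fraction

omega = [(a, b) for a in range(1, 7) for b in range(1, 7)]  # the sample space
f = {w: Fraction(1, 36) for w in omega}                     # probability mass function

def P(event):
    # the probability measure: sum the masses of the outcomes in the event
    return sum(f[w] for w in event)

def X(w):
    # the random variable: sum of the two dice
    return w[0] + w[1]

print(P([w for w in omega if X(w) == 7]))  # the event {X = 7}; prints 1/6
```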
We will slowly phase out the function notation as we go, reverting to it when we need to avoid ambiguity.

We can further define the set of all random variables $\textup{RV}(\Omega)$. It is important to note that this forms a vector space. For those readers unfamiliar with linear algebra, the salient fact is that we can add two random variables together and multiply them by arbitrary constants, and the result is another random variable. That is, if $X, Y$ are two random variables, so is $aX + bY$ for real numbers $a, b$. This function operates linearly, in the sense that its value is $(aX + bY)(\omega) = aX(\omega) + bY(\omega)$. We will use this property quite heavily, because in most applications the analysis of a random variable begins by decomposing it into a combination of simpler random variables.

Of course, there are plenty of other things one can do to functions. For example, $XY$ is the product of two random variables (defined by $XY(\omega) = X(\omega)Y(\omega)$) and one can imagine such awkward constructions as $X/Y$ or $X^Y$. We will see in a bit why these last two aren’t often used (it is difficult to say anything about them).

The simplest possible kind of random variable is one which identifies events as either occurring or not. That is, for an event $E$, we can define a random variable which is 0 or 1 depending on whether the input is a member of $E$. That is,

Definition: An indicator random variable $1_E$ is defined by setting $1_E(\omega) = 1$ when $\omega \in E$ and 0 otherwise. A common abuse of notation for singleton sets is to denote $1_{\left \{ \omega \right \} }$ by $1_\omega$.

This is what we intuitively do when we compute probabilities: to get a ten when rolling two dice, one can either get a six, a five, or a four on the first die, and then the second die must make up the rest of the sum. The most important thing about breaking up random variables into simpler random variables will make itself clear when we see that expected value is a linear functional. That is, probabilistic computations of linear combinations of random variables can be computed by finding the values of the simpler pieces. We can’t yet make that rigorous though, because we don’t yet know what it means to speak of the probability of a random variable’s outcome.

Definition: Denote by $\left \{ X = k \right \}$ the set of outcomes $\omega \in \Omega$ for which $X(\omega) = k$. With the function notation, $\left \{ X = k \right \} = X^{-1}(k)$.

This definition extends to constructing ranges of outcomes of a random variable. i.e., we can define $\left \{ X < 5 \right \}$ or $\left \{ X \textup{ is even} \right \}$ just as we would naively construct sets. It works in general for any subset $S \subset \mathbb{R}$. The notation is $\left \{ X \in S \right \} = X^{-1}(S)$, and we will also call these sets events. The notation becomes useful and elegant when we combine it with the probability measure $\textup{P}$. That is, we want to write things like $\textup{P}(X \textup{ is even})$ and read it in our head “the probability that $X$ is even”. This is made rigorous by simply setting

$\displaystyle \textup{P}(X \in S) = \sum_{\omega \in X^{-1}(S)} \textup{P}(\omega)$

In words, it is just the sum of the probabilities that individual outcomes will have a value under $X$ that lands in $S$. We will also use for $\textup{P}(\left \{ X \in S \right \} \cap \left \{ Y \in T \right \})$ the shorthand notation $\textup{P}(X \in S, Y \in T)$ or $\textup{P}(X \in S \textup{ and } Y \in T)$.
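Continuing the two-dice sketch from above (again my own illustration), the event notation translates directly into code:

```python
print(P([w for w in omega if X(w) < 5]))       # P(X < 5) = 1/6
print(P([w for w in omega if X(w) % 2 == 0]))  # P(X is even) = 1/2
```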
Oftentimes $\left \{ X \in S \right \}$ will be smaller than $\Omega$ itself, even if $S$ is large. For instance, let the probability space be the set of possible lottery numbers for one week’s draw of the lottery (with uniform probabilities), and let $X$ be the profit function. Then $\textup{P}(X > 0)$ is very small indeed.

We should also note that because our probability spaces are finite, the image of the random variable $\textup{im}(X)$ is a finite subset of real numbers. In other words, the set of all events of the form $\left \{ X = x_i \right \}$ where $x_i \in \textup{im}(X)$ forms a partition of $\Omega$. As such, we get the following immediate identity:

$\displaystyle 1 = \sum_{x_i \in \textup{im} (X)} P(X = x_i)$

The set of such events is called the probability distribution of the random variable $X$.

The final definition we will give in this section is that of independence. There are two separate but nearly identical notions of independence here. The first is that of two events. We say that two events $E,F \subset \Omega$ are independent if the probability of both $E, F$ occurring is the product of the probabilities of each event occurring. That is, $\textup{P}(E \cap F) = \textup{P}(E)\textup{P}(F)$. There are multiple ways to realize this formally, but without the aid of conditional probability (more on that next time) this is the easiest way. One should note that this is distinct from $E,F$ being disjoint as sets, because there may be a zero-probability outcome in both sets.

The second notion of independence is that of random variables. The definition is the same idea, but implemented using events of random variables instead of regular events. In particular, $X,Y$ are independent random variables if

$\displaystyle \textup{P}(X = x, Y = y) = \textup{P}(X=x)\textup{P}(Y=y)$

for all $x,y \in \mathbb{R}$.

## Expectation

We now turn to notions of expected value and variation, which form the cornerstone of the applications of probability theory.

Definition: Let $X$ be a random variable on a finite probability space $\Omega$. The expected value of $X$, denoted $\textup{E}(X)$, is the quantity

$\displaystyle \textup{E}(X) = \sum_{\omega \in \Omega} X(\omega) \textup{P}(\omega)$

Note that if we label the image of $X$ by $x_1, \dots, x_n$ then this is equivalent to

$\displaystyle \textup{E}(X) = \sum_{i=1}^n x_i \textup{P}(X = x_i)$

The most important fact about expectation is that it is a linear functional on random variables. That is,

Theorem: If $X,Y$ are random variables on a finite probability space and $a,b \in \mathbb{R}$, then

$\displaystyle \textup{E}(aX + bY) = a\textup{E}(X) + b\textup{E}(Y)$

Proof. The only real step in the proof is to note that for each possible pair of values $x, y$ in the images of $X,Y$ resp., the events $E_{x,y} = \left \{ X = x, Y=y \right \}$ form a partition of the sample space $\Omega$. That is, because $aX + bY$ has a constant value on $E_{x,y}$, the second definition of expected value gives

$\displaystyle \textup{E}(aX + bY) = \sum_{x \in \textup{im} (X)} \sum_{y \in \textup{im} (Y)} (ax + by) \textup{P}(X = x, Y = y)$

and a little bit of algebraic elbow grease reduces this expression to $a\textup{E}(X) + b\textup{E}(Y)$. We leave this as an exercise to the reader, with the additional note that the sum $\sum_{y \in \textup{im}(Y)} \textup{P}(X = x, Y = y)$ is identical to $\textup{P}(X = x)$.
$\square$

If we additionally know that $X,Y$ are independent random variables, then the same technique used above allows one to say something about the expectation of the product $\textup{E}(XY)$ (again by definition, $XY(\omega) = X(\omega)Y(\omega)$). In this case $\textup{E}(XY) = \textup{E}(X)\textup{E}(Y)$. We leave the proof as an exercise to the reader.

Now intuitively the expected value of a random variable is the “center” of the values assumed by the random variable. It is important, however, to note that the expected value need not be a value assumed by the random variable itself; that is, it might not be true that $\textup{E}(X) \in \textup{im}(X)$. For instance, in an experiment where we pick a number uniformly at random between 1 and 4 (the random variable is the identity function), the expected value would be:

$\displaystyle 1 \cdot \frac{1}{4} + 2 \cdot \frac{1}{4} + 3 \cdot \frac{1}{4} + 4 \cdot \frac{1}{4} = \frac{5}{2}$

But the random variable never achieves this value. Nevertheless, it would not make intuitive sense to call either 2 or 3 the “center” of the random variable (for both 2 and 3, there are two outcomes on one side and one on the other).

Let’s see a nice application of the linearity of expectation to a purely mathematical problem. The power of this example lies in the method: after a shrewd decomposition of a random variable $X$ into simpler (usually indicator) random variables, the computation of $\textup{E}(X)$ becomes trivial.

A tournament $T$ is a directed graph in which every pair of distinct vertices has exactly one edge between them (going one direction or the other). We can ask whether such a graph has a Hamiltonian path, that is, a path through the graph which visits each vertex exactly once. The datum of such a path is a list of numbers $(v_1, \dots, v_n)$, where we visit vertex $v_i$ at stage $i$ of the traversal. The condition for this to be a valid Hamiltonian path is that $(v_i, v_{i+1})$ is an edge in $T$ for all $i$.

Now if we construct a tournament on $n$ vertices by choosing the direction of each edge independently with equal probability 1/2, then we have a very nice probability space and we can ask what is the expected number of Hamiltonian paths. That is, $X$ is the random variable giving the number of Hamiltonian paths in such a randomly generated tournament, and we are interested in $\textup{E}(X)$.

To compute this, simply note that we can break $X = \sum_p X_p$, where $p$ ranges over all possible lists of the vertices. Then $\textup{E}(X) = \sum_p \textup{E}(X_p)$, and it suffices to compute the number of possible paths and the expected value of any given path. It isn’t hard to see the number of paths is $n!$ as this is the number of possible lists of $n$ items. Because each edge direction is chosen with probability 1/2 and they are all chosen independently of one another, the probability that any given path forms a Hamiltonian path depends on each of its $n-1$ edges being chosen with the correct orientation. That’s just

$\textup{P}(\textup{first edge and second edge and } \dots \textup{ and last edge})$

which by independence is

$\displaystyle \prod_{i = 1}^{n-1} \textup{P}(i^\textup{th} \textup{ edge is chosen}) = \frac{1}{2^{n-1}}$

That is, the expected number of Hamiltonian paths is $n!2^{-(n-1)}$.

## Variance and Covariance

Just as expectation is a measure of center, variance is a measure of spread. That is, variance measures how thinly distributed the values of a random variable $X$ are throughout the real line.
Definition: The variance of a random variable $X$ is the quantity $\textup{E}((X - \textup{E}(X))^2)$.

That is, $\textup{E}(X)$ is a number, and so $X - \textup{E}(X)$ is the random variable defined by $(X - \textup{E}(X))(\omega) = X(\omega) - \textup{E}(X)$. It is the expectation of the square of the deviation of $X$ from its expected value.

One often denotes the variance by $\textup{Var}(X)$ or $\sigma^2$. The square is for silly reasons: the standard deviation, denoted $\sigma$ and equivalent to $\sqrt{\textup{Var}(X)}$, has the same “units” as the outcomes of the experiment and so it’s preferred as the “base” frame of reference by some. We won’t bother with such physical nonsense here, but we will have to deal with the notation.

The variance operator has a few properties that make it quite different from expectation, but which nonetheless fall out directly from the definition. We encourage the reader to prove a few:

• $\textup{Var}(X) = \textup{E}(X^2) - \textup{E}(X)^2$.
• $\textup{Var}(aX) = a^2\textup{Var}(X)$.
• When $X,Y$ are independent then variance is additive: $\textup{Var}(X+Y) = \textup{Var}(X) + \textup{Var}(Y)$.
• Variance is invariant under constant additives: $\textup{Var}(X+c) = \textup{Var}(X)$.

In addition, the quantity $\textup{Var}(aX + bY)$ is more complicated than one might first expect. In fact, to fully understand this quantity one must create a notion of correlation between two random variables. The formal name for this is covariance.

Definition: Let $X,Y$ be random variables. The covariance of $X$ and $Y$, denoted $\textup{Cov}(X,Y)$, is the quantity $\textup{E}((X - \textup{E}(X))(Y - \textup{E}(Y)))$.

Note the similarities between the variance definition and this one: if $X=Y$ then the two quantities coincide. That is, $\textup{Cov}(X,X) = \textup{Var}(X)$.

There is a nice interpretation of covariance that should accompany every treatment of probability: it measures the extent to which one random variable “follows” another. To make this rigorous, we need to derive a special property of the covariance.

Theorem: Let $X,Y$ be random variables with variances $\sigma_X^2, \sigma_Y^2$. Then their covariance is at most the product of the standard deviations in magnitude:

$|\textup{Cov}(X,Y)| \leq \sigma_X \sigma_Y$

Proof. Take any two non-constant random variables $X$ and $Y$ (we will replace these later with $X - \textup{E}(X), Y - \textup{E}(Y)$). Construct a new random variable $(tX + Y)^2$ where $t$ is a real variable, and inspect its expected value. Because the function is squared, its values are all nonnegative, and hence its expected value is nonnegative: $\textup{E}((tX + Y)^2) \geq 0$. Expanding this and using linearity gives

$\displaystyle f(t) = t^2 \textup{E}(X^2) + 2t \textup{E}(XY) + \textup{E}(Y^2) \geq 0$

This is a quadratic function of a single variable $t$ which is nonnegative. From elementary algebra this means the discriminant is at most zero. i.e.

$\displaystyle 4 \textup{E}(XY)^2 - 4 \textup{E}(X^2) \textup{E}(Y^2) \leq 0$

and so dividing by 4 and replacing $X,Y$ with $X - \textup{E}(X), Y - \textup{E}(Y)$, resp., gives

$\textup{Cov}(X,Y)^2 \leq \sigma_X^2 \sigma_Y^2$

and the result follows. $\square$

Note that equality holds in the discriminant formula precisely when $Y = -tX$ (the discriminant is zero), and after the replacement this translates to $Y - \textup{E}(Y) = -t(X - \textup{E}(X))$ for some fixed value of $t$. In other words, for some real numbers $a,b$ we have $Y = aX + b$.
This has important consequences even in English: the covariance is maximized when $Y$ is a linear function of $X$, and otherwise is bounded from above and below. By dividing both sides of the inequality by $\sigma_X \sigma_Y$ we get the following definition:

Definition: The Pearson correlation coefficient of two random variables $X,Y$ is defined by

$\displaystyle r= \frac{\textup{Cov}(X,Y)}{\sigma_X \sigma_Y}$

If $r$ is close to 1, we call $X$ and $Y$ positively correlated. If $r$ is close to -1 we call them negatively correlated, and if $r$ is close to zero we call them uncorrelated.

The idea is that if two random variables are positively correlated, then a higher value for one variable (with respect to its expected value) corresponds to a higher value for the other. Likewise, negatively correlated variables have an inverse correspondence: a higher value for one correlates to a lower value for the other. The picture is as follows: the horizontal axis plots a sample of values of the random variable $X$ and the vertical plots a sample of $Y$; the linear correspondence is clear.

Of course, all of this must be taken with a grain of salt: this correlation coefficient is only appropriate for analyzing random variables which have a linear correlation. There are plenty of interesting examples of random variables with non-linear correlation, and the Pearson correlation coefficient fails miserably at detecting them. There are many more examples of Pearson correlation coefficients applied to samples drawn from the sample spaces of various (continuous, but the issue still applies to the finite case) probability distributions; see, for example, the collection of such figures on Wikipedia.

Though we will not discuss it here, there is still a nice precedent for using the Pearson correlation coefficient. In one sense, the closer that the correlation coefficient is to 1, the better a linear predictor will perform in “guessing” values of $Y$ given values of $X$ (same goes for -1, but the predictor has negative slope).

But this strays a bit far from our original point: we still want to find a formula for $\textup{Var}(aX + bY)$. Expanding the definition, it is not hard to see that this amounts to the following proposition:

Proposition: The variance operator satisfies

$\displaystyle \textup{Var}(aX+bY) = a^2\textup{Var}(X) + b^2\textup{Var}(Y) + 2ab \textup{Cov}(X,Y)$

And using induction we get a general formula:

$\displaystyle \textup{Var} \left ( \sum_{i=1}^n a_i X_i \right ) = \sum_{i=1}^n \sum_{j = 1}^n a_i a_j \textup{Cov}(X_i,X_j)$

Note that in the general sum, we get a bunch of terms $\textup{Cov}(X_i,X_i) = \textup{Var}(X_i)$.

Another way to look at the linear relationships between a collection of random variables is via a covariance matrix.

Definition: The covariance matrix of a collection of random variables $X_1, \dots, X_n$ is the matrix whose $(i,j)$ entry is $\textup{Cov}(X_i,X_j)$.

As we have already seen on this blog in our post on eigenfaces, one can manipulate this matrix in interesting ways. In particular (and we may be busting out an unhealthy dose of new terminology here), the covariance matrix is symmetric and positive semidefinite, and so by the spectral theorem it has an orthonormal basis of eigenvectors, which allows us to diagonalize it. In more direct words: we can form a new collection of random variables $Y_j$ (which are linear combinations of the original variables $X_i$) such that the covariance of each distinct pair $Y_j, Y_k$ is zero.
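As a quick numerical illustration of that construction (using numpy; the random data below is purely an assumption of this sketch, not anything from the post), one can diagonalize a sample covariance matrix and form the decorrelated variables $Y_j$:

```python
import numpy

# rows are samples of (X_1, X_2, X_3); the data here is arbitrary
samples = numpy.random.rand(1000, 3)
samples[:, 2] = 2 * samples[:, 0] + samples[:, 1]   # introduce a linear dependence

covariance = numpy.cov(samples, rowvar=False)       # the 3x3 covariance matrix
eigenvalues, eigenvectors = numpy.linalg.eigh(covariance)

# the new variables Y_j are linear combinations of the X_i; their sample
# covariance matrix is diagonal up to rounding error
decorrelated = numpy.dot(samples, eigenvectors)
print(numpy.cov(decorrelated, rowvar=False).round(10))
```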
In one sense, this is the “best perspective” with which to analyze the random variables. We gave a general algorithm to do this in our program gallery, and the technique is called principal component analysis. ## Next Up So far in this primer we’ve seen a good chunk of the kinds of theorems one can prove in probability theory. Fortunately, much of what we’ve said for finite probability spaces holds for infinite (discrete) probability spaces and has natural analogues for continuous probability spaces. Next time, we’ll investigate how things change for discrete probability spaces, and should we need it, we’ll follow that up with a primer on continuous probability. This will get our toes wet with some basic measure theory, but as every mathematician knows: analysis builds character. Until then!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 610, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467002749443054, "perplexity": 221.03450610881424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115891802.74/warc/CC-MAIN-20150124161131-00244-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.math.kit.edu/iana3/lehre/mi1w-forschsemdauer/en
Workgroup Functional Analysis

# Research Seminar (Continuing Class)

Talks in the winter term 2019/2020

Unless otherwise stated the talks take place in room 2.066 in the "Kollegiengebäude Mathematik" (20.30) from 14:00 to 15:30.

15.10.2019 Nick Lindemulder (Karlsruhe) An Intersection Representation for a Class of Anisotropic Vector-valued Function Spaces

In this talk we discuss an intersection representation for a class of anisotropic vector-valued function spaces in an axiomatic setting à la Hedberg & Netrusov, which includes weighted anisotropic mixed-norm Besov and Triebel-Lizorkin spaces. In the special case of the classical Triebel-Lizorkin spaces, the intersection representation gives an improvement of the well-known Fubini property. The motivation comes from the weighted maximal regularity problem for parabolic boundary value problems, where weighted anisotropic mixed-norm Triebel-Lizorkin spaces occur as spaces of boundary data.

22.10.2019 Bas Nieraeth (Karlsruhe) Weighted theory and extrapolation for multilinear operators

19.11.2019 Andreas Geyer-Schulz (Karlsruhe) On global well-posedness of the Maxwell–Schrödinger system

02.12.2019 Wenqi Zhang (Canberra) Localisation of eigenfunctions via an effective potential for Schrödinger operators

For Schrödinger operators with potentials (possibly random) we introduce the Landscape function as an effective potential. Due to the nicer properties of this Landscape function we are able to recover localisation estimates for continuous potentials, and specialise these estimates to obtain an approximate diagonalisation. We give a brief sketch of these arguments. This talk takes place in seminar room 2.066 at 10.30 am.

03.12.2019 Yonas Mesfun (Darmstadt) On the stability of a chemotaxis system with logistic growth

In this talk we are concerned with the asymptotic behavior of the solution to a certain Neumann initial-boundary value problem which is a variant of the so-called Keller-Segel model describing chemotaxis. Chemotaxis is the directed movement of cells in response to an external chemical signal and plays an important role in various biochemical processes such as e.g. cancer growth. We show a result due to Winkler which says that under specific conditions, there exists a unique classical solution to this Neumann problem which converges to the equilibrium solution. For this purpose we study the Neumann Laplacian, in particular some decay properties of its semigroup and embedding properties of the domain of its fractional powers, and then use those properties to prove Winkler's result.

10.12.2019 Emiel Lorist (Delft) Singular stochastic integral operators: The vector-valued and the mixed-norm approach

Singular integral operators play a prominent role in harmonic analysis. By replacing integration with respect to some measure by integration with respect to Brownian motion, one obtains stochastic singular integral operators, which arise naturally in questions related to stochastic PDEs. In this talk I will introduce Calderón-Zygmund theory for these singular stochastic integral operators from both a vector-valued and a mixed-norm viewpoint.
14.01.2020 Alex Amenta (Bonn) Vector-valued time-frequency analysis and the bilinear Hilbert transform

The bilinear Hilbert transform is a bilinear singular integral operator (or Fourier multiplier) which is invariant not only under translations and dilations, but also under modulations. This additional symmetry turns out to make proving $L^p$-bounds especially difficult. I will give an overview of how time-frequency analysis is used in proving these $L^p$-bounds, with focus on the recently understood setting of functions valued in UMD Banach spaces.

21.01.2020 Willem van Zuijlen (Berlin) Spectral asymptotics of the Anderson Hamiltonian

In this talk I will discuss the asymptotics of the eigenvalues of the Anderson Hamiltonian, which is the operator given by $-\Delta + \xi$. We consider $\xi$ to be (a realisation of) white noise and consider the operator on a box with Dirichlet boundary conditions. I will discuss the result in joint work with Khalil Chouk: almost surely the eigenvalues divided by the logarithm of the size of the box converge to the same limit. I will also discuss the application of this to obtain the large-time asymptotics of the total mass of the parabolic Anderson model, which is the SPDE given by $\partial_t u = \Delta u + \xi u$.

18.02.2020 TULKKA in Konstanz

The workshop is taking place in room A 704 (University of Konstanz).

11:45-12:15 Adrian Spener (Ulm) Curvature-dimension inequalities for nonlocal operators
12:30-13:45 Lunch break
13:45-14:30 Sophia Rau (Konstanz) Stability results for thermoelastic plate-membrane systems
14:45-15:30 Andreas Geyer-Schulz (Karlsruhe) On global well-posedness of the Maxwell-Schrödinger system
15:30-16:15 Coffee break
16:15-17:00 Delio Mugnolo (Hagen) Linear hyperbolic systems

You find previous talks in the archive of the research seminar.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960179090499878, "perplexity": 1115.3831083778623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370504930.16/warc/CC-MAIN-20200331212647-20200401002647-00439.warc.gz"}
http://www.zora.uzh.ch/16578/
Zurich Open Repository and Archive

Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-16578

# Mandelbaum, R; Seljak, U; Hirata, C M (2008). A halo mass—concentration relation from weak lensing. Journal of Cosmology and Astroparticle Physics, 2008(8):006.

## Abstract

We perform a statistical weak lensing analysis of dark matter profiles around tracers of halo mass from galactic- to cluster-size halos. In this analysis we use 170 640 isolated $\sim L_*$ galaxies split into ellipticals and spirals, 38 236 groups traced by isolated spectroscopic Luminous Red Galaxies (LRGs) and 13 823 MaxBCG clusters from the Sloan Digital Sky Survey (SDSS) covering a wide range of richness. Together these three samples allow a determination of the density profiles of dark matter halos over three orders of magnitude in mass, from $10^{12} M_\odot$ to $10^{15} M_\odot$. The resulting lensing signal is consistent with an NFW or Einasto profile on scales outside the central region. In the inner regions, uncertainty in modeling of the proper identification of the halo center and inclusion of baryonic effects from the central galaxy make the comparison less reliable. We find that the NFW concentration parameter $c_{200b}$ decreases with halo mass, from around 10 for galactic halos to 4 for cluster halos. Assuming its dependence on halo mass in the form $c_{200b} = c_0 (M/10^{14} h^{-1} M_\odot)^{-\beta}$, we find $c_0 = 4.6 \pm 0.7$ (at $z = 0.22$) and $\beta = 0.13 \pm 0.07$, with very similar results for the Einasto profile. The slope ($\beta$) is in agreement with theoretical predictions, while the amplitude is about two standard deviations below the predictions for this mass and redshift, but we note that the published values in the literature differ at a level of 10-20% and that for a proper comparison our analysis should be repeated in simulations. We compare our results to other recent determinations, some of which find significantly higher concentrations. We discuss the implications of our results for the baryonic effects on the shear power spectrum: since these are expected to increase the halo concentration, the fact that we see no evidence of high concentrations on scales above 20% of the virial radius suggests that baryonic effects are limited to small scales, and are not a significant source of uncertainty for the current weak lensing measurements of the dark matter power spectrum.

## Additional indexing

Item Type: Journal Article, refereed, original work
Faculty: 07 Faculty of Science > Institute for Computational Science
Dewey Decimal Classification: 530 Physics
Language: English
Date: August 2008
Publisher: Institute of Physics Publishing
ISSN: 1475-7516
DOI: 10.1088/1475-7516/2008/08/006
Related URL: http://arxiv.org/abs/0805.2552v2
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8734390139579773, "perplexity": 3235.286072456934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300222.27/warc/CC-MAIN-20150323172140-00191-ip-10-168-14-71.ec2.internal.warc.gz"}
http://www.thermix.net/2012/08/the-separating-and-throttling.html
## THE SEPARATING AND THROTTLING CALORIMETER BASIC INFORMATION AND TUTORIALS

The quality (i.e. the dryness) of wet steam can be found by using a separating and throttling calorimeter. Figure 2.6.4 shows the general arrangement of the device.

The separator, as its name suggests, physically separates the water droplets from the steam sample. This alone would give us a good idea of the dryness of the steam, even though the separation is not complete, because, as we have seen, the dryness fraction is the ratio of the mass of pure steam to the total mass of the steam. Having separated out the water droplets, we can find their mass, which gives us the mass of water in the sample, m1. The 'pure steam' is then condensed to allow its mass to be found, m2. Then,

Dryness fraction from separator, x1 = m2/(m1 + m2)

A more accurate answer is obtained by connecting the outlet from the separator directly to a throttle and finding the dryness fraction of the partly dried steam. In the throttling calorimeter, the steam issuing through the orifice must be superheated; otherwise we would have two unknown dryness fractions, neither of which we could find. Throttling improves the quality of the steam, which is already high after passing through the separator, so producing superheated steam at this point is not difficult. To find the enthalpy of the superheated steam, we need its temperature and its pressure.

For the throttling calorimeter, with x2 denoting the dryness of the steam entering the throttle,

Enthalpy before throttling = enthalpy after throttling
hf + x2.hfg = enthalpy from superheat tables

If we call the dryness from the separator x1, and the dryness from the throttling calorimeter x2, the dryness fraction of the steam sample is x, given by,

x = x1 × x2
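As an illustrative sketch of the full calculation (the masses and enthalpy values below are invented placeholders standing in for real steam-table entries, not quoted data), the procedure might look like this in Python:

```python
def dryness_fraction(m1, m2, hf, hfg, h_superheat):
    """Combined dryness fraction from a separating and throttling calorimeter.

    m1 -- mass of water collected in the separator
    m2 -- mass of condensed steam that passed through the throttle
    hf, hfg -- saturation enthalpies (kJ/kg) at the pressure before the throttle
    h_superheat -- enthalpy (kJ/kg) of the superheated steam after the throttle,
                   read from superheat tables at the measured pressure/temperature
    """
    x1 = m2 / (m1 + m2)            # dryness from the separator
    x2 = (h_superheat - hf) / hfg  # dryness from the throttling calorimeter
    return x1 * x2

# hypothetical readings: 0.2 kg of separated water, 1.8 kg of condensed steam
print(dryness_fraction(0.2, 1.8, 763.0, 2015.0, 2750.0))  # about 0.89
```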
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697285056114197, "perplexity": 1697.449740452245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645396463.95/warc/CC-MAIN-20150827031636-00236-ip-10-171-96-226.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/124153/can-random-elements-be-defined-in-terms-of-a-measure-algebra
# Can random elements be defined in terms of a measure algebra?

Let $(\Omega,\Sigma,\mu)$ be a probability space, $(X,\mathcal{X})$ be a measurable space and $R(\Omega,X)$ be the set of equivalence classes of measurable functions from $\Omega$ to $X$ under almost everywhere equality; they are random elements. Let $(\mathcal{A},\mu_A)$ be the measure algebra of $(\Omega,\Sigma,\mu)$, that is, $\mathcal{A}$ identifies elements of $\Sigma$ if their symmetric difference has outer measure zero, and $\mu_A$ is defined in the natural way in terms of its representatives.

I would like to know if one can identify the elements of $R(\Omega,X)$ with something that can be canonically constructed in terms of $(\mathcal{A},\mu_A)$ and $(X,\mathcal{X})$.

The motivation behind the question is the following: I work with certain random elements that are defined on a countably generated probability space. By Maharam's theorem, this amounts to the measure algebra being isomorphic to one that consists of a convex combination of Lebesgue measure on $[0,1]$ and a discrete probability space. I would like to know whether it makes sense for me to say that I'm essentially working with such a probability space.

- Perhaps this calls for consideration of the Image measure catastrophe ... google.com/search?as_q=&as_epq=image+measure+catastrophe –  Gerald Edgar Mar 10 '13 at 16:27

@Gerald: could you please summarize the image measure catastrophe in a comment? –  Tom LaGatta Mar 11 '13 at 0:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9567443132400513, "perplexity": 120.65648471332533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443062.21/warc/CC-MAIN-20141017005723-00018-ip-10-16-133-185.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2032311/finding-equivalence-class-with-a-binary-set
# Finding equivalence class with a binary set

I'm new to discrete math so there might be problems with this solution. The prompt is to find at least one equivalence class if it is an equivalence relation.

$$X = R^2, (x_1, y_1) \sim (x_2, y_2) \iff y_1 = y_2$$

1. Reflexivity $$(x_1, y_1) \sim (x_1, y_1) \iff y_1 = y_1$$ Since $y_1 = y_1$, it is reflexive.
2. Symmetry $$(x_1, y_1) \sim (x_2, y_2) \iff y_1 = y_2$$ Assuming $y_1 = y_2$ (eqn 1), $$(x_2, y_2) \sim (x_1, y_1) \iff y_2 = y_1$$ From eqn 1, $y_2 = y_1$, so it is symmetric.
3. Transitivity $$(x_1, y_1) \sim (x_2, y_2) \iff y_1 = y_2$$ $$(x_2, y_2) \sim (x_3, y_3) \iff y_2 = y_3$$ Assuming $y_1 = y_2$ and $y_2 = y_3$, we get $y_1 = y_3$. $$(x_1, y_1) \sim (x_3, y_3) \iff y_1 = y_3,$$ since $y_1 = y_3$, so it is transitive.

Is this the correct way to solve the problem? Also how to find the equivalence classes for the same?

You did it correctly. Well, to find an equivalence class, you often need a representative of that class. Take $(x, y)$, and we want $[(x, y)]$ to be the equivalence class which contains the element $(x, y)$. In other words, $(x, y)$ represents that class. That is:

$$[(x, y)] = \{(a, b)\in\mathbb{R}^2 : (x, y)\sim(a, b)\}$$

We have an equivalence class such that $(x, y)\in [(x, y)]$. Now, it's all a matter of inserting the definition of $\sim$ in the set.

$$[(x, y)] = \{(a, b)\in\mathbb{R}^2 : y=b\}$$

Can you come up with a concrete example? What would $[(1, 2)]$ be, for instance?

Yes, your proof is correct. To construct an equivalence class, pick an element of the set and find (describe) all elements that are equivalent to it. More formally: if $R$ is an equivalence relation on a set $X$, then for an $x\in X$, its equivalence class is $[x]=\{y\in X\,\colon\,yRx\}$.

In this example: pick an arbitrary $(x,y)\in\mathbb{R}^2$. By the given definition, any other $(x_1,y_1)\in\mathbb{R}^2$ is equivalent to it, $(x_1,y_1)\sim(x,y)$, iff $y_1=y$, so $(x_1,y_1)=(x_1,y)$. Note that there are no constraints on $x_1$, so it can be any real. Thus the equivalence class of $(x,y)$ is $\{(x_1,y)\in\mathbb{R}^2\,\colon\,x_1\in\mathbb{R}\}$ (where $y$ is fixed). Geometrically, they are horizontal lines $y=\text{const}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634618163108826, "perplexity": 146.142423479725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00029.warc.gz"}
https://www.physicsforums.com/threads/second-order-coherence-g2-t-of-collision-broadened-light-and-leds.286102/
# Second-order coherence – g2(t) of collision-broadened light and LEDs

1. Jan 20, 2009

### Xela

Hi. I have 2 questions about second-order coherence – g2(t):

1) For collision-broadened light, according to the literature, g2(t) = 1 + |g1(t)|^2, where g1(t) is the 1st order coherence. Therefore for very low collision rate g1(t) = 1 and thus g2(t) = 2. However I would expect collision-broadened light to reach a constant-phase limit of CW for low collision rate and thus g2(t) = 1. What did I miss here?

2) In a light emitting diode – LED there should be many different scattering mechanisms for the radiating carriers and thus it should behave as collision-broadened light with g2(0) = 2 and super-Poissonian photon statistics. On the other hand the literature about LEDs talks about Poissonian and even sub-Poissonian photon statistics dependent only on the electron current, ignoring the scattering. Does anyone have a simple explanation of what an LED's g2(t) should look like and why?

2. Jan 20, 2009

### Cthugha

Re: Second-order coherence – g2(t) of collision-broadened light and LEDs

The above equation is an approximation for chaotic light, which is used to get g1 if g2 is known. It is not generally valid. If you go to the limit of low collision rates you also approach the monochromatic limit. In that case the light is not truly chaotic anymore and the above equation cannot be applied.

Intensity correlation measurements are mostly measurements of noise. The properties of the emitted light from LEDs depend strongly on the kinds of noise present and how strong they are. There is not only noise due to scattering, but mostly due to the emission process itself and due to pump noise. If you have access to a good library, check the book "Nonclassical Light from Semiconductor Lasers and LEDs" by Kim, Somani and Yamamoto for a more detailed study.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8052409887313843, "perplexity": 1341.9733530943383}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720154.20/warc/CC-MAIN-20161020183840-00038-ip-10-171-6-4.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/186421/how-to-divide-using-addition-or-subtraction
# How to divide using addition or subtraction

We can multiply $a$ and $n$ by adding $a$ a total of $n$ times. $$n \times a = a + a + a + \cdots +a$$ Can we define division similarly using only addition or subtraction?

- Do you admit logarithms? If so, we can very easily define division using subtraction: $$a/b = \exp\left(\log \frac{a}{b}\right) = \exp(\log a - \log b).$$ – Emily Aug 24 '12 at 18:14

Don't exponents and logarithms come after we define multiplication and division? – Monkey D. Luffy Aug 24 '12 at 18:19

Nope. We can define exponents and logarithms without requiring multiplication or division -- in a manner of speaking. We define $b^x$ as the supremum of a very specific subset of real numbers. This definition does not require that we define $b^x = b\cdot b \cdot b \cdots b$ some $x$ times. In fact, this definition works for any possible value of $x$. Logarithms can be defined in a similar manner. To justify this definition, we require that multiplication is an assumed property of the field of real numbers. We don't need to define exponentiation as repeated multiplication. – Emily Aug 24 '12 at 18:43

Reminded me of this good old question. – user2468 Aug 25 '12 at 2:41

@Arkamis, what is this set whose supremum is $b^x$? Sounds very interesting. – goblin Dec 25 '14 at 7:06

To divide $60$ by $12$ using subtraction: \begin{align*} &60-12=48\qquad\text{count }1\\ &48-12=36\qquad\text{count }2\\ &36-12=24\qquad\text{count }3\\ &24-12=12\qquad\text{count }4\\ &12-12=0\qquad\;\text{ count }5\;. \end{align*} Thus, $60\div 12=5$. You can even handle remainders: \begin{align*} &64-12=52\qquad\text{count }1\\ &52-12=40\qquad\text{count }2\\ &40-12=28\qquad\text{count }3\\ &28-12=16\qquad\text{count }4\\ &16-12=4\qquad\;\text{ count }5\;. \end{align*} $4<12$, so $64\div 12$ is $5$ with a remainder of $4$.

- I remember re-implementing the built-in integer division and modulo functions of (insert language here) being a common programming exercise... :D – J. M. Aug 24 '12 at 23:45

@J.M. I had it on an exam; the language was assembly for Zilog Z80 processors! – user2468 Aug 25 '12 at 3:37

Note this is horribly inefficient, computationally speaking. – Thomas Aug 25 '12 at 4:44

If $n$ is divisible by $b$ ($\frac{n}{b}$ is a whole number), then keep doing $n - b - b - b - b - b - \cdots - b$ until the value of that is $0$. The number of times you subtract $b$ is the answer. For example, $\frac{20}{4} \rightarrow 20 - 4 - 4 - 4 - 4 - 4$. We subtracted '$4$' five times, so the answer is $5$.

You can also use additions. One should use results from intermediate calculations to speed up. Let us divide 63 by 12. $$\begin{split} 12+12=24,&\qquad\textrm{count }1+1=2\\ 24+24=48,&\qquad\textrm{count }2+2=4\\ 48+24=72,&\qquad\textrm{count }4+2=6\textrm{ (exceeded 63)}\\ 48+12=60,&\qquad\textrm{count }4+1=5\textrm{ (so we try adding less)}\\ 63-60=3,&\qquad\textrm{(calculation of the remainder)}\\ \end{split}$$

You can define division as repeated subtraction: $${72\over 9}=72-9-9-9-9-9-9-9-9$$ Subtracting by $9$ eight times is the same as subtracting by $72$, since $9\cdot8=72$. So, the answer is $8$. Also, this is why ${n\over a}=n-a-a-a-a\cdots$ for whatever whole number $a$ is, other than zero. If you have a remainder, then you just do this: $${13\over 2}=13-2-2-2-2-2-2-1$$ As you just saw, subtracting by $2$ six times is the same as subtracting by $12$, since $2\cdot6=12$, but there's a remainder of $1$ being subtracted, so it's the same as subtracting by $13$, since $2\cdot6+1=13$; so the answer is $6$ R $1$, or $6.5$.
- $\frac{n}a\neq n-a-a-a\ldots$. Certainly it is not true that $$\frac{72}{9}=72-9-9-9-9-9-9-9-9$$ – Milo Brandt Jan 24 '15 at 22:27 Okay, I edited it so it would make more sense. – ReliableMathBoy Jan 24 '15 at 22:33 Here's a diagram to visualize how to divide with subtraction. In this diagram we divide $20$ by $3$ and leave a remainder of $2$.
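As a footnote to the thread (my addition, not from the original page), the repeated-subtraction method above translates directly into a short Python routine; the function name is illustrative:

```python
def divide(n, b):
    """Quotient and remainder of n / b via repeated subtraction (n >= 0, b > 0)."""
    if n < 0 or b <= 0:
        raise ValueError("this sketch assumes n >= 0 and b > 0")
    count = 0
    while n >= b:
        n -= b       # take away one more copy of the divisor...
        count += 1   # ...and count it
    return count, n  # n is now the remainder

print(divide(60, 12))  # (5, 0)  -> 60 / 12 = 5
print(divide(64, 12))  # (5, 4)  -> 64 / 12 = 5 remainder 4
print(divide(20, 3))   # (6, 2)  -> matches the diagram's example
```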
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046963453292847, "perplexity": 805.0058530896424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.75/warc/CC-MAIN-20160524002114-00182-ip-10-185-217-139.ec2.internal.warc.gz"}
http://www.earthtech.org/experiments/fusor/bigsys3.html
EarthTech's Farnsworth Fusor - Version 2 (17MAR99)

This photo shows our implementation of the Farnsworth Fusor. It is housed in a 6" Conflat cross. The outer grid is about 3.9" OD, made of 0.062" diameter 308 stainless wire (welding rod), spot welded. A portion of the outer grid can be seen in the photo as three dark lines crossing the viewport. The inner grid is 1.25" OD, made of 0.020" diameter Ta wire, spot welded. This grid is glowing visibly in the photo. The inner grid is supported by a 0.094" diameter stainless rod which is part of the 30 kV feedthrough visible on top of the chamber. The stainless rod is insulated with a 99.8% alumina ceramic tube that tends to glow alarmingly red during operation!

By carefully adjusting the D2 pressure to around 10-15 millitorr (measured with a capacitance manometer) and applying about 20,000 volts across the grids, a thin glow discharge can be established. The current is typically around 6 milliamps. Under these conditions, the system produces D+D fusion in the center due to head-on collisions between ~20 keV deuterons. Evidence of this fusion reaction is the emission of ~10⁴ neutrons/sec.

Neutrons are detected with a Bicron BC-720 fast neutron scintillator, which consists of ZnS(Ag) phosphor embedded in a clear hydrogenous plastic. It is 2" in diameter. We have coupled it to a 2" PM tube (bi-alkali photocathode) and enclosed it in the cardboard tube visible on the left. The detector electronics are visible in the lower left corner. That box is the all-in-one (HV supply, amplifier, single-channel analyzer, and scaler) Texas Nuclear 9200 system that was originally sold as a portable XRF system in the 1960s and 1970s. The total neutron emission rate was calculated from the observed count rate of up to 1.5 counts/sec (background is 0.2 counts/sec), taking into account the 1/36 geometry factor and 0.6% detector efficiency for 2.5 MeV neutrons.

In front of the chamber is an 8 mm thick sheet of yellowish leaded glass that does an excellent job of stopping the torrent of soft x-rays that pours out of the glass viewport during operation. At the very top of the photo you can see the Plastic Capacitors HV power supply and a neon sign transformer. The secondary of this transformer is wired in series with the chamber to serve as a filter. Its inductance measures 190 henries!

Neutron Detector Details

One of the advantages of the BC-720 is gamma rejection. The following waveforms, collected from the amplifier output of the TN9200, show how this is accomplished:

The left waveform shows a typical gamma pulse. This waveform was generated by exposing the detector to a large piece of ²³²Th, which emits multiple gamma energies up to 2.6 MeV. The waveform on the right was collected during operation of the fusor. A single-channel analyzer is used to discriminate against the small pulses, making the detection system essentially insensitive to gammas.

We also looked at some background pulses. As our system is presently adjusted, background pulses occur approximately once every 5 seconds. Some of these pulses look just like the neutron pulse shown on the right above. Occasionally, there are some odd ones:

The background pulse on the left is very narrow compared to either the gamma or neutron pulses. The background pulse on the right is huge! Presumably these are different forms of cosmic radiation.
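A quick back-of-envelope check of that emission-rate estimate (my addition; the numbers are the ones quoted above):

```python
# Net count rate scaled up by solid angle and detector efficiency.
net_counts = 1.5 - 0.2    # counts/sec above background
geometry = 1 / 36         # solid-angle fraction subtended by the detector
efficiency = 0.006        # 0.6% for 2.5 MeV neutrons
rate = net_counts / (geometry * efficiency)
print(f"total emission ~ {rate:.0f} n/s")   # ~7800, i.e. of order 10^4
```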
Power Balance

Under the optimal operating conditions mentioned above, our fusor consumes about 10² watts of power from the HV supply. With this input it stimulates enough D+D fusion to create about 10⁴ neutrons/sec. Each of the neutron-forming D+D reactions releases 3.27 MeV. Presumably there are also about an equal number of undetectable D+D reactions occurring that yield ³H, a proton, and 4.03 MeV. Thus the total energy yield per emitted neutron is about 7 MeV. At 10⁴/sec, that's a total fusion power output of 10⁻⁸ watts!!! That is 10⁻¹⁰ of the input power. Counting the heat power generated by this experiment, the observed ratio of Pout/Pin is therefore about 1.0000000001. Yes, EarthTech has finally observed the excess heat phenomenon! Now we just need to make it bigger... a lot bigger!

Thanks to Richard Hull for generous technical support of our efforts!

[email protected]
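The arithmetic in this section can be reproduced in a few lines of Python (my addition; the 120 W input uses the 20 kV and 6 mA quoted earlier, consistent with "about 10² watts"):

```python
MEV_TO_JOULE = 1.602e-13

neutron_rate = 1.0e4               # neutrons/sec, from the count-rate estimate
energy_per_neutron = 3.27 + 4.03   # MeV: both D+D branches, assumed equally likely
p_fusion = neutron_rate * energy_per_neutron * MEV_TO_JOULE
p_input = 20e3 * 6e-3              # 20 kV x 6 mA = 120 W

print(f"fusion power ~ {p_fusion:.1e} W")           # ~1.2e-08 W
print(f"Pout / Pin   ~ {p_fusion / p_input:.0e}")   # ~1e-10
```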
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8150230646133423, "perplexity": 2944.454718969751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410665301782.61/warc/CC-MAIN-20140914032821-00155-ip-10-234-18-248.ec2.internal.warc.gz"}
https://cozilikethinking.wordpress.com/2014/12/28/algebraic-geometry-2-philosophizing-categories-2/
### Algebraic Geometry 2: Philosophizing Categories

Faithful functor: Let $F:A\to B$ be a functor between two categories. If the map Mor$(A,A')\to$ Mor$(F(A),F(A'))$ is injective, then the functor is called faithful. For example, the functor from the category of sets in which only bijective mappings qualify as morphisms, to the category of sets in which all mappings are allowed as morphisms, carrying objects and morphisms to themselves, is faithful.

Full functor: If Mor$(A,A')\to$ Mor$(F(A),F(A'))$ is surjective, then the functor is called full. For example, the functor defined above is not full.

Natural transformation of covariant functors: Let $F,G$ be functors between categories $A$ and $B$. Let $M,M'\in A$ be objects, and let $f:M\to M'$ be a morphism between them. Then $m:F\to G$ is a natural transformation if the following diagram commutes; that is, if $m_{M'}\circ F(f) = G(f)\circ m_M$ for every such $f$.

Philosophical point: Why are commutative diagrams so omnipresent and important in Mathematics? It seems to be a fairly arbitrary condition to satisfy! Commuting diagrams essentially signify that “similar things” are happening at “different places”, and the “similar things” can easily be inter-converted. Too hand-wavy? Please bear with me.

Say $M,M',F(M),F(M'),G(M),G(M')$ are all $\Bbb{Z}$, and the components $m_M, m_{M'}$ are the identity. Also assume that $F(f)$ maps $k\to k+1$. Then if $G(f)$ also maps $k\to k+1$, then $m$ is a natural transformation. However, if $G(f)$ is of any other description, then $m$ is not a natural transformation. Forming other examples should convince you of the fact above.
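The commuting diagram referenced above appears to have been an image that did not survive extraction. A standard LaTeX (amscd) rendering of the naturality square, added here as a reconstruction, is:

```latex
% Naturality square for m : F => G at a morphism f : M -> M'.
% m is natural iff m_{M'} \circ F(f) = G(f) \circ m_M for every f.
\begin{CD}
F(M) @>{F(f)}>> F(M') \\
@V{m_M}VV @VV{m_{M'}}V \\
G(M) @>{G(f)}>> G(M')
\end{CD}
```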
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 21, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9973599910736084, "perplexity": 545.098095121252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948612570.86/warc/CC-MAIN-20171218083356-20171218105356-00715.warc.gz"}
https://lexique.netmath.ca/en/formula/
# Formula

## Formula

Name given to certain fundamental relationships between variable quantities and constants.

### Examples

• The formula to calculate the perimeter P of a square of side length c is: P = 4c.
• The formula to calculate the volume V of a ball of radius r is: V = $$\dfrac{4\pi r^3}{3}$$.
• Euler’s formula is a formula that relates the number of vertices S, the number of faces F and the number of edges A of a convex polyhedron: S + F = A + 2.
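A quick numerical check of the three example formulas (an illustrative addition, not part of the glossary entry), using a square of side 2, a ball of radius 3, and a cube as the convex polyhedron:

```python
import math

c = 2.0                        # side of the square
print(4 * c)                   # perimeter P = 8.0

r = 3.0                        # radius of the ball
print(4 * math.pi * r**3 / 3)  # volume V ~ 113.097

S, F, A = 8, 6, 12             # cube: vertices, faces, edges
print(S + F == A + 2)          # Euler's formula holds: True
```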
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503790140151978, "perplexity": 358.862218156791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00302.warc.gz"}
https://www.physicsforums.com/threads/universal-gate-question.798536/
# Universal gate question

1. Feb 18, 2015

### braceman

Hi guys, got a question that's got me stumped. Not looking for the answer as I'd prefer to work it out myself, just a nudge or a pointer in the right direction.

I'm being asked to prove whether an XOR gate can be classed as universal (like the NAND & NOR gates are), but I'm not sure how to go about it. I think there must be a simple way to do it, rather than drawing numerous combinations of XOR gates.

1. The problem statement, all variables and given/known data

Determine the elementary operations that can be derived from XOR and hence determine if it is a universal gate.

2. Relevant equations

3. The attempt at a solution

Obviously I've got the truth table for XOR. Am I supposed to be manipulating this, or taking a function, e.g. F = A·B + C·(NOT A), and then trying to manipulate it like we do when converting to NAND/NOR (changing gates and inverting terms etc.)?

Bit stuck, so any pointers would be grateful.

Last edited by a moderator: Feb 18, 2015

2. Feb 18, 2015

### lewando

Sorry, misread your post. Look into definitions or requirements of a universal gate.

Last edited: Feb 18, 2015

3. Feb 18, 2015

### phinds

To prove that gate type X is "universal", you just need to show that the Boolean functions AND, OR, and NOT can be implemented using only gate type X, without the need for any other type of gate. Can you do that with an XOR gate?

4. Feb 18, 2015

### donpacino

Or prove that you can make a single NOR or NAND gate.

5. Feb 18, 2015

### donpacino

Here is a hint: first see if you can make an inverter, then see if you can make any input a 'blocker.' An example of that is an AND gate: if any input is a zero, the output will be zero, independent of any other input.

6. Feb 19, 2015

### LCKurtz

Here's another hint: Try to prove you can't make an AND gate. (I don't think this is so easy to show if you haven't seen an argument before though.)

7. May 11, 2015

### braceman

Sorry I haven't posted back... forgot all about this post. I got it right in the end. The instructor said I could prove it however I wanted, so I just drew various combinations of 3-4 gates and their associated logic.

8. Jan 20, 2016

### bizuputyi

Is it correct that AND and OR cannot be derived from XOR at all?

9. Jan 20, 2016

### phinds

What do you think, and why?

10. Jan 20, 2016

### bizuputyi

What I meant is that an AND or OR gate cannot be derived by using only XOR gates. We can however make an inverter out of an XOR gate by connecting a constant high to one of its inputs, but I can't see any possible way to get either AND or OR from only XORs. To account for that I would say XOR gives a high output only if its two inputs differ.

11. Jan 20, 2016

### phinds

I agree.
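One way to make post #10's intuition precise, sketched in Python (my addition, not from the thread): any circuit built only from XOR gates and constant inputs computes an affine Boolean function — an XOR of a subset of inputs, possibly inverted — and AND is not affine. The brute-force check below enumerates all 2-input affine functions and confirms AND and OR are not among them:

```python
from itertools import product

# All 2-input affine Boolean functions: c0 XOR (a1 AND x1) XOR (a2 AND x2)
affine = set()
for c0, a1, a2 in product((0, 1), repeat=3):
    table = tuple(c0 ^ (a1 & x1) ^ (a2 & x2)
                  for x1, x2 in product((0, 1), repeat=2))
    affine.add(table)

AND = tuple(x1 & x2 for x1, x2 in product((0, 1), repeat=2))
OR  = tuple(x1 | x2 for x1, x2 in product((0, 1), repeat=2))
XOR = tuple(x1 ^ x2 for x1, x2 in product((0, 1), repeat=2))

print(AND in affine)  # False -> AND cannot be built from XORs alone
print(OR in affine)   # False -> neither can OR, so XOR is not universal
print(XOR in affine)  # True
```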
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649373650550842, "perplexity": 1086.4042801015996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00405.warc.gz"}
http://www.onemathematicalcat.org/Math/Precalculus_obj/nonlinearInequalities.htm
# SOLVING NONLINEAR INEQUALITIES IN ONE VARIABLE (INTRODUCTION)

• PRACTICE (online exercises and printable worksheets)

In Precalculus, it's essential that you can easily and efficiently solve sentences like ‘$\,\frac{3x}{2}-1 \ge \frac 15 - 7x\,$’ and ‘$\,x^2 \ge 3\,$’.

The first sentence is an example of a linear inequality in one variable; the prior web exercise covers this type of sentence. For linear inequalities, the variable appears in the simplest possible way: all you have are numbers, times $\,x\,$ to the first power (i.e., terms of the form $\,kx\,$, where $\,k\,$ is a real number).

The second sentence is an example of a nonlinear inequality in one variable, and is covered in this web exercise and the next two. In nonlinear sentences, the variable appears in a more complicated way: perhaps you have an $\,x^2\,$ (or higher power), or $\,|x|\,$, or $\,\sin x\,$. For solving nonlinear inequalities, more advanced tools are needed.

To get started, review the essential concepts in these two web exercises, being sure to click-click-click to practice the concepts from each:

## Two approaches for solving nonlinear inequalities in one variable

There are two basic methods for solving nonlinear inequalities in one variable. Both are called ‘test point methods’, because they involve identifying important intervals, and then ‘testing’ a number from each of these intervals.

Below, the sentence ‘$\,x^2 \ge 3\,$’ is solved using both methods, so you can get a sense of which appeals to you more. The solutions below are written extremely compactly; this is pretty much the bare minimum that a teacher would want to see. For all the underlying concepts and details, study the next two web exercises:

• the test point method for sentences like ‘$f(x) \gt 0$’ (the one-function method; the ‘compare with zero’ method)
• the test point method for sentences like ‘$f(x) \gt g(x)$’ (the two-function method; the ‘truth’ method)

By the way, both methods work great. If you like working with just ONE function, thinking about where it is positive, negative, and zero, then you might prefer the first method. If you're fine working with TWO functions, thinking about where the graph of one lies on, above, or below the graph of the other, then you might prefer the second method.

## EXAMPLE: solve ‘$\,x^2 \ge 3\,$’ using the one-function (‘compare with zero’) method

| YOU WRITE THIS DOWN | COMMENTS |
| --- | --- |
| $x^2 \ge 3$ | original sentence |
| $x^2 - 3\ge 0$ | rewrite the inequality with zero on the right-hand side; since the inequality symbol is $\,\ge\,$, we need to determine where the graph of $\,x^2 - 3\,$ lies on ($\,=\,$) or above ($\,\gt\,$) the $x$-axis |
| $f(x) := x^2 - 3$ | if desired, name the function on the left-hand side $\,f\,$, so it can be easily referred to in later steps; recall that ‘$:=$’ means ‘equals, by definition’ |
| $x^2 - 3 = 0$, $x^2 = 3$, $x = \pm\sqrt 3$ (no breaks in graph of $\,f\,$) | identify the candidates for sign changes for $\,f\,$: where $\,f(x)\,$ is zero, and breaks in the graph of $\,f\,$ |
| SIGN OF $\,f(x)\,$ number line | mark the candidates from the previous step on a number line labeled ‘SIGN OF $\,f(x)\,$’; mark zeroes with the tick mark ‘$\,z\,$’; test each subinterval to see if $\,f(x)\,$ is $\,+\,$ or $\,-\,$; since $\,\sqrt 3\approx 1.7\,$, test points $\,-2\,$ and $\,2\,$ are easy. If the graph of $\,f\,$ is easy to obtain, then draw it in; in this case, you don't even need the test points! Graph above the $x$-axis? Then $\,f(x)\,$ is positive. Graph below the $x$-axis? Then $\,f(x)\,$ is negative. In this example, both the test points and graph are shown. |
| solution set: $(-\infty,-\sqrt 3] \cup [\sqrt 3,\infty)$; sentence form: $x\le -\sqrt 3\ \text{ or }\ x\ge \sqrt 3$ | read off the solution set, using correct interval notation; or, give the sentence form of the solution |

When a graph is easy to obtain (as in this example), then you may not need to ‘officially’ use the test point method. Just graph $\,f(x) = x^2 - 3\,$; the solutions of ‘$\,x^2 - 3 \ge 0\,$’ are the values of $\,x\,$ for which the graph of $\,f\,$ lies on or above the $x$-axis.

## EXAMPLE: solve ‘$\,x^2 \ge 3\,$’ using the two-function (‘truth’) method

| YOU WRITE THIS DOWN | COMMENTS |
| --- | --- |
| $x^2 \ge 3$ | original sentence |
| $f(x) := x^2$, $g(x) := 3$ | if desired, define functions $\,f\,$ (the left-hand side) and $\,g\,$ (the right-hand side), so they can be easily referred to in later steps |
| $x^2 = 3$, $x = \pm\sqrt 3$ (no breaks in either graph) | find where $\,f(x)\,$ and $\,g(x)\,$ are equal (intersection points); find any breaks in the graphs of $\,f\,$ and $\,g\,$ |
| TRUTH OF ‘$\,x^2 \ge 3\,$’ number line | mark intersection points and breaks on a number line labeled ‘TRUTH OF “$\,x^2 \ge 3\,$”’; mark intersection points with the tick mark $\,i\,$; using easy test points from each subinterval, check if ‘$\,x^2 \ge 3\,$’ is TRUE or FALSE, and mark accordingly |
| solution set: $(-\infty,-\sqrt 3] \cup [\sqrt 3,\infty)$; sentence form: $x\le -\sqrt 3\ \text{ or }\ x \ge\sqrt 3$ | read off the solution set, using correct interval notation; or, give the sentence form of the solution |

When graphs are easy to obtain (as in this example), then you may not need to ‘officially’ use the test point method. Just graph $\,f(x) = x^2\,$ and $\,g(x) = 3\,$: then, the solutions of ‘$\,x^2 \ge 3\,$’ are the values of $\,x\,$ for which the graph of $\,f\,$ lies on or above the graph of $\,g\,$.

## Using WolframAlpha to solve nonlinear inequalities

Just for fun, jump up to wolframalpha.com and key in: x^2 >= 3

Voila!
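The one-function test-point procedure above is mechanical enough to script; here is a minimal Python sketch (my addition, not part of the lesson) for the worked example $x^2 \ge 3$:

```python
import math

# Test-point method for x**2 >= 3, one-function form: f(x) = x**2 - 3.
f = lambda x: x**2 - 3
roots = sorted([-math.sqrt(3), math.sqrt(3)])  # candidates for sign changes

# One easy test point inside each subinterval determined by the roots.
test_points = [roots[0] - 1, sum(roots) / 2, roots[1] + 1]  # ~-2.73, 0, ~2.73
for x in test_points:
    sign = "+" if f(x) > 0 else "-" if f(x) < 0 else "0"
    print(f"f({x:+.2f}) is {sign}")
# Signs come out + - +, so the solution set is (-inf, -sqrt(3)] U [sqrt(3), inf).
```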
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486063480377197, "perplexity": 781.07225832465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00308-ip-10-171-10-70.ec2.internal.warc.gz"}
https://acs.figshare.com/articles/Modeling_Excluded_Volume_Effects_for_the_Faithful_Description_of_the_Background_Signal_in_Double_Electron_Electron_Resonance/2337574/1
## Modeling Excluded Volume Effects for the Faithful Description of the Background Signal in Double Electron–Electron Resonance

2013-12-27 (GMT)

We discuss excluded volume effects on the background signal of double electron–electron resonance (DEER) experiments. Assuming spherically symmetric pervaded volumes, an analytical expression of the background signal is derived based on the shell-factorization approach. The effects of crowding and off-center label positions are discussed. Crowding is taken into account using the Percus–Yevick approximation for the radial distribution function of the particle centers. In addition, a versatile approach relating the pair-correlation function of the particle centers with those of off-center labels is introduced. Limiting expressions applying to short and long dipolar evolution times are derived. Furthermore, we show under which conditions the background with significant excluded volume effects resembles that originating from a fractal dimensionality ranging from 3 to 6. DEER time domain data of spin-probed samples of human serum albumin (HSA) are shown to be strongly affected by excluded-volume effects. The excluded volume is determined from the simultaneous analysis of spectra recorded at various protein concentrations but a constant probe-to-protein ratio. The spin-probes 5-DOXYL-stearic acid (5-DSA) and 16-DOXYL-stearic acid (16-DSA) are used, which, when taken up by HSA, give rise to broad and well-defined distance distributions, respectively. We compare different, model-free approaches of analyzing these data. The most promising results are obtained by the concurrent Tikhonov regularization of all spectra when a common background model is simultaneously adjusted such that the a posteriori probability is maximized. For the samples of 16-DSA in HSA, this is the only approach that allows suppressing a background artifact. We suggest that the delineated simultaneous analysis procedure can be generally applied to reduce ambiguities related to the ill-posed extraction of distance distributions from DEER spectra. This approach is particularly valuable for dipolar signals resulting from broad distance distributions, which as a consequence, are devoid of explicit dipolar oscillations.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718125820159912, "perplexity": 2125.69672350461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540585566.60/warc/CC-MAIN-20191214070158-20191214094158-00156.warc.gz"}
http://www.reference.com/browse/balmer-series
# Balmer series [bahl-mer]

The Balmer series or Balmer lines in atomic physics is the designation of one of a set of six different named series describing the spectral line emissions of the hydrogen atom. The Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885.

The visible spectrum of light from hydrogen displays four wavelengths, 410 nm, 434 nm, 486 nm, and 656 nm, that reflect emissions of photons by electrons in excited states transitioning to the quantum level described by the principal quantum number n equals 2.

## Overview

The Balmer series is characterized by the electron transitioning from n ≥ 3 to n = 2, where n refers to the radial quantum number or principal quantum number of the electron. The transitions are named sequentially by Greek letter: n = 3 to n = 2 is called H-α, 4 to 2 is H-β, 5 to 2 is H-γ, and 6 to 2 is H-δ. As the first spectral lines associated with this series are located in the visible part of the electromagnetic spectrum, these lines are historically referred to as "H-alpha", "H-beta", "H-gamma" and so on, where H is the element hydrogen.

| Transition of $n$ | Name | Wavelength (nm) | Color |
| --- | --- | --- | --- |
| 3→2 | H-α | 656.3 | Red |
| 4→2 | H-β | 486.1 | Blue-green |
| 5→2 | H-γ | 434.1 | Violet |
| 6→2 | H-δ | 410.2 | Violet |
| 7→2 | H-ε | 397.0 | Violet |
| 8→2 | H-ζ | 388.9 | Violet |
| 9→2 | H-η | 383.5 | (Ultraviolet) |
| $\infty$→2 | (series limit) | 364.6 | (Ultraviolet) |

Although physicists were aware of atomic emissions before 1885, they lacked a tool to accurately predict where the spectral lines should appear. The Balmer equation predicts the four visible absorption/emission lines of hydrogen with high accuracy. Balmer's equation inspired the Rydberg equation as a generalization of it, and this in turn led physicists to find the Lyman, Paschen, and Brackett series, which predicted other absorption/emission lines of hydrogen found outside the visible spectrum.

The familiar red H-alpha spectral line of hydrogen gas, which is the transition from the shell n = 3 to the Balmer series shell n = 2, is one of the conspicuous colors of the universe. It contributes a bright red line to the spectra of emission or ionization nebulae, like the Orion Nebula, which are often H II regions found in star forming regions. In true-color pictures, these nebulae have a distinctly pink color from the combination of visible Balmer lines that hydrogen emits.

Later, it was discovered that when the spectral lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely-spaced doublets. This splitting is called fine structure. It was also found that excited electrons could jump to the Balmer series n = 2 from orbitals where n was greater than 6, emitting shades of violet when doing so.

## Balmer's formula

Balmer noticed that a single number had a relation to every line in the hydrogen spectrum that was in the visible light region. That number was 364.56 nm. When any integer higher than 2 was squared and then divided by itself minus 4, that number multiplied by 364.56 gave the wavelength of another line in the visible hydrogen spectrum. By this formula he was able to show that certain measurements of lines made in his time by spectroscopy were slightly inaccurate, and his formula predicted lines that were later found although they had not yet been observed. His number also proved to be the limit of the series.
The Balmer equation could be used to find the wavelength of the absorption/emission lines and was originally presented as follows (save for a notation change to give Balmer's constant as B):

$$\lambda = B\left(\frac{m^2}{m^2 - n^2}\right) = B\left(\frac{m^2}{m^2 - 2^2}\right)$$

where $\lambda$ is the wavelength, B is a constant with the value of 3.6456×10⁻⁷ m or 364.56 nm, n is equal to 2, and m is an integer such that m > n.

In 1888 the physicist Johannes Rydberg generalized the Balmer equation for all transitions of hydrogen. The equation commonly used to calculate the Balmer series is a specific example of the Rydberg formula and follows as a simple reciprocal mathematical rearrangement of the formula above (conventionally using a notation of n for m as the single integral constant needed):

$$\frac{1}{\lambda} = \frac{4}{B}\left(\frac{1}{2^2} - \frac{1}{n^2}\right) = R_\mathrm{H}\left(\frac{1}{2^2} - \frac{1}{n^2}\right), \quad n = 3, 4, 5, \ldots$$

where λ is the wavelength of the absorbed/emitted light and R_H is the Rydberg constant for hydrogen. The Rydberg constant is seen to be equal to 4/B in Balmer's formula, and for an infinitely heavy nucleus is 4/(3.6456×10⁻⁷ m) = 10,973,731.57 m⁻¹.

## Role in astronomy

The Balmer series is particularly useful in astronomy because the Balmer lines appear in numerous stellar objects due to the abundance of hydrogen in the universe, and therefore are commonly seen and relatively strong compared to lines from other elements.

The spectral classification of stars, which is primarily a determination of surface temperature, is based on the relative strength of spectral lines, and the Balmer series in particular is very important. Other characteristics of a star that can be determined by close analysis of its spectrum include surface gravity (related to physical size) and composition.

Because the Balmer lines are commonly seen in the spectra of various objects, they are often used to determine radial velocities due to Doppler shifting of the Balmer lines. This has important uses all over astronomy, from detecting binary stars, exoplanets, and compact objects such as neutron stars and black holes (by the motion of hydrogen in accretion disks around them), to identifying groups of objects with similar motions and presumably origins (moving groups, star clusters, galaxy clusters, and debris from collisions), determining distances (actually redshifts) of galaxies or quasars, and identifying unfamiliar objects by analysis of their spectrum.

Balmer lines can appear as absorption or emission lines in a spectrum, depending on the nature of the object observed. In stars, the Balmer lines are usually seen in absorption, and they are "strongest" in stars with a surface temperature of about 10,000 kelvin (spectral type A). In the spectra of most spiral and irregular galaxies, AGNs, H II regions and planetary nebulae, the Balmer lines are emission lines.

In stellar spectra, the H-epsilon line (transition 7→2) is often mixed in with another absorption line caused by ionized calcium known by astronomers as "H" (the original designation given by Fraunhofer). That is, H-epsilon's wavelength is quite close to Ca H at 396.847 nm, and they cannot be resolved in low-resolution spectra.
The H-zeta line (transition 8→2) is similarly mixed in with a neutral helium line seen in hot stars.
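The Rydberg form of the formula above can be checked numerically; here is a short Python sketch (an illustrative addition, not part of the original article):

```python
# Wavelengths of the first Balmer lines from the Rydberg formula above.
R_H = 1.0973731568e7  # Rydberg constant, m^-1 (infinite-nuclear-mass value)

def balmer_wavelength_nm(n):
    """Wavelength of the n -> 2 transition, in nanometres."""
    inv_lambda = R_H * (1 / 2**2 - 1 / n**2)
    return 1e9 / inv_lambda

for n in (3, 4, 5, 6):
    print(f"n = {n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
# 656.1, 486.0, 433.9, 410.1 nm -- vacuum values, close to the table above.
```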
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056036472320557, "perplexity": 1039.436253293611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164641332/warc/CC-MAIN-20131204134401-00019-ip-10-33-133-15.ec2.internal.warc.gz"}
http://mathhelpforum.com/advanced-statistics/226892-microstates.html
# Math Help - microstates

1. ## microstates

When you flip a coin 50 times, the total number of microstates is $2^{50}$. How can you show this by using the binomial theorem? Thank you!

2. ## Re: microstates

Originally Posted by apatite
When you flip a coin 50 times, the total number of microstates is $2^{50}$. How can you show this by using the binomial theorem?

${2^{50}} = {\left( {1 + 1} \right)^{50}} = \sum\limits_{k = 0}^{50} {\left( {\begin{array}{*{20}{c}} {50} \\ k \end{array}} \right)}$
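The identity in the answer is easy to verify by direct computation (my addition, not from the thread):

```python
from math import comb

# Binomial theorem with x = y = 1: sum of C(50, k) over k equals 2**50.
total = sum(comb(50, k) for k in range(51))
print(total == 2**50)   # True
print(total)            # 1125899906842624
```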
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920393705368042, "perplexity": 469.89959955784065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010713.38/warc/CC-MAIN-20141125155650-00211-ip-10-235-23-156.ec2.internal.warc.gz"}