Columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
3,205,349
If x is a positive 4-digit number and you divide it by the sum of its digits, what value of x makes the result as small as possible? For example, for 1234 the ratio is 1234/10. I got 1099 as my answer, however I don't know if this is right or how to prove it.
Let $x$ be a 4-digit number ending in a digit not equal to 9, with the sum of digits equal to $s$ . If you increase the last digit by one, the number becomes $x+1$ and the sum of digits becomes $s+1$ . You can easily prove that: $$\frac xs>\frac{x+1}{s+1}$$ This is true because $x>s$ . So to reduce the ratio as much as possible, the last digit has to be as big as possible, which is $9$ . Now consider a number $x$ with the third digit not equal to 9. Increase the third digit by one and the number becomes $x+10$ and the sum increases by 1. Again, you can show that: $$\frac xs>\frac{x+10}{s+1}$$ This is true because $x>10s$ . So the third digit also has to be as big as possible. So the last two digits must be equal to 9. Consider now the first digit and decrease it by one. The number becomes $x-1000$ and the sum becomes $s-1$ . $$\frac xs>\frac{x-1000}{s-1}$$ This is true because it is equivalent to: $$x<1000s$$ ...and you know that $s$ is certainly greater than 18 (the last two digits must be 9). So the first digit has to be as small as possible, i.e. 1, and the number is of the form: $$1a99$$ The ratio you want to minimize now becomes: $$\frac{100a+1099}{a+19}=100-\frac{801}{a+19}$$ The minimum value is reached for the minimum value of $a$ , which is 0. So the solution is: 1099.
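As a quick numerical sanity check, here is a minimal Java sketch (the class name and output format are just placeholders, not part of the argument above) that scans every 4-digit number and reports the one with the smallest ratio to its digit sum:

public class MinDigitRatio {
    public static void main(String[] args) {
        int best = 0;
        double bestRatio = Double.MAX_VALUE;
        for (int x = 1000; x <= 9999; x++) {
            int s = 0;
            for (int t = x; t > 0; t /= 10) s += t % 10;   // digit sum of x
            double ratio = (double) x / s;
            if (ratio < bestRatio) { bestRatio = ratio; best = x; }
        }
        System.out.println(best + " -> " + bestRatio);     // prints 1099 -> 57.84...
    }
}

Running it confirms the result above: 1099, with ratio 1099/19.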
{ "source": [ "https://math.stackexchange.com/questions/3205349", "https://math.stackexchange.com", "https://math.stackexchange.com/users/668254/" ] }
3,208,939
How do you prove the following fact about the transpose of a product of matrices? Also, can you give some intuition as to why it is so? $(AB)^T = B^TA^T$
Here's an alternative argument. The main importance of the transpose (and this in fact defines it) is the formula $$Ax\cdot y = x\cdot A^\top y.$$ (If $A$ is $m\times n$ , then $x\in \Bbb R^n$ , $y\in\Bbb R^m$ , the left dot product is in $\Bbb R^m$ and the right dot product is in $\Bbb R^n$ .) Now note that $$(AB)x\cdot y = A(Bx)\cdot y = Bx\cdot A^\top y = x\cdot B^\top(A^\top y) = x\cdot (B^\top A^\top)y.$$ Thus, $(AB)^\top = B^\top A^\top$ .
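For a concrete spot-check of the identity, here is a small Java sketch (the class and helper names are just illustrative) that multiplies two fixed $2\times 2$ matrices and verifies numerically that $(AB)^\top = B^\top A^\top$ :

public class TransposeCheck {
    static double[][] multiply(double[][] X, double[][] Y) {
        double[][] Z = new double[X.length][Y[0].length];
        for (int i = 0; i < X.length; i++)
            for (int j = 0; j < Y[0].length; j++)
                for (int k = 0; k < Y.length; k++)
                    Z[i][j] += X[i][k] * Y[k][j];
        return Z;
    }
    static double[][] transpose(double[][] X) {
        double[][] T = new double[X[0].length][X.length];
        for (int i = 0; i < X.length; i++)
            for (int j = 0; j < X[0].length; j++)
                T[j][i] = X[i][j];
        return T;
    }
    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{0, 1}, {5, 6}};
        double[][] lhs = transpose(multiply(A, B));                 // (AB)^T
        double[][] rhs = multiply(transpose(B), transpose(A));      // B^T A^T
        System.out.println(java.util.Arrays.deepEquals(lhs, rhs));  // true
    }
}

Of course this only checks particular matrices; the dot-product argument above is what proves the identity in general.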
{ "source": [ "https://math.stackexchange.com/questions/3208939", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,214,732
I've frequently used “intuition” to solve limits at infinity. For example, if someone asked me what is: $$ \lim_{x \to \infty} f(x) = \frac {x^5 + x^3 + x}{x^2} $$ Or a sequence that can be represented by such a function, I would quickly argue that the only thing that matters as we go through higher and higher numbers for $x$ would be the terms with the higher powers, and so the limit becomes: $$ \lim_{x \to \infty} f(x) = \frac {x^5 }{x^2} $$ The numerator is growing at a much faster rate than the denominator, and so the function will diverge to $ + \infty $ . But, when I was going through a problem from Paul's online notes, my intuition didn't go well with solving it by the book. The problem is: $$ \left\{ {\frac{{\ln \left( {n + 2} \right)}}{{\ln \left( {1 + 4n} \right)}}} \right\}_{n = 1}^\infty $$ Does this converge to a value? Now, if we use L'Hopital's rule, this is an easy enough problem. It converges to $1$ . $$ \mathop {\lim }\limits_{n \to \infty } \frac{\ln ( n + 2 )}{\ln (1 + 4n )} = \mathop \lim \limits_{n \to \infty } \frac{^{1}/_{(n + 2)}}{^{4}/_{(1 + 4n)}} = \mathop \lim \limits_{n \to \infty } \frac{1 + 4n}{4( {n + 2} )} = 1 $$ But, as I said, sometimes I like to do these by intuition, and what I did was this: as $ n $ approaches infinity, the integers “ $2$ ” and “ $1$ ” won't matter. So the limit becomes: $$ \lim_{n \to \infty} \frac { \ln(n) }{\ln(4n)} $$ The denominator is increasing at a much faster rate than the numerator, and so the limit will converge to $0$ . I understand that something is wrong with my intuition, and intuition is probably not a good way to solve mathematical problems — indeed, the tutorial this problem is from itself mentions that intuition can sometimes lead you astray — but I'd love to gather some insight as to where I'm going wrong.
When you have $$\lim_{n\to\infty}\frac{\ln(n)}{\ln(4n)},$$ the denominator doesn't increase at a much faster rate than the numerator. As a matter of fact, since we have $$(\forall n\in\mathbb N):\ln(4n)=\ln(4)+\ln(n),$$ they increase at the same rate. And now it is easy to see that the limit is indeed $1$ .
{ "source": [ "https://math.stackexchange.com/questions/3214732", "https://math.stackexchange.com", "https://math.stackexchange.com/users/525700/" ] }
3,219,025
I've tried both calculations on Wolfram Alpha and it returns different results , but I can't get a grasp of why it is like that. From my point of view, both calculations should be the same, as $2.5=25/10,$ and $(-2)^{2.5}$ is equal to $(-2)^{25/10},$ relying on a general rule $(a^m)^n=a^{mn}$ . Links to sources: https://www.wolframalpha.com/input/?i=(-2)%5E(2.5) https://www.wolframalpha.com/input/?i=((-2)%5E(25))%5E(1%2F10)
J.W. Tanner has communicated the main point and provided some links to questions that provide more details. I'd like to try to tell the (mostly) whole story in one place. Recall that the standard definition of $a^b$ for $a \in \mathbb{R}_{>0}$ , $b\in \mathbb{R}$ is $$a^b := e^{b\ln(a)}$$ Where the exponential function can be defined in several ways-- through its power series, as the solution to the differential equation $y'=y$ , or the inverse to the natural logarithm (which is in turn defined as the integral $\ln(x)=\int_1^x\frac{1}{t}dt$ ). From this definition, it's clear that $b\ln(a)=\ln(a^b)$ , so we have $$a^{bc} = e^{bc\ln(a)}=e^{c\ln(a^b)}=(a^b)^c.$$ However, for $a \leq 0$ , this definition requires us to make sense of $\ln(a)$ , and the integral definition referenced above diverges. How might we do this? Since we're trying to understand exponentiation of negative numbers, we surely must include the case of $(-1)^{1/2} = \pm i \in \mathbb{C}$ , so we can't get around working in the complex plane. If we want to try to extend our earlier definition of $a^b$ , then, we're forced to confront the extension of the exponential function to the complex plane. Fortunately, the exponential function's power series definition extends naturally to the complex plane, and from it we can easily derive Euler's identity, which states $$e^{i\theta} = \cos(\theta)+i\sin(\theta)$$ for $\theta \in \mathbb{R}$ , so $e^{i\theta}$ is a point on the unit circle at angle $\theta$ from the positive real axis, measured counterclockwise. In particular, we see that any nonzero complex number $z$ can be written uniquely as $z=re^{i\theta}$ for some $r \in \mathbb{R}_{>0}$ and $-\pi < \theta \leq \pi$ . If we want a defining property of our extension of the natural logarithm to be that the exponential function inverts it (which it had better, if the original formula is to always return $a^1=a$ ), then, one way to define the natural logarithm of $z$ is $\ln(z) := \ln(r)+i\theta$ , as this gives $$e^{\ln(z)}=e^{\ln(r)+i\theta}=re^{i\theta}=z,$$ as desired. Note $z=r$ and $\theta=0$ if $z$ is real and positive, so this is indeed an extension of the usual natural logarithm. However, this choice was not unique-- we had to restrict $-\pi < \theta \leq \pi$ to make this definition. If our defining property is just inversion by the exponential function, it's clear that $\ln(z)=\ln(r)+i(\theta+2\pi n)$ works just as well for any integer $n$ , and in general one could define a natural logarithm by instead restricting $\theta$ to be in any interval of length $2\pi$ we want, even making the interval a function of $r$ -- making this choice is called choosing a branch of the logarithm. The original definition I gave is called the principal branch, and this is what most calculators like Wolfram Alpha will use. Going back to our definition of $a^b$ and declaring it true for any $a,b \in \mathbb{C}$ , we see the result depends on our choice of branch. This is what people mean when they say that exponentiation isn't uniquely defined in $\mathbb{C}$ . Now, let's finally see what goes wrong in your example using the principal branch of the logarithm to define $(-2)^{2.5}$ and $((-2)^{25})^{1/10}$ . We have $$(-2)^{2.5}=e^{2.5\ln(-2)}=e^{2.5(\ln(2)+i\pi)}=e^{2.5\ln(2)+2.5\pi i}=e^{2.5\ln(2)}e^{i\frac{\pi}{2}} = 2^{2.5}i,$$ while $$((-2)^{25})^{1/10}=(-2^{25})^{1/10} = e^{\frac{1}{10}\ln(-2^{25})} = e^{\frac{1}{10}(\ln(2^{25})+i\pi)} = 2^{2.5}e^{i\pi/10}=2^{2.5}(\cos(\pi/10)+i\sin(\pi/10)),$$ and these are clearly different. 
This example demonstrates precisely that, in general, the identity $a^{bc}=(a^b)^c$ does not hold if $a$ is not a positive real number, and you can similarly see that this identity breaks down if $b$ is not real, even if $a \in \mathbb{R}_{>0}$ .
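For readers who want to see the two values numerically, here is a small Java sketch (class and variable names are just placeholders) that evaluates both expressions with the principal branch, writing $-2 = 2e^{i\pi}$ and using $z^w = r^w e^{iw\theta}$ for real $w$ , which follows from the definition $z^w=e^{w\ln z}$ above:

public class BranchCheck {
    public static void main(String[] args) {
        // (-2)^2.5 with the principal branch: r = 2, theta = pi, exponent 2.5
        double r1 = 2.0, t1 = Math.PI, w1 = 2.5;
        System.out.printf("(-2)^2.5      = %.4f + %.4fi%n",
                Math.pow(r1, w1) * Math.cos(w1 * t1),
                Math.pow(r1, w1) * Math.sin(w1 * t1));   // ~ 0.0000 + 5.6569i

        // ((-2)^25)^(1/10): (-2)^25 = -2^25 is real, so r = 2^25, theta = pi, exponent 0.1
        double r2 = Math.pow(2.0, 25), t2 = Math.PI, w2 = 0.1;
        System.out.printf("((-2)^25)^0.1 = %.4f + %.4fi%n",
                Math.pow(r2, w2) * Math.cos(w2 * t2),
                Math.pow(r2, w2) * Math.sin(w2 * t2));   // ~ 5.3800 + 1.7481i
    }
}

The two printed values are clearly different, matching the closed forms $2^{2.5}i$ and $2^{2.5}e^{i\pi/10}$ derived above.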
{ "source": [ "https://math.stackexchange.com/questions/3219025", "https://math.stackexchange.com", "https://math.stackexchange.com/users/665894/" ] }
3,219,191
How can I compute the series $$\sum_{n=1}^{\infty} \frac{1}{2^{n}} \tan \left( \frac{\pi}{2^{n+1}} \right)$$ My guess was to use the half-angle formula to compute this series, but I can't make any approach work. How should I go about solving this infinite sum?
Let $$\begin{align}S(x)&=\sum_{n=1}^\infty\frac1{2^n}\tan\left(\frac{x}{2^n}\right)\\&=\frac{d}{dx}\sum_{n=1}^\infty\ln\left(\sec\left(\frac{x}{2^n}\right)\right)\\&=\frac{d}{dx}\ln\left(\prod_{n=1}^\infty\sec\left(\frac{x}{2^n}\right)\right)\\&=\frac{d}{dx}\ln\left(\lim_{m\to\infty}\left\{2^m\csc(x)\sin(2^{-m}x)\right\}\right)\\&=\frac{d}{dx}\ln(x\csc(x))\\&=\frac{(1-x\cot(x))\csc(x)}{x\csc(x)}\\&=\frac1x-\cot(x)\end{align}$$ Then we want to compute $$S(\pi/2)=\frac1{\pi/2}-0=\bbox[5px,border:2px solid black]{\frac2{\pi}}$$ This agrees with the numerical result on Wolfram Alpha. Some other details: I used the step $$\prod_{n=1}^m\sec(2^{-n}x)=2^m\csc(x)\sin(2^{-m}x)$$ in the computation. As Yuta pointed out in the comments, this can be proved by multiplying through by $\csc(2^{-m} x)$ and using $$\begin{align}\csc(2^{-m} x)\prod_{n=1}^m\sec(2^{-n}x)&=\frac1{\sin(2^{-m} x)}\prod_{n=1}^m\frac1{\cos(2^{-n}x)}\\&=2\frac1{\sin(2^{-m+1}x)}\prod_{n=1}^{m-1}\frac1{\cos(2^{-n}x)}\\&=\cdots\\&=2^m\frac1{\sin(x)}\end{align}$$ The limit was evaluated by doing a Taylor expansion.
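A quick numerical check of the boxed value (a minimal Java sketch; the cutoff of 60 terms is arbitrary, chosen because the terms decay roughly like $4^{-n}$ ):

public class TanSeriesCheck {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int n = 1; n <= 60; n++) {
            sum += Math.tan(Math.PI / Math.pow(2, n + 1)) / Math.pow(2, n);
        }
        System.out.println(sum);          // ~ 0.63661977...
        System.out.println(2 / Math.PI);  // ~ 0.63661977...
    }
}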
{ "source": [ "https://math.stackexchange.com/questions/3219191", "https://math.stackexchange.com", "https://math.stackexchange.com/users/672369/" ] }
3,221,904
Recently I opened this book to look up a certain theorem and saw something peculiar about the acknowledgments that I'd never noticed before: Acknowledgments I thank Ron Infante and Peter Pappas for assisting with the proof reading and for useful suggestions and corrections. I also thank Gimli Khazad for his corrections. S.L. The first two seem to be the (real) mathematicians Ronald P. Infante and Peter Chris Pappas but the third name is quite suspicious: everything I can find about this name is either related to the famous dwarf from Lord of the Rings or a small town in Canada. What's even funnier is that the surname "Khazad", in the fictional dwarven language created by Tolkien, means "dwarves". Is Serge Lang thanking Gimli the dwarf? Was he known to make practical jokes? This is too much of a coincidence. EDIT: Here's a link to the book. The acknowledgments are after the foreword.
Some googling shows that MAA member and poet Ayshhyah E. Khazad (1944, Kalamazoo, Michigan) (aka Asha Khazad, aka Gimli Khazad, born John Pilaar) has a degree in mathematics from Western Michigan University. He participated in the Putnam competition in 1964, when he was a sophomore. Lang's book was published in 1966. The material covered there is rather elementary, so it seems plausible that he would have been able to proofread Lang's book for him. He might have helped Kenneth Ross too, as another Gimli Khazad is acknowledged here.
{ "source": [ "https://math.stackexchange.com/questions/3221904", "https://math.stackexchange.com", "https://math.stackexchange.com/users/220250/" ] }
3,224,765
The following question was asked on a high school test, where the students were given a few minutes per question, at most: Given that, $$P(x)=x^{104}+x^{93}+x^{82}+x^{71}+1$$ and, $$Q(x)=x^4+x^3+x^2+x+1$$ what is the remainder of $P(x)$ divided by $Q(x)$ ? The given answer was: Let $Q(x)=0$ . Multiplying both sides by $x-1$ : $$(x-1)(x^4+x^3+x^2+x+1)=0 \implies x^5 - 1=0 \implies x^5 = 1$$ Substituting $x^5=1$ in $P(x)$ gives $x^4+x^3+x^2+x+1$ . Thus, $$P(x)\equiv\mathbf0\pmod{Q(x)}$$ Obviously, a student is required to come up with a “trick” rather than doing brute force polynomial division. How is the student supposed to think of the suggested method? Is it obvious? How else could one approach the problem?
The key idea employed here is the method of simpler multiples - a very widely used technique. Note that $\,Q\,$ has a "simpler" multiple $\,QR = x^5\!-\!1,\,$ so we can first reduce $P$ modulo $\,x^{\large 5}\! -\! 1\,$ via $\!\bmod x^{\large 5}-1\!:\,\ \color{#c00}{x^{\large 5}\equiv 1}\Rightarrow\, \color{#0a0}{x^n} =x^{\large r+5q^{\phantom{|}}}\!\!\equiv x^{\large r}(\color{#c00}{x^{\large 5}})^{\large q}\equiv x^{\large r}\equiv \color{#0a0}{x^{n\bmod 5}},\,$ then reduce $\!\bmod Q,\,$ i.e. $$P\bmod Q\, =\, (P\bmod QR)\bmod Q\qquad$$ Proof: $\: \color{darkorange}{P'}:= P\bmod QR\, =\, P-QRS\,\color{darkorange}{\equiv P}\,\pmod{\!Q},\,$ thus $\:\color{darkorange}{P'}\bmod Q = \color{darkorange}{P}\bmod Q$ Therefore, if $\,P = x^{\color{#0a0}A}+x^{\color{#90f}B} +x^C + x^D + x^E\,$ & $\bmod 5\!:\ \color{#0a0}A,\color{#90f}B,C,D,E\equiv \color{#0a0}4,\color{#90f}3,2,1,0\,$ then $\,P\bmod x^5-1 = x^{\large \color{#0a0}4}+x^{\large \color{#90f}3}+x^{\large 2}\,+x^{\large 1}\,+x^{\large 0} = Q,\,$ by $\,\color{#0a0}{x^n\equiv x^{n\bmod 5}}$ as above, hence $P\bmod Q \,=\, (P\bmod X^5\!-\!1)\bmod Q \,=\, Q\bmod Q = 0,\,$ generalizing the OP. This idea is ubiquitous, e.g. we already use it implicitly in grade school in radix $10$ to determine integer parity: first reduce mod $10$ to get the units digit, then reduce the units digits mod $2,\,$ i.e. $$N \bmod 2\, = (N\bmod 2\cdot 5)\bmod 2\qquad\ $$ i.e. an integer has the same parity (even / oddness) as that of its units digit. Similarly since $7\cdot 11\cdot 13 = 10^{\large 3}\!+1$ we can compute remainders mod $7,11,13$ by using $\,\color{#c00}{10^{\large 3}\equiv -1},\,$ e.g. $\bmod 13\!:\,\ d_0+ d_1 \color{#c00}{10^{\large 3}} + d_2 (\color{#c00}{10^{\large 3}})^{\large 2}\!+\cdots\,$ $ \equiv d_0 \color{#c00}{\bf -} d_1 + d_2+\cdots,\,$ and, similar to the OP, by $\,9\cdot 41\cdot 271 = 10^{\large 5}\!-1\,$ we can compute remainders mod $41$ and $271$ by using $\,\color{#c00}{10^5\!\equiv 1}$ $$N \bmod 41\, = (N\bmod 10^{\large 5}\!-1)\bmod 41\quad $$ for example $\bmod 41\!:\ 10000\color{#0a0}200038$ $ \equiv (\color{#c00}{10^{\large 5}})^{\large 2}\! + \color{#0a0}2\cdot \color{#c00}{10^{\large 5}} + 38\equiv \color{#c00}1+\color{#0a0}2+38\equiv 41\equiv 0$ Such "divisibility tests" are frequently encountered in elementary and high-school and provide excellent motivation for this method of "divide first by a simpler multiple of the divisor" or, more simply, "mod first by a simpler multiple of the modulus". This idea of scaling to simpler multiples of the divisor is ubiquitous, e.g. it is employed analogously when rationalizing denominators and in Gauss's algorithm for computing modular inverses. For example, to divide by an algebraic number we can use as a simpler multiple its rational norm = product of conjugates. Let's examine this for a quadratic algebraic number $\,w = a+b\sqrt{n},\,$ with norm $\,w\bar w = (a+b\sqrt n)(a-b\sqrt n) = \color{#0a0}{a^2-nb^2 = c}\in\Bbb Q\ (\neq 0\,$ by $\,\sqrt{n}\not\in\Bbb Q),\,$ which reduces division by an algebraic to simpler division by a rational , i.e. $\, v/w = v\bar w/(w\bar w),$ i.e. $$\dfrac{1}{a+b\sqrt n}\, =\, \dfrac{1}{a+b\sqrt n}\, \dfrac{a-b\sqrt n}{a-b\sqrt n}\, =\, \dfrac{a-b\sqrt n}{\color{#0a0}{a^2-nb^2}}\,=\, {\frac{\small 1}{\small \color{#0a0}c}}(a-b\sqrt n),\,\ \color{#0a0}{c}\in\color{#90f}{\Bbb Q}\qquad $$ so-called $\rm\color{#90f}{rationalizing}\ the\ \color{#0a0}{denominator}$ . 
The same idea works even with $\,{\rm\color{#c00}{nilpotents}}$ $$\color{#c00}{t^n = 0}\ \Rightarrow\ \dfrac{1}{a-{ t}}\, =\, \dfrac{a^{n-1}+\cdots + t^{n-1}}{a^n-\color{#c00}{t^n}}\, =\, a^{-n}(a^{n-1}+\cdots + t^{n-1})\qquad$$ which simplifies by eliminating $\,t\,$ from the denominator, i.e. $\,a-t\to a^n,\,$ reducing the division to division by a simpler constant $\,a^n\,$ (vs. a simpler rational when rationalizing the denominator). More generally, often we can use norms to reduce divisibility and factorization problems on algebraic integers to "simpler" problems involving their norm multiples in $\Bbb Z$ - analogous to above, where we reduced division by the algebraic integer $\,w = a + b\sqrt{n}\,$ to division by its norm. The same can be done using norms in any (integral) ring extension (i.e. the base ring need not be $\Bbb Z$ ). Another example is Gauss' algorithm, where we compute fractions $\!\bmod m\,$ by iteratively applying this idea of simplifying the denominator by scaling it to a smaller multiple. Here we scale $\rm\color{#C00}{\frac{A}B} \to \frac{AN}{BN}\: $ by the least $\rm\,N\,$ so that $\rm\, BN \ge m,\, $ reduce mod $m,\,$ then iterate this reduction, e.g. $$\rm\\ mod\ 13\!:\,\ \color{#C00}{\frac{7}9} \,\equiv\, \frac{14}{18}\, \equiv\, \color{#C00}{\frac{1}5}\,\equiv\, \frac{3}{15}\,\equiv\, \color{#C00}{\frac{3}2} \,\equiv\, \frac{21}{14} \,\equiv\, \color{#C00}{\frac{8}1}\qquad\qquad$$ Denominators of the $\color{#c00}{\rm reduced}$ fractions decrease $\,\color{#C00}{ 9 > 5 > 2> \ldots}\,$ so reach $\color{#C00}{1}\,$ (not $\,0\,$ else the denominator would be a proper factor of the prime modulus; it may fail for composite modulus) See here and its $25$ linked questions for more examples similar to the OP (some far less trivial). Worth mention: there are simple algorithms for recognizing cyclotomics (and products of such), e.g. it's shown there that $\, x^{16}+x^{14}-x^{10}-x^8-x^6+x^2+1$ is cyclotomic (a factor of $x^{60}-1),\,$ so we can detect when the above methods apply for arbitrarily large degree divisors.
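As a small concrete supplement, here is a Java sketch (class name and structure are illustrative only) that checks two of the worked examples above: the mod- $41$ block trick on $10000200038$ , and the Gauss-style scaling reduction of $\frac 79\pmod{13}$ :

public class SimplerMultipleCheck {
    public static void main(String[] args) {
        // Block trick: N mod 41 = (N mod 99999) mod 41, since 99999 = 10^5 - 1 is a multiple of 41
        long n = 10000200038L;
        System.out.println((n % 99999L) % 41);   // 0
        System.out.println(n % 41);              // 0, same answer computed directly

        // Gauss-style reduction of 7/9 mod 13: scale until the denominator becomes 1
        int p = 13, num = 7, den = 9;
        while (den != 1) {
            int k = (p + den - 1) / den;         // least k with k*den >= p
            num = (num * k) % p;
            den = (den * k) % p;
        }
        System.out.println(num);                 // 8, i.e. 7/9 == 8 (mod 13)
    }
}

The loop reproduces the chain $\frac79\equiv\frac15\equiv\frac32\equiv\frac81\pmod{13}$ shown above.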
{ "source": [ "https://math.stackexchange.com/questions/3224765", "https://math.stackexchange.com", "https://math.stackexchange.com/users/659544/" ] }
3,224,769
Consider the set $S$ of all $z\in\mathbb C$ for which \begin{equation}\tag{1}\label{1} \left|\frac{2+z}{2-z}\right| \le 1. \end{equation} One can easily find that $S=\{z\in\mathbb C \, :\, \Re(z) \le 0\}$ by using basic properties of the complex numbers (as Martin did; $\Re(z)$ denotes the real part of $z$ ). What are some other interesting ways to arrive at this result?
{ "source": [ "https://math.stackexchange.com/questions/3224769", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,225,899
So this is the integral I must evaluate: $$\int_0^1 \frac{3x}{\sqrt{4-3x}} dx$$ I already have this evaluated, but I don't understand one of the steps in its transformation. I understand how integrals are evaluated, but I don't understand some of the steps when it is being broken down and integrated. The steps are as follows: $$ - \left( \frac {-3x} {\sqrt{4-3x}}\right)$$ $$ - \left( \frac {4-3x-4}{\sqrt{4-3x}}\right) $$ $$ - \sqrt {4-3x} + \frac {4}{\sqrt{4-3x}}$$ $\mathbf{Question 1}$ In these three steps, the first thing I don't understand is how it got broken down into two terms in the third step. If I add together the terms of the third step I get back the original one, but I don't understand how the author reached this point in the first place; how can I know which terms to break it into? After this it is put back into the original equation: $$ \int_0^1 \frac {3x}{\sqrt{4-3x}} dx = - \int_0^1 (4-3x)^\frac {1}{2} dx + 4 \int_0^1 (4-3x)^\frac{-1}{2} dx $$ $$ = \frac {1}{3} \int_0^1 (4-3x)^\frac {1}{2} (-3 dx) - \frac{4}{3} \int_0^1 (4-3x)^\frac{-1}{2} (-3dx) $$ After this it is integrated as usual, with the $-3dx$ term disappearing in both, and $(4-3x)^\frac{1}{2}$ and $(4-3x)^\frac{-1}{2}$ getting integrated with the $n+1$ formula. $\mathbf{Question 2}$ Why was $-3$ multiplied and divided by in the second step? The dx doesn't change to du so it clearly isn't substitution. So what exactly is happening here?
Your second question is indeed, despite your doubts, about the substitution method. The reason you don't see a $du$ (or any other variable change) is because they did it in a less-common way notationally, but if you try it with substitution you will see that it all works out the same. The first question is a good one, because it is often not enough to know that something works, but instead you need to know how to do similar things in the future. The solution writer noticed that the integral was difficult as written, but that they could use substitution (as mentioned above) if they had two rational expressions, each containing $x$ in just one place. Since the $4$ was under the square root, they went ahead and tried adding and subtracting that from the numerator to get the square of the denominator (which cancels) in one rational expression, and just a constant in the other. It worked in this case. It won't always work. Try the same logic with the integrands below to see how it goes: $$\dfrac{2x}{5-2x}$$ $$\dfrac{2x}{5-3x}$$ Perhaps after trying both of those you can even conjecture about when the method will work and when it won't!
{ "source": [ "https://math.stackexchange.com/questions/3225899", "https://math.stackexchange.com", "https://math.stackexchange.com/users/597101/" ] }
3,232,057
$$\lim_{n \to \infty }n\int_{0}^{1}\frac{x^{n}}{1+x+x^{n}}dx$$ I factored out $x^n$ , then I tried the substitution $u=1+x^{1-n}$ , but I didn't get too far. I also tried to get something from $0<x<1$ . I know that if $x$ is in $(0,1)$ then $x^n$ tends to $0$ as $n \to \infty$ .
$$ \begin{align} \lim_{n\to\infty}n\int_0^1\frac{x^n}{1+x+x^n}\,\mathrm{d}x &=\lim_{n\to\infty}\int_0^1\frac{x^{1/n}}{1+x^{1/n}+x}\,\mathrm{d}x\tag1\\ &=\int_0^1\frac1{2+x}\,\mathrm{d}x\tag2\\[3pt] &=\log\left(\frac32\right)\tag3 \end{align} $$ Explanation: $(1)$ : substitute $x\mapsto x^{1/n}$ $(2)$ : Dominated Convergence $(3)$ : evaluate Heuristics Why did I use the substitution $x\mapsto x^{1/n}$ ? The function $nx^n$ has weight $\frac{n}{n+1}$ , and all that weight gets concentrated near $1$ as $n\to\infty$ . If we didn't have the $x^n$ in the denominator, then we could simply note that $\frac1{1+x}=\frac12$ near $1$ and see that the integral would be $\frac12$ . The $x^n$ in the denominator forces us to look closer at what happens near $1$ . The map $x\to x^{1/n}$ moves the points of $[0,1]$ closer to $1$ ; thus, it spreads the action of $\frac{nx^n}{1+x+x^n}$ from near $1$ further down on $[0,1]$ . The thing that told me that this was the right thing is that the factor of $n$ disappeared. Once we perform the substitution, it's easy to see that for all $n\gt1$ , $$ \frac{x^{1/n}}{1+x^{1/n}+x}\le\frac1{1+x} $$ which allows us to use Dominated Convergence. A Far More Basic Approach $$ \begin{align} \lim_{n\to\infty}n\int_0^1\frac{x^n}{1+x+x^n}\,\mathrm{d}x &=\lim_{n\to\infty}\int_0^1\frac{x^{1/n}}{1+x^{1/n}+x}\,\mathrm{d}x\tag4\\ &=\int_0^1\frac1{2+x}\,\mathrm{d}x\tag5\\[3pt] &=\log\left(\frac32\right)\tag6 \end{align} $$ Explanation: $(4)$ : substitute $x\mapsto x^{1/n}$ $(5)$ : the difference vanishes $(6)$ : evaluate Discussion of $\mathbf{(5)}$ : $$ \begin{align} \int_0^1\left[\frac1{2+x}-\frac{x^{1/n}}{1+x^{1/n}+x}\right]\mathrm{d}x &=\int_0^1\frac{(1+x)\left(1-x^{1/n}\right)}{(2+x)\left(1+x^{1/n}+x\right)}\,\mathrm{d}x\tag7\\ &\le\frac12\int_0^1\left(1-x^{1/n}\right)\mathrm{d}x\tag8\\[6pt] &=\frac1{2n+2}\tag9 \end{align} $$ Explanation: $(7)$ : subtract $(8)$ : $\frac{1+x}{(2+x)\left(1+x^{1/n}+x\right)}\le\frac12$ $(9)$ : evaluate
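As an additional numerical sanity check, here is a minimal Java sketch (the choices $n=2000$ and the midpoint rule with two million subintervals are arbitrary) showing that the scaled integral is indeed close to $\log\frac32\approx0.405465$ :

public class LimitCheck {
    public static void main(String[] args) {
        int n = 2000, steps = 2_000_000;
        double h = 1.0 / steps, sum = 0.0;
        for (int i = 0; i < steps; i++) {
            double x = (i + 0.5) * h;               // midpoint rule
            double xn = Math.pow(x, n);
            sum += xn / (1 + x + xn) * h;
        }
        System.out.println(n * sum);                // ~ 0.4053, close to log(3/2)
        System.out.println(Math.log(1.5));          // 0.4054651...
    }
}

By the bound $\frac1{2n+2}$ derived above, the true value for $n=2000$ should lie within about $0.00025$ of $\log\frac32$ , consistent with what the sketch prints.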
{ "source": [ "https://math.stackexchange.com/questions/3232057", "https://math.stackexchange.com", "https://math.stackexchange.com/users/631905/" ] }
3,232,250
Let $$I=\int \frac{\sin x}{\cos x + \sin x}\ dx \tag{1}$$ Now let $$u=\frac{\pi}{2} - x \tag{2}$$ so $$I=\int \frac{\sin (\frac{\pi}{2} - u)}{\cos (\frac{\pi}{2} - u)+\sin (\frac{\pi}{2} - u)}\ du \tag{3}$$ $$=\int\frac{-\cos u}{\sin u + \cos u} \ du \tag{4}$$ $$= \int\frac{-\cos x}{\sin x + \cos x} \ dx \tag{5}$$ and hence $$2I=\int\frac{\sin x - \cos x}{\sin x + \cos x} \ dx \tag{6}$$ $$=-\ln\ |\sin x + \cos x| + c \tag{7}$$ $\implies I=-\frac{1}{2}\ln|\sin x + \cos x| + c \tag{8}$ But the actual answer is $$I= \frac{1}{2}x -\frac{1}{2}\ln|\sin x + \cos x| + c \tag{9}$$ according to Wolfram Alpha and supported by a different method. Why does my method not yield the correct result?
You have not paid enough attention to the limits of your integration - the two integrals you are adding are not over the same interval. The method to use here is quite similar to yours in using the same trick correctly, without disturbing the interval of integration: $$\frac{2\sin x}{\sin x + \cos x}=\frac {\sin x +\cos x}{\sin x +\cos x}+\frac {\sin x - \cos x}{\sin x + \cos x}$$ You might need a little more explanation of the interval issue. Integrals are additive in two ways. First, if the same function is integrated over disjoint intervals (later, measurable sets) then we can integrate over the union of the intervals and the integral over the whole is the sum of the integrals over the parts. Second, if we integrate different functions over the same interval (measurable set), the sum of the integrals is equal to the integral of the sum of the functions. Apply your method to the integral of $x^2$ and use the substitution $y=-x$ . The integral you get is the integral of $-x^2$ . Adding the two, you get that twice the integral is zero, which is nonsense, because the function you are integrating is positive except at $x=0$ . What has happened here is that you have swapped the limits of integration, and you need to reverse the sign to straighten them out. Your method has both reversed the limits and translated them by $\frac {\pi} 2$ . You can't add the integrals in this case and expect to get the right answer.
{ "source": [ "https://math.stackexchange.com/questions/3232250", "https://math.stackexchange.com", "https://math.stackexchange.com/users/489400/" ] }
3,234,157
Like all combinatoric problems, this one is probably equivalent to another, well-known one, but I haven't managed to find such an equivalent problem (and OEIS didn't help), so I offer this one as being possibly new and possibly interesting. Problem statement I have $2N$ socks in a laundry basket, and I am hanging them on the hot pipes to dry. To make life easier later, I want to hang them in pairs. Since it is dark where the pipes are, I adopt the following algorithm: Take a sock at random from the basket. If it matches one that is already on my arm, hang them both on the pipes: the one in my hand and the matching one taken from my arm. If it does not match one that is already on my arm, hang it on my arm with the others. Do this $2N$ times. The question is: How long does my arm have to be? Clearly, the minimum length is $1$ , for instance if the socks come out in the order $AABBCC$ . Equally clearly, the maximum length is $N$ , for instance if the socks come out as $ABCABC$ . But what is the likeliest length? Or the average length? Or what sort of distribution do the required lengths have? It turns out to be easiest to parameterise the results not by $2N$ , the number of socks, but by $2N-1$ , which I will call $M$ . The first few results (Notation: $n!!$ is the semifactorial, the factorial including only odd numbers; thus $7!!=7\times 5\times 3\times 1$ ). In each case I provide the frequency for each possible arm length, starting with a length of 1. I use frequencies rather than probabilities because they are easier to type, but you can get the probabilities by dividing by $M!!$ . $$ \begin{array}{c|rrrrr} M \\ \hline 1 & 1 \\ 3 & 1 & 2 \\ 5 & 1 & 8 & 6 \\ 7 & 1 & 30 & 50 & 24 \\ 9 & 1 & 148 & 340 & 336 & 120 \\ \end{array} $$ It would be good to know (for example) if these frequencies tend to some sort of known distribution as $M\to\infty$ , just as the binomial coefficients do. But, as I said at the beginning, this may just be a re-encoding of a known combinatorial problem, carrying a lot of previously worked out results along with it. I thought, for instance, of the lengths of random walks in $N$ dimensions with only one step forward and one step back being allowed in each dimension – but that looked too complicated to give any straightforward direction to follow. Background: methods In case it is interesting or helpful, I obtained the results above by means of a two-dimensional generating function, in which the coefficient of $y^n$ identified the arm length needed and the coefficient of $x^n$ identified how many socks had been retrieved at the [first] time that this length was reached. Calling the resulting generating function $A_M(x,y)$ , the recurrence I used was: $$A_M=MxyA_{M-2}+x^2(x-y)\frac\partial{\partial x}A_{M-2}+(1-x^2)xy$$ which is based on sound first principles and matches the results of manual calculation up to $M=5$ . Having found a polynomial, I substitute $x=1$ and the numbers in the table above are then the coefficients of the powers of $y$ . But, mathematics being close to comedy, all this elaboration may be an unnecessarily complicated way to get to a result too trivial to be found even in OEIS. Is it?
I did some Monte Carlo simulations with this interesting problem and came to some interesting conclusions. If you have $N$ pairs of socks the expected maximum arm length is slightly above $N/2$ . First, I ran 1,000,000 experiments with 100 pairs of socks and recorded the maximum arm length reached in each one. For example, a maximum arm length of 54 was reached about 90,000 times. And it all looks like a normal distribution to me. The average value of the maximum arm length was 53.91, confirmed several times in a row. Nothing changed with 100 pairs of socks and 10,000,000 experiments. The average value remained the same. So it looks like you need about a million runs to draw a meaningful conclusion. Here is what I got when I doubled the number of socks to 200 pairs. Maximum arm length on average was 105.12, still above 50%. I got the same value in several repeated experiments ( $\pm0.01$ ). Finally, I decided to check the expected maximum arm length for different numbers of sock pairs, from 10 to 250. Each number of pairs was tested 2,000,000 times before the average value was calculated. Here are the results: $$ \begin{array}{c|rr} \textbf{Pairs} & \textbf{Arm Length} & \textbf{Increment} \\ \hline 10 & 6.49 & \\ 20 & 12.03 & 5.54 \\ 30 & 17.41 & 5.38 \\ 40 & 22.71 & 5.30 \\ 50 & 27.97 & 5.26 \\ 60 & 33.20 & 5.23 \\ 70 & 38.40 & 5.20 \\ 80 & 43.59 & 5.19 \\ 90 & 48.75 & 5.16 \\ 100 & 53.91 & 5.16 \\ 110 & 59.07 & 5.16 \\ 120 & 64.20 & 5.13 \\ 130 & 69.33 & 5.13 \\ 140 & 74.46 & 5.13 \\ 150 & 79.58 & 5.12 \\ 160 & 84.69 & 5.11 \\ 170 & 89.80 & 5.11 \\ 180 & 94.91 & 5.11 \\ 190 & 100.02 & 5.11 \\ 200 & 105.11 & 5.09 \\ 210 & 110.20 & 5.09 \\ 220 & 115.29 & 5.09 \\ 230 & 120.38 & 5.09 \\ 240 & 125.47 & 5.09 \\ 250 & 130.56 & 5.09 \end{array} $$ It looks like a straight line but it's actually an arc, slightly bent downwards (take a look at the increment column). Finally, here is the Java code that I used for my experiments.

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Basket {

    public static final int PAIRS = 250;
    public static final int NUM_EXPERIMENTS = 2_000_000;

    int n;
    List<Integer> basket;
    Set<Integer> arm;

    public Basket(int n) {
        // basket size
        this.n = n;
        // socks are here
        this.basket = new ArrayList<Integer>();
        // arm is just a set of different socks
        this.arm = new HashSet<Integer>();
        // add a pair of same socks to the basket
        for(int i = 0; i < n; i++) {
            basket.add(i);
            basket.add(i);
        }
        // shuffle the basket
        Collections.shuffle(basket);
    }

    // returns maximum arm length
    int hangSocks() {
        // maximum arm length
        int maxArmLength = 0;
        // we have to hang all socks
        for(int i = 0; i < 2 * n; i++) {
            // take one sock from the basket
            int sock = basket.get(i);
            // if the sock of the same color is already on your arm...
            if(arm.contains(sock)) {
                // ...remove sock from your arm and put the pair over the hot pipe
                arm.remove(sock);
            } else {
                // put the sock on your arm
                arm.add(sock);
                // update maximum arm length
                maxArmLength = Math.max(maxArmLength, arm.size());
            }
        }
        return maxArmLength;
    }

    public static void main(String[] args) {
        // results of our experiments will be stored here
        int[] results = new int[PAIRS + 1];
        // run millions of experiments
        for(int i = 0; i < NUM_EXPERIMENTS; i++) {
            Basket b = new Basket(PAIRS);
            // arm length in a single experiment
            int length = b.hangSocks();
            // remember how often this result appeared
            results[length]++;
        }
        // print results in CSV format so that we can plot them in Excel
        for(int i = 0; i < results.length; i++) {
            System.out.println(i + "," + results[i]);
        }
        // find average arm length
        int sum = 0;
        for(int i = 0; i < results.length; i++) {
            sum += i * results[i];
        }
        double average = (double) sum / (double) NUM_EXPERIMENTS;
        System.out.println(String.format("Average arm length is %.2f", average));
    }
}

EDIT: For N=500, the average value of maximum arm length after 2,000,000 tests is 257.19. For N=1000, the result is 509.23. It seems that for $N\to\infty$ , the result goes down to $N/2$ . I don't know how to prove this.
{ "source": [ "https://math.stackexchange.com/questions/3234157", "https://math.stackexchange.com", "https://math.stackexchange.com/users/340970/" ] }
3,240,987
For all my homework in real analysis, when I've been asked to show that a function is continuous, I just found a single $x_n \in D$ and showed that when $x_n \rightarrow x_0$ , $f(x_n) \rightarrow f(x_0)$ . Apparently, the sequence definition (as opposed to the epsilon delta definition) is (basically) only used to prove a function is not continuous, and I can't prove a function is continuous because then I'd have to show this is true for all possible sequences? Am I doing the math wrongly? Should I always use the epsilon delta definition when trying to prove that a function is continuous?
This is indeed incorrect. Take for example the function $$f(x) = \begin{cases} 1, \text{ if } x\in \mathbb{Q}\\ 0, \text{ otherwise} \end{cases}$$ This is obviously not a continuous function. However, if you look at its behavior along a sequence of rational points, it would appear to be constant (hence continuous).
{ "source": [ "https://math.stackexchange.com/questions/3240987", "https://math.stackexchange.com", "https://math.stackexchange.com/users/604375/" ] }
3,241,309
There’s another post asking for the motivation behind uniform continuity. I’m not a huge fan of it since the top-rated comment spoke about local and global interactions of information, and frankly I just did not get it. Playing with the definition, I want to say uniform continuity implies there’s a maximum “average rate of change”. Not literally a derivative, but the rate of change between two points is bounded in the domain. I’m aware that this is essentially Lipschitz continuity, and that Lipschitz implies uniform. This implies there’s more to uniform continuity than just having a bounded average rate of change. And also, how is it that $ f(x)=x$ is uniform yet $f(x)f(x)=g(x)=x^2$ is not? I understand why it isn’t, I can prove it. But I just don’t understand the motivation and importance of uniform continuity.
The real "gist" of continuity, in its various forms, is that it's the "property that makes calculators and measurements useful". Calculators and measurements are fundamentally approximate devices which contain limited amounts of precision. Special functions, like those which are put on the buttons of a calculator, then, if they are to be useful, should have with them some kind of "promise" that, if we only know the input to a limited amount of precision, then we will at least know the output to some useful level of precision as well. Simple continuity is the weakest form of this. It tells us that if we want to know the value of a target function $f$ to within some tolerance $\epsilon$ at a target value $x$ , but using an approximating value $x'$ with limited precision instead of the true value $x$ to which we may not have access or otherwise know to unlimited precision, i.e. we want $$|f(x) - f(x')| < \epsilon$$ then we will be able to have that if we can make our measurement of $x$ suitably accurate, i.e. we can make that $$|x - x'| < \delta$$ for some $\delta > 0$ which may or may not be the same for every $\epsilon$ and $x$ . Uniform continuity is stronger. It tells us that not only do we have the above property, but in fact the same $\delta$ threshold on $x'$ 's accuracy will be sufficient to get $\epsilon$ worth of accuracy in the approximation of $f$ no matter what $x$ is . Basically, if the special function I care about is uniform continuous, and I want 0.001 accuracy, and the max $\delta$ required for that is, say, 0.0001, by measuring to that same tolerance I am assured to always get 0.001 accuracy in the output no matter what $x$ I am measuring . If, on the other hand, it were the case that the function is merely continuous but not uniformly so, I could perhaps measure at one value of $x$ with 0.0001 accuracy and that accuracy would be sufficient to get 0.001 accuracy in the function output, but if I am measuring at another, such a tolerance might give me only 0.5 accuracy - terrible! Lipschitz continuity is even better : it tells us that the max error in approximating $f$ is proportional to that in approximating $x$ , i.e. $\epsilon \propto \delta$ , so that if we make our measurement 10 times more accurate, say (i.e. one more significant figure), we are assured 10 times more accuracy in the function (i.e. gaining a significant figure in the measurement lets us gain one in the function result as well). And in fact, all the functions (that are real-analytic, not combinatorial functions like nCr and what not) on your real-life calculator are at least locally Lipschitz continuous, so that while this proportionality factor (effectively, absolutely how many sig figs you get for a given number of such in the input) may not be the same everywhere, you can still be assured that in relative terms, adding 10x the precision to your measurements, i.e. one more significant figure, will always make the approximation (however good or not it actually is) returned by your calculator 10x more accurate, i.e. also to one more significant figure. And to top it all off, all these forms of continuity - at least in their local variants, that is, over any bounded interval - are implied by differentiability.
{ "source": [ "https://math.stackexchange.com/questions/3241309", "https://math.stackexchange.com", "https://math.stackexchange.com/users/666376/" ] }
3,244,132
I'm trying to teach middle schoolers about the emergence of complex numbers and I want to motivate this organically. By this, I mean some sort of real world problem that people were trying to solve that led them to realize that we needed to extend the real numbers to the complex. For instance, the Greeks were forced to recognize irrational numbers not for pure mathematical reasons, but because the length of the diagonal of a square with unit length really is irrational, and this is the kind of geometrical situation they were already dealing with. What similar situation would lead to complex numbers in terms that kids could appreciate? I could just say, try to solve the equation $x^2 + 1 = 0$ , but that's not something from the physical world. I could also give an abstract sort of answer, like that $\sqrt{-1}$ is just an object that we define to have certain properties that turn out to be consistent and important, but I think that won't be entirely satisfying to kids either.
I don't know a simple, physical situation where complex numbers emerge naturally, but I can suggest a way to help you teach middle schoolers about the emergence of complex numbers and motivate this organically. I did this once as a guest lecturer in a middle school classroom by developing a geometric interpretation of arithmetic on the number line. Adding a fixed number $r$ is a shift by $r$ , to the right if $r > 0$ , to the left if $r < 0$ . Successive shifts add the shift amounts. Each geometric shift is characterized by the position that $0$ moves to. You illustrate this visually by physically shifting a yardstick along a number line drawn on the board. The answer to the question "what do you shift by so that doing it twice shifts by $r$ ?" is clearly $r/2$ . This is looking ahead to square roots, but you don't say that yet. The underlying idea is that the group of shifts is the additive group of the real numbers, but you don't say that ever. Now that addition is done you go on to multiplication. Multiplying by a fixed positive $r$ rescales the number line. If $r>1$ things stretch, if $r < 1$ they shrink, and multiplying by $r=1$ changes nothing. To know what a scaling does all you need to know is the image of $1$ . Successive scalings multiply, just as successive shifts add. What should you do twice to scale by $9$ ? Half of $9$ doesn't work, but $3$ does. The class will quickly grasp that the geometric way to halve a scaling is to find the square root. What about multiplication by a negative number? The geometry is clear: it's reflection over $0$ followed by a scaling by the absolute value. Again the transformation is characterized by the image of $1$ . Now you're ready for the denouement. What geometric transformation can you do twice to move $1$ to $-1$ on the number line? Take your yardstick, place it on the line on the board, rotate by a quarter of a circle so that it's vertical, then another quarter and you're there. The image of $1$ is not on the line. It's at position $(0,1)$ in the Cartesian coordinate system middle schoolers know about. They will find it cool to think of that point as a new number such that multiplying by it twice turns $r$ into $-r$ . Name that number " $i$ ". If you have brought the class along this far the rest is easy. They will quickly see the $y$ axis as the real multiples of $i$ . Clearly adding $i$ should be a vertical translation by one unit. Vector addition for complex numbers follows quickly. Ask for the square root of $i$ and they will rotate the yardstick $45$ degrees. If they know about isosceles right triangles they will know that the (actually a) square root of $i$ is $(\sqrt{2}/2)(1+i)$ , which they can check formally with the distributive law (which they will not ask you to prove). A caveat. I think this should be pure fun for the class. Make that clear, so if some don't follow they don't worry. I would not try to integrate it into whatever the standard curriculum calls for. It should probably not extend over multiple class periods. Save it for a day near the end of the school year.
{ "source": [ "https://math.stackexchange.com/questions/3244132", "https://math.stackexchange.com", "https://math.stackexchange.com/users/5257/" ] }
3,244,136
I have two sets, $A$ and $B$ : $B$ is contained in $A$ , so $A \cap B = B$ . I would like to express mathematically the non-intersection of these sets. Writing $C= A \setminus B$ or $B \setminus A$ is not the same as " $A$ (non $\cap$ ) $B$ ". In this case it will be $A \setminus B$ , but how do I know if $A\subset B$ or $B\subset A$ ? How can I do it?
{ "source": [ "https://math.stackexchange.com/questions/3244136", "https://math.stackexchange.com", "https://math.stackexchange.com/users/678161/" ] }
3,249,771
Symmetric matrices represent real self-adjoint maps, i.e. linear maps that have the following property: $$\langle\vec{v},f(\vec{w})\rangle=\langle f(\vec{v}),\vec{w}\rangle$$ where $\langle,\rangle$ donates the scalar (dot) product. Using this logic: $$\langle\vec{v},AB\vec{v}\rangle=\langle A\vec{v},B\vec{v}\rangle=\langle BA\vec{v},\vec{v}\rangle$$ Where $A$ and $B$ are symmetric matrices. Using the fact that the real scalar dot product is commutative: $$\langle BA\vec{v},\vec{v}\rangle=\langle\vec{v},BA\vec{v}\rangle$$ We therefore have the result: $$\langle\vec{v},AB\vec{v}\rangle=\langle\vec{v},BA\vec{v}\rangle$$ This holds true for any real vector $\vec{v}$ so therefore $AB=BA$ . However, symmetric matrices do not always commute so something is wrong with this proof.
You have proved that $v\mapsto v^TABv$ and $v\mapsto v^TBAv$ are the same quadratic form . However, since $AB$ and $BA$ are not necessarily symmetric, that doesn't mean they are the same matrix . You can check this by plugging in some matrices where you know commutativity fails, for example $$ A =\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \quad B = \begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix} $$ We then get $$ AB = \begin{pmatrix}0 & 1 \\ 2 & 0 \end{pmatrix} \qquad BA = \begin{pmatrix}0 & 2 \\ 1 & 0 \end{pmatrix} $$ and indeed these define the same quadratic form: $$ (x\;\;y)\begin{pmatrix}0 & 1 \\ 2 & 0 \end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} = 3xy = (x\;\;y)\begin{pmatrix}0 & 2 \\ 1 & 0 \end{pmatrix} \begin{pmatrix}x\\ y\end{pmatrix} $$ for all $x,y$ , but the matrices are different.
{ "source": [ "https://math.stackexchange.com/questions/3249771", "https://math.stackexchange.com", "https://math.stackexchange.com/users/411815/" ] }
3,250,241
Why are Sobolev spaces useful, and what problems were they developed to overcome? I'm particularly interested in their relation to PDEs, as they are often described as the 'natural space in which to look for PDE solutions' - why is this? How do weak solutions and distributions come into this? There are plenty of books on the subject, but these seem to jump straight into the details and I'm struggling to see the big picture. I know that the Sobolev norm makes the function spaces complete, which would guarantee that infinite linear combinations of solutions do not leave the space, as can be a problem when working with $\mathscr{C}^2$ , for example, but are there any other reasons why this norm is important? I'm also interested in the Sobolev embedding theorems, since I believe that they're important in the problems I'm trying to solve. These are (1) proving the compactness of the integral operator whose kernel is the Green's function for the Laplacian on a bounded domain $\Omega \subset \mathbb{R}^{n}$ with smooth boundary, and (2) understanding why minimisers of the Rayleigh quotient, $${\arg\min}_{f \in T} \frac{\int_{\Omega} \nabla f \cdot \nabla f \, dx}{\left< f , f \right>}$$ always exist, and why they are necessarily smooth ( $\mathscr{C}^\infty(\Omega)$ ) among the set of trial functions $T$ of continuous functions with piecewise continuous first derivatives which vanish at the boundary and are not identically zero. To me, this sounds like the Sobolev space $H_0^1 (\Omega)$ at work, where the smoothness is the result of a Sobolev embedding theorem; however, I'm very new to Sobolev spaces and so don't know much about this. Could anyone provide me with some insight into how results (1) and (2) might be proven?
To motivate Sobolev spaces, let me pose a motivating problem. Let $\Omega$ be a smooth, bounded domain in ${\Bbb R}^n$ and let $f$ be a $C^\infty$ function on $\Omega$ . Prove that there exists a $C^2$ function $u$ satisfying $-\Delta u = f$ in $\Omega$ and $u = 0$ on the boundary of $\Omega$ . As far as PDE's go, this is the tamest of the tame: it's a second-order, constant coefficient elliptic PDE with a smooth right-hand side and a smooth boundary. Should be easy, right? It certainly can be done, but you'll find it's harder than you might think. Imagine replacing the PDE with something more complicated like $-\text{div}(A(x)\nabla u) = f$ for some $C^1$ uniformly positive definite matrix-valued function $A$ . Proving even existence of solutions is a nightmare. Such PDE's come up all the time in the natural sciences, for instance representing the equilibrium distribution of heat (or stress, concentration of impurities,...) in an inhomogeneous, anisotropic medium. Proving the existence of weak solutions to such PDE's in Sobolev spaces is incredibly simple: once all the relevant theoretical machinery has been worked out, the existence, uniqueness, and other useful things about the solutions to the PDE can be proven in only a couple of lines. The reason Sobolev spaces are so effective for PDEs is that Sobolev spaces are Banach spaces, and thus the powerful tools of functional analysis can be brought to bear. In particular, the existence of weak solutions to many elliptic PDEs follows directly from the Lax-Milgram theorem. So what is a weak solution to a PDE? In simple terms, you take the PDE and multiply by a suitably chosen ${}^*$ test function and integrate over the domain. For my problem, for instance, a weak formulation would be to say that $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$ . We often want to use integration by parts to simplify our weak formulation so that the order of the highest derivative appearing in the expression goes down: you can check that in fact $\int_\Omega \nabla v\cdot \nabla u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$ . Note the logic. You begin with a smooth solution to your PDE, which a priori may or may not exist. You then derive from the PDE a certain integral equation which is guaranteed to hold for all suitable test functions $v$ . You then define $u$ to be a weak solution of the PDE if the integral equation holds for all test functions $v$ . By construction, every classical solution to the PDE is a weak solution. Conversely, you can show that if $u$ is a $C^2$ weak solution, then $u$ is a classical solution. ${}^\dagger$ Showing the existence of solutions in a Sobolev space is easy, but proving that they have enough regularity (that is, they are continuously differentiable up to some order— $2$ , in our case) to be classical solutions often requires very lengthy and technical proofs. ${}^\$$ (The Sobolev embedding theorems you mention in your post are one of the key tools--they establish that if you have enough weak derivatives in a Sobolev sense, then you also are guaranteed to have a certain number of classical derivatives. The downside is you have to work in a Sobolev space $W^{k,p}$ where $p$ is larger than the dimension of the space, $n$ . This is a major bummer since we like to work in $W^{k,2}$ since it is a Hilbert space, and thus has much nicer functional analytic tools. 
Alternatively, if you show that your function is in $W^{k,2}$ for every $k$ , then it is guaranteed to lie in $C^\infty$ .) All of what I've written kind of dances around the central question of why Sobolev spaces are so useful and why all of these functional analytic tools work for Sobolev spaces but not for spaces like $C^2$ . In a sentence, completeness is really, really important . Often, in analysis, when we want to show a solution to something exists, it's much easier to construct a bunch of approximate solutions and then show those approximations converge to a bona fide solution. But without completeness, there might not be a solution ( a priori , at least) for them to converge to. As a much simpler example, think of the intermediate value theorem. $f(x) = x^2-2$ has $f(2) = 2$ and $f(0) = -2$ , so there must exist a zero (namely $\sqrt{2}$ ) in $(0,2)$ . This conclusion fails over the rationals however, since the rationals are not complete, $\sqrt{2} \notin {\Bbb Q}$ . In fact, one way to define the Sobolev spaces is as the completion of $C^\infty$ (or $C^k$ for $k$ large enough) under the Sobolev norms. ${}^\%$ I have not the space in this to answer your questions (1) and (2) directly, as answering these questions in detail really requires spinning out a whole theory. Most graduate textbooks on PDEs should have answers with all the details spelled out. (Evans is the standard reference, although he doesn't include potential theory so he doesn't answer (1), directly at least.) Hopefully this answer at least motivates why Sobolev spaces are the "appropriate space to look for solutions to PDEs". ${}^*$ Depending on the boundary conditions of the PDE's, our test functions may need to be zero on the boundary or not. Additionally, to make the functional analysis nice, we often want our test functions to be taken from the same Sobolev space as we seek solutions in. This usually poses no problem as we may begin by taking our test functions to be $C^\infty$ and use certain approximation arguments to extend to all functions in a suitable Sobolev space. ${}^\dagger$ Apply integration by parts to recover $-\int_\Omega v\Delta u \, dx = \int_\Omega fv \, dx$ for all $C^\infty_0$ functions $v$ and apply the fundamental lemma of calculus of variations . ${}^\$$ Take a look at a regularity proof for elliptic equations in your advanced PDE book of choice. ${}^\%$ You might ask why complete in Sobolev norm, not some simpler norm like $L^p$ ? Unfortunately, the $L^p$ completion of $C^\infty$ is $L^p$ , and there are functions in $L^p$ which you can't define any sensible weak or strong derivative of. Thus, in order to define a complete normed space of differentiable functions, the derivative has to enter the norm (which is why the Sobolev norms are important, and in some sense natural.)
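To make the "construct approximations and pass to a limit" theme concrete, here is a minimal numerical sketch of my own (assuming NumPy is available). It is of course not the functional-analytic existence argument itself; it just solves the one-dimensional model problem $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$ by a standard finite-difference discretization and compares with the exact solution for $f\equiv 1$, namely $u(x)=x(1-x)/2$.

```python
import numpy as np

# Model problem: -u'' = f on (0,1), u(0) = u(1) = 0, with f = 1.
# Exact solution: u(x) = x(1-x)/2.
n = 200                      # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)  # interior nodes
f = np.ones(n)

# Standard second-order finite-difference Laplacian (tridiagonal matrix).
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)            # discrete approximate solution
u_exact = x * (1 - x) / 2
print(np.max(np.abs(u - u_exact)))   # tiny: the scheme is exact for quadratics, up to rounding
```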
{ "source": [ "https://math.stackexchange.com/questions/3250241", "https://math.stackexchange.com", "https://math.stackexchange.com/users/201881/" ] }
3,253,240
I know that polynomials can be factored in terms of their roots. However, I thought this must imply that two different polynomials have different roots (this is just what I think). So my question is: are polynomials with the same roots identical? If so, why? A follow-up question that is also about the uniqueness of roots and polynomials can be found here: Is the set of roots unique for each $g(x)$ in $a_n x^n + g(x)$?
No, they are not. For instance, $2x^2-2$ and $x^2-1$ have the same roots, yet they are not identical. And, depending on what you mean by "the same roots", we have that $x^2-2x+1$ and $x-1$ have the same roots, yet they are not identical. Again, depending on what you mean by "the same roots", $x^3+x$ and $x^3+2x$ both only have one real root, yet they are not the same. However, if two monic polynomials have the same roots, with the same multiplicities, over some algebraically closed field (like the complex numbers $\Bbb C$ ) then yes, they are identical.
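A quick way to see the first counterexample concretely, as a small sketch of my own (assuming SymPy is available):

```python
from sympy import symbols, solve, expand

x = symbols('x')
p = 2*x**2 - 2
q = x**2 - 1

print(solve(p, x))     # roots -1 and 1
print(solve(q, x))     # the same roots...
print(expand(p - q))   # x**2 - 1, so the polynomials themselves differ
```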
{ "source": [ "https://math.stackexchange.com/questions/3253240", "https://math.stackexchange.com", "https://math.stackexchange.com/users/701280/" ] }
3,253,262
By inspection I notice that shifting does not change the standard deviation but changes the mean: $\{1,3,4\}$ has the same standard deviation as $\{11,13,14\}$ , for example. Sets with the same (or reversed) sequence of adjacent differences have the same standard deviation; for example, $\{1,3,4\}$ , $\{0,2,3\}$ , $\{0,1,3\}$ have the same standard deviation, but the means are different. My conjecture: there are no two distinct sets with the same length, mean and standard deviation. Question: Is it possible to have two different but equal-size sets of real numbers that have the same mean and standard deviation?
$-2,-1,3$ and $-3,1,2$ both have a mean of $0$ and a standard deviation of $\sqrt\frac{14}{3}$ .
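Indeed, a direct check: both sets have mean $\frac{-2-1+3}{3}=\frac{-3+1+2}{3}=0$ , and both have (population) variance $$\frac{(-2)^2+(-1)^2+3^2}{3}=\frac{(-3)^2+1^2+2^2}{3}=\frac{14}{3},$$ hence the same standard deviation $\sqrt{\frac{14}{3}}$ .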
{ "source": [ "https://math.stackexchange.com/questions/3253262", "https://math.stackexchange.com", "https://math.stackexchange.com/users/3901/" ] }
3,257,398
It is safe to say that every mathematician, at some point in their career, has had some form of exposure to analysis. Quite often, it appears first in the form of an undergraduate course in real analysis. It is there that one is often exposed to a rigorous viewpoint to the techniques of calculus that one is already familiar with. At this stage, one might argue that real analysis is the study of real numbers, but is it? A big chunk of it involves algebraic properties, and as such lies in the realm of algebra. It is the order properties, though, that do have a sort of analysis point of view. Sure, some of these aspects generalise to the level of topologies, but not all. Completeness, for one, is clearly something that is central to analysis. Similar arguments can be made for complex analysis and functional analysis. Now, the question is: As for all the topics that are bunched together as analysis, is there any central theme to them? What topics would you say that belongs to this theme? And what are the underlying themes in these individual subtopics? Add . It may be a subjective question, but having a rough idea of what the central themes of a certain field are helps one to construct appropriate questions. As such, I think it is important. I am not expecting a single answer, but more of a diverse set of opinions on the matter.
I think that I'd say that one of the underlying themes of analysis is, really, the limit. In pretty much every subfield of analysis, we spend a lot of time trying to control the size of certain quantities, with taking limits in mind. This is especially true in PDEs, when we consistently desire norm estimates on various quantities. Let's just discuss the "basics" of the "basic" subjects (standard topics in real, complex, measure theory, functional). I'm going to keep this discussion loose, since we could quickly get into a very drawn-out and detailed discussion. Real analysis is built on limits. Continuity, differentiability, integration, series, etc. all require the concept of limits. Complex analysis has a bit of a different "flavor" than the other core types, but it still requires limits to do pretty much everything. By Goursat's theorem, holomorphicity is equivalent to complex differentiability over a neighborhood- limits. Integration (and all that comes with it) and residue theory- limits. We can continue this for the entire subject (Laurent series, normal families, conformity, etc.). Lebesgue integration theory and the powerful theorems that come with it are primarily centered around, essentially, swapping limits, and much of measure theory is built around this. In functional analysis, we certainly have times where we can run into not metrizable (or even Hausdorff) topologies, but limits are still central. Many of the common types of spaces like Hilbert, Banach, and Frechet spaces all make use of a metric. We have things like the uniform boundedness principle, compact operators, spectral theory, semigroups, Fourier analysis (this is a field in its own right, of course, but it deals with a lot of functional analysis), and much more, all of which deal with limits (either explicitly or via objects related to previously-discussed material). A significant subfield of analysis is PDEs. As I said earlier, PDEs often deals with obtaining proper norm estimates on certain quantities in appropriate function spaces to prove e.g. existence and regularity of solutions, once again highly dependent on limit arguments (and, of course, the norms themselves are limit-dependent). Something else that I didn't touch on, but is important to discuss, is just how many modes of convergence we use. Some common types of convergence of sequences of functions and operators are pointwise convergence, uniform convergence, local uniform convergence, almost everywhere convergence, convergence in measure, $L^p$ convergence, (more generally) convergence in norm, weak convergence, weak star convergence, uniform operator convergence, strong operator convergence, weak operator convergence, etc. I didn't distinguish between convergence for operators and functions too much here, but it is important to do so e.g. weak star convergence is pointwise convergence for elements of the dual, but I listed them as separate. EDIT: The OP asked for some details. Of course, writing everything above in details would amount to me writing books! Instead of talking about everything, I'd like to talk about one pervasive concept in analysis that comes from limits- the integral. I'd like to note that much of the post deals with limits in many other ways as well, explicitly or otherwise. 
In real analysis, there are various equivalent ways that the integral is defined, but I'd like to use Riemann sums here: we say that a function is Riemann integrable on an interval $[a,b]$ if and only if there exists $I\in \mathbb{R}$ so that $$I=\lim_{\|P\|\rightarrow 0}\sum\limits_{i=1}^n f(t_i)(x_i-x_{i-1}),$$ where $\|P\|$ denotes the size of the partition, where $t_i\in [x_{i-1},x_i].$ We call $I$ the integral, and we denote it as $$I=\int\limits_a^b f(s)ds.$$ The integral of a continuous (limits!) function is related to the derivative (limits!) through the fundamental theorem of calculus: For $f\in C\left([a,b]\right),$ a function $F$ satisfies $$F(x)-F(a)=\int\limits_a^x f(s)ds$$ for any $x\in [a,b]$ if and only if $F'=f$ . As the name of the theorem states, this is pretty important. All of this generalizes appropriately to higher dimensions, but I won't discuss that here. Sometimes, integrating a function on its own can be hard, so we approximate it with easier functions (or sometimes, we have a sequence of functions tending to something, and we want to know about the limit and how it integrates). A major theorem in an introductory real analysis class is that if we have a sequence of Riemann integrable functions $(f_n)$ converging uniformly on $[a,b]$ to $f$ , then $f$ is Riemann integrable, and we can swap the limit and integral. So, we can swap these two limits. We can do the same for a series of functions that converges uniformly. Okay, let's move on. In complex analysis, the integral is still of importance. Integrals in the complex plane are path integrals, which can be defined similarly. Complex analysis is centered on studying holomorphic functions, and a theorem of Morera relates this to the integral Let $g:\Omega\rightarrow\mathbb{C}$ be continuous, and $$\int\limits_\gamma g(z)dz=0$$ whenever $\gamma=\partial R$ and $R\subset\Omega$ is a rectangle (with sides parallel to the real and imaginary axes). Then, $g$ is holomorphic. Cauchy's theorem states that the integral of a holomorphic function along a closed curve is zero. This can be used to prove the Cauchy integral formula: If $f\in C^1(\bar{\Omega})$ is holomorphic on a bounded region $\Omega$ with smooth boundary, then for any $z\in\Omega$ , we have $$f(z)=\frac{1}{2\pi i}\int\limits_{\partial\Omega}\frac{f(\zeta)}{\zeta-z}d\zeta.$$ Taking the derivative and iterating proves that holomorphic functions are smooth, and further we get that they have a power series expansion, where the coefficients given by integration. This all hinges on integration. The integral pops up in many other fundamental ways here. One is in the form of the mean-value property, which states that $$f(z_0)=\frac{1}{2\pi}\int\limits_0^{2\pi} f(z_0+re^{i\theta})d\theta$$ whenever $f$ is holomorphic on $\Omega$ open and the closed disk centered $z_0$ of radius $r$ is contained in $\Omega.$ We use the integral to prove other important theorems, such as the maximum modulus principle, Liouville's theorem, etc. We also use it to define a branch of the complex logarithm, to define the coefficients of a Laurent series, and to count zeros and poles of functions (argument principle). We also like to calculate various types of integrals in the complex plane where the integrand has singularities (often as a trick to calculate real integrals, which is especially relevant for calculating Fourier transforms). This uses the residue theorem, and residues are also calculated via taking limits. 
The theorem states that $$\int\limits_{\partial\Omega}f(z)dz=2\pi i\sum_j\text{Res}_{z_j}(f),$$ where $f$ is holomorphic on an open set $\Omega$ , except at singularities $\{z_j\},$ each of which has a relatively compact neighborhood on which $f$ has a Laurent series (the residue is the $(-1)$ 'th indexed coefficient, which are also integrals by construction of the Laurent series). I think that's enough about complex analysis. Now, let's talk a bit about measure theory. The Riemann integral is somewhat restrictive, so we generalize it to the Lebesgue integral (I have a post about the construction, see How to calculate an integral given a measure? ). Note the involvement of limits in the post. If a function is Riemann integrable, then it is equivalent to its Lebesgue integral. We can define the Lebesgue integral on any measure space. Two of the biggest theorems are the monotone and dominated convergence theorems: If $f_j\in L^1(X,\mu)$ , $0\leq f_1(x)\leq f_2(x)\leq \cdots,$ and $\|f_j\|_{L^1}\leq C<\infty,$ then $\lim_j f_j(x)=f(x),$ with $f\in L^1(X,\mu),$ and $\|f_j-f\|_{L^1}\rightarrow 0.$ and If $f_j\in L^1(X,\mu)$ and $\lim_j f_j(x)=f(x)$ $\mu$ -a.e., and there is an $F\in L^1(X,\mu)$ so that $F$ dominates each $|f_j|$ pointwise $\mu$ -a.e., then $f\in L^1(X,\mu)$ and $\|f_j-f\|_{L^1}\rightarrow 0.$ We have immediate generalization to $L^p$ spaces, as well. These theorems are used extensively to prove things in measure theory, functional analysis, and PDEs. The dominated convergence theorem generalizes the result of using uniform convergence to swap limit and integral. We can use these to show that $L^p$ is complete for $p\in [1,\infty),$ in fact a Banach space, as these define norms. We show that if $p$ is in the range and $X$ is $\sigma$ -finite, then the dual of $L^p$ is $L^q$ , where $1/p+1/q=1,$ and this functional is defined, wait for it, via integration. We're beginning to overlap a bit with functional analysis, so I'll switch gears a bit. We often use the integral to define linear functionals, and one such example is in the Riesz representation theorem. Here, we find that the dual of $C(X)$ , where $X$ is a compact metric space, is the space of finite, signed measure of the Borel sigma algebra (Radon measures). In particular, to any bounded linear function $\omega$ on $C(X)$ , there exists a unique Radon measure $\rho$ such that $$\omega (f)=\int\limits_x fd\rho.$$ Also, we get a generalization of the fundamental theorem of calculus using the Hardy-Littlewood maximal function: Let $f\in L^1(\mathbb{R}^n, dx)$ and consider $$A_rf(x)=\frac{1}{m(B_r)}\int\limits_{B_r(x)}f(y)dy,$$ where $r>0.$ Then, $$\lim_{r\rightarrow 0} A_rf(x)=f(x)$$ a.e. In fact, if $f\in L^p$ , then $$\lim_{r\rightarrow 0}\frac{1}{m(B_r)}\int\limits_{B_r(x)}|f(y)-f(x)|^p dy=0$$ for a.e. $x$ . Something of interest is swapping integrals themselves, which manifests in the theorems of Tonelli and Fubini. 
There are multiple versions, most of which have objects which require definitions, so I'll just give a quick version, the Fubini-Tonelli theorem for complete measures: Let $(X,M,\mu)$ and $(Y,N,\nu)$ be complete, $\sigma$ -finite measure spaces, and let $(X\times Y,\mathcal{L},\lambda)$ be the completion of $(X\times Y,M\otimes N, \mu\times\nu).$ If $f$ is $\mathcal{L}$ -measurable, and $f\in L^1(X\times Y,\lambda),$ then $$\int fd\lambda=\int\int f(x,y)d\mu(x)d\nu(y)=\int\int f(x,y)d\nu(y)d\mu(x).$$ EDIT: I added in a version of Fubini/Tonelli and material related to Cauchy's theorem and the Cauchy integral formula. I am a bit busy today, but I will post about functional analysis tomorrow! Now, we'll move on to functional analysis. This is a pretty big topic, so I'm just going to pick a few things to talk about. We already had some discussion of $L^p$ spaces, and a lot of the remaining discussion will be related to them in some manner, making all of the discussion inherently reliant on the integral. For this reason, I'm going to stop outlining exactly when we're using integrals. Recall that $L^p$ spaces constitute Banach spaces. It is important to know that if $p=2,$ then the space becomes a Hilbert space, which is a very nice structure and makes the use of $L^2$ extremely common in PDEs. If $\mu$ is $\sigma$ -finite and the $\sigma$ -algebra associated to $X$ is finitely-generated, then $L^2(X,\mu)$ is separable, allowing us to get an orthonormal basis. For example, $\{e^{i n \theta}\}_{n\in\mathbb{Z}}$ is an orthonormal basis for $L^2(S^1,dx/2\pi)$ (see Fourier series). Something that is very useful in PDEs is to define functions of operators, such as $\sqrt{-\Delta},$ where $\Delta$ is the Laplacian, and the integral arises naturally in various forms of functional calculus. One such example is the holomorphic functional calculus (I will discuss more later on). If we have a bounded linear operator $T$ , a bounded set $\Omega\subset\mathbb{C}$ with smooth boundary containing the spectrum $\sigma(T)$ of $T$ in the interior, and a function $f$ holomorphic on a neighborhood of $\Omega,$ then we can define $$f(T)=\frac{1}{2\pi i}\int\limits_{\partial\Omega} f(\zeta)R_\zeta d\zeta,$$ where $R_\zeta:=(\zeta-T)^{-1}$ is called the resolvent of $T$ . This is one, simple way of making rigorous the idea of functions of operators. The holomorphic functional calculus can be extremely useful, although functional calculus can be made more general (such as the Borel functional calculus). One of the commonly-studied types of operators are integral operators, given in the form $$Ku(x)=\int\limits_X k(x,y)u(y)d\mu (y)$$ on a measure space $(X,\mu)$ . These arise naturally in solving differential/integral equations. You'll notice, for example, that the fundamental solutions of some of the basic PDEs are given via convolution, in which case our solution involves integral operators. You may have heard of the Hilbert-Schmidt kernel theorem, which says the following: If $T:L^2(X_1,\mu_1)\rightarrow L^2(X_2,\mu_2)$ is a Hilbert-Schmidt operator, then there exist $K\in L^2(X_1\times X_2, \mu_1\times \mu_2)$ so that $$(Tu,v)_{L^2}=\int\int K(x_1,x_2)u (x_1)\overline{v(x_2)}d\mu_1(x_1)d\mu_2(x_2).$$ The converse also holds: given $K\in L^2(X_1\times X_2, \mu_1\times \mu_2)$ , then $T$ as written above defines a Hilbert-Schmidt operator, and it satisfies $\|T\|_{\text{HS}}=\|K\|_{L^2}.$ For a generalization to tempered distributions (which we'll talk about later), look up the Schwartz kernel theorem. 
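The contour-integral definition of $f(T)$ above can be checked numerically in finite dimensions, where everything reduces to matrix algebra. Here is a small sketch of my own (assuming NumPy and SciPy are available): it approximates $\frac{1}{2\pi i}\oint e^{\zeta}(\zeta I - T)^{-1}\,d\zeta$ over a circle enclosing the spectrum of a $2\times 2$ matrix and compares the result with the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# A small matrix whose spectrum {0, 0.5} lies inside the contour below.
T = np.array([[0.0, 1.0],
              [0.0, 0.5]])
I = np.eye(2)

# Parametrize a circle of radius 2 around the origin.
N = 400
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
zeta = 2.0 * np.exp(1j * theta)
dzeta = 2.0j * np.exp(1j * theta) * (2.0 * np.pi / N)

# f(T) = (1 / 2*pi*i) * \oint f(zeta) (zeta I - T)^{-1} d zeta, with f = exp.
fT = np.zeros((2, 2), dtype=complex)
for z, dz in zip(zeta, dzeta):
    fT += np.exp(z) * np.linalg.inv(z * I - T) * dz
fT /= 2.0j * np.pi

print(np.max(np.abs(fT - expm(T))))  # tiny: agrees with the matrix exponential
```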
Another area of interest is semigroups, which (once again) have applications in differential equations. If we take a contraction semigroup $\{S(t)\}_{t\geq 0}$ on a real Banach space $X$ with infinitesimal generator $A$ , then we have the following cool result: If $\lambda>0,$ then $\lambda$ is in the resolvent set of $A$ , denoted $\rho(A),$ and if $R_\lambda=(\lambda-A)^{-1}$ denotes the resolvent of $A$ , then $$R_\lambda u=\int\limits_0^\infty e^{-\lambda t} S(t) u\ dt$$ for $u\in X,$ and $\|R_\lambda\|\leq\frac{1}{\lambda}.$ This is, we can write the resolvent as the Laplace transform of the semigroup. From here, one can prove some major results, like the Hille-Yosida theorem for contraction semigroups: Let $A$ be a closed, densely-defined linear operator $X$ . Then, $A$ is the generator of a contraction semigroup $\{S(t)\}_{t\geq 0}$ if and only if $$(0,\infty)\subset\rho(A)\hspace{.25in}\text{ and } \hspace{.25in} \|R_\lambda\|\leq\frac{1}{\lambda}$$ for $\lambda>0.$ We can generalize past contraction semigroups to get a more general version of this theorem. Let me also state a related theorem, Stone's theorem: If $A$ is self-adjoint, then $iA$ generates the unitary group $U(t)=e^{itA}.$ Conversely, if $\{U(t)\}_{t\geq 0}$ is a semigroup of unitary operators, then there exists a self-adjoint operator $A$ so that $S(t)=e^{itA}.$ Semigroup theory has direct application to PDEs. For example, if you have a parabolic equation, say $$\partial_t u+Lu=0$$ with appropriate boundary and initial conditions, where $L$ is uniformly elliptic, then you can apply semigroup theory to this problem to guarantee a unique solution given by $S(t)u(x,0)$ . This is a generalization of solving linear ODEs on finite dimensional spaces \begin{align*}x'&=Ax\\ x(0)&=x_0 \end{align*} which have the solution $x(t)=e^{tA}x_0,$ and $\{e^{tA}\}_{t\geq 0}$ generate a one-parameter semigroup. As I mentioned earlier, Fourier analysis is certainly its own field, but it uses a lot of functional analytic techniques, and it is monumentally important in PDEs. Let us first define the Fourier transform $\mathcal{F}:L^1(\mathbb{R}^n)\rightarrow L^\infty (\mathbb{R}^n)$ by $$\mathcal{F} u(\xi)={\hat{u}}(\xi):=(2\pi)^{-n/2}\int\limits_{\mathbb{R}^n} u(x)e^{-ix\cdot \xi}\ dx.$$ We can also extend this to a bounded operator from $L^2\rightarrow L^2.$ For nice enough functions, the Fourier transform enjoys many useful properties, such as $$D_{\xi}^\alpha (\mathcal{F} u)=\mathcal{F}((-x)^\alpha u)$$ and $$\mathcal{F}(D_x^\alpha u)=\xi^\alpha \mathcal{F} u,$$ where $D^\alpha=\frac{1}{i^{|\alpha|}}\partial^\alpha$ and $\alpha$ is a multi-index. It turns out that a very natural place to define this is on the Schwartz space $\mathcal{S},$ where $\mathcal{F}$ is a topological isomorphism. We have the famed Fourier inversion formula $$u(x)=(2\pi )^{-n/2}\int\limits_{\mathbb{R}^n}\hat{u}(\xi)e^{ix\cdot \xi}\ d\xi,$$ and from here, we can get the Plancheral theorem: $$\|u\|^2_{L^2}=\|\mathcal{F}u\|^2_{L^2},$$ so $\mathcal{F}$ is an isometric isomorphism. We can define the Fourier transform on the dual space of $\mathcal{S},$ the space of tempered distributions (denoted $\mathcal{S}'$ ), via duality: $$\langle\hat{u},f\rangle=\langle u,\hat{f}\rangle,$$ where $u\in\mathcal{S}'$ and $f\in\mathcal{S}.$ Due to the algebra and calculus of distributions, one can extend all of the previously-listed properties to here. 
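A discrete shadow of the Plancherel theorem just quoted is easy to check numerically, as a minimal sketch of my own (assuming NumPy): with the unitary normalization of the discrete Fourier transform, the $\ell^2$ norm is preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(1024)

u_hat = np.fft.fft(u, norm="ortho")   # unitary DFT normalization

# Discrete Plancherel/Parseval identity: ||u||_2 == ||u_hat||_2.
print(np.linalg.norm(u), np.linalg.norm(u_hat))
```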
Now, we might as well mention a powerful and important generalization of the Hilbert-Schmidt kernel theorem, the Schwartz kernel theorem : Let $A:\mathcal{S}\rightarrow\mathcal{S}'$ be a continuous linear map. Then, there exists $K_A\in\mathcal{S}'(\mathbb{R}^n\times\mathbb{R}^n)$ so that for all $u,v\in\mathcal{S},$ $$\langle Au,v\rangle=\langle K_A, u\otimes v\rangle,$$ where $(u\otimes v)(x,y)=u(x)v(y)\in \mathcal{S}(\mathbb{R}^n\times\mathbb{R}^n).$ We sometimes abuse notation and write this as $Au(x)=\int K_A(x,y) u(y)\, dy,$ so that $$\langle Au,v\rangle=\iint K_A(x,y) v(x)u(y)\, dydx.$$ We can use the Fourier transform to define another functional calculus via Fourier multipliers. Motivated by the Fourier transform of the Laplacian, we define $$f(D)u=\mathcal{F}^{-1}\left(f\left(|\xi|\right)\mathcal{F}u\right),$$ for a "nice" function $f$ (by "nice," I mean so that the above makes sense). This allows us to make sense of objects like $e^{t\Delta}$ or $\cos\left(t\sqrt{-\Delta}\right).$ The former is related to the fundamental solution of the heat equation and the latter to the fundamental solution of the wave equation (notice anything related to semigroups for the former?). Fourier multipliers generalize to pseudodifferential operators (which are also a generalization of singular integral operators), but I'd rather not get into this, since this section is already very long. Fourier multipliers can be used to give a general definition of Sobolev spaces, which are of fundamental importance in PDEs, as well. We define these spaces, for any real $k$ , as $$W^{k,p}(\mathbb{R}^n)=\left\lbrace u\in \mathcal{S}'(\mathbb{R}^n): \mathcal{F}^{-1}\left(\langle \xi\rangle ^{k} \mathcal{F} u\right)\in L^p(\mathbb{R}^n\right\rbrace,$$ where $\langle \xi\rangle=\left(1+|\xi|^2\right)^{1/2}.$ These are Banach spaces, and when $p=2,$ it is a Hilbert space. For this reason, we use the provocative notation $W^{k,2}=H^k.$ Something very significant that I haven't discussed yet is spectral theory, so I'll say a bit about it now. This is another field with signficant application to PDEs (and quantum mechanics). Spectral theory is directly connected to the Hilbert space $L^2$ , so the integral is slightly "hidden." Here is one version of the spectral theorem, which can be proven using holomorphic functional calculus or the Fourier transform on $\mathcal{S}'$ : If $A$ is a bounded, self-adjoint operator on a Hilbert space $H$ , then there exists a measure space $(X,\mathfrak{F},\mu)$ , a unitary map $\Phi:H\rightarrow L^2(X,\mu),$ and $a\in L^\infty (X,\mu)$ so that $$\Phi A\Phi^{-1}f(x)=a(x)f(x)$$ for all $f\in L^2(X,\mu).$ Here, $a$ is real-valued, and $\|a\|_{L^\infty}=\|A\|.$ There is also a near-identical version for bounded unitary operators; the only difference is that $|a|=1$ on $X$ . We can further extend to normal operators. 
This also generalizes for unbounded operators: If $A$ is an unbounded, self-adjoint operator on a separable Hilbert space $H$ , then there is a measure space $(X,\mu)$ , a unitary map $\Phi:L^2(X,\mu)\rightarrow H$ , and a real-valued, measurable function $a$ on $X$ such that $$\Phi^{-1}A\Phi f(x)=a(x)f(x)$$ for $\Phi f\in \mathcal{D}(A).$ If $f\in L^2(X,\mu),$ then $\Phi f\in\mathcal{D}(A)$ if and only if $af\in L^2(X,\mu).$ This givens us a new functional calculus: If $f:\mathbb{R}\rightarrow \mathbb{C}$ is Borel, then we can define $f(A)$ via $$\Phi^{-1}f(A)\Phi g(x)=f(a(x))g(x).$$ If $f$ is bounded and Borel, then we can define this for any $g\in L^2(X,\mu),$ and $f(A)$ will be a bounded operator on $H$ . If not, then we can define $$\mathcal{D}(f(A))=\{\Phi g\in H: g\in L^2(X,\mu)\text{ and } f(a(x))g\in L^2(X,\mu)\}.$$ I think I'll stop here. PDEs utilizes all of the above, but since it is an application of these subjects, rather than "pure analysis," I'm going to leave that out. TL;DR Limits are fundamental in analysis, and one such example of their use is in the definition of the integral, which is ubiquitous in the field. References: 1. Gerald Folland: Real Analysis 2. Elias M. Stein and Rami Shakarchi: Complex Analysis 3. Michael Taylor: Introduction to Complex Analysis (link to pdf on his website: http://mtaylor.web.unc.edu/files/2018/04/complex.pdf) 4. Michael Taylor: Measure Theory and Integration 5. Michael Taylor: Partial Differential Equations I 6. Michael Taylor: Partial Differential Equations II 7. John Conway: A Course in Functional Analysis 8. Lawrence Evans: Partial Differential Equations EDIT: I've added in functional analysis (and, arguably, went a bit overboard) and listed some references, as well as tweaked some previous statements. I apologize in advance if there are typos- this was difficult to proofread! EDIT: I've added in a statement of the Schwartz kernel theorem, which I forgot to write before.
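In finite dimensions the spectral theorem and the resulting functional calculus reduce to diagonalizing a symmetric matrix, which makes for a quick illustration (a sketch of my own, assuming NumPy): multiplication by $a(x)$ becomes multiplication by the eigenvalues, and $f(A)$ is obtained by applying $f$ to those eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)          # symmetric positive definite

w, Q = np.linalg.eigh(A)             # A = Q diag(w) Q^T with Q orthogonal

def f_of_A(f):
    """Functional calculus in finite dimensions: f(A) = Q diag(f(w)) Q^T."""
    return Q @ np.diag(f(w)) @ Q.T

sqrtA = f_of_A(np.sqrt)
print(np.max(np.abs(sqrtA @ sqrtA - A)))   # tiny: (A^{1/2})^2 recovers A
```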
{ "source": [ "https://math.stackexchange.com/questions/3257398", "https://math.stackexchange.com", "https://math.stackexchange.com/users/247251/" ] }
3,262,735
The Fibonacci sequence $P_n = P_{n-1}+P_{n-2}$ is $$1,1,2,3,5,8,13,21,34,55,89,144,233,377, 610, \cdots $$ I learnt that the fraction $1/89$ contains all the numbers in the sequence. $$\begin{align} \frac{1}{89}&= 0.\overline{01123595505617977528089887640449438202247191~}\\ &=0.01+0.001+0.0002+0.00003+0.000005+0.0000008+~\\ &~~~~~0.00000013+0.000000021+0.0000000034+0.00000000055+ ~\\ &~~~~~0.000000000089+0.0000000000144+0.00000000000233+~\\ &~~~~~0.000000000000377+0.0000000000000610+\cdots \end{align}$$ where the over line represents repeated cycle. Rule of the number of zeros (not sure if this is right or not): Don't add zero for the next number if it is "smaller" than the previous number. To compare numbers, we only keep the first digit and make the rest of the digits after the decimals point. For example, $13$ in this case is "smaller" than $8$ because $1.3<8$ , so we don't add any zero for $13$ -- the same $7$ zeros in front of both $13$ and $8$ . On the other hand, if the number in the sequence is greater than or equal to the previous one, we would add a zero before the greater number. For example, $3>2$ , so we add a zero in front of $3$ , making $5$ zeros in front of $3$ and $4$ zeros in front of $2$ . I think that the rule of the number of zeros applies to all the metallic sequences. If not, let's assume it is for now and keep on reading. I then decided to further explore other metallic sequences. Lets define the $n^{th}$ metallic sequence $$\sigma_n: P_n = nP_{n-1}+P_{n-2}$$ In this post, the Fibonacci sequence is $\sigma_1$ . The next metallic sequence $\sigma_2$ , or the silver sequence, is $$\sigma_2: P_n = 2P_{n-1}+P_{n-2}$$ $$1,2,5,12,29,70,169,408,985,2378,5741,13860,33461,80782,\cdots$$ I took a guess that $1/79$ would contain all the numbers in $\sigma_2$ , and it seems like I am correct for the numerical value although I am not sure how to prove the relationship. $$\begin{align} \frac{1}{79}&=0.\overline{0126582278481}\\ &= 0.01+0.002+0.005+0.00012+0.000029+0.0000070+~\\ &~~~~~0.00000169+0.000000408+0.0000000985+~\\ &~~~~~0.00000002378+0.000000005741+0.000000001386+~\\ &~~~~~0.00000000033461+0.000000000080782+\cdots \end{align}$$ I will present two more cases so that you will get the idea of the pattern. Here is $\sigma_3$ , or copper sequence: $$\sigma_3: P_n = 3P_{n-1}+P_{n-2}$$ $$1,3,10,33,109,360,1189,3927,12970,42837,141481,467280$$ $$\begin{align} \frac{1}{69}&=0.\overline{01449275362}\\ &= 0.01+0.003+0.0010+0.00033+0.000109+0.0000360+~\\ &~~~~~0.00001189+0.000003927+0.0000012970+~\\ &~~~~~0.00000042837+0.000000141481+0.000000046728+~\cdots \end{align}$$ Lastly, I will present the case for $\sigma_{9}$ : $$\sigma_9: P_n = 9P_{n-1}+P_{n-2}$$ $$1,9,82,747,6805,61992,564733,5144589,46866034,426938895,3889316089,\cdots$$ $$\begin{align} \frac{1}{9}&=0.\overline{1}\\ &=0.01+0.009+0.0082+0.00747+0.006805+0.0061992+~\\ &~~~~~0.00564733+0.005144589+0.0046866034+0.00426938895+~\\ &~~~~~0.003889316089+\cdots \end{align}$$ For $\sigma_9$ , I know that if you only type these numbers into the calculator, the value is by no way close to $1/9$ because the series approaches $1/9$ very slowly, so we have to type in a lot of numbers to get the value close to $1/9$ . Now, I have two questions on hand: $1)$ How to prove that a fraction, such as $1/89,~1/79,~1/69,\cdots,~1/9$ , is the sum of all the numbers in the corresponding metallic sequence? $2)$ I am trying to find a fraction that contains all the numbers in $\sigma_{10}$ , but with no avail. 
Are there any other fractions that contain all the numbers in the metallic sequence $\sigma_{10}$ ? Maybe also fractions for $\sigma_{11},~ \sigma_{12}$ , and so on?
Answer to question 1) : The generating function for the Fibonacci numbers $F_n$ is known to be $$\dfrac{1}{1-(x+x^2)}=\underbrace{1}_{F_0}+\underbrace{1}_{F_1}x+\underbrace{2}_{F_2}x^2+\underbrace{3}_{F_3}x^3+\underbrace{5}_{F_4}x^4+\cdots+F_nx^n+...$$ Taking $x=0.1$ gives : $$\dfrac{1}{1-0.11}=1+1 \times 0.1+2 \times 0.01+3 \times 0.001+5 \times 0.0001+\cdots+F_n 0.1^n+...$$ justifying the equality of LHS and RHS of your first identity (multiplied by $100$ ). Same process for the other metallic sequences. For example, the generating functions of the silver and bronze sequences are resp. $$\dfrac{1}{1-(2x+x^2)} \ \ \ \text{and} \ \ \ \dfrac{1}{1-(3x+x^2)}$$ An interesting generalization along these lines : the recent paper https://arxiv.org/pdf/1901.02619.pdf Remark: one finds as well this generating function for the Fibonacci numbers: $$\dfrac{x}{1-x-x^2}$$ if one takes a shifted-by-one convention: $$F_0=0, \ \ F_1=1, \ \ , F_2=1, \ \ F_3=2, \cdots$$ (this is the convention taken in OEIS )
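One can also see the mechanism numerically, as a small sketch of my own in plain Python: summing $F_n\,(0.1)^n$ with the convention $F_0=F_1=1$ converges to $\frac{1}{1-0.11}=\frac{100}{89}$ , which is exactly the identity in the question multiplied by $100$ .

```python
# Fibonacci numbers with the convention F_0 = F_1 = 1, matching the generating function above.
F = [1, 1]
while len(F) < 60:
    F.append(F[-1] + F[-2])

s = sum(f * 0.1**n for n, f in enumerate(F))
print(s)            # 1.1235955056... 
print(1 / 0.89)     # 1.1235955056... = 100/89
```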
{ "source": [ "https://math.stackexchange.com/questions/3262735", "https://math.stackexchange.com", "https://math.stackexchange.com/users/587091/" ] }
3,264,677
Why can a line never intersect a plane in exactly two points? I know this seems like a really simple question, but I'm having a hard time figuring out how to answer it. I also tried googling the question but I couldn't find an answer for exactly what I'm looking for.
If you pick two points on a plane and connect them with a straight line then every point on the line will be on the plane. Given two points there is only one line passing those points. Thus if two points of a line intersect a plane then all points of the line are on the plane.
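To spell out the algebra behind this: if the plane is $\{x : \mathbf{n}\cdot x = d\}$ and $P_1, P_2$ are two intersection points, then every point of the line through them has the form $P_1 + t(P_2 - P_1)$ , and $$\mathbf{n}\cdot\bigl(P_1 + t(P_2 - P_1)\bigr) = \mathbf{n}\cdot P_1 + t(\mathbf{n}\cdot P_2 - \mathbf{n}\cdot P_1) = d + t(d - d) = d$$ for every $t$ , so the whole line lies in the plane; it cannot meet the plane in exactly two points.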
{ "source": [ "https://math.stackexchange.com/questions/3264677", "https://math.stackexchange.com", "https://math.stackexchange.com/users/682634/" ] }
3,269,812
In a polygon, if all the sides are equal, it doesn’t necessarily mean that the polygon is regular (eg. a rhombus). Is this also true for angles? Meaning can you draw a polygon whose interior angles are equal, but the shape is still not regular? I couldn’t think of any examples, but I’m sure there is one.
Start with any polygon that has more than three edges. Move one of the edges parallel to itself a little, extend or contract the adjacent edges appropriately and you will have a new polygon with the same edge directions but different relative side lengths. If you start with a regular polygon the angles will remain all the same. The idea behind this construction is generic. If you start with any sequence of $n > 3$ vectors that span the plane there will be an $n-2$ dimensional space of linear combinations that vanish. Each such linear combination defines a polygon with the same edge directions: form the partial sums in order to find the vertices.
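For a concrete instance of this construction, here is a small numerical sketch of my own (assuming NumPy): take edge directions at $0^\circ, 60^\circ, \dots, 300^\circ$ but side lengths $2,1,1,2,1,1$ . The edge vectors still sum to zero, so they close up into an equiangular hexagon that is clearly not regular.

```python
import numpy as np

angles = np.deg2rad(60 * np.arange(6))      # equiangular: directions 0, 60, ..., 300 degrees
lengths = np.array([2, 1, 1, 2, 1, 1])      # not all equal, so the hexagon is not regular

edges = lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]
print(edges.sum(axis=0))                    # ~[0, 0]: the edge vectors close up

vertices = np.vstack([[0, 0], np.cumsum(edges, axis=0)])
print(vertices)                             # partial sums give the vertices (last repeats the first)
```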
{ "source": [ "https://math.stackexchange.com/questions/3269812", "https://math.stackexchange.com", "https://math.stackexchange.com/users/665586/" ] }
3,273,874
After noticing that the function $f: \mathbb R\rightarrow \mathbb R $ $$ f(x) = \left\{\begin{array}{l} \sin\frac{1}{x} & \text{for }x\neq 0 \\ 0 &\text{for }x=0 \end{array}\right. $$ has a graph that is a connected set, despite the function not being continuous at $x=0$ , I started wondering: does there exist a function $f: X\rightarrow Y$ that is nowhere continuous, but still has a connected graph? I would like to consider three cases: (1) $X$ and $Y$ being general topological spaces; (2) $X$ and $Y$ being Hausdorff spaces; (3) ADDED: $X=Y=\mathbb R$ . But if you have answers for other, more specific cases, they may be interesting too.
Check out this paper: F. B. Jones , Totally discontinuous linear functions whose graphs are connected , November 23, (1940). Abstract: Cauchy discovered before 1821 that a function satisfying the equation $$ f(x)+f(y)=f(x+y) $$ is either continuous or totally discontinuous. After Hamel showed the existence of a discontinuous function, many mathematicians have concerned themselves with problems arising from the study of such functions. However, the following question seems to have gone unanswered: Since the plane image of such a function (the graph of $y =f(x)$ ) must either be connected or be totally disconnected, must the function be continuous if its image is connected? The answer is no. In particular, Theorem 5 presents a nowhere continuous function $f:\Bbb R \rightarrow \Bbb R$ whose graph is connected. Whether Conway base 13 function is such an example remains unknown. (at least on MSE; see Is the graph of the Conway base 13 function connected? ) It turns out the graph of Conway base 13 function is totally disconnected . See this brilliant answer .
{ "source": [ "https://math.stackexchange.com/questions/3273874", "https://math.stackexchange.com", "https://math.stackexchange.com/users/653715/" ] }
3,275,538
If water is poured into a cone at a constant rate and if $\frac {dh}{dt}$ is the rate of change of the depth of the water, I understand that $\frac {dh}{dt}$ is decreasing. However, I don't understand why $\frac {dh}{dt}$ is non-linear. Why can't it be linear? I am NOT asking whether or not the height function is linear. Many are telling me that the derivative of height is not a constant so thus the height function is not linear, but this is not what I am asking. This is my mistake, because I had used $h(t)$ originally to denote the derivative of height which is what my book used. Rather I am asking if $\frac {dh}{dt}$ is linear or not and why. It would be nice if someone could better explain what my book is telling me: At every instant the portion of the cone containing water is similar to the entire cone; the volume is proportional to the cube of the depth of the water. The rate of change of depth (the derivative) is therefore not linear.
I understand that $\frac {dh}{dt}$ is decreasing. However, I don't understand on an intuitive level why $\frac {dh}{dt}$ is non-linear. Because any strictly decreasing function that is linear eventually becomes negative, but you already know that $\frac {dh}{dt}$ is always positive.
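To make this quantitative for a cone filled vertex-down (so that the water region is similar to the whole cone, as the book notes): the radius of the water surface is proportional to the depth, say $r = ch$ , so $$V = \frac{\pi}{3}r^2 h = \frac{\pi c^2}{3}h^3.$$ If water is poured at a constant rate $\frac{dV}{dt} = Q$ , differentiating gives $$Q = \pi c^2 h^2 \frac{dh}{dt} \quad\Longrightarrow\quad \frac{dh}{dt} = \frac{Q}{\pi c^2 h^2},$$ which is positive for all $h > 0$ , decreasing as $h$ grows, and manifestly not a linear function of $h$ or of time.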
{ "source": [ "https://math.stackexchange.com/questions/3275538", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,275,562
The $6$ letters in the list $G,H,I,J,K,L$ are to be rearranged so that $G$ is the third letter in the list and $H$ is not next to $G$ . How many such arrangements are possible? I'm guessing the answer is $72=3 \cdot (4 \cdot 3 \cdot 2 \cdot 1)$ . Is this right?
{ "source": [ "https://math.stackexchange.com/questions/3275562", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,275,574
Given a positive number $t$ , prove that $$7> \frac{4\,t^{3}+ 56\,t^{2}+ 191\,t+ 188}{\sqrt{t^{2}+ 2\,t+ 8}\left ( 7\sqrt{t^{2}+ 2\,t+ 8}+ 2(6\,t- 2+ t^{2}) \right )}$$ Original problem is: Given two positive numbers $x,\,y$ so that $x^{2}+ 2\,y^{2}= \frac{8}{3}$ . Prove that $$v(x)\equiv v= 7(x+ 2\,y)- 4\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}\leqq 8$$ 1st solution: (with Cauchy-Schwarz) We need to prove $v(\!x\!)\leqq 7(x+ 2\,y)- (3 x+ 10\,y)= 4(x+ y)$ . Then $4(x+ y)\leqq 4\sqrt{\left ( 1+ \frac{1}{2} \right )(x^{2}+ 2\,y^{2})}= 8\,\therefore\,v(x)\leqq 8$ / q.e.d The 2rd solution is my one without C-S : $$x^{2}+ 2\,y^{2}= \frac{8}{3}\,\therefore\,(x- 2\,y)\left ( x- \frac{4}{3} \right )\geqq 0,\,2\,{y}'= -\,\frac{x}{y}$$ Thus, we have $${v}'(x)= 7(1+ 2\,{y}')- \frac{4(2\,x+ 2\,y+ 2\,x{y}'+ 16\,y{y}')}{2\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}}=$$ $$= 7\left ( 1- \frac{x}{y} \right )- \frac{2\left ( 2\,x+ 2\,y- \frac{x^{2}}{y}- 8\,x \right )}{\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}}= 7\left ( 2- \frac{x}{y} \right )+ \frac{2(\,6xy- 2\,y^{2}+ x^{2})}{y\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}}- 7=$$ $$= \frac{1}{y}\left \{ -7(x- 2\,y)+ \frac{4(6\,xy- 2\,y^{2}+ x^{2})^{2}- 49\,y^{2}(x^{2}+ 2\,xy+ 8\,y^{2})^{2}}{\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}\left ( 2(6\,xy- 2\,y^{2}+ x^{2})+ 7\,y\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}} \right )} \right \}=$$ $$= -\,\frac{x- 2\,y}{y}\left \{ \underbrace{7- \frac{4\,x^{3}+ 56\,x^{2}y+ 191\,xy^{2}+ 188\,y^{3}}{\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}}\left ( 2(6\,xy- 2\,y^{2}+ x^{2})+ 7\,y\sqrt{x^{2}+ 2\,xy+ 8\,y^{2}} \right )}}_{> 0\,(we\,need\,to\,prove\,that\,it\,holds\,!)} \right \}$$ Indeed, for $t= -\,2\,{y}'= \frac{x}{y}> 0$ , we have $$\frac{\mathrm{d} }{\mathrm{d} t}\left ( \underbrace{2(6\,t- 2+ t^{2})+ 7\sqrt{t^{2}+ 2\,t+ 8}}_{> 0\,\because\,its\,derivative\,is\,positve\,and\,t> 0} \right )= \frac{14(t+ 1)}{2\sqrt{t^{2}+ 2\,t+ 8}}+ 4(t+ 3)> 0$$ Or $$14(6 t- 2+ t^{2})\sqrt{t^{2}+ 2 t+ 8}> 4 t^{3}+ 56 t^{2}+ 191 t+ 188- 49(t^{2}+ 2 t+ 8)\,(\!original.problem\!)$$ My solution in VMF : (and I'm looking forward to seeing a nicer one(s), thanks for your interests !) For $0< t\leqq 2$ $$14(6\,t- 2+ t^{2})\sqrt{t^{2}+ 2\,t+ 8}\geqq 4\,t^{3}+ 241\,t^{2}+ 192\,t+ 188- 49(t^{2}+ 2\,t+ 8)>$$ $$> 4\,t^{3}+ 56\,t^{2}+ 191\,t+ 188- 49(t^{2}+ 2\,t+ 8)$$ For $t\geqq 2$ $$14(6\,t- 2+ t^{2})\sqrt{t^{2}+ 2\,t+ 8}\geqq 4\,t^{3}+ 56\,t^{2}+ 562\,t+ 188- 49(t^{2}+ 2\,t+ 8)>$$ $$> 4\,t^{3}+ 56\,t^{2}+ 191\,t+ 188- 49(t^{2}+ 2\,t+ 8)$$ Therefore $(x- 2\,y){v}'(x)\leqq 0\,\therefore\,\left ( x- \frac{4}{3} \right ){v}'(x)\leqq 0\,\therefore\,v(x)\leqq v\left ( \frac{4}{3} \right )= 8$ / q.e.d
{ "source": [ "https://math.stackexchange.com/questions/3275574", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,276,351
A particle is allowed to move in the $\mathbb{Z}\times \mathbb{Z}$ grid by choosing either of the two jumps: 1) move two units to the right and one unit up; 2) move two units up and one unit to the right. Let $P=(30,63)$ and $Q=(100,100)$ . If the particle starts at the origin, then: a) $P$ is reachable but not $Q$ . b) $Q$ is reachable but not $P$ . c) Both $P$ and $Q$ are reachable. d) Neither $P$ nor $Q$ is reachable. I could make out that the moves given are a subset of a knight's moves in chess. I think that it'd never be able to reach $(100,100)$ , but I'm not sure of the reason. It must have something to do with the knight's move, but I cannot figure out what. I don't have a very good idea about chess, so I'd be glad if someone could answer elaborately.
Both possible moves increase the sum of coordinates by $3$ , so any reachable position must have the sum of coordinates a multiple of $3$ . This means $Q$ is not reachable, since its sum of coordinates is $200$ , not divisible by $3$ . $P$ is also not reachable – to reach an $x$ -coordinate of $30$ it must make at most $30$ jumps, which means that the highest $y$ -coordinate it could reach is $2×30=60<63$ .
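Both conclusions are easy to confirm by brute force, as a small sketch of my own in plain Python: enumerate every position reachable without letting a coordinate exceed $100$ (the moves only increase coordinates, so this loses nothing for these targets) and check membership.

```python
# Enumerate all positions reachable from (0,0) with moves (2,1) and (1,2),
# never letting either coordinate exceed 100.
reachable = {(0, 0)}
frontier = [(0, 0)]
while frontier:
    x, y = frontier.pop()
    for dx, dy in ((2, 1), (1, 2)):
        p = (x + dx, y + dy)
        if p[0] <= 100 and p[1] <= 100 and p not in reachable:
            reachable.add(p)
            frontier.append(p)

print((30, 63) in reachable)    # False
print((100, 100) in reachable)  # False
```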
{ "source": [ "https://math.stackexchange.com/questions/3276351", "https://math.stackexchange.com", "https://math.stackexchange.com/users/665584/" ] }
3,277,281
We could have written = as "equals", + as "plus", $\exists$ as "thereExists" and so on. Supplemented with some brackets everything would be just as precise. $$\exists x,y,z,n \in \mathbb{N}: n>2 \land x^n+y^n=z^n$$ could equally be written as: ThereExists x,y,z,n from theNaturalNumbers suchThat n isGreaterThan 2 and x toThePower n plus y toThePower n equals z toThePower n What is the reason that we write these words as symbols (almost like a Chinese word system?) Is it for brevity? Clarity? Can our visual system process it better? Because not only do we have to learn the symbols, in order to understand it we have to say the real meaning in our heads. If algebra and logic had been invented in Japan or China, might the symbols actually have just been the words themselves? It almost seems like for each symbol there should be an equivalent word-phrase that it corresponds to that is accepted.
"Integral From a x squared Plus b Plus c To Infinity of a Times LeftParenthesis x Plus LeftParenthesis x Plus c RighParenthesis Over LeftParenthesis x Plus One RightParenthesis Differential x IsGreater Integral From a x squared Plus b Minus c To Infinity of a Times LeftParenthesis x Plus LeftParenthesis x Plus c RighParenthesis Over LeftParenthesis x Minus One RightParenthesis Differential x Implies Integral From a x squared Minus b Plus c To Infinity of a Times LeftParenthesis x Plus LeftParenthesis x Plus c RighParenthesis Over LeftParenthesis x Plus One RightParenthesis Differential x IsGreater Integral From a x squared Plus b Minus c To Infinity of a Times LeftParenthesis x Plus LeftParenthesis x Plus c RighParenthesis Over LeftParenthesis x Minus One RightParenthesis Differential x" ? Quiz: Do you recognize this one ? Summation On j From 1 Upto N Of y Index k Times the Product On k From 1 And k NotEqual to j Upto N Of x Minus x Index k Over x Index j Minus x Index k.
{ "source": [ "https://math.stackexchange.com/questions/3277281", "https://math.stackexchange.com", "https://math.stackexchange.com/users/248558/" ] }
3,278,677
Does there exist a sequence $(a_i)_{i \geq 0}$ of distinct positive integers such that $\sum_{i\geq 0}\frac{1}{a_i} \in \mathbb{Q}$ and $$\{ p \in \mathbb{P} \text{ }|\text{ } \exists\text{ } i\geq 0 \text{ s.t.}\text{ } p | a_i\}$$ is infinite? Motivation: All geometric series (corresponding to sets $\{ 1,n,n^2,n^3,... \}$ ) are rational and the terms obviously contain finitely many primes. The same is true for say, sums of reciprocals of all numbers whose prime factiorisation contains only the primes $p_1, p_2, ...,p_k$ : the sum is then $\prod_{i=1}^k\left(\frac{p_i}{p_i-1}\right)$ On the other side, series corresponding to sets $\{1^2, 2^2, 3^2, ...\}, \{1^3,2^3,3^3,...\},\{1!,2!,3!,...\}$ converge to $\frac{\pi^2}{6}$ , Apery's constant and $e$ respectively, which are all known to be irrationals. I am aware of the fact that if this statement is true then it has not been proven yet (since it implies that the values of the zeta function at positive integers are irrational, which to my knowledge has not been shown yet). Any counterexamples or other possible observations (such as, instead of requiring the set of primes to be infinite, requiring that it contains all primes except a finite set)?
The set $$ A=\{1\times 2, 2\times 3, ...,n\times (n+1),...\}$$ is such a set. Note that $$\sum _1^\infty \frac{1}{n(n+1)}=1$$ and every prime number $p$ divides $p(p+1)$ which is an element of the set $A$ .
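The value of the sum follows from the telescoping identity $$\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1},$$ so the partial sums are $$\sum_{n=1}^{N}\frac{1}{n(n+1)} = 1 - \frac{1}{N+1} \longrightarrow 1.$$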
{ "source": [ "https://math.stackexchange.com/questions/3278677", "https://math.stackexchange.com", "https://math.stackexchange.com/users/337265/" ] }
3,278,682
Is it true that for any positive integers $V, E, F$ with $V - E + F = 2$ there exists a polyhedron with $V$ vertices, $E$ edges and $F$ faces? In case there is a silly counterexample (say, with $F=1$ ), then what about large $V,E,F$ - say, all greater than or equal to $10$ ? Any help appreciated!
{ "source": [ "https://math.stackexchange.com/questions/3278682", "https://math.stackexchange.com", "https://math.stackexchange.com/users/347678/" ] }
3,282,632
I'm working on a math problem but I am having a hard time figuring out the method used by my textbook to make this factorization: $$x^4 + 10x^3 + 39x^2 + 70x + 50 = (x^2 + 4x + 5)(x^2 + 6x + 10)$$ I've tried to see if this equation can be factored by grouping or by long division to no avail. Any help would be greatly appreciated.
The only really general way of which I am aware is to guess at the form of the factorization. Since it is monic (the highest term has coefficient 1), you know that the factors should also be so. Thus, there are really only 2 possible factorizations you need to think of, at least at start, which may then be further reducible through easier methods. If we denote the polynomial by $P(x)$ , we produce the following candidate factorization equations: one is factorization to a linear term and cubic term, i.e. $$P(x) = (x + a)(x^3 + b_2 x^2 + b_1 x + b_0)$$ the other is factorization to two quadratic terms, i.e. $$P(x) = (x^2 + a_1 x + a_0)(x^2 + b_1 x + b_0)$$ The "obvious" next case of this would simply result in now getting a third-degree polynomial on the left and first on the right, but that's just case 1 thanks to the commutative property, so this is exhaustive. The second case is what you have here. The first case is most easily tested and solved by a simple application of the rational root theorem which will, if it's possible, give the value for $a$ - followed by a polynomial long division to get the rest. For the second case, there isn't really a much better method than to just multiply it all out: $$(x^2 + a_1 x + a_0)(x^2 + b_1 x + b_0) = x^4 + c_3 x^3 + c_2 x^2 + c_1 x + c_0$$ where we've introduced for notational cleanliness (I had the computer multiply this out for me because it's there) $$c_3 := a_1 + b_1$$ $$c_2 := a_0 + a_1 b_1 + b_0$$ $$c_1 := a_1 b_0 + a_0 b_1$$ $$c_0 := a_0 b_0$$ Then you just set the $c_j$ equal to the appropriate coefficient values read off from the terms of the given polynomial (i.e. $c_0 = 50$ in your given example), and try to find whole number values for $a_j$ and $b_j$ that work. You'd probably want to start with $c_3$ and $c_0$ first.
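The resulting system of equations can also be handed to a computer algebra system; here is a small sketch of my own (assuming SymPy is available) that solves the undetermined coefficients for the quartic in the question and double-checks the factorization.

```python
from sympy import symbols, solve, expand, factor

x, a1, a0, b1, b0 = symbols('x a1 a0 b1 b0')

# Match coefficients of (x^2 + a1*x + a0)(x^2 + b1*x + b0)
# against x^4 + 10x^3 + 39x^2 + 70x + 50.
eqs = [a1 + b1 - 10,
       a0 + b0 + a1*b1 - 39,
       a1*b0 + a0*b1 - 70,
       a0*b0 - 50]
print(solve(eqs, [a1, a0, b1, b0]))   # includes (4, 5, 6, 10) and the swapped pair

# Sanity check of the factorization itself.
print(expand((x**2 + 4*x + 5)*(x**2 + 6*x + 10)))
print(factor(x**4 + 10*x**3 + 39*x**2 + 70*x + 50))
```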
{ "source": [ "https://math.stackexchange.com/questions/3282632", "https://math.stackexchange.com", "https://math.stackexchange.com/users/686600/" ] }
3,288,477
Russell's paradox (the set of all sets not containing themselves) can be broken down into two statements: (1) there is a thing that contains all sets that don't contain themselves; (2) this thing, or any such thing, would necessarily qualify as a set. Now, what was the definition of set that Russell went by that mandated the second statement?
The axiom that told Russell that he could consider that to be a set is called the comprehension axiom, which says that for any property $P$ , there is a set $X_P$ such that $$ x\in X_P\iff P(x).$$ (We usually write $X_P$ using comprehension notation: $X_P=\{x:P(x)\}$ ). The other important axiom here is extensionality, which says two sets are equal if and only if they have the exact same elements: this tells us that the above condition is a valid definition of $X_P.$ Russell applied the comprehension axiom to the property $P(x)=x\notin x$ and then derived his contradiction that $$ X_P\in X_P\iff X_P\notin X_P.$$ The lesson we learned from this is that the comprehension axiom is inconsistent. Thus we do not use it in axiomatic set theory. Instead we use weaker versions, most commonly the separation axiom of ZF that says for any set $A$ and property $P$ that $ \{x\in A:P(x)\}$ is a set. The reason that Russell thought he could use the comprehension axiom is because it seemed naively true, had been used implicitly before in mathematics with no problems, and had even been singled out as a formal axiom by Frege. His discovery that this seemingly innocuous bit of mathematical reasoning led straightforwardly to an obvious contradiction led to a lot of worry about the formal foundations of mathematics in the ensuing decades, and many interesting things were discovered by logicians trying to interrogate exactly why comprehension failed and how to avoid similar problems.
{ "source": [ "https://math.stackexchange.com/questions/3288477", "https://math.stackexchange.com", "https://math.stackexchange.com/users/262231/" ] }
3,289,735
Recently I was wondering: why does $\pi$ have an irrational value when it is simply the ratio of the circumference to the diameter of a circle? If the diameter is rational, then the irrationality must come from the circumference. Then I used calculus to calculate the arc length of various functions with curved graphs (between two rational points) and found the arc length to be irrational again. Do all curved paths have irrational lengths? My reasoning is that while calculating the arc length (by calculus) we assume that the arc is composed of infinitely small line segments, so we never reach the real value, and unlike the area under a curve, there do not exist upper and lower sums that converge to the same value. If yes, are these the reasons irrational values exist in the first place?
Obviously, a straight line between two rational points can have rational length $-$ just take $(0,0)$ and $(1,0)$ as your rational points. But a curved line can also have rational length. Consider parabolas of the form $y=\lambda x(1-x)$ , which all pass through the rational points $(0,0)$ and $(1,0)$ . If $\lambda=0$ , then we get a straight line, with arc length $1$ . And if $\lambda=4$ , then the curve passes through $(\frac12,1)$ , so the arc length is greater than $2$ . Now let $\lambda$ vary smoothly from $0$ to $4$ . The arc length also varies smoothly, from $1$ to some value greater than $2$ ; so for some value of $\lambda$ , the arc length must be $2$ , which is a rational number.
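This intermediate-value argument can be made concrete numerically, as a sketch of my own (assuming SciPy is available): compute the arc length of $y=\lambda x(1-x)$ on $[0,1]$ and solve for the $\lambda$ giving length exactly $2$ .

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def arc_length(lam):
    # Arc length of y = lam * x * (1 - x) between (0,0) and (1,0).
    integrand = lambda x: np.sqrt(1.0 + (lam * (1.0 - 2.0 * x))**2)
    return quad(integrand, 0.0, 1.0)[0]

print(arc_length(0.0))   # 1.0  (the straight segment)
print(arc_length(4.0))   # > 2  (the parabola passes through (1/2, 1))
lam_star = brentq(lambda lam: arc_length(lam) - 2.0, 0.0, 4.0)
print(lam_star, arc_length(lam_star))   # a curve through two rational points with length exactly 2
```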
{ "source": [ "https://math.stackexchange.com/questions/3289735", "https://math.stackexchange.com", "https://math.stackexchange.com/users/603995/" ] }
3,289,763
First question: Is that inequality just cauchy schwarz inequality? Second question: What happened on that last step, where $g(x-t) \rightarrow g(x)?$
{ "source": [ "https://math.stackexchange.com/questions/3289763", "https://math.stackexchange.com", "https://math.stackexchange.com/users/604375/" ] }
3,289,765
Transcript: The diagram shows a square based pyramid with base PQRS and vertex O. All the edges are length 20 meters. Find the shortest distance, in meters, along the outer surface of the pyramid from P to the midpoint of OR. The only way I have been able to solve this question is using a computer and the paper is non calculator, so there must be a faster, better solution. My solution is creating a point X on OQ and labelling the midpoint of OR as M, and setting $\theta = OPQ$ . I then calculated $PX + PM$ in terms of $\theta$ and found the minimum point of this function, in order to find the shortest possible distance. Finding the minimum point would be nowhere near possible under time constraint without a computer. I have included the correct answer, so it is the working I am looking for. Answer: D $10\sqrt7$
Hint: The shortest distance has to be a straight line along the net of the pyramid. There are two options for unfolding the surface: across the square base and one triangular face, or across the two triangular faces $OPQ$ and $OQR$ . The first path has length $10\sqrt{7+2\sqrt{3}}$ , and the second one has length $10\sqrt{7}$ , which is the smaller one.
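For the shorter option, unfold the two triangular faces $OPQ$ and $OQR$ about the common edge $OQ$ into one plane. Both faces are equilateral with side $20$ , so in the unfolded picture $\angle POQ = \angle QOR = 60^\circ$ , hence $\angle POR = 120^\circ$ , with $OP = 20$ and $OM = 10$ (where $M$ is the midpoint of $OR$ ). The law of cosines then gives $$PM^2 = 20^2 + 10^2 - 2\cdot 20\cdot 10\cos 120^\circ = 400 + 100 + 200 = 700,$$ so $PM = \sqrt{700} = 10\sqrt{7}$ , matching answer D.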
{ "source": [ "https://math.stackexchange.com/questions/3289765", "https://math.stackexchange.com", "https://math.stackexchange.com/users/419615/" ] }
3,292,475
Suppose $f(1)=1$ and $f(2)=7$ . For $n\ge 3$ we have $$f(n)=7f(n-1)-12f(n-2). $$ What is the closed form of the function $f$ ? I've tried unrolling it but it gets very complicated very quickly without a clear pattern emerging. Any ideas?
Write $a_n = f(n)$ instead. Step 1 You can note that $$a_{n+1}-4a_n = 3(a_n-4a_{n-1})$$ so putting $b_n=a_n-4a_{n-1}$ you get $$b_{n+1} = 3b_n$$ so $b_n$ is a geometric progression, with $b_2=3$ so $b_1=1$ and thus $$b_n = 3^{n-1}$$ so $$\boxed{a_{n+1}-4a_n =3^n}$$ Step 2 You can also note that $$a_{n+1}-3a_n = 4(a_n-3a_{n-1})$$ so putting $c_n=a_n-3a_{n-1}$ you get $$c_{n+1} = 4c_n$$ so $c_n$ is a geometric progression, with $c_2=4$ so $c_1=1$ and thus $$c_n = 4^{n-1}$$ so $$\boxed{a_{n+1}-3a_n = 4^{n}}$$ Step 3 If you subtract the boxed formulas you get: $$\boxed{a_n = 4^{n}- 3^n}$$
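As a check, the closed form reproduces the original data: $a_1 = 4-3 = 1$ , $a_2 = 16-9 = 7$ , and for $n \ge 3$ $$7(4^{n-1}-3^{n-1}) - 12(4^{n-2}-3^{n-2}) = 4^{n-2}(28-12) - 3^{n-2}(21-12) = 4^{n} - 3^{n},$$ so $f(n) = 4^n - 3^n$ indeed satisfies the recurrence.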
{ "source": [ "https://math.stackexchange.com/questions/3292475", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,292,483
Regarding the proof of the Yoneda lemma on p.98: How does one get that the map $$[\mathscr A^{\text{op}}, \textbf{Set}](H_A,X)\to [\mathscr A^{\text{op}}, \textbf{Set}](H_B,X)$$ is defined by $-\circ H_f$ ? (The definition of $H_f$ is given on p.90.) And how does one get that the map $$[\mathscr A^{\text{op}}, \textbf{Set}](H_A,X)\to [\mathscr A^{\text{op}}, \textbf{Set}](H_A,X')$$ is defined by $\theta\circ -$ ?
{ "source": [ "https://math.stackexchange.com/questions/3292483", "https://math.stackexchange.com", "https://math.stackexchange.com/users/333743/" ] }
3,292,495
Provide an example (or explain why the request is impossible) of a pair of functions $f$ and $g$ neither of which is continuous at $0$ but such that $f(x)g(x)$ and $f(x) + g(x)$ are continuous at $0$ ; What we know definitionally is that $$\lim_{x\to 0} f(x)g(x)=f(0)g(0) \\ \lim_{x\to 0} f(x) + g(x)=f(0) + g(0)$$ (We know that $x=0$ is not an isolated point of their domains because a function is trivially continuous at those). Provided that $f(0)g(0) \neq 0$ , we could use the fact that a quotient of continuous functions is a continuous function: $$\lim_{x\to 0} \frac{f(x)+ g(x)}{f(x)g(x)}= \lim_{x\to 0} \frac{1}{f(x)} + \frac{1}{g(x)} \stackrel{\text{???}}{=} \lim_{x\to 0} \frac{1}{f(x)} + \lim_{x\to 0} \frac{1}{g(x)}$$ If we knew those functions were continuous, we'd know the request is impossible. But you can distribute limits only if you previously know they exist. So I'm totally lost. How can I approach these exercises that deal with the algebra of continuous functions?
{ "source": [ "https://math.stackexchange.com/questions/3292495", "https://math.stackexchange.com", "https://math.stackexchange.com/users/664833/" ] }
3,292,518
Consider a 3 dimensional orthogonal basis $\vec{m}, \vec{n}, \vec{t}$ . Consider rotating the $(\vec{m}, \vec{n})$ plane about $\vec{t}$ . For further analysis, consider $\omega$ as the anticlockwise angle from some reference datum to $\vec{m}$ . Please help me explain why the following derivatives hold: $\begin{equation} \frac{\partial \vec{m}} {\partial \omega} = \vec{n}; \ \ \ \frac{\partial \vec{n}} {\partial \omega} = - \vec{m} \end{equation} $
{ "source": [ "https://math.stackexchange.com/questions/3292518", "https://math.stackexchange.com", "https://math.stackexchange.com/users/630919/" ] }
3,293,278
When reading about some open problems, I notice that a lot of them have quotes by renowned mathematicians that “[the conjecture] cannot be solved using the current technology” or something along these lines. What do they mean by that? Are they talking about the axioms? Or are they generally speaking in terms of intelligence and mathematical abilities? These are just the ones at the top of my mind: https://terrytao.wordpress.com/2011/08/25/the-collatz-conjecture-littlewood-offord-theory-and-powers-of-2-and-3/ https://m.youtube.com/watch?v=MXJ-zpJeY3E (skip to the end where they talk about the Riemann Hypothesis) But I’ve seen many, many more examples
This doesn't have any formal meaning. It just means that they believe the problem can't be solved with the techniques that mathematicians have already developed and instead some big new idea will be necessary to solve it. That is, "the current technology" refers to the collection of proof methods that mathematicians have discovered already. It's meant as a sort of metaphor between mathematics and engineering: some engineering problems can be solved by just finding a clever way to put together already existing technologies, while others require major new inventions. In the same way, some mathematics problems can be solved by just finding clever new uses of ideas that are already known while other mathematics problems require something more novel.
{ "source": [ "https://math.stackexchange.com/questions/3293278", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,297,324
I understand that the three dimensions each have their own axes, for lines, planes and volumes, and that the 4th dimension has an axis too; it is complicated and hard to determine, but from what I understand it is there. So I was wondering, disregarding their complexity, do higher dimensions such as 5D to 10/11D have axes?
They sure do. In three dimensions, the $x$ -axis is the set of all points where both coordinates besides $x$ are zero, the $y$ -axis is the set of all points where both coordinates besides $y$ are zero, and the $z$ -axis is the set of all points where both coordinates besides $z$ are zero. If you're looking at a $26$ -dimensional space with coordinates $(a, b, c, \ldots, z)$ , then the $a$ -axis is the set of all points where all coordinates besides $a$ are zero, and the $b$ -axis is the set of all points where all coordinates besides $b$ are zero, and the $c$ -axis is the set of all points where all coordinates besides $c$ are zero, and ... the $z$ -axis is the set of all points where all coordinates besides $z$ are zero.
{ "source": [ "https://math.stackexchange.com/questions/3297324", "https://math.stackexchange.com", "https://math.stackexchange.com/users/571780/" ] }
3,299,825
Consider the following sequence - $$ 1,2,2,4,4,4,4,8,8,8,8,8,8,8,8, ... $$ In this sequence, what will be the $ 1025^{th}\, term $ So, when we write down the sequence and then write the value of $ n $ (Here, $n$ stands for the number of the below term) above it We can observe the following - $1 - 1$ $2 - 2 $ $3 - 2$ $4 - 4$ $5 - 4$ . . . $8 - 8$ $9 - 8$ . . . We can notice that $ 4^{th}$ term is 4 and similarly, the $ 8^{th}$ term is 8. So the $ 1025^{th}$ term must be 1024 as $ 1024^{th} $ term starts with 1024. So the value of $ 1025^{th}$ term is $ 2^{10} $ . Is there any other method to solve this question?
In binary, the term indexes $$1,10,11,100,101,110,111,1000,1001,1010,1011,1100,1101,1110,1111,\cdots$$ become $$1,10,10,100,100,100,100,1000,1000,1000,1000,1000,1000,1000,1000,\cdots$$ So for any index, clear all bits but the most significant. $$1025_{10}=10000000001_b\to10000000000_b=1024_{10}.$$
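Written out as code, the "keep only the most significant bit" step is one line (a sketch in Python; any equivalent bit trick works):

```python
def term(n):
    # clear every bit of the index n except the most significant one
    return 1 << (n.bit_length() - 1)

print(term(1025))                        # 1024
print([term(n) for n in range(1, 16)])   # [1, 2, 2, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8]
```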
{ "source": [ "https://math.stackexchange.com/questions/3299825", "https://math.stackexchange.com", "https://math.stackexchange.com/users/594911/" ] }
3,302,853
Below, the parallelogram is obtained from a square by stretching the top side while fixing the bottom. Since the area of a parallelogram is base times height, both the square and the parallelogram have the same area. This is true no matter how far I stretch the top side. In the figure below it is easy to see why both areas are the same. But it's not that obvious in the first two figures. Any help seeing why the area doesn't change in the first figure?
Behold, $\phantom{proof without words}$
{ "source": [ "https://math.stackexchange.com/questions/3302853", "https://math.stackexchange.com", "https://math.stackexchange.com/users/168854/" ] }
3,302,866
I encountered the following unsubstantiated fact in a proof: $f$ is differentiable, strictly convex, and coercive. Thus we may conclude $$(\nabla f)^{-1} = \nabla f^*$$ where $f^*$ is the Legendre transformation of $f$ Loosely speaking, I see how this might work: if a function is differentiable then Legendre trasformation will return the slope at some point $x \in \text{dom} f$ . If $f$ is strictly coercive, then $\nabla f$ is monotonic, in the appropriate sense. So an inverse is defined. Loosely, I think the role of coercivity is to make the map between $x \in \text{dom} f$ and its slope at $x$ bijective. I can't understand how coercivity is causing this though. My questions: Can someone provide a heuristic description what coercivity is giving us? Provide a proof of the fact above (or reference)? Assume $f: \mathbb{R}^d \rightarrow \mathbb{R}$ .
{ "source": [ "https://math.stackexchange.com/questions/3302866", "https://math.stackexchange.com", "https://math.stackexchange.com/users/65223/" ] }
3,303,294
I want to make an algorithm grouping all the details having the same shape. Each detail is defined by its surface and a list of contour lines. First I believed that having the same perimeter length and same surface would be enough, but I saw on that link that it is a wrong hypothesis. If I take as an additional condition that the two shapes have the same number of segments, would it be enough? Or else how can I check that? The problem is that each detail can be rotated or mirrored. Edit : Thanks for your answers, I finally found a way to solve the problem (answer below)
In the spirit of the no-words answer to the linked question:
{ "source": [ "https://math.stackexchange.com/questions/3303294", "https://math.stackexchange.com", "https://math.stackexchange.com/users/691323/" ] }
3,303,301
Let $f : \mathbb{R} \to \mathbb{R}$ be smooth. Suppose that $f'(0) = 0$ and $f''(0) < 0$ . If $f(1) > f(0)$ , must there exist a point $x \in (0, 1)$ such that $f'(x) = 0$ ?
{ "source": [ "https://math.stackexchange.com/questions/3303301", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,303,829
I have looked at Tao's book on Measure Theory, and they are perhaps the best math books I have ever seen. Besides the extremely clear and motivated presentation, the main feature of the book is that there is no big list of exercises at the end of each chapter; the exercises are dispersed throughout the text, and they are actually critical in developing the theory. Question: What are some other math books written in this style, or other authors who write in this way? I am open to any fields of math, since I will use this question in the future as a reference. That was the question; the following is just why I think Tao's style is so great. When you come to an exercise, you know that you are ready for it. There is no doubt in the back of your mind that "maybe I haven't read enough of the chapter to solve this exercise" Similarly, there is no bad feeling of "maybe I wasn't supposed to use this more advanced theorem for this exercise, maybe I was supposed to do it from the basic definitions but I can't". It makes everything feel "fair game" It makes it difficult to be a passive reader It makes you become invested in the development of the theory, as if you are living back in 1900 and trying to develop this stuff for the first time I think you can achieve a similar effect with almost any other book, if you try to prove every theorem by yourself before you read the proof and stuff like that, but at least for me there are some severe psychological barriers that prevent me from doing that. For example, if I try to prove a theorem without reading the proof, I always have the doubt that "this proof may be too hard, it would not be expected of the reader to come up with this proof". In Tao's book, the proofs are conciously left to you, so you know that you can do it, which is a big encouragement.
Vakil's notes on Algebraic Geometry http://math.stanford.edu/~vakil/216blog/FOAGnov1817public.pdf are written in the same style. A nice contrast to the terse, traditional nature of the standard reference, Hartshorne https://www.springer.com/gp/book/9780387902449
{ "source": [ "https://math.stackexchange.com/questions/3303829", "https://math.stackexchange.com", "https://math.stackexchange.com/users/64460/" ] }
3,306,015
Question: Is a topological space considered to be a class in set theory? So, by the might of wikipedia, I've found that ...a topological space may be defined as a set of points, along with a set of neighbourhoods for each point... This sounds like a set of sets, because for every point element there is a neighborhood set . Does this make it a class in set theory, as shown here : In set theory and its applications throughout mathematics, a class is a collection of sets (or sometimes other mathematical objects) that can be unambiguously defined by a property that all its members share. Then in order to understand what a class is , via ncatlab, I found what a collection is here : use the word collection to denote a bunch of “things” Which sounds self-referential. It seems like that the idea of an element and set are fairly concrete, and then a class/collection is just a bag that you throw things in. On a slight tangent, how do you define higher level classes in set theory? Is there any notation that can show that this is a class of a class?
A topological space is an ordered pair $(X, \mathcal T) $ where $X$ is a set and $\mathcal T$ is a set of subsets of $X$ satisfying certain axioms. An ordered pair of sets is itself a set. It is a class in the sense that every set is a class, but it has no special reason to be called a class instead of a set.
{ "source": [ "https://math.stackexchange.com/questions/3306015", "https://math.stackexchange.com", "https://math.stackexchange.com/users/619054/" ] }
3,307,156
I randomly had a thought about what proportion of quadratics don't have real $x$ -intercepts. Initially I thought 33%, because 0,1,2 intercepts, then I thought that the proportion of 1 intercepts is infinitesimal. So I then thought 50%, as graphically half the quadratics would be above the $x$ -axis, the other half below. If you put it into a list of (max,min) and (above,below): Minimum + Above = no $x$ -intercepts Minimum + Below = 2 $x$ -intercepts Maximum + Above = 2 $x$ -intercepts Maximum + Below = no $x$ -intercepts. Hence 50% right? Well I simulated it using code. I randomised a, b and c, and output the discriminant. If it is less than 0, add 1 to a variable. Do this about 100000 times. Now divide the variable by 100000. I get numbers like $(37.5\pm0.2)$ %. I hypothesise that it averages $3/8$ . Why? There may be a fallacy to my approach of finding the 'proportion'. Fundamentally, it is the probability of getting a quadratic with no $x$ -intercepts. However, I am not sure. EDIT: The range was from $(-n,n)$ , where $n$ was 100000, or even higher.
The problem isn't that there's no way to choose a random quadratic. The problem is that there's many ways, and there's no obvious reason to think of any of them as "the" way to pick one. For starters, even picking a "random number" is something you can do in many ways: uniformly from some interval, say $(-1,1)$ , or $(0, 1)$ , or $(2,17)$ , from a normal distribution with any mean or variance, from an exponential distribution with any mean, from a Cauchy distribution, from some weird arbitrary distribution you make up yourself (Note that "a number uniformly randomly chosen from all real numbers" is not possible, simply because the integral of the density function over all real numbers (i.e. the total probability) must be $1$ , and no constant function does that.) Now, with that in mind, here are some ways you could pick a random quadratic: choosing each of $a$ , $b$ and $c$ randomly by any of the above methods (or some combination of them) and constructing $ax^2 + bx + c$ , choosing $(a, b)$ as the stationary point and a scaling factor $c$ (again, by any / several of the methods above) and writing $c(x - a)^2 + b$ , choosing $f(0)$ , $f(1)$ , $f(2)$ randomly and using the unique quadratic that goes through all three (of course, you could replace $0$ , $1$ , $2$ with any other three real numbers), choosing $f(0)$ , $f'(0)$ , and $f''(0)$ randomly, and picking the quadratic that has those values (which is admittedly very similar to the first method, but not quite), ... I could go on. All of these are plausible interpretations of "a random quadratic", depending on where the randomness is coming from. They will all (in general) produce different answers to questions like these. (And note that for the same reasons as before, you can't have any way of picking random quadratics that is "translation-invariant" in the sense that $f(x)$ and $f(x - a)$ are equally likely, or $f(x)$ and $f(x) + b$ are equally likely.) Now you could ask for your particular way of choosing a random quadratic, why the probability of having no x-intercepts is what you found it out to be. I think another of this question's answers points you in the right direction for that.
{ "source": [ "https://math.stackexchange.com/questions/3307156", "https://math.stackexchange.com", "https://math.stackexchange.com/users/579154/" ] }
3,308,043
I was wondering if I can split apart the following fraction \begin{align} \frac{1}{a-b} \end{align} into the form: \begin{align} f(a)+f(b) \end{align} where $f(a)$ and $f(b)$ is some function in terms of $a$ and $b$
$f(a)+f(b)$ is symmetric: it does not change if you switch $a$ and $b$ . But $\frac 1 {a-b}$ is not symmetric. So you cannot do this.
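Concretely: swapping $a$ and $b$ turns $\frac1{a-b}$ into $\frac1{b-a}=-\frac1{a-b}$. For instance $a=2,b=1$ gives $1$, while $a=1,b=2$ gives $-1$; yet $f(2)+f(1)=f(1)+f(2)$ whatever $f$ is, so no single function $f$ can reproduce both values.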
{ "source": [ "https://math.stackexchange.com/questions/3308043", "https://math.stackexchange.com", "https://math.stackexchange.com/users/565074/" ] }
3,310,883
Suppose $G$ is a group and let $H \triangleleft G$ . Consider the factor group $G/H$ where the relation is $aHbH = abH$ for all $a,b \in G$ . Suppose we wanted to show that the above relation is well-defined. Then we would pick some elements $a,b,c,d \in G$ such that $aH = cH$ and $bH = dH$ . I then would have to show that $aHbH = cHdH$ , which is the same as showing that $abH = cdH$ . Question . Couldn't we begin with the left hand side of the equation $aHbH = cHdH$ and substitute $cH$ for $aH$ and $dH$ for $bH$ ? This would result in $$aHbH = (cH)(dH) = cHdH$$ The proof in the book doesn't proceed in this way, and although I understand the author's proof, I don't understand why the above method wouldn't work. I have looked at other questions regarding this, but my question is addressing why we can't substitute our assumed equations $aH=cH$ and $bH=dH$ into the expression $aHbH$ ? Thanks!
Your question is a common theme that arises whenever we define expressions that depend on non-unique representations of the same thing. Imagine a similar scenario. Suppose $\text{Units}(x)$ is an operation that picks out the units digit in the decimal expansion of $x$ . So for e.g. $\text{Units}(10.01)$ outputs $0$ and $\text{Units}(3.45)$ outputs $3$ . Now certainly $42 = 41.99\bar{9} \cdots$ as real numbers. But $$ \text{Units}(42) = 2 \neq 1 = \text{Units}(41.99\bar{9} \cdots) $$ So substituting equal things into the same expression did not result in the same outcome! Why did this happen? Because in essence $42 = 41.99\bar{9} \cdots$ as real numbers but they are not equal as decimal expansions. That is, decimal expansions are not unique. The same number can have two different decimal expansions unless we are careful about disallowing a continued series of $9$ 's at the end. And unfortunately the operation $\text{Units}(x)$ outputs values based on the non-unique representation of $x$ that we know as decimal expansions. The exact same issue can happen with an expression like $(aH)(bH)$ . Sure $aH$ may equal $cH$ and $bH$ may equal $dH$ as sets. But those sets $aH$ and $cH$ may have potentially different representatives $a$ and $c$ . And if the operation $(aH)(bH)$ depends on non-unique representatives, which it does as the operation $(aH)(bH)$ picks out representatives $a, b$ of $aH,bH$ and outputs $(ab)H$ , then merely substituting equal things may not result in equal things. This is why proving the well-defined-ness of the definition $(aH)(bH) := (ab)H$ is necessary. You realize that our definition depends on non-unique representatives $a, b$ ; therefore you realize that mere substitution will not work. And that is why you get your hands dirty by literally verifying that the definition of $(aH)(bH)$ as $(ab)H$ outputs the same result even when you choose potentially different representatives $cH$ and $dH$ for $aH$ and $bH$ respectively.
{ "source": [ "https://math.stackexchange.com/questions/3310883", "https://math.stackexchange.com", "https://math.stackexchange.com/users/203407/" ] }
3,310,991
I've just started working with integrals relatively recently and I am surprised at how much harder they are to compute than derivatives. For example, something as seemingly simple as $\int e^{ \cos x} dx $ is impossible, right? I can't use u-substitution since there is no $-\sin(x)$ multiplying the function, and integration by parts doesn't seem like it would work either, correct? So does this mean this integral is impossible to compute?
The indefinite integral of a continuous function always exists. It might not exist in "closed form", i.e. it might not be possible to write it as a finite expression using "well-known" functions. The concept of "closed form" is somewhat vague, since there's no definite list of which functions are "well-known". A more precise statement is that there are elementary functions whose indefinite integrals are not elementary. For example, the indefinite integral $\int e^{x^2}\; dx$ is not an elementary function, although it can be expressed in terms of a non-elementary special function as $\frac{\sqrt{\pi}}{2} \text{erfi}(x)$ . Your example $\int e^{\cos(x)}\; dx$ is also non-elementary. This can be proven using the Risch algorithm . This one does not seem to have any non-elementary closed form either.
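If you have SymPy available you can watch this play out (a quick illustration only; the exact printed form may vary between versions):

```python
import sympy as sp

x = sp.symbols('x')

# non-elementary, but expressible via the special function erfi
print(sp.integrate(sp.exp(x**2), x))       # sqrt(pi)*erfi(x)/2

# no elementary (or standard special-function) antiderivative is found,
# so the integral comes back unevaluated
print(sp.integrate(sp.exp(sp.cos(x)), x))  # Integral(exp(cos(x)), x)
```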
{ "source": [ "https://math.stackexchange.com/questions/3310991", "https://math.stackexchange.com", "https://math.stackexchange.com/users/536430/" ] }
3,312,660
In the following picture, $XY = YZ$ , $\angle a = 40^\circ$ , and the length opposite is $5$ . The problem asks to compute $XY$ We immediately dropped the dotted perpendicular and get two right triangles. The length of the bottom is 2.5 in each right triangle, and therefore $a/2 = 20^\circ$ $\sin 20 = 2.5 / XY$ or, $XY = 2.5 / \sin 20 \approx 7.3$ Their answer uses $5 / a = XY / 70$ $5 / 40 = XY / 70$ $XY = 350/40 = 8.75$ I would have thought both these answers should be the same, where did we go wrong?
Get another book. NOW! The book's answer that $\frac 5{\angle a} = \frac {XY}{\angle 70}$ is ... baseless. There's no such similarity between triangle sides and the direct measure of angles. It's .... stupid... to think there would be. But there is a similarity between sides and the SINES of angles, i.e. the law of sines, which allows us to note that $\frac 5{\sin a} = \frac {XY}{\sin 70}$ or $\frac 5{\sin 40}=\frac {XY}{\sin 70}$ so $XY =5*\frac{\sin 70}{\sin 40} \approx 7.3$ . Which ... is the same as your answer.
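A two-line numerical check (my addition) that the law-of-sines value agrees with the asker's right-triangle computation, and not with the book's $8.75$:

```python
from math import sin, radians

print(5 * sin(radians(70)) / sin(radians(40)))   # ~7.31, law of sines
print(2.5 / sin(radians(20)))                    # ~7.31, the asker's right-triangle method
```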
{ "source": [ "https://math.stackexchange.com/questions/3312660", "https://math.stackexchange.com", "https://math.stackexchange.com/users/12529/" ] }
3,314,557
This is a sort of soft-question to which I can't find any satisfactory answer. At heart, I feel I have some need for a robust and well-motivated formalism in mathematics, and my work in geometry requires me to learn some analysis, and so I am confronted with the task of understanding weak solutions to PDEs. I have no problems understanding the formal definitions, and I don't need any clarification as to how they work or why they produce generalized solutions. What I don't understand is why I should "believe" in these guys, other than that they are a convenience. Another way of trying to attack the issue I feel is that I don't see any reason to invent weak solutions, other than a a sort of (and I'm dreadfully sorry if this is offensive to any analysts) mathematical laziness. So what if classical solutions don't exist? My tongue-in-cheek instinct is just to say that that is the price one has to pay for working with bad objects! In other words, I do not find the justification of, "well, it makes it possible to find solutions" a very convincing one. A justification I might accept, is if there was a good mathematical reason for us to a priori expect there to be solutions, and for some reason, they could not be found in classical function spaces like $C^k(\Omega)$ , and so we had to look at various enlargements in order to find solutions. If this is the case, what is the heuristic argument that tells me whether or not I should expect a PDE (subject to whatever conditions you want in order to make your argument clear) to have solutions, and what function space(s) are appropriate to look at to actually find these solutions? Another justification that I would accept is if there was some good analytic reason to discard the classical notion of differentiability all together. Perhaps the correct thing to do is to just think of weak derivatives as simply the 'correct' notion of differentiability in the first place. My instinct is to say that maybe weak solutions are a sort of 'almost-everywhere' type generalization of differentiability, similar to the Lebesgue integral being a replacement for the Riemann integral which is more adept at dealing with phenomena only occurring in sets of measure $0$ . Or maybe both of these hunches are just completely wrong. I am basically brand new to these ideas, and wrestling with my skepticism about these ideas. So can somebody make me a believer? Worth noting is that there is already a question on this site here , but the answer in this link is essentially that there exist a bunch of nice theorems if you do this, or that physically we don't care very much about what happens pointwise, only in terms of integrals over small regions. It should be clear why I don't like the first reason, and the second reason I may accept if it could be turned into something that looks like my proposed justification #2 - if integrals over small regions of derivatives are the 'right' mathematical formalism for PDEs. I just don't understand how to make that leap. In other words, I would like a reason to find weak solutions interesting for their own sake.
Reason 1. Even if you actually care only about smooth solutions, it some cases it is much easier to first establish that a weak solution exists and separately show that the structure of the PDE actually enforces it to be smooth. Existence and regularity are handled separately and using different tools. Reason 2. There are physical phenomena which are described by discontinuous solutions of PDEs, e.g. hydrodynamical shock waves. Reason 3. Discontinuous solutions may be used as a convenient approximation for describing macroscopic physics neglecting some details of the microscopic theory. For example in electrodynamics one derives from the Maxwell equations that the electric field of an electric dipole behaves at large distances in a universal way, depending only on the dipole moment but not on the charge distributions. On distances comparable to the dipole size these microscopic details start to become important. If you don't care about these small distances you may work in the approximation in which dipole is a point-like object, with charge distribution given by a derivative of the delta distribution. Even though the actual charge distribution is given by a smooth function, it is more convenient to approximate it by a very singular object. One can still make sense of the Maxwell equations, and the results obtained this way turn out to be correct (provided that you understand the limitations of performed approximations). Reason 4. It is desirable to have "nice" spaces in which you look for solutions. In functional analysis there are many features you might want a topological vector space to have, and among these one of the most important is completeness. Suppose you start with the space of smooth functions on, say, $[0,1]$ and equip it with a certain topology. In this case it is completely natural to pass to the completion. For many choices of the topology you will find that the completed space contains objects which are too singular to be considered as bona fide functions, e.g. measures or distributions. Just to give you an example of this phenomenon: if you are interested in computing integrals of smooth functions, you are eventually going to consider gadgets such as $L^p$ norms on $C^{\infty}[0,1]$ . Once you complete, you get the famous $L^p$ spaces, whose elements are merely equivalence classes of functions modulo equality almost anywhere. Space of distributions on $[0,1]$ may be constructed very similarly: instead of $L^p$ norms you consider the seminorms $p_f$ given by $p_f(g)= \int_{0}^1 f(x) g(x) dx$ for $f,g \in C^{\infty}[0,1]$ . If you can justify to yourself that it is interesting to look at this family of seminorms, then distibutions (and also weak solutions of PDEs) become an inevitable consequence.
{ "source": [ "https://math.stackexchange.com/questions/3314557", "https://math.stackexchange.com", "https://math.stackexchange.com/users/112357/" ] }
3,314,561
Consider the triangle $PAT$ , with angle $P = 36$ degres, angle $A = 56$ degrees and $PA=10$ . The points $U$ and $G$ lie on sides TP and TA respectively, such that PU = AG = 1. Let M and N be the midpoints of segments PA and UG. What is the degree measure of the acute angle formed by the lines MN and PA? It would be very helpful if anyone had a solution using complex numbers to this problem.
{ "source": [ "https://math.stackexchange.com/questions/3314561", "https://math.stackexchange.com", "https://math.stackexchange.com/users/685634/" ] }
3,314,864
Ok, buckle up for a rather long question. I've spent a large portion of today learning about compactness, stemming mainly from this wikipedia article about point-set topology. The article mentions three main things: continuity, connectedness, and compactness. I will address each in turn, but my question is mainly about the last one. Continuity: At least having gone through high school calculus and basic university analysis, I think people have a great intuitive understanding of continuity (and also differentiability I guess): smooth = yay!, jagged = fine-ish, holes/jumps = really bad :(. The wiki article describes this as "taking nearby points to nearby points", which I understand well enough that I think given a decade or something I could eventually have come up with the formal epsilon-delta definition for continuity: $$\forall \varepsilon >0, \exists \delta, \text{ s.t. } |x-x_{0}|<\delta \Rightarrow |f(x)-f(x_{0})|<\varepsilon $$ Connectedness: Similarly, I think people have a great intuition for connectedness (at least path connectedness), which the wiki article summarizes nicely as "sets that cannot be divided into two pieces that are far apart". Again, I think that given a decade or two I could have gone in the right direction at least for the formal definition of connectedness: a set that can't be represented as the union of two or more disjoint non-empty open subsets . First (Minor) Question: could we have a useful definition for connectedness being: "a set that can't be represented as the union of two or more disjoint non-empty closed subsets"? Similarly, I feel like I could reasonably develop definitions for open sets (based on intuitions from number lines and basic high-school algebra/set theory), and completeness (basically the Least Upper Bound axiom/Cauchy sequences). However, there's one thing missing. Compactness. Never in a million years do I think I could have thought up of the definition that a set that can be "can be covered by finitely many sets of arbitrarily small size." I've looked at these five sites and some links therein: Why is compactness so important? What should be the intuition when working with compactness? https://www.math.ucla.edu/~tao/preprints/compactness.pdf https://www.reddit.com/r/math/comments/47h6hg/what_really_is_a_compact_set/ https://arxiv.org/abs/1006.4131 but none of them so far has really clicked with me. Many people emphasized that it's a generalized version of finiteness with "fat blurry points", and I also understand that by the Heine-Borel Theorem compactness is equivalent to "closed and bounded" in Euclidean space, but those two things seem so far apart that it just seems like a black-magic coincidence that they describe the same phenomenon. How would you motivate and explain the definition and concept of compactness to your students in such a way that they feel like they could have come up with it themselves, naturally, given a decade or two? If you start with "it's a generalized version of finiteness" it seems like a complete and utter coincidence that it happens to be equivalent to "closed and bounded". I mean of all the possible "generalized finiteness formulations", how did ours get it right? 
If you start with "it's just another way of saying closed and bounded", then students will feel that it's just more arbitrary confusion redefining things they already know (namely that of closed-ness and boundedness); furthermore, even if they did accept this explanation, they would have never figured out on their own that "every open cover has a finite subcover $\iff$ closed and bounded". "Finite subcover" just seems so out-of-left-field. And finally if you go the sequential compactness route (referring to Tao's paper here) students will just say "ahh yes, the Bolzano-Weierstrass Theorem! Why need to give it a new name of compactness?" Maybe I've missed something in my searches, but I hope this question isn't just a badly rehashed old question. I don't think my question is answered in the "Pedagogical History of Compactness" mainly because I don't want the convoluted history, but rather the simplest motivation and explanation based off modern curriculum and notation . -> Also, thank you to those who've commented/left an answer. I hope that this page and all the differing pedagogical interpretations presented will serve as a relatively complete and comprehensive guide for beginners learning compactness in the future. Please upvote answers you think are particularly insightful; as a novice, I would appreciate some expert judgement on the explanatory power of these answers.
I'm going to make a stab at "compactness" here. Suppose you want to prove something about sets in, say, a metric space. You'd like to, say, define the "distance" between a pair of sets $A$ and $B$ . You've thought about this question for, say, finite sets of real numbers, and things worked out OK, and you're hoping to generalize. So you say something like "I'll just take all points in $A$ and all points in $B$ and look at $d(a, b)$ for each of those, and then take the min." But then you realize that "min" might be a problem, because the set of $(a,b)$ -pairs might be infinite -- even uncountably infinite, but "min" is only defined for finite sets. But you've encountered this before, and you say "Oh...I'll just replace this with "inf" the way I'm used to!" That's a good choice. But now something awkward happens: you find yourself with a pair of sets $A$ and $B$ whose distance is zero, but which share no points. You'd figured that in analogy with the finite-subsets-of- $\Bbb R$ , distance-zero would be "some point is in both sets", but that's just not true. Then you think a bit, and realize that if $A$ is the set of all negative reals, and $B$ is the set of positive reals, the "distance" between them is zero (according to your definition), but ...there's no overlap. This isn't some weird metric-space thing ... it's happening even in $\Bbb R$ . And you can SEE what the problem is --- it's the "almost getting to zero" problems, because $A$ and $B$ are open. So you back up and say "Look, I'm gonna define this notion only for closed sets; that'll fix this stupid problem once and for all!" And then someone says "Let $A$ be the $x$ -axis in $\Bbb R^2$ and let $B$ be the graph of $y = e^{-x}$ ." And you realize that these are both closed sets, and they don't intersect, but the distance you've defined is still zero. Damnit! You look more closely, and you realize the problem is with $\{ d(a, b) \mid a \in A, b \in B\}$ . That set is an infinite set of positive numbers, but the inf still manages to be zero. If it were a finite set, the inf (or the min -- same thing in that case!) would be positive, and everything would work out the way it was supposed to. Still looking at $A$ and $B$ , instead of looking at all point in $A$ and $B$ , you could say "Look, if $B$ is at distance $q$ from $A$ , then around any point of $B$ , I should be able to place an (open) ball of radius $q$ without hitting $A$ . How 'bout I rethink things, and say this instead: consider, for all points $b \in B$ , the largest $r$ such that $B_r(b) \cap A = \emptyset$ ...and then I'll just take the smallest of these "radii" as the distance. Of course, that still doesn't work: the set of radii, being infinite, might still have zero as its inf. But what if you could somehow pick just finitely many of them? Then you could take a min and get a positive number. Now, that exact approach doesn't really work, but something pretty close does work, and situations just like that keep coming up: you've got an infinite collection of open balls, and want to take the minimum radius, but "min" has to be "inf" and it might be zero. At some point, you say "Oh, hell. This proof isn't working, and something like that graph-and- $x$ -axis problem keeps messing me up. How 'bout I just restate the claim and say that I'm only doing this for sets where my infinite collection of open sets can always be reduced to a finite collection?" 
Your skeptical colleague from across the hall comes by and you explain your idea, and colleague says "You're restricting your theorem to these 'special' sets, ones where every covering by open sets has a finite subcover .. .that seems like a pretty extreme restriction. Are there actually any sets with that property?" And you go off and work for a while and convince yourself that the unit interval has that property. And then you realize that in fact if $X$ is special and $f$ is continuous, then $f(X)$ is also special, so suddenly you've got tons of examples, and you can tell your colleague that you're not just messing around with the empty set. But the colleague then asks, "Well, OK. So there are lots of these. But this finite-subcover stuff seems pretty...weird. Is there some equivalent characterization of these special sets?" It turns out that there's not -- the "change infinite into finite" is really the secret sauce. But in some cases -- like for "subsets of $\Bbb R^n$ -- there is an equivalent characterization, namely "closed and bounded". Well, that's something everyone can understand, and it's a pretty reasonable kind of set, so you need a word. Is "compact" the word I'd have chosen? Probably not. But it certainly matches up with the "bounded"-ness, and it's not such a bad word, so it sticks. The key thing here is that the idea of compactness arises because of multiple instances of people trying to do stuff and finding it'd all work out better if they could just replace a cover by a finite cover, often so that they can take a "min" of some kind. And once something gets used enough, it gets a name. [Of course, my "history" here is all fiction, but there are plenty of cases of this sort of thing getting named. Phrases like "in general position", for instance, arise to keep us out of the weeds of endless special cases that are arbitrarily near to perfectly nice cases.] Sorry for the long and rambling discourse, but I wanted to make the case that stumbling on the notion of compactness (or "linear transformation", or 'group') isn't that implausible. One of the big problems I had when first learning math was that I thought all this stuff was handed down to Moses on stone tablets, and didn't realize that it arose far more organically. Perhaps one of the tip-offs was when I learned about topological spaces, and one of the classes of spaces was "T-2 1/2". It seemed pretty clear that someone skipped over something and then went back and filled in a spot that wasn't there by giving a "half-number" as a name. (This could well be wrong, but it's sure how it looked to a beginner!)
{ "source": [ "https://math.stackexchange.com/questions/3314864", "https://math.stackexchange.com", "https://math.stackexchange.com/users/405572/" ] }
3,314,967
A biased coin is tossed. Probability of Head - $\frac{1}{8}$ Probability of Tail - $\frac{7}{8}$ A liar watches the coin toss. Probability of his lying is $\frac{3}{4}$ and telling the truth is $\frac{1}{4}$ . He says that that the outcome is Head. What is the probability that the coin has truly turned Head? My Attempt : I used the formula: $$P(A \mid B) = \frac{P(A\cap B)}{P(B)}$$ $\ \ \ \ \ \ $ P(it is head GIVEN liar said it's head) = P(it's head AND liar said it's head) / P(liar said it's head) or, P(it is head GIVEN liar said it's head) = $\frac{ \frac{1}{8} \frac{1}{4} }{ \frac{7}{8}\frac{3}{4} + \frac{1}{8}\frac{1}{4} }$ [using a probability tree will be helpful here] or, P(it is head GIVEN liar said it's head) = $\frac{1}{22}$ The Question: Is the method I used wrong in any way? Some others I have talked to are saying the answer will be $\frac{1}{4}$ . Their reasoning is this: since the liar lies 3 times out of 4 and he said it is head, then the probability of it being head is 1/4. So who is right? What will be the answer?
You're correct, an easy way to check this is by writing out the possible outcomes. In a perfect world, if we flip the coin 32 times the following will happen: 21 times it lands tails and the liar said head. 7 times it lands tails and the liar said tails. 3 times it lands head and the liar said tails. 1 time it lands head and the liar said head. Since it is given that the liar said head there are 22 options left over, only one of which has the coin actually landing head. I know this is not the proper way of solving this, but I always found it useful to write things out like this when I was getting confused about conditional probability.
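The same count, written as an explicit Bayes computation plus a quick simulation (purely illustrative):

```python
import random

# exact computation
p_head, p_truth = 1 / 8, 1 / 4
p_says_head = p_head * p_truth + (1 - p_head) * (1 - p_truth)   # 1/32 + 21/32 = 22/32
print(p_head * p_truth / p_says_head)                           # 1/22, about 0.045

# simulation cross-check
said_head = head_and_said_head = 0
for _ in range(1_000_000):
    head = random.random() < p_head
    truthful = random.random() < p_truth
    says_head = head if truthful else not head
    if says_head:
        said_head += 1
        head_and_said_head += head
print(head_and_said_head / said_head)                           # close to 1/22
```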
{ "source": [ "https://math.stackexchange.com/questions/3314967", "https://math.stackexchange.com", "https://math.stackexchange.com/users/517063/" ] }
3,315,735
The reason I'm asking this question: I work at the National Museum of Mathematics and, amidst my sundry duties (which generally have nothing to do with the exhibits), I do have the authority to alter some of the text in the exhibit descriptions, being the only on-site, full-time, PhD-holding mathematician who works there. When I see things that I know to be wrong, I seek to fix them (though it can take a long time to get to it). I thought it would be interesting to, in a case where I'm not sure, look for a consensus among the community in this way. The usage of the phrase in question refers to a surface embedded in $3$ -dimensional space. In particular, an exhibit in which one can view $2$ -dimensional solutions to equations in $3$ variables is advertised as: "Bring formulas to life by exploring the multiple number of unusual three-dimensional surfaces they can create." My opinion is that a surface is, by definition, 2-dimensional, and that the only reason someone would use the phrase "3-dimensional surface" is if they are not familiar with proper mathematical nomenclature. However, I want to know what the community thinks. Update Now that I no longer work at MoMath, I can say that I was reprimanded by the Director for this post, who also refused to consider any change to the text in question. For context, a colleague found a typo in someone's last name in other exhibit info, and fixing that was also rejected. People should know that that Museum needs a lot of help, politically (the math errors are the tip of the iceberg). I did my part but more people from the math community should look into this and speak up.
I have seen some authors who use "surface" as an equivalent of "manifolds". Personally I don't like it when someone refers to an $n$ -dimensional manifold ( $n\neq2$ ) as a surface; it is contradictory to intuition. But I wouldn't say it is wrong. However, in your case, I do think it is wrong to confuse a "3-dimensional surface" with a "2-dimensional surface embedded in $\mathbb R^3$ ". I'd suggest changing it to "Bring formulas to life by exploring the multiple number of unusual surfaces they can create in three-dimensional space". (The audience probably wouldn't notice a thing.) Situation 1: the description reads "...3-d surfaces..." general public: "cool." mathematician: "bad terminology." Situation 2: the description reads "...surfaces...in 3-d space" general public: "cool." mathematician: "cool and rigorous."
{ "source": [ "https://math.stackexchange.com/questions/3315735", "https://math.stackexchange.com", "https://math.stackexchange.com/users/67563/" ] }
3,317,613
In the set of real numbers, I wonder whether the distributive law uniquely determines multiplication. Suppose that for a function $f$ : $\Bbb{R}\times\Bbb{R}$ $\to$ $\Bbb{R}$ the following hold for every $x,y,z$ , where $+$ is the usual addition (as defined via Cauchy sequences of rationals), and $1$ is the known natural number: $f(x+y,z) = f(x,z) + f(y,z)$ $f(x, y+z) = f(x,y) + f(x,z)$ $f(1,x) = f(x,1) = x $ From the above does it follow that $f(x,y) = xy$ , the usual multiplication? In this post: Are the addition and multiplication of real numbers, as we know them, unique? a somewhat related "dual" question is asked concerning addition , and a simple solution is given in the form of $(x^3+y^3)^{1/3}$ . So am I missing something obvious here?
For one thing $\Bbb R$ is a $\Bbb Q$ -vector space, therefore by choosing a Hamel basis it is possible to define uncountably many symmetric $\Bbb Q$ -bilinear maps $\Bbb R\times\Bbb R\to\Bbb R$ , even with the restriction $\phi(1,\bullet)=\phi(\bullet,1)=id$ . The only continuous one among these is the usual product, though.
{ "source": [ "https://math.stackexchange.com/questions/3317613", "https://math.stackexchange.com", "https://math.stackexchange.com/users/343572/" ] }
3,317,618
A division of a number $n$ into parts $a_1,...,a_r$ with $a_1 \le ... \le a_r$ is called plain if $a_1 = 1$ and $a_i - a_{i-1} \le 1$ for $2 \le i \le r$ . Find the enumerator (generating function) for plain divisions. my try The hint was to use a bijection between plain divisions and some commonly known enumerator. I tried to use the enumerator of divisions into distinct parts: $$ (1+x)(1+x^2)...(1+x^r)$$ so that the number of plain divisions would be $$[x^n](1+x)(1+x^2)...(1+x^r) $$ Let the function $$f(n,r) = [x^n](1+x)(1+x^2)...(1+x^r) $$ For the first few divisions it works. For example: $$f(4,3) = 1 $$ $$f(6,3) = 1 $$ $$f(11,5) = 2$$ But when I tried to find the bijection, I failed. I also found that this function isn't correct, because $f(15,6) = 4$ but it should be equal to $3$ , since: $$15 = 1,1, 2, 3, 4, 4 \\ 15 = 1, 2, 2, 3, 3, 4\\ 15 = 1, 2, 3, 3, 3, 3 $$ There I got stuck.
{ "source": [ "https://math.stackexchange.com/questions/3317618", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,318,422
I am talking about classical logic here. I admit this might be a naive question, but as far as I understand it: Syntactic entailment means there is a proof using the syntax of the language, while on the other hand semantic entailment does not care about the syntax, it simply means that a statement must be true if a set of other statements are also true. That being said, isn't semantic entailment sufficient to know whether or not a statement is true? Why do we need syntactic proofs? Granted I know that in the case of boolean logic, proving statements by truth tables gets intractable very fast, but in essence, isn't semantic entailment "superior"? As it does not rely on how we construct the grammar? Thank you Edit: Suppose it wasn't the case that finding a satisfying assignment to an arbitrary boolean statement is an exponentially increasing problem, then would we even need syntactic entailment?
First of all, let me set the terminology straight: By a syntactic proof ( $\vdash$ ) we mean a proof purely operating on a set of rules that manipulate strings of symbols, without talking about semantic notations such as assignment, truth, model or interpretation. A syntactic proof system is one that says, for example, "If you have $A$ written on one line and $B$ on another line, then you are allowed to write the symbols $A \land B$ on a line below that." Examples of syntactic proof systems are Hilbert-style calculi, sequent calculi and natural deduction in their various flavors or Beth tableaus aka truth trees. By a semantic proof ( $\vDash$ ) we mean a proof operating on the semantic notions of the language, such as assignment, truth, model or interpretation. Examples of semantic proofs are truth tables, presentation of (counter) models or arguments in text (along the lines of "Suppose $A \to B$ is true. Then there is an assignment such that..."). Furthermore, the term "entailment" is usually understood as a purely semantic notion ( $\vDash$ ), while the syntactic counterpart ( $\vdash$ ) is normally referred to as derivability . (The division " $\vDash$ = semantics/models and $\vdash$ = syntax/proofs" is oversimplifying matters a bit -- proof theoretic semantics , for example, argues that a semantics can be established in terms of formal (= "syntactic") proofs rather than just by model-theoretic considerations, but for the sake of this explanation, let's keep this more simple two-fold distinction up.) I'm clarifying this because the way you set out things is not entirely accurate: Syntactic entailment means there is a proof using the syntax of the language In a way yes, the syntax of a logic is always relevant when talking about notions such as entailment or derivability -- but what is the crucial feature that makes us call this notion syntactic? It isn't that the syntax of the language is involved in establishing entailment or derivability relations. The crucial feature is that the set of rules we use is purely syntactic, i.e. merely operating on strings of symbols, without making explicit reference to meaning. while on the other hand semantic entailment does not care about the syntax That's not quite accurate -- in order to establish the truth value of a formula and hence notions such as validity or entailment, we have to investigate the syntax of a formula in order to determine any truth value at all. After all, truth is defined inductively on the structure (= the syntax) of formulas: " $[[A \land B]]_v = \text{true iff} [[A]]_v = \text{true and } [[B]]_v = \text{true}...$ " If we didn't care about syntax, then we couldn't talk about semantics at all. Now to your actual question: Why should we care about syntactic proofs if we can show semantically that statements are true? The short answer is: Because syntactic proofs are often a lot easier. For propositional logic, the world is still relatively innocent: We can just write down a truth table, look at the truth value at each formula and decide whether or not it is the case that all the lines where the columns of all of the premises have a "true" also have the conclusion column as "true". As you point out, this procedure quickly explodes for formulas with many propositional variables, but it's still a deterministic procedure that's doable in finite time. We could also present a natural language proof arguing in terms of truth assignments. 
This can be a bit more cumbersome, but might be more instructive, and is still relatively handleable for the relatively simple language and interpretation of propositional logic. But things get worse when we go into first-order logic. Here we are confronted with formulas that quantify over models whose domains are potentially infinite. Even worse, in contrast to propositional logic where the number of assignments is (~= interpretations) always finite and completely determined by the number of propositional variables, the structures (~= interpretations) in which a first-order formula might or might not be true are unlimited in size and shape. That is, not only can the structures themselves be infinite, but we are now facing an infinite amount of structures in which our formulas can be interpreted in the first place. Simply writing down a truth table will no longer work for the language of predicate logic, so determining the truth value -- and hence semantic properties and relations such as validity and entailment -- is no longer a simple deterministic procedure for predicate logic. So if we want to present a semantic proof, we have to revert to arguments about the structures that a formula does or does not satisfy. This is where an interesting duality enters: To prove that an existentially quantified semantic meta-statement is true (For example "The formula $\phi$ is satisfiable", i.e. " There exists a model of $\phi$ ) or a universally quantified semantic meta-statement is false (For example $\not \vDash \phi$ , "The formula $\phi$ is not valid", i.e. " It is not the case that all structures satisfy $\phi$ ) it suffices to provide one (counter)model and we're done: If we find just one structure in which $\phi$ is true then we know that $\phi$ is satisfiable, and conversely, if we find one structure in which $\phi$ is not true then we know that $\phi$ is not valid. Analogously, to prove that an existentially quantified object-language formula ( $\exists x ...$ ) is true in a structure or a universally quantified object-language formula ( $\forall x ...$ ) is false in a structure, it suffices to find one element in the structure's domain that provides an example for the existentially quantified formula or, respectively, a counterexample to the universal quantification and we're done. However, to prove that a universally quantified semantic meta-statement is true (For example $\vDash \phi$ , "The formula $\phi$ is valid", i.e. " All structures satisfy $\phi$ ), or an existentially quantified semantic meta-statement is false (For example "The formula $\phi$ is unsatisfiable", i.e. " There exists no model of $\phi$ ), we are suddenly faced with the difficult task of making a claim about all possible structures. We can not simply list them, as there are infinitely many of them, so all we can do is write a natural-language text arguing over the possible truth values of formulas eventually showing that all structures must succeed or fail to fulfill a certain requirement. Analogously, to prove that an universally quantified object-language formula ( $\forall x ...$ ) is true in a structure or an existentially quantified object-language formula ( $\exists x ...$ ) is false in a structure, we have to iterate over all the elements in the structure's domain. 
If the domain is finite, we are lucky and can simply go through all of the possible values (although this may take quite some time if the domain is large enough), but if it is infinite, there is no way we can ever get done if we just brute-force check the formula for the elements one after the other. This is a rather unpleasant situation, and exactly the point where syntactic proofs come in very handy. Recall the definition of entailment: $\Gamma \vDash \phi$ iff all interpretations that satisfy all formulas in $\Gamma$ also satisfy $\phi$ , or equivalently: $\Gamma \vDash \phi$ iff there is no interpretation that satisfies all formulas in $\Gamma$ but not $\phi$ . This is precisely the kind of universal quantification that makes purely semantic proofs difficult: We would have to establish a proof over the infinity of all the possible structures to check whether the semantic entailment relation does or does not hold. But now look at the definition of syntactic derivability: $\Gamma \vdash \phi$ iff there is a derivation with premises from $\Gamma$ and conclusion $\phi$ . The nasty universal quantifier suddenly became an existential one! Rather than having to argue over all the possible structures, it now suffices to show just one syntactic derivation and we're done. (The same applies to the case where we don't have any premises, i.e. $\vDash \phi$ (" $\phi$ is valid" = "true in all structures") vs. $\vdash \phi$ (" $\phi$ is derivable" = "there is a derivation with no open assumptions and $\phi$ as the conclusion").) This is an enormous advantage -- call it "superior" if you want. Now we have a kind of disparity: For some things semantics is hard whereas syntax is easy, so how can we use this disparity for good? Luckily, in the case of classical logic, we are equipped with soundness and completeness: Soundness: If $\Gamma \vdash \phi$ , then $\Gamma \vDash \phi$ -- if we found a syntactic derivation, then we know that the entailment holds semantically. Completeness: If $\Gamma \vDash \phi$ , then $\Gamma \vdash \phi$ -- if a semantic entailment holds, then we can find a syntactic derivation. While any reasonable derivation system will be sound w.r.t. the language's semantics, completeness is a non-trivial and very powerful result: If we want to prove a semantic entailment, by completeness we know that there must be a syntactic derivation, so we can go find just one such derivation, and as soon as we do, soundness assures us that this is indeed a proof that the entailment holds semantically. Hence, we can use syntactic proofs to avoid cumbersome semantic arguments that involve meta-logical quantification over all structures. This is pretty neat. Now note how things turn around for the syntactic calculus: To prove that a universally quantified syntactic meta-statement is true or an existentially quantified syntactic meta-statement is false (For example $\not \vdash \phi$ , "The formula $\phi$ is underivable", i.e. "There is no derivation with conclusion $\phi$ "/"All attempts to find a derivation with conclusion $\phi$ eventually fail"), we would have to argue over all possible syntactic proofs, which can again be difficult.
Now we can apply the soundness and completeness results in the other direction: If we want to show that a formula is not derivable, then by contraposition on completeness we know it is not valid (because if it were, then by completeness there would be a derivation), so we can carry out a semantic proof by providing just one counter-model to the validity of $\phi$ and we're almost done; because then again by contraposition on soundness, we can be sure that if the formula is not valid, there will be no derivation (because if there were a derivation for something that is not semantically valid, our system would be unsound), so we have our proof of the underivability without needing to argue over hypothetical derivations that can't exist. And this is precisely how the aforementioned duality comes about:

--------------------------------------------------------------------------------
            semantic                           syntactic
--------------------------------------------------------------------------------
positive    ⊨                                  ⊢
            universal quantif.                 existential quantif.
            ("all structures"/                 ("there is a derivation"/
            "no structure such that not")      "not all derivations fail")
            => difficult                       => easy

negated     ⊭                                  ⊬
            negated universal quantif.         negated existential quantif.
            ("not all structures"/             ("there is no syntactic proof"/
            "there exists a counter-model")    "all attempts at proofs fail")
            => easy                            => difficult
--------------------------------------------------------------------------------

Thanks to soundness and completeness, the duality of semantic and syntactic proofs can help us bridge the difficult parts:

$\vDash$ ("all structures" -- difficult) $\ \xrightarrow{\text{completeness}}\ $ $\vdash$ ("some derivation" -- easy) $\ \xrightarrow{\text{soundness}}\ $ $\vDash$

$\not \vdash$ ("no derivation" -- difficult) $\ \xrightarrow{\text{contrapos. completeness}}\ $ $\not \vDash$ ("some countermodel" -- easy) $\ \xrightarrow{\text{contrapos. soundness}}\ $ $\not \vdash$

Putting these bridges into the picture from before:

------------------------------------------------------------------------------
            semantic                           syntactic
------------------------------------------------------------------------------
                        completeness
                       ------------->
positive    ⊨                                  ⊢
                       <-------------
                          soundness

                   contrapos. completeness
                 <-----------------------
negated     ⊭                                  ⊬
                 ----------------------->
                   contrapos. soundness
------------------------------------------------------------------------------

I think the existence of syntactic calculi already is wonderful enough simply for the mathematical beauty of this symmetry.
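To make the propositional case concrete, here is a small sketch (not from the answer above; the helper names are arbitrary) of what "checking entailment semantically" amounts to: a brute-force walk over all truth assignments.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Semantic entailment check: every assignment satisfying all premises
    must also satisfy the conclusion. Formulas are functions of a dict."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False          # found a counter-assignment (countermodel)
    return True

# {A -> B, A} |= B  holds;  {A -> B} |= A  does not.
A_implies_B = lambda v: (not v["A"]) or v["B"]
A = lambda v: v["A"]
B = lambda v: v["B"]
print(entails([A_implies_B, A], B, ["A", "B"]))  # True
print(entails([A_implies_B], A, ["A", "B"]))     # False
```

The loop runs over $2^n$ assignments, which is exactly the finite-but-exploding procedure described above; the first-order case has no such finite loop, which is where the syntactic calculus earns its keep.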
{ "source": [ "https://math.stackexchange.com/questions/3318422", "https://math.stackexchange.com", "https://math.stackexchange.com/users/586524/" ] }
3,319,587
I'm a freshman in university and I'm studying Computer Science and Engineering. This will be my second year of studying. We don't have Calculus as a mandatory class but I can take it from the elective classes. More senior students are telling me calculus is a very hard class and I shouldn't take it. Should I take easier classes just to pass? Or should I take it anyway, even if I'm really bad at math but enjoy math a lot? Is calculus necessary for my future as a student, and would it help me in data science or AI? That's what I'm really interested in and I want to work in either of them. Would Calculus make my education easier in the future and in my work, or should I just take something to pass? In my next semesters, I want to take just AI and DS classes (Data Mining, Data Science, Machine Learning, etc.). Thanks for your time reading and answering my question.
A software engineer probably does not need to study calculus, and it is less likely to be useful than graph theory, elementary logic, study of algorithms, etc. Of course, if you are implementing algorithms for use in science and engineering, calculus and numerical methods for approximating calculus operations will show up all of the time. AI, on the other hand, is all about calculus (despite the best attempts of the machine learning community to "rebrand" concepts like numerical optimization, the chain rule, gradient descent, etc.) It's hard for me to imagine a successful data analyst or AI researcher who doesn't know at least the basics of calculus. EDIT: In response to the answer suggesting you do not need calculus to be a data scientist at a company like Google, consider this blog post from a Googler with advice on the job search: Math like linear algebra and calculus are more or less expected of anyone we’d hire as a data scientist
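As a toy illustration of the "AI is all about calculus" point (this sketch is not from the answer; the numbers are arbitrary), gradient descent is nothing more than repeatedly stepping against a derivative:

```python
# Minimize f(w) = (w - 3)^2 by gradient descent; its derivative is f'(w) = 2(w - 3).
def f_prime(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1          # starting point and learning rate (arbitrary choices)
for _ in range(50):
    w -= lr * f_prime(w)  # the update rule at the heart of most training loops
print(w)                  # approximately 3.0, the minimizer
```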
{ "source": [ "https://math.stackexchange.com/questions/3319587", "https://math.stackexchange.com", "https://math.stackexchange.com/users/694966/" ] }
3,321,398
Given a differentiable real-valued function $f$ , the arclength of its graph on $[a,b]$ is given by $$\int_a^b\sqrt{1+\left(f'(x)\right)^2}\,\mathrm{d}x$$ For many choices of $f$ this can be a tricky integral to evaluate, especially for calculus students first learning integration. I've found a few choices of $f$ that make the computation pretty easy: Letting $f$ be linear is super easy, but then you don't even need the formula. Taking $f$ of the form $(\text{stuff})^{\frac{3}{2}}$ might work out nicely if $\text{stuff}$ is chosen carefully. Calculating it for $f(x) = \sqrt{1-x^2}$ is alright if you remember that $\int\frac{1}{x^2+1}\,\mathrm{d}x$ is $\arctan(x)+C$ . Letting $f(x) = \ln(\sec(x))$ results in $\int\sec(x)\,\mathrm{d}x$ , which classically sucks. But it looks like most choices of $f$ suggest at least a trig substitution $f'(x) \mapsto \tan(\theta)$ , and will be computationally intensive, and unreasonable to ask a student to do. Are there other examples of a function $f$ such that computing the arclength of the graph of $f$ won't be too arduous to ask a calculus student to do?
Ferdinands, in his short note "Finding Curves with Computable Arc Length" , also comments on the difficulty of coming up with suitable examples of curves with easily-computable arclengths. In particular, he gives a simple recipe for coming up with examples: let $$f(x)=\frac12\int \left(g(x)-\frac1{g(x)}\right)\,\mathrm dx$$ for some suitably differentiable $g(x)$ over the desired integration interval for the arclength. The arclength over $[a,b]$ is then given by $$\frac12\int_a^b\left(g(x)+\frac1{g(x)}\right)\,\mathrm dx$$ $g(x)=x^{10}$ and $g(x)=\tan x$ are some of the example functions given in the article that are amenable to this recipe.
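For the record, here is a quick check of why the recipe works (a one-line verification of my own, assuming $g(x)>0$ on $[a,b]$), together with the simplest instance:
$$f'(x)=\frac12\left(g(x)-\frac{1}{g(x)}\right)\ \Longrightarrow\ 1+\left(f'(x)\right)^2=1+\frac14\left(g^2-2+\frac{1}{g^2}\right)=\left[\frac12\left(g(x)+\frac{1}{g(x)}\right)\right]^2,$$
so $\sqrt{1+\left(f'(x)\right)^2}=\frac12\left(g(x)+\frac{1}{g(x)}\right)$, which is what makes the arclength integrand elementary. For instance, $g(x)=x$ gives $f(x)=\frac{x^2}{4}-\frac12\ln x$, whose arclength on $[1,2]$ is $\frac12\int_1^2\left(x+\frac1x\right)\,\mathrm{d}x=\frac34+\frac12\ln 2$.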
{ "source": [ "https://math.stackexchange.com/questions/3321398", "https://math.stackexchange.com", "https://math.stackexchange.com/users/167197/" ] }
3,324,375
What axiom or definition says that mathematical operations like +, -, /, and * operate on imaginary numbers? In the beginning, when there were just reals, these operations were defined for them. Then, i was created, literally a number whose value is undefined, in the same way that, e.g., one divided by zero is undefined. Does anyone know how mathematical operations' ranges and domains were expanded to include imaginaries? EDIT: An interesting comment notes a first use of complex numbers where those values would cancel in the end. But can I refute that with "from an inconsistency, anything is provable"? A corollary question: Could I define a new number z which is 1/0 and simply begin using it? Seems ludicrous.
We can do anything we want! Specifically, we can define anything we want (as long as our definitions don't contradict each other). So if we want to allow ourselves to use imaginary numbers, all we have to do is write something like the following: Define a complex number as an ordered pair of the form $(a, b)$ , where $a$ and $b$ are real numbers. Define $i$ as the complex number $(0, 1)$ . If $(a, b)$ and $(c, d)$ are complex numbers, define $(a, b) + (c, d)$ as $(a + c, b + d)$ . If $(a, b)$ and $(c, d)$ are complex numbers, define $(a, b) \cdot (c, d)$ as $(ac - bd, ad + bc)$ . And define subtraction and division in similar ways. Is that it? Are we done? No, there's still more that we want to do. There are a lot of useful theorems about real numbers that also apply to the complex numbers, but we don't know that they apply to the complex numbers until we prove them. For example, one very useful theorem about real numbers is: Theorem : If $a$ and $b$ are real numbers, then $a + b = b + a$ . The analogous theorem about the complex numbers is: Theorem (not yet proven): If $a$ and $b$ are complex numbers, then $a + b = b + a$ . This theorem is, in fact, true, but we didn't know that it was true until somebody proved it. Once we've proved all the theorems that we want to prove, then we can say that we're "done." (Do we have to prove these theorems? No, we don't have to if we don't want to. But without these theorems, complex numbers aren't very useful.) As for your corollary question: Could I define a new number $z$ which is $1/0$ and simply begin using it? Seems ludicrous. Yes, you absolutely can! All you have to do is write: Assume that there is a value $z$ . Define $1/0$ as $z$ . And that's perfectly valid; this definition doesn't contradict any other definitions. This is completely legal, acceptable and proper. Is that it? Are we done? Probably not; there's more we'd like to do. For example, what do you suppose $z \cdot 0$ is? There are a couple of theorems here we might like to use, but we can't. Let's take a look at them: Theorem : If $x$ is a real number, then $x \cdot 0 = 0$ . Theorem : If $x$ and $y$ are real numbers, and $y \ne 0$ , then $(x / y) \cdot y = x$ . Do you see why we can't use these theorems? Does the first theorem tell us that $z \cdot 0 = 0$ ? No, because we don't know that $z$ is a real number. So the first theorem doesn't apply. How about the second theorem? We know that $z = 1/0$ . Does the second theorem tell us that $(1 / 0) \cdot 0 = 1$ (and therefore $z \cdot 0 = 1$ )? No, because the second theorem is only applicable when the denominator is not $0$ , and here, the denominator is $0$ . So the second theorem doesn't apply, either. If we want, we can add more definitions and maybe make some of these theorems "work" for $z = 1/0$ , just like we have a lot of theorems that "work" for the complex numbers. But when we do this, we encounter a lot of problems. Rather than dealing with these problems, most mathematical writers simply refuse to define $1/0$ . (That's what the sentence " $1/0$ is undefined" means: the expression $1/0$ is an undefined expression, because we have refused to define it.)
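If it helps to see the ordered-pair definitions actually compute, here is a tiny sketch (not part of the answer; the function names are arbitrary):

```python
def add(z, w):
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
print(mul(i, i))                  # (-1, 0): i*i behaves like -1 under the definition
z, w = (2.0, 3.0), (-1.0, 4.0)
print(add(z, w) == add(w, z))     # True -- an instance of the commutativity theorem
print(mul(z, w) == mul(w, z))     # True
```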
{ "source": [ "https://math.stackexchange.com/questions/3324375", "https://math.stackexchange.com", "https://math.stackexchange.com/users/145307/" ] }
3,325,462
I was recently reading up about computational power and its uses in maths particularly to find counterexamples to conjectures. I was wondering are there any current mathematical problems which we are unable to solve due to our lack of computational power or inaccessibility to it. What exactly am I looking for? Problems of which we know that they can be solved with a finite (but very long) computation? (e. g. NOT the Riemann hypothesis or twin prime conjecture) I am looking for specific examples.
Goldbach's weak conjecture isn't a conjecture anymore, but before it was proved (in 2013), it had already been proved that it was true for every $n>e^{e^{16\,038}}$ . It was not computationally possible to test it for all numbers $n\leqslant e^{e^{16\,038}}$ though.
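As a small illustration of what "testing it" means, and why brute force could never reach bounds like the one quoted (this sketch is not from the answer): checking the weak conjecture, that every odd number greater than 5 is a sum of three primes, is easy for small cases.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [p for p in range(2, 1000) if is_prime(p)]

def three_prime_sum(n):
    for p in primes:
        if p > n:
            break
        for q in primes:
            if p + q >= n:
                break
            if is_prime(n - p - q):
                return (p, q, n - p - q)
    return None

for n in range(7, 1000, 2):
    assert three_prime_sum(n) is not None
print("weak Goldbach verified for all odd n in [7, 999]")
```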
{ "source": [ "https://math.stackexchange.com/questions/3325462", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,325,466
Is $$\int_a^b \big(f(x)\big)^+\mathrm{d}x = \left( \int_a^b f(x) \mathrm{d}x \right)^+$$ provided that $f:[a,b]\to\mathbb{R}$ is integrable? This means, can taking positive part and integration be swapped for nice functions? Here, $(x)^+=\max\{x,0\}$ is the positive part of $x$ . Edit: I see that this formula does not hold in general. What about if $a=0$ and $$f(x)= C e^{-px}-De^{-qx}$$ with $C,D,p,q\in\mathbb{R}$ and $C,D\geq0$ . Does the formula hold in this special case?
{ "source": [ "https://math.stackexchange.com/questions/3325466", "https://math.stackexchange.com", "https://math.stackexchange.com/users/694200/" ] }
3,325,684
Consider the following two generating functions: $$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$ $$\log\left(\frac{1}{1-x}\right)=\sum_{n=1}^{\infty}\frac{x^n}{n}.$$ If we live in function-land, it's clear enough that there is an inverse relationship between these two things. In particular, $$e^{\log\left(\frac{1}{1-x}\right)}=1+x+x^2+x^3+\ldots$$ If we live in generating-function-land, this identity is really not so obvious. We can figure out that the coefficient of $x^n$ in $e^{\log\left(\frac{1}{1-x}\right)}$ is given as $$\sum_{a_1+\ldots+a_k=n}\frac{1}{a_1\cdot \cdots \cdot a_k}\cdot \frac{1}{k!}$$ where the sum runs over all ways to write $n$ as an ordered sum of positive integers. Supposedly, for each choice of $n$ , this thing sums to $1$ . I really don't see why. Is there a combinatorial argument that establishes this?
In your sum, you are distinguishing between the same collection of numbers when it occurs in different orders. So you'll have separate summands for $(a_1,a_2,a_3,a_4)=(3,1,2,1)$ , $(2,3,1,1)$ , $(1,1,3,2)$ etc. Given a multiset of $k$ numbers adding to $n$ consisting of $t_1$ instances of $b_1$ up to $t_j$ instances of $b_j$ , that contributes $$\frac{k!}{t_1!\cdot\cdots\cdot t_j!}$$ (a multinomial coefficient) summands to the sum, and so an overall contribution of $$\frac{1}{t_1!b_1^{t_1}\cdot\cdots\cdot t_j!b_j^{t_j}}$$ to the sum. But that is $1/n!$ times the number of permutations with cycle structure $b_1^{t_1}\cdots b_j^{t_j}$ . So this identity states that the total number of permutations of $n$ objects is $n!$ .
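A quick numerical sanity check of the identity (a sketch of my own, not part of the answer) confirms that the sum over ordered compositions really is $1$ for each $n$:

```python
from fractions import Fraction
from math import factorial

def compositions(n):
    """All ordered tuples of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def total(n):
    s = Fraction(0)
    for parts in compositions(n):
        prod = 1
        for a in parts:
            prod *= a
        s += Fraction(1, prod * factorial(len(parts)))
    return s

for n in range(1, 9):
    print(n, total(n))   # exact arithmetic: every line prints 1
```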
{ "source": [ "https://math.stackexchange.com/questions/3325684", "https://math.stackexchange.com", "https://math.stackexchange.com/users/174927/" ] }
3,325,718
I'd really appreciate your help with this one. I got the following: R = the real numbers, Q = the rational numbers, N = the natural numbers, and $ R^2 $ is the set of pairs $ (a,b)$ with $a,b$ being real numbers. $$ [(R - Q )\cap R^2 ]^c \cup N $$ My reasoning tells me R - Q would give me all the irrational numbers, and the intersection between the irrational numbers and $R^2$ would be empty. After that is where it gets fuzzy. Could someone clarify what the answer would be?
{ "source": [ "https://math.stackexchange.com/questions/3325718", "https://math.stackexchange.com", "https://math.stackexchange.com/users/695902/" ] }
3,328,227
From the 1951 novel The Universe Between by Alan E. Nourse. Bob Benedict is one of the few scientists able to make contact with the invisible, dangerous world of The Thresholders and return—sane! For years he has tried to transport—and receive—matter by transmitting it through the mysterious, parallel Threshold. [...] Incredibly, something changed. A pause, a sag, as though some terrible pressure had suddenly been released. Their fear was still there, biting into him, but there was something else. He was aware of his body around him in its curious configuration of orderly disorder, its fragments whirling about him like sections of a crazy quilt. Two concentric circles of different radii intersecting each other at three different points . Twisting cubic masses interlacing themselves into the jumbled incredibility of a geometric nightmare. The author might be just throwing some terms together to give the reader a sense of awe, but maybe there's some non-euclidean geometry where this is possible.
Yes, with the appropriate definition of "circle". Namely, define a circle of radius $R$ centered at $x$ on manifold $M$ to be the set of points which can be reached by a geodesic of length $R$ starting at $x$ . This seems pretty reasonable, and reproduces the usual definition in Euclidean space. It's not hard to see that concentric circles on a torus or cylinder can have four intersection points. (Here's how to interpret this picture: The larger circle has been wrapped in the y-direction, reflecting a torus or cylinder topology. Coming soon: A picture of this embedded in 3D.) By flattening one side of the torus a bit, you can make one side of the larger circle intersect the smaller circle at two points, while the other side just grazes at a single point*. Thus you get three intersections. *As a technical point, this can definitely be accomplished in Finsler geometry, though I'm not sure if it can be done in Riemannian geometry.
{ "source": [ "https://math.stackexchange.com/questions/3328227", "https://math.stackexchange.com", "https://math.stackexchange.com/users/142244/" ] }
3,333,357
One common definition of arclength is to just define it as a supremum of the set of lengths obtained by approximating your curve as a union of line segments (I was asked in the comments for a more precise definition; see https://en.wikipedia.org/wiki/Arc_length#Definition_for_a_smooth_curve ). The natural analogue of this to the surface area of a surface in 3 space fails quite spectacularly thanks to constructions such as the Schwarz lantern , which shows we can approximate a cylinder by polyhedra whose surface areas approach infinity! Is there an intuitive reason that polygonal approximation works so well for curves but fails so spectacularly for surfaces?
I'd give an intuitive reason as follows: in the case of a smooth curve, if points $A$ and $B$ are on the curve, then line $AB$ , as $B\to A$ , tends to the tangent at $A$ ; while in the case of a smooth surface, if points $A$ , $B$ and $C$ are on the surface, then plane $ABC$ needn't tend to the tangent plane at $A$ as $B,C\to A$ : the limit depends on the way $B$ and $C$ approach $A$ . This is not surprising, after all, if you think about how much subtler limits in two or more dimensions are than limits in one dimension.
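To see the failure numerically, here is a sketch of the Schwarz lantern for the unit cylinder of height 1 (my own implementation of the standard construction: consecutive levels rotated by half a step, each band triangulated with $2m$ triangles). When the number of levels $n$ grows like $m$, the triangles stay nearly tangent to the cylinder and the area tends to $2\pi$; when $n$ grows like $m^3$, the triangles tilt and the area blows up.

```python
from math import cos, sin, pi, sqrt

def tri_area(P, Q, R):
    u = [Q[i] - P[i] for i in range(3)]
    v = [R[i] - P[i] for i in range(3)]
    c = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    return 0.5 * sqrt(c[0]**2 + c[1]**2 + c[2]**2)

def lantern_area(m, n):
    def V(k, j):                      # vertex j on level k of the lantern
        t = 2*pi*j/m + k*pi/m
        return (cos(t), sin(t), k/n)
    area = 0.0
    for k in range(n):
        for j in range(m):
            area += tri_area(V(k, j), V(k, j+1), V(k+1, j))
            area += tri_area(V(k+1, j), V(k+1, j+1), V(k, j+1))
    return area

print(round(2*pi, 3))                          # true lateral area: 6.283
for m in (6, 12, 24):
    print(m, round(lantern_area(m, m), 3))     # n = m   -> approaches 2*pi
for m in (6, 12, 24):
    print(m, round(lantern_area(m, m**3), 1))  # n = m^3 -> grows without bound
```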
{ "source": [ "https://math.stackexchange.com/questions/3333357", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,335,454
This is a basic question in combinatorics with a little trick. Consider the following triangular array of numbers: How many paths from a 1 on the diagonal to the 10 in the lower right where we only step to the right or down are there? For example, here is a legal path: Batominovski's edit: Attempt: It looks like each path is associated to a sequence of down steps ( $D$ ) and right steps ( $R$ ) of length $9$ . The example above corresponds to $DRDDDRRDD$ . Can it be any sequence? What is a correct answer?
You can think of the problem as starting with the $10$ and following the numbers in order until you reach $1$ . Then there are two ways for each step: up or left. So there are $2^{10-1}=2^9=512$ ways. Done!
{ "source": [ "https://math.stackexchange.com/questions/3335454", "https://math.stackexchange.com", "https://math.stackexchange.com/users/144267/" ] }
3,341,323
As the title indicates, I'm curious why direct proofs are often preferable to indirect proofs. I can see the appeal of a direct proof, for it often provides more insight into why and how the relationship between the premises and conclusions works, but I would like to know what your thoughts are concerning this. Thanks! Edit: I understand that this question is quite subjective, but that is my intention. There are people who prefer direct proofs to proofs by contradiction, for example. My curiosity concerns what makes a direct proof preferable to such individuals. In the past, I've had professors grimace whenever I did an indirect proof and show me that a direct proof was possible, but I never thought to ask them why a direct proof should be done instead. What's the point?
I just did a quick lookup and it suggested that the two flavors of indirect proof are contraposition and contradiction. What I'm about to say is criticizing contradiction, because contraposition seems fine to me. Imagine you have a 1000-statement direct proof. Then every step along the way is provable. Maybe somebody reads your proof and realizes that an observation you made halfway through is exactly the idea they need to solve a problem they have. Mathematical history has many examples of lemmas that are more famous than the theorems they originally supported. By contrast, a 1000-statement proof by contradiction starts out with two hypotheses that are inconsistent. Everything you're building is a logical house of cards that is intended to collapse at the end. Nothing you wrote can be counted on outside that framework without a separate analysis. If it truly takes both hypotheses to get you to the result, then so be it. But I was rightfully dinged by my professors when I wrote a proof by contradiction that could easily be modified into a direct proof by contraposition.
{ "source": [ "https://math.stackexchange.com/questions/3341323", "https://math.stackexchange.com", "https://math.stackexchange.com/users/203407/" ] }
3,341,394
We all know that a general cubic equation is of the form $$ax^3+bx^2+cx+d=0$$ where $$a\neq0.$$ It can be easily solved with the following simple substitutions: $$x\longmapsto x-\frac{b}{3a}$$ We get $$x^3+px+q=0$$ where $p=\frac{3ac-b^2}{3a^2}$ and $q=\frac{2b^3-9abc+27a^2d}{27a^3}$ Then, using the Vieta substitution, $$x\longmapsto x-\frac{p}{3x}$$ We get $$(x^3)^2+q(x^3)-\frac1{27}p^3=0$$ which can easily be turned into a quadratic equation, using the substitution $x^3 \longmapsto x.$ And here is my question: In mathematics, is there a substitution that is "different" from the substitution $x\longmapsto x-\frac{p}{3x}$ that can be used for the standard form cubic equation $x^3+px+q=0$ , and which can easily be turned into a quadratic equation? I'm curious if there's a new substitution I don't know about. Thank you!
The cubic is sufficiently venerable and sufficiently simple that there are numerous other ways to solve it. Here are three of the more notable; others can be found in, e.g., Galois' Theory of Algebraic Equations by J.-P. Tignol. Tschirnhaus Substitution: The idea is to do a more sophisticated substitution for $x$ than the linear one that removes the quadratic term: we can suppose that we start in the form $$ x^3 + px + q = 0 , $$ as it makes the calculations simpler. Since a linear substitution can be used to remove one term, one may suspect that a more general substitution can be used to remove more terms, and in particular, if we put $ y = \alpha x^2 + \beta x + \gamma $ , we might hope to choose the coefficients so that both the linear term and the quadratic term vanish at the same time. This turns out to be the case, but first we must eliminate $x$ from the two equations. We have $$ \begin{align} (y-\gamma)^2 &= x^2 (\alpha x + \beta)^2 = (\alpha^2 x + 2\alpha\beta) x^3 + \beta^2 x^2 \\ &= -(\alpha^2 x + 2\alpha\beta) (px+q) + \beta^2 x^2 \\ &= (-p\alpha^2 + \beta^2)x^2 - \alpha(q\alpha + 2p\beta )x -2\alpha\beta q \end{align}$$ and $$ \begin{align} (y-\gamma)^3 &= x^3(\alpha x+\beta)^3 \\ &= -(px+q)(\alpha^3 x^3 + 3\alpha^2 \beta x^2 + 3\alpha \beta^2 x + \beta^3 ) \\ &\vdots \\ &= ( p^2\alpha^2 - 3q\alpha\beta - 3p\beta^2 ) \alpha x^2 + ( 2pq\alpha^3+ 3p^2 \alpha^2\beta - 3q\alpha\beta^2 - p\beta^3 ) x + q(q\alpha^3+3p\alpha^2 \beta - \beta^3) \end{align}$$ A straightforward calculation then shows that $$ 0 = (y-\gamma)^3 + 2p\alpha (y-\gamma)^2 + (p^2\alpha^2+3q\alpha\beta + p\beta^2 )(y-\gamma) + q(-q\alpha^3 + p\alpha^2\beta +\beta^3) , $$ and then we can expand the brackets to get $$ 0 = y^3 + (2p\alpha-3\gamma) y^2 + ( p^2\alpha^2 + 3q\alpha\beta + p\beta^2 -4p\alpha\gamma+3\gamma^2 ) y + \text{const.}, $$ where the constant is both ugly and unimportant. In particular, we can force the $y^2$ term to vanish by taking $\gamma = 2p\alpha/3$ . This then leaves $$ 0 = y^3 + ( -\tfrac{1}{3} p^2\alpha^2 + 3q\alpha\beta + p\beta^2 ) y + \text{const.} $$ At this point we see that we lose nothing by taking $\alpha=1$ , and thus the solution of the cubic has been reduced to the solution of the equations $$ -\tfrac{1}{3} p^2 + 3q\beta + p\beta^2 = 0 \\ y^3 + \text{const.} = 0 \\ x^2 + \beta x + \tfrac{2}{3}p - y = 0 , $$ all of which are at worst quadratic or just require finding a cube root. Bézout's Method: This method was first considered by Bézout, and in a slightly different form by Euler. The idea is that by eliminating $z$ between the equations $$ x = a_0 + a_1 z + \dotsb + a_{n-1} z^{n-1} \\ z^n = 1 , $$ we obtain an equation of degree $n$ , and the idea is to make it the right equation of degree $n$ . But we know the solutions to $z^n=1$ , namely the $n$ th roots of unity. So eliminating $z$ , the remaining equation can be written in the form $$ \prod_{\omega} (x-(a_0+a_1 \omega + \dotsb + a_{n-1}\omega^{n-1})) = 0 . $$ It remains to choose the $a_i$ appropriately. Of course the dream is to do this for any equation, but for degree higher than $4$ , the degree of the equations one needs to solve becomes higher than that of the original equation, so this is a complete nonstarter. But for cubics, we can manage: if $$ x^3 + bx^2+cx+d = 0 $$ is the equation, we have $$ 0 = ( x - (a_0+a_1+a_2) )( x - (a_0+a_1 \omega + a_2 \omega^2) )( x - (a_0+a_1 \omega^2+a_2 \omega) ), \tag{1} $$ where $\omega$ is now a fixed primitive cube root of unity.
Expanding and using that $1+\omega+\omega^2=0$ , we obtain $$ (x-a_0)^3 - 3a_1 a_2 (x-a_0) -a_1^3 -a_2^3 = 0 , $$ or $$ x^3 - 3a_0 x^2 + 3(a_0^2 - a_1 a_2) x +(- a_0^3 - a_1^3 - a_2^3 + 3a_0 a_1 a_2) = 0 $$ We now solve the equations $$ -3a_0 = b \\ 3(a_0^2 - a_1 a_2) = c \\ - a_0^3 - a_1^3 - a_2^3 + 3a_0 a_1 a_2 = d ; $$ the first is linear and gives $a_0=-b/3$ , and the other two become $$ a_1 a_2 = \frac{b^2-3c}{9} \\ a_1^3 + a_2^3 = -d - \frac{2}{27} b^3 + \frac{bc}{3} , $$ which when one of the unknowns is eliminated becomes a bicubic equation for the other. This is not surprising: we know that if $a_1$ is a solution, so are $\omega a_1$ and $\omega^2 a_1$ , which means that $a_1^3$ has fewer possible values than $a_1$ . This is key in the next method. Lagrange Resolvents: Lagrange develops the first general ideas about how solvability by radicals should be built up, by observing how various methods, including those given above, work. Key in the theory is what happens to expressions containing the roots when the roots themselves are permuted, and it is this that eventually leads to what we now call Galois theory (in its original incarnation as the most general method for understanding the structure of solutions of equations using permutations of the roots). For the sake of simplicity, take the same cubic equation as the previous section. Notice that if the roots are $x_1,x_2,x_3$ , we can specify from $(1)$ that $$ x_1 = a_0 + a_1 + a_2 \\ x_2 = a_0 + \omega a_1 + \omega^2 a_2 \\ x_3 = a_0 + \omega^2 a_1 + \omega a_2 , $$ and so the $a_i$ can be expressed in terms of the $x_j$ , as $$ a_i = \frac{1}{3} \sum_{j=1}^3 \omega^{-ij} x_j , $$ by taking linear combinations of the equations. Suppose now that we relabel the roots. What happens to the $a_i$ ? There are $6$ possible ways to label the roots, so each $a_i$ can take at most $6$ values. But they certainly don't all take this many: we have $$ 3a_0 = x_1 + x_2 + x_3 = -b $$ by one of Vieta's formulae. On the other hand, a straightforward calculation shows that $a_1$ does generally have $6$ different values. Lagrange proves that if an expression has $n$ different values under all the permutations of the roots, it is the root of an equation of degree $n$ , the coefficients of which are rational functions of the coefficients of the original equation. So $a_1$ is the root of an equation of degree $6$ . On the other hand, $a_1^3$ takes only $2$ different values under permutations of the roots: notice that $$ (x_1+\omega x_2+ \omega^2 x_3)^3 = (x_2+ \omega x_3 + \omega^2 x_1)^3 = ( x_3 + \omega x_1 + \omega^2 x_2 )^3 \\ (x_2+\omega x_1+ \omega^2 x_3)^3 = (x_1+ \omega x_3 + \omega^2 x_2)^3 = ( x_3 + \omega x_2 + \omega^2 x_1 )^3 , $$ using $\omega^3=1$ . Therefore Lagrange tells us that $a_1^3$ is a root of a quadratic equation, namely $$ (y-(\tfrac{1}{3}(x_1+\omega x_2+ \omega^2 x_3))^3)(y-(\tfrac{1}{3}(x_2+\omega x_1+ \omega^2 x_3))^3) = 0 , $$ which we can show is $$ y^2 + \tfrac{1}{27}(2b^3-9bc+27d)y + \tfrac{1}{729}(b^2-3c)^3 = 0 \tag{2} $$ using the rest of Vieta's formulae for a cubic. This is the same quadratic that keeps appearing! Finally, Lagrange proves an even stronger result: that if $V$ is an expression in the roots that does not change its value under a certain subset of the permutations of the roots, and $U$ is an expression that takes $m$ values under this subset, then $U$ is a root of an equation of degree $m$ with coefficients rational functions of $V$ and the coefficients of the equation.
This is rather more than we need here, because if we take $V = (x_1+ \omega x_2 + \omega^2 x_3 )^3$ , then we know that $ U = (x_1+ \omega x_2 + \omega^2 x_3)/3 $ has three different values under the permutations that fix $V$ , namely $\sqrt[3]{V}, \omega \sqrt[3]{V} $ and $\omega^2 \sqrt[3]{V}$ , obviously the solutions to $$ u^3 = V . $$ But now we know that only one permutation fixes $U$ , namely the identity, and Lagrange's result implies that $x_1$ is the root of an equation of degree $1$ with coefficients rational functions of $U$ and the original equation's coefficients; i.e. a rational function of $U,b,c,d$ . In particular, $$ x_1 = -\frac{b}{3} + U + \frac{b^2-3c}{9U} , $$ the other roots coming from other values of $U$ . In a certain sense this process, of finding expressions that take only a few values under certain sets of permutations, is the only way to solve equations algebraically: one can do a similar analysis of Tschirnhaus's method, although it's a lot messier. But in the case of the cubic, there is essentially only one way to do this, the way we have just given. This means that any algebraic (as opposed to transcendental, using trigonometric or hyperbolic functions) calculation of the roots will go via the quadratic $(2)$ at some point.
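For anyone who wants to see the resolvent route compute, here is a numerical sketch (my own code, built on the answer's formulas; ordinary complex arithmetic suffices): solve the quadratic $(2)$, take the three cube roots of one of its roots, and feed them into $x=-\tfrac b3+U+\tfrac{b^2-3c}{9U}$.

```python
import cmath

def cubic_roots(b, c, d, tol=1e-9):
    p = lambda x: x**3 + b*x**2 + c*x + d
    B = (2*b**3 - 9*b*c + 27*d) / 27          # resolvent quadratic y^2 + B y + C = 0
    C = (b**2 - 3*c)**3 / 729
    y = (-B + cmath.sqrt(B*B - 4*C)) / 2      # one root of the resolvent, i.e. a_1^3
    roots = set()
    for k in range(3):                        # the three cube roots of y
        U = abs(y) ** (1/3) * cmath.exp(1j * (cmath.phase(y) + 2*cmath.pi*k) / 3)
        if abs(U) < tol:                      # degenerate case (e.g. a triple root); skip
            continue
        x = -b/3 + U + (b**2 - 3*c) / (9*U)
        if abs(p(x)) < 1e-6:
            roots.add(complex(round(x.real, 8), round(x.imag, 8)))
    return roots

print(cubic_roots(0, -3, 2))     # x^3 - 3x + 2 = (x - 1)^2 (x + 2)
print(cubic_roots(-6, 11, -6))   # (x - 1)(x - 2)(x - 3)
```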
{ "source": [ "https://math.stackexchange.com/questions/3341394", "https://math.stackexchange.com", "https://math.stackexchange.com/users/510100/" ] }
3,344,266
Are there mathematical concepts that exist in the fourth dimension, but not in the third dimension? Of course, mathematical concepts include geometrical concepts, but I don't mean geometrical concepts exclusively. I am not a mathematician and I am more of a layman, so it would be appreciated if you could say what the concepts are in your answer in a way that a layman can understand.
The one that sticks out for me the most is that there are five regular polytopes (called Platonic solids) in $3$ dimensions, and they all have analogues in $4$ dimensions, but there is another regular polytope in $4$ dimensions: the 24-cell. The kicker is that in dimensions higher than $4$ ... there are only three regular polytopes! Another thing that can happen in $4$ -dimensional space but not $3$ is that you can have two planes which only intersect at the origin (and nowhere else). In $3$ dimensions you'd get at least a line in the intersection. I don't know if this also counts, but linear transformations in $3$ dimensions always scale one direction (that is, they have a real eigenvector). This means that in all cases, a line in one direction must either stay put or be reversed to lie upon itself. In $4$ dimensions, it's possible to have transformations (even nonsingular ones) that don't have any real eigenvectors, so all lines get shifted. Also not sure if this counts, but there are no $3$ -dimensional associative algebras over $\mathbb R$ which allow division (they're called division algebras), but there is a unique $4$ -dimensional one. (Look up the Frobenius theorem.)
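The eigenvector remark is easy to check numerically (an illustration of my own): a block rotation of $\mathbb R^4$ has no real eigenvalues, while any real $3\times 3$ matrix must have at least one.

```python
import numpy as np

def rot2(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R4 = np.block([[rot2(0.7), np.zeros((2, 2))],
               [np.zeros((2, 2)), rot2(1.3)]])
print(np.linalg.eigvals(R4))   # e^{+-0.7i}, e^{+-1.3i}: no real eigenvalue, no invariant line

R3 = np.eye(3)
R3[:2, :2] = rot2(0.7)         # a rotation of 3-space about the z-axis
print(np.linalg.eigvals(R3))   # includes the real eigenvalue 1 (the axis direction)
```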
{ "source": [ "https://math.stackexchange.com/questions/3344266", "https://math.stackexchange.com", "https://math.stackexchange.com/users/700984/" ] }
3,349,594
Some context: I'm an engineer, and I tend to have a rather unusual way of understanding and thinking about things, most likely related to my being autistic. I found this question on the HNQ and upon reading it I felt rather confused. I have always understood that the indefinite and definite integrals are two sides of the same underlying concept, that there is no meaningful difference between them other than that one evaluates to a function and the other to a quantity. Essentially, the definite integral is what you get from running the numbers on the result of an indefinite integral, and saying it's unrelated to the definite integral is like saying that $\sin(x)$ is fundamentally different from $\sin(a)$ , assuming $x$ is a variable and $a$ a constant, for no reason other than that $x$ is a variable and $a$ a constant. To me, whether the definite and indefinite integrals are two sides of the same thing, or two closely related but different things, seems more like a question of philosophy than mathematics. Where is the flaw in my understanding? Is there one?
The point is, there are three slightly different concepts here: 1) If $f$ and $F$ are functions with the same domain, and $F'(x)=f(x)$ on that domain, then we say that $F$ is an antiderivative of $f$ . 2) If $f$ is a function on the interval $a \le x \le b$ , then the definite integral of $f$ , $\int_a^b f(x) \, dx$ , is (loosely speaking) the area under the graph of $y=f(x)$ for $a \le x \le b$ , or (more precisely speaking) a limit of Riemann sums. This definition does not involve any derivatives . 3) If $f$ is a function on the interval $a < x < b$ and $c$ is a point in that interval, then the function $G(x)=\int_c^x f(t) \, dt$ could be called an indefinite integral of $f$ . This definition of the function $G$ comes directly from the definition in 2) of the definite integral, so it also does not involve any derivatives . You are saying that you don't see any difference between 2) and 3). You are correct! If your mental definition of "indefinite integral" is 3), then an indefinite integral is just a definite integral with some unknown limits. The question you're linking is saying that many calculus books use 1) as the definition of an "indefinite integral": that is, they say that if $F$ is an antiderivative of $f$ , then $\int f(x) \, dx = F(x)+C$ is the indefinite integral of $f$ . They can get away with this because of the fundamental theorem of calculus, which says (in part) that $$ \frac{d}{dx}\int_c^x f(t) \, dt=f(x) \, . $$ That is, the function $G(x)$ defined in 3) is an antiderivative of the function $f$ , so these two notions of "indefinite integral" are closely related. But closely related does not mean identical! K B Dave's answer is giving you some examples of situations where the two definitions 1) and 3) diverge slightly. Specifically: We gave the definition 3) on an interval $a < x < b$ , but it doesn't work very well on a more complicated domain. This means that definition 1) is more robust when $f$ has singularities. The fundamental theorem of calculus says that every "indefinite integral" in the sense of 3) is an antiderivative. It does not say that every antiderivative is an indefinite integral, which in fact is false; you can have functions $f$ and $F$ where $F'(x)=f(x)$ , but there is no way to write $F(x)=\int_c^x f(t) \, dt$ .
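A concrete instance of the final point (my own example, using sympy): for $f(t)=e^t$, every "indefinite integral in the sense of 3)" is $e^x-e^c$, so the antiderivative $F(x)=e^x+1$ can never be written that way, since that would require $e^c=-1$.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
f = sp.exp(t)

G = sp.integrate(f, (t, c, x))          # exp(x) - exp(c): an "indefinite integral" in sense 3)
print(G)
print(sp.diff(G, x))                    # exp(x): the fundamental theorem in action

# F(x) = exp(x) + 1 is an antiderivative, but exp(c) = -1 has no real solution:
print(sp.solveset(sp.Eq(sp.exp(c), -1), c, domain=sp.S.Reals))   # EmptySet
```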
{ "source": [ "https://math.stackexchange.com/questions/3349594", "https://math.stackexchange.com", "https://math.stackexchange.com/users/560186/" ] }
3,349,597
I am reading Holger Wendland's Numerical Linear Algebra and am looking through the exercises of Chapter 2. Here is my question: Consider the matrix $A=uv^T$ for $u \in \mathbb{R^m}$ and $v \in \mathbb{R^n}$ . Show that $\|A\|_2 = \|u\|_2\|v\|_2$ . I am wondering how to prove this for the case where $m = n = 2$ and for the case where $m = n$ . So far I have been trying to use the definition of the Euclidean norm, that $\|u\|_2 = \sqrt{\sum_{i=1}^n |u_i|^2}$ (same for $v$ ). Is it true that $\|uv^T\|_2 = \|u\|_2\|v\|_2$ ? I've taken multiple real analysis courses but my memory fails me in this regard. The last one I took was two years ago. Thank you!
{ "source": [ "https://math.stackexchange.com/questions/3349597", "https://math.stackexchange.com", "https://math.stackexchange.com/users/689795/" ] }
3,351,115
It strikes me as very bizarre that, when solving limits of sequences ( $n \in \mathbb{N}$ ), even though I follow all the (elementary) rules for solving limits (of sequences), I can get different results for the same example! Consider the following: $$ \lim_{n \to \infty} \left( \frac{3n + 1}{2n - 1} \right) = \frac{\lim_{n \to \infty}(3n + 1)}{\lim_{n \to \infty} (2n - 1)} = \frac{\lim_{n \to \infty}(3n) + \lim_{n \to \infty}(1)}{\lim_{n \to \infty} (2n) - \lim_{n \to \infty}(1)} = \frac{\infty + 1}{\infty - 1} \Longrightarrow \text{This sequence does not have a limit.} $$ I followed all of the rules for solving limits, yet this conclusion is wrong. The correct approach is the following. $$ \lim_{n \to \infty} \left( \frac{3n + 1}{2n - 1} \right) = \lim_{n \to \infty} \left( \frac{n(3 + \frac{1}{n})}{n(2 - \frac{1}{n})} \right) = \lim_{n \to \infty} \left( \frac{3 + \frac{1}{n}}{2 - \frac{1}{n}} \right) =\frac{\lim_{n \to \infty}(3 + \frac{1}{n})}{\lim_{n \to \infty} (2 - \frac{1}{n})} = \frac{\lim_{n \to \infty}(3) + \lim_{n \to \infty}(\frac{1}{n})}{\lim_{n \to \infty} (2) - \lim_{n \to \infty}(\frac{1}{n})} = \frac{3 + 0}{2 - 0} = \frac{3}{2}. $$ My question is: why do I get the correct answer only if I arrange the sequence a certain way?
You did not in fact follow all the rules (and this is a good example of why understanding why the rules are true is important). The relevant rule is: If both $\lim_{n\rightarrow a}f(n)$ and $\lim_{n\rightarrow a}g(n)$ exist and are finite (and $\lim_{n\rightarrow a}g(n)\not=0$ ), then $$\lim_{n\rightarrow a}{f(n)\over g(n)}={\lim_{n\rightarrow a}f(n)\over \lim_{n\rightarrow a}g(n)}.$$ There are various other rules of similar flavor. But the point is that the "rule" you've tried to apply is not one of them. (And it's a good exercise at this point to go through your textbook and look at what the rules do in fact say, and note that none of them actually get you what you want.) Indeed, the example you give is a good example of the limitations of these rules: while we can often manipulate limits "algebraically," we can't do this in all cases, and some hypotheses are needed. It's also a good example of why proofs are important, since there are plenty of plausible-sounding statements which are in fact false.
{ "source": [ "https://math.stackexchange.com/questions/3351115", "https://math.stackexchange.com", "https://math.stackexchange.com/users/454828/" ] }
3,353,800
What is the mathematical notation for rounding a given number to the nearest integer? So like a mix between the floor and the ceiling function.
I have seen $\lfloor x \rceil$ . It must have been in the context of math olympiads, so I can't point to a book that uses it. Wikipedia suggest this notation, among others: nearest integer function . Personally, I would prefer $[x]$ , being a cleaner mix of $\lfloor x \rfloor$ and $\lceil x \rceil$ . But I've seen this notation being used for the floor function. Especially in older texts, say, pre-TeX era. You could also do something like $\mathrm{nint}(x)$ , but in formulas that could be cumbersome. See also the remarks at Mathworld .
{ "source": [ "https://math.stackexchange.com/questions/3353800", "https://math.stackexchange.com", "https://math.stackexchange.com/users/701528/" ] }
3,353,826
All the vertices of quadrilateral $ABCD$ are at the circumference of a circle and its diagonals intersect at point $O$ . If $∠CAB = 40°$ and $∠DBC = 70°$ , $AB = BC$ , then find $∠DCO$ .
{ "source": [ "https://math.stackexchange.com/questions/3353826", "https://math.stackexchange.com", "https://math.stackexchange.com/users/704276/" ] }
3,354,370
A common "trick" for obtaining a closed form of a geometric series is to define $$ R := \sum_{k=0}^{\infty} r^k, $$ then manipulate the series as follows: \begin{align} R - rR &= \sum_{k=0}^{\infty} r^{k} - \sum_{k=0}^{\infty} r^{k+1} \\ &= (1 + r + r^2 + r^3 + \dotsb) - (r + r^2 + r^3 + \dotsb) \\ &= 1 + (r + r^2 + r^3 + \dotsb) - (r + r^2 + r^3 + \dotsb) \\ &= 1. \end{align} On the other hand, $R-rR = (1-r)R$ . Hence $$ (1-r)R = 1 \implies R = \frac{1}{1-r}. $$ In this example, the formula is obtained by a sequence of relatively elementary algebraic manipulations. By a similar kind of manipulation, suppose that $$ S := 1 + 1 + 1 + 1 + \dotsb = \sum_{k=0}^{\infty} 1. $$ $S$ is unaffected by addition of $1$ , and so $S = 1+S$ . Canceling $S$ from both sides gives $0 = 1$ , which is clearly nonsense. Question: What went wrong with the second computation? Why do these arguments work well for summing the geometric series, but not for the series of ones?
To understand things like this, you have to pay careful attention to the underlying definitions. The definition of an infinite sum, like $$1 + 1 + 1 + 1 + \cdots$$ is the limit $$\lim_{n \rightarrow \infty} \underbrace{1 + 1 + \cdots + 1}_{n}$$ i.e. the sum of $n$ ones, as $n$ is allowed to approach infinity. However, this limit does not exist in the real number system, because the right-hand term grows indefinitely large. Yet, by substitution, this limit is the value you have decided to represent by the symbol $S$ . Your problem, then, is that such a value does not exist. The sum of the infinite series doesn't exist. Hence $S$ has no referent, and the associated computations are meaningless. That said, an alternative, and perhaps stronger, perspective would be to say that if an object like $S$ existed, and it permitted the manipulations you did, it would break things, because its existence would thus embody contradictions. Of course you may be wondering, then, "but what about $\infty$ ? Isn't $$\lim_{n \rightarrow \infty} \underbrace{1 + 1 + \cdots + 1}_n = \infty$$ ?" The answer is: no, not in the real number system. In the real number system, the limit does not exist. The above equation is often shown, but its meaning is not really made clear. What it "really" means is an equation in the extended real number system, where an additional element called $\infty$ has been added, and that results in the prior limit being valid. In that case, then yes, $S = \infty$ . Yet, given the last paragraph of what I just said above, something has to break for this not to be contradictory. What breaks is that $\infty$ is an extended real number, but not a real number. And once we allow $S$ to take extended-real values, the very rules of algebra change, as you are working in a different number system - it is like going into the complex numbers by adding $i$ . Namely, in the extended real numbers you are not allowed to start with $$S = 1 + S$$ then "subtract from both sides" $$S - S = (1 + S) - S$$ and then "cancel". The subtraction is okay, but not the cancellation. You now cannot infer that the left-hand side is zero. In fact, $\infty - \infty$ is, itself, undefined, in this extended real number system. If you go this route, what you learned in grade school quits working.
{ "source": [ "https://math.stackexchange.com/questions/3354370", "https://math.stackexchange.com", "https://math.stackexchange.com/users/468350/" ] }
3,354,374
Let $X$ be a continuous random variable with uniform distribution on $[0,1]$ , i.e., the probability density function of $X$ is $f(x)=1$ on $[0,1]$ . Let $Y=X$ . Then what is the joint probability density function $h(x,y)$ of $X$ and $Y$ ? It seems that $h$ is supported on the line $y=x$ , which has measure $0$ . I guess the joint pdf does not exist or is equal to a Dirac delta function. Can anyone confirm?
{ "source": [ "https://math.stackexchange.com/questions/3354374", "https://math.stackexchange.com", "https://math.stackexchange.com/users/16418/" ] }
3,354,566
I see integrals defined as anti-derivatives but for some reason I haven't come across the reverse. Both seem equally implied by the fundamental theorem of calculus. This emerged as a sticking point in this question .
Let $f(x)=0$ for all real $x$ . Here is one anti-integral for $f$ : $$ g(x) = \begin{cases} x &\text{when }x\in\mathbb Z \\ 0 & \text{otherwise} \end{cases} $$ in the sense that $\int_a^b g(x)\,dx = f(b)-f(a)$ for all $a,b$ . How do you explain that the slope of $f$ at $x=5$ is not $g(5)=5$ ? The idea works better if we restrict all the functions we ever look at to "sufficiently nice" ones -- for example, we could insist that everything is real analytic. Merely looking for a continuous anti-integral wouldn't suffice to recover the usual concept of derivative, because then something like $$ x \mapsto \begin{cases} 0 & \text{when }x=0 \\ x^2\sin(1/x) & \text{otherwise} \end{cases} $$ wouldn't have a derivative on $\mathbb R$ (which it does by the usual definition).
{ "source": [ "https://math.stackexchange.com/questions/3354566", "https://math.stackexchange.com", "https://math.stackexchange.com/users/596895/" ] }
3,355,435
I was doing some software engineering and wanted to have a thread do something in the background to basically just waste CPU time for a certain test. While I could have done something really boring like for(i < 10000000) { j = 2 * i } , I ended up having the program start with $1$ , and then for a million steps choose a random real number $r$ in the interval $[0,R]$ (uniformly distributed) and multiply the result by $r$ at each step. When $R = 2$ , it converged to $0$ . When $R = 3$ , it exploded to infinity. So of course, the question anyone with a modicum of curiosity would ask: for what $R$ do we have the transition? And then I tried the first number between $2$ and $3$ that we would all think of, Euler's number $e$ , and sure enough, this conjecture was right. Now, when I should be working, I'm instead wondering about the behavior of this script. Ironically, rather than wasting my CPU's time, I'm wasting my own time. But it's a beautiful phenomenon. I don't regret it. $\ddot\smile$
EDIT: I saw that you solved it yourself. Congrats! I'm posting this anyway because I was most of the way through typing it when your answer hit. Infinite products are hard, in general; infinite sums are better, because we have lots of tools at our disposal for handling them. Fortunately, we can always turn a product into a sum via a logarithm. Let $X_i \sim \operatorname{Uniform}(0, r)$ , and let $Y_n = \prod_{i=1}^{n} X_i$ . Note that $\log(Y_n) = \sum_{i=1}^n \log(X_i)$ . The eventual emergence of $e$ as important is already somewhat clear, even though we haven't really done anything yet. The more useful formulation here is that $\frac{\log(Y_n)}{n} = \frac 1 n \sum \log(X_i)$ , because we know from the Strong Law of Large Numbers that the right side converges almost surely to $\mathbb E[\log(X_i)]$ . We have $$\mathbb E \log(X_i) = \int_0^r \log(x) \cdot \frac 1 r \, \textrm d x = \frac 1 r [x \log(x) - x] \bigg|_0^r = \log(r) - 1.$$ If $r < e$ , then $\log(Y_n) / n \to c < 0$ , which implies that $\log(Y_n) \to -\infty$ , hence $Y_n \to 0$ . Similarly, if $r > e$ , then $\log(Y_n) / n \to c > 0$ , whence $Y_n \to \infty$ . The fun case is: what happens when $r = e$ ?
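A quick Monte Carlo check of the threshold (my own sketch, not part of the answer): the sample mean of $\log(X_i)$ estimates $\log(r)-1$, which is negative below $r=e$ and positive above it.

```python
import math, random

random.seed(0)

def avg_log(r, n=200_000):
    return sum(math.log(random.uniform(0, r)) for _ in range(n)) / n

for r in (2.0, 2.5, math.e, 3.0):
    print(round(r, 4), round(avg_log(r), 4), round(math.log(r) - 1, 4))
# a negative mean drives the product to 0, a positive mean to infinity
```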
{ "source": [ "https://math.stackexchange.com/questions/3355435", "https://math.stackexchange.com", "https://math.stackexchange.com/users/278017/" ] }
3,355,751
From Halmos's Naive Set Theory , section 1: Observe, along the same lines, that inclusion is transitive, whereas belonging is not. Everyday examples, involving, for instance, super-organizations whose members are organizations, will readily occur to the interested reader. Belonging seems transitive. Can someone explain?
The difference between $\subset$ and $\in$ is that the former applies to expressions at the same level of nesting and the latter applies to expressions at one level of nesting apart from each other. So when you chain two $\in$ 's together you get something at two levels of nesting, which is not in general comparable to a single $\in$ . On the other hand, since $\subset$ doesn't change the level of nesting it doesn't have this problem. This is the idea behind the example given in other answers of $$ \varnothing\in \{\varnothing\}\in \{\{\varnothing\}\},\qquad \varnothing \not\in \{\{\varnothing\}\}. $$
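The $\varnothing\in \{\varnothing\}\in \{\{\varnothing\}\}$ example above can even be checked mechanically with Python's frozenset , using in for membership and <= for inclusion. This is only an informal illustration (Python sets are not ZFC sets), assuming a standard Python 3 interpreter.

```python
empty = frozenset()            # plays the role of the empty set
a = frozenset({empty})         # {∅}
b = frozenset({a})             # {{∅}}

print(empty in a)   # True:  ∅ ∈ {∅}
print(a in b)       # True:  {∅} ∈ {{∅}}
print(empty in b)   # False: ∅ ∉ {{∅}}, so membership is not transitive

print(a <= a)       # True:  inclusion compares elements at the same level
print(empty <= a)   # True:  ∅ ⊆ {∅} (vacuously)
```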
{ "source": [ "https://math.stackexchange.com/questions/3355751", "https://math.stackexchange.com", "https://math.stackexchange.com/users/491791/" ] }
3,358,137
Take the equation: $$x^2 + 2x = \frac{1}{1+x^2}$$ I subtracted the right term over to form $~f_1(x)~$ : $$x^2 + 2x - \frac{1}{1+x^2} = 0$$ I wanted to take the derivative, so I rearranged things to make it a bit easier, call it $~f_2(x)~$ : $$x^4 + 2x^3 + x^2 + 2x - 1 = 0$$ I noticed when I graphed $~f_1(x)~$ and $~f_2(x)~$ that their plots were different $($ although they shared the same solution for $~x)~$ . Newton's method iterates down the graph line, so I'd imagine that Newton's method applied to these two equations is not equivalent. They'd find the same solution, but they would get there in different ways. In that case, is there a way to decide which equation to use for Newton's method to obtain the best/quickest result?
For your curiosity. Asking this far-from-innocent question, you are almost asking me what I have been doing during the last sixty years. My answer is : what is the transform of $f(x)$ which makes it the most linear around the solution ? If you find it, you will save a lot of iterations. One case I enjoy for demonstration is $$f(x)=\sum_{i=1}^n a_i^x- b^x$$ where $1<a_1<a_2< \cdots < a_n$ and $b > a_n$ ; on this site, you could find many problems of this kind. As written, this function is very bad since it varies very fast and it is very nonlinear. Moreover, it goes through a maximum (not easy to identify) and you must start on the right of it to converge. Now, consider the transform $$g(x)=\log\left(\sum_{i=1}^n a_i^x \right)- x\log(b)$$ It is almost a straight line ! This means that you can start from almost anywhere and converge fast. For illustration purposes, trying with $n=6$ , $a_i=p_i$ and $b=p_{n+1}$ ( $p_i$ being the $i^{th}$ prime number). Being very lazy and using $x_0=0$ , the iterates would be $$\left( \begin{array}{cc} n & x_n \\ 0 & 0 \\ 1 & 1.607120621 \\ 2 & 2.430003204 \\ 3 & 2.545372693 \\ 4 & 2.546847896 \\ 5 & 2.546848123 \end{array} \right)$$ Using $f(x)$ , you must start iterating with $x_0 > 2.14$ to have convergence (big work to know it !). Let us try with $x_0=2.2$ to get as successive iterates $$\left( \begin{array}{cc} n & x_n \\ 0 & 2.200000000 \\ 1 & 4.561765400 \\ 2 & 4.241750505 \\ 3 & 3.929819520 \\ 4 & 3.629031502 \\ 5 & 3.344096332 \\ 6 & 3.082467015 \\ 7 & 2.856023930 \\ 8 & 2.682559774 \\ 9 & 2.581375720 \\ 10 & 2.549595979 \\ 11 & 2.546866878 \\ 12 & 2.546848124 \\ 13 & 2.546848123 \end{array} \right)$$ The last point I would like to mention : even if you have a good guess $x_0$ of the solution, search for a transform $g(x)$ such that $g(x_0)\,g''(x_0) >0$ (this is Darboux theorem) in order to avoid any overshoot of the solution during the path to it.
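The effect of the transform can be reproduced in a few lines of Python. The sketch below (my own, not from the original answer) runs Newton's method on both formulations of the example above, the primes 2 through 13 against 17, with the derivatives written out by hand; it is meant only to mirror the iteration tables, not to be a robust solver, and it assumes a standard Python 3 environment.

```python
import math

primes = [2, 3, 5, 7, 11, 13]
b = 17

def f(x):            # original formulation: sum a_i^x - b^x
    return sum(p**x for p in primes) - b**x

def fp(x):           # its derivative
    return sum(p**x * math.log(p) for p in primes) - b**x * math.log(b)

def g(x):            # transformed: log(sum a_i^x) - x log b
    return math.log(sum(p**x for p in primes)) - x * math.log(b)

def gp(x):           # derivative of the transform
    s = sum(p**x for p in primes)
    return sum(p**x * math.log(p) for p in primes) / s - math.log(b)

def newton(func, dfunc, x0, steps):
    x = x0
    for n in range(steps):
        x = x - func(x) / dfunc(x)
        print(n + 1, x)
    return x

print("transformed g, starting at 0:")
newton(g, gp, 0.0, steps=5)      # reaches roughly 2.546848 in about 5 steps

print("original f, starting at 2.2:")
newton(f, fp, 2.2, steps=13)     # needs about 13 steps and a careful start
```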
{ "source": [ "https://math.stackexchange.com/questions/3358137", "https://math.stackexchange.com", "https://math.stackexchange.com/users/676898/" ] }
3,361,222
I was wondering about the following: Suppose a new guest arrives and wishes to be accommodated in the hotel. We can (simultaneously) move the guest currently in room 1 to room 2, the guest currently in room 2 to room 3, and so on, moving every guest from his current room n to room n+1. After this, room 1 is empty and the new guest can be moved into that room. By repeating this procedure, it is possible to make room for any finite number of new guests. However, if we have an infinite amount of guests, why can't we just say that each guest follows this procedure? Everyone would have to relocate infinitely many times, but what's the problem?
Hilbert's hotel (HH) is only a metaphor, and when pushed too far it can lead to confusions. I think this is one of those situations: the key point is "we can't obviously compose infinitely many functions," which is pretty clear, but it's obscured by the additional language. The point of HH is to illustrate how an infinite set (the set of rooms) can have lots of maps from itself to itself ("person in room $n$ goes to room $f(n)$ ") which are injective ("no two different rooms send their occupants to the same room") but not surjective ("some rooms wind up empty"). Note that already we can see an added complexity in the metaphor: the statement There is a set $X$ and a map $f:X\rightarrow X$ which is an injection but not a surjection has only one type of "individual," namely the elements of $X$ , but HH has two types of "individual," namely the rooms and the people . Now let's look at the next level of HH: getting an injection which is far from a surjection. Throwing aside the metaphor at this point, all that's happening is composition . Suppose $f:X\rightarrow X$ is an injection but not a surjection. Pick $x\in X\setminus ran(f)$ . Then it's a good exercise to check that $x\not\in ran(f\circ f)$ , $f(x)\not\in ran(f\circ f)$ , and $x\not=f(x)$ . What does this mean? Well, when we composed $f$ with itself we got a new "missed element," so that while $ran(f)$ need only miss one element of $X$ we know that $ran(f\circ f)$ is missing two elements of $X$ . Similarly, by composing $n$ times we get a self-injection of $X$ whose range misses at least $n$ elements of $X$ . At this point it should be clear why we can't proceed this way to miss an infinite set: how do we define "infinite-fold" compositions? This is what the question "where should the guest in room $1$ go?" is ultimately getting at. It's worth pointing out that there are situations where infinite composition makes sense. Certainly if $f:X\rightarrow X$ is such that for each $x\in X$ the sequence $$x,f(x),f(f(x)), f(f(f(x))),...$$ is eventually constant with eventual value $l_x$ , then it makes some amount of sense to define the "infinite composition" as $$f^\infty:X\rightarrow X: x\mapsto l_x.$$ And if $X$ has some additional structure we might be able to be even more broad: for example, when $X=\mathbb{R}$ we can use the metric structure (really, the topology ) and make sense of $f^\infty$ under the weaker assumption that the sequence $$x,f(x),f(f(x)), f(f(f(x))), ...$$ converges (in the usual calculus-y sense) for each $x\in \mathbb{R}$ . For example, the function $f(x)={x\over 2}$ would yield $f^\infty(x)=0$ under this interpretation (even though only one of the "iterating $f$ " sequences is eventually constant - namely, the $x=0$ one) . But this is not something we can do in all circumstances , and you should regard the idea of infinite composition with serious suspicion at best. (Although again, there are situations where it's a perfectly nice and useful idea!)
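As a small concrete illustration of the last paragraph, here is the $f(x)={x\over 2}$ example in a few lines of plain Python (my own sketch): iterating from any starting point drives the value toward $0$ , which is what makes the "infinite composition" $f^\infty$ meaningful in that special case.

```python
def f(x):
    return x / 2

def f_infinity(x, steps=200):
    """Iterate f many times; for f(x) = x/2 the iterates converge to 0."""
    for _ in range(steps):
        x = f(x)
    return x

for start in (-3.0, 0.0, 1.0, 100.0):
    print(start, f_infinity(start))   # all essentially 0, matching f^infinity = 0
```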
{ "source": [ "https://math.stackexchange.com/questions/3361222", "https://math.stackexchange.com", "https://math.stackexchange.com/users/694879/" ] }
3,362,751
The problem statement: Two trains move towards each other at a speed of $34\ km/h$ in the same rectilinear road. A certain bird can fly at a speed of $58\ km/h$ and starts flying from the front of one of the trains to the other, when they're $102\ km$ apart. When the bird reaches the front of the other train, it starts flying back to the first train, and so on. How many of these trips can the bird make before the two trains meet? What is the total distance the bird travels? Commentary: The second question of the problem seems relatively simple, since one only has to notice that the trains will take 1.5 hours to meet, therefore, the bird travels $58\cdot1.5=87 km$ . However, the first question baffles me. How can one calculate how many trips the bird makes? If I'm correct, in order to obtain the time the bird will take to make its first trip, we have to add the bird's speed and the speed at which the distance of the trains is being reduced ( $68\ km/h$ ). This means the bird will take $\frac{102}{126}\approx0.809$ hours to finish the first trip, and the trains will be $\frac{986}{21}\approx 46.95\ km$ apart. If I continue this way (now finding how long will the bird take to travel those 46.95 km), it seems that I'll never stop or that at least it will take a huge amount of trips that cannot be computed by hand. Is there a way to find a 'quick' answer to this problem? Am I making it more complicated than it actually is? Thanks in advance!
The bird will make infinitely many trips, that get smaller and smaller in distance. In fact, because of this, this question is often asked as a kind of 'trick' question. That is, like you did in the second part of your post, people trying to answer the second question will often try and calculate how much time the first trip takes, how far the bird flew during that first trip, and how far the trains are still apart at that point. Then, they'll try and compute the same for the second trip, third, etc .... but of course you never get done with this ... and the numbers are intentionally chosen to be 'ugly' as well (as they are in this case). So, many people will throw up their hands when asked the total distance made by the bird, because they try and calculate the sum of all these distances, and the calculation just gets too nasty for them. Now, of course you could use an infinite series to do this ... or you do what you did! First calculate how much time it takes for the trains to reach each other, and that tells you how much time the bird is flying back and forth, and that'll immediately tell you the answer to the total distance question. So, good for you for not being tripped up by this ... but maybe that's exactly because you didn't realize that the bird would take infinitely many trips? :)
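If you do want to see the infinitely many trips concretely, a short simulation makes the point: the legs shrink geometrically, their lengths sum to the 87 km obtained from the time argument, and no finite loop ever "finishes". The Python sketch below is my own construction, not part of the original problem; it simply cuts the process off once the trains are closer than a chosen tolerance.

```python
def bird_trips(gap=102.0, v_train=34.0, v_bird=58.0, tol=1e-6):
    """Simulate the bird's back-and-forth legs until the gap < tol (km).

    Each leg ends when the bird meets the oncoming train, so the closing
    speed for a leg is v_bird + v_train, while the gap between the trains
    shrinks at 2 * v_train during the same time.
    """
    trips, total = 0, 0.0
    while gap > tol:
        t = gap / (v_bird + v_train)   # duration of this leg
        total += v_bird * t            # distance flown on this leg
        gap -= 2 * v_train * t         # trains keep approaching meanwhile
        trips += 1
    return trips, total

trips, total = bird_trips()
print(trips, "legs simulated before the cutoff")
print(f"total distance = {total:.6f} km (exact answer: 58 * 1.5 = 87 km)")
```

Lowering tol just adds more, ever shorter, legs; the count grows without bound, which is the sense in which the bird makes infinitely many trips.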
{ "source": [ "https://math.stackexchange.com/questions/3362751", "https://math.stackexchange.com", "https://math.stackexchange.com/users/485701/" ] }
3,362,902
Consider the numbers of the form $$ a_n=\lfloor\pi\times 10^{n-1} \rfloor, $$ for $n\in\mathbb{N}$ . In other words, $\{a_n\}_{n\in\mathbb{N}}$ is the sequence composed of integers built from the first $n$ digits of $\pi$ , so that $$ a_1=3,\,\,\,a_2=31,\,\,\,a_3=314,\,\,\,a_4=3141,\,\,\,a_5=31415,\,\,\,... $$ My question is: for which values of $n$ is $a_n$ a prime number? I wrote a small code to find primes for $1\leq n\leq 10^4$ , and I've only found primes for $n\in\{1,2,6,38\}$ , which are, respectively, \begin{align} a_1&=3\\ a_2&=31\\ a_6&=314159\\ a_{38}&=31415926535897932384626433832795028841 \end{align} I found this result rather intriguing, since I'd expect to find more primes up to numbers with $10^4$ digits. Are there any more primes in this sequence? Is there an explanation for the lack of primes between $a_{38}$ and $a_{10^4}$ ?
This is not so few. If we take the probability that a number $n$ is prime to be $1/\log(n)$ , we would expect $\sum \frac 1{n \log (10)}\approx \frac 1{2.3}(\log(n)+\gamma)$ of them out to $n$ . If we do the sum from $2$ to $10^4$ it is $3.8$ while we have $3$ and if we do it out to $16208$ we should have $4$ and we do. The sum diverges, so we should have infinitely many, but it diverges very slowly.
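Both halves of this heuristic are easy to check by computer. The sketch below is my own; it assumes the third-party libraries mpmath (for the digits of $\pi$) and sympy (for primality testing) are available.

```python
from mpmath import mp
from sympy import isprime
import math

mp.dps = 60                                   # work with 60 digits of pi
pi_digits = str(mp.pi).replace(".", "")       # "314159265358979..."

# check the truncations a_n that the question found to be prime
for n in (1, 2, 6, 38):
    a_n = int(pi_digits[:n])
    print(n, a_n, isprime(a_n))

# heuristic count: sum of 1/(n * log 10) over the index range
def expected_primes(N):
    return sum(1 / (n * math.log(10)) for n in range(2, N + 1))

print(round(expected_primes(10**4), 2))    # about 3.8, versus the 3 primes found
print(round(expected_primes(16208), 2))    # about 4.0; the answer notes a 4th at n = 16208
```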
{ "source": [ "https://math.stackexchange.com/questions/3362902", "https://math.stackexchange.com", "https://math.stackexchange.com/users/487230/" ] }
3,364,463
I came across the statement below: Let $C([0,1])$ be the space of all continuous functions over the interval $[0,1]$ equipped with the supremum norm. Assume $A$ is a map from the space of all differentiable functions whose derivative is continuous into $C([0,1])$ . Also, $A$ is differentiation in the sense that it maps a function to its derivative. The map $A$ (differentiation) is discontinuous. It's written that the last sentence is well-known but I can't make any sense of it. How can I arrive at such a conclusion? Actually, I am looking for an explicit counterexample. Any help would be highly appreciated.
For a counterexample, take the sequence $$f_n(x)=\frac {\sin nx} n.$$ These functions are all continuously differentiable, and $\|f_n\|_\infty \le \tfrac1n \to 0$ , so $f_n \to 0$ in the supremum norm. But the derivatives $f_n'(x)=\cos nx$ satisfy $\|f_n'\|_\infty = 1$ for every $n$ and do not converge at all in $C([0,1])$ . So $f_n \to 0$ while $A f_n \not\to A0 = 0$ , which is exactly what it means for the differentiation map $A$ to be discontinuous.
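A quick numerical illustration of this counterexample, in case it helps: on $[0,1]$ the sup norm of $\sin(nx)/n$ shrinks like $1/n$ while the sup norm of $\cos(nx)$ stays equal to $1$ . The Python sketch below is my own and only approximates the sup norms on a grid; it assumes NumPy is available.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)   # grid on [0, 1] to approximate sup norms

for n in (1, 10, 100, 1000):
    f_n = np.sin(n * xs) / n         # the functions converging to 0
    df_n = np.cos(n * xs)            # their derivatives
    print(n, np.abs(f_n).max(), np.abs(df_n).max())
    # ||f_n||_sup shrinks like 1/n, but ||f_n'||_sup stays equal to 1,
    # so A f_n does not converge to A 0 = 0 even though f_n -> 0.
```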
{ "source": [ "https://math.stackexchange.com/questions/3364463", "https://math.stackexchange.com", "https://math.stackexchange.com/users/696612/" ] }
3,364,483
Consider the following integral equation arising in a mathematical-physical problem: $$ \int_0^r f(t) \arcsin \left( \frac{t}{r} \right) \, \mathrm{d}t + \frac{\pi}{2} \int_r^R f(t) \, \mathrm{d} t = r \, \qquad (0<r<R) \, , $$ where $f(t)$ is the unknown function, and $R$ is a positive real number. By differentiating both sides of this equation with respect to the variable $r$ , one obtains $$ -\frac{1}{r} \int_0^r \frac{f(t)t \, \mathrm{d}t}{\sqrt{r^2-t^2}} = 1 \, , $$ the solution of which can readily be obtained as $$ f(t) = -1 \, . $$ Nevertheless, upon substitution of the latter solution into the original integral equation given above, one instead gets an additional $-\pi R/2$ term on the left hand side. I was wondering whether some mathematical details or assumptions were overlooked here during this resolution. Any help would be highly appreciated. An alternative resolution approach that leads to the desired solution is also most welcome. Thank you.
Why solving a differentiated integral equation might eventually lead to erroneous solutions of the original problem? The reason is that taking a derivative is not an invertible operation . So the new equation you obtain is true, but not equivalent to the original one -- the set of solutions has increased. The simplest example is trying to solve an ordinary equation, say, $$ x=1 $$ The obvious solution is $x=1$ . But if you square both sides, you obtain $x^2=1$ , which now has two solutions, $x=\pm 1$ . The new "wrong" solution appeared because taking a square is not invertible (the kernel is the negatives). Similarly, taking a derivative is not invertible. Consider the equation $$ f(t)=t $$ The obvious solution is $f(t)=t$ . But if you take a derivative, you get $f'(t)=1$ , whose general solution is $f(t)=t+c$ , for an arbitrary $c$ . The "wrong" solutions, those with $c\neq0$ , appeared because taking a derivative is not invertible (the kernel is the constants).
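To see this concretely for the equation in the question, one can substitute $f(t)=-1$ back into the original (undifferentiated) equation numerically. The sketch below is my own check, assumes SciPy is available, and uses the particular values $r=1$ , $R=2$ ; it shows the left side misses the right side by exactly $-\pi R/2$ , matching the observation in the question.

```python
import numpy as np
from scipy.integrate import quad

R = 2.0
f = lambda t: -1.0   # the candidate solution obtained from the differentiated equation

def original_lhs(r):
    # integral_0^r f(t) arcsin(t/r) dt + (pi/2) * integral_r^R f(t) dt
    first, _ = quad(lambda t: f(t) * np.arcsin(t / r), 0.0, r)
    second, _ = quad(f, r, R)
    return first + np.pi / 2 * second

r = 1.0
print(original_lhs(r) - r)                     # about -3.14159 = -pi*R/2, not 0
print(original_lhs(r) - (r - np.pi * R / 2))   # about 0: the missing -pi*R/2 term
```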
{ "source": [ "https://math.stackexchange.com/questions/3364483", "https://math.stackexchange.com", "https://math.stackexchange.com/users/140351/" ] }
3,366,064
I have a baking recipe that calls for 1/2 tsp of vanilla extract, but I only have a 1 tsp measuring spoon available, since the dishwasher is running. The measuring spoon is very nearly a perfect hemisphere. My question is, to what depth (as a percentage of hemisphere radius) must I fill my teaspoon with vanilla such that it contains precisely 1/2 tsp of vanilla? Due to the shape, I obviously have to fill it more than halfway, but how much more? (I nearly posted this in the Cooking forum, but I have a feeling the answer will involve more math knowledge than baking knowledge.)
Assuming the spoon is a hemisphere with radius $R$ , let $x$ be the height from the bottom of the spoon, and let $h$ range from $0$ to $x$ . The radius $r$ of the circle at height $h$ satisfies $r^2=R^2-(R-h)^2=2hR-h^2$ . The volume of liquid in the spoon when it is filled to height $x$ is $$\int_0^x\pi r^2 dh=\int_0^x\pi(2hR-h^2)dh=\pi Rh^2-\frac13\pi h^3\mid_0^x=\pi Rx^2-\frac13\pi x^3.$$ (As a check, when the spoon is full, $x=R$ and the volume is $\frac23\pi R^3,$ that of a hemisphere.) The spoon is half full when $\pi Rx^2-\frac13\pi x^3=\frac13\pi R^3;$ i.e., $3Rx^2-x^3=R^3;$ i.e., $a^3-3a^2+1=0$ , where $a=x/R$ . The only physically meaningful solution of this cubic equation is $a\approx 65\%.$
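Solving the cubic numerically confirms the 65% figure. The small Python check below is my own and assumes NumPy is available; it finds the root of $a^3-3a^2+1=0$ in $(0,1)$ and verifies that filling to that height gives half the hemisphere's volume.

```python
import numpy as np

# roots of a^3 - 3a^2 + 0a + 1 = 0
roots = np.roots([1.0, -3.0, 0.0, 1.0])
a = next(r.real for r in roots if 0 < r.real < 1 and abs(r.imag) < 1e-12)
print(a)                                  # about 0.6527: fill to roughly 65% of the radius

# sanity check: volume filled to height x = a*R versus half the hemisphere
R = 1.0
x = a * R
filled = np.pi * R * x**2 - np.pi * x**3 / 3
print(filled / ((2 / 3) * np.pi * R**3))  # about 0.5
```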
{ "source": [ "https://math.stackexchange.com/questions/3366064", "https://math.stackexchange.com", "https://math.stackexchange.com/users/707607/" ] }
3,369,017
I know that it must be an old question, and I have encountered it before, in MAT 1993.2 (Oxford admission test). But I dimly remember the solution. The question is Twenty-three people, each with integral weight, decide to play football, separating into two teams of 11 people each, plus a referee. To keep things fair, the teams chosen must have equal total weight. It turns out that no matter who is chosen to be the referee, this can always be done. Prove that the 23 people have the same weight.
The answers above rely mostly on the fact that the weights are integers. However, the result also holds for real weights as well, and is a common (spectacular regardless, in my opinion) application of linear algebra. Let $W$ be the column vector of weights. Consider the matrix $A$ ( $23 \times 23$ with entries in $\{0,1,-1\}$ ) such that its $i$ th row represents the weight distribution of the teams ( $1$ for a team, $-1$ for the other and $A_{i,i}=0$ is the only null entry) when person $i$ is a referee. Then $AW=0$ , so $A$ has rank at most $22$ (unless $W=0$ ). Besides, in $\mathbb{F}_2$ , $A$ reduces to $A’$ which has only $1$ outside the diagonal and $0$ in the diagonal. We can check that the cofactor $(1,1)$ of $A’$ is nonzero in $\mathbb{F}_2$ , so $A$ has a nonzero cofactor, so its rank is exactly $22$ . Now, $W$ and $(1,1,\ldots,1)$ are in the kernel of $A$ . Thus they are proportional, which is the required claim.
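The only computational step in this argument is that the $(1,1)$ cofactor of $A'$ , i.e. the determinant of the $22\times 22$ matrix with $0$ on the diagonal and $1$ elsewhere, is odd; the exact value is $-21$ , consistent with the general formula $\det(J_n-I_n)=(-1)^{n-1}(n-1)$ . A small exact check, as a sketch assuming SymPy is available:

```python
from sympy import Matrix

n = 22
M = Matrix(n, n, lambda i, j: 0 if i == j else 1)   # 0 diagonal, 1 elsewhere

d = M.det()          # exact integer determinant
print(d)             # -21
print(d % 2 != 0)    # True: the cofactor is odd, so the corresponding
                     # 22x22 minor of A is nonzero and rank(A) = 22
```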
{ "source": [ "https://math.stackexchange.com/questions/3369017", "https://math.stackexchange.com", "https://math.stackexchange.com/users/687973/" ] }
3,369,021
I need help for this question: Find the partial derivatives of this function $$ f(x,y)= \int_x^y \cos(−7t^2+4t−5)dt $$ This was what I did: $$ f(x,y)= \int_0^x \cos(−7t^2+4t−5) \text{dt} - \int_0^y \cos(−7t^2+4t−5)\text{dt} $$ So $f_x(x,y)$ would be $\cos(-7x^2+4x-5) \frac{d}{dx} \cos(-7x^2+4x-5)$ And likewise for $f_y(x,y)$ But the answer provided by my teacher is just $f_x(x,y)=\cos(-7x^2+4x-5)$ , without using the chain rule. Why is that?
By the Fundamental Theorem of Calculus, if $g$ is continuous then $\frac{d}{dx} \int_0^x g(t)\,dt = g(x)$ : you just evaluate the integrand at the upper limit. No chain-rule factor appears because the upper limit is $x$ itself, whose derivative is $1$ ; the chain rule only enters when the limit is some other function $u(x)$ , in which case you get $g(u(x))\,u'(x)$ . In particular, you never multiply by the derivative of the integrand, which is what your computation did. Note also that $\int_x^y = \int_0^y - \int_0^x$ , so your split has the two pieces swapped. Putting this together, $f_y(x,y)=\cos(-7y^2+4y-5)$ and $f_x(x,y)=-\cos(-7x^2+4x-5)$ , which is your teacher's answer up to the sign coming from $x$ being the lower limit of integration.
{ "source": [ "https://math.stackexchange.com/questions/3369021", "https://math.stackexchange.com", "https://math.stackexchange.com/users/708393/" ] }
3,374,098
Showing $\rho (x,y)=\frac{d(x,y)}{1+d(x,y)}$ is a metric : that post has a more generalized form of the metric I occasionally see, $d(x,y) = \frac{|x-y|}{1+|x-y|}$ . When would using this metric be useful exactly? It occasionally comes up when I study analysis, but I don't know why: I don't know what people use it for or what benefit it could ever bring over the standard metric. I've only ever seen it as an example of a metric in books or sites, but if so many sources mention it, then it's very unlikely that it's useless.
It's a metric that is bounded above by $1$ , while maintaining the same topology. This means that bounded metrics are just as powerful as general metrics (which is arguably interesting in itself). More concretely, there's a commonly used construction for turning a countable product of metric spaces into a metric space itself. Specifically, if we have spaces $(X_n, d_n)$ where $n \in \Bbb{N}$ and $d_n$ is bounded uniformly (e.g. $d_n \le 1$ for all $n$ ), then $\prod_n X_n$ is a metric space with the metric $$d(x, y) = \sum_{n=1}^\infty \frac{d_n(x_n, y_n)}{2^n}.$$ Boundedness is important to guarantee convergence. This function is a metric, and it proves that a countable product of metrisable spaces is metrisable . This, in turn, is used to prove a bunch of interesting metrisability theorems. Coming from a functional analysis background, one consequence I'm partial to is the metrisability of the weak-* topology on the closed unit ball of the dual of a separable normed linear space. From this, we get the handy Eberlein-Smulian theorem . Of course, this is just one field's use of this metric!
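As a quick illustration of the first sentence, the bounded metric never exceeds $1$ yet small distances stay small, so exactly the same sequences converge. A couple of lines of plain Python (my own sketch) make this visible for the real-line version from the question:

```python
def rho(x, y):
    d = abs(x - y)
    return d / (1 + d)       # the bounded metric from the question

print(rho(0, 10**9))         # very close to 1, but never exceeds it
print(rho(0, 1e-9))          # tiny exactly when |x - y| is tiny

# same convergent sequences: x_n = 1/n converges to 0 in both metrics
for n in (1, 10, 100, 1000):
    print(n, abs(1 / n - 0), rho(1 / n, 0))
```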
{ "source": [ "https://math.stackexchange.com/questions/3374098", "https://math.stackexchange.com", "https://math.stackexchange.com/users/709298/" ] }
3,374,247
A goat is tied to the corner of a shed 12 feet long and 10 feet wide. If the rope is 15 feet long, over how many square feet can the goat graze ? I know that this question has already been asked a number of times, but no matter what I do I cannot find the same answer as the one provided in the textbook. I proceeded as in this thread, so I have : $\frac{3}{4}15²\pi + \frac{1}{4}3²\pi + \frac{1}{4}5²\pi = \frac{1}{4}709\pi$ However, the answer given by the textbook is $177\frac{1}{4}\pi$ . Am I missing something here or is the textbook wrong ?
In business and the trades, at least before everything went to decimal notation for fractions, you would almost never see someone write a number as (for example) $\frac 52.$ Instead they would write $2\frac12,$ which by convention was read as a single number equal to $2+\frac12.$ This notation is called a mixed fraction. It is highly discouraged in most mathematical settings, but you can still see it used sometimes, especially in old puzzle books. While I was trying not to be U.S.-centric in this answer, I should acknowledge that mixed fractions are still extremely common in the U.S. for many kinds of measurements, and as noted in the comments are seen in some contexts in at least a few other countries.
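In particular, the textbook's $177\frac{1}{4}\pi$ means $\left(177+\frac14\right)\pi=\frac{709}{4}\pi$ , which is exactly the value computed in the question, so the two answers agree. A one-line check with Python's fractions module, just to make the bookkeeping explicit (my own sketch):

```python
from fractions import Fraction

# coefficient of pi in the grazing area from the question
coeff = Fraction(3, 4) * 15**2 + Fraction(1, 4) * 3**2 + Fraction(1, 4) * 5**2
print(coeff)                          # 709/4
print(coeff == 177 + Fraction(1, 4))  # True: this is the textbook's 177 1/4
```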
{ "source": [ "https://math.stackexchange.com/questions/3374247", "https://math.stackexchange.com", "https://math.stackexchange.com/users/704953/" ] }
3,375,516
The ellipsis " $...$ " sometimes seems to be seen as a bit informal. Its use is often justified for cases "where the intention or meaning is clear". And of course, one can go arbitrarily far with nitpicking, intentionally misunderstanding a notation, or suggesting ambiguities. In many cases, the intention really is clear. But for me, it still looks like the writer was handwavingly saying: "Yeah, and so on, y'know what I mean". Usually, the ellipsis can easily be replaced with a more rigorous notation - often involving some sort of indexing over $\mathbb{N}$ . And I wonder why this is often not done. So my question is: How acceptable is an ellipsis " $...$ " in formal mathematics? Of course, this does not refer to textbooks where the natural numbers are introduced as " $\{1,2,3,4,...\}$ ". It rather refers to mathematical research, or as a specific example: A paper about a proof where the correctness of the proof crucially depends on the right interpretation of an ellipsis, even if it is only used in a basic definition of something "trivial and obvious" like " $a_1 + ... + a_n$ ". How far should one go with trying to avoid the use of the ellipsis, in order to not be confronted with the possible ambiguities or lack of rigour? I found two questions that are related to this one: I've been told that the use of ellipsis in "$S = x_1 + x_2 + x_3 + x_4 + \dots$" is ambiguous and meaningless. Is it? More rigorous notation than "ellipsis" for "and so forth?" They refer to a particular use of ellipsis " $...$ ", and how to replace it with a more rigorous notation. Further search reveals attempts to formalize the ellipsis - for example, Proofs About Lists Using Ellipsis (A. Bundy, J. Richardson) states A notation often used in informal mathematical proofs is ellipsis (the dots in $a_1 + ... + a_n$ ) ... The first problem in formalising ellipsis is its inherent ambiguity. The reader of a formula containing ellipsis has to induce a pattern from the expressions on either side of the dots. [...] One can try todisambiguate ellipsis by putting in more context [...] but some ambiguity will always remain. More importantly, it is hard to see how we can ensure that a “proof” is in fact a proof unless it can be expressed in an unambiguous internal representation But this refers to a very specific context, and not to how acceptable the ellipsis is in proofs and definitions in general.
Discussing whether ellipsis are inherently good or bad is not that productive - that's a decision made in reference to particular writing and particular purpose. It is better to recognize that written mathematics is meant to communicate both rigor and intent and to understand the way in which elements like ellipses serve that purpose. Note that ellipses introduce elements into the text that sums do not: They explicitly substitute for the initial (and, if finite, terminal) segments of a sequence, which is useful if you want to make a point about those terms or if those values help clarify that bounds of the sum are sensible. They show the ordering of the terms. This is useful if you want to make an argument involving adjacent terms cancelling - and if you're in a non-commutative setting, this is often less ambiguous than a symbol like $\prod$ . They create some space on the page for each term. This is fantastic when you're dealing with something like generating functions where you might need pointwise operations on the coefficients of multiple series, because you can lay out the coefficients of multiple functions in a grid and can also integrate worked examples of small cases with general calculation by using notation such as $$1+2x+3x^2+\cdots+(n+1)x^n+\cdots.$$ There are also tangential benefits that depend on the audience and purpose - for instance, if you're trying to express a formal argument to an audience without so much mathematical maturity, ellipses can be a nice way to do that. Of course, ellipses also don't do some things that you might want them to: Ellipses don't always pin down what the summands are. If the pattern is just "counting, with a function applied unevaluated " - that is an expression like $f(1)+f(2)+f(3)+\cdots+f(n)$ - it's probably safe, but one has to be careful not to frustrate readers. Of course, you can always include a general term to clarify or explicitly state your intention in the text preceding the equation (and, hopefully, the equation should not come from nowhere! If it does, you haven't written enough words to introduce it!) Ellipses do not indicate the indices of the sum. This can be relevant if you need to split up the sum in some way, as often happens in analysis - there's no good way to say "here's the set of big terms, and here's the set of small terms, let's look at them separately." Ellipses cannot represent sums without order. If you're summing over the set of partitions of some set, you'd better use summation notation. There are surely more subtle things, but these are the most striking aspects of the notation that come to mind that would most often persuade me to use ellipses or to avoid them - and there are definitely situations where a creative use of notation can contradict what I wrote here and situations where it doesn't really matter what notation you choose.
{ "source": [ "https://math.stackexchange.com/questions/3375516", "https://math.stackexchange.com", "https://math.stackexchange.com/users/133498/" ] }
3,381,871
Plotting the function $f(x)=x^{1/3}$ defined for any real number $x$ gives us: Since $f$ is a function, for any given $x$ value it maps to a single y value (and not more than one $y$ value, because that would mean it's not a function as it fails the vertical line test). This function also has a vertical tangent at $x=0$ . My question is: how can we have a function that also has a vertical tangent? To get a vertical tangent we need 2 vertical points, which means that we are not working with a "proper" function as it has multiple y values mapping to a single $x$ . How is it possible for a "proper" function to have a vertical tangent? As I understand, in the graph I pasted we cannot take the derivative of x=0 because the slope is vertical, hence we cannot see the instantaneous rate of change of x to y as the y value is not a value (or many values, which ever way you want to look at it). How is it possible to have a perfectly vertical slope on a function? In this case I can imagine a very steep curve at 0.... but vertical?!? I can't wrap my mind around it. How can we get a vertical slope on a non vertical function?
No, we don't need two vertical points. By the same reasoning, if the graph of a function $f$ has a horizontal tangent line somewhere, then there would have to be two points of the graph of $f$ with the same $y$ coordinate. However, the tangent at $0$ of $x\mapsto x^3$ (note that this is not the function that you mentioned) is horizontal, in spite of the fact that no two points of its graph have the same $y$ coordinate.
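For the function in the question, the vertical tangent at $0$ only reflects the fact that the difference quotient blows up: for $f(x)=x^{1/3}$ it is $h^{1/3}/h = h^{-2/3}\to\infty$ as $h\to 0$ . A tiny numerical illustration in plain Python (my own sketch):

```python
def cbrt(x):
    """Real cube root, also for negative inputs."""
    return x**(1 / 3) if x >= 0 else -((-x)**(1 / 3))

for h in (1e-1, 1e-3, 1e-6, 1e-9):
    slope = (cbrt(h) - cbrt(0.0)) / h   # difference quotient at 0
    print(h, slope)                     # grows like h**(-2/3) without bound
```

Each $x$ still has a single $y$ value; it is only the slope, not the function, that becomes unbounded near $0$ .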
{ "source": [ "https://math.stackexchange.com/questions/3381871", "https://math.stackexchange.com", "https://math.stackexchange.com/users/711633/" ] }
3,381,887
This is what is given: $f '(x)=y=4x-5$ Find: the function $f(x)$ of which $f '(x)$ is the derivative.
Antidifferentiate term by term: an antiderivative of $4x$ is $2x^2$ and an antiderivative of $-5$ is $-5x$ , so $$f(x)=2x^2-5x+C$$ for an arbitrary constant $C$ . You can check this by differentiating: $\frac{d}{dx}\left(2x^2-5x+C\right)=4x-5$ . Without extra information (for example, the value of $f$ at a single point) the constant $C$ cannot be pinned down, so every function of this form is a valid answer.
{ "source": [ "https://math.stackexchange.com/questions/3381887", "https://math.stackexchange.com", "https://math.stackexchange.com/users/711642/" ] }