4,098,351
We have 6 differently colored balls and 4 boxes labelled 1 through 4. How many different ways can we fill the boxes using the 6 balls such that no box is left empty? We can ignore the boxes for now, and consider grouping the balls into 4 different groups, which we can do $\dbinom{6}{3,1,1,1}+\dbinom{6}{2,2,1,1}=6\cdot 5\cdot 4+6\cdot5\cdot3\cdot2=120+180=300$ different ways. Then given one of these combinations of balls, there are $4!$ different ways to deposit them into the boxes, leaving a possible $300\cdot 4!=7200$ different ways. I am unsure of where I misplayed my cards here.
Similar to De Moivre's formula: $$\cos nx \pm i\sin nx = (\cos x\pm i\sin x)^n$$ there is the hyperbolic De Moivre formula : $$\cosh nx \pm \sinh nx = (\cosh x\pm\sinh x)^n$$ which means this: if you can represent a real number $a$ as $a=\cosh x\pm\sinh x$ , then $\sqrt[n]{a}=\cosh (x/n)\pm\sinh (x/n)$ . In other words, hyperbolic trigonometric functions can help us exponentiate and take roots . (Note the "ordinary" trigonometric functions can do the same - for roots of complex numbers.) In this case, let's take $x=\pm\cosh^{-1}\left(2+\frac{3}{8}\right)$ so that $\cosh x=2\frac{3}{8}=\frac{19}{8}$ . This (from well-known identity $\cosh^2x-\sinh^2x=1$ ) gives $\sinh x=\pm\frac{3\sqrt{33}}{8}$ . Now, take $a=\frac{1}{8}(19\pm 3\sqrt{33})=\cosh x\pm \sinh x$ . All that is left is to apply the hyperbolic De Moivre's formula with $n=3$ to take the cube root and prove that the formula from the Wikipedia article you have cited is correct.
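Not part of the original answer, but a quick numerical sanity check of the cube-root claim (a minimal Python sketch, assuming NumPy is available):

```python
import numpy as np

# With cosh x = 19/8 we get sinh x = sqrt((19/8)^2 - 1) = 3*sqrt(33)/8,
# and hyperbolic De Moivre says a^(1/3) = cosh(x/3) + sinh(x/3)
# for a = cosh x + sinh x = (19 + 3*sqrt(33))/8.
x = np.arccosh(19 / 8)
a = (19 + 3 * np.sqrt(33)) / 8
root = np.cosh(x / 3) + np.sinh(x / 3)
print(np.isclose(root**3, a))  # True
```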
{ "source": [ "https://math.stackexchange.com/questions/4098351", "https://math.stackexchange.com", "https://math.stackexchange.com/users/651041/" ] }
4,101,231
Another question brought this up. The only definition I have ever seen for a matrix being upper triangular is, written in component forms, "all the components below the main diagonal are zero." But of course that property is basis dependent. It is not preserved under change of basis. Yet it doesn't seem as if it would be purely arbitrary because the product of upper triangular matrices is upper triangular, and so forth. It has closure. Is there some other sort of transformation besides a basis transformation that might be relevant here? It seems as if a set of matrices having this property should have some sort of invariants. Is there some sort of isomorphism between the sets of upper triangular matrices in different bases?
Many true things can be said about upper-triangular matrices, obviously... :) In my own experience, a useful more-functional (rather than notational) thing that can be said is that the subgroup of $GL_n$ consisting of upper-triangular matrices is the stabilizer of the flag (nested sequence) of subspaces consisting of the span of $e_1$, the span of $e_1$ and $e_2$, ... with standard basis vectors. Concretely, this means the following. The matrix product of a triangular matrix $A$ and $e_1$, namely $Ae_1$, is equal to a multiple of $e_1$, right? However, $Ae_2$ is more than a multiple of $e_2$: it can be any linear combination of $e_1$ and $e_2$. Generally, if you set $V_i= \operatorname{span}(e_1, \ldots, e_i)$, try to show that $A$ is upper triangular if and only if $A(V_i) \subseteq V_i$. The nested sequence of spaces $$ 0 = V_0 \subset V_1 \subset \ldots \subset V_n = \mathbb{R}^n$$ is called a flag of the total space. One proves a lemma that any maximal chain of subspaces can be mapped to that "standard" chain by an element of $GL_n$. In other words, no matter which basis you are using: being triangular intrinsically means respecting a flag with $\dim(V_i) = i$ (the last condition translates the maximality of the flag). As Daniel Schepler aptly commented, while an ordered basis gives a maximal flag, a maximal flag does not quite specify a basis. There are more things that can be said about flags versus bases... unsurprisingly... :)
{ "source": [ "https://math.stackexchange.com/questions/4101231", "https://math.stackexchange.com", "https://math.stackexchange.com/users/883326/" ] }
4,102,888
I throw 2 unfair dice. Suppose that $p_i$ is the probability that the first die gives an $i$ when I throw it, for $i =1,2,3,\ldots,6$, and $q_i$ the probability that the second die gives an $i$. If I throw the dice together, is it possible to get all possible sums $2,3,4,\ldots,12$ with the same probability? Here's what I've tried so far: the probability that I get a $2$ if I throw both dice is $p_1q_1$, the probability that I get $3$ is $p_1q_2+p_2q_1$, and generally the probability that I get $n$ is $$\sum_{i+j=n} p_iq_j$$ where $i=1,2,\ldots,6$, $j=1,2,\ldots,6$. So now, in order for all possible sums to appear with the same probability, the system $$p_1q_1=p_1q_2+p_2q_1$$ $$p_1q_2+p_2q_1=p_1q_3+p_2q_2+p_3q_1$$ $$\ldots$$ must have a solution. This is where I am stuck: I can't find a way to prove that the system above has a solution. Can you help?
This is a classical problem. Without changing the problem, we can let the digits on the dice be $0, \ldots, 5$ instead of $1, \ldots, 6$ to make our notation easier. Now we make two polynomials: $$ P(x) = \sum_{i=0}^5 p_ix^i,\qquad Q(x) = \sum_{i=0}^5q_ix^i. $$ Now we can succinctly phrase your condition on $p_i, q_i$ : it is satisfied if and only if $$ P(x)Q(x) = \frac1{11}\sum_{i=0}^{10} x^i. $$ Let's multiply both sides by $11 \times (x-1)$ , and you get $$ 11(x-1)P(x)Q(x) = x^{11} - 1. $$ The 11 zeroes of the polynomial on the right are the 11th roots of unity, which means those are also the zeroes of the polynomial on the left. The term $(x-1)$ takes care of one of the zeroes, and since $P, Q$ are both of degree 5, that means that they each have to have 5 of the other 10 zeroes. But now note: besides $1$ , all of the 11th roots of unity are complex numbers, while $P, Q$ are real polynomials. If a complex number is the root of a real polynomial, then so is its complex conjugate. That means that $P, Q$ must each have an even number of complex zeroes, but we just showed that they also have to have 5 each. We have reached a contradiction: such $P, Q$ , and thus such distributions $p_i, q_i$ , do not exist.
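As a numerical illustration of the root-counting argument (a small Python sketch, assuming NumPy; not from the original answer):

```python
import numpy as np

# P(x)Q(x) would have to be (1 + x + ... + x^10)/11, whose zeroes are the
# ten non-real 11th roots of unity.  A real degree-5 polynomial always has
# an odd number of real roots, so a factorization into real P and Q with
# five roots each is impossible.
roots = np.roots(np.ones(11))             # zeroes of 1 + x + ... + x^10
print(np.sum(np.abs(roots.imag) < 1e-9))  # 0: none of the ten roots is real
```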
{ "source": [ "https://math.stackexchange.com/questions/4102888", "https://math.stackexchange.com", "https://math.stackexchange.com/users/914002/" ] }
4,109,949
I would like to evaluate this integral: $$\int_0^1 \frac{\sin(\ln(x))}{\ln(x)}\,dx$$ I tried a lot. I started with the integral $uv$ formula [integration by parts?] and it became quite lengthy and never-ending. Then I thought maybe the expansion of $\sin x$ would help, but it didn't make any sense: it's an infinite series and there's no way here that it will end. It would be highly appreciated if you could give some hints to solve it. I am a high school student, so I expect it to be simple but tricky :-)
Observe that $\displaystyle \frac{\sin(\ln(x)) }{\ln{x}} = \int_0^1 \cos(t \ln{x}) \, \mathrm dt$ . Then: $$\begin{aligned} \int_0^1\frac{\sin(\ln(x)) }{\ln{x}}\, \mathrm dx &= \int_0^1 \int_0^1 \cos(t\ln{x})\;{\mathrm dt}\;{\mathrm dx} \\& = \ \int_0^1 \int_0^1 \cos(t\ln{x})\;{\mathrm dx}\;{\mathrm dt} \\&= \int_0^1 \frac{1}{t^2+1} \;{\mathrm dt} \\& = \frac{\pi}{4}. \end{aligned}$$ Equivalently, consider the function $$\displaystyle f(t) = \int_0^1\frac{\sin(t\ln(x)) }{\ln{x}}\, \mathrm dx.$$ Then $$\displaystyle f'(t) = \int_0^1 \cos(t \ln{x})\, \mathrm{d}x = \frac{1}{1+t^2}.$$ Therefore $f(t) = \arctan(t)+C$ . But $f(0) = 0$ so $C = 0$ . Hence $f(t) = \arctan(t)$ . We seek $\displaystyle f(1) = \arctan(1) = \frac{\pi}{4}$ . Series solution: $\displaystyle I = \int_0^1\frac{\sin(\ln(x)) }{\ln{x}}\, \mathrm dx = \int_0^1\sum_{k \ge 0} \frac{(-1)^k \ln^{2k}{x}}{(2k+1)!}\, \mathrm dx = \sum_{k \ge 0} \frac{(-1)^k }{(2k+1)!} \int_0^1 \ln^{2k}{x}\, \mathrm dx $ Then we calculate $\displaystyle \int_0^1 \ln^nx \,\mathrm{d}x$ via integration by parts to find that it's equal to $(-1)^n n!$ Or consider $\displaystyle f(m) = \int_0^1 x^m \,{dx} = \frac{1}{1+m}.$ Then taking the $n$ -th derivative of both sides: $\displaystyle f^{(n)}(m) = \int_0^1 x^m \ln^{n}{x} \,{dx} = \frac{(-1)^n n! }{(1+m)^{n+1}}.$ In either case we get $\displaystyle \int_0^1 \ln^{2k}{x}\, \mathrm dx = (2k)!$ . Hence: $\displaystyle I = \sum_{k \ge 0} \frac{(-1)^k (2k)!}{(2k+1)!} = \sum_{k \ge 0} \frac{(-1)^k (2k)!}{(2k+1)(2k)!} = \sum_{k \ge 0} \frac{(-1)^k }{(2k+1)} = \frac{\pi}{4}.$ To prove the last equality, consider $\displaystyle \frac{1}{2k+1} = \int_0^1 x^{2k} \, \mathrm dx$ and the geometric series $\displaystyle \sum_{k \ge 0}(-1)^kx^{2k} = \frac{1}{1+x^2}$ . Then $\begin{aligned} \displaystyle \sum_{k \ge 0} \frac{(-1)^k}{2k+1} & = \sum_{k \ge 0}{(-1)^k}\int_0^1 x^{2k}\,{\mathrm dx} \\& = \int_0^1 \sum_{k \ge 0}(-1)^k x^{2k} \, \mathrm dx \\& = \int_0^1 \frac{1}{1+x^2}\,\mathrm dx \\& = \frac{\pi}{4}.\end{aligned}$ Regarding the integral $\displaystyle I = \int_0^1 \cos(t \ln{x})\, \mathrm{d}x $ , we let $x = e^{-y}$ . Then $\displaystyle I = \int_0^\infty e^{-y}\cos(ty)\,\mathrm{d}y.$ We get the answer by applying integration by parts (twice). Or we can consider the real part, if we're familiar with complex numbers: \begin{align} I & = \int_0^\infty e^{-y}\cos(ty)\,\mathrm{d}y =\Re\left(\int_0^\infty e^{-(1-it)y}\mathrm{d}y\right)\\ &=\Re\left(\int_0^\infty e^{-(1+t^2)y}\mathrm{d}(1+it)y\right)\\ &=\Re\left(\frac{1+it}{1+t^2}\int_0^\infty e^{-(1+t^2)y}\mathrm{d}(1+t^2)y\right)\\ &=\Re\left(\frac{1+it}{1+t^2}\right)\\& =\frac{1}{1+t^2}. \end{align}
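A quick numerical check of the result (a minimal Python sketch, assuming SciPy):

```python
from math import log, sin, pi
from scipy.integrate import quad

def integrand(x):
    u = log(x)
    return 1.0 if u == 0 else sin(u) / u  # removable singularity at x = 1

val, _ = quad(integrand, 0, 1)
print(val, pi / 4)  # both ~0.7853981...
```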
{ "source": [ "https://math.stackexchange.com/questions/4109949", "https://math.stackexchange.com", "https://math.stackexchange.com/users/855081/" ] }
4,116,122
If $X$ is a discrete random variable that can take the values $x_1, x_2, \dots$ with probability mass function $f_X$, then we define its mean by the number $$\sum x_i f_X(x_i) \tag{1}$$ when the series above is absolutely convergent. That's the definition of the mean value of a discrete r.v. I've encountered in my books (Introduction to the Theory of Statistics by Mood A., Probability and Statistics by DeGroot M.). I know that if a series is absolutely convergent then it is convergent, but why do we need to ask for the series (1) to converge absolutely, instead of just asking it to converge? I'm taking my introductory courses in probability and so far I haven't found a situation that forces us to restrict ourselves this way. Any comments about the subject are appreciated.
It's because if the series is convergent but not absolutely convergent, you can rearrange the sum to get any value. Any good notion of "mean" or "expectation" should not depend on the ordering of the $x_i$ 's. For a more abstract reason, note that we define the expectation $E[X]$ of a random variable $X$ defined on a probability space $(\Omega, \mathcal{F}, P)$ as the Lebesgue integral $\int_{\Omega} X dP$ . By definition of the Lebesgue integral, this is only well-defined if the integrand is absolutely integrable. If you learn more about measure theory, you will also learn why this definition makes sense. It is done to avoid strange situations like $\infty - \infty$ in the theory.
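The rearrangement phenomenon is easy to see numerically with the alternating harmonic series, whose natural ordering sums to $\ln 2$; a rearrangement taking two positive terms for every negative one converges to $\frac{3}{2}\ln 2$ instead (a small Python sketch, not from the original answer):

```python
from math import log

def rearranged(n_blocks):
    s, p, q = 0.0, 1, 2   # p: next odd denominator, q: next even denominator
    for _ in range(n_blocks):
        s += 1.0 / p + 1.0 / (p + 2) - 1.0 / q  # two positives, one negative
        p += 4
        q += 2
    return s

print(rearranged(200000), 1.5 * log(2))  # both ~1.0397, not ln 2 ~ 0.6931
```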
{ "source": [ "https://math.stackexchange.com/questions/4116122", "https://math.stackexchange.com", "https://math.stackexchange.com/users/858765/" ] }
4,135,902
Chebyshev's Inequality Let $f$ be a nonnegative measurable function on $E$. Then for any $\lambda>0$, $$ m\{x \in E \mid f(x) \geq \lambda\} \leq \frac{1}{\lambda} \cdot \int_{E} f. $$ What exactly is this inequality telling us? Is this saying that there is an inverse relationship between the size of the measurable set and the value of the integral?
The essential point is that $$0 \leq \lambda \cdot 1_{\{x \in E \;|\; f(x) \geq \lambda \} } \leq f$$ where $1_{\{x \in E \;|\; f(x) \geq \lambda \} }$ is the characteristic (indicator) function of $\{x \in E \;|\; f(x) \geq \lambda \}$ . Let us see. Let $A$ be $\{x \in E \;|\; f(x) \geq \lambda \} $ . Then it is clear that $$0 \leq \lambda \cdot 1_A \leq f$$ So $$0 \leq \lambda \cdot m(A)= \int_E \lambda \cdot 1_A \leq \int_E f$$ So, since $\lambda >0$ , we have $$m(\{x \in E \;|\; f(x) \geq \lambda \} )= m(A) \leq \frac{1}{\lambda}\int_E f $$
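A concrete sanity check (a minimal Python sketch, using the hypothetical example $E=[0,1]$ with Lebesgue measure and $f(x)=x^2$, which is not in the original answer): here $m\{x : x^2 \geq \lambda\} = 1-\sqrt{\lambda}$ and $\frac{1}{\lambda}\int_0^1 x^2\,dx = \frac{1}{3\lambda}$.

```python
from math import sqrt

for lam in (0.1, 0.25, 0.5, 0.9):
    lhs = 1 - sqrt(lam)      # measure of {x in [0,1] : x^2 >= lam}
    rhs = 1 / (3 * lam)      # (1/lam) * integral of x^2 over [0,1]
    print(lam, lhs <= rhs)   # True every time
```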
{ "source": [ "https://math.stackexchange.com/questions/4135902", "https://math.stackexchange.com", "https://math.stackexchange.com/users/801005/" ] }
4,145,968
The Galois Correspondence Theorem says that for any Galois extension of fields $K/F$, there is a one-to-one inclusion-reversing correspondence between the intermediate fields $K \supseteq E \supseteq F$ and subgroups of the Galois group $\text{Gal }K/F$. I have two questions about this: Why is this theorem important? When I first learned it, I told myself that it was just a computational tool: it converts problems about fields, which are hard to understand, into problems about groups, which are easier to understand. But clearly, it is not just there for computational convenience, it is an important result in its own right. So I ask: is there a non-utilitarian reason why the theorem is profound? In other words, what deep "underlying truth" about the algebraic structure of fields does this theorem reveal? Why is this theorem intuitively plausible? The way I have understood it, if you place the subfield lattice of $K/F$ and the subgroup lattice of $\text{Gal }K/F$ side-by-side (with the latter flipped upside-down), then the diagrams are the same. This seems very out of the blue to me. Why is it intuitively plausible that these two diagrams should look the same? Why should we expect that by studying the symmetries of a field extension, we can recover the structure of the whole field itself? An explanation with a specific example of a field extension would be great. Thanks for the help!
I'll limit the discussion here to finite extensions of fields. There is a Galois theory for algebraic extensions of possibly infinite degree, and this is an essential tool in modern number theory through the role of Galois representations. You could say "proof of Fermat's Last Theorem" is a widely known result that would be impossible without Galois theory (and a whole lot more mathematics). To appreciate what makes the Galois correspondence intuitive, keep in mind the following points. The mappings in both directions for the Galois correspondence make sense for an arbitrary finite extension of fields $K/F$ , but only for Galois extensions are these two mappings actually inverses of each other. For example, $\mathbf Q(\sqrt[n]{2})/\mathbf Q$ has a trivial automorphism group when $n$ is odd, but there are lots of intermediate field extensions if $n$ has lots of factors (see my reply to the MSE question here ). It's worth thinking about why the Galois correspondence doesn't work for $\mathbf Q(\sqrt[3]{2})/\mathbf Q$ or $\mathbf Q(\sqrt[4]{2})/\mathbf Q$ but does work for $\mathbf Q(\sqrt[3]{2},e^{2\pi i/3})/\mathbf Q$ and $\mathbf Q(\sqrt[4]{2},i)/\mathbf Q$ . There is something that happens when we adjoin all the roots of $x^3-2$ to $\mathbf Q$ or all the roots of $x^4-2$ to $\mathbf Q$ that doesn't work when we adjoin only a proper subset of the roots to $\mathbf Q$ . Can you articulate what that is? To give an answer to the question at the end of the previous item, expressed in the simplest way, we need to give some intuition behind the magic of splitting fields compared to finite extensions that are not splitting fields. I think the most basic explanation of why splitting fields (normal extensions) are so special is the symmetric function theorem: every symmetric polynomial in $r_1, \ldots, r_n$ is a polynomial in the elementary symmetric polynomials of $r_1, \ldots, r_n$ . Because the elementary symmetric polynomials in $r_1, \ldots, r_n$ are the coefficients of $(x-r_1)\cdots(x-r_n)$ , if that polynomial has coefficients in $\mathbf Q$ then all symmetric polynomial expressions in $r_1, \ldots, r_n$ are going to be rational, and here is why that's such a big deal: it shows that every number in $\mathbf Q(r_1,\ldots,r_n)$ has all the other roots of its minimal polynomial over $\mathbf Q$ already in that field. If $K$ is a splitting field over $F$ of some polynomial $f(x)$ then every $\alpha \in K$ has all the other roots of its minimal polynomial over $F$ also in $K$ . That is the "deep underlying truth" about Galois extensions, which can be proved using the symmetric function theorem before you prove the Galois correspondence works. Modern accounts of Galois theory do not depend on this approach, but earlier accounts of Galois theory did rely on it. Galois extensions exist in great abundance: a finite extension of fields $K/F$ in characteristic $0$ (land of intuition) can always be enlarged to a finite Galois extension $K'/F$ , so we can take advantage of Galois extensions to solve problems not originally expressed in the setting of Galois extensions. The issue of Galois extensions requiring separability should be considered a technicality not directly relevant to your intuition: intuition takes place in characteristic $0$ , where all irreducible polynomials are automatically separable. Intuitively, Galois extensions = normal extensions. 
This is not generally true in characteristic $p$ , but you get intuition for Galois theory in characteristic $0$ , where it is true and the symmetric function theorem explains why the symmetry in the Galois correspondence works for Galois extensions. The Galois correspondence is profound in number theory because it leads to a highly nonobvious way of turning prime ideals into field automorphisms (this uses Galois theory for number fields and finite fields). The technical term here is "Frobenius automorphism associated to a prime ideal". Another reason the Galois correspondence is profound is that it is a template for similar correspondences elsewhere in mathematics. There are inclusion-reversing correspondences between a) subgroups of ${\rm Gal}(K/F)$ and intermediate fields between $K$ and $F$ , b) subspaces of a finite-dimensional vector space and subspaces of its dual space, c) subgroups of a finite abelian group $A$ and subgroups of its dual group ${\rm Hom}(A,\mathbf C^\times)$ (generalizing to all locally compact abelian groups by Pontryagin duality) d) subvarieties of affine $n$ -space over $\mathbf C$ and radical ideals in $\mathbf C[x_1,\ldots,x_n]$ , e) subgroups of the fundamental group of a nice space $X$ and covering spaces of $X$ . All of these correspondences have similar features and it happens that historically the correspondence with field extensions (Galois correspondence) was found first. Considering example (c), if you define characters of an arbitrary finite group $G$ in the same way as you define characters of a finite abelian group (homomorphisms from the group to $\mathbf C^\times$ ), you're going to lose a lot of the nice properties because homomorphisms $G \to \mathbf C^\times$ can only see $G$ as far as the quotient group $G/[G,G]$ (abelianization of $G$ ). To make character theory work well for arbitrary finite groups, we have to allow irreducible representations of dimension greater than $1$ .
{ "source": [ "https://math.stackexchange.com/questions/4145968", "https://math.stackexchange.com", "https://math.stackexchange.com/users/628249/" ] }
4,155,362
In this New York Times article, Steven Strogatz offers the following argument for why the area of a circle is $\pi r^2$ . Suppose you divide the circle into an even number of pizza slices of equal arc length, and wedge them together in such a way that half of the slices have an arc at the bottom, and half of the slices have an arc at the top: Then, the base of the shape created has length $\pi r$ , and its height is $r$ . As the number of slices tends to infinity, the limiting case is that of a rectangle: Hence, the area of the circle is $\pi r^2$ . Although this argument is very geometrically appealing, it also seems fairly difficult to make rigorous. I suppose the most challenging part is showing that the base of the shape really does become arbitrarily flat, and its height becomes arbitrarily vertical, if that makes sense. How might we convert this intuitive argument into a rigorous proof?
For simplicity I will assume the number of slices $n$ is even. If you connect the four "corners" of the wedge figure, you obtain an "inner" parallelogram with one pair of sides having length exactly $r$, and another pair of sides having length approximately $\pi r$. The height of the parallelogram is $r \cos \frac{\pi}{n}$ (which tends to $r$); the length is $n r \sin \frac{\pi}{n}$ (which tends to $\pi r$). So the area of the "inner" parallelogram tends to $\pi r^2$. You can also enclose the wedge figure in a slightly larger parallelogram. The height is again $r \cos \frac{\pi}{n}$ but with two additional "crusts" each having thickness $r(1 - \cos \frac{\pi}{n})$, so the final height is $r(2-\cos \frac{\pi}{n})$ (which also tends to $r$). I think the length is the same as that of the inner parallelogram: $nr \sin \frac{\pi}{n}$ (which tends to $\pi r$). So the area of the wedge figure is squeezed between the areas of the two parallelograms, $\pi r^2 \cos \frac{\pi}{n} \frac{\sin(\pi/n)}{\pi/n}$ and $\pi r^2 \left(2-\cos \frac{\pi}{n}\right) \frac{\sin(\pi/n)}{\pi/n}$. Since both of these areas converge to $\pi r^2$, and the wedge figure has exactly the same area as the circle for every $n$, the area of the circle is $\pi r^2$. Picture of the inner and outer parallelograms:
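The squeeze is easy to watch numerically (a small Python sketch of the two area formulas above, taking $r=1$):

```python
from math import pi, cos, sin

r = 1.0
for n in (4, 8, 16, 64, 256, 1024):  # n = number of slices (even)
    inner = pi * r**2 * cos(pi / n) * sin(pi / n) / (pi / n)
    outer = pi * r**2 * (2 - cos(pi / n)) * sin(pi / n) / (pi / n)
    print(n, inner, outer)           # both columns tend to pi = 3.14159...
```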
{ "source": [ "https://math.stackexchange.com/questions/4155362", "https://math.stackexchange.com", "https://math.stackexchange.com/users/623665/" ] }
4,155,379
I'm working on the following problem in the Bishop book, Pattern Recognition and Machine Learning: I read the solution, and there was a trick for proving that twice the double summation of the anti-symmetric term is zero: I do not understand the last three lines of the solution. How does this transformation hold? $$ \sum^D_{i=1}\sum^D_{j=1}w^A_{ji}x_ix_j = \sum^D_{j=1}\sum^D_{i=1}w^A_{ji}x_jx_i $$
Nothing mysterious is happening: $i$ and $j$ are dummy summation indices, and both double sums run over the same set of index pairs $(i,j)$ with $1 \le i,j \le D$, so renaming the indices ($i \leftrightarrow j$) and using $x_ix_j = x_jx_i$ changes nothing. The point of the trick is to combine this relabelling with antisymmetry. Write $S = \sum_{i=1}^D\sum_{j=1}^D w^A_{ij}x_ix_j$. Relabelling the dummy indices gives $S = \sum_{j=1}^D\sum_{i=1}^D w^A_{ji}x_jx_i = \sum_{i=1}^D\sum_{j=1}^D w^A_{ji}x_ix_j$, and the antisymmetry $w^A_{ji} = -w^A_{ij}$ then yields $S = -S$, hence $2S = 0$ and the anti-symmetric term contributes nothing.
{ "source": [ "https://math.stackexchange.com/questions/4155379", "https://math.stackexchange.com", "https://math.stackexchange.com/users/694423/" ] }
4,168,939
This is a question from a practice workbook for a college entrance exam. Let $$f(x) = \frac{x^3}{3} -\frac{x^2}{2} + x + \frac{1}{12}.$$ Find $$\int_{\frac{1}{7}}^{\frac{6}{7}}f(f(x))\,dx.$$ While I know that computing $f(f(x))$ is an option, it is very time consuming and wouldn't be practical considering the time limit of the exam. I believe there must be a more elegant solution. Looking at the limits, I tried to find useful things about $f\left(\frac{1}{7}+\frac{6}{7}-x\right)$. The relation I obtained was that $f(x) + f(1-x) = \frac{12}{12} = 1$. I don't know how to use this for the direct integral of $f(f(x))$.
We know that $\displaystyle \int_{a}^{1-a}f(f(x))\,dx = \int_{a}^{1-a}f(f(1-x))\,dx$ (substitute $x \mapsto 1-x$; the limits $a$ and $1-a$ are symmetric about $\frac{1}{2}$). So, $\displaystyle \int_{a}^{1-a}f(f(x))\,dx = \frac{1}{2} \int_a^{1-a}\left[f(f(x))+f(f(1-x)) \right] dx.$ Now for the given function, observe that $f(x) + f(1-x) = 1 \implies f(1-x) = 1 - f(x)$. So, $f(f(x)) + f(f(1-x)) = f(f(x)) + f(1-f(x)) = 1$. So we have $\displaystyle \int_{a}^{1-a}f(f(x))\,dx = \frac{1}{2} (1-2a)$. Here $a = \dfrac{1}{7}$ and that leads to $\dfrac{5}{14}$.
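A numerical confirmation (a minimal Python sketch, assuming SciPy):

```python
from scipy.integrate import quad

f = lambda x: x**3 / 3 - x**2 / 2 + x + 1 / 12
val, _ = quad(lambda x: f(f(x)), 1 / 7, 6 / 7)
print(val, 5 / 14)  # both ~0.3571428...
```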
{ "source": [ "https://math.stackexchange.com/questions/4168939", "https://math.stackexchange.com", "https://math.stackexchange.com/users/915160/" ] }
4,190,512
An integral from MIT Integration Bee: Show that $$I = \int_{0}^{2\pi}\cos(x)\cos(2x) \cos(3x)\,dx = \frac\pi2$$ This integral appeared in the 2019 paper. Below is my own solution: $$\begin{align} I &= \int_{0}^{2\pi}\cos(x)\cos(2x)(\cos x\cos2x-\sin x\sin2x)\,dx \\[6pt] &= \int_{0}^{2\pi}\cos^2(x)\cos^2(2x) \,dx -\int_{0}^{2\pi}\cos(x)\cos(2x)\sin(x)\sin(2x)\, dx \end{align}$$ Replacing $\cos^2(x)= \frac{1+\cos(2x)}{2}$ for the first integral and $\sin(x)\cos(x)= \sin(2x)/2 $ for the second, we get $$\begin{align} &\int_{0}^{2\pi}\frac{1+\cos(2x)}{2}\cos^2(2x) dx-\frac{1}{2}\int_{0}^{2\pi}\cos(2x)\sin^2(2x) dx \\[6pt] =\; &\frac{1}{2}\left(\int_{0}^{2\pi}\cos^2(2x)dx \, + \int_{0}^{2\pi}\cos(2x)\cos(4x)dx \right) \\[6pt] =\; &\frac{1}{2} \left( \pi + 0\right) \qquad \text{$\because$ the orthogonality of $\cos(mx)$} \\[6pt] =\; &\frac\pi2 \end{align}$$ This solution is rather awkward, and I'm sure there's a better and faster approach to this integral. Could anyone provide a more elegant solution (or a sketch of it)? Thanks.
By symmetry $$\displaystyle I = 4\int_0^{\pi/2} \cos x \cos 2x \cos 3x \, \mathrm dx.$$ Let $\displaystyle x \mapsto \frac{\pi}{2}-x$ then $$\displaystyle I = 4\int_0^{\pi/2} \sin x \cos 2x \sin 3x \, \mathrm dx$$ So that if we add the two $$\displaystyle 2I = 4\int_0^{\pi/2} \cos^2{2x} \, \mathrm dx = \pi. $$ Therefore $$I = \frac{\pi}{2}.$$
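A one-line numerical check (Python with SciPy assumed):

```python
from math import cos, pi
from scipy.integrate import quad

val, _ = quad(lambda x: cos(x) * cos(2 * x) * cos(3 * x), 0, 2 * pi)
print(val, pi / 2)  # both ~1.5707963...
```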
{ "source": [ "https://math.stackexchange.com/questions/4190512", "https://math.stackexchange.com", "https://math.stackexchange.com/users/766117/" ] }
4,198,143
I have this equation: $$3^{3x} - 3^x = (3x)!$$ We have to solve for integer $x$. I did try, but to no avail; I can't manipulate either side of this equation. I factored $3^x$ out of the LHS of the equation and got the product $(3^x) (3^{2x}-1)$, but I have no idea what to do with the RHS of the equation (which is a factorial). It looks like the answer is $x=2$ but I want to solve it algebraically. Any hints/solution would be greatly appreciated.
The main idea is that $n!>a^n$ for $n$ sufficiently large, so there is only a finite number of values to check. In this problem, a simple mathematical induction shows that $n!>3^n$ for every $n\ge 7$. Therefore, for $x\ge 3$ (so that $3x \ge 9 \ge 7$), we have $(3x)! > 3^{3x} > 3^{3x} - 3^x$. For negative integers $x$ the right-hand side $(3x)!$ is not even defined, so only $x = 0, 1, 2$ remain to be checked. The equality is satisfied for $x=2$ ($3^6-3^2=720=6!$) but not for $0$ ($3^0-3^0=0\neq 0!$) or $1$ ($3^3-3^1=24\neq 3!$).
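The finite check is trivial to automate (a minimal Python sketch; exact integer arithmetic, so no rounding issues):

```python
from math import factorial

# (3x)! eventually dominates 3^(3x), so only small x need testing.
for x in range(7):
    if 3**(3 * x) - 3**x == factorial(3 * x):
        print(x)  # prints only 2
```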
{ "source": [ "https://math.stackexchange.com/questions/4198143", "https://math.stackexchange.com", "https://math.stackexchange.com/users/949967/" ] }
4,209,381
Consider the function $f(x)=a_0x^2$ for some $a_0\in \mathbb{R}^+$. Take $x_0\in\mathbb{R}^+$ so that the arc length $L$ between $(0,0)$ and $(x_0,f(x_0))$ is fixed. Given a different arbitrary $a_1$, how does one find the point $(x_1,y_1)$ so that the arc length is the same? Schematically, In other words, I'm looking for a function $g:\mathbb{R}^3\to\mathbb{R}$, $g(a_0,a_1,x_0)$, that takes an initial fixed quadratic coefficient $a_0$ and point and returns the corresponding point after "straightening" via the new coefficient $a_1$, keeping the arc length with respect to $(0,0)$. Note that the $y$ coordinates are simply given by $y_0=f(x_0)$ and $y_1=a_1x_1^2$. Any ideas? My approach: Knowing that the arc length is given by $$ L=\int_0^{x_0}\sqrt{1+(f'(x))^2}\,dx=\int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx $$ we can use the conservation of $L$ to write $$ \int_0^{x_0}\sqrt{1+(2a_0x)^2}\,dx=\int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx $$ which we solve for $x_1$. This works, but it is not very fast computationally and can only be done numerically (I think), since $$ \int_0^{x_1}\sqrt{1+(2a_1x)^2}\,dx=\frac{1}{4a_1}\left(2a_1x_1\sqrt{1+(2a_1x_1)^2}+\sinh^{-1}(2a_1x_1)\right) $$ Any ideas on how to do this more efficiently? Perhaps using the tangent lines of the parabola? More generally, for fixed arc lengths, I guess my question really is what are the expressions of the following red curves for fixed arc lengths: Furthermore, could this be determined for any $f$? Edit: Interestingly enough, I found this clip from 3Blue1Brown. The origin point isn't fixed as in my case, but I wonder how the animation was made (couldn't find the original video, only a clip, but here's the link ) For any Mathematica enthusiasts out there, a computational implementation of the straightening effect is also being discussed here , with some applications.
Phrased differently, what we want are the level curves of the function $$\frac{1}{2}f(x,y) = \int_0^x\sqrt{1+\frac{4y^2t^2}{x^4}}\:dt = \frac{1}{2}\int_0^2 \sqrt{x^2+y^2t^2}\:dt$$ which will always be perpendicular to the gradient at that point $$\nabla f = \int_0^2 dt\left(\frac{x}{\sqrt{x^2+y^2t^2}},\frac{yt^2}{\sqrt{x^2+y^2t^2}}\right)$$ Now is the time to naturally reintroduce $a$ as the parameter for these curves. Therefore what we want is to solve the differential equation $$x'(a) = \int_0^2 \frac{-axt^2}{\sqrt{1+a^2x^2t^2}}dt \hspace{20 pt} x(0) = L$$ where we substitute $y(a) = a\cdot x^2(a)$; thus solving for one component automatically gives us the other. EDIT: Further investigation has led me to some interesting conclusions. It seems like if $y=f_a(x)$ is a family of strictly monotonically increasing continuous functions and $$\lim_{a\to0^+}f_a(x) = \lim_{a\to\infty}f_a^{-1}(y) = 0$$ then the curves of constant arclength will start and end at the points $(0,L)$ and $(L,0)$. Take for example the similar looking family of curves $$y = \frac{\cosh(ax)-1}{a}\implies L = \frac{\sinh(ax)}{a}$$ The curves of constant arclength are of the form $$\vec{r}(a) = \left(\frac{\sinh^{-1}(aL)}{a},\frac{\sqrt{1+a^2L^2}-1}{a}\right)$$ Below is a (sideways) plot of the curve of arclength $L=1$ (along with the family of curves evaluated at $a=\frac{1}{2},1,2,4,$ and $10$), which has an explicit equation of the form $$x = \frac{\tanh^{-1}y}{y}\cdot(1-y^2)$$ These curves and the original family of parabolas in question both have this property, as well as the perfect circles obtained from the family $f_a(x) = ax$. The reason the original question was hard to tractably solve was because of the non-analytically invertible arclength formula.
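For what it's worth, the conservation-of-arc-length equation from the question is also quick to solve numerically with a root finder, which sidesteps the non-invertible antiderivative entirely (a minimal Python sketch, assuming SciPy; the sample values $a_0=1$, $a_1=1/4$, $x_0=1$ are arbitrary):

```python
from math import sqrt
from scipy.integrate import quad
from scipy.optimize import brentq

def arc_length(a, x_end):
    return quad(lambda t: sqrt(1 + (2 * a * t) ** 2), 0, x_end)[0]

def matching_x(a0, a1, x0):
    """x1 such that y = a1*x^2 on [0, x1] has the same arc length
    as y = a0*x^2 on [0, x0]."""
    L = arc_length(a0, x0)
    # the integrand is >= 1, so arc_length(a1, L) >= L: [0, L] brackets the root
    return brentq(lambda x: arc_length(a1, x) - L, 0, L)

x1 = matching_x(a0=1.0, a1=0.25, x0=1.0)
print(x1, arc_length(0.25, x1), arc_length(1.0, 1.0))  # last two agree
```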
{ "source": [ "https://math.stackexchange.com/questions/4209381", "https://math.stackexchange.com", "https://math.stackexchange.com/users/487230/" ] }
4,213,769
In his newest video , Matt Parker claims that a sphere with three holes (a pair of trousers) and a torus with one hole (a pair of trousers with the legs sewn) are homeomorphic. I assume he meant removing closed discs, since he wanted them to be manifolds. As justification, he shows that they're homotopy equivalent (to $S^1\vee S^1$ ) and then claims 'because they have thickness' they are homeomorphic. I'm not fully convinced, but cannot show either way. My thoughts are that they are homeomorphic, but that it's not as simple as he suggested (he's essentially suggesting 'rotating' the perpendicularly glued annuli so that they are glued parallel to each other, in some sort of projection, I'm not convinced this is well-defined). Thoughts?
They are not homeomorphic. They are merely homotopy equivalent. A way to see they are not homeomorphic is that they have different numbers of boundary components (three versus one). A fancier way (using homology) is to consider the fact that a sphere with three holes can be embedded in the plane, which implies that the algebraic intersection number of any pair of closed loops is $0$ modulo $2$ . But on a torus with one hole, it's easy to come up with a pair of curves that intersect in exactly one point, which means the algebraic intersection number is $1$ modulo $2$ . Addressing "they have thickness": if we are considering (as Parker does in the video) thickened surfaces, which you might formalize as products of a surface with an interval (and thus are 3-dimensional manifolds), then the thickened surfaces are indeed homeomorphic. They are both genus-2 handlebodies . In general, if an orientable surface deformation retracts onto a wedge of $g$ circles (like $S^1\vee S^1$ for $g=2$ ) then their thickenings are homeomorphic to a genus- $g$ handlebody. There is another way you can formalize a thickening that depends on the way a surface is embedded in $\mathbb{R}^3$ , which is to thicken it into the ambient space (i.e., take a product with the normal bundle, rather than with a trivial $I$ -bundle like above). You can drop the dependence on orientability with this notion of thickening. An "ambiently thickened" Mobius strip is homeomorphic to an "ambiently thickened" annulus, where both are homeomorphic to a genus- $1$ handlebody (a solid torus). But thickened in the first way, they are non-homeomorphic. The Mobius strip gives a non-orientable 3-manifold, but handlebodies are orientable.
{ "source": [ "https://math.stackexchange.com/questions/4213769", "https://math.stackexchange.com", "https://math.stackexchange.com/users/457089/" ] }
4,221,576
I am going to take a very simple example to elaborate my question. When we integrate $\sec (x)\,dx$ we divide and multiply by $\sec (x) + \tan (x)$. $$\int \sec(x)\,dx = \int \sec (x) \left[{\sec (x) + \tan (x) \over \sec (x) + \tan (x)}\right]\, dx$$ I am just solving from here. $$\int {\sec^2(x) + \sec(x)\tan(x) \over \sec(x) + \tan(x)} \, dx $$ Then we let $\sec(x) + \tan(x) = u$ $$\implies du = (\sec^2(x) + \sec(x)\tan(x))\,dx$$ $$\implies \int {du \over u}$$ $$= \ln{\left|\sec(x) + \tan(x)\right|} + c$$ Now coming to my questions. Why do we HAVE to make that manipulation of multiplying by $\sec(x) + \tan(x)$? I know it's to get the answer... but why does it work so well? How does one even think like that? Like "if I multiply $\sec(x) + \tan(x)$ in the numerator and denominator then I'll be able to solve this very easily." What in that integral gives one direction to think of such a manipulation?
Many otherwise-mysterious tricks in integrals involving trigonometric functions can be explained by expressing the trig functions in terms of exponentials, as in $\cos(x)=(e^{ix}+e^{-ix})/2$ . The resulting rational expressions in exponentials can always be integrated... EDIT: to explain why/how rational expressions on $e^{ix}$ can always be integrated: for example, $$ \int {1\over 1+e^{ix}} \, dx \;=\; -i \int {1\over e^{ix}(1+e^{ix})}\;d(e^{ix}) \;=\; -i \int {1\over t(1+t)}\;dt $$ with $t=e^{ix}$ . Then use partial fractions to break this up into easily-computable pieces. Once I learned this, years ago, I mostly lost interest in the tricks, because equivalents of them can be recovered by using exponentials and complex numbers. No guessing is necessary. Nevertheless, historically, I'm fully confident that people did just experiment endlessly until they found a trick to be able to compute a given indefinite integral, and then that trick was passed on to subsequent generations. In particular, if we do look at it that way, there's no real way that one can "anticipate" the necessary tricks...
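A numerical way to convince yourself of the antiderivative without any tricks (a small Python sketch, assuming NumPy): differentiate $\ln|\sec x+\tan x|$ by central differences and compare with $\sec x$.

```python
import numpy as np

F = lambda t: np.log(np.abs(1 / np.cos(t) + np.tan(t)))
x = np.linspace(0.1, 1.4, 200)        # stay inside (-pi/2, pi/2)
h = 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
print(np.max(np.abs(numeric - 1 / np.cos(x))))  # tiny: the two agree
```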
{ "source": [ "https://math.stackexchange.com/questions/4221576", "https://math.stackexchange.com", "https://math.stackexchange.com/users/956504/" ] }
4,224,919
I'm asking about a question about two lines which are tangents to a circle. Most of the question is quite elementary algebra; it's just one stage I can't get my head round. Picture here: The circle $C$ has equation $(x-6)^2+(y-5)^2=17$. The lines $l_1$ and $l_2$ are each a tangent to the circle and intersect at the point $(0,12)$. Find the equations of $l_1$ and $l_2$ giving your answers in the form $y=mx+c$. Both lines have equation $y = mx + 12$ where $m$ represents two gradients to be found (both negative). The circle has equation $(x-6)^2 + (y-5)^2 = 17$. Combining the knowledge $y = mx + 12$ for both lines and $(x-6)^2 + (y-5)^2 = 17$ produces the quadratic: $$(1+m^2)x^2 + (14m−12)x + 68 = 0$$ At this point I was confused about what step to take to get $m$ or $x$. Looking at the worked solution, it says "There is one solution so using the discriminant $b^2 − 4ac = 0$ ..." From here it's straightforward algebra again, producing another quadratic based on the $b^2 - 4ac$ of the previous quadratic: $$(14m−12)^2 - 4(1+m^2)(68) = 0$$ etc. until we have $m = -4$ or $-8/19$. My question is I don't understand how we can tell it's right to assume $b^2 - 4ac = 0$ and how we can see that's the right step to take in this question. Obviously this feels intuitively wrong since we know there are two solutions for $m$. Is the logic that $m$ is a gradient of a line which intersects with the circle once? But if so, how do you see that this is the right equation to decide it only has one solution? (I had assumed before looking at the worked example that I needed to do something more complicated based on the equations of the radii or the knowledge that the two tangents would be equal length from the circle to where they meet or something.) As you can tell, this question is based on fairly elementary algebra; I'm more concerned about knowing why this is the right step to take here. Many thanks for any answers.
When you insert $y=mx+12$ into the circle equation and obtain $$(1+m^2)\cdot x^2 + (14m−12)\cdot x + 68 = 0\label{1}\tag{1}$$ There are two things you can do: solve for $x$ or for $m$ . Which of those makes sense? Notice that in $(\ref{1})$ , if you take $x$ as a given constant and solve for $m$ , you are answering the question: Given an $x$ intercept of the line through $(0,12)$ with the circle, determine the slope of this line. It might happen that there are two points, one or none on the circle with that given $x$ coordinate, hence the number of solutions for $m$ . This isn't, however, what we are looking for. Instead, the tangent-condition offers us valuable information: there is only one $x$ solution for some $m$ we are looking for. Thus, it makes more sense to solve for $x$ in $(\ref{1})$ with the quadratic equation formula: $$x = \frac{- 7 m + 6\pm\sqrt{-19 m^2 - 84 m - 32} }{m^2 + 1}$$ This might look frightening at a first glance, but remember that we already know that there is exactly one solution for $x$ , and this happens precisely when the discriminant is zero: $$\Delta=-19m^2-84m-32=0$$ Which can be solved via the quadratic equation formula. This yields $$\fbox{$\displaystyle m\in\left\{-4, -\frac{8}{19}\right\}$}$$
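Exact arithmetic confirms that both slopes make the discriminant vanish (a minimal Python sketch using fractions, so there is no floating-point doubt):

```python
from fractions import Fraction

for m in (Fraction(-4), Fraction(-8, 19)):
    a, b, c = 1 + m**2, 14 * m - 12, 68  # from (1+m^2)x^2 + (14m-12)x + 68 = 0
    print(m, b**2 - 4 * a * c)           # 0 in both cases: each line is tangent
```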
{ "source": [ "https://math.stackexchange.com/questions/4224919", "https://math.stackexchange.com", "https://math.stackexchange.com/users/958678/" ] }
4,224,928
Prove/Disprove: Let $V$ be a finite-dimensional vector space, and let $T : V → V$ be a linear transformation. If $T$ is diagonalizable, then $T^n$ is diagonalizable, for some $n \in\mathbb{R}$. I think this one is true: If $T$ is diagonalizable, then there exists a basis $B$ such that $[T]_B$ is a diagonal matrix. So, $[T^n]_B=([T]_B)^n$ is also diagonal (a power of a diagonal matrix is a diagonal matrix), so there exists a basis for $V$ such that the matrix representation of $T^n$ is also diagonal, so $T^n$ is diagonalizable. Is that correct? Thanks a lot!
Yes, your proof is correct, with one caveat about the statement: $T^n$ only makes sense when $n$ is a positive integer, so the "$n \in\mathbb{R}$" in the problem should read $n \in \mathbb{N}$. If $B$ is a basis for which $[T]_B$ is diagonal, then $[T^n]_B = ([T]_B)^n$ is diagonal, because products (and hence powers) of diagonal matrices are diagonal; so $T^n$ is diagonalizable, indeed with respect to the very same basis $B$. Equivalently, any basis of eigenvectors of $T$ is a basis of eigenvectors of $T^n$: if $Tv = \lambda v$ then $T^n v = \lambda^n v$.
{ "source": [ "https://math.stackexchange.com/questions/4224928", "https://math.stackexchange.com", "https://math.stackexchange.com/users/927195/" ] }
4,232,880
I want to find $\angle AGM=\theta$ in the following picture: Here $ABCDEF$ and $BAGH$ are a regular hexagon and a square respectively, and $M$ is the midpoint of $FH$. I found a trigonometric solution. I'm providing the key ideas of the solution: Let $AB=1$. Now we can apply the cosine rule on $\triangle AHF$ to find $HF$ and $HM$. Now in $\triangle MGH$, we can find $GM$ using the cosine rule again and then find $\angle MGH$ by the sine rule. This gives $\theta=15^{\circ}$. (I'm not providing the calculations as they are not nice and I did most of them with a calculator.) But I believe there is some beautiful synthetic solution to the problem, though I didn't find one. So, I need a synthetic solution to the problem.
Let $I$ be the center of the hexagon. Then $HG = IF$ (equal in length and parallel, both being translates of $BA$), so $HGFI$ is a parallelogram, and its diagonals $HF$ and $GI$ bisect each other. Hence $M$, the midpoint of $FH$, is also the midpoint of $GI$, so $G, M, I$ are collinear. Since $\triangle GAI$ is isosceles with $AG = AI = AB$ and $\angle GAI = 90^{\circ} + 60^{\circ} = 150^{\circ}$, we have $\theta = \angle AGI = 15^{\circ}$.
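A coordinate check corroborates both claims (a minimal Python sketch; placing the hexagon above $AB$ and the square below it is one consistent reading of the figure):

```python
import numpy as np

A, B = np.array([0.0, 0.0]), np.array([1.0, 0.0])
F = np.array([-0.5, np.sqrt(3) / 2])                 # hexagon vertex adjacent to A
G, H = np.array([0.0, -1.0]), np.array([1.0, -1.0])  # square BAGH below AB
I = np.array([0.5, np.sqrt(3) / 2])                  # hexagon center
M = (F + H) / 2

print(np.allclose(M, (G + I) / 2))   # True: M is the midpoint of GI
u, v = A - G, M - G
cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cosang)))  # 15.0
```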
{ "source": [ "https://math.stackexchange.com/questions/4232880", "https://math.stackexchange.com", "https://math.stackexchange.com/users/941643/" ] }
4,244,874
I came across these two results recently: $$ \int_a^b \sqrt{\left(1-\dfrac{a}{x}\right)\left(\dfrac{b}{x}-1\right)} \: dx = \pi\left(\dfrac{a+b}{2} - \sqrt{ab}\right)$$ $$ \int_a^c \sqrt[3]{\left| \left(1-\dfrac{a}{x}\right)\left(1-\dfrac{b}{x}\right)\left(1-\dfrac{c}{x}\right)\right|} \: dx = \dfrac{2\pi}{\sqrt{3}}\left(\dfrac{a+b+c}{3} - \sqrt[3]{abc}\right)$$ for $0<a\leq b\leq c$ . I haven't tried to solve the first one yet, but I have an idea of how to approach it, namely using the substitution $x=a\cos^2\theta+b\sin^2\theta$ . I have no idea how to approach the second one, however. I think that the most interesting thing about the results above is that it seems like there is a proof for the AM-GM inequality hidden within. Clearly both integrands are positive and so the AM-GM falls out for the 2 and 3 variable case. All that is required is to prove the results. My question is twofold: How would the second integral be computed? Is there an approach using elementary techniques? Can this be generalised to prove the AM-GM inequality for $n$ -variables?
$\DeclareMathOperator{\Res}{Res}$ $\DeclareMathOperator{\sgn}{sgn}$ $\newcommand{\d}{\mathrm{d}}$ $\newcommand{\e}{\mathrm{e}}$ $\newcommand{\E}{\mathrm{E}}$ $\newcommand{\P}{\mathrm{P}}$ $\newcommand{\i}{\mathrm{i}}$ $\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}$ Let $0 < a_0 \leq a_1 \ldots \leq a_{n-1}$ be a monotone sequence of $n$ positive real numbers. Then $$\boxed{ \sum_{k=0}^{n-2}\frac{1}{\pi}\sin \tfrac{\pi (k+1)}{n}\int_{a_k}^{a_{k+1}} \left(\prod_{i=0}^{n-1}\sqrt[n]{\abs{x-a_i}}\right)\frac{\d x}{x} =\frac{1}{n}\sum_{i=0}^{n-1}a_i - \sqrt[n]{\prod_{i=0}^{n-1}a_i} }\text{.}$$ The left side is manifestly nonnegative, so OP's second question is answered in the affirmative, although the form of the left side for $n=2,3$ is deceptively simple and does not reflect the general case. The underlying method of this answer does not differ from that of Svyatoslav's , save for doing the bookkeeping needed to provide for the $n$ -variable case. The method can be adapted to show that, for a positive random variable $X$ that takes on a finite set of values, $$ \int_{\mathrm{ess}\,\inf X}^{\mathrm{ess}\,\sup X} \frac{\sin (\pi \P(X >x))}{\pi x}\e^{\E \ln \abs{X-x}} \d x =\E\,X - \e^{\E\ln X} \text{;}$$ it's tempting to think that this last equality holds for arbitrary positive random variables with finite arithmetic and geometric mean, but I have no proof of that. Write $I$ for the closed interval $[a_0,a_{n-1}]$ . Take the cut of $\sqrt[n]{z}$ to be the negative real axis. Write $\mathbb{C}^*$ for the Riemann sphere. Consider the meromorphic differential forms $\alpha_{U_0}$ , $\alpha_{U_1}$ , $\alpha_{U_2}$ , on $U_0 = \mathbb{C}\backslash (-\infty,a_{n-1}]$ , $U_1 = \mathbb{C}\backslash [a_0,\infty)$ , and $U_2=\mathbb{C}^*\backslash [0,a_{n-1}]$ given by $$\begin{aligned} \alpha_{U_0} &=\left(\prod_{i=0}^{n-1}\sqrt[n]{z-a_i}\right)\frac{\d z}{z} \\ \alpha_{U_1} &=-\left(\prod_{i=0}^{n-1}\sqrt[n]{a_i-z}\right)\frac{\d z}{z} \\ \alpha_{U_2} &=-\left(\prod_{i=0}^{n-1}\sqrt[n]{1-a_iz^{-1}}\right)\frac{\d (z^{-1})}{(z^{-1})^2}\text{.} \end{aligned}$$ These three forms agree pairwise on the intersections of their respective domains. That's because—for the chosen cut convention— if $a > 0$ and either $\Im z \neq 0$ or $\Re z > a$ , then $$\sqrt[n]{z-a} = \frac{\sqrt[n]{1-az^{-1}}}{\sqrt[n]{z^{-1}}}\text{;}$$ if $a > 0$ and $\Im z \neq 0$ , then $$\sqrt[n]{a-z} = \e^{-\i\pi \sgn \Im z /n}\sqrt[n]{z-a}\text{.}$$ Consequently, there is a unique meromorphic differential form $\alpha$ on $\mathbb{C}^*\backslash I = \bigcup_{i=0}^2U_i$ such that $\left. \alpha\right\rvert_{U_i} = \alpha_{U_i}$ for $i=0,1,2$ . This $\alpha$ has a simple pole at $0$ and a double pole at $\infty$ . So let $C$ be a cycle separating $I$ from $\{0,\infty\}$ and oriented in the negative sense. What is $$\frac{1}{2\pi\i}\oint_C\alpha\text{?}$$ If $C$ is taken to be a rectangle with sides parallel to and infinitesimally close to $I$ , then $$\frac{1}{2\pi\i}\oint_C \alpha = \frac{1}{2\pi\i}\left(\int_{I+\i 0^+}\alpha - \int_{I-\i0^+}\alpha\right)\text{,}$$ the contribution from the remaining sides vanishing. If $C$ is taken to encircle $\{0,\infty\}$ in a positive sense, then $$\frac{1}{2\pi\i}\oint_C \alpha = \Res_0 \alpha + \Res_{\infty}\alpha\text{.}$$ For the former choice, note that, for real $x$ , $$\sqrt[n]{x\pm \i 0^+} = \sqrt[n]{\abs{x}}\e^{\pm\i\pi [x < 0]/n}$$ where $[(-)]$ is Iverson bracket notation. 
Therefore $$\int_{I\pm\i0^+}\alpha = \int_I\e^{\pm\i\pi N(x)/n}\left(\prod_{i=0}^{n-1}\sqrt[n]{\abs{x-a_i}}\right)\frac{\d x}{x}$$ where $N(x)$ is the number of the $a_i$ greater than $x$ . Then $$\frac{1}{2\pi\i}\oint_C\alpha = \frac{1}{\pi}\int_I\sin \tfrac{\pi N(x)}{n} \left(\prod_{i=0}^{n-1}\sqrt[n]{\abs{x-a_i}}\right)\frac{\d x}{x}$$ which, because $N(x)$ is constant away from the $a_i$ , simplifies to $$\frac{1}{2\pi\i}\oint_C\alpha = \sum_{k=0}^{n-2}\frac{1}{\pi}\sin \tfrac{\pi (k+1)}{n}\int_{a_k}^{a_{k+1}} \left(\prod_{i=0}^{n-1}\sqrt[n]{\abs{x-a_i}}\right)\frac{\d x}{x}\text{.}$$ As for the latter choice: from the Laurent expansions of $\alpha$ at $0$ , $\infty$ $$\alpha_z = \left(-\sqrt[n]{\prod_{i=0}^{n-1}a_i} + \mathcal{O}(z)\right)\frac{\d z}{z}$$ $$\alpha_z = \left(-\frac{1}{z^{-1}} + \frac{1}{n}\sum_{i=0}^{n-1}a_i + \mathcal{O}(z^{-1}) \right)\frac{\d (z^{-1})}{z^{-1}}$$ the required residues are found to be $$\Res_0 \alpha = - \sqrt[n]{\prod_{i=0}^{n-1}a_i}$$ $$\Res_{\infty} \alpha = \frac{1}{n}\sum_{i=0}^{n-1}a_i$$ whence $$\frac{1}{2\pi\i}\oint_C\alpha = \frac{1}{n}\sum_{i=0}^{n-1}a_i - \sqrt[n]{\prod_{i=0}^{n-1}a_i}\text{.}$$ These two choices of $C$ must result in the same value for $\tfrac{1}{2\pi\i}\int_C\alpha$ , whence $$\boxed{ \sum_{k=0}^{n-2}\frac{1}{\pi}\sin \tfrac{\pi (k+1)}{n}\int_{a_k}^{a_{k+1}} \left(\prod_{i=0}^{n-1}\sqrt[n]{\abs{x-a_i}}\right)\frac{\d x}{x} =\frac{1}{n}\sum_{i=0}^{n-1}a_i - \sqrt[n]{\prod_{i=0}^{n-1}a_i} }\text{.}$$
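The boxed identity is easy to spot-check numerically (a minimal Python sketch, assuming NumPy and SciPy; the test tuples are arbitrary, and quad may warn near the endpoints where the integrand has vertical tangents, but it converges):

```python
import numpy as np
from scipy.integrate import quad

def lhs(a):
    a = np.sort(np.asarray(a, dtype=float))
    n = len(a)
    total = 0.0
    for k in range(n - 1):
        w = np.sin(np.pi * (k + 1) / n) / np.pi
        f = lambda x: np.prod(np.abs(x - a) ** (1.0 / n)) / x
        total += w * quad(f, a[k], a[k + 1])[0]
    return total

def rhs(a):
    a = np.asarray(a, dtype=float)
    return a.mean() - a.prod() ** (1.0 / len(a))  # arithmetic minus geometric mean

for a in ([1, 4], [1, 2, 4], [0.5, 1, 3, 7, 11]):
    print(lhs(a), rhs(a))  # the two columns agree
```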
{ "source": [ "https://math.stackexchange.com/questions/4244874", "https://math.stackexchange.com", "https://math.stackexchange.com/users/965754/" ] }
4,245,685
The problem asks us to find the probability of a full house in a well-shuffled deck of $52$ cards. The solution in the textbook states in the first line "All of the $52\choose 5$ possible hands are equally likely by symmetry, so the naive definition [of Probability] is applicable." Everything after this involves counting, which I understand. However, I do not see why all outcomes are equally likely. Here is why. Say we choose a card with rank $7$. The number of $7$'s left is now less than that of other ranks. This must mean the probability of choosing a different rank must be more than that of choosing another $7$. This means that the probability of an outcome (a hand) with ranks $(2, 3, 4, 5, 6)$ must be more than that of a hand with ranks $(2, 2, 2, 3, 3)$. Kindly explain why the naive definition works here and why we treat all $52\choose 5$ hands as equally likely outcomes.
Your examples of (2,3,4,5,6) and (2,2,2,3,3) are not hands for the purpose of the question. (2♦,3♣,4♥,5♠,6♦) and (2♦,2♣,2♠,3♥,3♣) are hands, and are equally likely because the chance of pulling each of the cards involved is equally likely, up to the symmetry of rearranging the order of cards in the hand. The idea that it's more likely a card will be junk than help to form a useful hand is a good intuition for why some hands are more or less likely, and can form the basis of a different method of calculating the probability of a given hand or class of hand, but is irrelevant in this case.
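For completeness, the standard counting result (the usual formula, not quoted from the book) is easy to reproduce and check (a minimal Python sketch):

```python
from math import comb

# choose the triple's rank and its 3 suits, then the pair's rank and its 2 suits
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)
total = comb(52, 5)
print(full_houses, total, full_houses / total)  # 3744 2598960 ~0.00144
```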
{ "source": [ "https://math.stackexchange.com/questions/4245685", "https://math.stackexchange.com", "https://math.stackexchange.com/users/966108/" ] }
4,249,066
An issue has cropped up recently in programming with which I could greatly benefit from the expertise of proper mathematicians. The real-world problem is that apps often need to download huge chunks of data from a server, like videos and images, and users might face the issue of not having great connectivity (say 3G) or they might be on an expensive data plan. Instead of downloading a whole file though, I've been trying to prove that it's possible to instead just download a kind of 'reflection' of it and then, using the powerful computing of the smartphone, accurately reconstruct the file locally using probability. The way it works is like this: A file is broken into its bits (1,0,0,1 etc) and laid out in a predetermined pattern in the shape of a cube. Like going from front-to-back, left-to-right and then down a row, until complete. The pattern doesn't matter, as long as it can be reversed afterwards. To reduce file size, instead of requesting the whole cube of data (the equivalent of downloading the whole file), we only download 3 x 2D sides instead. These sides I'm calling the reflection for want of a better term. They have the contents of the cube mapped onto them. The reflection is then used to reconstruct the 3D cube of data using probability - kind of like asking a computer to do a huge three-dimensional Sudoku. Creating the reflection and reconstructing the data from it are computationally heavy, and as much as computers love doing math, I'd like to lighten their load a bit. The way I'm picturing it is like a 10x10x10 transparent Rubik's cube. On three of the sides, light is shone through each row. Each cell it travels through has a predetermined value and is either on or off (either binary 1 or 0). If it's on, it magnifies the light by its value. If it is off, it does nothing. All we see is the final amount of light exiting each row, after travelling through the cube from the other side. Using the three sides, the computer needs to determine what values (either 1 or 0) are inside the cube. At the moment I'm using normal prime numbers as the cell values to reduce computing time, but I'm curious to know if there is another type of prime (or another type of number completely) that might be more effective. I'm looking for a series of values that has the lowest possible number of combinations of components from within that series. Here is a rough diagram: It might help to imagine that light shines in at the green arrows, and exits with some value at the red arrows. This happens for each row, in each of the three directions. We're left with only three 2D sides of numbers, which are used to reconstruct what's inside the cube. If you look where the 14 exits on the left, it can have two possible combinations, (3 + 11 and 2 + 5 + 7). If for argument's sake we were to assume it were 3 and 11 (coloured green), then we could say at the coordinates where 3 and 11 exist, there are active cells (magnifying the light by their value). In terms of data, we would say this is on (binary 1). In practice we can rarely say for certain (for 2 and 3 we could) what value an inside cell has based on its reflection on the surface, so a probability for each is assigned to that coordinate or cell. Some numbers will never be reflected on the surface, like 1, 4 or 6, since they can't be composed of only primes.
The same happens in the vertical direction, where the output is 30, which has multiple possibilities of which two correspond to the possibility shown in the horizontal direction with an exit of 14, coloured blue and pink (since they hit the 23, the same as 3 in the horizontal direction). This probability is also added to that coordinate and we repeat in the front-to-back direction, doing the same a final time. After this is done for each cell in the whole cube, we have a set of three probabilities that a cell is either on or off. That is then used as the starting point to see if we can complete the cube. If it doesn't 'unlock' we try a different combination of probabilities and so forth until we have solved the 3D Sudoku. The final step of the method is: once the cube is solved, the binary information is pulled out of it and arranged in the reverse pattern to how it was laid out on the server. This can then be cut up (say for multiple images) or used to create a video or something. You could cough in theory cough download something like Avatar 3D (280GB) in around 3 minutes on decent wifi. Solving it (nevermind building the pixel output) would take a while though, and this is where I'm curious about using an alternative to prime numbers. You might have guessed that my maths ability goes off a very steep cliff beyond routine programming stuff. There are three areas of concern / drawbacks to this method: (1) It is rubbish at low levels of data transfer. A 10 x 10 x 10 cube for instance has a larger 'surface area' than volume. That's because while each cell can hold one bit (either 1 or 0), each surface cell needs to be a minimum of 8 bits (one character is one byte, or 8 bits). We can't even have 'nothing', since we need null to behave as a type of placeholder to keep the structure intact. This also accounts for why in the above diagram, a 1000x1000x1000-cell cube has its surface areas multiplied by 4 characters (the thousandth prime is 7919 - 4 characters) and the 10'000(cubed)-cell cube has its surface areas multiplied by 6 characters (10'000th prime is 104729, six characters). The aim is to keep total character length on the 2D side to a minimum. Using letters could work, as we could go from a-Z with 52 symbols, before paying double bubble for the next character (the equivalent to "10"). There are 256 unique ASCII characters, so that's the upper limit there. (2) The number of possible combinations is still too high using prime numbers. Is there a series of numbers that are both short in character length (to avoid the problem above) and have very few possible parents? I'm leaning towards some subset of primes, but lack the maths to know which - some sort of inverted Fibonacci? The fewer the possible combinations, the faster the computer will solve the cube. (3) I haven't tested yet if it's possible to use a third, fourth or nth side to increase either the capacity of the cube or the accuracy of the reflection. Using, say, an octahedron (yellow below) instead of a cube might be better, it just hurts the brain a little to picture how it might work. I'm guessing it would tend towards a sphere, but that's beyond my ability. EDIT: Thank you for your helpful input. Many answers have referred to the Pigeonhole principle and the issue of too many possible combinations. I should also correct that a 1000x1000x1000 cube would require 7 digits, not 4 as I stated above, since the sum of the first 1000 primes is 3682913.
I should also emphasise the idea isn't to compress in the common sense of the word, as in taking pixels out of an image, but more like sending blueprints on how to build something, and relying only on the computation and knowledge at the receiving end to fill in the blanks. There is more than one correct answer, and I will mark as correct the one with the most votes. Many thanks for the detailed and patient explanations.
Universal lossless compression is impossible, so this cannot work. There is no injection from the set of binary strings of length $n$ into the set of binary strings of length less than $n$. There are $2^{n}$ binary strings of length $n$. Let us call the set of all binary strings of length at most $k$ $S_{k}$. When $k=n-1$, there are $|S_{n-1}| = 1+2+2^{2}+\cdots+2^{n-1}=2^{n}-1$ of them. Since $2^{n} > |S_{n-1}|$, the pigeonhole principle implies that there is no injective function from the set of all binary strings of length $n$ to any $S_{k}$ for $k<n$. This means that a perfect lossless compression scheme such as the one described is an impossibility, since it would constitute such a function. Furthermore, for any number $n$ there exists a string of length $n$ whose Kolmogorov-Chaitin complexity is $n$. If you assume this is not the case, you have some function $g:P \to X$, where $p_{x} \in P$ is a program encoding a string $x \in X$ whose complexity satisfies $C(x)<|x|$, such that $g(p_{x})=x$ and $|p_{x}|< |x|$. We also know that $x \neq y \implies p_{x} \neq p_{y}$. Again by the pigeonhole principle, if all $2^{n}$ strings of length $n$ have a program whose length is less than $n$, of which there can only be $2^{n}-1$, there must be a program that produces two different strings, otherwise the larger set isn't covered. Again this is absurd, so at least one of these strings has a complexity of at least its own length. Edit: To questions about lossy compression, this scheme seems unlikely to be ideal for the vast majority of cases. You have no way of controlling where and how it induces errors. Compression schemes such as H.264 leverage the inherent visual structure of an image to achieve better fidelity at lower sizes. The proposed method would indiscriminately choose a solution to the system of equations it generates, which would likely introduce noticeable artifacts. I believe that it's also possible for these systems to be overdetermined, using unnecessary space, whereas a DCT is an orthogonal decomposition. The solution of these equations would also probably have to be found using a SAT/SMT solver, whose runtime would also make decompression very costly if not intractable, while DCT-based techniques are well studied and optimized for performance, though I am not an expert on any of these subjects by any means.
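A quick numeric illustration of the counting step (the choice of $n$ here is arbitrary):

```python
n = 12
num_length_n = 2**n                        # binary strings of length exactly n
num_shorter = sum(2**k for k in range(n))  # lengths 0 .. n-1, equals 2**n - 1
print(num_length_n, num_shorter)           # 4096 4095: always one pigeonhole short
```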
{ "source": [ "https://math.stackexchange.com/questions/4249066", "https://math.stackexchange.com", "https://math.stackexchange.com/users/368542/" ] }
4,249,549
This post shows that the “left” group axioms, which only guarantee a left-identity and left-inverses, are sufficient to guarantee that a semigroup is a group. The same idea could be used to show that the “right” group axioms are also sufficient. These sets of axioms might be considered “weak” group axioms, but I am curious whether we can get weaker. Consider the following “ultraweak” axioms: Let $G$ be a set and $*$ be a binary operation on $G$ satisfying: $*$ is associative. There exists an ultraweak identity element $e\in G$ such that for all $x\in G,$ either $e*x = x$ or $x*e=x$ (that is, the “sidedness” of $e$ may differ for each element of $G$ ). For all $x \in G$ there exists an ultraweak inverse $x^{-1}\in G$ such that either $x^{-1} * x = e$ or $x*x^{-1}=e$ (that is, each element of $G$ has at least a one-sided inverse, where the side may differ for each element). Do these axioms guarantee that $(G,*)$ is a group? And if not, how much closer to these axioms can we get, starting from just the “weak” left or right axioms? [For example, maybe assuming an ultraweak identity element with left (or right) inverses is sufficient.] REVISED UPDATE: In the comments to the accepted answer by Vincent, @Yakk asks whether the following condition is sufficient to guarantee a group (assuming associativity of $*$ ): There exists an $e\in G$ such that for all $x\in G$ , either (1) $e*x=x$ and there exists an $x'\in G$ such that $x'*x=e$ , or (2) $x*e=x$ and there exists an $x'\in G$ such that $x*x'=e$ . At first I thought this was true due to the standard "left identity + left inverses" and "right identity + right inverses" cases applying element by element, but now I realize this reasoning is flawed (these proofs also require the one-sided inverse to have their own one-sided inverse with the same sidedness). So the question remains: Does the above condition, proposed by @Yakk, guarantee a group? Please provide a proof or counterexample. The answer to the revised update is “yes;” see here . There remains a further question about even weaker conditions, where the left and right identities can be different elements. I've asked that here .
What axioms are enough to guarantee a group? Assuming associativity. "Two-sided inverses" only requires that there is a left inverse and a right inverse for every element; they don't have to be the same.

| Identity \ Inverse | Two-sided | One-sided | Ultraweak |
| --- | --- | --- | --- |
| Two-sided | $\color{green}{\checkmark}$ | $\color{green}{\checkmark}$ | $\color{green}{\checkmark}$ |
| One-sided | $\color{green}{\checkmark}$ | Only if same side | $\color{red}{\mathcal{X}}$ |
| Ultraweak | $\color{green}{\checkmark}$ | $\color{red}{\mathcal{X}}$ | $\color{red}{\mathcal{X}}$ |

- Left identity and left inverses: Enough $\color{green}{\checkmark}$
- Left identity and right inverses: Not enough (Noah's answer) $\color{red}{\mathcal{X}}$
- Ultraweak identity and two-sided inverses: Enough (see below) $\color{green}{\checkmark}$
- Two-sided identity and ultraweak inverses: Enough (see below) $\color{green}{\checkmark}$

In summary, once either the identities or the inverses are two-sided, we have a group. But if that is not the case, the only way to still guarantee a group is if the identity and inverses are both always on the same side.

Ultraweak identity and two-sided inverses are enough. We only require that there is a left inverse and a right inverse for every element; they don't have to be the same. We show that $e$ is a left identity for every element. Since we have left inverses, the claim then follows from this answer. For an element $a$, the ultraweak identity yields $ea=a$ or $ae = a$. We only need to focus on the second case. $a$ has a right inverse $a'$, and $a'$ has a right inverse $a''$. Thus, $a = a e = a (a' a'') = (a a') a'' = e a''$. This shows that $ea = e(e a'') = e a'' = a$ as required.

Two-sided identity and ultraweak inverses are enough. Indeed, in this case $x^2=x$ implies $x=e$ for all $x$ (multiply $x^2=x$ by a one-sided inverse of $x$ on the appropriate side). Thus, if $a$ has right inverse $b$, we have $ab=e$ and thus $ba=b(ab)a=(ba)^2$, so that $ba=e$; therefore $b$ is also the left inverse.

Answer to REVISED UPDATE: I decided to track this in a separate question. See this question and answer.
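The "left identity + right inverses" failure in the list above can be checked mechanically. Here is a sketch (the three-element carrier and the table $x*y=y$ are the standard textbook witness, chosen by me, not necessarily the one in Noah's answer):

```python
from itertools import product

G = range(3)
op = lambda x, y: y  # the magma with x*y = y

# associativity holds: (x*y)*z = z = x*(y*z)
assert all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(G, repeat=3))

e = 0
assert all(op(e, x) == x for x in G)  # e is a left identity
assert all(op(x, e) == e for x in G)  # every x has the right inverse e: x*e = e
assert any(op(x, e) != x for x in G)  # but e is not a right identity, so G is not a group
```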
{ "source": [ "https://math.stackexchange.com/questions/4249549", "https://math.stackexchange.com", "https://math.stackexchange.com/users/477746/" ] }
4,249,639
The question is more general but here's the problem that motivated it: I want to find all solutions to the integral equation $$f(x) + \int_0^x (x-y)f(y)dy = x^3.$$ Differentiating twice with respect to $x$ yields the second-order differential equation $$f''(x) + f(x) = 6x.$$ The solution to this last equation is something of the form $f(x) = A \cos(x + \phi) + f_p$ where $f_p$ can be found by variation of parameters or other tools and $A, \phi$ are determined by initial values. However, what I want to know is whether I'm accounting for all solutions here. Do I not "lose information" when differentiating the original equation? If so, how can I construct all solutions from the ones I just found?
It's the other way around: when you differentiate both sides of an equation, you should be concerned about creating solutions. This is because $f'(x)=g'(x)$ only implies $f(x)=g(x)+c$ for some $c$ . So any solution to the IE will satisfy the DE but not the other way around. In effect the IE has the boundary conditions built into it in a way the DE does not. This isn't totally obvious, so let's walk through it in your example. Substitute $x=0$ into the original IE to see $f(0)=0$ . Differentiate once to get $$f'(x)+\int_0^x f(y) dy=3x^2$$ and then substitute $x=0$ to get $f'(0)=0$ . Thus the IE* implies the ODE IVP $f''(x)+f(x)=6x,f(0)=0,f'(0)=0$ , which has a unique solution as you probably already know. To see the ODE IVP implies the IE, you run the procedure in reverse: $$\int_0^x f''(y) + f(y) dy = \int_0^x 6y dy \\ f'(x)-f'(0)+\int_0^x f(y) dy = 3x^2.$$ Use the initial condition: $$f'(x)+\int_0^x f(y) dy = 3x^2.$$ Now you integrate again: $$\int_0^x f'(y) dy + \int_0^x \int_0^y f(z) dz dy = \int_0^x 3y^2 dy \\ f(x)-f(0) + \int_0^x \int_0^y f(z) dz dy = x^3.$$ Use the other initial condition: $$f(x)+\int_0^x \int_0^y f(z) dz dy = x^3.$$ This looks different from the original thing, but it is actually the same. One way to make it look the same is to use integration by parts together with the initial conditions on the outer integral of the second term. Another way is to interchange the order of integration; after the interchange you can simply do the inner $dy$ integral since $f(z)$ doesn't depend on $y$ . The fact that these both work is a quite general thing, cf. https://en.wikipedia.org/wiki/Order_of_integration_(calculus)#Relation_to_integration_by_parts * Technically you need the IE and a $C^2$ assumption to run this calculation. The shortcut way that I can think of to get this regularity assumption is to just assume it up front, check that the solution you get has the desired regularity (which it does) and then study the general uniqueness theory of the IE (which in this setting is the Fredholm alternative) to conclude that you didn't miss any irregular solutions.
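A quick symbolic check of this in Python/SymPy (the closed form $f(x) = 6x - 6\sin x$ is what solving the IVP above produces; SymPy confirms it satisfies the original integral equation):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 6*x - 6*sp.sin(x)  # solves f'' + f = 6x with f(0) = 0, f'(0) = 0

# plug back into the integral equation: f(x) + integral_0^x (x - y) f(y) dy
lhs = f + sp.integrate((x - y) * f.subs(x, y), (y, 0, x))
print(sp.simplify(lhs - x**3))  # prints 0: the IE holds
```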
{ "source": [ "https://math.stackexchange.com/questions/4249639", "https://math.stackexchange.com", "https://math.stackexchange.com/users/482896/" ] }
4,251,401
Consider the game of Sudoku played on an infinite board where the subsquares are also infinite, i.e. our board is indexed by $\mathbb{N}^2 \times \mathbb{N}^2$ . Let's call a solution to such a game a function $f(a, b, m, n)$ which assigns a natural number to each space $(m,n)$ in each subsquare $(a,b)$ , such that each row, column, and subsquare contains every natural number exactly once . It is clear that such a solution exists, as for any finite board state, given any natural number and any row, column, or subsquare, there are always at most a finite number of "collision" squares, and so with infinite spaces at our disposal we can always pick a space to put this number in, and continue to do this infinitely until we have filled the board. However, I'm having trouble constructing an explicit example of such a solution, which does not rely on this choice-like magic. My initial thought was to use products of primes to guarantee that you don't have a collision, but while I can get plenty of solutions with no repetitions, guaranteeing that every row, column, and subsquare contains every label seems like a lot harder of a challenge. But, I suspect I'm missing a very elegant / basic solution. Any ideas / hints?
Let $p$ be a bijection from $\mathbb N^2$ to $\mathbb N$ . One example is $p(x,y)=2^x(2y+1)-1$ (assuming that $0\in \mathbb N$ ). Let $\oplus$ be an operation on $\mathbb N$ for which $(\mathbb N,\oplus)$ is a group. For example, $\oplus$ could be nimber addition. Then you can check that $$ f(a,b,m,n)=p(a\oplus n,b\oplus m) $$ is a valid sudoku function. We just need to check $3$ things: For each row, meaning with $a$ and $m$ fixed and $b$ and $n$ varying, the group properties of $\oplus$ imply that every ordered pair of natural numbers is represented as $(a\oplus n,b\oplus m)$ exactly once, so the fact that $p$ is a bijection means every natural number appears exactly once in every row. The same logic applies to the columns. For the boxes, we instead have $a$ and $b$ fixed. Again, as $m,n$ vary, $(a\oplus n,b\oplus m)$ will assume each ordered pair of natural numbers exactly once.
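Here is a minimal executable sketch of this construction (Python; nimber addition on $\mathbb N$ is bitwise XOR, and the spot-check window of size 8 is an arbitrary choice of mine):

```python
def p(x, y):                # bijection N^2 -> N:  p(x, y) = 2^x (2y + 1) - 1
    return 2**x * (2*y + 1) - 1

def f(a, b, m, n):          # cell (m, n) of subsquare (a, b)
    return p(a ^ n, b ^ m)  # ^ is XOR, i.e. nimber addition

# distinctness on a finite window: a row (a, m fixed), a column (b, n fixed)
# and a box (a, b fixed) contain no repeated entries
row = [f(0, b, 0, n) for b in range(8) for n in range(8)]
col = [f(a, 0, m, 0) for a in range(8) for m in range(8)]
box = [f(0, 0, m, n) for m in range(8) for n in range(8)]
for cells in (row, col, box):
    assert len(set(cells)) == len(cells)
```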
{ "source": [ "https://math.stackexchange.com/questions/4251401", "https://math.stackexchange.com", "https://math.stackexchange.com/users/256704/" ] }
4,252,237
Two days ago I felt very uncomfortable with Big O notation. I've already asked two questions: "Why to calculate 'Big O' if we can just calculate number of steps?" and "The main idea behind Big O notation". And now almost everything has become clear. But there are a few questions that I still can't understand:

1. Suppose we have an algorithm that runs in $1000n$ steps. Why do people say that the $1000$ coefficient becomes insignificant when $n$ gets really large (and that's why we can throw it away)? This really confuses me because no matter how large $n$ is, $1000n$ is going to be $1000$ times bigger than $n$. And this is very significant (in my head). Any examples of why it is considered insignificant as $n$ tends to infinity would be appreciated.

2. Why is Big O said to estimate running time in the worst case? Given running time $O(n)$, how is it considered to be worst-case behavior? I mean, in this case we think that our algorithm is not slower than $n$, right? But in reality the actual running time could be $1000n$ and it is indeed slower than $n$.

3. According to the definition, Big O gives us a scaled upper bound of $f$ as $n \to +\infty$, where $f$ is our function of time. But how do we interpret it? I mean, given an algorithm running in $O(n)$, we will never be able to calculate the actual number of steps this algorithm takes. We just know that if we double the size of the input, we double the computation time as well, right? But if that $O(n)$ algorithm really takes $1000n$ steps then we also need to multiply the size of the input by $1000$ to be able to visualise how it grows, because $1000n$ and $n$ have very different slopes. Thus in this case, if you just double the computation time for the doubled size of the input, you're going to get the wrong idea about how the running time grows, right? So how then do you visualise its growth rate?

I want to add that I know the definition of Big O and how to calculate it, so there is no need to explain it. Also, I've already googled all these questions tons of times with no luck. I'm learning calculus at the moment, so I hope I asked this question in the right place. Thank you in advance!
Reading between the lines, I think you may be misunderstanding Big O analysis as being a replacement for benchmarking. It's not. An engineer still needs to benchmark their code if they want to know how fast it is. And indeed in the real world, an $\mathcal{O}(n)$ algorithm might be slower than an $\mathcal{O}(n^2)$ algorithm on real-world data sets. But, as $n$ approaches infinity, the $\mathcal{O}(n^2)$ algorithm will ultimately be slower. For the sake of example, if we allow constant factors in our Big-O notation, then an $\mathcal{O}(10n)$ algorithm will take more "steps" than an $\mathcal{O}(n^2)$ algorithm, if $n$ is less than $10$ ( $10\cdot 10 = 10^2$ ). But if $n$ is $100$ , then the $\mathcal{O}(n^2)$ algorithm takes ten times as long. If $n$ is $1000$ , it takes a hundred times as long. As $n$ grows, so too does this difference. The manner in which the two algorithms differ is what we are analyzing when we use Big O analysis. Hopefully that example also makes it clear why the constant factor is irrelevant and can be ignored. Whether it's ten, a hundred, a thousand, a million, or a quadrillion ultimately does not matter, because as $n$ approaches infinity, the $\mathcal{O}(n^2)$ algorithm is eventually going to be slower anyway. That's the power of faster asymptotic growth. The crux of it is that Big O analysis is a mathematical concept which does NOT tell an engineer how fast something is going to run or how many steps it's going to take. That's what benchmarking is for. Big O analysis is still a very helpful tool in algorithm design, but if you're interested in exactly how long something takes to run, you'll need to benchmark with real data. Great summary in the comments from @chepner: Put yet another way, Big O is not used for comparing run times, but for comparing how those run times scale as the input size grows.
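For the asker's concrete $1000n$ worry, a two-line sketch makes the crossover visible:

```python
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n, 1000 * n, n * n)  # n^2 catches 1000n at n = 1000, then leaves it behind
```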
{ "source": [ "https://math.stackexchange.com/questions/4252237", "https://math.stackexchange.com", "https://math.stackexchange.com/users/960230/" ] }
4,265,182
Given an angle $\theta$ , can I find a Pythagorean triple $(A,B,C)$ such that the corresponding right triangle contains an angle that is as close to $\theta$ as I want? And if so, how? For example suppose $\theta = 56.25^\circ$ . How do I find Pythagorean triples $(A,B,C)$ such that $\tan(56.25^\circ) \approx B/A$ ? Looking at Euclid's formula this is the same as asking for coprime not-both-odd integers $m$ and $n$ such that $$\tan(56.25^\circ) \approx \frac{2mn}{m^2-n^2}\,$$ but this only makes a brute-force search easier. Is there a procedural way to generate such arbitrarily precise triples?
Let $r\in[0,\infty)$ . The problem posed is equivalent to finding $m\geq n\in\mathbb N$ such that $r\sim\frac{2mn}{m^2-n^2}$ . Thus, we want $$rm^2-2mn-rn^2\sim0$$ Now suppose $r,n$ is given and we want to find $m$ that satisfies the equation above (not necessarily natural). Thus, $$m=\frac{n\left(1+\sqrt{1+r^2}\right)}r$$ So, we want to find a choice of $n$ that makes the expression above arbitrarily close to an integer. But that's relatively easy. Let $c=\frac r{1+\sqrt{1+r^2}}$ . Thus, $n=mc$ . So, we just want to find a fraction $\frac nm$ close to $c$ . Easy! Summary : Given $\theta$ , compute $$c=\frac{\sin\theta}{1+\cos\theta} = \tan\frac\theta2$$ Then, find a fraction $\frac nm$ arbitrarily close to $c$ . Substitute $m,n$ into your formula and voila!
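A minimal procedural sketch of this recipe in Python (the denominator bound `max_m` is my own knob; `Fraction.limit_denominator` does the "find a fraction close to $c$" step via continued fractions):

```python
import math
from fractions import Fraction

def triple_for_angle(theta_deg, max_m=10**4):
    c = math.tan(math.radians(theta_deg) / 2)    # c = tan(theta/2)
    frac = Fraction(c).limit_denominator(max_m)  # n/m close to c
    n, m = frac.numerator, frac.denominator
    # Euclid's formula; the triple may not be primitive, which is fine here
    return m*m - n*n, 2*m*n, m*m + n*n

A, B, C = triple_for_angle(56.25)
print(A, B, C, math.degrees(math.atan2(B, A)))   # the angle is close to 56.25 degrees
```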
{ "source": [ "https://math.stackexchange.com/questions/4265182", "https://math.stackexchange.com", "https://math.stackexchange.com/users/167197/" ] }
4,271,259
I have often encountered Hölder continuity in books on analysis, but the books I've read tend to pass over Hölder functions quickly, without developing applications. While the definition seems natural enough, it's not clear to me what we actually gain from knowing that a function is $\alpha$ -Hölder continuous, for some $\alpha<1$ . I have some guesses, but they are just guesses: do $\alpha$ -Hölder conditions give rise to useful weak solution concepts in PDEs? Are there important results that apply only to $\alpha$ -Hölder functions, for some fixed $\alpha$ ? For $\alpha=1$ (Lipschitz continuity) the answer to both of these questions seems to be yes, but I know nothing for lower values of $\alpha$ . I'd be interested in answers that describe specific applications, as well as answers that give more of a ''big picture''.
Hölder continuous functions do not give rise to useful weak solutions in any context I am aware of: there are notions of weak solutions that are continuous, but the Hölder modulus is not relevant for the definition. While there may be some rare results that require specific Hölder moduli with $\alpha < 1$, I cannot think of any that I use in my research. So why care about Hölder continuity at all? Here are a few reasons. I will say that this is coming from a purely PDE perspective, and that Hölder spaces are at their most useful when dealing with elliptic, parabolic, and some first-order PDE. For dispersive and wave equations, the fact that Hölder norms do not interact well with the Fourier transform is a strike against them. There are other (non-PDE) areas of analysis and geometry that find Hölder spaces useful for other reasons, but that would be for another answer.

Compactness

Hölder spaces have very elementary and favorable compactness properties. A sequence of functions with bounded Hölder norms will have a uniformly convergent subsequence, and the Hölder norm is lower semicontinuous under uniform convergence. Uniform convergence is extremely, surprisingly, useful when studying some types of PDE, and is often enough to pass the entire PDE to the limit. This is the case with distributional solutions of linear equations, and more strikingly with viscosity solutions.

Easy to use and understand

The theory of Hölder spaces is not very deep. Unlike Sobolev spaces, which interact in subtle ways with the geometry of a domain's boundary, contain functions that generally don't make sense pointwise, require dealing with distributional derivatives, etc., Hölder spaces are just spaces of equicontinuous functions with little else going on. It is easy to prove that a function is Hölder continuous, and common ways of doing so line up well with the way we approach PDE. One way to do this is to prove that $$ \max_{B_r(x)} u - \min_{B_r(x)} u \leq (1 - \theta)[ \max_{B_{2r}(x)} u - \min_{B_{2r}(x)}u] $$ for some $\theta > 0$, a decay of oscillation. Iterating this gives that $u$ has some Hölder modulus at $x$. This kind of statement is one we are happy to try and prove for solutions $u$: bounding the maximum of $u$ at a given scale in terms of $u$ on a larger ball is something we actually have the tools to do, at least for elliptic equations. There are also good approaches to showing $u$ is Hölder based on Sobolev or $L^p$ bounds at every scale (Morrey/Campanato inequalities), and sometimes Sobolev spaces embed into Hölder spaces directly. Another good aspect of Hölder spaces is that they let us talk about a fractional-power increase in smoothness without having to take (fractional?) derivatives, or any derivatives at all, and without needing the Fourier transform. Not having to take derivatives is a great technical convenience (see how the improvement of oscillation above is a statement about just the solution pointwise; this is great if differentiating the equation is problematic); not having to deal with anything fractional makes everything much more explicit; not needing the Fourier transform is good news for equations that interact poorly with it.

Our best theorems are true when $\alpha \in (0, 1)$

Sure, Hölder spaces might be nice, but why not just use $\alpha = 1$? It turns out that it is much, much harder to prove something is Lipschitz, and that often it is just not true. Consider the equation $$ \Delta u = f. $$ Heuristically we expect that $u$ is two derivatives smoother than $f$, because, well, that's what the equation seems to say: some second derivatives of $u$ equal $f$. The actual positive results in this direction are that if $f \in C^{0, \alpha}$, then $u \in C^{2, \alpha}$ (Schauder), that if $f \in L^p$ then $u \in W^{2, p}$ when $p \in (1, \infty)$ (Calderon-Zygmund), some similar theorems that are $k$ derivatives up from this, and much more complicated classifications of what happens at the endpoints $\alpha = 0, \alpha = 1, p = 1, p = \infty$. In particular, none of the endpoint versions are true; they all require modifications, different spaces, etc. This fact, that harmonic analysis theorems have more complicated endpoint versions, is a running theme in the field, and means that while we would love to work with $\alpha = 1$, often we just are not allowed to. There are other types of theorems where we can prove that there exists an $\alpha > 0$ such that solutions (or their derivatives, or something related to them) are in $C^{0, \alpha}$. Here we may not be expecting the functions to be anywhere near Lipschitz. The most famous example of this is the De Giorgi-Nash-Moser theorem.
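To spell out the "iterating this gives a Hölder modulus" step from the decay-of-oscillation discussion above (a sketch; I normalize the initial scale to $1$ and keep all balls centered at the same point $x$): applying the estimate on dyadic balls gives
$$\operatorname*{osc}_{B_{2^{-k}}(x)} u \;\le\; (1-\theta)^k \operatorname*{osc}_{B_1(x)} u \;=\; 2^{-\alpha k}\operatorname*{osc}_{B_1(x)} u, \qquad \alpha = \log_2\frac{1}{1-\theta} > 0,$$
so for $|x-y|\sim 2^{-k}$ we get $|u(x)-u(y)|\le C|x-y|^{\alpha}$, i.e. a Hölder modulus at $x$ with exponent depending only on $\theta$.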
{ "source": [ "https://math.stackexchange.com/questions/4271259", "https://math.stackexchange.com", "https://math.stackexchange.com/users/425120/" ] }
4,278,217
The inequality $\sqrt{\frac{a^b}{b}}+\sqrt{\frac{b^a}{a}}\ge 2$ for all $a,b>0$ was shown here using first-order Padé approximants on each exponent, where the minimum is attained at $a=b=1$ . By empirical evidence, it appears that inequalities of this type hold for an arbitrary number of variables. We can phrase the generalised problem as follows. Let $(x_i)_{1\le i\le n}$ be a sequence of positive real numbers. Define $\boldsymbol a=\begin{pmatrix}a_1&\cdots&a_n\end{pmatrix}$ such that $a_k=x_k^{x_{k+1}}/x_{k+1}$ for each $1\le k<n$ and $a_n=x_n^{x_1}/x_1$ . How do we show that $$\|\boldsymbol a\|_p^p\ge n$$ for any $p\ge1$ ? As before, AM-GM is far too weak since the inequality $\displaystyle\|\boldsymbol a\|_p^p\ge 2\left(\prod_{\text{cyc}}\frac{x_1^{x_2}}{x_2}\right)^{1/{2p}}$ does not guarantee the result when at least one $x_i$ is smaller than $1$ . We can eliminate the exponent on the denominator by taking $x_i=X_i^{1/p}$ so that $\displaystyle\|\boldsymbol a\|_p^p=\sum_{\text{cyc}}\frac{X_1^{X_2^{1/p}}}{X_2}$ but the approximant approach is no longer feasible; even in the case where $p$ is an integer the problem reduces to a posynomial inequality of rational degrees. Perhaps there are some obscure $L^p$ -norm/Hölder-type identities of use but I'm at a loss in terms of finding references. Empirical results: In the interval $p\in[1,\infty)$ , Wolfram suggests that the minimum is $n$ ( Notebook result ) which is obtained when $\boldsymbol a$ is the vector of ones. However, we note that in the interval $p\in(0,1)$ , the empirical minimum no longer displays this consistent behaviour as can be seen in this Notebook result . The sequence $\approx(1.00,2.00,2.01,3.36,3.00,4.00)$ appears to increase almost linearly every two values, but I cannot verify it for a larger number of variables due to instability in the working precision.
Proof for $p ≥ 1$ Since $u^p - 1 ≥ p(u - 1)$ for all $u ≥ 0$ , it suffices to prove the result for $p = 1$ . That follows from $$\frac{x^y}{y} - 1 ≥ \frac{1 + y \ln x}{y} - 1 = \ln x + \frac1y - 1 ≥ \ln x + \ln \frac1y = \ln x - \ln y$$ by cyclic summation over $(x, y) = (x_i, x_{i + 1})$ . Conjectured proof for $p ≥ \frac12$ Since $u^p - 1 ≥ 2p(u^{\frac12} - 1)$ for all $u ≥ 0$ , it suffices to prove the result for $p = \frac12$ . Numerical evidence suggests that $$\left(\frac{x^y}{y}\right)^{\frac12} - 1 ≥ \frac{\ln x}{2\sqrt[4]{1 + \frac13 \ln^2 x}} - \frac{\ln y}{2\sqrt[4]{1 + \frac13 \ln^2 y}}$$ for all $x, y > 0$ . If this is true, cyclic summation yields the desired result. Counterexample for $0 < p < \frac12$ Let $g(x) = \left(\frac{x^{1/x}}{1/x}\right)^p + \left(\frac{(1/x)^x}{x}\right)^p$ . Then $g(1) = 2$ , $g'(1) = 0$ , and $g''(1) = 4p(2p - 1) < 0$ , so we have $g(x) < 2$ for $x$ in some neighborhood of $1$ . This yields counterexamples for all even $n$ : $$\left(x, \frac1x, x, \frac1x, \dotsc, x, \frac1x\right), \quad x ≈ 1, \quad 0 < p < \frac12.$$ For $n = 3$ , the best counterexample seems to be $$(0.41398215, 0.73186577, 4.77292996), \quad 0 < p < 0.39158477.$$
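The counterexamples in the last section are easy to probe numerically. A sketch for the alternating $(x, 1/x, x, 1/x, \dotsc)$ family (the sample values of $x$ and $p$ are my own choices):

```python
def g(x, p):  # g(x) = (x^{1/x}/(1/x))^p + ((1/x)^x/x)^p, the two-term cyclic sum
    return (x**(1/x) * x)**p + ((1/x)**x / x)**p

for p in (0.25, 0.4, 0.49):
    print(p, g(1.2, p))  # each value is < 2, violating the bound for 0 < p < 1/2
```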
{ "source": [ "https://math.stackexchange.com/questions/4278217", "https://math.stackexchange.com", "https://math.stackexchange.com/users/471884/" ] }
4,322,297
Uncomputable functions: Intro

For the last month I have been going down the rabbit hole of googology (the mathematical study of large numbers) in my free time. I am still trying to wrap my head around the seeming paradox of the existence of natural numbers that are well-defined but uncomputable (in the sense that it has been proven that they can never be calculated by a human / a Turing machine). Let me give two of the most famous examples:

Busy beaver function $\Sigma(n,m)$

$\Sigma(n,m)$ "is defined as the maximum number of non-blank symbols that can be written (in the finished tape) with an $n$ -state, $m$ -color halting Turing machine starting from a blank tape before halting." It has been shown that $\Sigma$ grows faster than all computable functions and, thus, is uncomputable. Calculating $\Sigma$ for sufficiently large inputs would require an oracle Turing machine as it would literally be a solution to the halting problem. Thus, it is uncomputable, although the formulation of $\Sigma$ in set theory is precise and clear. More details here.

Rayo's number $\text{Rayo}\left(10^{100}\right)$

Rayo's number was the record holder in the googology community for a long time and it is defined as "the smallest positive integer bigger than any finite positive integer named by an expression in the language of first-order set theory with googol symbols or less." It is defined in the language of an (unspecified) second-order set theory here . (Its well-definedness is thus a bit controversial, but it would outgrow $\Sigma$ by a huge margin if resolved.)

My mathematical / existential questions

Does a number like $x=\Sigma\left(10^{100},10^{100}\right)$ "exist" in set theory in the same sense as the number $4$? Does it even make sense to include it in a mathematical operation like $(x$ mod $4)$ or $x^x$ if we cannot even write it down in a decimal expansion? I am well aware of Gödel's incompleteness theorems and the existence of unprovable statements like the continuum hypothesis, which can neither be proven nor disproven by ZFC axioms in any finite number of steps. Is there some parallel between that and the existence of numbers that cannot be computed in any finite amount of time? Is there some version of mathematics or system of axioms which resolves this problem? (i.e. where well-definedness of an object is equivalent to computability?) I would be very happy if anyone could answer or point me in the right direction.
Replying to your three mathematical/existential questions in order: Yes. There exists a TM that, when started on a blank tape, eventually halts (after finitely many steps) with the exact decimal expansion of $x=\Sigma\left(10^{100},10^{100}\right)$ on the tape. The same is true for $x\bmod 4$ or $x^x$ or any other natural number . There does not exist a natural number "that cannot be computed in any finite amount of time". There is no such problem for natural numbers. On the other hand, provability is another kettle o' fish: There is no natural number $n$ such that ZFC proves $\ BB(7918)=n,$ and more recently we have that for any $m\ge 748,$ there is no natural number $n$ such that ZFC proves $BB(m)=n.$ (Here $BB$ is the Busy Beaver function w.r.t. the number of steps taken before halting.) NB : As your questions (and your entire Intro) seemed to be about natural numbers , that is the context of my replies above. The situation is quite different in the larger context of real numbers . Note that every natural number has a finite representation, which is the basic reason it is computable. In contrast, a real number typically requires an infinite representation, which opens the possibility of not being computable. (It turns out that almost all reals are not computable.)
{ "source": [ "https://math.stackexchange.com/questions/4322297", "https://math.stackexchange.com", "https://math.stackexchange.com/users/998803/" ] }
4,322,344
Find a subgroup $\langle a,b\rangle$ of $\Bbb Z_{20}^*$ which is not cyclic. I know that $\mathbb{Z}_{20}^{*}$ is not cyclic, but how can I find a subgroup which is not cyclic? E.g., why is $\langle 3,11\rangle$ not cyclic? Thanks, and sorry if I have English mistakes :)
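For the specific example in the question, a short computational sketch (my own; `closure` simply closes $\{3, 11\}$ under multiplication mod $20$) shows what is going on:

```python
def closure(gens, mod=20):
    S = {1}
    while True:
        T = S | {a * g % mod for a in S for g in gens}
        if T == S:
            return S
        S = T

H = sorted(closure({3, 11}))
orders = {h: next(k for k in range(1, len(H) + 1) if pow(h, k, 20) == 1) for h in H}
print(H, orders)  # |H| = 8 but every element has order <= 4, so H has no generator
```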
{ "source": [ "https://math.stackexchange.com/questions/4322344", "https://math.stackexchange.com", "https://math.stackexchange.com/users/833177/" ] }
4,323,512
Let $S$ be a smooth oriented surface in $\mathbb R^3$ with boundary $C$ , and let $f: \mathbb R^3 \to \mathbb R^3$ be a continuously differentiable vector field on $\mathbb R^3$ . Stokes's theorem states that $$ \int_C f \cdot dr = \int_S (\nabla \times f) \cdot dA. $$ In other words, the line integral of $f$ over the curve $C$ is equal to the integral of the curl of $f$ over the surface $S$ . Here the orientation of the boundary $C$ is induced by the orientation of $S$ . Question: How might somebody have derived or discovered this formula? Where does this formula come from? The goal is to provide an intuitive explanation of Stokes's theorem, rather than a rigorous proof. (I'll post an answer below.)
Here's an intuitive way to discover Stokes's theorem. Imagine chopping up the surface $S$ into tiny pieces such that each tiny piece is a parallelogram (or at least, each tiny piece is approximately a parallelogram). Let $C_i$ be the boundary of the $i$ th tiny parallelogram. I'll assume each $C_i$ has the orientation induced by the orientation of $S$ . Notice that $$ \tag{1} \sum_i \int_{C_i} f \cdot dr = \int_C f \cdot dr. $$ This is because the sum on the left "telescopes". Everything in the middle cancels out and we are left only with boundary terms. This beautiful step in the derivation is reminiscent of the telescoping sum that appears when deriving the fundamental theorem of calculus in single variable calculus. To complete our derivation of Stokes's theorem, we must compute the integral of $f$ around the boundary of a tiny parallelogram. Below is a picture of one single tiny parallelogram which is based at a point $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \mathbb R^3$ and which is spanned by vectors $v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ and $w = \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} \in \mathbb R^3$ . The orientation of the boundary of the parallelogram is indicated by the little direction arrows. Since this is a very tiny parallelogram, I'll make the approximation that the integral of $f$ along edge 1 is approximately $f(x) \cdot v$ , the integral of $f$ along edge 2 is approximately $f(x + v) \cdot w$ , the integral of $f$ along edge 3 is approximately $f(x + w) \cdot (-v)$ , and the integral of $f$ along edge 4 is approximately $f(x) \cdot (-w)$ . Summing these four terms, and pairing edge 1 with edge 3 and edge 2 with edge 4, we find that the integral of $f$ along the boundary of this parallelogram is approximately \begin{align*} &\quad \langle f(x+v) - f(x), w \rangle - \langle f(x + w) - f(x), v \rangle \\ &\approx \langle f'(x) v, w \rangle - \langle f'(x) w, v \rangle \\ &= \langle v, (f'(x)^T - f'(x)) w \rangle \\ &= \left \langle v, \begin{bmatrix} 0 & \frac{\partial f_2(x)}{\partial x_1} - \frac{\partial f_1(x)}{\partial x_2} & \frac{\partial f_3(x)}{\partial x_1} - \frac{\partial f_1(x)}{\partial x_3} \\ \frac{\partial f_1(x)}{\partial x_2} - \frac{\partial f_2(x)}{\partial x_1} & 0 & \frac{\partial f_3(x)}{\partial x_2} - \frac{\partial f_2(x)}{\partial x_3} \\ \frac{\partial f_1(x)}{\partial x_3} - \frac{\partial f_3(x)}{\partial x_1} & \frac{\partial f_2(x)}{\partial x_3} - \frac{\partial f_3(x)}{\partial x_2} & 0 \end{bmatrix} w \right\rangle \\ &= \langle v, w \times (\nabla \times f) \rangle \\ &=\tag{2} \langle \nabla \times f, \underbrace{v \times w}_{\substack{\text{Area vector}\\ \text{for this tiny} \\ \text{parallelogram}}} \rangle. \end{align*} Here $f_1, f_2$ , and $f_3$ are the component functions of $f$ and $ f'(x) = \begin{bmatrix} \frac{\partial f_1(x)}{\partial x_1} & \frac{\partial f_1(x)}{\partial x_2} & \frac{\partial f_1(x)}{\partial x_3} \\ \frac{\partial f_2(x)}{\partial x_1} & \frac{\partial f_2(x)}{\partial x_2} & \frac{\partial f_2(x)}{\partial x_3} \\ \frac{\partial f_3(x)}{\partial x_1} & \frac{\partial f_3(x)}{\partial x_2} & \frac{\partial f_3(x)}{\partial x_3} \\ \end{bmatrix} $ is the Jacobian matrix of $f$ at $x$ . 
The vector $\nabla \times f$ , which is called the "curl" of $f$ at $x$ , is defined by $$ \nabla \times f = \begin{bmatrix} \frac{\partial f_3(x)}{\partial x_2} - \frac{\partial f_2(x)}{\partial x_3} \\ \frac{\partial f_1(x)}{\partial x_3} - \frac{\partial f_3(x)}{\partial x_1} \\ \frac{\partial f_2(x)}{\partial x_1} - \frac{\partial f_1(x)}{\partial x_2} \end{bmatrix}. $$ This is the moment in math when we discover the curl for the first time. Technically, I should write the curl of $f$ at $x$ as $(\nabla \times f)(x)$ . The final step in our derivation of Stokes's theorem is to apply formula (2) to the sum on the left in equation (1). Let $\Delta A_i$ be the "area vector" for the $i$ th tiny parallelogram. In other words, the vector $\Delta A_i$ points outwards, and the magnitude of $\Delta A_i$ is equal to the area of the $i$ th tiny parallelogram. Let $x^i \in \mathbb R^3$ be the point where the $i$ th tiny parallelogram is based. (The $i$ here is a superscript, not an exponent.) Combining formulas (1) and (2) reveals that \begin{align} \int_C f \cdot dr &\approx \sum_i (\nabla \times f)(x_i) \cdot \Delta A_i \\ &\approx \int_S (\nabla \times f) \cdot dA. \end{align} We have discovered the Stokes's theorem formula. It seems plausible that we can make the approximation as accurate as we like by chopping up $S$ into sufficiently small pieces. Thus, we conclude that $$ \int_C f \cdot dr = \int_S (\nabla \times f) \cdot dA $$ Comments: I gave a similar derivation of Green's theorem here . I also wrote notes that attempt to give a similar derivation of the generalized Stokes's theorem here . Physicists frequently use similar arguments when deriving Stokes's theorem. Feynman, for example, integrates a vector field around a little square in the $xy$ -plane, then recognizes that the result can be expressed in terms of the curl vector. Here is the relevant passage from Feynman: However, how did Feynman discover the curl in the first place? He did it by treating the gradient operator $\nabla$ as a vector, and symbolically computing the cross product of this "vector" with $f$ . I find that to be interesting and characteristically Feynman, but I also want a more direct way to discover Stokes's theorem, the same way that we discovered Green's theorem. (See section 3-6 and section 2-5 of volume II of the Feynman Lectures on Physics for reference.) The book Div, Grad, Curl and All That computes the three components of the curl vector by integrating a vector field around small rectangles which are parallel to either the $xy$ -plane or the $xz$ -plane or the $yz$ -plane. The author remarks, "It turns out that these three quantities are the Cartesian components of a vector. To this vector we give the name 'curl of $\mathbf F$ ,' which we write $\text{curl } \mathbf F$ ." In other words, now paraphrasing and switching to my notation, they assume the existence of a vector $(\nabla \times f)(x)$ which satisfies $$ (\nabla \times f)(x) \cdot \Delta A \approx \int_{\partial E} f \cdot dr $$ for any tiny planar surface $E$ containing $x$ with area vector $\Delta A$ . By considering the special cases where $E$ is a rectangle and $\Delta A$ is parallel to either $\hat i$ or $\hat j$ or $\hat k$ , they derive the components of $(\nabla \times f)(x)$ . Here is the relevant passage: When deriving Green's theorem and the divergence theorem, physicists typically chop up the region that we are integrating over into tiny rectangles or tiny boxes. 
I think the most clear and elegant way to make this strategy work for Stokes's theorem is to chop up $S$ into tiny parallelograms . In fact, I think we should also use parallelograms or parallelepipeds when deriving Green's theorem and the divergence theorem. This strategy can even be used to derive the generalized Stokes's theorem and to discover the exterior derivative (by chopping up a smooth manifold into tiny parallelepipeds). One way to chop up $S$ into tiny parallelograms is to start with a rectangular region $R$ that is chopped up into tiny rectangles, then smoothly morph $R$ onto $S$ . If $S$ is not diffeomorphic to a rectangular region, then $S$ can at least be broken into simpler pieces, each of which is diffeomorphic to a rectangular region. When deriving equation (2), I used the first-order Taylor approximation $$ \tag{3} f(x + v) - f(x) \approx f'(x) v. $$ The approximation is good when $v$ is small. The Jacobian matrix $f'(x)$ is also called the derivative of $f$ at $x$ . The approximation (3), which Terence Tao refers to as "Newton's approximation", is the key idea of calculus. It is essentially the definition of $f'(x)$ . The fundamental strategy of calculus is to take a nonlinear function $f$ (difficult) and approximate it locally by a linear function (easy). When deriving the formulas of calculus, we always find that we use the approximation (3) at the crucial moment. It would also be ok to evaluate $f$ at the midpoints of the edges when approximating the integral of $f$ along each edge of the tiny parallelogram. So the integral of $f$ along edge 1 is approximately $f(x + v/2) \cdot v$ , the integral of $f$ along edge 2 is approximately $f(x + v + w/2) \cdot w$ , etc. These are typically more accurate approximations and the calculation works out equally nicely. However, since our goal is just to provide an intuitive derivation of Stokes's theorem, we might as well keep the calculation as simple as possible.
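Formula (2) above is also easy to sanity-check numerically. Here is a sketch (the test field, base point, and edge vectors are arbitrary choices of mine; each edge integral uses a midpoint rule):

```python
import numpy as np

def f(p):
    x, y, z = p
    return np.array([-y**2, x*z, x + y])

def curl_f(p):
    x, y, z = p
    return np.array([1.0 - x, -1.0, z + 2*y])  # computed by hand from f

def edge_integral(a, b, steps=2000):
    # midpoint-rule line integral of f along the straight segment a -> b
    t = (np.arange(steps) + 0.5) / steps
    pts = a + np.outer(t, b - a)
    return sum(f(p) for p in pts) @ (b - a) / steps

x0 = np.array([0.3, -0.2, 0.5])
v = np.array([0.01, 0.02, -0.01])
w = np.array([-0.02, 0.01, 0.03])

loop = (edge_integral(x0, x0 + v) + edge_integral(x0 + v, x0 + v + w)
        + edge_integral(x0 + v + w, x0 + w) + edge_integral(x0 + w, x0))
print(loop, curl_f(x0) @ np.cross(v, w))  # the two numbers nearly agree
```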
{ "source": [ "https://math.stackexchange.com/questions/4323512", "https://math.stackexchange.com", "https://math.stackexchange.com/users/40119/" ] }
4,323,522
I don't understand what it means that "a primitive mapping is thus one that changes at most one coordinate" (what do we mean by the boldface $\mathbf{x}$ in 10.5? Is it the set of all points $x \in E$ where $g(x) \neq 0$?). Hence, I don't understand where the inequality $(10)$ comes from. $G(x) = x + [g(x) - x_m]e_m$ . Any help would be appreciated.
{ "source": [ "https://math.stackexchange.com/questions/4323522", "https://math.stackexchange.com", "https://math.stackexchange.com/users/977184/" ] }
4,347,360
Let $(f_n)$ be a sequence of functions that are all differentiable on an interval $A$, and suppose the sequence of derivatives $(f_n')$ converges uniformly on $A$ to a limit function $g$. Does it follow that $(f_n)$ converges to a limit function $f$ on $A$? What I tried: As $(f_n')$ converges uniformly to $g$, we may write that the limit of the integral of $(f_n')$ is the integral of the limit of $(f_n')$. Hence, $(f_n)$ converges pointwise to the integral of $g$. How does this sound?
If, for each $n\in\Bbb N$ , $f_n(x)=n$ , then you always have $f_n'(x)=0$ . Therefore, $(f_n')_{n\in\Bbb N}$ converges uniformly, but there is no $x\in\Bbb R$ such that $\bigl(f_n(x)\bigr)_{n\in\Bbb N}$ converges.
{ "source": [ "https://math.stackexchange.com/questions/4347360", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1008502/" ] }
4,348,172
When I first learned about the ideal class group, I learned that it measures the failure of unique factorization in a number ring. The main justification for this is that a number ring has unique factorization if and only if it has class number $1$ . This is very unsatisfying though because the exact size of the class group is not used, and neither is the entire group structure of the class group. Furthermore, the dichotomy of "UFD / not UFD", while an important first step, doesn't measure the extent to which unique factorization fails, only if it fails or not. So my questions are: In what way does the exact size of the class group measure the extent to which a number ring fails to have unique factorization? (Beyond the dichotomy of class number $1$ vs. not $1$ .) In what way does the group structure of the class group measure the extent to which a number ring fails to have unique factorization? This I have basically no feeling for: if the class group is $\mathbb{Z}/2 \times \mathbb{Z}/2$ versus $\mathbb{Z}/4$ , is that difference measuring anything related to unique factorization? What is it measuring at all? This is a question that has been asked on SE a few times before (see here and here ) but the answers weren't exactly what I was looking for, so I wanted to ask it again. Thanks for the help!
Everyone says the class group "measures" the failure of unique factorization, but the only sense of measuring that failure is exactly what you noticed: class number 1 vs. class number greater than 1. That is the only justification for such terms as "measuring the failure". Edit: By "only sense" I mean that the people who teach about ideal class groups in algebraic number theory classes have no grander meaning in mind than the distinction $h=1$ and $h > 1$ when they speak about class groups measuring the failure of unique factorization. There are descriptions of the structure of the ideal class group in terms of how elements factor into other elements (see the link in Bill Dubuque's comment above), but that is not what people have in mind when they speak about ideal class groups "measuring the failure" of unique factorization. This is typical in math: you construct a group (or vector space, etc.) for each object in some family and the group is nontrivial iff some nice property doesn't hold. Let's call the nice property "wakalixes". Then you say to the world "this group measures the failure of wakalixes" and that leads generations of students to ask "What do you mean it measures the failure? What does having one nontrivial wakalixes group or some other nonisomorphic nontrivial wakalixes group actually mean?" And the answer is "All that means is that wakalixes fails in both cases." There is no other meaning intended in general. In number theory, topology, etc., when you try to do something and run into a gadget that is trivial when you can do what you want and nontrivial when you can't do what you want, you call the gadget "a measure of the failure" to do what you want or "an obstruction" to do what you want. Ideal class groups have interpretations and applications that are not directly about the failure or not of unique factorization, and such applications are much more important for the role of ideal class groups in number theory than trying to intuit some down to earth meaning about a class group being cyclic of order $4$ . Maybe the following point will interest you. If you replace $\mathcal O_K$ with $\mathcal O_K[1/\alpha]$ where $\alpha$ is a nonzero element of $\mathcal O_K$ such that the ideal class group of $K$ is generated by ideal classes of the prime ideals dividing $(\alpha)$ , then $\mathcal O_K[1/\alpha]$ is a PID. For instance, $\mathbf Z[\sqrt{-5}]$ has class number 2 generated by the ideal class of the prime ideal $\mathfrak p = (2,1+\sqrt{-5})$ . We have $\mathfrak p^2 = (2)$ and $\mathbf Z[\sqrt{-5},1/2]$ is a PID. More generally, for a ring of $S$ -integers $\mathcal O_{K,S}$ , where $S$ is a finite set of places of $K$ containing all the archimedean places, the ideal class group is a quotient group of the ideal class group of $\mathcal O_K$ (details are in the answer here ), so by putting into $S$ a suitable set of primes you can gradually kill off the whole class group and you're left with a PID. In this way, the class group tells you how to enlarge $\mathcal O_K$ in a mild way to recover unique factorization while maintaining other nice properties (like a finitely generated unit group, which would not happen if you did something extreme and just replaced $\mathcal O_K$ by $K$ ). When I was a grad student I was really bothered by encountering the same slogan ("it measures the failure...") in algebraic topology with homology groups. I asked a postdoc "If I told you $H_{37}(X)$ has a particular value, does that automatically mean something to you?" 
And the postdoc said "Nope."
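As a concrete check of the $\mathfrak p^2 = (2)$ claim in $\mathbf Z[\sqrt{-5}]$ made above:
$$\mathfrak p^2 = \big(4,\; 2(1+\sqrt{-5}),\; (1+\sqrt{-5})^2\big) = \big(4,\; 2+2\sqrt{-5},\; -4+2\sqrt{-5}\big).$$
Each generator is divisible by $2$, so $\mathfrak p^2 \subseteq (2)$; conversely $(2+2\sqrt{-5}) - (-4+2\sqrt{-5}) - 4 = 2$, so $2 \in \mathfrak p^2$ and hence $\mathfrak p^2 = (2)$.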
{ "source": [ "https://math.stackexchange.com/questions/4348172", "https://math.stackexchange.com", "https://math.stackexchange.com/users/628249/" ] }
4,349,050
Curry-Howard Correspondence Now, pick any 5-30 line algorithm in some programming language of choice. What is the program proving? Or, do we not also have "programs-as-proofs"? Take the GCD algorithm written in pseudo-code:

function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a

What is it proving? I apologize for the broadness and softness of this question, but I'm really wondering about it.
Yes. But usually the associated proofs are uninteresting. Remember that, under the correspondence, we have Types $\longleftrightarrow$ Propositions Programs $\longleftrightarrow$ Proofs So let's look at your sample code. We get a function $\mathtt{gcd} : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ . With just this type, the program proves "If there is an element of $\mathbb{N} \times \mathbb{N}$ , then there is an element of $\mathbb{N}$ " a completely uninteresting theorem, I think you'd agree. In particular, the simple function $\pi_2 (x,y) = y$ , which also has type $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ proves the same theorem. Why? Because they're two functions of the same type! Now, obviously the code for $\mathtt{gcd}$ is doing something interesting, so surely it proves something more interesting than the naive $\pi_2$ function. The answer to this question depends on how expressive your type system is. If all you have is access to a type $\mathbb{N}$ , then you're out of luck unfortunately. But thankfully, through the language of dependent types we can talk about more interesting types, and thus, more interesting propositions. In the software engineering world, we use types in order to express the desired behavior of a program. Types correspond to specifications for programs. At the most basic level they tell you what inputs and outputs a given function expects, but with dependent types, we can fully classify the desired behavior of a function. For instance, the naive type of mergesort might be $$\mathtt{mergesort} : [a] \to [a]$$ all we see is that it takes in a list of things of type $a$ and spits out a list of things of type $a$ . But again, this is not an interesting specification. The identity function, or even the function which ignores its input and returns the empty list, also inhabit this type. And again, since types are propositions, we see that $\mathtt{mergesort}$ proves a very uninteresting proposition: "If a list of $a$ s exists, then a list of $a$ s exists".... like.... yeah, obviously. So we beef up our type system, and instead look at something like this: $$\mathtt{mergesort} : [a] \to \{ L : [a] \mid \mathtt{isSorted}(L) \}$$ I'm eliding some syntax, and a formal definition of $\mathtt{isSorted} : [a] \to \mathtt{Bool}$ , but hopefully it's clear how to write such a function. So now $\mathtt{mergesort}$ has to meet a more stringent specification. It's not allowed to return any list of $a$ s. It must return a sorted list of $a$ s. As a function of this type, $\mathtt{mergesort}$ proves "If a list of $a$ s exists, then a sorted list of $a$ s exists" again, this isn't a groundbreaking theorem but it's better than what we had before. Unfortunately, the constant empty list function still matches this specification. After all, the empty list is (vacuously) sorted! But we can go again. Maybe we write a type so that $\mathtt{mergesort}$ proves "If a list of $a$ s exists, then there is a sorted list of $a$ s with exactly the same elements" which is beginning to get a bit more interesting as a proposition. This now totally pins down the desired behavior of mergesort. Moreover, think about how you would prove such a proposition. If somebody said "prove that sorted list containing the same elements exists", you would say "just sort it!" and this is exactly the proof that $\mathtt{mergesort}$ provides. What then, of your $\mathtt{gcd}$ example? Well, as with $\mathtt{mergesort}$ , the same program works to prove multiple things. It's really the type that matters. 
With a little bit of work, your code might be a proof that "If two natural numbers exist, then there exists a natural number dividing both of them, which is bigger than any other natural number dividing both of them" which is beginning to look like an interesting (if basic) mathematical proposition! I hope this helps ^_^
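For a hands-on (if much weaker) version of this idea in a language without dependent types, one can encode the specification as run-time assertions. This is only a sketch with made-up helper names, and the checks are of course dynamic rather than enforced by a type checker:

from collections import Counter

def mergesort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def is_sorted(L):
    return all(L[k] <= L[k + 1] for k in range(len(L) - 1))

xs = [5, 3, 8, 3, 1]
L = mergesort(xs)
assert is_sorted(L)               # the "isSorted" half of the specification
assert Counter(L) == Counter(xs)  # the "same elements" half
print(L)                          # [1, 3, 3, 5, 8]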
{ "source": [ "https://math.stackexchange.com/questions/4349050", "https://math.stackexchange.com", "https://math.stackexchange.com/users/26327/" ] }
4,349,070
Let $X, Y$ be independent geometric random variables, both having the same parameter $p$ . Let $Z = \max(X, Y)$ . I would like to find $P(Z = i)$ for values of $i$ . As we know, for $K = \min(X, Y)$ , $K$ is geometrically distributed with parameter $2p−p^2$ . Is $Z$ also geometrically distributed? Some steps I have done: $$P(Z\le i) = P(X \le i) P(Y \le i) = (1-(1-p)^i)(1-(1-p)^i)$$ $$P(Z =i) = P(Z \le i) - P(Z \le i - 1)$$ I am not able to write $P(Z =i)$ in the form $q(1-q)^{k-1}$ .
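Indeed, no such $q$ exists: if $Z$ were geometric, the ratio $P(Z=i+1)/P(Z=i)$ would be the constant $1-q$, and a quick numeric check (with the arbitrary choice $p = 0.3$) shows the ratio varies with $i$:

p = 0.3
q = 1 - p

def cdf(k):                       # P(Z <= k) = P(X <= k) * P(Y <= k)
    return (1 - q ** k) ** 2

def pmf(i):                       # P(Z = i) = P(Z <= i) - P(Z <= i-1)
    return cdf(i) - cdf(i - 1)

for i in range(1, 6):
    print(i, pmf(i), pmf(i + 1) / pmf(i))
# the printed ratios are not constant in i, so Z is not geometric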
{ "source": [ "https://math.stackexchange.com/questions/4349070", "https://math.stackexchange.com", "https://math.stackexchange.com/users/750768/" ] }
4,349,074
We know that the Galois group of an irreducible cubic polynomial is $S_3$ or $A_3$ , but is every Galois extension whose Galois group is $S_3$ the splitting field of a cubic polynomial? If not, the extension must be the splitting field of a polynomial of degree 6. And therefore $S_3$ must be a transitive subgroup of $S_6$ . Unfortunately, I found (1 2 3)(4 5 6) and (1 4)(3 5)(2 6) can generate such a transitive $S_3$ , but I can't find the corresponding polynomial. Is it true that every Galois extension with Galois group $S_3$ is a splitting field of an irreducible cubic polynomial? Thanks for your help!
{ "source": [ "https://math.stackexchange.com/questions/4349074", "https://math.stackexchange.com", "https://math.stackexchange.com/users/889546/" ] }
4,358,766
There is a famous example of a function that has no derivative: the Weierstrass function: $$ f(x) = \sum_{n=1}^{\infty} a^n\cos(b^n\pi x) $$ (for suitable constants $0 < a < 1$ and $b$ ). But just by looking at this equation - I can't seem to understand why exactly the Weierstrass Function does not have a derivative? I tried looking at a few articles online (e.g. https://www.quora.com/Why-isnt-the-Weierstrass-function-differentiable ), but I still can't seem to understand what prevents this function from having a derivative? For example, if you expand the summation term for some very large (finite) value of $n$ : $$ f(x) = a \cos(b\pi x) + a^2\cos(b^2\pi x) + a^3\cos(b^3\pi x) + ... + a^{100}\cos(b^{100}\pi x) $$ What is preventing us from taking the derivative of $f(x)$ ? Is the Weierstrass function non-differentiable only because it has "infinite terms" - and no function with infinite terms can be differentiated? For a finite value of $n$ , is the Weierstrass function differentiable? Thank you!
Nothing is preventing us from taking the derivative of any finite partial sum of this series. This is a trigonometric polynomial and it has derivatives of all orders. However, this infinite sum represents the pointwise limit of such trigonometric polynomials. A pointwise limit of differentiable functions has no obligation to be differentiable. On the other hand, the sheer fact that this function is an infinite sum doesn’t automatically imply that it’s nowhere differentiable. An infinite power series or infinite trigonometric series may be differentiable everywhere, differentiable nowhere, or differentiable in some places and not others. To check if a function is differentiable at a point $x_0$ , you must determine if the limit $\lim_{h\to 0}(f(x_0+h)-f(x_0))/h$ exists. If it doesn’t, the function isn’t differentiable at $x_0$ . There are various theorems which help us bypass the need for doing this directly. For example, we can show that the composition of differentiable functions is differentiable, and that various elementary functions are differentiable everywhere, which lets us conclude that things like $\cos(17x^2)-e^x$ are differentiable everywhere. One of these shortcuts applies when you have infinite sums with uniformly convergent derivatives. This very important result allows us to conclude that many functions defined by infinite series do indeed have derivatives, and those derivatives are what you’d expect. But this doesn’t apply here, since this sum does not have uniformly convergent derivatives. And once again, this fact alone isn’t enough to show that the function is nowhere differentiable. To show that, you have to roll up your sleeves and work out inequalities that apply everywhere and prevent the limit above from existing. This is done carefully here , for example.
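As a purely suggestive numerical illustration (not a proof), one can take a high partial sum with the classical parameter choice $a = 1/2$, $b = 13$ (so $ab > 1 + 3\pi/2$) and watch the difference quotients at $x_0 = 0$ along the steps $h = b^{-m}$ fail to converge:

import math

a, b, N = 0.5, 13, 40            # partial sum; a*b = 6.5 > 1 + 3*pi/2

def term(n, m):
    if n >= m:                   # b**(n-m) is an odd integer: cos(pi*odd) = -1
        return a ** n * (-1.0)
    return a ** n * math.cos(math.pi * b ** (n - m))   # small argument, safe

f0 = sum(a ** n for n in range(N + 1))   # value of the partial sum at x0 = 0

for m in range(1, 11):
    h = b ** (-m)                        # step h = b**(-m)
    quotient = (sum(term(n, m) for n in range(N + 1)) - f0) / h
    print(m, quotient)
# the quotients grow roughly like -(a*b)**m instead of settling down,
# suggesting (but not proving) non-differentiability at x0 = 0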
{ "source": [ "https://math.stackexchange.com/questions/4358766", "https://math.stackexchange.com", "https://math.stackexchange.com/users/791334/" ] }
4,362,143
The Gauss-Bonnet Theorem (for orientable surfaces without boundary) states that for surface $M$ , with Gaussian curvature at a point $K$ , we have $$\int_M K\ dA=2\pi\chi(M).$$ Right now, this just says to me that the integral of something I don’t care about is equal to the ratio of a circumference to radius of a (Euclidean) circle times something I do care about. I get why normal curvature is interesting. I get Gaussian curvature is important in the Theorema Egregium and classification of surfaces of constant curvature. In the former, it helps classify surfaces up to isometry and show that developable surfaces are ruled. In the latter, it tells us that Riemannian manifolds with transitive isometry groups locally have a structure from a very small selection. But that’s just Gaussian curvature giving us other things. I don’t understand why I should like it for its own sake, and as such, why I should care about Gauss-Bonnet. I get that it links differential geometry and topology, but maybe there are other ways to do this, and this one feels a little convoluted, and topology is already linked to differential manifolds, and thus Riemannian manifolds, through Poincare-Hopf.
In retrospect, this post got quite long. Also, the level varies greatly - sorry! Feel free to ask any questions. I guess I am quite fond of this topic, even though I am not as knowledgeable on it as other people on here. Anyway, hopefully this is helpful to someone :-) In my opinion this theorem is one of the crown jewels of mathematics. There are a number of ways to interpret or generalize this statement. For instance, since $2\pi \chi(M)$ is a purely topological quantity, this tells us that stretching or deforming $M$ through smooth isotopies will not change the integral of the Gaussian curvature. So, we could take a sphere $S^2(r)$ of radius $r$ in $\Bbb{R}^3$ . Then the Gaussian curvature is everywhere $\frac{1}{r^2}$ . The integral is $$ \int_{S^2(r)}\frac{1}{r^2}\mathrm{vol}=\frac{4\pi r^2}{r^2}=4\pi=2\pi\chi(S^2(r)). $$ Now, we can deform this sphere $S^2(r)$ as we wish through isotopies and we see that the Gauss-Bonnet theorem gives us a "conservation law": if we deform our surface $M$ in a volume-preserving manner in such a way that the curvature in one region becomes greater, then the curvature in another region must become smaller. Drawing a picture illustrates this idea. This is a nice intuition for at least some of what the theorem is saying. However, there is another interpretation. Indeed, we have a classical topological invariant $\chi(M)$ , and with some work we were able to find a "geometric quantity" to integrate to return $\chi(M)$ , establishing a connection between a global geometric quantity and a topological quantity. This can be interpreted as $$ \int_M\frac{K}{2\pi}\mathrm{vol}=\chi(M). $$ Now, the natural objects of integration on manifolds are differential forms and indeed we have integrated a $2-$ form $\frac{K}{2\pi}\mathrm{vol}$ to return $\chi(M)$ . This is a closed $2-$ form and defines a cohomology class $\omega\in H^2(M,\Bbb{R})$ . So, we arrive at a natural question: Given a compact $n-$ manifold $M$ , can we find a cohomology class $\omega \in H^n(M,\Bbb{R})$ so that $\int_M\omega=\chi(M)$ ? For $n=2$ , the result is Gauss-Bonnet. If $n$ is odd, it is an easy consequence of Poincaré Duality that $\chi(M)=0$ , so the answer is yes but for stupid reasons: simply take $0\in H^n(M,\Bbb{R})$ . For $n$ even, the question is more interesting. We next note that we had more structure in the case of the Gauss-Bonnet theorem. Indeed, we were using the Riemannian structure coming from our manifold being in $\Bbb{R}^3$ . So, we can equip $M$ with a Riemannian structure $(M,g)$ . Now that we have done that, we would like to introduce a notion of curvature. Curvature is - in some sense - a second order phenomenon, so we need to introduce a notion of a second derivative. The method of doing this is to introduce an affine connection on $TM$ . This is an operator $$\nabla:\mathfrak{X}(M)\times \mathfrak{X}(M)\to \mathfrak{X}(M)$$ written as $\nabla(X,Y)=\nabla_XY$ . This has some natural properties: $\nabla$ is $\Bbb{R}-$ linear in both $X$ and $Y$ . For any $f\in C^\infty(M)$ , $\nabla_{fX}Y=f\nabla_XY$ . For any $f\in C^\infty(M)$ , $\nabla_X(fY)=X(f)Y+f\nabla_XY$ , which we call the Leibniz property. The standard example of this is the directional derivative of a vector field $Y\in \mathfrak{X}(\Bbb{R}^3)$ with respect to another vector field $X$ , written in multivariable calculus by $D_XY$ .
Anyway, associated to this $\nabla$ are a pair of tensors: $$ T(X,Y)=\nabla_X Y-\nabla_YX -[X,Y] $$ called the torsion and $$ R(X,Y)Z=\nabla_X\nabla_YZ-\nabla_Y\nabla_X Z-\nabla_{[X,Y]}Z $$ called the curvature. There is a theorem which says that on a Riemannian manifold $(M,g)$ , there is a unique connection $\nabla$ satisfying $T(X,Y)\equiv 0$ and $\nabla_Xg(Y,Z)=g(\nabla_X Y,Z)+g(Y,\nabla_X Z)$ . This connection is called the Levi-Civita connection . The virtue of this is that there is a canonical connection associated to a Riemannian manifold, which will turn out to be quite important. With this in hand, we can take our Riemannian manifold $(M,g)$ with its LC connection $\nabla$ and study the properties of its curvature. (By the way, you can recover the Gaussian curvature from this mysterious $R$ in the case of a surface, but I won't explain how.) If we take a trivialization of the tangent bundle by a local frame $(e_1,\ldots, e_n)$ , we can describe the data of the connection using the equations: $$ \nabla_X e_j=\sum_i \omega^i_j(X)e_i $$ where $\omega_j^i$ are differential $1-$ forms. Furthermore, we can describe the data of the curvature using $$ R(X,Y)e_j=\sum_i \Omega^i_j(X,Y)e_i $$ for all $1\le i \le n$ , where the $\Omega^i_j$ are $2-$ forms. In particular, we get matrices $\omega=(\omega^i_j)$ and $\Omega=(\Omega^i_j)$ describing the connection and the curvature, respectively, in a local trivialization. We would like to assemble from these guys a differential form that we can integrate to get back $\chi(M)$ . The bridge is provided by the Chern-Weil homomorphism, which allows us to construct cohomology classes from these matrices of differential forms. The statement is Theorem (Chern-Weil) Suppose $E$ is a rank $r$ vector bundle with connection $\nabla$ . Suppose $P$ is a $\mathrm{GL}(r,\Bbb{R})-$ invariant polynomial on $\mathfrak{gl}(r,\Bbb{R})$ of degree $k$ . Then, the $2k-$ form $P(\Omega)$ on $M$ is globally defined, closed, and $[P(\Omega)]\in H^{2k}(M,\Bbb{R})$ is independent of the connection. The definition of a connection on $E$ is analogous to the above. A polynomial on $\mathfrak{gl}(r,\Bbb{R})$ means a polynomial that takes an $r\times r$ matrix $X=(x_j^i)$ as its input. Anyway, this theorem tells us that we can construct cohomology classes explicitly from the information of the curvature of a connection on a vector bundle on $M$ . Moreover, the cohomology class is independent of the choice of connection. One way to interpret this is that we are getting a way to construct distinguished representatives of our cohomology classes on $M$ . This is a motif that appears frequently in geometry. This lets us state our final theorem. (Chern-Gauss-Bonnet) Let $(M,g)$ denote a compact Riemannian manifold of dimension $2n$ . Then $$ \int_M \frac{1}{(2\pi)^n}\mathrm{Pf}(\Omega)=\chi(M). $$ Here, $\mathrm{Pf}$ denotes the Pfaffian, which is a certain polynomial defined on $\mathfrak{so}(2n,\Bbb{R})$ characterized (up to a sign) by $\mathrm{Pf}^2=\det$ as polynomial functions. I have swept a lot of details under the rug here - but the moral of the story is that the presence of a Riemannian metric gives a version of the above Chern-Weil theorem for $SO(2n)-$ invariant polynomials on $\mathfrak{so}(2n,\Bbb{R})$ . Lastly, here is a high level explanation of what we are doing here. In topology, one associates to a vector bundle $E\to M$ characteristic classes, which are cohomology classes in $H^*(M)$ that are invariants of the bundle. 
I won't get too much into this, but there is a characteristic class of oriented real vector bundles called the Euler class, $e(E)$ . It is suggestively named because in the case where $E=TM$ , $e(E)\in H^n(M,\Bbb{R})$ (viewed for instance as a de Rham cohomology class) satisfies $$ \int_M e(E)=\chi(M). $$ So, we have really constructed an explicit form $\mathrm{Pf}(\Omega)$ on $M$ representing this cohomology class $e(E)$ , and that is the content of the theorem. It turns out that characteristic classes of oriented bundles of rank $r$ can be defined as cohomology classes of the classifying space $BSO(r)$ . It can be proven that the cohomology of $BG$ (for $G$ a compact Lie group) is $H^*(BG,\Bbb{R})\cong \Bbb{R}[\mathfrak{g}]^G$ , i.e. the invariant polynomials on the Lie algebra $\mathfrak{g}$ . So, the Chern-Weil homomorphism in its most general form expresses characteristic classes of a (say) oriented real bundle of rank $r$ in terms of polynomials of the curvature. So, in this setup the Gauss-Bonnet theorem is a special case of the fact that the polynomial $\mathrm{Pf}$ computes the Euler class. If this is of interest to you, I started learning about this in Tu's Differential Geometry: Connections, Curvature, and Characteristic classes , which is a clear and pleasant read for someone with a basic knowledge of manifolds and de Rham cohomology.
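As a quick numerical sanity check of the surface case, here is a sketch verifying $\int_M K\,dA = 2\pi\chi(M) = 0$ for a torus of revolution (the radii $R = 2$, $r = 1$ and the grid size are arbitrary choices; the curvature and area element are the standard ones for this parametrization):

import math

R, r = 2.0, 1.0                 # torus of revolution, R > r
n = 400                         # midpoint-rule grid in theta (and phi)
dtheta = dphi = 2 * math.pi / n

total = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    K  = math.cos(theta) / (r * (R + r * math.cos(theta)))  # Gaussian curvature
    dA = r * (R + r * math.cos(theta)) * dtheta * dphi      # area element
    total += n * K * dA         # integrand is phi-independent: n phi-cells

print(total)                    # ~ 0 = 2*pi*chi(torus), as Gauss-Bonnet predicts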
{ "source": [ "https://math.stackexchange.com/questions/4362143", "https://math.stackexchange.com", "https://math.stackexchange.com/users/672095/" ] }
4,362,144
I know how to find the sides of the triangle when seeking to maximize the area of the triangle, since the following relationship exists: $Area= \frac{abc}{4R}$ where $a, b, c$ are the sides and $R$ the radius. But I don't know what to use to maximize the perimeter.
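One standard route: by the law of sines each side equals $2R\sin(\text{opposite angle})$, so the perimeter is $2R(\sin A + \sin B + \sin C)$ with $A + B + C = \pi$, and concavity of $\sin$ on $(0,\pi)$ (Jensen) makes the equilateral triangle optimal, with perimeter $3\sqrt{3}\,R$. A crude grid search (a sketch with the arbitrary choice $R = 1$) is consistent with this:

import math

best = (0.0, None)
steps = 300
for i in range(1, steps):
    for j in range(1, steps - i):
        A = math.pi * i / steps
        B = math.pi * j / steps
        C = math.pi - A - B                                 # inscribed triangle angles
        P = 2 * (math.sin(A) + math.sin(B) + math.sin(C))   # perimeter for R = 1
        if P > best[0]:
            best = (P, (A, B, C))

print(best[0], "at angles", best[1])        # ~ 3*sqrt(3) at A = B = C = pi/3
print("3*sqrt(3) =", 3 * math.sqrt(3))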
{ "source": [ "https://math.stackexchange.com/questions/4362144", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1001467/" ] }
4,365,625
Solve the following equation in radicals. $$x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1=0$$ I use Magma to verify that its Galois group is a solvable group.

R := RationalField();
R<x> := PolynomialRing(R);
f := x^8-8*x^7+8*x^6+40*x^5-14*x^4-232*x^3+488*x^2-568*x+1;
G := GaloisGroup(f);
print G;
GroupName(G: TeX:=true);
IsSolvable(G);

The output of Magma (Online) is:

Permutation group G acting on a set of cardinality 8
Order = 16 = 2^4
    (2, 4)(6, 8)
    (1, 2, 3, 4)(5, 6, 7, 8)
    (1, 5)(2, 8)(3, 7)(4, 6)
C_2\times D_4
true

I also tried to calculate with PARI/GP (64-bit) v2.13.3 + GAP (64-bit) v4.11.1, but failed.

gap> LoadPackage("radiroot");
true
gap> x := Indeterminate(Rationals,"x");;
gap> g := UnivariatePolynomial( Rationals, [1,-8,8,40,-14,-232,488,-568,1] );
x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1
gap> RootsOfPolynomialAsRadicals(g, "latex");
"/tmp/tmp.sfoZ6C/Nst.tex"
Error, AL_EXECUTABLE, the executable for PARI/GP, has to be set at /proc/cygdrive/C/gap-4.11.1/pkg/alnuth-3.1.2/gap/kantin.gi : 205 called from
Well, in honor of an old cartoon I'll say a miracle occurs. But can we get behind the curtain to see how the special effects are made? If you take the square root of, let us say, $2358$ by the standard "long division" method, you get $48$ with a remainder of $54$ , which may be interpreted as the equation $2358=48^2+54.$ We can adapt this method to determining the square root of a polynomial, and for the one given in this problem we end with this: $x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1=(x^4-4x^3-4x^2+4x+1)^2+(-192x^3+480x^2-576x)$ If the remainder were a constant times a square then we would be able to render our octic polynomial in the form $a^2-b^2=(a+b)(a-b)$ or perhaps $a^2+b^2=(a+bi)(a-bi)$ , getting a pair of quartic factors which would then be solvable by radicals in the usual way. Sadly, we can't do that because the remainder is a cubic polynomial. Nonetheless, the fact that the coefficients of this remainder have a common factor makes one go "hmmm...". What if there were a way to modify the remainder so that it has an even degree and could be a square quantity (or next best, a constant times one)? I started by noting that the square root of $2358$ as determined by the standard method comes out as $48$ with a remainder of $54$ . But did I really have to render the "quotient" as $48$ ? If I allow a negative remainder in the final stage maybe I could render the root as $49$ instead, in which case the remainder is indeed negative and we get an expression equally valid as the first one I quoted: $2358=48^2+54$ but also $2358=49^2-43.$ We might even say that the second form is superior because, with the absolutely smaller remainder, it renders the rounded value of $\sqrt{2358}$ (correctly) as $49$ instead of $48$ . Now what can we do with our polynomial square root? Let us say that, just as we rendered the last digit of the root as $9$ instead of $8$ when we extracted the square root of $2358$ , we leave the constant term in our quartic expression as something other than $1$ . We get $x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1=(x^4-4x^3-4x^2+4x+h)^2+[(-2h+2)x^4+(8h-200)x^3+(8h+472)x^2+(-8h-568)x+(-h^2+1)]$ Can this remainder be a squared quantity, perhaps multiplied by a constant, for some value of $h$ , presumably rational? A necessary condition for this to occur in the quartic expression $ax^4+bx^3+cx^2+dx+e$ is $a/e=(b/d)^2$ . Here we require $\dfrac{-2h+2}{-h^2+1}=\left(\dfrac{8h-200}{-8h-568}\right)^2$ $\dfrac{2}{h+1}=\left(\dfrac{h-25}{h+71}\right)^2$ We turn this to a cubic polynomial equation for $h$ , seek rational roots and discover $h=49$ . We again go "hmmm...", for not only did we hit on a rational root but we incremented $h$ from its earlier value ( $1$ ) by half the common factor of $96$ we saw in the earlier remainder. We insert $h=49$ and obtain $x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1=(x^4-4x^3-4x^2+4x+49)^2-96[x^4-2x^3-9x^2+10x+25]$ If the bracketed quantity were to be a square, it would be $(x^2-x-5)^2$ to match the degree 4, degree 3, degree 1 and degree 0 terms (which our equation for $h$ was designed to do). But do we get the proper degree 2 term? In fact: $(x^2-x-5)^2=x^4-2x^3-9x^2+10x+25$ and we have hit on our squared remainder! 
So now we just factor the octic polynomial as a difference of squares whose roots contain $\sqrt{96}$ or equivalently $\sqrt{6}$ : $x^8-8x^7+8x^6+40x^5-14x^4-232x^3+488x^2-568x+1=[(x^4-4x^3-4x^2+4x+49)+4\sqrt6(x^2-x-5)][(x^4-4x^3-4x^2+4x+49)-4\sqrt6(x^2-x-5)]$ and we then solve each quartic factor by the usual method. The roots, with all radicals defined as nonnegative real numbers, are $1+\sqrt2+\sqrt3+\sqrt[4]{3}$ $1-\sqrt2+\sqrt3+\sqrt[4]{3}$ $1+\sqrt2+\sqrt3-\sqrt[4]{3}$ $1-\sqrt2+\sqrt3-\sqrt[4]{3}$ $1+\sqrt2-\sqrt3\pm i\sqrt[4]{3}$ $1-\sqrt2-\sqrt3\pm i\sqrt[4]{3}$ This set of roots conforms with the $C_2×D_4$ symmetry from the Galois group calculation.
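The eight closed forms are easy to confirm numerically (a floating-point check, not a proof):

import math

def f(x):
    return (x**8 - 8*x**7 + 8*x**6 + 40*x**5 - 14*x**4
            - 232*x**3 + 488*x**2 - 568*x + 1)

r2, r3, r4 = math.sqrt(2), math.sqrt(3), 3 ** 0.25
roots  = [1 + s2 * r2 + r3 + s4 * r4      for s2 in (1, -1) for s4 in (1, -1)]
roots += [1 + s2 * r2 - r3 + s4 * 1j * r4 for s2 in (1, -1) for s4 in (1, -1)]

for z in roots:
    print(z, abs(f(z)))          # residuals at rounding-error level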
{ "source": [ "https://math.stackexchange.com/questions/4365625", "https://math.stackexchange.com", "https://math.stackexchange.com/users/469027/" ] }
4,379,359
Consider the two numbers $ab$ and $(a-b)(a+b), \gcd(a,b) = 1, 1 \le b < a$ . On average, which of these two numbers has more distinct prime factors? All the prime factors of $a$ and $b$ divide $ab$ and similarly all the prime factors of $a-b$ and $a+b$ divide $(a-b)(a+b)$ . So one number does not seem to have an obvious advantage over the other. However, if we look at the data then we see that $ab$ dominates. Let $f(x)$ be the average number of distinct prime factors in all such $ab, a \le x$ and $g(x)$ be the average number of distinct prime factors in all such $(a-b)(a+b), a \le x$ . Update : Experimental data for the first $6.1 \times 10^{9}$ pairs of $(a,b)$ shows that $f(x) - g(x) \sim 0.30318$ . Instead of distinct prime factors, if we count the number of divisors then $f(x) - g(x) \sim 0.848$ . Question : Why is $ab$ likely to have more distinct prime factors or divisors than $(a-b)(a+b)$ and what is the limiting value of $f(x) - g(x)$ ?
On one hand, \begin{align*} \sum_{a,b\le x} \omega(ab) &= \sum_{a,b\le x} \sum_{\substack{p\le x \\ p\mid ab}} 1 = \sum_{p\le x} \sum_{\substack{a,b\le x \\ p\mid ab}} 1 \\ &= \sum_{p\le x} \biggl( \sum_{\substack{a,b\le x \\ p\mid a}} 1 + \sum_{\substack{a,b\le x \\ p\mid b}} 1 - \sum_{\substack{a,b\le x \\ p\mid a,\, p\mid b}} 1 \biggr) \\ &= \sum_{p\le x} \biggl( \bigl( \tfrac xp+O(1) \bigr)(x+O(1)) + (x+O(1))\bigl( \tfrac xp+O(1) \bigr) - \bigl( \tfrac xp+O(1) \bigr)^2 \biggr) \\ &= 2x^2 \sum_{p\le x} \tfrac1p - x^2 \sum_{p\le x} \tfrac1{p^2} + O\biggl( x \sum_{p\le x} \bigl( 1+\tfrac1p \bigr) \bigg) = 2x^2 \sum_{p\le x} \tfrac1p - x^2 \sum_p \tfrac1{p^2} + o(x^2). \end{align*} On the other hand, $p\mid(a+b)$ and $p\mid(a-b)$ simultaneously if and only if either $p\mid a$ and $p\mid b$ , or $p=2$ and $a$ and $b$ are both odd. Therefore \begin{align*} \sum_{a,b\le x} \omega\bigl( (a+b)(a-b) \bigr) &= \sum_{a,b\le x} \sum_{\substack{p\le x \\ p\mid (a+b)(a-b)}} 1 = \sum_{p\le x} \sum_{\substack{a,b\le x \\ p\mid (a+b)(a-b)}} 1 \\ &= \sum_{p\le x} \biggl( \sum_{\substack{a,b\le x \\ p\mid (a+b)}} 1 + \sum_{\substack{a,b\le x \\ p\mid (a-b)}} 1 - \sum_{\substack{a,b\le x \\ p\mid (a+b),\, p\mid (a-b)}} 1 \biggr) \\ &= \sum_{p\le x} \biggl( \sum_{a\le x} \sum_{\substack{b\le x \\ b\equiv-a\,(\mathop{\rm mod}\,p)}} 1 + \sum_{a\le x} \sum_{\substack{b\le x \\ b\equiv a\,(\mathop{\rm mod}\,p)}} 1 - \sum_{\substack{a,b\le x \\ p\mid a,\, p\mid b}} 1 \biggr) \\ &\qquad{}- \sum_{\substack{a,b\le x \\ a,b \text{ both odd}}} 1 \\ &= \sum_{p\le x} \biggl( (x+O(1))\bigl( \tfrac xp+O(1) \bigr) + (x+O(1))\bigl( \tfrac xp+O(1) \bigr) - \bigl( \tfrac xp+O(1) \bigr)^2 \biggr) \\ &\qquad{}- \bigl( \tfrac x2+O(1) \bigr)^2 \\ &= 2x^2 \sum_{p\le x} \tfrac1p - x^2 \sum_p \tfrac1{p^2} - \tfrac{x^2}4 + o(x^2). \end{align*} From these two asymptotic formulas it follows that the difference of the two sums is asymptotic to $\frac{x^2}4$ , so that the average difference in the number of distinct prime factors is asymptotically $\frac14$ . Heuristically (in hindsight), the difference is entirely caused by the prime $2$ : since $a$ and $b$ are even or odd independently, there is a $\frac34$ chance that $p=2$ will contribute to $\omega(ab)$ ; but since $a+b$ and $a-b$ are both even or both odd, there is only a $\frac12$ chance that $p=2$ will contribute to $\omega\bigl( (a+b)(a-b) \bigr)$ .
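A crude Monte Carlo experiment (a sketch; the bound, sample size, and use of sympy's primefactors for $\omega$ are arbitrary choices, and pairs are sampled without the coprimality restriction, matching the computation above) is consistent with the average difference $\tfrac14$:

import random
from sympy import primefactors

random.seed(0)
N, trials = 10**5, 2000
diffs = []
for _ in range(trials):
    a, b = random.randint(2, N), random.randint(2, N)
    if a == b:
        continue                                    # avoid a - b = 0
    w_ab   = len(primefactors(a * b))               # omega(ab)
    w_diff = len(primefactors((a + b) * abs(a - b)))
    diffs.append(w_ab - w_diff)

print(sum(diffs) / len(diffs))                      # hovers near 1/4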
{ "source": [ "https://math.stackexchange.com/questions/4379359", "https://math.stackexchange.com", "https://math.stackexchange.com/users/60930/" ] }
4,382,642
Suppose $(X, d)$ is a metric space and $u \colon X → \mathbb{R}$ . Then $u$ will be called a virtual point of $X$ if, and only if, $u$ satisfies the following three conditions (for all $x, y, z \in X$): $u(x) - u(y) \leq d(x,y) \leq u(x) + u(y)$ , $\inf_{x \in X} u(x) = 0$ , and $u(z) \neq 0$ . What is the intuition behind such a definition? This is from Mícheál Ó Searcóid's book, where an equivalence is proposed between $X$ having no virtual points and $X$ being complete. Can someone please explain?
This function has the properties of a function of the form $u: x\mapsto d(a,x)$ for some mysterious point $a$ which does not actually exist (thanks to condition 3; if you remove condition 3 then you can actually take $u(x)=d(a,x)$ for some $a\in X$ ). What does it mean that $a$ does not exist? Well, it might be that $X\subset Y$ , and $a\in Y$ but $a\not\in X$ , so as far as $X$ is concerned $a$ is not actually a point. For instance, you can define the function $u(x)=|x-\sqrt{2}|$ on $\mathbb{Q}$ , which will be a virtual point of $\mathbb{Q}$ , because $\mathbb{Q}$ does not see $\sqrt{2}\in \mathbb{R}$ . And you can do that for any $a\in \mathbb{R}$ instead of $\sqrt{2}$ . So the intuition is that this type of function will be able to "imitate" the presence of points in a bigger space than $X$ , while not actually knowing what this space looks like. So they are sort of "potential" points, or... "virtual" points. (Of course once you know about the completion, you can actually construct the bigger space.) EDIT: I initially just wanted to provide general intuition, but since the answer has gained attention I'll provide more details. What does each axiom say? Axiom 1 is the one that makes the function $u$ behave like $x\mapsto d(a,x)$ for some $a$ in a bigger space $Y$ . Precisely, given a metric space $X$ and a function $u:X\to \mathbb{R}$ satisfying Axiom 1, one can simply define a bigger space $Y=X\cup \{a\}$ (with some symbol $a\not\in X$ ) and extend the metric of $X$ to $Y$ by $d(x,a)=d(a,x):= u(x)$ for each $x\in X$ (and obviously $d(a,a)=0$ ). Axiom 1 exactly says that this extended $d$ satisfies the triangular inequality. This being said, with just Axiom 1, this extended $d$ could be just a quasi-metric, and not an actual metric. If there were some $x\in X$ such that $u(x)=0$ , then with this construction, we would get $d(x,a)=0$ , but $x\neq a$ , so the separation axiom would not be satisfied! This is because when Axiom 3 is not satisfied, then $u$ already corresponds to an actual point of $X$ , so we are sort of adding a second copy of a point which already exists, and those two copies become indistinguishable by the metric. So the extended $d$ is an actual metric exactly when Axiom 3 is satisfied. (Another way of doing things would be to say that when Axiom 3 is not satisfied, then we don't add a new point to $X$ , we just take $Y=X$ , and $a\in Y$ is the unique point such that $u(a)=0$ . It has to be unique, since if $u(a)=u(b)=0$ , then $d(a,b)\leqslant u(a)+u(b)=0$ so $a=b$ .) Finally, Axiom 2 says that this $a$ is not some isolated point in this bigger $Y$ . More precisely, it is a limit point of $X$ in $Y$ : $a\in \overline{X}\subset Y$ . This is useful if we want to say that those virtual points are actually points in the completion, since that is exactly what the completion of a metric space is: we just add all the missing limit points of $X$ .
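For the concrete example $u(x)=|x-\sqrt 2|$ on $\mathbb{Q}$, the axioms are easy to test on random rationals; this is only an illustrative sketch ($\sqrt 2$ is, unavoidably, represented by a float here):

import random
from fractions import Fraction

random.seed(1)
SQRT2 = 2 ** 0.5                 # floating-point stand-in for the missing point

def u(x):
    return abs(float(x) - SQRT2)

pts = [Fraction(random.randint(-100, 100), random.randint(1, 100))
       for _ in range(200)]

for x in pts:
    for y in pts:
        d = abs(float(x - y))
        assert u(x) - u(y) <= d + 1e-12      # first half of Axiom 1
        assert d <= u(x) + u(y) + 1e-12      # second half of Axiom 1

print(min(u(x) for x in pts))    # small; over all of Q the infimum is 0
                                 # (Axiom 2), yet u never vanishes (Axiom 3)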
{ "source": [ "https://math.stackexchange.com/questions/4382642", "https://math.stackexchange.com", "https://math.stackexchange.com/users/676808/" ] }
4,383,353
For example, if you zoom out very far on a graph of the function $y = x^3$ , it appears like $x = 0$ , or in general, if you zoom out on the graph $x^n$ for $n > 1$ it appears like $x = 0$ , with the restriction that for even exponents $y \ge 0$ . However, if you look at the graph $\tan(x)$ it looks like an entire plane. Or, if you look at the graph $y = mx - \sin(x)$ it looks like $mx$ . Or if you look at $ax\sin(x)$ it looks like two triangles where the larger $a$ is, the larger the triangles are. Is there some sort of general method for finding what a graph will look like given a function? Is there a collection of rules? Is there even any way of describing "two solid triangles" in the context of a single variable function?
Zooming out a graph $f(x,y) = 0$ amounts to scaling the coordinates equally; introduce a parameter $t > 0$ and plot $f(tx, ty) = 0$ . As $t\to 0$ , the picture resembles zooming into the graph, while $t \to \infty$ corresponds to zooming out. In the case of $y = x^3$ or $y - x^3 = 0$ , one has $(ty) - (tx)^3 = 0$ or $y = t^2x^3$ . One also sees that in the case of a linear function $y = ax$ , the graph looks the same at all zoom scales since the $t$ parameter cancels out. In the example you mentioned, $y = x - \sin x$ corresponds to $ty = tx - \sin tx$ or $y = x - \frac 1t \sin tx$ . Then as $t \to \infty$ , the sin term drops off and the graph approaches $y = x$ . What the graph "becomes" or ultimately looks like as $t \to \infty$ can be easily described for some simple cases, like the examples above. For a more complicated example, like $y = x \sin x$ , it is trickier to describe what is happening to the graph as $t \to \infty$ . For $y = x \sin x$ specifically, as you observed it resembles two large triangles, which together can be described as the set of points $T = \{(x,y) : -|x| \leq y \leq |x|\} \subset \mathbb R^2$ . To argue that the graph $ty = tx \sin tx$ somehow "approaches" or "converges to" $T$ as $t \to \infty$ requires some notion of what it means for a family of subsets of $\mathbb R^2$ to "approach" or "converge to" another subset. There are many ways to interpret this idea, and some will be more useful than others in certain contexts. My first guess to formalize this idea might go something like this: let $G_t = \{(x,y) : f(tx,ty) = 0\}$ be the graph of the desired function at the scale $t$ . We will say the family of subsets $G_t$ "approaches" or "converges to" the subset $S \subset \mathbb R^2$ if it holds that $(x,y) \in S$ if and only if there is a sequence $(t_n)_n$ such that $t_n \to \infty$ as $n \to \infty$ , as well as a sequence of points $(x_n,y_n) \in G_{t_n}$ such that $(x_n,y_n) \to (x,y)$ as $n\to \infty$ . With this definition, we can prove that the graph of $y = x \sin x$ approaches the two-triangles set $T$ . Here is the argument: Let $(x,y) \in T$ be arbitrary, ie $-|x| \leq y \leq |x|$ or equivalently $-1 \leq y/x \leq 1$ . There is therefore a value $\theta > 0$ such that $y/x = \sin(\theta + 2\pi n)$ for all $n$ , or equivalently $y = x \sin (\theta + 2\pi n)$ . So let $(t_n)_n$ be the sequence such that $t_n = \frac1x(\theta + 2\pi n)$ for each $n$ (in the case that $x \geq 0$ ; in the case that $x < 0$ , replace $t_n$ by $-t_n$ ), and let $(x_n,y_n)_n$ be the constant sequence $(x,y)$ . Then clearly $(x_n,y_n)_n \to (x,y)$ as $n\to \infty$ , and moreover $y_n = x_n \sin (t_n x_n)$ for each $n$ , ie $(x_n,y_n) \in G_{t_n}$ for each $n$ . In the converse case, suppose $(x,y) \not\in T$ , ie $y/x < -1$ or $y/x > 1$ . Then we want to show there is no sequence of times $t_n$ and points $(x_n,y_n) \in G_{t_n}$ such that $t_n \to \infty$ and $(x_n,y_n) \to (x,y)$ as $n \to \infty$ . We can use a little topology to argue this. Note that $G_t \subset T$ for every $t > 0$ , and $T$ is a closed subset of $\mathbb R^2$ . This means there can be no sequence of points from $T$ converging to $(x,y)$ , if $(x,y) \not\in T$ . This completes the proof, and so we can say that the graph of $y = x\sin x$ does indeed converge to the subset $T$ of two triangles as you zoom out infinitely far (relative to the definition above). There may be a way to interpret what I have outlined above using the idea of a limit set from the field of dynamical systems.
Coming up with some theorem about what will happen to a general graph $f(x,y) = 0$ seems like a tall order, but there are a few things you can quickly say. Any asymptotes anywhere in the domain will be pushed to the line $x = 0$ . Aside from that, the picture $\lim_t G_t$ depends on the limiting behavior of $y = f(x)$ as $x \to \infty$ ; whether it oscillates or converges to some value or if it grows without bound. The growth rate also matters, specifically whether it is faster or slower than a linear function. The linear functions are (more or less) the only functions which are preserved by zooming in and out, so if $y = f(x)$ grows any faster than a linear function (all linear functions) as $x \to \infty$ , this will also cause the graph to smoosh against the line $x = 0$ as $t \to \infty$ . Conversely, anything which grows slower than a linear function (e.g., $y = \sqrt x$ ) will be smooshed against the line $y = 0$ in the limit. There is another, more compact definition for the set $S = \lim_t G_t$ if we write it in terms of the distance from a point to a set. Given a closed subset $A \subset \mathbb R^2$ and a point $(x,y) \not\in A$ , we can define the distance $$d\big((x,y), A\big) = \inf \big\{d\big((x,y), (a_1,a_2)\big) : (a_1, a_2) \in A\big\}$$ where the distance between the points $(x,y)$ and $(a_1, a_2)$ is the usual Euclidean distance formula. This says that the distance from the point $(x,y)$ to the set $A$ is whatever the smallest/minimal distance is between $(x,y)$ and any point of $A$ . Using this notion, we can restate the definition of what we mean when we say the family of subsets $G_t$ converges to another subset. Namely, $$(x,y) \in \lim_{t\to\infty} G_t \iff \lim_{t\to\infty} d\big((x,y), G_t\big) = 0$$ Using this definition, here is a proof that the graph of $y = x^2$ converges to the half-line $\{(0,y) : y \geq 0\}$ as one zooms out infinitely far. In this case, $G_t = \{(x,y) : ty = (tx)^2\}$ or equivalently $y = tx^2$ . So pick an arbitrary $y \geq 0$ , in which case for each $t > 0$ one can let $x = \sqrt{y/t}$ . For this $x$ and $y$ it holds that $(x,y) \in G_t$ , and therefore $$d\big((0,y), G_t\big) \leq x = \sqrt{y/t}.$$ Using the squeeze theorem, we then see that $d\big((0,y), G_t\big) \to 0$ as $t \to \infty$ . This demonstrates that $\{(0,y) : y \geq 0\} \subset \lim_t G_t$ . For the reverse inclusion, we would need to argue that any point $(x,y)$ with $x \neq 0$ or $y < 0$ must stay "bounded away" from the set $G_t$ for all sufficiently large $t$ . We can make use of a simple lemma: if $A_t$ is a descending family of closed subsets such that $G_t \subset A_t$ for each $t$ , then $\lim_t G_t \subset \bigcap_t A_t$ . This is true because, for any point $(x,y) \not\in \bigcap_t A_t$ , it holds that $(x,y) \not\in A_t$ for all sufficiently large $t$ - and, because each $A_t$ is closed, any point outside of $A_t$ is a positive distance away from $A_t$ . Say, for instance, $(x,y) \not\in A_{t_0}$ and observe $G_t \subset A_t \subset A_{t_0}$ for all $t \geq t_0$ . We also have, because $A_{t_0}$ is closed, that $(x,y) \not\in A_{t_0}$ implies $d\big((x,y),A_{t_0}\big) > 0$ . Hence $$d\big((x,y), G_t\big) \geq d\big((x,y), A_{t_0}\big) > 0$$ for all $t \geq t_0$ , and therefore $d\big((x,y),G_t\big) \not\to 0$ as $t \to \infty$ . We can apply this lemma to our problem here in the following way. For each $t > 0$ , let $A_t = \{(x,y) : y \geq 2\sqrt{t}|x| - 1 \text{ and } y \geq 0\}$ .
Observe that $(A_t)_t$ is a descending family of subsets and $G_t \subset A_t$ for each $t$ . By the lemma above, we then have that $$\lim_{t\to\infty} G_t \subset \bigcap_{t>0} A_t = \{(0,y) : y \geq 0\}$$ This completes the proof that the graph of $y = x^2$ converges to the half-line $x = 0$ , $y \geq 0$ as one zooms out infinitely far. Picture $G_t$ and $A_t$ for a particular $t$ value: the idea is that we overapproximate the graph $G_t$ by a sort of triangular cone shape (also implicitly restricting to the upper half-plane). The reason why $\bigcap_t A_t = \{(0,y) : y \geq 0\}$ is clear from this picture; as $t \to \infty$ , the slope of the boundary goes to infinity as well, and so it will "sweep past" any point $(x,y)$ with $x \neq 0$ , in which case $(x,y)$ has no hope of belonging to the intersection. There is a subtle difference between the two definitions of the set $\lim_t G_t$ that I presented above. In the first definition, $(x,y) \in \lim_t G_t$ when the set $G_t$ is close to $(x,y)$ at infinitely many times $t$ , but the distance between $G_t$ and $(x,y)$ may also be large for infinitely many times. In the second definition, $(x,y) \in \lim_t G_t$ when $G_t$ is close to $(x,y)$ for all sufficiently large $t$ , which is a stronger condition. This doesn't change the validity of either of the proofs I presented above, as showing convergence in the second sense will imply convergence in the first sense (but not the other way around).
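The convergence in the $y = x^2$ proof is easy to watch numerically: a brute-force minimization (a sketch; the grid and the test point $(0,1)$ are arbitrary choices) shows $d((0,1), G_t) \to 0$ roughly like $1/\sqrt t$:

import math

def dist_to_graph(t, x_max=2.0, n=100000):
    # brute-force min over x >= 0 of the distance from (0, 1) to (x, t*x**2)
    return min(math.hypot(x_max * i / n, t * (x_max * i / n) ** 2 - 1)
               for i in range(n + 1))

for t in (1, 10, 100, 1000, 10000):
    print(t, dist_to_graph(t))   # tends to 0, roughly like 1/sqrt(t)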
{ "source": [ "https://math.stackexchange.com/questions/4383353", "https://math.stackexchange.com", "https://math.stackexchange.com/users/671747/" ] }
4,384,127
By the compactness theorem, there are nonstandard models in which we don't have finite prime decomposition, and some elements are divisible by infinitely many distinct primes. But what if we want a nice nonstandard model where every element is divisible by only finitely many distinct primes — is that possible? It seems hard to construct one, if it is possible. I don't know how to construct models of Peano's arithmetic where I don't want something to happen. Of course, we cannot ask for finite prime decomposition, because in any nonstandard model there are nonstandard elements whose only prime divisor is $2$ , and hence they are divisible by $2^n$ for any $n \in \mathbb{N}$ .
There is no such model. The following is a theorem of Peano arithmetic: For every number $n$ , there is some number $n'$ that is divisible by all $k < n$ . Consider a non-standard model $\langle \mathfrak{M}, +, \cdot, 0, S\rangle$ of Peano arithmetic. Since $\mathfrak{M}$ is non-standard, it has some non-standard element $K$ so that $0 < K, 1 < K, 2 < K, \dots$ and so on, for each standard natural. By the theorem of Peano arithmetic quoted above, there is another number $K'$ that is divisible by all $k < K$ . In particular, $K'$ is divisible by $2$ , divisible by $3$ , divisible by $5$ , and so on for all the standard primes.
{ "source": [ "https://math.stackexchange.com/questions/4384127", "https://math.stackexchange.com", "https://math.stackexchange.com/users/710897/" ] }
4,384,496
I want to show that the average of an odd number of equally spaced points on the unit circle is equal to 0. More precisely, let $n$ be an odd number, $\theta_1,\ldots, \theta_n\in[0,2\pi)$ and $$ re^{i\psi}:=\frac{1}{n}\sum_{i=1}^ne^{i\theta_i}.$$ We want to show that if the $\theta_i$'s are equally spaced then $r=0$. I remark that I do not want to use Vieta's formulas but rather prove it "by hand". I was able to prove this formula for $r$: $$r=\frac{1}{n}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos(\theta_i-\theta_j)\right)^{1/2} $$ (I wouldn't know if this is a well-known formula or not...). If the angles are equally spaced, upon relabeling if necessary we have $\theta_i=\frac{2(i-1)\pi}{n},\:i=1,\ldots,n$ so that \begin{align*} r^2 & = \frac{1}{n^2}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos(\theta_i-\theta_j)\right) \\ & = \frac{1}{n^2}\left(n+2\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos\left(\frac{2(i-j)\pi}{n}\right)\right) \end{align*} Then we would like to show that $$\sum_{i=1}^{n-1}\sum_{j=i+1}^n\cos\left(\frac{2(i-j)\pi}{n}\right)=-\frac{n}{2}.$$ If we set $n=2k+1$, $k\in\mathbb{N}$, we can rewrite the previous formula in terms of $k$: $$\sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right)=-k-\frac{1}{2}.$$ Expanding this double sum gives \begin{align*} \sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right) & = \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right)+\ldots+\cos\left(\frac{-4k\pi}{2k+1}\right) \\ & + \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right)+\ldots+ \cos\left(\frac{-(4k-2)\pi}{2k+1}\right) \\ &\vdots \\ & + \cos\left(\frac{-2\pi}{2k+1}\right)+\cos\left(\frac{-4\pi}{2k+1}\right) \\ & +\cos\left(\frac{-2\pi}{2k+1}\right) \end{align*} which can be rearranged as: $$\sum_{i=1}^{2k}\sum_{j=i+1}^{2k+1}\cos\left(\frac{2(i-j)\pi}{2k+1}\right) =2k\cos\left(\frac{2\pi}{2k+1}\right)+(2k-1)\cos\left(\frac{4\pi}{2k+1}\right)+\ldots+2\cos\left(\frac{(4k-2)\pi}{2k+1}\right)+\cos\left(\frac{4k\pi}{2k+1}\right).$$ Then, my question is the following: is it true that $$ 2k\cos\left(\frac{2\pi}{2k+1}\right)+(2k-1)\cos\left(\frac{4\pi}{2k+1}\right)+\ldots+2\cos\left(\frac{(4k-2)\pi}{2k+1}\right)+\cos\left(\frac{4k\pi}{2k+1}\right)=-k-\frac{1}{2}?$$ I checked it for some values of $k$ and tried induction for the general case, but I couldn't get very far. Also, as I mentioned, one can prove the result using Vieta's formulas, but it seems that one should be able to prove it this way as well.
The answer by Golden_Ratio is totally fine, but here is a potentially more intuitive, less abstract way of seeing the answer using symmetry: Say you have five points. If you rotate them all by a fifth of a full turn, the five points are in the same five positions, just shuffled in a different order. The average of the five points therefore

1. has to be exactly the same, because the five points are the same;
2. has to be rotated by a fifth of a full turn.

The only number that can work here (i.e., be rotated but stay exactly the same) is the number $0$. So this must be the answer.
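If it helps to see the same symmetry argument written symbolically (for equally spaced points $e^{i\theta_1}\zeta^k$, $k=0,\dots,n-1$, with $\zeta=e^{2\pi i/n}$): multiplying by $\zeta$ only permutes the summands, so the average $S$ satisfies $\zeta S=S$, and therefore $$(\zeta-1)S=0 \implies S=0,$$ since $\zeta\neq 1$ for $n\geq 2$.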
{ "source": [ "https://math.stackexchange.com/questions/4384496", "https://math.stackexchange.com", "https://math.stackexchange.com/users/188415/" ] }
4,386,046
I tried to show that commutativity of addition implies associativity. For this I assumed that there is no associative property and $ a + b + c $ should be interpreted as $(a + b) + c $ . $$(a + b) + c = a + b + c = b + a + c = c + b + a = (c + b) + a=(b + c) + a$$ $$ =a+(b + c).$$ I suppose this is incorrect but I am not sure of where exactly the mistake is. Does it have to do with wrong or lack of use of parentheses? Am I required to use parentheses to apply the commutative property because the property is defined for only 2 elements so it should be $ (b + a) + c = c +( b + a ) $ instead of $b + a + c = c + b + a$ ?
The reason we can get away with writing $a + b + c$ without being concerned about exactly what it implies about the order of operations is because of the associative property, which is what you are trying to prove. So you either have to write parentheses in every one of your expressions adding three quantities, or you have to decide whether $x + y + z = (x + y) + z$ or $x + y + z = x + (y + z),$ and then you have to stick with the same choice for the entire proof. If you rely on being able to write $(a+b)+c=a+b+c$ then you are saying the leftmost addition always occurs first. In that case, we can always interpret anything of the form $x + y + z$ by explicitly putting the parentheses around the first two terms, $(x + y) + z$ , and the first few equations of your "proof" then become: $$ (a+b)+c = (a+b)+c = (b+a)+c \stackrel?= (c+b)+a. $$ Commutativity would support putting $c + (b + a)$ on the right-hand side of the last equation but in the first equation you ruled out that interpretation of the notation $c + b + a.$ You end up with something that cannot be shown merely by commutativity. (Alternatively, if you claim that $c + b + a$ means $c + (b + a),$ then neither of the first two equations can be shown by commutativity alone.)
{ "source": [ "https://math.stackexchange.com/questions/4386046", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1012467/" ] }
4,386,052
Suppose we are given a random variable $\sigma : \Omega \to S_n$ on the symmetric group on $n$ letters, and we are given numbers representing the probabilities $$p_{k,l} = \mathbb P(\sigma(k) = l).$$ Ignore for now that they are defined via the distribution of $\sigma$ and just consider numbers $p_{k,l} \geq 0$ with $\sum_{k=1}^n p_{k,l} = \sum_{l=1}^n p_{k,l} = 1$. Does there always exist a distribution, i.e. probabilities $\mathbb P(\sigma = \tau)$ summing to $1$, such that $p_{k,l} = \mathbb P(\sigma(k) = l)$? Is there a canonical choice (maybe with maximal support)? Of course, we have $$\mathbb P(\sigma = \tau) = \mathbb P(\sigma(1) = \tau(1),\dots, \sigma(n) = \tau(n)),$$ but the joint distribution of $(\sigma(1),\dots, \sigma(n))$ is not known, and the $\sigma(i)$ cannot be considered independent. We can write $$\mathbb P(\sigma = \tau) = \mathbb P(\sigma(1) = \tau(1)\mid\sigma = \tau\text{ on } \{2,\dots, n\})\, \mathbb P(\sigma(2) = \tau(2)\mid\sigma = \tau\text{ on } \{3,\dots, n\})\cdots \mathbb P(\sigma(n) = \tau(n)),$$ and this seems to be in the "permutation spirit", but I still don't see what probabilities one should use here. Note that we are given $n^2$ numbers, but need to determine $n!$ numbers. The distribution of $\sigma$ cannot be unique in general, as the following example shows: Let $G$ be a transitive subgroup of $S_n$ and suppose $\sigma$ has uniform distribution on $G$. Then $$\mathbb P(\sigma(k) = l) = \sum_{\tau \in G} \mathbb P(\sigma(k) = l\mid\sigma = \tau) \mathbb P(\sigma = \tau) = \frac{1}{|G|}\, |\{\tau \in G : \tau(k) = l\}|.$$ By transitivity there exists a $\pi\in G$ such that $\pi(l) = k$, and $\tau\mapsto\pi\circ\tau$ is a bijection of $G$, so $$|\{\tau \in G : \tau(k) = l\}| = |\{\tau \in G : \pi\circ\tau(k) = k\}| = |\{\tau \in G : \tau(k) = k\}| = |\operatorname{Stb}(k)|,$$ where $\operatorname{Stb}(k)$ is the stabilizer of $k$ under the action of $G$ on $\{1,\dots, n\}$. Since the action is transitive, the orbit-stabilizer theorem gives $$|\operatorname{Stb}(k)| = \frac{|G|}{n}.$$ Therefore, $$\mathbb P(\sigma(k) = l) = \frac1n.$$ Notably, this does not depend on $G$. By choosing $G = S_n$ and $G = A_n$ (alternating group) we get two different distributions for $\sigma$ with the same probabilities $\mathbb P(\sigma(k) = l)$.
Yes, such a distribution always exists; this is exactly the Birkhoff–von Neumann theorem. The matrix $P=(p_{k,l})$ you describe is doubly stochastic (nonnegative entries, all row and column sums equal to $1$), and Birkhoff's theorem says that the set of doubly stochastic matrices is the convex hull of the permutation matrices: there are permutations $\tau_1,\dots,\tau_m$ and weights $c_1,\dots,c_m \geq 0$ with $\sum_i c_i = 1$ such that $P = \sum_i c_i P_{\tau_i}$, where $P_\tau$ denotes the permutation matrix of $\tau$. Setting $\mathbb P(\sigma = \tau_i) = c_i$ then gives a random permutation with $\mathbb P(\sigma(k) = l) = p_{k,l}$. As your transitive-subgroup example shows, the decomposition is far from unique, so there is no single canonical choice from the decomposition itself; one natural distinguished choice is the maximum-entropy distribution with the given marginals, which exists and is unique because the constraint set is a nonempty compact convex set and entropy is strictly concave.
{ "source": [ "https://math.stackexchange.com/questions/4386052", "https://math.stackexchange.com", "https://math.stackexchange.com/users/166694/" ] }
4,387,200
A classic example of an infinite series that converges is: ${\displaystyle {\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+\cdots =\sum _{n=1}^{\infty }\left({\frac {1}{2}}\right)^{n}=1.}$ A classic example of an infinite integral that converges is: $\displaystyle\int_{1}^{\infty} \frac{1}{x^2} \,dx = 1.$ They feel very similar! But not quite the same. Is there a way to think about one in terms of the other? I ask partly because I want to borrow the nice geometric illustrations that the former converges (like this , or similarly for other geometric series ) to show the latter converging. ( Related question about illustrating the geometry of $\frac{1}{x}$ vs. $\frac{1}{x^2}$ .)
You can break the integral $\int_{1}^{\infty}\frac{1}{x^2}dx$ into a sum of integrals over intervals $[2^n,2^{n+1}]$ : $$\int_{1}^{\infty}\frac{1}{x^2}dx = \sum_{n=0}^{\infty}\int_{2^n}^{2^{n+1}}\frac{1}{x^2}dx = \sum_{n=0}^{\infty} \left(\frac{1}{2^n} - \frac{1}{2^{n+1}} \right) =\sum_{n=0}^{\infty} \frac{1}{2^{n+1}}=\sum_{n=1}^{\infty} \frac{1}{2^{n}}.$$
{ "source": [ "https://math.stackexchange.com/questions/4387200", "https://math.stackexchange.com", "https://math.stackexchange.com/users/28798/" ] }
4,397,434
A standard theorem concerning series of real numbers states that every absolutely convergent series of real numbers converges. I would like to know a counterexample to this statement when we are dealing only with rational numbers. More precisely, I would like to know an example of a series $\sum_{n=0}^\infty q_n$ of rational numbers such that $\sum_{n=0}^\infty|q_n|$ converges to a rational number and that $\sum_{n=0}^\infty q_n$ converges to an irrational number. Furthermore, I want that the reason why the example works is understandable by someone who is only aware of basic statements concerning series. If it wasn't for the last requirement, I would know how to do it. One possibility would be to consider the power series $$\sum_{n=0}^\infty\binom{-1/2}nx^n,$$ which converges to $1/\sqrt{1+x}$ in $(-1,1)$ . In particular, $$\sum_{n=0}^\infty\binom{-1/2}n\left(\frac34\right)^n=\frac2{\sqrt7}\notin\Bbb Q.$$ But \begin{align}\sum_{n=0}^\infty\left|\binom{-1/2}n\left(\frac34\right)^n\right|&=\sum_{n=0}^\infty(-1)^n\binom{-1/2}n\left(\frac34\right)^n\\&=\sum_{n=0}^\infty\binom{-1/2}n\left(-\frac34\right)^n\\&=2\in\Bbb Q.\end{align} Another possibility consists in using a counting argument (although this only proves that a counter-example exists, rather than exhibiting one). The numbers of the form $$\sum_{n=0}^\infty\frac{\varepsilon_n}{3^n},$$ where $(\varepsilon_n)_{n\in\Bbb Z_+}$ is a sequence which takes only the values $1$ and $-1$ , form an uncountable set. So, for some sequences $(\varepsilon_n)_{n\in\Bbb Z_+}$ , the sum is irrational. But $$\sum_{n=0}^\infty\left|\frac{\varepsilon_n}{3^n}\right|=\frac32\in\Bbb Q.$$
Here’s one that’s perhaps more in the spirit of direct demonstration that you were looking for: $$ \sum_{k=1}^\infty\frac1{k(k+1)}=\sum_{k=1}^\infty\left(\frac1k-\frac1{k+1}\right)=1\;, $$ $$ \sum_{k=1}^\infty\frac{(-1)^k}{k(k+1)}=\sum_{k=1}^\infty(-1)^k\left(\frac1k-\frac1{k+1}\right)=1+2\sum_{k=1}^\infty\frac{(-1)^k}k=1-2\log2\;. $$
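A quick numeric look at the two limits, if you want to see the partial sums converge (a throwaway script, not part of the argument; the cutoff $2\cdot 10^5$ is arbitrary):

```python
# The absolute series should approach 1 (rational) and the signed series
# 1 - 2 ln 2 (irrational).
import math

s_abs = sum(1 / (k * (k + 1)) for k in range(1, 200001))
s_alt = sum((-1) ** k / (k * (k + 1)) for k in range(1, 200001))

print(s_abs)                        # ~ 0.999995, limit 1
print(s_alt, 1 - 2 * math.log(2))   # both ~ -0.386294
```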
{ "source": [ "https://math.stackexchange.com/questions/4397434", "https://math.stackexchange.com", "https://math.stackexchange.com/users/446262/" ] }
4,403,865
Originally posed by Fermat and subsequently generalized as the sum of two squares theorem, we have the following statement. An integer greater than one can be written as a sum of two squares if and only if its prime decomposition contains no factor $p^k$, where the prime $p\equiv 3 \pmod {4}$ and $k$ is odd. My question is simple. Is there any known variation of this theorem? For instance, if we refer to the theorem above as the $1 \pmod {4}$ version, I would like to know whether there is a $1 \pmod {6}$ version, a $1 \pmod {8}$ version, a $1 \pmod {12}$ version, and so on. The Diophantine equation need not be similar to the two-squares version. For example, someone might find some property of the prime decomposition, under some modular restriction, coming from a Diophantine equation of degree higher than 2. I've tried to make the question simple, but I'm not sure whether the points have been conveyed to the readers. If they were not clear, please let me clarify them with further comments. Thanks. Edit: My question was posed to ask for some variation with regard to the modular restriction on the prime decomposition. For example, let's think about some formula $A$ which is a Diophantine polynomial. $$ A = n $$ When $A = a^2 + b^2$, it is the sum of two squares theorem, which I refer to as the $1 \pmod {4}$ version. My main question is whether there is any $A$ that makes $n$ on the right-hand side be factorized only with $1 \pmod {6}$ numbers, or $1 \pmod {8}$ numbers, or $1 \pmod {12}$ numbers. If there is, we may refer to them as the $1 \pmod {6}$ version, the $1 \pmod {8}$ version, the $1 \pmod {12}$ version. I see that some users are already sharing their examples, which I appreciate. Thank you for your interest in my question.
The sum of two squares theorem can be proved by working in $\Bbb Z[i]$. By working in $\Bbb Z[\zeta_3]$ instead, where $\zeta_3$ is a third root of unity, one may prove that an integer greater than one can be written in the form $a^2+3b^2$ (or equivalently, in the form $a^2+ab+b^2$, see the comments) if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime $p \equiv 2 \pmod{3}$. There is a similar statement for every quadratic number field with class number one. This may be proved by using unique factorization in the ring of integers, quadratic reciprocity and the characterization of splitting of primes in quadratic extensions in terms of Legendre symbols. Here are some more examples: By working in $\Bbb Z[\sqrt{2}]$, one gets that an integer greater than one can be written in the form $a^2-2b^2$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime $p \equiv 3,5 \pmod{8}$. By working in $\Bbb Z[\sqrt{-2}]$, one gets that an integer greater than one can be written in the form $a^2+2b^2$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime $p \equiv 5,7 \pmod{8}$. By working in $\Bbb Z[\frac{1+\sqrt{-19}}{2}]$, one gets that an integer greater than one can be expressed as $a^2+ab+5b^2$ if and only if its prime decomposition contains no factor of the form $p^k$, where $k$ is odd and $p=2$ or $p\equiv 2,3,8,10,12,13,14,15,18 \pmod{19}$ (the quadratic non-residues mod $19$). Note that the quadratic form here is non-diagonal, because the ring of integers of $\Bbb Q(\sqrt{-19})$ is not $\Bbb Z[\sqrt{-19}]$. There's another phenomenon that these examples so far haven't covered. If the number field is real quadratic and there is no unit with norm $-1$ in $\mathcal O_K$, then we need to introduce a $\pm$ in the quadratic form to get a correct statement. For example, in $\Bbb Z[\sqrt{7}]$ one has the fundamental unit $3\sqrt{7}-8$ with norm $1$, and so one gets that an integer greater than one can be written in the form $\pm(a^2-7b^2)$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime $p \equiv 5,11,13,15,17,23 \pmod{28}$. It is conjectured that there are infinitely many real quadratic number fields of class number $1$, so if that conjecture holds, we get infinitely many such theorems. ( LMFDB lists 177168 examples ) Here's the general theorem that may be specialized using some Legendre symbol computations (i.e. quadratic reciprocity) to give congruence conditions:

Theorem. Let $K=\Bbb Q(\sqrt{D})$ be a quadratic number field with $D$ square-free and suppose that the ring of integers $\mathcal O_K$ has class number $1$. We have six cases:

1. $D<0$ and $D \not\equiv 1 \pmod 4$
2. $D<0$ and $D\equiv 1 \pmod 8$
3. $D<0$ and $D\equiv 5 \pmod 8$
4. $D>0$ and $D \not\equiv 1 \pmod 4$
5. $D>0$ and $D \equiv 1 \pmod 8$
6. $D>0$ and $D \equiv 5 \pmod 8$

For the cases 4-6, we have two subcases each: a) the fundamental unit of $\mathcal O_K$ has norm $-1$; b) the fundamental unit of $\mathcal O_K$ has norm $1$. We then have the following properties, one for each case:

1. An integer greater than one can be written in the form $a^2-Db^2$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $\left(\frac{D}{p}\right)=-1$
2. An integer greater than one can be written in the form $a^2+ab+\frac{1-D}{4}b^2$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $\left(\frac{D}{p}\right)=-1$
3. An integer greater than one can be written in the form $a^2+ab+\frac{1-D}{4}b^2$ if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $p=2$ or $\left(\frac{D}{p}\right)=-1$
4. An integer greater than one can be written in the form $a^2-Db^2$ (in subcase a) or in the form $\pm(a^2-Db^2)$ (in subcase b) if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $\left(\frac{D}{p}\right)=-1$
5. An integer greater than one can be written in the form $a^2+ab+\frac{1-D}{4}b^2$ (in subcase a) or in the form $\pm(a^2+ab+\frac{1-D}{4}b^2)$ (in subcase b) if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $\left(\frac{D}{p}\right)=-1$
6. An integer greater than one can be written in the form $a^2+ab+\frac{1-D}{4}b^2$ (in subcase a) or in the form $\pm(a^2+ab+\frac{1-D}{4}b^2)$ (in subcase b) if and only if its prime decomposition contains no factor $p^k$ where $k$ is odd and $p$ is a prime such that $p=2$ or $\left(\frac{D}{p}\right)=-1$

Note that the extra exclusion $p=2$ occurs exactly in the cases $D\equiv 5\pmod 8$, where $2$ is inert (compare the $\Bbb Q(\sqrt{-19})$ example above).
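For readers who want to experiment, here is a small brute-force check of the $\Bbb Z[\zeta_3]$ case stated at the top (my own throwaway script: naive trial division and a naive search, so only suitable for small $n$):

```python
# Check: n > 1 is of the form a^2 + 3 b^2 iff no prime p ≡ 2 (mod 3)
# divides n to an odd power.
def representable(n):
    # Search a, b >= 0 with a^2 + 3 b^2 = n.
    b = 0
    while 3 * b * b <= n:
        r = n - 3 * b * b
        s = int(r ** 0.5)
        if s * s == r or (s + 1) * (s + 1) == r:  # guard against float rounding
            return True
        b += 1
    return False

def criterion(n):
    # Trial division; reject if a prime p ≡ 2 (mod 3) appears to an odd power.
    p, m = 2, n
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 3 == 2 and e % 2 == 1:
                return False
        p += 1
    return not (m > 1 and m % 3 == 2)  # leftover prime factor, exponent 1

assert all(representable(n) == criterion(n) for n in range(2, 2000))
print("criterion matches brute force for 2 <= n < 2000")
```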
{ "source": [ "https://math.stackexchange.com/questions/4403865", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1031717/" ] }
4,404,052
The Traveling Salesperson Problem is originally a mathematics/computer science optimization problem in which the goal is to determine a path to take between a group of cities such that you return to the starting city after visiting each city exactly once and the total distance (longitude/latitude) traveled is minimized. For $n$ cities, there are $(n-1)!/2$ unique paths - and we can see that as $n$ increases, the number of paths to consider becomes enormous in size. For even a small number of cities (e.g. 15 cities), modern computers are unable to solve this problem using "brute force" (i.e. calculate all possible routes and return the shortest route) - as a result, sophisticated optimization algorithms and approximate methods are used to tackle this problem in real life. I was trying to explain this problem to my friend, and I couldn't think of an example which shows why the Travelling Salesperson Problem is difficult! Off the top of my head, I tried to give an example where someone is required to find the shortest route between Boston, Chicago and Los Angeles - but then I realized that the shortest path in this case is pretty obvious! (i.e. Move in the general East to West direction). Real world applications of the Travelling Salesperson Problem tend to have an additional layer of complexity as they generally have a "cost" associated between pairs of cities - and this cost doesn't have to be symmetric. For example, buses might be scheduled more frequently to go from a small city to a big city, but scheduled less frequently to return from the big city to the small city - thus, we might be able to associate a "cost" with each direction. Or even a simpler example, you might have to drive "uphill" to go from City A to City B, but drive "downhill" to go from City B to City A - thus there is likely a greater cost to go from City A to City B. Many times, these "costs" are not fully known and have to be approximated with some statistical model. However, all this can become a bit complicated to explain to someone who isn't familiar with all these terms. But I am still looking for an example to explain to my friend - can someone please help me think of an obvious and simple example of the Travelling Salesperson Problem where it becomes evidently clear that the choice of the shortest path is not obvious? Every simple example I try to think of tends to be very obvious (e.g. Manhattan, Newark, Nashville) - I don't want to overwhelm my friend with an example of 1000 cities across the USA : just something simple with 4-5 cities in which it is not immediately clear (and perhaps even counterintuitive) which path should be taken? 
I tried to show an example using the R programming language in which there are 5 (random) points on a grid; starting from the lowest point, the path taken involves choosing the nearest point from each current point:

```r
library(ggplot2)
set.seed(123)
x_cor = rnorm(5,100,100)
y_cor = rnorm(5,100,100)
my_data = data.frame(x_cor,y_cor)
```

```
      x_cor     y_cor
1  43.95244 271.50650
2  76.98225 146.09162
3 255.87083 -26.50612
4 107.05084  31.31471
5 112.92877  55.43380
```

```r
ggplot(my_data, aes(x=x_cor, y=y_cor)) + geom_point() + ggtitle("Travelling Salesperson Example")
```

But even in this example, the shortest path looks "obvious" (imagine you are required to start this problem from the bottom-most right point). I tried with more points:

```r
set.seed(123)
x_cor = rnorm(20,100,100)
y_cor = rnorm(20,100,100)
my_data = data.frame(x_cor,y_cor)
ggplot(my_data, aes(x = x_cor, y = y_cor)) + geom_path() + geom_point(size = 2)
```

But my friend still argues that the "find the nearest point from the current point and repeat" approach gives the shortest path (imagine you are required to start this problem from the bottom-most right point). How do I convince my friend that what he is doing corresponds to a "Greedy Search" that is only returning a "local minimum" and that it's very likely a shorter path exists? (Not even the "shortest path", just a "shorter path" than the "Greedy Search".) I tried to illustrate this by linking him to the Wikipedia page on Greedy Search that shows why Greedy Search can often miss the true minimum: https://en.wikipedia.org/wiki/Greedy_algorithm#/media/File:Greedy-search-path-example.gif Could someone help me think of an example to show my friend in which choosing the immediate nearest point from where you are does not result in the total shortest path? (e.g. some example that appears counterintuitive, i.e. if you choose a path always based on the nearest point from your current position, you can clearly see that this is not the optimal path) Is there a mathematical proof that shows that the "Greedy Search" algorithm in the Travelling Salesperson Problem has the possibility of sometimes missing the true optimal path? Thanks!
Here's a simple explicit example in which the greedy algorithm always fails, using this arrangement of cities (and Euclidean distances): If you apply the greedy algorithm on this graph, it'll look like the following (or a flipped version): This is true regardless of the starting point. This means the greedy algorithm gives us a path with a total distance traveled of $20 + 2 + 2\sqrt{101} \approx 42.1$. Clearly, this isn't the optimal solution though. Just by eyeballing it, you can see that this is the best path: It has a total length of $4\sqrt{101} \approx 40.2$, which is better than the greedy algorithm. You can explain to your friend that the reason why the greedy algorithm fails is because it doesn't look ahead. It sees the shortest path (in this case, the vertical one), and takes it. However, doing so may later force it to take a much longer path, leaving it worse off in the long run. While it's simple to see in this example, detecting every case where this happens is a lot harder.
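If you want to hand your friend something executable, here is a small Python script using one arrangement consistent with the numbers above (the answer's figure is not reproduced here, so the exact coordinates are a reconstruction): two far-apart cities on a horizontal line and two close cities on a vertical line.

```python
# Compare greedy nearest-neighbour with brute force on a 4-city instance.
from itertools import permutations
from math import dist

pts = [(-10, 0), (10, 0), (0, 1), (0, -1)]

def tour_length(order):
    # Closed tour: come back to the starting city at the end.
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

def greedy(start):
    tour, left = [start], [p for p in pts if p != start]
    while left:
        nxt = min(left, key=lambda p: dist(tour[-1], p))
        tour.append(nxt)
        left.remove(nxt)
    return tour

best = min(permutations(pts), key=tour_length)
print("greedy :", tour_length(greedy(pts[0])))  # 20 + 2 + 2*sqrt(101) ~ 42.10
print("optimal:", tour_length(best))            # 4*sqrt(101)          ~ 40.20
```

Greedy grabs the cheap vertical edge early and pays for it with the long horizontal edge at the end, which is exactly the look-ahead failure described above.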
{ "source": [ "https://math.stackexchange.com/questions/4404052", "https://math.stackexchange.com", "https://math.stackexchange.com/users/791334/" ] }
4,408,998
A commutative ring with unit is defined as $(R,+,\times)$, where $(R,+)$ is an abelian group, $(R,\times)$ is a commutative multiplicative monoid with $1$, and $+$ and $\times$ satisfy the distributive law. Could you give me an example where $(R,+,\times)$ fails to be a ring because $+$ and $\times$ do not satisfy the distributive law, although $(R,+)$ is an abelian group and $(R,\times)$ is a commutative multiplicative monoid with $1$?
Here is a "dumb" example. Let $R=\mathbb Z$, and let $\times=+$, i.e., addition and multiplication are the same thing. Now $(R,\times)$ is a commutative monoid, with a $1$ (i.e., $0\in R$). This is clearly not distributive: $1\times(1+1)=3\neq1\times 1+1\times 1=4$.
{ "source": [ "https://math.stackexchange.com/questions/4408998", "https://math.stackexchange.com", "https://math.stackexchange.com/users/985884/" ] }
4,410,589
I have a very, very simple quadratic function: We are building a new tunnel that has the shape of a parabolic curve. The tunnel is 10 m wide, and at 4 m from either side, the height of the tunnel is 6 m. Find the quadratic equation in standard form that models the ceiling of the new tunnel. The question is very simple, but I just can't understand what "at 4 m from either side" means in this question while the width is given as 10 m. I am not a native English speaker, so it is hard for me to understand what information this question is asking for. Any comments and answers will be appreciated! Edit: I see comments where people say that this question is badly worded, and I agree. Actually, this is a question I did on a test and I just got the result today; none of the people in my class got the answer. It was not me who wrote the question.
This figure should help you understand: (to-scale figure, credit: Dan)
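Since the figure is not reproduced here, here is the computation it illustrates, under the natural reading that the tunnel spans $0\le x\le 10$ at ground level, so the ceiling passes through $(4,6)$ and $(6,6)$:

$$y = a\,x(x-10),\qquad 6 = a\cdot 4\cdot(4-10) \implies a = -\tfrac14,$$

so in standard form

$$y = -\tfrac14 x^2 + \tfrac52 x,$$

with maximum height $y(5) = 6.25$ m at the centre of the tunnel.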
{ "source": [ "https://math.stackexchange.com/questions/4410589", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1009232/" ] }
4,417,325
When I say "divisibility trick" I mean "a recursive algorithm designed to show that, after multiple iterations, if the final output is a multiple of the desired number, then the original was also a multiple of the same number." Here's an example of a divisibility trick for 17. Rewrite $n$ as $10q+r$, with $r<10$. Then, evaluate $|q-5r|$. Repeat this process until left with an easily factorable number. Here's another. Rewrite $n$ as $100q+r$, with $r<100$. Then, evaluate $|r-2q|$. Repeat this process until left with an easily factorable number. Just to show these both work (or, at least, work for one particular number), let's try both on $31382$. METHOD ONE: $31382\rightarrow3128\rightarrow272\rightarrow17$, ergo $17\ |\ 31382$. METHOD TWO: $31382\rightarrow544\rightarrow34$, ergo $17\ |\ 31382$. These divisibility tricks rely on breaking the number down into groups of digits and applying some linear operation to them. However, when we try to use a non-linear function, things seem to break down. For example, much as method one here draws on the fact that $17\ |\ 51$ and the second relies on $17\ |\ 119$, let's try to do something with $17\ |\ 34$. Namely: Rewrite $n$ as $10q+r$, with $r<10$. Then, evaluate $|6q^2-5r|$. Repeat this process until left with an easily factorable number. We can try this with $34$ and see, yes, $6(9)-5(4)=34$, so $17\ |\ 34$. But this fails for most numbers. For $51$, we have $51\rightarrow145$, and it diverges from there (also note that $17\not|\ 145$). Even with $17$, which is obviously a multiple of $17$, we have $17\rightarrow29$. What separates the wheat from the chaff here, so to speak? Why is it that if we break down the digits of the multiple of some prime and make a linear relation around it, it seems to be true for all other multiples of the prime, but the same doesn't work for, say, a quadratic relation?
No, divisibility tests are not restricted to linear forms. As explained here & here the rule for casting out nines: $\,9\mid 10a+b\!\iff\! 9\mid a+b\,$ extends to higher degree as $\,9\mid p(10)\!\iff\! 9\mid p(1)\,$ [by $\!\bmod 9\!:\ p(10)\equiv p(1)\,],\:\!$ for any polynomial $p(x)$ with integer coef's (by $\rm\color{#0a0}{PCR}$ below). When $\,n = p(10)\,$ then $p(1)$ is the sum of the decimal digits of $n$ . Similarly $\,11\mid 10a\!+\!b\!\iff\! 11\mid a\!-\!b\,$ extends to $\,11\mid p(10)\!\iff\! 11\mid p(-1) =$ alternating digit sum. The common tests you refer to correspond to reversed forms of the above divisibility tests, e.g. $\bmod 17\!:\ 10a+b\equiv 0 \!\iff\!$ $ 10(a+b\color{#c00}{/10})\equiv 0\!\iff\!$ $ a\color{#c00}{-5}b\equiv 0\,$ by $\,\color{#c00}{1/10\equiv -5},\,$ i.e. it arises via scaling by $\,\color{#c00}{10^{-1}\equiv -5}.\,$ Similarly, if $\,\deg p = k\,$ then scaling by $(-5)^k\equiv 10^{-k}$ changes all powers of $10$ in $\,p(10)\,$ into powers of $-5$ , effectively reversing the coef's, e.g. for a quadratic $\ \ \ \bmod \color{#c00}{17}\!:\,\ \ 0\equiv \overbrace{a\:\!10^2+b\:\!10+c}^{\large p(\color{#c00}{10})} \overset{\times\ (\color{#c00}{-5})^{\large 2}\!\!}\iff\ \overbrace{0\equiv c(-5)^2+b(-5)+a}^{\large \tilde p(\color{#c00}{-5})}$ thus $\,\color{#c00}{17}\mid p(\color{#c00}{10})\!\iff\! 17\mid\tilde p(\color{#c00}{-5}) =\,$ reversed poly in radix $-5,\,$ by $\,\color{#c00}{10(-5)\equiv_{17} 1},\,$ e.g. $$ \color{#c00}{17}\mid 901\,\ \ {\rm by}\ \ 17\mid 109_{\color{#c00}{-5}} = 1(\color{#c00}{-5})^2+0(\color{#c00}{-5})+9 = 34 \quad$$ Such radix reciprocity divisibility by $\,d\,$ tests exist for any radices $r_1,\, r_2$ being reciprocal $\!\bmod d,\,$ i.e. when $\,r_1 r_2\equiv 1,\,$ e.g. for binary $\,r_2\!:\ \color{#0a0}{10(2)\equiv_{19} 1}$ and $\,\color{#c00}{10(-2)\equiv_{21} 1},\,$ so $$\begin{align} &\color{#0a0}{19}\mid 912\,\ \ {\rm by}\ \ 19\mid219_{\,\color{#0a0}2} \, =\ 2\,(\color{#0a0}2)^2\ +\ 1\,(\color{#0a0}2)\ +\ 9 = 19\\[.4em] &\color{#c00}{21}\mid 924\,\ \ {\rm by} \ \ 21\mid 429_{\color{#c00}{-2}}= 4(\color{#c00}{-2})^2+2(\color{#c00}{-2})+9 = 21\end{align}\quad $$ This is but one of numerous examples of higher-degree divisibility inferences that are ubiquitous in number theory and algebra. Such inferences become obvious once one masters congruences and modular arithmetic (see esp. $\rm\color{#0a0}{PCR}$ = Polynomial Congruence Rule , i.e. $\,a\equiv b\Rightarrow p(a)\equiv p(b)).\,$ See here for more on reverse (reciprocal) polynomials, and here for a similar application of such. Note $ $ The reason that these divisibility tests can be expressed as iterations of linear operations is because that is how polynomials can be generated (nested Horner form), e.g. $$ a_0 + a_1 x + a_2 x^2 + a_3 x^3 =\, a_0 + x(a_1 + x (a_2 + x(a_3)))\qquad$$ i.e. polynomials can be generated by iterating linear operations $\,f_{n+1} = c_{n+1}+ x f_n,\,$ so any polynomial operation (e.g. evaluation $\!\bmod d$ ) can be performed by recursively piggy-backing on this inductive generation process (see structural induction ). In this way, recursive evaluation $\!\bmod d\,$ of a polynomial (representation of an integer in radix notation) leads to a universal test for divisibility by $\,d,\,$ that works by repeatedly modding out leading chunks of digits $\!\bmod d\,$ (like longhand division but ignoring quotients). Your tests can be viewed as a reversed form of such a test.
The forward form has the advantage over the reversed form that it yields the exact remainder so it can be used for much more than just divisibility testing. Let's use the forward universal test to compute $\, 43211\bmod 7.\,$ The algorithm consists of repeatedly replacing the first two leading digits $\rm\ \color{#0a0}{d_n\ d_{n-1}}\ $ by $\rm\, \color{#0a0}{(\color{#000}3\, d_n + d_{n-1})}\bmod 7,\,$ since $\,10d_n+d_{n-1}\equiv 3d_n+d_{n-1}\pmod{\!7}$ $$\begin{array}{rrl}\bmod 7\!:\ &\color{#0A0}{4\ 3}\ 2\ 1\ 1^{\phantom{|^{|}}}\!\!\!&\\ \equiv\!\!\!\! &\color{#c00}{1\ 2}\ 1\ 1 &\!{\rm by}\ \ \:\! \smash[t]{\overbrace{3\cdot \color{#0a0}4 + \color{#0a0}3}^{\rm\textstyle\color{#0a0}{\,\color{#000} 3\,\ d_n\! + d_{n-1}}\!\!\!\!\!\!\!}} \equiv\ \color{#c00}1\\ \equiv\!\!\!\! &\color{#0af}{5\ 1}\ 1&\!{\rm by}\ \ \ 3\cdot \color{#c00}1 + \color{#c00}2\ \equiv\ \color{#0af}5\\ \equiv\!\!\!\! & \color{#f60}{2\ 1}&\!{\rm by}\ \ \ 3\cdot \color{#0af}5 + \color{#0af}1\ \equiv\ \color{#f60}2\\ \equiv\!\!\!\! &\color{#8d0}0&\!{\rm by}\ \ \ 3\cdot \color{#f60}2 + \color{#f60}1\ \equiv\ \color{#8d0}0 \end{array}\qquad\qquad\quad\ \, $$ Hence $\rm\ 43211\equiv 0\pmod{\!7},\,$ indeed $\rm\ 43211 = 7\cdot 6173.\:$ Generally the modular arithmetic is simpler if we use least magnitude residues, e.g. $\rm\, \pm\{0,1,2,3\}\ \:(mod\ 7),\,$ by allowing negative digits, e.g. here . Note that for modulus $11$ or $9\:$ the above method reduces to the well-known divisibility tests by $11$ or $9\:$ (a.k.a. "casting out nines" for modulus $9$ ).
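If you want to run the forward test, here is a minimal Python version of the digit-folding above for $d=7$ in base $10$ (a sketch; as promised, it returns the exact remainder):

```python
# Repeatedly replace the two leading digits d_n d_{n-1} by
# (3*d_n + d_{n-1}) mod 7, since 10 ≡ 3 (mod 7).
def mod7_by_digit_folding(n):
    digits = [int(c) for c in str(n)]
    while len(digits) > 1:
        lead = (3 * digits[0] + digits[1]) % 7
        digits = [lead] + digits[2:]
    return digits[0] % 7

assert mod7_by_digit_folding(43211) == 0  # the worked example above
assert all(mod7_by_digit_folding(n) == n % 7 for n in range(100000))
print("digit-folding remainder agrees with n % 7")
```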
{ "source": [ "https://math.stackexchange.com/questions/4417325", "https://math.stackexchange.com", "https://math.stackexchange.com/users/982888/" ] }
4,417,342
Suppose we have a linear transformation T that takes in a 2x2 matrix and outputs a 2x2 matrix. From my understanding, the idea of the matrix of this transformation (we will call this $M_T$ ) is that the matrix multiplication $M_T X$ is the same as the evaluation $T(X)$ for all 2x2 matrix inputs X, that is, the input/output mappings of the transformation can just be represented with a simple matrix. However, I have then been told that for the transformation T described above, $M_T$ will be a 4x4 matrix. This is confusing to me because the matrix multiplication between a 4x4 matrix and a 2x2 matrix is not defined, so how is this correct? Can someone explain this to me? Or am I misunderstanding the idea of a matrix of a linear transformation?
You are slightly misreading what "$M_T$ times the input" means. The space of $2\times 2$ matrices is a $4$-dimensional vector space, so once you choose a basis of it, say $E_{11}, E_{12}, E_{21}, E_{22}$, every $2\times 2$ matrix $X$ is identified with its coordinate vector in $K^4$ (stack the entries into a column, often written $\operatorname{vec}(X)$). The matrix $M_T$ of the transformation is then $4\times 4$, and the correct statement is that $M_T \operatorname{vec}(X) = \operatorname{vec}(T(X))$ for all $X$: you multiply $M_T$ by the $4\times 1$ coordinate vector of the input, not by the $2\times 2$ matrix $X$ itself. For instance, if $T(X)=X^{\mathsf T}$ is the transpose map, then with the basis above $M_T$ is the $4\times 4$ permutation matrix that swaps the second and third coordinates. In general, a linear map between spaces of dimensions $n$ and $m$ has an $m\times n$ matrix acting on coordinate vectors, whatever the vectors of the underlying spaces happen to look like.
{ "source": [ "https://math.stackexchange.com/questions/4417342", "https://math.stackexchange.com", "https://math.stackexchange.com/users/476357/" ] }
4,432,171
The limit is about "when $x$ approaches $a$, then $y$ approaches $L$". Then, shouldn't the epsilon and delta be like "For all delta, no matter how small the delta is, you can always find an epsilon that makes $-\varepsilon < f(x)-L < \varepsilon$"? But the conventional explanation says "for all epsilon, you find delta", which feels to me like "when $y$ approaches $L$, $x$ goes to $a$".
This is a common misunderstanding, and the only response I can ever think of is to just examine what the definition is really getting at. The statement $\lim\limits_{x\to a}f(x)=L$ means that, when $x$ is close to $a$ , $f(x)$ is close to $L$ . So we want something like "if $|x-a|$ is small, then $|f(x)-L|$ is small." But then we have to decide what "small" means. If $|x-a|$ is smaller than some $\delta>0$ , how small should $|f(x)-L|$ be? The answer is that we want to ensure $|f(x)-L|$ is arbitrarily small as long as $x$ is sufficiently close to $a$ . This means that, if we want $f(x)$ within $0.001$ of $L$ , I can tell you how close $x$ must be to $a$ . And there is no reason to find the optimal choice for this distance; the important thing is that there is some $\delta>0$ for which $|x-a|<\delta$ is enough to guarantee $|f(x)-L|<0.001$ . This should work not just for $\varepsilon=0.001$ , of course, but for any $\varepsilon>0$ . (By the way I should be saying $0<|x-a|<\delta$ but you get the point.) It's also worth examining why the reverse definition is not what we want. We could try saying "for any $\delta>0$ there exists $\varepsilon>0$ such that $0<|x-a|<\delta$ implies $|f(x)-L|<\varepsilon$ ." The immediate issue is that $\varepsilon$ might always be huge. In the correct definition, the "for all $\varepsilon>0$ " includes every arbitrarily small value for $\varepsilon$ . But in the reverse definition, the smallness of $\delta$ doesn't guarantee the smallness of $\varepsilon$ .
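To see concretely why the reversed definition is too weak, here is one example (with arbitrary numbers of my choosing): take $f(x)=x$, $a=0$ and the wrong value $L=5$. For any $\delta>0$, choosing $\varepsilon=\delta+6$ gives $$0<|x-0|<\delta \implies |f(x)-5| \leq |x|+5 < \delta+6 = \varepsilon,$$ so the reversed definition would happily certify $\lim_{x\to 0}x=5$. Because $\varepsilon$ is allowed to be huge, the condition carries no information, and the same trick works for any function bounded near $a$ and any $L$ whatsoever.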
{ "source": [ "https://math.stackexchange.com/questions/4432171", "https://math.stackexchange.com", "https://math.stackexchange.com/users/800956/" ] }
4,435,375
My question is referring to the exact definition mathematicians use when describing the decimal expansions of irrational numbers as "nonterminating and nonrepeating." Now, I understand, at least ostensibly, what is meant by "nonterminating" and the phrase "nonrepeating" seems simple enough to understand, but I've always wondered what is meant by the exact definition: It was always my understanding that the term "nonrepeating" was referring to a specific sequence of numbers showing up no more than once in the decimal expansion. I'm confused as to the exact criteria for fulfilling this requirement. Surely it can't be just a sequence of $1$ number, in the sense that $\pi$ starts with the number $3$ and then the number $3$ shows up again, and again and again an infinite number of times. Is it a sequence of $2$ numbers repeating then? For instance, in the golden ratio $\phi = 1.61803398874989$ we could take any $2$-number sequence, say $61$ or $98$ or $33$: would it be sufficient to say that that particular sequence never shows up again? That seems highly unlikely given the "nonterminating" nature of the decimal expansions for irrational numbers. If not, then what sequence of $n$ numbers is sufficient to declare a number "nonrepeating?" Moreover, philosophically, how does it make sense that any sequence of numbers doesn't show up more than once, when, necessarily, an irrational number has an infinitely long decimal expansion and a sequence of numbers (at least for practical determination) would be finite up to some $n \in \mathbb{N}$? I mean, the idea that the sequence length be infinitely long just seems like a convenient workaround that dilutes the significance of the "nonrepeating" quality of irrational numbers in the first place. Since, if you ever came upon a sequence that repeated for whatever $n$-digit sequence you had, you could always just say "oh actually I meant this $(n+1)$-digit sequence instead!" and keep adding digits to the sequence ad infinitum. Perhaps the term isn't referring to repeating sequences of numbers but rather the same numbers repeating one after another. But this cannot be the case, as we saw above with the golden ratio: in the short approximation written out we have $2$ cases where the same number is repeated immediately (i.e., $33$ and $88$); we also see this in this approximation for $\pi = 3.1415926535897932384626433$. So, if the term "nonrepeating" doesn't refer to repetition of sequences of numbers $n$ digits long, nor to the consecutive repetition of the same number, then what else would it refer to?
The phrase "non-repeating" can be a bit confusing when first introduced. A more precise, if less snappy, term is " not eventually periodic " (and this is what mathematicians mean when they say "non-repeating" in the context in question). A sequence of numbers $(a_i)_{i\in\mathbb{N}}$ is eventually periodic iff there are $m,k$ such that for all $n>m$ we have $a_n=a_{n+k}$ . The "eventually" here is connected to the " $m$ " - the sequence $$0,1,2,3,4,5,6,4,5,6,4,5,6,...$$ is not periodic but it is eventually periodic (take $m=4$ and $k=3$ ). On the other hand, the sequence $$0,1,0,0,1,0,0,0,1,0,0,0,0,1,...$$ is not even eventually periodic (although of course it does have lots of repetition in it). The connection with irrationality is this: For a real number $r$ , the following are equivalent: $r$ is irrational. Some decimal expansion of $r$ is not eventually periodic. No decimal expansion of $r$ is eventually periodic. (The issue re: these last two bulletpoints is that a few numbers have multiple decimal expansions. But this isn't a big deal to focus on at first.) In particular, the number $$0.01001000100001000001...$$ is irrational . And base $10$ , unsurprisingly, plays no role here: the above characterization works with "decimal expansion" replaced with "base- $b$ expansion" for any $b$ .
{ "source": [ "https://math.stackexchange.com/questions/4435375", "https://math.stackexchange.com", "https://math.stackexchange.com/users/421394/" ] }
4,438,180
A well-known overkill proof of the irrationality of $2^{1/n}$ ( $n \geqslant 3$ an integer) using Fermat's Last Theorem goes as follows: If $2^{1/n} = a/b$ , then $2b^n = b^n + b^n = a^n$ , which contradicts FLT. (See this , and see this comment for the reason this is a circular argument when using Wiles' FLT proof) The same method of course can't be applied to prove the irrationality of $\sqrt{2}$ , since FLT doesn't say anything about the solutions of $x^2 + y^2 = z^2$ . Often this fact is stated humorously as, "FLT is not strong enough to prove that $\sqrt{2} \not \in \mathbb{Q}$ ." But clearly, the failure of one specific method that works for $n \geqslant 3$ does not rule out that some other argument could work in the case $n = 2$ in which the irrationality of $\sqrt{2}$ is related to a Fermat-type equation. ( For example , if we knew that there are integers $x,y,z$ such that $4x^4 + 4y^4 = z^4$ , then with $\sqrt{2} = a/b$ , we would have $a^4 x^4 / b^4 + a^4 y^4 / b^4 = z^4$ and hence \begin{align} X^4 + Y^4 = Z^4, \quad \quad (X, Y, Z) = (ax, ay, bz) \in \mathbb{Z}^3, \end{align} a contradiction to FLT.) Is there a proof along these lines that $\sqrt{2} \not \in \mathbb{Q}$ using Fermat's Last Theorem?
$$ \left(18+17\sqrt{2}\right)^3 + (18-17\sqrt{2})^3 = 42^3, $$ so $\sqrt{2}\in \mathbb{Q}$ would contradict FLT (once you know that $\sqrt{2}\not\in\{\pm 18/17\}$ of course). Source: this article , which also shows that this is 'the only way' to show $\sqrt{2}$ is irrational using FLT, because FLT is almost true in $\mathbb{Q}(\sqrt{2})$: only in exponent $3$ do we get counterexamples, and all of them are 'generated' (see Lemma $2.1$ and the discussion immediately following its proof at the bottom half of page $4$) by the counterexample given above.
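The identity itself is quick to verify with the odd-powers-cancel expansion $(a+b)^3+(a-b)^3=2a^3+6ab^2$: with $a=18$ and $b=17\sqrt2$ (so $b^2=578$), $$2\cdot 18^3+6\cdot 18\cdot 578 = 11664+62424 = 74088 = 42^3.$$ And if $\sqrt2=p/q$ were rational, clearing denominators would give the integer Fermat counterexample $(18q+17p)^3+(18q-17p)^3=(42q)^3$.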
{ "source": [ "https://math.stackexchange.com/questions/4438180", "https://math.stackexchange.com", "https://math.stackexchange.com/users/436911/" ] }
4,438,278
This is not a complaint about my proofs course being difficult, or how I can learn to prove things better, as all other questions of this flavour on Google seem to be. I am asking in a purely technical sense (but still only with regards to mathematics; that's why I deemed this question most appropriate to this Stack Exchange). To elaborate: it seems to me that if you have a few (mathematical) assumptions and there is a logical conclusion which can be made from those assumptions, that conclusion shouldn't be too hard to draw. It literally follows from the assumptions! However, this clearly isn't the case (for a lot of proofs, at least). The Poincaré Conjecture took just short of a century to prove. I haven't read the proof itself , but it being ~320 pages long doesn't really suggest easiness. And there are countless better examples of difficulty. In 1993, Wiles announced the final proof of Fermat's Last Theorem, which was originally stated by Fermat in 1637 and was "considered inaccessible to prove by contemporaneous mathematicians" ( Wikipedia article on the proof ). So clearly, in many cases, mathematicians have to bend over backwards to prove certain logical conclusions. What is the reason for this? Is it humans' lack of intelligence? Lack of creativity? There is the field of automated theorem proving which I tried to seek some insight from, but it looks like the algorithms produced from this field are subpar when compared to humans, and even these algorithms are obscenely difficult to implement. So seemingly certain proofs are actually inherently difficult. So I plead again - why is this? (EDIT) To rephrase my question: are there any inherent mathematical reasons that contribute to explaining why proofs can be incredibly difficult?
Although this question may superficially look opinion-based, in actual fact there is an objective answer. The core reason is that the halting problem cannot be solved computably, statements about halting behaviour get arbitrarily difficult to prove, and elementary arithmetic is sufficient to express notions that are at least as general as statements about halting behaviour. Now the details. First understand the incompleteness theorem. Next, observe that any reasonable foundational system can reason about programs, via the use of Gödel coding to express and reason about finite program execution. Notice that all this reasoning about programs can occur within a tiny fragment of PA (1st-order Peano Arithmetic) called $\mathsf{PA}^-$. Thus the incompleteness theorem implies that, no matter what your foundational system is (as long as it is reasonable), there would always be true arithmetical sentences that it cannot prove, and these sentences can be explicitly written down and are not that long. Furthermore, the same reduction to the halting problem implies that you cannot even computably determine whether some arithmetical sentence is a theorem of your favourite foundational system S or not. This actually implies that there is no computable bound on the length of a shortest proof of a given theorem! To be precise, there is no computable function $f$ such that for every string $X$ we have that either $X$ is not a theorem of S or there is a proof of $X$ over S of length at most $f(X)$. This provides the (at first acquaintance surprising) answer to your question: Logically forced conclusions from an explicitly described set of assumptions may take a big number of steps to deduce, so big that there is no computable bound on that number of steps! So, yes, proofs are actually inherently hard! Things are even worse; if you believe that S does not prove any false arithmetic sentence (which you should; otherwise why are you using S?), then we can explicitly construct an arithmetical sentence Q such that S proves Q but you must believe that no proof of Q over S has fewer than $2^{10000}$ symbols! And in case you think that such phenomena may not occur in the mathematics that non-logicians come up with, consider the fact that the generalized Collatz problem is undecidable, and the fact that Hilbert's tenth problem was proposed with no idea that it would be computably unsolvable.
{ "source": [ "https://math.stackexchange.com/questions/4438278", "https://math.stackexchange.com", "https://math.stackexchange.com/users/397764/" ] }
4,456,466
There are arbitrarily long arithmetic progressions of primes e.g. $5, 11, 17, 23, 29$ for a $5$ -length progression, but no (infinite) arithmetic sequence of primes with common difference $d\neq 0$ , as $d\in\mathbb{Z}$ is an obvious constraint and $(a+nd)_{n\in\mathbb{N}}$ contains $a+ad=a(1+d)$ . A natural question is then: what is the longest geometric progression of primes? If $r>1$ is an integer then you can't get a progression longer than $1$ , as $ar$ has at least three distinct factors: $1, ar, a, r$ (possibly $a=r$ ). But what about arbitrary $r\in\mathbb{R}$ ? You can get a sequence of $2$ e.g. $2, 3$ by taking first term $a=2$ and common ratio $r=1.5$ . But it doesn't seem to be possible to get more. So my question is: Prove that if $a,ar,ar^2,\dots,ar^n$ is a list of prime numbers then either $r=1$ or $n\le 1$ . (Self-answering because I'm surprised not to find this question asked before; it seems elementary but interesting.)
(Edited for more generality) If $p, q, r$ are ANY three primes in geometric progression, then $q^2=pr$ so, by prime factorization, $p=q=r$ . Therefore the ratio of the progression is $1$ .
{ "source": [ "https://math.stackexchange.com/questions/4456466", "https://math.stackexchange.com", "https://math.stackexchange.com/users/454779/" ] }
4,460,646
I want to calculate the cosine of 452175521116192774 radians (it is around $4.52\cdot10^{17}$). Here is what different calculators say:

- WolframAlpha
- Desmos
- GeoGebra
- Python 3.9 (standard math module)
- Python 3.9 (mpmath library)

Obviously there is only one solution. There could be errors in floating-point precision for these calculators, but this stumps me. My calculator (TI-30XIS) says domain error (which is weird, because the cosine of, for example, a billion works just fine). How can I get the cosine of very large integers?
The problem is that your integer $$n=452175521116192774$$ can't be stored exactly as a standard 64-bit IEEE double precision floating point number. The closest double happens to be $$x=452175521116192768,$$ as can be seen from the binary representation $$ \begin{aligned} n &= 11001000110011100110011110110100000010000100000000000\color{red}{000110}_2 \\ x &= 11001000110011100110011110110100000010000100000000000\color{red}{000000}_2 \end{aligned} $$ where those last few bits in $n$ are lost, since the double format only stores the first 52 digits after the leading “1”. So in systems that use standard floating point (like Desmos, GeoGebra, and the Python math module) you will actually get $x$ when you enter $n$ in a place where a double is expected; in Python you can verify this as follows:

```python
>>> print("%.310g" % 452175521116192774)
452175521116192768
```

Consequently, when you ask for $\cos n$ these systems will answer with $$\cos x = -0.2639 \ldots$$ (which in itself is computed correctly; it's just that the input is not what you thought). In contrast, Wolfram Alpha and mpmath work with the exact number $n$, and give the correct answer $$\cos n = -0.5229 \ldots$$
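A practical takeaway, as a minimal sketch (mpmath converts Python integers exactly, so nothing is lost on input; the precision value is just a generous choice of mine):

```python
# Raise the working precision well above the ~18 digits of the argument
# so the reduction modulo 2*pi stays accurate.
import mpmath

mpmath.mp.dps = 50
n = 452175521116192774
print(mpmath.cos(n))      # -0.5229..., the correct value
print(mpmath.cos(n - 6))  # -0.2639..., i.e. cos(x) from the nearest double
```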
{ "source": [ "https://math.stackexchange.com/questions/4460646", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1040211/" ] }
4,469,506
I am currently juggling some integrals. In a physics textbook, Chaikin-Lubensky [1], Chapter 6, (6.1.26), I came upon an integral that goes \begin{equation} \int_0^{1} \textrm{d} y\, \frac{1 - J_0(y)}{y} - \int_{1}^{\infty} \textrm{d} y\, \frac{J_0(y)}{y} = -.116. \end{equation} They give the result only as a floating-point value, without naming sources. The value looks suspiciously like $\gamma - \ln(2)$ to me ($\gamma$ being the Euler-Mascheroni constant), which would solve a problem I have elsewhere. I am unfamiliar with the typical manipulations one uses on this kind of integral and with the various definitions of the Euler-Mascheroni constant. I fumbled around a bit with cosine integrals $\textrm{Ci}(y)$ but did not get far. So I would be happy to hear suggestions.
A relatively elementary way is to start with known $$\gamma=\int_0^1\frac{1-\cos t}{t}\,dt-\int_1^\infty\frac{\cos t}{t}\,dt.$$ Put $t=ax$ for $a>0$ and do some rearrangements, to get $$\int_0^1\frac{1-\cos ax}{x}\,dx-\int_1^\infty\frac{\cos ax}{x}\,dx=\gamma+\log a.$$ Now the integral representation $J_0(y)=\frac2\pi\int_0^{\pi/2}\cos(y\cos x)\,dx$ yields $$\int_0^1\frac{1-J_0(y)}{y}\,dy-\int_1^\infty\frac{J_0(y)}{y}\,dy=\frac2\pi\int_0^{\pi/2}(\gamma+\log\cos x)\,dx$$ after interchanging integrations (which is not hard to justify). The result now follows from $\int_0^{\pi/2}\log\cos x\,dx\color{gray}{=\int_0^{\pi/2}\log\sin x\,dx}=-(\pi/2)\log2$ .
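(Not part of the derivation, but the identity can be corroborated numerically; a rough mpmath sketch, where using quadosc with period $2\pi$ for the Bessel tail is an assumption based on $J_0$ 's asymptotic oscillation period:)

from mpmath import mp, besselj, quad, quadosc, euler, log, pi, inf

mp.dps = 25
I1 = quad(lambda y: (1 - besselj(0, y)) / y, [0, 1])
I2 = quadosc(lambda y: besselj(0, y) / y, [1, inf], period=2*pi)
print(I1 - I2)         # -0.11593...
print(euler - log(2))  # gamma - ln 2 = -0.11593...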
{ "source": [ "https://math.stackexchange.com/questions/4469506", "https://math.stackexchange.com", "https://math.stackexchange.com/users/875730/" ] }
4,476,061
Let $T$ be a linear operator on a finite-dimensional vector space $V$ over the field $K$ , with $\dim V=n$ . Is there a definition of the determinant of $T$ that (1) does not make reference to a particular basis of $V$ , and (2) does not require $K$ to be a particular field? As motivation, if $K=\mathbb C$ , I know of three ways to define $\det(T)$ , two of which refer to a choice of basis, and the other of which relies on $\mathbb C$ being algebraically closed: Choose an ordered basis $B$ of $V$ , and let $\mathcal M(T)$ denote the matrix of $T$ with respect to this basis. Then apply any of the formulas/algorithms for calculating a determinant to $\mathcal M(T).$ [This works for any field, but requires choosing a basis to express $\mathcal M(T)$ .] Choose an ordered basis $B$ of $V$ , and let $\det_n$ be an alternating multilinear map from $V^n\to K$ . Then the determinant of $T$ can be defined as $\det_n(TB)/\det_n(B)$ . [This works for any field, but requires choosing a basis to extract "column vectors of the matrix of $T$ ."] Define $\det(T)$ as the product of eigenvalues of $T$ , repeated according to their algebraic multiplicity. [This makes no reference to a basis, but only works because $\mathbb C$ is algebraically closed.]
This answer was edited quite a few times after receiving valuable input from several users. While its present form reflects quite faithfully the process that led to it, the patchy nature of the text is perhaps not especially pleasing to read. I thus decided to add a (hopefully) last edit down at the bottom, with a substantially streamlined and complete proof which could only be written in hindsight. The reader might therefore prefer to jump straight into it. You will find it under "EDIT 4". The determinant is the unique multiplicative map $$ \varphi : \text{End}(V)\to K $$ such that (1) $\varphi (I+T)=1$ , when $T^2=0$ , and (2) $\varphi (\alpha P+I-P) = \alpha $ , when $P$ is idempotent and has rank one. (See EDIT 2, below, for a proof of the fact that condition (1), above, is superfluous.) EDIT: Now that I have some free time, let me give some justification for my perhaps a bit too blunt (sorry!) answer above. It is not so hard to see that the determinant satisfies the above properties, so I will only prove uniqueness. The whole point of the question is to get rid of coordinates, but I believe it doesn't hurt if the proof is based on coordinates. In other words, let us speak of $n\times n$ matrices. So we suppose that $$ \varphi : M_n(K)\to K $$ is a multiplicative map satisfying the above conditions, and let us prove that $\varphi $ coincides with the determinant. For every $i$ and $j$ , consider the $n\times n$ matrix $E_{i,j}$ whose entries are all zero except for the $(i,j)$ entry, which is equal to 1. We then observe that if $A$ is any $n\times n$ matrix, $\lambda $ is any scalar, and $i\neq j$ , then $$ (I+\lambda E_{i,j})A $$ is the matrix one gets by applying to $A$ the elementary operation of adding $\lambda $ times the $j^{th}$ row of $A$ to the $i^{th}$ row. Since $i\neq j$ , one has that $(\lambda E_{i,j})^2=0$ , so the hypothesis gives $$ \varphi \big ((I+\lambda E_{i,j})A\big ) = $$ $$=\varphi (I+\lambda E_{i,j} ) \varphi (A) = \varphi (A) . $$ This implies that the value of $\varphi (A)$ remains unchanged no matter how many elementary row operations we apply to $A$ . As observed in "EDIT 3" below, we are also able to swap any two rows of $A$ by means of a sequence of elementary operations, as long as we change the sign of one of the rows. We are therefore able to bring $A$ to its reduced row echelon form, keeping the value of $\varphi (A)$ unchanged, except that we cannot make the leading entries of each row equal to 1, since this requires multiplying a row by a scalar, an operation under which $\varphi$ is certainly not invariant. Letting $A'$ be this quasi reduced row echelon form of $A$ (with leading entries not necessarily equal to 1), we consequently have that $\varphi (A)= \varphi (A')$ . Case 1: $A$ is invertible and hence $A'$ is diagonal. Letting $a_i$ denote the $i^{th}$ diagonal entry of $A'$ , we then have that $$ A'=\prod_{i=1}^n(a_iE_{i,i}+I-E_{i,i}), $$ whence $$ \varphi (A)= \varphi (A')=\prod_{i=1}^n\varphi (a_iE_{i,i}+I-E_{i,i}) = $$ $$ = \prod_{i=1}^na_i=\text{det}(A')=\text{det}(A). $$ Case 2: $A$ is not invertible and hence the last row of $A'$ is identically equal to zero. In this case $E_{n,n}A' =0$ , so $$ A' = (I-E_{n,n})A' = (0E_{n,n}+I-E_{n,n})A', $$ whence $$ \varphi (A)=\varphi (A') = \varphi \big ((0E_{n,n}+I-E_{n,n})A'\big )= $$ $$= \varphi (0E_{n,n}+I-E_{n,n})\varphi (A') = 0\varphi (A') = 0 = \text{det}(A). $$
EDIT 2: Notice that the hypothesis " $\varphi (I+T)=1$ , when $T^2=0$ " in the above proof was used exclusively to argue that $\varphi (I+\lambda E_{i,j}) = 1$ . Here we will prove that this hypothesis is superfluous. I thank user @math54321 for a comment which led to the proof of this result without any special hypothesis on $K$ . Theorem . The determinant is the unique multiplicative map $\varphi :M_n(K)\to K$ such that $\varphi (\alpha P+I-P) = \alpha $ , when $P$ is idempotent and has rank one. Proof . Given any such $\varphi $ , and in view of the discussion above, it is enough to show that $\varphi (I+\lambda E_{i,j}) = 1$ , whenever $i\neq j$ . Given $a\in K$ , nonzero, a simple computation shows that $$ (1+aE_{i,j})\Big (aE_{i,i}+1-E_{i,i}\Big ) = \Big (aE_{i,i}+1-E_{i,i}\Big )(1+E_{i,j}), $$ and since $$ \varphi \Big (aE_{i,i}+1-E_{i,i}\Big ) = a \neq 0, $$ we get $$ \varphi (1+aE_{i,j})=\varphi (1+E_{i,j}). \qquad (*) $$ The proof will then be concluded once we prove that $\varphi (1+E_{i,j})=1$ , which we do by considering two cases: Case 1) The characteristic of $K$ is 2. In this case notice that $$ (1+E_{i,j})^2 = 1+2E_{i,j}=1, $$ so $\varphi (1+E_{i,j})^2 = 1$ , and we see that $\varphi (1+E_{i,j})$ is the unique solution of the polynomial equation $x^2=1$ , namely 1 (recall that $1=-1$ here). Case 2) The characteristic of $K$ is not 2. In this case we have that $$ (1+E_{i,j})^2 = 1+2E_{i,j}, $$ and since $2\neq 0$ , we have $$ \varphi (1+E_{i,j})^2 = \varphi (1+2E_{i,j}) \mathrel{\buildrel (*)\over =} \varphi (1+E_{i,j}). $$ Since $1+E_{i,j}$ is invertible (with inverse $1-E_{i,j}$ ), and hence $\varphi (1+E_{i,j})\neq 0$ , we deduce that $\varphi (1+E_{i,j})$ is the unique nonzero solution of the polynomial equation $x^2=x$ , namely 1. $\qquad$ QED EDIT 3: As pointed out by user @math54321, in order to bring a matrix to its reduced row echelon form one also needs to be able to swap rows. However, since swapping rows causes a change of sign in the determinant, it is not reasonable to expect $\varphi (A)$ to be invariant under such an elementary operation. Instead, we will show invariance of $\varphi (A)$ under a row swap, followed by a change of sign of one of the rows involved. Clearly this is equally effective in the task of bringing a matrix to its reduced row echelon form. We will soon see that the key computation to support this claim is that, defining $$ \Sigma _{i, j} := (1+E_{i,j})(1-E_{j,i})(1+E_{i,j}), $$ one has $$ \Sigma _{i, j} = 1 - E_{j,j} - E_{i,i} + E_{i,j} - E_{j,i}. \qquad (**) $$ For example, in case $n=3$ , $i=2$ , and $j=1$ , this becomes $$ \Sigma _{2, 1} = \pmatrix{ 1 & 0 & 0 \cr 1 & 1 & 0 \cr 0 & 0 & 1 } \pmatrix{ 1 & -1 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 } \pmatrix{ 1 & 0 & 0 \cr 1 & 1 & 0 \cr 0 & 0 & 1 } = \pmatrix{ 0 & -1 & 0 \cr 1 & 0 & 0 \cr 0 & 0 & 1 }. $$ The computation in $(**)$ amounts to saying that $\Sigma _{i,j}$ is the $n\times n$ matrix that coincides with the identity matrix everywhere outside the $2\times 2$ submatrix formed by the rows and columns with indices $i$ and $j$ , where it instead looks like $ \pmatrix{ 0 & -1\cr 1 & 0 }. $ Moreover, given any matrix $A$ , the matrix $\Sigma _{i,j}A$ is easily seen to be the matrix obtained from $A$ by swapping the $i^{th}$ and $j^{th}$ rows, followed by a change of sign of the $i^{th}$ row (which was formerly known as the $j^{th}$ row).
A glance into the definition of $\Sigma _{i,j}$ is then enough to convince the reader that $\varphi (\Sigma _{i,j})=1$ , and hence that $\varphi (\Sigma _{i,j}A)=\varphi (A)$ . This shows that $\varphi $ is invariant under our row swapping/sign changing operation. We thank user @math54321 for pointing out the need to verify this extra point. EDIT 4. A streamlined proof. Theorem . The determinant is the unique multiplicative map $\varphi :M_n(K)\to K$ such that $$ \varphi (a P+I-P) = a , $$ for every $a$ in $K$ , and every idempotent matrix $P$ with rank one. Proof . It is clear that the determinant satisfies the above property, so we move on to the proof of uniqueness. Thus, supposing that $$ \varphi : M_n(K)\to K $$ is a multiplicative map satisfying the above condition, we must prove that $\varphi $ coincides with the determinant. For every $i$ and $j$ , consider the $n\times n$ matrix $E_{i,j}$ whose entries are all zero except for the $(i,j)$ entry, which is equal to 1. We then claim that $$ \varphi (I+a E_{i,j}) = 1, \tag{1} $$ for every $a\in K$ , and every $i\neq j$ . To prove this, and supposing first that $a$ is nonzero, a simple computation shows that $$ (1+aE_{i,j})\Big (aE_{i,i}+1-E_{i,i}\Big ) = \Big (aE_{i,i}+1-E_{i,i}\Big )(1+E_{i,j}), $$ and since $$ \varphi \Big (aE_{i,i}+1-E_{i,i}\Big ) = a \neq 0, $$ we get $$ \varphi (1+aE_{i,j})=\varphi (1+E_{i,j}). \tag{2} $$ The proof will then be concluded once we prove that $\varphi (1+E_{i,j})=1$ , which we do by considering two cases: Case 1) The characteristic of $K$ is 2. In this case notice that $$ (1+E_{i,j})^2 = 1+2E_{i,j}=1, $$ so $\varphi (1+E_{i,j})^2 = 1$ , and we see that $\varphi (1+E_{i,j})$ is the unique solution of the polynomial equation $x^2=1$ , namely 1 (recall that $1=-1$ here). Case 2) The characteristic of $K$ is not 2. In this case we have that $$ (1+E_{i,j})^2 = 1+2E_{i,j}, $$ and since $2\neq 0$ , we have by $(2)$ that $$ \varphi (1+E_{i,j})^2 = \varphi (1+2E_{i,j}) = \varphi (1+E_{i,j}). $$ Noticing that $1+E_{i,j}$ is invertible (with inverse $1-E_{i,j}$ ), and hence that $\varphi (1+E_{i,j})\neq 0$ , we deduce that $\varphi (1+E_{i,j})$ is the unique nonzero solution of the polynomial equation $x^2=x$ , namely 1. This takes care of claim $(1)$ for any nonzero $a$ , but if $a=0$ , the claim simply states that $\varphi (1)=1$ , which follows immediately from the hypothesis (choosing $P$ to be any rank one projection and $a=1$ ). Next consider the subgroup $H\subseteq GL_n(K)$ generated by the union of the following two sets $$ \big \{a E_{i, i}+I-E_{i, i}: a\in K^\times , \ 1\leq i\leq n\big \}, $$ and $$ \big \{ 1+aE_{i,j}: a\in K, \ 1\leq i,j\leq n,\ i\neq j\big \}. $$ Observe that the hypothesis together with $(1)$ imply that $\varphi $ coincides with the determinant on the generators of $H$ , and hence $$ \varphi (U)=\text{det}(U), \quad\forall \, U\in H.\tag{3} $$ We then claim that if $A$ is any $n\times n$ matrix, and $A'$ is the matrix obtained from $A$ by any one of the following so called elementary row operations , then there exists some $U\in H$ such that $UA=A'$ . The operations are: a) Replacing the $i^{th}$ row of $A$ with itself plus $\lambda $ times the $j^{th}$ row, where $i\neq j$ , and $\lambda \in K$ . b) Multiplying the $i^{th}$ row of $A$ by a nonzero $\lambda \in K$ . c) Swapping the $i^{th}$ row of $A$ with the $j^{th}$ row. In order to verify the claim under (a), it is enough to take $U=I+\lambda E_{i,j}$ .
Under (b) one takes the diagonal matrix $U=\lambda E_{i,i}+I-E_{i,i}$ , so it remains to check the claim under (c). Defining $$ \Sigma _{i, j} = (1+E_{i,j})(1-E_{j,i})(1+E_{i,j}), $$ a simple computation gives $$ \Sigma _{i, j} = 1 - E_{i,i} - E_{j,j} + E_{i,j} - E_{j,i}. \tag{4} $$ For example, in the case of $3\times 3$ matrices, if $i=2$ , and $j=1$ , this becomes $$ \Sigma _{2, 1} = $$ $$ = \pmatrix{ 1 & 0 & 0 \cr 1 & 1 & 0 \cr 0 & 0 & 1 } \pmatrix{ 1 & -1 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 } \pmatrix{ 1 & 0 & 0 \cr 1 & 1 & 0 \cr 0 & 0 & 1 } = $$ $$ = \pmatrix{ 0 & -1 & 0 \cr 1 & 0 & 0 \cr 0 & 0 & 1 }. $$ The computation in $(4)$ amounts to saying that $\Sigma _{i,j}$ is the $n\times n$ matrix that coincides with the identity matrix everywhere outside the $2\times 2$ sub-matrix formed by the rows and columns with indices $i$ and $j$ , where it instead looks like $ \pmatrix{ 0 & -1\cr 1 & 0 }. $ Moreover, given any matrix $A$ , the matrix $\Sigma _{i,j}A$ is easily seen to be the matrix obtained from $A$ by swapping the $i^{th}$ and $j^{th}$ rows, followed by a change of sign of the $i^{th}$ row (which was formerly known as the $j^{th}$ row). This unwanted change of sign can clearly be undone by further multiplying $A$ on the left by $$ -E_{i,i}+I-E_{i,i}, $$ so the claim is proved. We then see that all of the steps needed to bring $A$ to its reduced row echelon form can be performed by multiplying $A$ on the left by some member of the subgroup $H$ . This implies that, if $A'$ is now the reduced row echelon form of $A$ , then there exists some $U$ in $H$ such that $UA=A'$ . This allows us to conclude the proof that $\varphi $ coincides with the determinant, as follows: Case 1) $A$ is invertible and hence $A'$ is the identity. As seen above, there is some $U$ in $H$ such that $UA=I$ , whence $A=U^{-1}\in H$ , so the conclusion follows from $(3)$ . Incidentally, it is interesting to observe that we have just shown that $H=GL_n(K)$ ! Case 2) $A$ is not invertible and hence the last row of $A'$ is identically equal to zero. In this case $E_{n,n}A' =0$ , so $$ A' = (I-E_{n,n})A' = (0E_{n,n}+I-E_{n,n})A', $$ whence $$ \varphi (U)\varphi (A)=\varphi (UA)=\varphi (A') = \varphi \big ((0E_{n,n}+I-E_{n,n})A'\big )= $$ $$= \varphi (0E_{n,n}+I-E_{n,n})\varphi (A') =$$ $$= 0\varphi (A') = 0, $$ so $$ \varphi (A)=0 = \text{det}(A). $$ QED. I'd like to thank all users who gave important feedback to earlier versions of this result, including, but not limited to, @math54321 and @Aaratrick.
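(Added for illustration, not part of the proof — a small sympy sketch checking equation $(4)$ and the defining property $\det(aP+I-P)=a$ for a rank-one idempotent:)

import sympy as sp

n = 3
I = sp.eye(n)
E = lambda i, j: sp.Matrix(n, n, lambda r, c: int((r, c) == (i, j)))

# Sigma_{2,1} from equation (4), with 0-based indices i=1, j=0
Sigma = (I + E(1, 0)) * (I - E(0, 1)) * (I + E(1, 0))
print(Sigma)        # Matrix([[0, -1, 0], [1, 0, 0], [0, 0, 1]]), as in the example
print(Sigma.det())  # 1, consistent with phi(Sigma_{i,j}) = 1

a = sp.symbols('a')
P = E(0, 0)                 # the simplest rank-one idempotent
print(sp.det(a*P + I - P))  # a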
{ "source": [ "https://math.stackexchange.com/questions/4476061", "https://math.stackexchange.com", "https://math.stackexchange.com/users/477746/" ] }
4,476,062
Let $f$ be a twice differentiable function, and assume $$ \begin{cases} f\left(0\right)=0\\ f\left(1\right)=1\\ f'\left(0\right)=0\\ f'\left(1\right)=0 \end{cases} $$ I want to prove that there exists $x_0 \in [0,1]$ such that $ |f''\left(x_{0}\right)|\geq4 $ . Now, I know that this question has already been asked before , but I am interested in a solution that uses neither integrals nor Taylor expansions (I am familiar with those solutions). I want to find a solution based solely on Lagrange's mean value theorem. Here's my work: Using Lagrange's theorem in $(0,1/2)$ and then in $(1/2,1)$ yields the existence of points $c_1 \in (0,1/2)$ and $c_2 \in (1/2,1)$ such that $ f'\left(c_{1}\right)=2f\left(\frac{1}{2}\right) $ and $ f'\left(c_{2}\right)=2\left(1-f\left(\frac{1}{2}\right)\right) $ . Now, using Lagrange's theorem on $f'$ in $(0,c_1)$ and $(c_2,1)$ yields the existence of $\theta_1 \in (0,c_1)$ and $\theta_2 \in (c_2,1) $ such that $$ \begin{cases} f''\left(\theta_{1}\right)=\frac{f'\left(c_{1}\right)}{c_{1}}=\frac{2f\left(\frac{1}{2}\right)}{c_{1}}\\ f''\left(\theta_{2}\right)=\frac{-f'\left(c_{2}\right)}{1-c_{2}}=\frac{-2\left(1-f\left(\frac{1}{2}\right)\right)}{1-c_{2}} \end{cases} $$ I thought that assuming $ f''\left(\theta_{1}\right)<4 $ and $ f''\left(\theta_{2}\right)>-4$ would lead me to a contradiction (and thus one of the assumptions must be false, giving the point we want), but I actually could not reach a contradiction. Any help would be appreciated. Thanks in advance.
{ "source": [ "https://math.stackexchange.com/questions/4476062", "https://math.stackexchange.com", "https://math.stackexchange.com/users/727735/" ] }
4,488,289
Let us consider the continuous functions over $[0,1]$ fulfilling $$ \int_{0}^{1} f(x) x^n\,dx = 0 $$ for $n=0$ and for every $n\in E\subseteq\mathbb{N}^+$ . The Müntz–Szász theorem gives that $$ \sum_{n\in E}\frac{1}{n} = +\infty \Longleftrightarrow f(x)\equiv 0 $$ so there is a non-zero continuous function $f(x)$ such that $$ \int_{0}^{1} f(x) x^{n^2}\,dx = 0 \tag{1}$$ holds for every $n\in\mathbb{N}$ . Question : can we construct a nice, explicit function $f\neq 0$ fulfilling $(1)$ for every $n\in\mathbb{N}$ ? We may consider functions of the form $$ f(x) = \sum_{n\geq 0} c_n P_n(2x-1) $$ with $P_n(2x-1)$ being the $n$ -th shifted Legendre polynomial. The orthogonality to $1$ and $x$ translates into $c_0=c_1=0$ , the orthogonality to $x^4$ translates into $\frac{2}{35}c_2+\frac{1}{70}c_3+\frac{1}{630}c_4=0$ , the orthogonality to $x^9$ translates into $\frac{3}{55}c_2+\frac{21}{715}c_3+ \frac{9}{715}c_4 +\frac{3}{715}c_5+\frac{3}{2860}c_6+\frac{9}{48620}c_7+\frac{1}{48620}c_8+\frac{1}{923780}c_9=0$ and so on. The minimal (with respect to the $\ell^2$ norm) solution of this infinite system with $c_2=1$ (or $c_4=1$ ) should give a sequence $\{c_n\}_{n\geq 0}$ ensuring the continuity of $f(x)$ , but this is non-trivial and I would appreciate a more explicit construction/example of such $f$ . Addendum : another possible construction is to apply the Gram-Schmidt process to $\{1,x,x^4,x^9,\ldots\}$ in order to get a sequence of polynomials $\{p_n(x)\}_{n\geq 0}$ such that $p_n(x)=\sum_{k=0}^{n} c_k x^{k^2}$ $n\neq m \Longrightarrow \langle p_n(x),p_m(x)\rangle = 0$ $\max_{x\in [0,1]} |p_n(x)| = 1$ or $\langle p_n(x),p_n(x)\rangle = 1$ then take $f(x)$ as the pointwise limit of a convergent subsequence of $\{p_n(x)\}_{n\geq 0}$ . Still, not really explicit. A more promising approach is to consider some lacunary Fourier series, like $$ g(\theta) = \sum_{n\geq 1}\frac{\cos(n\theta)}{n^2} - \sum_{n\geq 1}\frac{\cos(n^2\theta)}{n^4}, $$ which clearly fulfills $\int_{-\pi}^{\pi}g(\theta)\cos(n^2\theta)\,d\theta = 0$ , then turn such $g(\theta)$ into an $f(x)$ fulfilling $(1)$ via some slick substitution. Yet another way is to consider the inverse Laplace transform of $$ \frac{1}{s}\prod_{k=0}^{n}\frac{k^2+1-s}{k^2+1+s} $$ evaluated at $-\log x$ . This gives a polynomial, bounded between $-1$ and $1$ , which is orthogonal to $1,x,x^4,\ldots,x^{n^2}$ . Is this sequence of polynomials (or a subsequence of this sequence) convergent to a continuous function? I do not know. If so, $$ f(x)=\mathcal{L}^{-1}\left(\frac{\sin(\pi\sqrt{s-1})}{\sqrt{s-1}}\cdot\frac{\sqrt{s+1}}{\sin(\pi\sqrt{s+1})}\cdot \frac{1}{s}\right)(-\log x)$$ is an explicit solution. Here is a plot of the first polynomials produced by the last approach:
Short Answer. Expanding @orangeskid's answer, let $$ F(x) := \frac{1}{\Psi_{\infty}(0)^2} - x - \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2 \operatorname{sinhc}(\pi\sqrt{k^2 + 3})}{(k^2 + 1)(k^2 + 2)} x^{k^2+2}, $$ where $\operatorname{sinhc}(x) = \frac{\sinh x}{x}$ and $$ \Psi_{\infty}(0) = \frac{\operatorname{sinhc}(\pi\sqrt{2})}{\operatorname{sinhc}(\pi)} $$ (this is the $p=1$ case of the general formula derived in Step 3 below). Although the above series converges only for $x \in [0, 1)$ , we can prove that $F$ extends to an absolutely continuous function on $[0, 1]$ by setting $F(1) = 0$ and satisfies $$ \int_{0}^{1} F(x) x^{n^2} \, \mathrm{d}x = 0 $$ for any $ n = 1, 2, \ldots$ Below is the graph of $F(x)$ on $[0, 1]$ : Proof of the claim. Step 1. Consider a sequence $-\frac{1}{2} < \alpha_1 < \alpha_2 < \ldots$ . Also, we define the function $f_n$ by \begin{align*} f_n(x) &:= \frac{ \begin{vmatrix} 1 & x^{\alpha_1} & \cdots & x^{\alpha_n} \\ \langle 1, t^{\alpha_1} \rangle & \langle t^{\alpha_1}, t^{\alpha_1} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_1} \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle 1, t^{\alpha_n} \rangle & \langle t^{\alpha_1}, t^{\alpha_n} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_n} \rangle \end{vmatrix} }{ \begin{vmatrix} \langle t^{\alpha_1}, t^{\alpha_1} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_1} \rangle \\ \vdots & \ddots & \vdots \\ \langle t^{\alpha_1}, t^{\alpha_n} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_n} \rangle \end{vmatrix} }, \end{align*} where $\langle g(t), h(t) \rangle = \int_{0}^{1} g(t)h(t) \, \mathrm{d}t$ denotes the inner product on $L^2([0,1])$ . Then, as explained in @orangeskid's answer, for $\alpha > -\frac{1}{2}$ with $\alpha \notin \{\alpha_1, \alpha_2, \ldots\}$ , $$ x^{\alpha} = f_n + (x^{\alpha} - f_n) $$ is an orthogonal decomposition of $x^{\alpha}$ , where $x^{\alpha} - f_n \in V_n := \operatorname{span}(x^{\alpha_1}, \ldots, x^{\alpha_n})$ and $f_n \perp V_n$ . Since $V_n$ is increasing in $n$ , this implies that $f_n$ converges in $L^2([0,1])$ . Moreover, $$ \|f_n\|^2 = \operatorname{dist}(t^{\alpha}, V_n)^2 = \frac{G(t^{\alpha}, t^{\alpha_1}, \ldots, t^{\alpha_n})}{G(t^{\alpha_1}, \ldots, t^{\alpha_n})}, $$ where $G(v_1, \ldots, v_n) = \det[\langle v_i, v_j \rangle]$ is the Gram determinant. Step 2. We can expand the determinant in the numerator along the first row and compute the coefficients using Cauchy determinants . After a bit of computation, it turns out that \begin{align*} f_n(x) = x^{\alpha} - \sum_{k=1}^{n} \frac{\psi_{n,k}(\alpha_k)}{\psi_{n,k}(\alpha)} x^{\alpha_k}, \end{align*} where $\psi_{n,k}(\alpha)$ is the rational function in $\alpha$ given by $$ \psi_{n,k}(\alpha) := (\alpha_k - \alpha) \Psi_n(\alpha) \qquad\text{and}\qquad \Psi_n(\alpha) := \frac{\prod_{j=1}^{n} (\alpha_j + \alpha + 1)}{\prod_{j=1}^{n} (\alpha_j - \alpha)}. $$ A similar computation also shows that $$ \| f_n \|^2 = \frac{\prod_{j=1}^{n} (\alpha - \alpha_j)^2}{(2\alpha+1)\prod_{j=1}^{n} (\alpha + \alpha_j + 1)^2} = \frac{1}{(2\alpha+1)\Psi_n(\alpha)^2}. $$ Step 3. Now let us specialize to the case where $(\alpha_k)$ is of the form $\alpha_k = k^2 + p$ .
Then Euler's reflection formula for the gamma function and Stirling's approximation show that \begin{align*} \prod_{j=1}^{n} (j^2 - q) = \frac{(n-\sqrt{q})!(n+\sqrt{q})!}{(-\sqrt{q})!\sqrt{q}!} = (n!)^2 \operatorname{sinc} (\pi \sqrt{q}) \prod_{j=n+1}^{\infty} \frac{j^2}{j^2 - q}, \end{align*} where $\operatorname{sinc}(x) = \frac{\sin x}{x}$ is the (unnormalized) sinc function and $s! = \Gamma(s+1)$ . Plugging this into $\Psi_n(\alpha)$ , we get \begin{align*} \Psi_{n}(\alpha) &= \frac{\operatorname{sinhc}(\pi\sqrt{p+\alpha+1})}{\operatorname{sinc}(\pi\sqrt{\alpha - p})} \prod_{j=n+1}^{\infty} \frac{j^2 + p - \alpha}{j^2 + p + \alpha + 1}. \end{align*} and \begin{align*} \psi_{n,k}(\alpha_k) &= \lim_{s \to k} \left( k^2 - s^2 \right) \Psi_{n}(s^2 + p) \\ &= (-1)^{k-1} 2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) \prod_{j=n+1}^{\infty} \frac{j^2 - k^2}{j^2 + k^2 + 2p + 1}. \end{align*} Note that both $\Psi_n(\alpha)$ and $\psi_{n,k}(\alpha_k)$ converge as $n \to \infty$ : \begin{gather*} \Psi_{\infty}(\alpha) := \lim_{n\to\infty} \Psi_{n}(\alpha) = \frac{\operatorname{sinhc}(\pi\sqrt{p+\alpha+1})}{\operatorname{sinhc}(\pi\sqrt{p-\alpha})}, \\[0.5em] \lim_{n\to\infty} \psi_{n,k}(\alpha_k) = (-1)^{k-1} 2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}). \end{gather*} Moreover, it is clear from the formula above that $$ |\psi_{n,k}(\alpha_k)| \leq \frac{2k^2}{\pi\sqrt{k^2 + 2p + 1}} e^{\pi\sqrt{k^2 + 2p + 1}}$$ for all $1 \leq k \leq n$ . Therefore, by the dominated convergence theorem, for each $x \in [0, 1)$ , \begin{align*} f(x) &:= \lim_{n\to\infty} f_n(x) \\ &= x^{\alpha} - \lim_{n\to\infty} \frac{1}{\Psi_n(\alpha)} \sum_{k=1}^{n} \frac{\psi_{n,k}(\alpha_k)}{k^2 + p - \alpha} x^{k^2 + p} \\ &= \bbox[color:navy;border:1px dotted navy;padding:3px]{x^{\alpha} + \frac{1}{\Psi_{\infty}(\alpha)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2}{k^2 + p - \alpha} \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) x^{k^2+p}.} \end{align*} Of course, this $f(x)$ must coincide with the $L^2$ -limit of $f_n$ . Therefore, for each $n = 1, 2, \ldots,$ $$ \int_{0}^{1} f(x)x^{n^2+p} \, \mathrm{d}x = \lim_{N \to \infty} \int_{0}^{1} f_N(x)x^{n^2+p} \, \mathrm{d}x = 0. $$ Below is the graph of $f(x)$ for $x \in [0, 1)$ when $\alpha = 0$ and $p = 1$ . Step 4. The function $f$ is almost what we need, but the issue is that $f$ appears to have a discontinuity at $x = 1$ . To remedy this, we now fix $\alpha = 0$ , so that the corresponding $f$ is given by $$ f(x) = 1 + \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2}{k^2 + p} \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) x^{k^2+p}. $$ Then $$ \int_{0}^{1} f(x) \, \mathrm{d}x = \langle f, 1 \rangle = \| f \|^2 + \underbrace{\langle f, 1-f \rangle}_{=0} = \frac{1}{\Psi_{\infty}(0)^2}. $$ Using this, we define $F : [0, 1] \to \mathbb{R}$ as \begin{align*} F(x) &= \int_{x}^{1} f(t) \, \mathrm{d}t \\ &= \int_{0}^{1} f(x) \, \mathrm{d}x - \int_{0}^{x} f(x) \, \mathrm{d}x \\ &= \bbox[color:navy;border:1px dotted navy;padding:3px]{\frac{1}{\Psi_{\infty}(0)^2} - x - \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}})}{(k^2 + p)(k^2 + p + 1)} x^{k^2+p + 1}.} \end{align*} Then $F$ is absolutely continuous on all of $[0, 1]$ .
Also, performing integration by parts, \begin{align*} 0 = \int_{0}^{1} f(x)x^{n^2+p} \, \mathrm{d}x &= [ -F(x)x^{n^2+p} ]_{0}^{1} + (n^2 + p - 1)\int_{0}^{1} F(x) x^{n^2+p-1} \, \mathrm{d}x \end{align*} and hence $$ \int_{0}^{1} F(x) x^{n^2+p-1} \, \mathrm{d}x = 0, \qquad n = 1, 2, \ldots $$ So by choosing $p = 1$ , the main claim follows.
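(An illustrative numerical check, not part of the proof: assuming the Step 2 closed form as stated, mpmath can verify the orthogonality of $f_n$ for a small $n$ ; here $n=3$ , $p=1$ , $\alpha=0$ :)

from mpmath import mp, mpf, quad

mp.dps = 30
p, alpha, N = 1, 0, 3
ak = [k**2 + p for k in range(1, N + 1)]   # alpha_1, ..., alpha_N

def Psi(s):  # Psi_N(s) from Step 2
    num = den = mpf(1)
    for a in ak:
        num *= a + s + 1
        den *= a - s
    return num / den

def res(k):  # psi_{N,k}(alpha_k), the residue of (alpha_k - s) * Psi_N(s)
    num = den = mpf(1)
    for j, a in enumerate(ak):
        num *= a + ak[k] + 1
        if j != k:
            den *= a - ak[k]
    return num / den

coef = [res(k) / ((ak[k] - alpha) * Psi(alpha)) for k in range(N)]
f = lambda x: x**alpha - sum(c * x**a for c, a in zip(coef, ak))
for a in ak:
    print(quad(lambda x, a=a: f(x) * x**a, [0, 1]))  # each value ~ 0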
{ "source": [ "https://math.stackexchange.com/questions/4488289", "https://math.stackexchange.com", "https://math.stackexchange.com/users/44121/" ] }
4,488,291
Here $\overline X_{n}=\frac{\sum_{i=1}^{n} X_{i}}{n}$ and $a\in (0,1)$ . Prove that $U_{n}$ converges to a constant $c$ in probability, and find $c$ . So we have to prove $P(|U_{n}-c|>\epsilon) \rightarrow 0$ as $n \rightarrow \infty$ . How can I solve this? I have solved asymptotic theory exercises before, but this one is different because I do not see where I can apply a theorem like Slutsky's theorem, the weak law of large numbers, etc.
{ "source": [ "https://math.stackexchange.com/questions/4488291", "https://math.stackexchange.com", "https://math.stackexchange.com/users/908338/" ] }
4,488,326
Let's say we randomly pick a box from a bag of boxes whose outer surfaces are colored one of three colors: WHITE, RED, and BLUE. What is unknown? The number of WHITE, RED, or BLUE boxes in the bag. What is known? Average weight of a box colored WHITE: 52. Average weight of a box colored RED: 24. Average weight of a box colored BLUE: 36. Is it correct to say that the average weight of a box picked at random from the bag = (1/3)*52 + (1/3)*24 + (1/3)*36 = 112/3 ≈ 37.33? Edit: Assume there is an equal chance of picking a 'WHITE', 'RED', and 'BLUE' box.
{ "source": [ "https://math.stackexchange.com/questions/4488326", "https://math.stackexchange.com", "https://math.stackexchange.com/users/807098/" ] }
4,492,566
By what angle must I rotate a parabola for it to no longer be the graph of a function? I have no problem narrowing the question down to the standard parabola: $$f(x)=x^2.$$ I am looking for a specific angle measure. Such a threshold must exist, as the reflection of $f$ over the line $y=x$ is certainly no longer the graph of a function. I realize that I should preferably ask the question on this site with a bit of work put into it but, alas, I have no intuition for where to start. I suppose I know immediately that the angle must be less than $45^\circ$ , as such a rotation makes the curve cross the y-axis at both $(0,0)$ and $(0,\sqrt{2})$ . Any insight on how to proceed?
Rotating the parabola even by the smallest angle will cause it to no longer be well defined. Intuitively, you can prove this for yourself by considering the fact that the derivative of a parabola is unbounded. This means that the parabola becomes arbitrarily "steep" for large (or small) values of $x$ , i.e., its tangent direction gets closer and closer to $90^\circ$ , so rotating it by even a little will tip it past $90$ degrees. For a formal proof, first, we need to explain exactly what a rotation of a parabola is. In general, a rotation in $\mathbb R^2$ is multiplication with a rotation matrix, which has, for a rotation by $\phi$ , the form $$\begin{bmatrix}\cos\phi&-\sin\phi\\\sin\phi&\cos\phi\end{bmatrix}$$ In other words, if we start with a parabola $P= \{(x,y)|x\in\mathbb R\land y=x^2\}$ , then the parabola, rotated by an angle of $\phi$ , is $$\begin{align}P_\phi &= \left.\left\{\begin{bmatrix}\cos\phi&-\sin\phi\\\sin\phi&\cos\phi\end{bmatrix}\cdot\begin{bmatrix}x\\y\end{bmatrix}\right| x\in\mathbb R, y=x^2\right\}\\ &=\{(x\cos\phi - y\sin\phi, x\sin\phi + y\cos\phi)|x\in\mathbb R, y=x^2\}\\ &= \{(x\cos\phi-x^2\sin\phi, x\sin\phi + x^2\cos\phi)| x\in\mathbb R\}\end{align}.$$ The question now is which values of $\phi$ construct a well defined parabola $P_\phi$ , where by "well defined", we mean "it is a graph of a function", i.e., for each $\overline x\in\mathbb R$ , there exists exactly one value $\overline y$ such that $(\overline x,\overline y)\in P_\phi$ . Clearly, if $\phi = 0$ , we have $P_0=\{(x, x^2)|x\in\mathbb R\}$ which is well defined, because for every $\overline x$ , the value $\overline y=\overline x^2$ is the unique value required for $(\overline x,\overline y)$ to be in $P_0$ . Also, if $\phi=\pi$ , then $P_\pi = \{(-x, -x^2)|x\in\mathbb R\}$ is also well defined because if $(\overline x,\overline y)\in P_\pi$ , then $\overline y=-\overline x^2$ . Now, observe what happens if $\phi\notin\{0,\pi\}$ . For now, let's assume that $\phi\in(0,\frac\pi2)$ . In that case, $\sin\phi\neq 0$ , which means that the equation $$x\cos\phi-x^2\sin\phi=0$$ has two solutions. One solution is $x=0$ , the other is $x=\frac{\cos\phi}{\sin\phi} = \cot\phi$ . This means that, if we take $\overline x=0$ , there are two values of $x$ that can create a point $(\overline x, \overline y)$ , and we have two possible values of $\overline y$ as well. One is $\overline y_1 = 0$ , the other is $$\overline y_2 = x\sin\phi + x^2\cos\phi = \frac{\cos\phi}{\sin\phi} \sin\phi + \left(\frac{\cos\phi}{\sin\phi}\right)^2\cos\phi =\cos\phi + \frac{\cos^3\phi}{\sin^2\phi}$$ and, because $\phi\in(0,\frac\pi2)$ , we know that $\overline y_2>0$ , which means $\overline y_2\neq \overline y_1$ , and therefore, $P_\phi$ is not the graph of a function. Note that the options when $\phi$ is in one of the other three quadrants can be solved similarly to the one above, or, you can use symmetry to translate all of the other three cases to the one already solved above.
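(A quick numerical illustration of the argument — my own sketch, not part of the original answer — using the second root $x=\cot\phi$ :)

import math

phi = math.radians(5)   # any angle in (0, pi/2) works
rotate = lambda x: (x*math.cos(phi) - x**2*math.sin(phi),
                    x*math.sin(phi) + x**2*math.cos(phi))

x2 = 1/math.tan(phi)    # the second solution of x*cos(phi) - x^2*sin(phi) = 0
print(rotate(0.0))      # (0.0, 0.0)
print(rotate(x2))       # (~0, ~131.1): the same x-coordinate, a different y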
{ "source": [ "https://math.stackexchange.com/questions/4492566", "https://math.stackexchange.com", "https://math.stackexchange.com/users/723470/" ] }
4,494,410
As we know, complex conjugation is the reflection of a complex number with respect to the real axis. Is there a standard terminology for a pair of complex numbers which have identical imaginary parts and opposite real parts? If not, why are such pairs of complex numbers not as important as complex conjugates, to the point that there is not even a special mathematical concept for them?
One key point is that the conjugation map plays reasonably well with the algebraic structure: $\overline{x+y}=\overline{x}+\overline{y}$ and $\overline{x\cdot y}=\overline{x}\cdot \overline{y}$ . Put another way, conjugation is an automorphism of the complex field. In fact, it's the only "reasonably nice" (e.g. continuous) nontrivial automorphism of $\mathbb{C}$ at all. On the other hand, the map $$a+bi\mapsto (a+bi)^\star:=-a+bi$$ is not nearly as algebraically nice. It still plays well with addition, but not with multiplication: for example, $$1^\star\cdot1^\star=1\not=-1=(1\cdot 1)^\star.$$
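A small numerical check of that failure (a Python sketch; complex numbers are represented as pairs $(a,b)$ for $a+bi$ , and the helper names are mine):

```python
def star(a, b):
    # the map a + b*i  ->  -a + b*i
    return (-a, b)

def mul(p, q):
    # complex multiplication on pairs (a, b)
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

print(mul(star(1, 0), star(1, 0)))  # (1, 0)
print(star(*mul((1, 0), (1, 0))))   # (-1, 0): star(x) * star(y) != star(x * y)
```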
{ "source": [ "https://math.stackexchange.com/questions/4494410", "https://math.stackexchange.com", "https://math.stackexchange.com/users/388851/" ] }
4,505,306
Problem: Determine whether the limit $$\lim_{x\to\infty}xe^{\sin x}$$ exists. If it exists, find it. Since $-1\le \sin x \le 1$ , I can say that $\displaystyle0<\frac{1}{e}\le e^{\sin x}\le e$ . Multiplying both sides by $x$ (for $x>0$ ), $\displaystyle0<\frac{x}{e}\le xe^{\sin x}\le ex$ . Since $\displaystyle\lim_{x\to\infty} \frac{x}{e}=\infty$ , I think $\displaystyle\lim_{x\to\infty}xe^{\sin x}=\infty$ as well. But WolframAlpha says it is an indeterminate form. Am I correct, or am I wrong? If I'm wrong, where did I make a mistake?
No, you are perfectly right. Maybe what WolframAlpha is telling you is that it could not resolve the limit. Just because you are smarter than the computer does not mean you did something wrong.
{ "source": [ "https://math.stackexchange.com/questions/4505306", "https://math.stackexchange.com", "https://math.stackexchange.com/users/672369/" ] }
4,505,314
Let $p\in[1,\infty)$ and $m$ be an $L^p(\mathbb R^n)$ multiplier. Let $\psi\in L^1(\mathbb R^n)$ . Prove that $m\hat{\psi}$ and $m*\psi$ are $L^p(\mathbb R^n)$ multipliers, and $$\left\|T_{m\hat{\psi}}\right\|_{L^p\to L^p}\leq\|\psi\|_{L^1}\left\|T_{m}\right\|_{L^p\to L^p},\tag{1}$$ $$\left\|T_{m*\psi}\right\|_{L^p\to L^p}\leq\|\psi\|_{L^1}\left\|T_{m}\right\|_{L^p\to L^p}.\tag{2}$$ Let me explain the notations first. $m: \mathbb R^n\to\mathbb C$ is called an $L^p(\mathbb R^n)$ multiplier if $m\in L^\infty(\mathbb R^n)$ and $$\|T_m f\|_{L^p} := \|(m\hat{f})^\vee\|_{L^p} \leq C\|f\|_{L^p}$$ for some $C>0$ , where $\hat{f}$ is the Fourier transform of $f$ , and $f^\vee$ is its inverse Fourier transform: $$\hat f(\xi)=\int_{\mathbb{R}^n} f(x)e^{-2\pi ix\cdot\xi}\,dx,\qquad f^\vee(x)=\int_{\mathbb{R}^n} f(\xi)e^{2\pi ix\cdot\xi}\,d\xi.$$ In this case, $T_m$ is a bounded linear operator from $L^p(\mathbb R^n)$ to $L^p(\mathbb R^n)$ , and we denote its operator norm by $\left\|T_{m}\right\|_{L^p\to L^p}$ . Here is my attempt. I can prove $(1)$ . We have $$(T_{m\hat{\psi}}f)\hat{ }= m\hat\psi\hat f= \hat\psi(T_mf)\hat{ }=(\psi*T_mf)\hat{},\tag{3}$$ hence $T_{m\hat{\psi}}f=\psi*T_mf$ . Therefore, $(1)$ follows from Young's inequality. As for $(2)$ , I tried to do something like $(3)$ , but it just didn't work. So I think there must be something I missed. Thanks in advance!
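For what it's worth, here is a sketch of one standard route to $(2)$ (my own elaboration, under the assumption that Fubini's theorem and Minkowski's integral inequality apply in this setting). For $y\in\mathbb R^n$ set $m_y(\xi):=m(\xi-y)$ . Modulation invariance gives $\left\|T_{m_y}\right\|_{L^p\to L^p}=\left\|T_{m}\right\|_{L^p\to L^p}$ : with $g(x):=e^{-2\pi i x\cdot y}f(x)$ , a change of variables shows $$T_{m_y}f(x)=e^{2\pi i x\cdot y}\,T_m g(x),\qquad \|g\|_{L^p}=\|f\|_{L^p}.$$ Since $(m*\psi)(\xi)=\int_{\mathbb R^n}\psi(y)\,m(\xi-y)\,dy$ , interchanging the order of integration gives $$T_{m*\psi}f=\int_{\mathbb R^n}\psi(y)\,T_{m_y}f\,dy,$$ and Minkowski's integral inequality then yields $$\left\|T_{m*\psi}f\right\|_{L^p}\le\int_{\mathbb R^n}|\psi(y)|\left\|T_{m_y}f\right\|_{L^p}dy\le\|\psi\|_{L^1}\left\|T_{m}\right\|_{L^p\to L^p}\|f\|_{L^p}.$$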
{ "source": [ "https://math.stackexchange.com/questions/4505314", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1025631/" ] }
4,508,469
According to Bézout's Theorem, the curves defined by two polynomials of degree $m,n$ intersect at most at $mn$ points. So, two circles should be able to intersect at up to $4$ points. But as far as I know, two circles intersect at most at $2$ points. Why? Why doesn't Bézout's theorem apply to circles? Is this a special case of the theorem? Can anyone explain this in easy terms? (I'm only in high school!)
According to your statement of Bézout's Theorem, (the curves defined by) two polynomials of degree $m,n$ intersect at most at $mn$ points. Since $2 \le 4$ everything's fine. There is however a stronger version of Bézout's Theorem: the curves defined by two polynomials of degree $m,n$ intersect at exactly $mn$ points. To get this, we need to add some technicalities here and there:

- You require that the polynomials are distinct and irreducible (they can't be split into simpler factors). This ensures that the curves do not have big pieces in common: we want them to intersect only at points.
- You don't use the real numbers, but a bigger number set (the complex numbers, if you know what they are), to be sure that there are indeed roots. In the case of two circles in the real plane, they may not intersect at all.
- You count roots with multiplicities: each root may be 'hit' more than once, as in the case of tangent circles.
- You don't draw your curves in a plane, but in a somewhat more sophisticated geometric environment, known as the projective plane, in which two curves may meet at special points (points at infinity).

The reason why you always get at most $2$ is that the other two solutions do indeed appear when you consider your curves as lying in this special environment: the complex projective plane .
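To see the "missing" intersections concretely, here is a small Python sketch with sympy (the pair of disjoint real circles is an arbitrary choice of mine):

```python
from sympy import symbols, solve

x, y = symbols('x y')
# two real circles that do not meet anywhere in the real plane
circle1 = x**2 + y**2 - 1
circle2 = (x - 3)**2 + y**2 - 1

print(solve([circle1, circle2], [x, y]))
# [(3/2, -sqrt(5)*I/2), (3/2, sqrt(5)*I/2)] -- two complex intersection points;
# the remaining two of the 2*2 = 4 lie at infinity in the projective plane
```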
{ "source": [ "https://math.stackexchange.com/questions/4508469", "https://math.stackexchange.com", "https://math.stackexchange.com/users/911097/" ] }
4,508,474
I would like to solve the ODE $$\left(x-k_1\right)\frac{dY}{dx} + \frac{1}{a}x^2\frac{d^2 Y}{d x^2}-Y+k_2=0$$ with boundary conditions $Y\left(k_3\right)=0$ and $\frac{d Y}{d x}\left(\infty\right)=1-b$ (meaning if $x \to \infty$ , the derivative approaches $1-b$ ). Not sure if it helps, but we also have $0\leq b \leq 1$ , $a \geq 0$ , and $k_2 + k_3 \leq k_1$ . $x$ and $Y$ should also both be $\geq 0$ . I came across this equation years ago and remember that the solution involved gamma functions in some form, but otherwise I'm not sure where to start (or even what term to Google). Would anyone know how to find the solution here?
{ "source": [ "https://math.stackexchange.com/questions/4508474", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1084876/" ] }
4,511,691
If you think of the bee-hive problem, you want to make 2D cells that divide the plane of honey into chunks of area while expending the least perimeter (since the perimeter of the cells is what takes up resources/effort). The solution ends up being the hexagonal tiling. What is the analogous "tiling" for 3D space that's optimal in a similar sense? (more volume, less surface area) And if possible, I'd like to know the general solution for $n$ -D space. To make the problem statement clear: assume that each "cell" has a volume of at most 1. With what polyhedron should you divide the cells to minimize the ratio of surface area to volume? For example, if you tile everything with hypercubes, the ratio would be $2n$ , which probably isn't optimal.
This is known as the Kelvin problem; the best known (and conjectured optimal) solution is the Weaire–Phelan structure , but proving this is likely very very hard. I don't know what the best results in $n$ dimensions are, but I'd be shocked if they were solved for $n>3$ .
{ "source": [ "https://math.stackexchange.com/questions/4511691", "https://math.stackexchange.com", "https://math.stackexchange.com/users/268283/" ] }
4,515,842
We have a special function $S$ from the real interval $[0, 1)$ to the real interval $(0, 1]$ which I will define near the end of this post. Someone claims that the following proof-schema is valid: We wish to prove some statement $P(x)$ for each real number in the closed interval $[0, 1]$ . First, we show that the statement $P(0)$ is true. Next we let $x$ be an arbitrary element of the interval $[0, 1]$ . We assume that $P(x)$ is true, and then we prove that $P(S(x))$ is also true. We conclude that $\forall x \in [0, 1], P(x)$ .

An answer to this stack-exchange question is a proof that the above proof-schema is correct or incorrect. If you think that induction on real numbers is not valid, then an answer to this question is a proof of the existence of a predicate $P$ such that:

- $P(0)$ is true.
- $\exists x \in (0, 1]$ such that $P(x)$ is false.
- There is no real number $x$ in the interval $[0, 1)$ such that $P(x)$ is true and $P(S(x))$ is false. Equivalently, show that $\forall x \in [0, 1)$ , if the statement $P(S(x))$ is false then the statement $P(x)$ is also false.

Informal Description of function $S$

Informally, you can compute $S(x)$ by the following procedure:

STEP ONE: Get a decimal expansion of $x$ . Look only at digits to the right of the decimal point. Go from left to right until you find a digit which is not nine. For example, if $x = 0.9990123$ then the left-most digit which is not a nine is a zero. Now add one to the digit which is not a nine. After adding $1$ to the left-most non-nine digit in $0.9990123$ we get $0.9991123$ .

STEP TWO: Replace the leftmost nines with zeros. If $x = 0.9990123$ then $S(x) = 0.0001123$ .

| Approximation of $x$ | Approximation of $S(x)$ | $x$ | $S(x)$ |
| --- | --- | --- | --- |
| $0.99531416$ | $0.99631416$ | $0.995 + \frac{\pi}{10^{4}}$ | $x + 10^{-3}$ |
| $0.141421356237$ | $0.241421356237$ | $\frac{\sqrt{2}}{10}$ | $0.1 + \frac{\sqrt{2}}{10}$ |
| $0$ | $0.1$ | $0$ | $0.1$ |
| $0.1$ | $0.2$ | $0.1$ | $0.2$ |
| $0.2$ | $0.3$ | $0.2$ | $0.3$ |
| $0.3$ | $0.4$ | $0.3$ | $0.4$ |
| $0.999991$ | $0.999992$ | $0.999991$ | $0.999992$ |
| $0.93$ | $0.94$ | $0.93$ | $0.94$ |
| $0.999999999 \dots 0000 \dots$ | $0.000000000 \dots 10000 \dots$ | $1- 10^{-100}$ | $10^{-101}$ |

If $S(x) = 1$ then $x = 0.9$ .
If $S(x) = 0.9$ then $x = 0.8$ .
If $S(x) = 0.8$ then $x = 0.7$ .
$$\dots$$
If $S(x) = 0.2$ then $x = 0.1$ .

Formal Definition of Function $S$

Let $S$ be a function from the real interval $[0, 1)$ to the real interval $(0, 1]$ such that $\forall x \in [0, 1)$ , $S(x) = x +10^{-(1 + g(x))} + 10^{-g(x)}$ . Function $g$ is defined as follows: $$\forall x \in [0, 1], g(x) = \begin{cases} 1, & \text{if } \lfloor 10 x \rfloor \neq 9 \\ 1 + g(10 x - \lfloor 10 x \rfloor), & \text{otherwise} \end{cases}$$

Some Notes About Function $S$

Note 1: Irrational inputs like $\pi/10$ are okay. $S(x)$ is well-defined for irrational inputs such as $x = \pi/10$ and $x = \sqrt{2}/10$ .

Note 2: What is $\lfloor 10 x \rfloor \neq 9$ ? Note that $\lfloor 10 x \rfloor \neq 9$ if and only if the left-most digit is not $9$ . For example, $\frac{\pi}{10}$ is approximately $0.31416$ , which has a $3$ as its left-most digit. So, $\lfloor 10\cdot\frac{\pi}{10} \rfloor \neq 9$ . ( $\lfloor x \rfloor$ is the "floor" function.)

What if the $9$ s never end? Note that the only real number in the interval $[0, 1]$ which has only nines in its decimal expansion is the number $1$ , which can be expanded as $0.999999 \dots$ . However, $1$ is not a valid input to function $S$ . All numbers in $[0, 1]$ besides the number $1$ have at least one non-nine digit. As such, we can always find the left-most digit which is not a nine.

A Different Informal Way to Compute $S(x)$

Informally, we can:

1. Find a decimal expansion of the real number $x$ .
2. Delete the decimal point and write the digits in reverse order. For example, if $x = \sqrt{2}/10 \approx 0.1414213 \dots$ then write $\dots 3124141$ .
3. Add one to the result from step $2$ as if the step $2$ result were a natural number (it is not actually a natural number: the result is not eventually all zero as you move leftward).
4. Re-reverse the digits.

Note that the result of step $2$ is NOT a natural number. The result of step $2$ is a function $F$ from $\mathbb{N}$ to the digits $\{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 \}$ such that we probably have: $\not\exists n \in \mathbb{N}: \forall m \geq n, F(m) = 0$ . For example, if $x = \frac{\pi}{10}$ then: $F(1) = 3$ , $F(2) = 1$ , $F(3) = 4$ , $F(4) = 1$ , $F(5) = 5$ , $F(6) = 9$ . In general, $F(k)$ is one of the digits of the decimal expansion of $\pi$ .
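For concreteness, a small Python sketch of the digit procedure above (my own helper; it acts on the list of decimal digits rather than on a real number):

```python
def S_digits(d):
    # d: the decimal digits of x after the point, e.g. 0.9990123 -> [9, 9, 9, 0, 1, 2, 3]
    i = next(k for k, digit in enumerate(d) if digit != 9)  # leftmost non-nine digit
    return [0] * i + [d[i] + 1] + d[i + 1:]                 # zero the nines, bump the digit

print(S_digits([9, 9, 9, 0, 1, 2, 3]))  # [0, 0, 0, 1, 1, 2, 3], i.e. S(0.9990123) = 0.0001123
```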
This is obviously invalid regardless of what $S$ is: let $$\mathscr{X}=\{0,S(0),S(S(0)),...\}=\{S^i(0): i\in\mathbb{N}\}$$ (using the convention that $0\in\mathbb{N}$ and $S^0(x)=x$ here), consider the property $P(x):=x\in\mathscr{X}$ , and remember that $[0,1]$ is uncountable. There is a type of argument which works on $[0,1]$ and is arguably "induction-like," but it's quite different - see these notes of Pete Clark .
{ "source": [ "https://math.stackexchange.com/questions/4515842", "https://math.stackexchange.com", "https://math.stackexchange.com/users/697677/" ] }
4,515,848
A paper I'm reading claims that we can find a countably closed forcing notion that collapses $\mathfrak{c}$ to $\aleph_1$ , but I can't think of one. I know of the Lévy Collapse, but I don't think that is countably closed, since the conditions are $\textit{finite}$ partial functions.
{ "source": [ "https://math.stackexchange.com/questions/4515848", "https://math.stackexchange.com", "https://math.stackexchange.com/users/539948/" ] }
4,517,493
I'm interested in proofs of claims of the form "Finite objects $A$ and $B$ are isomorphic" which are nonconstructive, in the sense that the proof doesn't exhibit the actual isomorphism at hand. A stronger (and more precisely specified) requirement would be a case in which it's computationally easy to write a proof, but computationally hard to extract the isomorphism given the proof, e.g. a class of graphs for which one can easily generate triples $(G,H,P)$ with $P$ a proof that $G$ and $H$ are isomorphic but for which there's no (known) efficient algorithm to take in $(G,H,P)$ and return a permutation of the vertices exhibiting the isomorphism. Some examples of things that would fit the bill, were they to exist: (give a nonconstructive proof that) all objects of type $X$ are uniquely specified by their values on five specific measurements; observe that $A$ and $B$ align on all such measurements. The easy-to-compute function of graphs $f(G,H)$ turns out to be equal to the product, over all relabelings of $G$ , of the number of edges in the symmetric differences between the edge sets of $H$ and the relabeled copy of $G$ . (This occurs because of some cute algebraic cancellation or something, like how one can compute determinants in time $O(n^3)$ .) Now we observe that $f(G,H) = 0$ via direct computation, and conclude that a relabeling with no difference at all to $H$ must exist. Groups $G$ and $H$ of order $n$ , specified by their multiplication tables, can be easily shown to embed as subgroups of a larger group $K$ , which we can classify and more easily prove things about. But the existence of two non-isomorphic subgroups of order $n$ in $K$ would imply something about its Sylow subgroups that we know to be false. Are there good examples of this phenomenon? Reasons to think it doesn't happen? I would also be interested in any pointers to literature on related topics here.
The simplest example I know: the existence of primitive roots tells us that if $p$ is a prime then the group $(\mathbb{Z}/p\mathbb{Z})^{\times}$ of units $\bmod p$ is cyclic, hence isomorphic to $C_{p-1}$ . However, exhibiting such an isomorphism amounts to finding such a primitive root, and the proof does not do this, nor does it really supply an efficient algorithm to do it. I don't know what the time complexity of finding a primitive root is. The "obvious" probabilistic algorithm is to factor $p - 1$ , then pick a random $a \in (\mathbb{Z}/p\mathbb{Z})^{\times}$ and test whether it's a primitive root by computing $a^{\frac{p-1}{q}} \bmod p$ for all prime divisors $q$ of $p-1$ . This should be pretty fast in practice although it will take at least as long as factoring $p - 1$ and I don't know what the deterministic time complexity is.
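A sketch of that probabilistic algorithm in Python (the trial-division factorization and the function names are my own stand-ins for real routines):

```python
import random

def prime_factors(n):
    # the distinct prime divisors of n, by naive trial division
    qs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            qs.add(d)
            n //= d
        d += 1
    if n > 1:
        qs.add(n)
    return qs

def primitive_root(p):
    # random search for a generator of (Z/pZ)^x, p an odd prime
    qs = prime_factors(p - 1)
    while True:
        a = random.randrange(2, p)
        if all(pow(a, (p - 1) // q, p) != 1 for q in qs):
            return a

print(primitive_root(2**31 - 1))  # some generator of the units mod the Mersenne prime 2^31 - 1
```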
{ "source": [ "https://math.stackexchange.com/questions/4517493", "https://math.stackexchange.com", "https://math.stackexchange.com/users/214490/" ] }
4,517,497
$E=C([0,1],\mathbb{R})$ is a Banach space with respect to norm $$ ||f||=\sup_{0\le x\le 1}|f(x)|e^{-Mx}\\ $$ for $f\in E$ , and $M>0$ is a fixed real number. The question is to show one can choose the value of $M$ such that: $$ T:E\to E,\ T(f)(x)=\alpha+\int_0^x af(t^b) \text{d}t\\ \alpha\in\mathbb{R},a>0,b>1 $$ is a contraction mapping.
{ "source": [ "https://math.stackexchange.com/questions/4517497", "https://math.stackexchange.com", "https://math.stackexchange.com/users/954099/" ] }
4,517,500
An ellipse is given in parametric form as follows $P(t) = C + E_1 \cos(t) + E_2 \sin(t) $ where $C, E_1, E_2 $ are $2-$ or $3-$ dimensional vectors, and $ t \in [0, 2 \pi)$ . I would like to find the points on the ellipse that are nearest and farthest from a straight line given in parametric form as $L(t) = Q_0 + t V $ with $ t \in \mathbb{R} $ , and $Q_0, V$ are $2-$ or $3-$ dimensional vectors. This problem is an exercise in constrained optimization, and that is its context.
{ "source": [ "https://math.stackexchange.com/questions/4517500", "https://math.stackexchange.com", "https://math.stackexchange.com/users/948761/" ] }
4,529,185
$$\int_{0}^{\infty}\tan^{-1}\left(\frac{2x}{1+x^2}\right)\frac{x}{x^2+4}dx$$ I am given that the value of this definite integral is $\frac{\pi}{2}\left(\ln\frac{\sqrt2+3}{\sqrt2+1}\right)$ . What idea or approach would you use to solve this?
\begin{align} &\int_{0}^{\infty}\tan^{-1}\left(\frac{2x}{1+x^2}\right)\frac{x}{x^2+4}\ dx\\ =& \int_{0}^{\infty}\int_{\sqrt2-1}^{\sqrt2+1}\frac{x}{x^2+y^2}\frac{x}{x^2+4}\ dy \ dx\\ =& \ \frac\pi2 \int_{\sqrt2-1}^{\sqrt2+1}\frac1{y+2}dy= \frac{\pi}{2}\ln\frac{\sqrt2+3}{\sqrt2+1} \end{align}
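For completeness, the two steps above can be justified as follows (my own elaboration of the answer; worth double-checking). For $x>0$ , $$\int_{\sqrt2-1}^{\sqrt2+1}\frac{x}{x^2+y^2}\,dy=\arctan\frac{\sqrt2+1}{x}-\arctan\frac{\sqrt2-1}{x}=\tan^{-1}\left(\frac{2x}{1+x^2}\right),$$ by the subtraction formula $\tan(A-B)=\frac{a-b}{1+ab}$ with $a=\frac{\sqrt2+1}{x}$ , $b=\frac{\sqrt2-1}{x}$ , so that $\frac{a-b}{1+ab}=\frac{2/x}{1+1/x^2}=\frac{2x}{1+x^2}$ . After swapping the order of integration, partial fractions give, for $y\neq 2$ (and by continuity also at $y=2$ ), $$\int_0^\infty\frac{x^2\,dx}{(x^2+y^2)(x^2+4)}=\frac{1}{y^2-4}\int_0^\infty\left(\frac{y^2}{x^2+y^2}-\frac{4}{x^2+4}\right)dx=\frac{1}{y^2-4}\left(\frac{\pi y}{2}-\pi\right)=\frac{\pi}{2(y+2)}.$$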
{ "source": [ "https://math.stackexchange.com/questions/4529185", "https://math.stackexchange.com", "https://math.stackexchange.com/users/221836/" ] }
4,562,314
Coming from a physics background, my understanding of geometry (in a very generic sense) is that it involves taking a space and adding some extra structure to it. The extra structure takes some local data about the space as its input and outputs answers to local or global questions about the space + structure. We can use it to probe either the structure itself or the underlying space it lives on. For example, we can take a smooth manifold and add a Riemannian metric and a connection, and then we can ask about distances between points, curvature, geodesics, etc. In symplectic geometry, we take an even-dimensional manifold and add a symplectic form, and then we can ask about... well, honestly, I don't know. But I'm sure there is interesting stuff you can ask. Knowing very little about algebraic geometry, I am wondering what the "geometry" part is. I am assuming that the spaces in this case are algebraic varieties, but what is the extra structure that gets added? What sorts of questions can we answer with this extra structure that we couldn't answer without it? I have to guess that this is a little more complicated than just taking a manifold and adding a metric, otherwise I would expect to be able to find this explained in a relatively straightforward way somewhere. If it turns out the answer is "it's hard to explain, and you just need to read an algebraic geometry text," then that's fine. In that case, it would be interesting to try to get a sense of why it's more complicated. (I have a guess, which is that varieties tend to be a lot less tame than manifolds, so you have to jump through more technical hoops to tack on extra stuff to them, but that's pure speculation.)
This is a big complicated question and many different kinds of answers could be given at many different levels of sophistication. The very short answer is that the geometry in algebraic geometry comes from considering only polynomial functions as the meaningful functions. Here is essentially the simplest nontrivial example I know of: Consider the intersection of the unit circle $\{ x^2 + y^2 = 1 \}$ with a vertical line $\{ x = c \}$ , for different values of the parameter $c$ . If $-1 < c < 1$ we get two intersection points. If $c > 1$ or $c < -1$ we get no (real) intersection points. But something special happens at $c = \pm 1$ : in this case the vertical lines $x = \pm 1$ are tangent to the circle. This tangency is invisible if we just consider the "set-theoretic intersection" of the circle and the line, which consists of a single point; for various reasons (e.g. to make Bézout's theorem true) we'd like a way to formalize the intuition that this intersection has "multiplicity two" in some sense, and so is geometrically more interesting than just a single point. This can be done by taking what is called the scheme-theoretic intersection . This is a complicated name for a simple idea: instead of asking directly what the intersection is, we ask what the ring of polynomial functions on the intersection is. The ring of polynomial functions on the unit circle is the quotient ring $\mathbb{R}[x, y]/(x^2 + y^2 - 1)$ , while the ring of polynomial functions on the vertical line is the quotient ring $\mathbb{R}[x, y]/(x - c) \cong \mathbb{R}[y]$ . It turns out that the ring of polynomial functions on the intersection is the quotient by both of the defining polynomials, which gives, say at $x = 1$ to be concrete, $$\mathbb{R}[x, y]/(x^2 + y^2 - 1, x - 1) \cong \mathbb{R}[y]/y^2.$$ This is a funny ring: it has a nontrivial nilpotent ! That nilpotent $y$ is exactly telling us the sense in which the intersection has "multiplicity two"; it's saying that a function on the scheme-theoretic intersection records not only its value at the intersection point but its derivative with respect to tangent vectors at the intersection point, reflecting the geometric fact that the unit circle and the line are tangent and so share a tangent space. In other words it is saying, roughly speaking, that the intersection is "two points infinitesimally close together, connected by an infinitesimally short vector." Adding nilpotents to geometry takes some getting used to but it turns out to be very useful; among other things it is possible to define tangent spaces in algebraic geometry this way ( Zariski tangent spaces ), hence to define Lie algebras of algebraic groups in a purely algebraic way. So, this is one story you can tell about what kind of geometry algebraic geometry captures, and there are many others, for example the rich story of arithmetic geometry and its applications to number theory. It's difficult to say anything remotely complete here because algebraic geometry is absurdly general and the sorts of geometry it is capable of capturing veer off in wildly different directions depending on what you're interested in.
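The quotient-ring computation can be checked mechanically; here is a small sketch using sympy's Gröbner bases (my own choice of tool, as a stand-in for the quotient computation):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# ideal generated by the circle and the tangent line x = 1
G = groebner([x**2 + y**2 - 1, x - 1], x, y, order='lex')
print(G)
# GroebnerBasis([x - 1, y**2], x, y, domain='ZZ', order='lex'):
# modulo the ideal, y is a nonzero element with y**2 = 0, the nilpotent above
```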
{ "source": [ "https://math.stackexchange.com/questions/4562314", "https://math.stackexchange.com", "https://math.stackexchange.com/users/133698/" ] }
4,567,581
Is there a topology $T$ on the set of real numbers $\mathbb{R}$ , such that the set of $T$ -continuous functions from $\mathbb{R}$ to $\mathbb{R}$ is precisely the set of differentiable functions on $\mathbb{R}$ ?
No. Indeed, if such a topology exists, then the functions $f_{1},f_{2}:\mathbb R \to \mathbb R$ defined by $f_{1}(x)=-x$ and $f_{2}(x)=x$ are continuous. Now we can restrict the domains, $f_1:(-\infty,0] \to \mathbb R$ and $f_2:[0,\infty) \to \mathbb R$ , and they stay continuous. Now define $f(x)=|x|$ ; it is continuous by the gluing lemma applied to the functions $f_1,f_2$ . But it is not differentiable. Edit: As Mark Saving correctly points out, we need to prove that $(-\infty,0]$ and $[0,\infty)$ are closed. To prove that, we first prove that $T$ is $T_1$ . Let $a \neq b \in \mathbb R$ . Choose an open set $B \in T$ such that $B \neq \mathbb R, \emptyset$ (it exists, as otherwise $T$ would be the trivial topology). Define $A = \mathbb R \setminus B$ . Now the function $f$ defined to be $a$ on $A$ and $b$ on $B$ can't be continuous, as its image is two points (hence $f$ is not continuous in the standard topology on $\mathbb R$ and thus not differentiable). Now the preimages under $f$ of sets in $T$ are among $\emptyset, A, B, \mathbb R$ , so $A$ can't be open (as otherwise every preimage would be open and thus $f$ would be continuous). Since $f$ is not continuous and $A$ is the only candidate preimage which is not open, there is an open set $U \in T$ such that $f^{-1}(U)=A$ , and thus $U$ is an open set containing $a$ and not $b$ . Thus $T$ is $T_1$ , and hence every singleton is closed in $T$ . Now we use the following theorem: for every closed set $A$ in $\mathbb R$ (in the standard topology) there exists a differentiable function $f:\mathbb R \to \mathbb R$ such that $f^{-1}(\{0\})=A$ . As $f$ is differentiable, it is continuous in $T$ , and because $\{0\}$ is closed we have that $A= f^{-1}(\{0\})$ is closed in $T$ ; thus every closed set in the standard topology is closed in $T$ .
{ "source": [ "https://math.stackexchange.com/questions/4567581", "https://math.stackexchange.com", "https://math.stackexchange.com/users/107952/" ] }
4,608,782
The year 2023 is near and today I found this nice way to write that number: $\displaystyle\color{blue}{\pi}\left(\frac{(\pi !)!-\lceil\pi\rceil\pi !}{\pi^{\sqrt{\pi}}-\pi !}\right)+\lfloor\pi\rfloor=2023$ where $\color{blue}{\pi}$ is the counting function of prime numbers. My question is, do you know any other interesting way to write 2023? By the way, happy new year everyone
$$(2+0+2+3)(2^2 + 0^2 + 2^2 + 3^2)^2 = 2023$$
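A one-line sanity check of the arithmetic, in Python:

```python
print((2 + 0 + 2 + 3) * (2**2 + 0**2 + 2**2 + 3**2)**2)  # 7 * 17**2 = 2023
```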
{ "source": [ "https://math.stackexchange.com/questions/4608782", "https://math.stackexchange.com", "https://math.stackexchange.com/users/603951/" ] }
4,609,217
Start with an integer like n = 100 and set it equal to a uniformly random integer in [0, n] inclusive. Keep cutting it this way until n = 0 . What's the expected value of the number of cuts needed? For me, intuition gives an expected value of $\log_2 n ≈ 6.64$ , but empirical simulation in Python:

```python
import random

cuts = 0
expectedValue = 0
trials = 100000
for i in range(trials):
    startingValue = 100
    while startingValue > 0:
        startingValue = random.randint(0, startingValue)
        cuts += 1
expectedValue = cuts / trials
print(expectedValue)
```

results in $≈6.18$ . Does there exist an explicit solution for n = 100 or for any integer n ?
If $e_n$ is the expected number of cuts to reach $\{0\}$ from $\bigcup\limits_{i=0}^n\{i\}$ , then $e_n$ satisfies the recursion \begin{align} e_0&=0\\ e_n&=1+\sum_{i=0}^n\frac{e_i}{n+1}\ . \end{align} It's not difficult ${}^\dagger$ to show by induction that the solution of this recursion is given by $$ e_n=1+\sum_{i=1}^n\frac{1}{i} $$ for $n\ge1$ . As is well-known , $\lim\limits_{n\rightarrow\infty}\left(\sum\limits_{i=1}^n\frac{1}{i}-\ln n\right)=\gamma$ , where $\gamma\approx0.57722$ is the Euler-Mascheroni constant , so therefore $\lim\limits_{n\rightarrow\infty}\left(e_n-\ln n\right)=\gamma+1$ , and $$ e_n\approx1+\gamma+\ln n $$ for sufficiently large $n$ . For $n=100$ this gives \begin{align} e_{100}&\approx1.57722+\ln100\\ &\approx6.1824 \end{align} in good agreement with the result of your python simulation. ${}^\dagger$ Especially if you use the observation made by TheBestMagician in a comment below. Addendum According to Wolfram alpha the exact value of $e_{100}$ rounded to $20$ significant figures is $$\color{green}{6.1873775176396}\color{red}{202608},$$ which agrees with the value obtained by Stinking Bishop to its $13^\text{th}$ decimal place.
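An exact check of the closed form in Python, using rational arithmetic:

```python
from fractions import Fraction

def e(n):
    # e_n = 1 + H_n, the closed form derived above (n >= 1)
    return 1 + sum(Fraction(1, i) for i in range(1, n + 1))

print(float(e(100)))  # 6.1873775176396..., matching the quoted value
```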
{ "source": [ "https://math.stackexchange.com/questions/4609217", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1042403/" ] }
4,614,130
I have the following problem: Show that 9 consecutive integers cannot be partitioned into two sets such that the products of the first and second set are equal. I know this question has been asked multiple times. Nearly all of the answers argue with prime factorization and I was just wondering where the flaw in my argumentation is: Suppose such a partition exists. Then the product of the 9 consecutive integers needs to be a perfect square $n^2$ . The product can also be represented as $k(k+1)\dots{}(k+8)$ which when multiplied out yields a polynomial of the form $$ k^9 + \text{polynomial of degree} \leq 8. $$ In order for this to be a perfect square, we would need to be able to represent the polynomial in the form $(\dots{})^2$ . Since the degree of the polynomial is odd, we cannot do this and hence such a partition does not exist. Edit : I know that there is a theorem (Erdos-Selfridge) that states the product of consecutive integers can never be a perfect power $x^l$ . However, I was wondering if my argument about the even/odd property for this special case holds.
Consider two distinct questions about polynomials:

1. Can a polynomial of odd degree be a square of another polynomial?
2. Can a polynomial of odd degree take a value which is a square?

Your attempted proof depends on a negative answer to (2), but in fact (2) is true, for example $k^9+k^4+1$ is of odd degree, but when $k=2$ takes the value $529=23^2$ .
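A quick check of that example in Python:

```python
from math import isqrt

v = 2**9 + 2**4 + 1
print(v, isqrt(v)**2 == v)  # 529 True: an odd-degree polynomial hitting a perfect square
```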
{ "source": [ "https://math.stackexchange.com/questions/4614130", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1137898/" ] }
4,624,973
I found that the sequence $$s(n) = \frac{2}{n} \cdot \sum_{i=1}^n \sqrt{\frac{n}{i-\frac{1}{2}}-1}$$ converges to $\pi$ as $n \to \infty$ . To verify this I have computed some values:

| $n$ | $s(n)$ |
| --- | --- |
| $10^1$ | 2.76098 |
| $10^3$ | 3.10333 |
| $10^5$ | 3.13776 |
| $10^6$ | 3.14038 |
| $10^7$ | 3.14121 |

This seems to support the claim; however, it is no proof of the convergence. I would not know how to begin a proof of this limit and did not find any similar formula among known approximation formulas. Does anyone have an idea of how such a proof can be constructed?
Note that: $$\frac{1}{n}\sum_{k=1}^n\sqrt{\frac{1}{k/n-1/2n}-1}\to\int_0^1\sqrt{x^{-1}-1}\,\mathrm{d}x=\frac{\pi}{2}$$ Substituting $x=t^{-1},t=u+1,u=w^2$ in that order equates the integral with: $$2\int_0^\infty\frac{w^2}{(w^2+1)^2}\,\mathrm{d}w$$ And this is manageable. Or, let $x=\sin^2t$ . You then deal with: $$\int_0^{\pi/2}2\sin(t)\cos(t)\cot(t)\,\mathrm{d}t=2\int_0^{\pi/2}\cos^2t\,\mathrm{d}t$$ Which is even easier. To justify treating the partial sums as Riemann sums, it is sufficient to demonstrate: $$\lim_{n\to\infty}S_n=\lim_{n\to\infty}\left[\left(\frac{1}{n}\sum_{k=1}^n\sqrt{\frac{n}{k-\frac{1}{2}}-1}\right)-\left(\frac{1}{n}\sum_{k=1}^n\sqrt{\frac{n}{k}-1}\right)\right]=0$$ I present a proof of this by elementary bounds. No need for fancy convergence theorems here! Using the difference of two squares we can bound, for every $k\ge 1$ and every $n\ge k$ : $$\begin{align}0&\le\sqrt{\frac{n}{k-\frac{1}{2}}-1}-\sqrt{\frac{n}{k}-1}\\&=\frac{\frac{\frac{1}{2n}}{\frac{k}{n}\left(\frac{k}{n}-\frac{1}{2n}\right)}}{\sqrt{\frac{n}{k-\frac{1}{2}}-1}+\sqrt{\frac{n}{k}}}\\&\le\frac{1}{2}\sqrt{\frac{k}{n}}\cdot\frac{n}{k(2k-1)}\\&=\frac{1}{2}\frac{\sqrt{n}}{(2k-1)\sqrt{k}}\\&\le\frac{\sqrt{n}}{2k\sqrt{k}}\end{align}$$ Summing in $k$ and dividing by $n$ gives: $$0\le S_n\le\frac{1}{2\sqrt{n}}\sum_{k=1}^nk^{-3/2}\le\frac{1}{2\sqrt{n}}\left(3-\frac{2}{\sqrt{n}}\right)$$ Where we bounded the sum by the integral $\int x^{-3/2}\,\mathrm{d}x$ to get that last bound. By the squeeze theorem, it is now clear that $S_n$ vanishes - as desired.
{ "source": [ "https://math.stackexchange.com/questions/4624973", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1049661/" ] }
4,627,425
It is well known that, via the polarization identity, a norm (which captures the notion of length) uniquely specifies an inner product; equivalently, if two inner products induce the same norm then they are the same inner product. My question: If the above is true then how is it that we say that the inner product encodes angle? In a fuzzy sense, it doesn't seem to me like "angle" should be determined by the notion of "length" in a given space, and yet a norm implies an inner product -- which is to say a notion of length implies a notion of angle? Edit: I realize I may (I'm not sure) have to specify that I'm talking about real vector spaces.
Firstly: yet a norm implies an inner product -- which is to say a notion of length implies a notion of angle? A norm actually need not specify an inner product. There are norms which do not come from an inner product. Let's be more specific. For $\newcommand{\nc}{\newcommand} \nc{\para}[1]{\left( #1 \right)} \nc{\abs}[1]{\left| #1 \right|} \nc{\br}[1]{\left[ #1 \right]} \nc{\set}[1]{\left\{ #1 \right\}} \nc{\ip}[1]{\left \langle #1 \right \rangle} \nc{\n}[1]{\left\| #1 \right\|} \nc{\norm}[1]{\left\| #1 \right\|} \nc{\floor}[1]{\left \lfloor #1 \right \rfloor} \nc{\ceil}[1]{\left \lceil #1 \right \rceil} \nc{\setb}[2]{\set{#1 \, \middle| \, #2}} \nc{\dd}{\mathrm{d}} \nc{\dv}[2]{\frac{\dd #1}{\dd #2}} \nc{\p}{\partial} \nc{\pdv}[2]{\frac{\partial #1}{\partial #2}} \nc{\a}{\alpha} \nc{\b}{\beta} \nc{\g}{\gamma} \nc{\d}{\delta} \nc{\ve}{\varepsilon} \nc{\t}{\theta} \nc{\m}[1]{\begin{bmatrix} #1 \end{bmatrix}} \nc{\C}{\mathbb{C}} \nc{\N}{\mathbb{N}} \nc{\R}{\mathbb{R}} \nc{\P}{\mathbb{P}} \nc{\Q}{\mathbb{Q}} \nc{\Z}{\mathbb{Z}} \nc{\AA}{\mathcal{A}} \nc{\BB}{\mathcal{B}} \nc{\CC}{\mathcal{C}} \nc{\FF}{\mathcal{F}} \nc{\GG}{\mathcal{G}} \nc{\II}{\mathcal{I}} \nc{\JJ}{\mathcal{J}} \nc{\KK}{\mathcal{K}} \nc{\PP}{\mathcal{P}} \nc{\RR}{\mathcal{R}} \nc{\SS}{\mathcal{S}} \nc{\TT}{\mathcal{T}} \nc{\UU}{\mathcal{U}} V$ a vector space over a field $F$ , an inner product $\ip{\cdot,\cdot}$ and a norm $\n{\cdot}$ on $V$ are functions that meet certain properties. An inner product is special in this, given one, it defines a norm: $$ \norm{x} := \sqrt{\ip{x,x}} $$ However, a norm need not define an inner product. (After all, one needs to take in two vectors, and the other just one.) One can show, for instance, that a norm is induced by an inner product if and only if the norm satisfies the parallelogram law ( MSE post ): $$2 \n{x}^2 + 2 \n{y}^2 = \n{x-y}^2 + \n{x+y}^2$$ Examples would be the so-called $p$ -norms; when $p\ne 2$ , they are not induced by an inner product. Recall that we define, for $x := (x_i)_{i=1}^n \in \R^n$ , $$\n{x}_p := \para{ \sum_{i=1}^n \abs{x_i}^p }^{1/p}$$ (Note that our familiar Euclidean norm is the $p=2$ norm, and is induced by the dot product.) Now onto your question: essentially, how do inner products and angles relate? In $\R^n$ , under the usual scenarios (Euclidean distance and norm, inner product is the dot product), we may define the angle $\t$ between $x,y \in \R^n$ by $$\t = \arccos \para{ \frac{\ip{x,y}}{\n{x}\n{y}}}$$ This comes from one way of defining the dot product: $$\ip{x,y} := \n{x} \n{y} \cos \t$$ This $\t$ matches up with the angle we think of in the ordinary sense. We can see why the $\t$ arises in the following way... First, take it as given that we define $$ \ip{x,y} := \sum_{i=1}^n x_i y_i $$ One can prove a polarization identity of inner products: $$\ip{x,y} = \frac{\n{x+y}^2 - \n{x-y}^2}{4}$$ One also has the law of cosines. In the language of vectors, one has that $$\n{x-y}^2 = \n{x}^2 + \n{y}^2 - 2 \n{x} \n{y} \cos \t$$ for $\t$ (in the geometric sense) the angle between $x,y$ . But, using that this norm $\n \cdot$ is induced by $\ip{\cdot,\cdot}$ , and various properties of inner products in general, one has that $$\n{x-y}^2 = \n{x}^2 + \n{y}^2 - 2 \ip{x,y}$$ Equating these two thus yields $$\ip{x,y} = \n{x} \n{y} \cos \t$$ Of course, looking at an inner product in general, how much do we really know? I tell you that $\ip{x,y} = 0.35$ ; does this tell us anything? 
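A quick numerical check of the parallelogram-law criterion for a few $p$ -norms (a Python sketch; only $p=2$ should satisfy the law):

```python
def pnorm(v, p):
    return sum(abs(t) ** p for t in v) ** (1 / p)

x, y = (1.0, 0.0), (0.0, 1.0)
xpy = tuple(a + b for a, b in zip(x, y))  # x + y
xmy = tuple(a - b for a, b in zip(x, y))  # x - y
for p in (1, 2, 3):
    lhs = 2 * pnorm(x, p) ** 2 + 2 * pnorm(y, p) ** 2
    rhs = pnorm(xpy, p) ** 2 + pnorm(xmy, p) ** 2
    print(p, lhs, rhs)  # the two sides agree only when p == 2
```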
It does tell us one key property of very common interest all throughout mathematics -- that the vectors are not orthogonal . Two vectors are orthogonal if and only if $\ip{x,y} = 0$ . In the Euclidean- $\R^n$ sense, this amounts to meeting at right angles in the plane they span. Of course, we have long-since generalized this notion to other spaces, e.g. functions, on which the axioms of an inner product can be met, even if the notion of "angle" becomes fuzzy, because orthogonality makes it very easy to represent elements of a vector space in certain bases (bases of elements which are pairwise orthogonal). Much of the elegance and applicability of Fourier analysis, for instance, comes from the fact that $\set{\sin(kx),\cos(kx)}_{k=1}^\infty$ forms an orthogonal basis of square-integrable functions under the inner product $$\ip{f,g}_{L^2[-\pi,\pi]} := \int_{-L}^L f(x) g(x) \, \dd x$$ In particular, $$\begin{align*} \int_{-\pi}^\pi \sin(mx) \cos(nx) \, \dd x &= 0 \\ \int_{-\pi}^\pi \sin(mx) \sin(nx) \, \dd x &= \begin{cases} \pi, & m = n \\ 0 , & \text{otherwise} \end{cases} \\ \int_{-\pi}^\pi \cos(mx) \cos(nx) \, \dd x &= \begin{cases} \pi, & m = n \\ 0 , & \text{otherwise} \end{cases} \end{align*}$$ In fact, "nice enough" functions can be easily written as an infinite sum of (scaled and modulated) sines and cosines owing to this fact: a Fourier series.
{ "source": [ "https://math.stackexchange.com/questions/4627425", "https://math.stackexchange.com", "https://math.stackexchange.com/users/726560/" ] }
4,630,428
I would like to understand the reason behind this pattern: $$\begin{align} \sqrt 1 &= 1 \\[4pt] \sqrt{0.1} &= 0.31622 \\[4pt] \sqrt{0.01} &= 0.1 \\[4pt] \sqrt{0.001} &=0.03162 \\[4pt] \sqrt{0.0001}&=0.01 \\[4pt] \sqrt{0.00001}&=0.003162 \end{align}$$ I expected $\sqrt{0.1}$ to "behave" in a similar way to $\sqrt 1$ ... Why this alternating pattern? What does $3162\ldots$ represent? Does it represent an irrational number like $\pi$ , or a ratio like the golden ratio from the Fibonacci sequence? Edit: As some comments have kindly let me see, it all comes from $\sqrt{10} =3.16227766$ , so my question becomes: what does this number represent? I notice it's really close to $\pi$ ; are the two things related? Also, it is still not clear to me why the pattern alternates: why, for example, are $\sqrt{1} = 1$ and $\sqrt{0.01}=0.1$ and so on exact, while the others are something like $0.31622\ldots$ ?
Don't be discouraged by the comments. The numbers you are considering are of the form $\sqrt{10^{-n}} = 10^{\frac{-n}{2}}$ where $n$ is a natural number. If $n$ is even (like for $0.01, 0.0001$ ), then $\frac{n}{2}$ is also a natural number, call it $k$ . Then $10^{\frac{-n}{2}}=10^{-k}$ , which is $0.0\ldots010\ldots0$ with the $1$ in the $k$ -th decimal place. Now if $n$ is odd, write $n=2k+1$ (like for $0.1,0.001$ ); then $10^{-\frac{2k+1}{2}}=10^{-k-\frac{1}{2}} = 10^{-k}10^{-\frac{1}{2}} = 10^{-k} \cdot 0.3162\ldots$ So this is why the numbers seem "alternating". The fact that $\sqrt{10}$ is 3.16… (which happens to be close to $\pi$ ) is just because that happens to be the number that, when squared, is $10$ .
{ "source": [ "https://math.stackexchange.com/questions/4630428", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1145602/" ] }
4,633,221
I'm creating a program to find the real roots of any given polynomial. It takes a user-given polynomial as input and tries to find its roots. As an aid, mainly to check if I've missed any after the process, I'd like to know beforehand how many roots the polynomial has. I know nothing about the given polynomial: the only thing I can do with it is calculate its value for a given x, and I do not know its degree. The only lead I've found in this matter is Sturm's theorem, but I don't see how it can be applied in this program. I understand that given this restriction it may sound impossible, but since there are so many mathematical algorithms that work solely through function evaluation, I thought there might be one I am unaware of that could solve this.
My understanding of the question is that an algorithm is sought that will use the input polynomial as a black-box for computational function evaluation, but without knowing the nature of the polynomial or even its degree. In this case, I claim, there can be no algorithm for determining the number of real roots. To see this, suppose that we had such an algorithm. Apply it to the polynomial $x^2+1$ , say. The algorithm must eventually return the answer of 0 real roots. In the course of its computation, it will make a number of calls to the black-box input function. But only finitely many. Thus, the answer of 0 real roots was determined on the basis of those finitely many input/output values of the function. But there are many other polynomials that agree exactly on those input/output values, but which do have other roots. For example, let us imagine a new point $(a,0)$ where $a$ was not used as one of the black-box function calls during the computation. The finitely many points that were used plus this new point $(a,0)$ determine a polynomial of that degree (equal to the number of points), which has at least one real root. And the point is that the root-counting algorithm would behave exactly the same for this modified polynomial, because it agrees on the finitely many evaluated points, and so the computation would proceed just as for $x^2+1$ . The root-counting algorithm would therefore still give the answer of zero many real roots, but this answer would be wrong for this modified polynomial. So there can be no such algorithm.
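The adversary argument can be made concrete (a sketch with sympy; the queried points and the extra root are arbitrary choices of mine):

```python
from sympy import symbols, interpolate

x = symbols('x')
queried = [(-2, 5), (0, 1), (1, 2)]     # values of x**2 + 1 at the black-box queries
p = interpolate(queried + [(3, 0)], x)  # agrees with x**2 + 1 on every query...
print(p, p.subs(x, 3))                  # ...yet this polynomial has a real root at x = 3
```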
{ "source": [ "https://math.stackexchange.com/questions/4633221", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1146980/" ] }
4,636,892
I observed that, for two finite sets $A$ and $B$ , most relations $R \subseteq A \times B$ that are functions also 'appear to be' non-linear. It got me wondering whether this is true in the highly general case of functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ . Let $\Omega$ be the set of all linear functions $f: \mathbb{R}^n \to \mathbb{R}^m$ and $\Xi$ be the set of all non-linear maps $g: \mathbb{R}^n \to \mathbb{R}^m$ . Is there some measure $\mu : Q \to \mathbb{R}_{\geq 0}$ (or other precise way of quantifying the "size" of sets) that shows whether $\mu(\Omega) \leq \mu (\Xi)$ ?
The set of linear functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ is in correspondence with the set of $m\times n$ matrices with real coefficients. This is a basic result of Linear Algebra. This set has the same cardinality as the reals, namely $\mathfrak{c}=2^{\aleph_0}$ . The set of all functions from $\mathbb{R}$ to itself has cardinality $|\mathbb{R}|^{|\mathbb{R}|} = 2^{\mathfrak{c}}$ , which by Cantor's Theorem is strictly larger than $\mathfrak{c}=|\mathbb{R}|$ . The same is true for the set of all functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ .
{ "source": [ "https://math.stackexchange.com/questions/4636892", "https://math.stackexchange.com", "https://math.stackexchange.com/users/230586/" ] }
4,643,970
Bayesian probability is an alternative probability theory that uses data from past outcomes to predict future outcomes. Do Bayesians have some work-around for the gambler's fallacy, or do they just ignore it? Or is Bayesian probability literally just the gambler's fallacy by another name?
In fact, a Bayesian approach is the exact opposite of the gambler's fallacy. The gambler's fallacy supposes that if a certain event has happened more often than expected in a series of independent trials¹ - for example, a roulette wheel has been observed to mostly pick red numbers over the last day - it is likely to happen less often in the future, until it has evened out. A fallacious gambler will therefore bet on black in this instance, because a black is "due". The Bayesian approach is that, supposing there was some initial uncertainty about the exact probability of the event, the fact that it has happened more often provides some evidence for what the actual probability is, and it will tend to revise our estimate for the probability upwards. So a Bayesian gambler will therefore bet on red , reasoning that the wheel may be slightly biased in favour of red. This method of identifying small biases from observations of roulette tables was famously used to "break the bank" at Monte Carlo in the 19th century by Joseph Jagger . ¹ If the trials are not independent, e.g. cards dealt in blackjack, this may not be fallacious.
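A minimal sketch of the Bayesian update in Python (the uniform Beta(1, 1) prior and the 60-reds-in-100-spins count are illustrative choices of mine):

```python
# Beta(1, 1) prior on P(red); observe 60 reds in 100 spins
alpha, beta = 1 + 60, 1 + 40   # conjugate posterior is Beta(61, 41)
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 0.598...: the estimate moves towards red, not away from it
```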
{ "source": [ "https://math.stackexchange.com/questions/4643970", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1152553/" ] }
4,643,974
So first I show that $y$ is dependent on $x$ . Let $f(x,y)=x^y+\sin y$ then $f_y=x^y\ln x+\cos y$ and $f_y(1,0)=1\neq0$ . So if we derive $f$ with respect to $x$ we get $x^yy'\ln x+y'\cos y=0\iff y'(x)=0$ but $y'(x)=\frac{-f_x}{f_y}=\frac{-yx^{y-1}}{x^y\ln x+\cos y}$ . I should always be able to use the formula and directly derive and get the same answer right? So what is wrong here?
{ "source": [ "https://math.stackexchange.com/questions/4643974", "https://math.stackexchange.com", "https://math.stackexchange.com/users/995106/" ] }
5
Graphics are tightly integrated into the Mathematica interface. The Front End is programmable, and Mathematica has functions to interface with the web, so the question naturally comes up: Could we make it possible to upload images to StackExchange directly from Mathematica, using a palette button, and without the need to first save them to disk? See the answer below for an implementation!
Update 30th March 2015: As of today, the old repository will no longer be maintained. All the former functionality can now be found in the new SE Tools package. The online repository contains a very detailed description of how it works and how it can be installed.

Original Post

This is my implementation (with contributions from @halirutan and help from a number of people) of an image uploader palette, which I would like to share with the community to make it more convenient to use this site.

USAGE: When correctly installed, you should see a palette like this: To upload, just select a graphics (or any other part of the notebook), and press the "Upload to SE" button to get a preview before uploading: On Windows there are two buttons: "Upload to SE" and "Upload to SE (pp)". The "pp" (pixel perfect) one will rasterize the selected notebook element exactly the same way you see it in the notebook. Unfortunately I haven't (yet) been able to make this work on platforms other than Windows. The "Upload to SE" button will reformat everything to a width of 650 pixels and will discard any style/magnification information. You can also see the history using the "History..." button. This will show you your recent uploads and you can click on an existing image to copy its URL or clear the history. To update, simply use the Update... button. If there is an update available, this button will turn pink. The palette automatically checks for updates every few days (if it is open). You can also watch a screencast showing how to use the palette.

BUG REPORTS, SUGGESTIONS, DEVELOPMENT: Ideas, suggestions, code improvements, and problem reports are most welcome! Please use the GitHub bug tracker for bug reports or feature requests. Just comment on this post for anything else. The source code is available through GitHub. Feel free to fix problems yourself and send pull requests. This can even be done through the GitHub web interface.

KNOWN ISSUES

There are a few problems that have happened to people, but I am not able to reproduce them. If you can come up with a way to reproduce any of these, please contact me!

- Sometimes none of the palette buttons do anything. Pressing Update or History will not bring up a new window either. If this happens to you, close the palette and re-open it from the Palettes menu. If that doesn't fix the problem, then open the palette, evaluate ``SEUploader`checkOnlineVersion[]``, then close and re-open the palette.
- Sometimes no palette buttons show at all, just the Mathematica.SE logo on the left. If you can reproduce this, please contact me.
- The thumbnails of old uploads (upload history) may get corrupted for reasons unknown to me. The symptom is an error or hang when you press the History button or an error every time you try to upload. To fix this, first try clearing the history in the History... dialog. If the front end hangs when you try to open the History window, clear the history as follows:
  1. Close the palette and restart the Front End.
  2. Identify the file name of the palette. It's found here: `SystemOpen@FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes"}]`. Let's call it SE Uploader.nb.
  3. Evaluate `CurrentValue[$FrontEnd, {"PalettesMenuSettings", "SE Uploader.nb", TaggingRules}] = {}` and restart the front end. Make sure you use the correct file name for your system in place of SE Uploader.nb. Only use the file name, not the full path. Alternatively this front end option can be cleared using Format -> Option Inspector after selecting Global Preferences in the top left dropdown.
  4. Restart the Front End again, open the palette and check that the History... button brings up an empty window. If the problem was due to corrupted history entries, it should be fixed now.
- When running in HiDPI mode on OS X, there may be a thin line on the right edge of uploaded images.

UNINSTALLING

If you used the suggested method to install the palette, the following will remove it completely and clear all settings. This is useful if you are having problems with the palette and want to try reinstalling it. Close the palette, then evaluate the following:

```
DeleteFile[FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes", "SE Uploader.nb"}]]
CurrentValue[$FrontEnd, {"PalettesMenuSettings", "SE Uploader.nb"}] = {}
CurrentValue[$FrontEnd, TaggingRules] = DeleteCases[
  CurrentValue[$FrontEnd, TaggingRules],
  "SEUploaderLastUpdateCheck" | "SEUploaderVersion" -> _]
```

Now restart the front end (quit Mathematica completely).
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/5", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/12/" ] }
8
Contexts have backticks, which conflict with the normal way to enter inline code. How do I enter an inline context, since the initial approach: `System`` doesn't work ( `System`` ).
This is so common in Mathematica that I suggest this should be specially included in the editor help or the FAQ on this site. Let me list the most common usages: Contexts, for example, Global` . Markdown: ``Global` `` (note the space before the closing `` ) Precision of numbers, for example, 2.3`40 . Markdown: ``2.3`40`` Accuracy of numbers, for example, 2.3``3 . Markdown: ```2.3``3``` StringForm expressions, for example, StringForm["x = ``", x] Markdown: ```StringForm["x = ``", x]``` And here the question comes up, how to include a double backtick ( `` ) in inline code? Generally, if you want to include n consecutive backticks, surround the inline code span with n+1 backticks. For this reason, if the inline code has a backtick at the very end, you need to put a space after it. Don't worry, this space will be stripped when the MarkDown is rendered .
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/8", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/69/" ] }
244
Recently I have been posting an answer that is not yet complete and then continuing to make additions over the next few minutes. Undeniably I enjoy getting first post, but there is another reason that I have been doing this: to get some ink on the page so that others know what is coming. I personally dislike putting time into an answer only to see a longer and better one of the same method pop up before I post. I would rather that the person somehow inform me that an answer of type XYZ in imminent so that I would not start or continue working on mine. I think that a "stub" answer that actually contains meaningful content, and not just a comment or a "me first" placeholder, is reasonable and potentially helpful for the reason above. On the other hand if this practice is too much gamesmanship or simply annoys people I shall cease doing it. What does the community think of this?
Please do not take my answer as a critique of any regulars here The Mathematica site has a problem that is a good problem to have: we have some really expert, really dedicated users who can answer a wide range of questions very well . And these experts live right around the globe in many timezones, so they can answer just about any time. This is very good for people with questions. But it has a side effect that some might not see as so positive: it is hard for less experienced users (or those of us who can't -- or shouldn't -- devote much time to the site) to find a question they can answer, that hasn't already been answered really well. This will be a particular issue for those who learn by trying to find out the answer for themselves: speaking for myself, I am a much better Mathematica programmer since I started participating on this site and its StackOverflow tag predecessor. I have personally posted answers and then revised them quickly. Usually it's because I thought of something else to add rather than deliberately putting a stake in the ground. But the knowledge that there are fast-posting experts has definitely often added to my haste in clicking the Post Your Answer button. This is an inevitable issue as the site grows and attracts new people who would like to be regulars. I think the fact that mid-range-reputation users have posted answers to this question has shown how keen some relatively new users are. The regulars in the SO mathematica tag need to be mindful that we are growing a community here, and new members want to feel that they are contributing. This does not mean that high-rep users should hold off from posting, but breathing for five minutes every so often won't hurt. Some final, short observations designed to encourage people to post answers anyway. The existence of long and highly upvoted answers has not stopped some newer users from posting a subsequent answer to the same question ( recent example ). Indeed, sometimes it is worth doing so even if there is an accepted answer already . For the simple questions, there is often more than one way to do things, so more answers are fine. StackOverflow is actually designed around getting multiple answers. (Area 51 recommends an average of at least 2.5 per question.) Even if your answer isn't accepted, it will probably still attract some upvotes. Posting a duplicate inadvertently isn't a crime, and the later one might actually be explained better. You can always delete afterwards if necessary. You can gain reputation by posting some really awesome questions , too. Conclusion: while there is no harm in posting a partial answer and then editing, there is no great benefit to everyone else in doing so. The rest of us should remember that we shouldn't hold off from posting answers even if someone else has already done so, unless they really are duplicates. Addendum: I just wanted to note an important point exemplified by this recent post . b.gatessucks's answer was first, correct and accepted. But by providing a more extensive answer that explained in more detail why things work that way, my later answer currently has a couple more votes and the OP found it very useful. Personally, this is my favorite kind of answer to write. The question was quite simple and many of us could quickly come up with a drive-by solution, but I find building understanding to be more rewarding, regardless of the rep points.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/244", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/121/" ] }
274
I would like to start a discussion on how to deal with posts where the OP has written up a clear question, but s/he has also clearly not done his/her homework. There have been several cases recently. A couple of random examples:

- How to impose custom style to the edges of a graph
- Delete duplicate elements from a list
- Function for Autocorrelation
- How do you define the domain of a plot?

All of these take a simple search in the doc centre to find the solution, yet most of them receive very detailed replies that are essentially repeating the docs. However, these posts usually do get answers. There's always a nice and helpful soul who will write up one, often in great detail and repeating some parts of the documentation. Since the questions are answered, this encourages the asker to post more of these.

I am worried that this is going to be harmful to the community in the long term and might lead to burnout in some of the regulars (without them realizing the danger). Do you think this is a problem? What can we do to ameliorate this and encourage people to do their homework before asking? We cannot (and shouldn't) stop people from answering, but I worry that too many of these will both dilute the site and induce a tedium in some regular answerers that might push them to the opposite extreme or make them simply leave. Burnout is a real danger; it has happened to me in the past, and I stopped visiting the forum in question for months because of it.

I would especially like to hear from those who have been there when other beta sites have started, and have some actual experience. Somewhat related, though not the same: Help Vampires: A Spotter's Guide.
Horrible low-quality questions with unreadable "leetspeak" and incomprehensible illogic should be closed as unclear what you're asking. Duplicate questions should be closed quickly so as to avoid replication of efforts. Rude gimme the codez NOW!!!1!11 questions should be downvoted, closed, and otherwise ignored.

If these rules are followed, then I think that questions are really what the community makes of them. From your own list of examples, the delete-duplicates question gave me an opportunity to post this, which I was glad for and which over a dozen people appreciated. If the only answer to a simple question is a quote from the documentation (which I have done myself on more than one occasion), I think this shows a lack of imagination. If the question is really so drab and clear-cut that no other answer is appropriate, edit the question into something better, and answer that. If upgrading the question would result in it being a duplicate, then perhaps it should be closed as one anyway.

Of course there are going to be exceptions to this where the volume of low-quality questions cannot be handled gracefully. This then becomes the moderators' problem to keep the site running smoothly, and there is another forum for that discussion.

In the two and a half years since penning this answer I believe I have observed a shift in community reaction: we are more likely, and faster, to close simple questions. I believe this is partly due to the introduction of the "simple mistake ... or else it is easily found in the documentation" close reason. I have matched this apparent norm with my own behavior. This does, at least to a degree, contravene the policy I described above; however, it should not nullify it. Rather, it is my hope that the quick closes will help prevent wasted effort on questions that are trivial.

The principles above may still be applied to closed questions. Users are encouraged to transform "simple mistake / easily found" questions into more interesting ones whenever possible, again so long as a duplicate is not created. At any time, if a user has a nontrivial answer of general interest to give to a closed question, he should flag the question for moderator attention, state this, and request a reopening.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/274", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/12/" ] }
377
Just curious: is anyone employed by Wolfram officially representing the company in this forum? Also, is it reasonable to assume that these forums are thoroughly monitored by Wolfram's software testing team? If so, is there any need for us to forward a copy of posts describing unexpected behavior of the product that we believe might arise from a bug in the kernel, an error in the documentation, user interface issues, inaccurate domain knowledge, etc. to their support team?
To the best of my knowledge, none of the Wolfram employees is officially representing the company on the site. Certainly I'm not. I would not assume that just because an issue has been raised on the site, Wolfram Research is immediately aware of it. Speaking for myself, I know that there have been questions that I've missed the first time they came through, or which had late traffic that was of interest, etc.

Whether or not an issue initially raised here ends up being discussed within the company, there are several advantages to officially communicating with our support staff at [email protected]:

- it gets officially tracked, and someone in the company is guaranteed to look at it
- if the issue is in fact a bug, you'll be notified when it's fixed
- some additional weight is given to bugs with multiple external reporters
- the support staff is in a better position to diagnose issues that are specific to your environment and that the StackExchange community can't easily reproduce (in which case a StackExchange question may be closed as 'too localized')
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/377", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/1149/" ] }
571
I am new to Mathematica and find this site very helpful, much more so than the MathGroup that is listed on the community page of Mathematica (see the Wolfram Support page). So in the interest of getting even more attention from users of Mathematica, I am wondering what this community is doing to get listed on Wolfram's support page. If it is a policy of StackExchange to stay away from technology owners in the ecosystem, it would also be interesting to understand that.
I don't think that this should currently be one of our primary concerns. I actually think that we will develop better when people have to do a little research to find us. Besides, with the number of answered questions growing fast, and those answers being indexed by search engines, very soon it will be quite straightforward for anyone interested to find us. On the other hand, WRI takes some responsibility for anything the company puts on its official recommendation list. Since WRI has no control over this community (which I think is a good thing, in many respects), mentioning us there may be a responsibility they don't want to take. Finally, we are still quite young as a community, and it takes time for other interested parties (WRI etc.) to become convinced that we are here to stay. All in all, I would not worry about this much and just keep going. Things like that should, IMO, always be side effects of our true usefulness and popularity, not the other way around.
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/571", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/1635/" ] }
1,043
Do you hate the guys who try to optimize every single part of a process? I'm one of those. Believe me, it's even harder for me. When I sit next to someone, watching him write something, taking his hand off the keyboard to mark a portion of text with the mouse, making a right-click just to copy and paste... I could explode. Although I'm quite fast with our SE editor, there are some things which could be further optimized. One big first step was the Image Uploader. But there is another thing I have always wanted: a fast way to insert links to Wolfram's online documentation. If you look at the URL style of, e.g., http://reference.wolfram.com/mathematica/ref/Plot.html, you see that Plot can be exchanged with most other Mathematica functions and the URL still works, because most of them have a reference page. Now wouldn't it be awesome if we had an additional button on the editor toolbar which, when clicked, transforms the marked text PlotStyle into a link to its reference page, [PlotStyle](http://reference.wolfram.com/mathematica/ref/PlotStyle.html)? This won't work for all functions, but I'm sure it would help quite a lot. Another thing: I'm the theta kind of guy. While I can at least live with the θ guys, I absolutely hate the \[Theta] ones. I think everyone agrees that the last option is the least readable. If there were another editor button which just replaced the FullForm Greek characters in a marked text, I'm sure it would be used a lot and would help the readability of the code on our site. Question: Is there a way to extend the editor on our main site?
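To make the request concrete, here is a minimal Wolfram Language sketch of the two text transformations I have in mind; docLink and fullFormToGlyph are hypothetical names, the rule list covers only a few sample characters, and the real button would of course have to do this in the browser rather than in the kernel:

```
(* hypothetical sketch: wrap a selected symbol name in a Markdown link
   to its documentation page, following the URL scheme described above *)
docLink[name_String] :=
  "[" <> name <> "](http://reference.wolfram.com/mathematica/ref/" <> name <> ".html)"

docLink["PlotStyle"]
(* "[PlotStyle](http://reference.wolfram.com/mathematica/ref/PlotStyle.html)" *)

(* hypothetical sketch: replace FullForm character names with their glyphs;
   only a few sample rules are shown *)
fullFormToGlyph[code_String] :=
  StringReplace[code, {"\\[Theta]" -> "θ", "\\[Alpha]" -> "α", "\\[Pi]" -> "π"}]

fullFormToGlyph["Sin[\\[Theta]] + \\[Alpha] \\[Pi]"]
(* "Sin[θ] + α π" *)
```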
Quick installation for the impatient. Choose your browser below:

- Chrome or Vivaldi – Install from the Chrome Web Store.
- Firefox – Install Greasemonkey, then click here: m_toolbar.user.js
- Safari – Install NinjaKit, reload the page, then click here: m_toolbar.user.js. Extra instructions for MacOS 10.14 below.

Detailed instructions for any browser are below.

I hereby proudly present a browser user script (userscript) which adds the required functionality to the editor of mathematica.stackexchange.com. This script is a slightly changed version of the Ask Ubuntu Toolbar Buttons, which only exists due to the incredible work of Nathan Osman. Additionally, I added a button which is maybe rarely used, but when it is used, it saves a lot of tedious HTML-tag typing. With it you can insert keyboard shortcuts easily: just click the button and enter the keys separated by spaces. Therefore, when you type Ctrl C in the dialog you get Ctrl + C.

Installation

Chrome

Chrome users can download this extension directly from the Google Chrome Web Store. Additionally, the script can be found on GitHub, where you can have a look at the code or download it. To install it, please use this direct link to m_toolbar.user.js and install it into your browser. I have tested the script on Linux and Mac OS X. In Chrome you install it by storing the file m_toolbar.user.js locally on your hard drive. Then you go in Chrome to Menu -> Tools -> Extensions and drag & drop the file there. After a reload of the SE page, the buttons should appear.

Safari

In Safari (I tried OS X 10.6 and 10.8) one easy way is to use NinjaKit, which is an extension that lets you install user scripts. First, download the file NinjaKit.safariextz and install it by double-clicking it. After that you should see a ninja-star-like button in Safari where you can manage your user scripts.

Safari 13: The following workaround using "Extension Builder" does not work anymore. MacOS 10.14: Extensions such as NinjaKit are deprecated as of MacOS 10.14. Use the workaround detailed here to get around this problem. You may need to re-enable NinjaKit every time you restart Safari, using the Safari menu Develop > Show Extension Builder.

Second, click on the link to m_toolbar.user.js. You should see an installation pop-up; a click on Install finishes everything up. Another click on the ninja-star button now shows you the installed Mathematica script.

If clicking on the above link to m_toolbar.user.js doesn't work, try the following:

1. Download [m_toolbar.user.js](https://github.com/halirutan/SE-Editor-Buttons/raw/master/src/m_toolbar.user.js) as a text file.
2. Click the NinjaKit button, then NinjaKit's "Add new script" button.
3. Copy the downloaded m_toolbar.user.js and paste it into the "Add new script" window, overwriting the skeleton script that's already there.
4. Click the Save button at the very bottom of the window. (The name of the script embedded in the copied .js will automatically be used for the name.)

Other browsers

For other browsers, please read the existing article on StackApps about how to install user scripts.

Update:

8. July 2013

- Fixed an issue concerning canceled dialog boxes and empty input
- Updated tool-tips for better English. Thanks to m_goldberg for the help.

23. October 2013

- The dependency jquery.livequery.js is now loaded from Google Code
- Introduced another button for stripping In[3]:= and Out[3]= marks. To use it you have to select the complete code block where the I of In is the first letter in the selection.
Pressing the button removes the marks and comments out the output.
- Added \[Element] to the list of replaced glyphs
- The button style is now more consistent with the webpage.

27. October 2013

- ssch was so kind as to extend the list of Unicode characters with all characters from the Listing of Named Characters which have a Unicode representation.

28. October 2013

- ssch extended the In[] / Out[] cell-label remover to work with several inputs and outputs. Additionally, it can handle all kinds of Forms (like FullForm or Short).
- I moved the code into an official repository, which is linked under the section Installation above.

21. April 2015

- The toolbar can now officially be downloaded from the Google Chrome Web Store.
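For the curious, here is a rough Wolfram Language sketch of what the label-stripping button does to a selected code block. The real implementation is JavaScript running in the browser, stripLabels is a hypothetical name, and the actual toolbar handles more label formats than shown here:

```
(* hypothetical sketch: strip In[n]:= labels and turn Out[n]= lines into comments *)
stripLabels[code_String] := StringReplace[code, {
    StartOfLine ~~ "In[" ~~ DigitCharacter .. ~~ "]:= " -> "",
    StartOfLine ~~ "Out[" ~~ DigitCharacter .. ~~ "]= " ~~ rest : Except["\n"] .. :>
      "(* " <> rest <> " *)"
  }]

stripLabels["In[3]:= Integrate[x, x]\nOut[3]= x^2/2"]
(* "Integrate[x, x]\n(* x^2/2 *)" *)
```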
{ "source": [ "https://mathematica.meta.stackexchange.com/questions/1043", "https://mathematica.meta.stackexchange.com", "https://mathematica.meta.stackexchange.com/users/187/" ] }