source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
3,383,769 |
I am just beginning to learn about elliptic functions. Wikipedia defines an elliptic function as a function which is meromorphic on $\Bbb C$ , and for which there exist two non-zero complex numbers $\omega_1$ and $\omega_2$ , with $\frac{\omega_1}{\omega_2}\not\in \Bbb R$ , which satisfy $$f(z)=f(z+\omega_1)=f(z+\omega_2).$$ That's all fine and dandy, but what does this have to do with an ellipse? I sort of know (but not really) about the Jacobi elliptic functions. I am told by the internet that the Jacobi elliptic functions can be defined as inverses of elliptic integrals, which relate to the arc lengths of ellipses. But other than that, I have no idea how elliptic functions relate to ellipses. I have looked at several sources, like this , this , and this . From what I can understand, any elliptic function can be expressed in terms of the Jacobi elliptic functions and Weierstrass elliptic functions, but I have yet to understand why that is true. Perhaps it has something to do with what ODE's elliptic functions satisfy? I do not know. I would really appreciate some help and/or a good source on the introduction to the study of elliptic functions in the context of elliptic integrals, because I do work best with integrals. Thanks!
|
The theory of elliptic functions started with elliptic integrals and the key players were Gauss, Legendre, Abel, Jacobi and finally Ramanujan. A parallel approach using complex analysis was developed by Weierstrass. I will present a brief outline of the approach based on elliptic integrals and at the end mention a thing or two about the complex-analytic approach. Elliptic integrals arise while evaluating the arc length of an ellipse. If the equation of the ellipse is $$x=a\cos t, y=b\sin t$$ then the arc length is given by $$L(t) =\int_{0}^{t}\sqrt{a^2\sin^2x +b^2\cos^2x}\,dx$$ The above is a typical (but slightly difficult) example of an elliptic integral. In standard notation we define the elliptic integral of the first kind via $$u=F(\phi, k) =\int_{0}^{\phi}\frac{dx} {\sqrt{1-k^2\sin^2x}}, \phi\in\mathbb {R}, k\in(0,1)$$ The parameter $k$ is a fixed constant called the modulus. Sometimes one uses the parameter $m$ instead of $k^2$ and then the notation is $F(\phi\mid m)$. Since the integrand is positive it follows that $u=F(\phi, k)$ is a strictly increasing function of $\phi$ and therefore is invertible. We write $\phi=\operatorname{am} (u, k)$ and say that $\phi$ is the amplitude of $u$. The elliptic functions are then defined by \begin{align}
\operatorname {sn} (u, k) & =\sin\operatorname {am} (u, k) =\sin\phi\notag\\
\operatorname {cn} (u, k) & =\cos\operatorname {am} (u, k) =\cos\phi\notag\\
\operatorname {dn} (u, k) & =\sqrt{1-k^2\operatorname {sn} ^2(u,k)}\notag
\end{align} It appears from the above definition that the parameter $k$ is a silent spectator, but the most interesting aspects of the theory are hidden in $k$. To deal with it we fix $\phi$ and define two integrals $$K(k) =\int_{0}^{\pi/2}\frac{dx}{\sqrt{1-k^2\sin^2x}},E(k)=\int_{0}^{\pi/2}\sqrt{1-k^2\sin^2x}\,dx$$ and introduce the complementary modulus $k'=\sqrt{1-k^2}$. The above integrals satisfy a key relation $$K(k) E(k') +K(k') E(k) - K(k) K(k') =\frac{\pi} {2}$$ which goes by the name of Legendre's identity. Usually if the value of $k$ is known from context one writes $K, K', E, E'$ instead of $K(k), K(k'), E(k), E(k')$. If $k=0$ or $k=1$ the elliptic integrals reduce to elementary functions and the magical properties of elliptic integrals and functions (yet to be described) vanish. Elliptic functions satisfy addition formulas like the circular functions: functions of argument $u+v$ can be expressed using functions of $u$ and $v$. The key aspect is that the formula uses an algebraic combination of functions of $u, v$. The converse also holds: any sufficiently nice function (the keyword is analytic) with an algebraic addition formula is necessarily an elliptic function, a circular function, or a rational function. The key formula here is $$\operatorname {sn} (u+v) =\frac{\operatorname {sn} u\operatorname {cn} v\operatorname {dn} v+\operatorname {sn} v\operatorname {cn} u\operatorname {dn} u} {1-k^2\operatorname {sn} ^2u\operatorname {sn} ^2v} $$ The formula is usually proved via smart use of the derivatives of elliptic functions (which themselves can be obtained from the definitions). The truly magical property of elliptic functions (with no analogue for circular functions) is the set of transformation formulas between elliptic functions of two different but related moduli. For each positive integer $n>1$ there are two sets of transformation formulas: one which relates the elliptic functions of a given modulus with those of a greater modulus (the ascending transformation) and another which relates them with those of a lesser modulus (the descending transformation). The simplest case is $n=2$, famously known as the Landen transformation. The transformation is neither obvious nor easy to prove. The ascending transformation (due to John Landen) starts with $$u=\int_{0}^{\phi}\frac{dx}{\sqrt{1-k^2\sin^2x}}$$ and uses the substitution $$\sin(2t-x)=k\sin x$$ and after reasonable algebra one gets $$\frac{dx} {\sqrt{1-k^2\sin^2x}}=\frac{2}{1+k}\cdot\frac{dt}{\sqrt{1-l^2\sin^2t}}$$ where $l=2\sqrt{k}/(1+k)$.
The corresponding formula for elliptic functions is $$\operatorname {sn} (u, k) =\frac{2}{1+k}\cdot\dfrac{\operatorname {sn} \left(\dfrac{(1+k)u} {2},\dfrac{2\sqrt{k}}{1+k} \right)\operatorname {cn} \left(\dfrac{(1+k)u} {2},\dfrac{2\sqrt{k}}{1+k} \right) }{\operatorname {dn} \left(\dfrac{(1+k)u} {2},\dfrac{2\sqrt{k}}{1+k} \right)}$$ The descending transformation uses the substitution (given by Gauss) $$\sin t=\frac{(1+k)\sin x} {1+k\sin^2x}$$ to get $$\frac{dt} {\sqrt{1-l^2\sin^2t} }= (1+k)\frac{dx}{\sqrt{1-k^2\sin^2x}}$$ The corresponding formula for elliptic functions is $$\operatorname {sn} \left((1+k)u, \frac{2\sqrt{k}}{1+k}\right)=\frac{(1+k)\operatorname {sn} (u, k) } {1+k\operatorname {sn} ^2(u,k)} $$ Even more important is the relationship between $K(k), K(k'),K(l), K(l')$ (typically denoted by $K, K', L, L'$) $$L=(1+k) K, K=\frac{1+l'}{2}\cdot L$$ It can be seen that the relationship between $k, l$ is the same as that between $l', k'$ and hence we get $$K'=(1+l') L', L'=\frac{1+k}{2}\cdot K'$$ From the above two relations we get $$\frac{K'} {K} =2\cdot\frac{L'}{L}$$ Jacobi further gave transformation formulas when $n$ is prime and showed that the relationship between $k, l$ is algebraic and $K'/K=nL'/L$. The theory can be extended to all values of $n$ and the above result still holds. Given a positive integer $n$, finding an algebraic relationship between the moduli $k, l$ such that $K'/K=nL'/L$ is a computational challenge. Such a relationship is called a modular equation of degree $n$. Using transformation theory Jacobi derived infinite product and series representations for elliptic functions. A key parameter in such representations is $q=e^{-\pi K'/K}$, called the nome corresponding to the modulus $k$. Jacobi introduced his theta functions, which use the nome $q$, and expressed elliptic functions as ratios of theta functions. Theta functions are very interesting in their own right, with wide applications in other fields (number theory, for example), and their beauty lies in the large number of algebraic relationships between them. Jacobi gave a complete description of theta functions and a host of formulas related to elliptic and theta functions. Ramanujan fell in love with these topics and developed his own theory of theta functions and elliptic functions using different notation and techniques, going far beyond Jacobi. He had almost magical powers in this field and to date no one knows how he derived the large number of modular equations and related formulas he left behind. Most of his results have only been verified using symbolic software. Notably, both Jacobi and Ramanujan avoided complex-analytic techniques (Ramanujan had no serious acquaintance with complex analysis, yet his achievements remain unparalleled in the field of elliptic function theory). Liouville and Weierstrass, on the other hand, championed the methods of complex analysis for dealing with elliptic functions. The starting point in this approach is the study of doubly periodic functions: one learns that elliptic functions are doubly periodic and, conversely, doubly periodic functions can be expressed in terms of elliptic functions. In this approach elliptic integrals take a back seat, and the transformation theory of Jacobi and the modular equations of Ramanujan are presented in a very different framework called modular forms. Another important feature of elliptic functions is complex multiplication.
Using the addition formula for elliptic functions it is easy to see that if $n$ is a positive integer then we can express elliptic functions of argument $nu$ in terms of elliptic functions of argument $u$. However it turns out that for some values of $k$ there exists a complex number $\alpha\in\mathbb{C} \setminus \mathbb{R}$ such that elliptic functions of argument $\alpha u$ can be expressed in terms of functions of argument $u$. This happens only when the value of $k$ is such that $K'/K$ is the square root of a rational number. Under these circumstances $k$ turns out to be an algebraic number. Let $n$ be a positive integer, let $F$ be the smallest subfield of $\mathbb{C}$ which contains the imaginary number $i\sqrt{n}$, and let $\mathbb{Z} _{F}$ be the set of algebraic integers in $F$. Let $k\in(0,1)$ be such that $K'/K=\sqrt{n}$. Then for any $\alpha\in\mathbb {Z} _F$ we can express $\operatorname {sn} (\alpha u, k)$ in terms of $\operatorname {sn} (u, k)$. The link between elliptic function theory and imaginary quadratic extensions of $\mathbb{Q}$ is the most fascinating and difficult one. Abel worked in this direction and Kronecker understood its importance well. Kronecker was working on his theorem regarding abelian extensions of $\mathbb{Q}$ and realized that a similar result would hold for abelian extensions of imaginary quadratic extensions of $\mathbb{Q}$, with elliptic functions playing a central role there. All of this later developed into class field theory.
|
{
"source": [
"https://math.stackexchange.com/questions/3383769",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/583016/"
]
}
|
3,383,771 |
I have the following integral, where $a$ is a constant: $$\int x^2\sqrt {1+4a^2x^2}dx$$ When I tried to solve this through substitutions such as $u=4a^2x^2$ or $u=1+4a^2x^2$ or through integration by parts, I just ended up making the expression even more complicated and even harder to solve. When I asked for help, I got answers asking me to do it through trigonometric substitutions such as $$x=\tan(u)/2a$$ As I have not learnt this yet, is there another way of solving this integral without using a complicated substitution like the above? Upon researching, I found this answer , but I was not sure how to connect it to my question. Edit: This integral was obtained while solving for the surface area of a wine barrel formed through a solid of revolution. The equation for the solid of revolution was a parabola: $$f(x) = Ax^2 + B$$ This was substituted into the surface area formula for the solid formed when this is rotated fully about the x-axis: $$\int_a^b2f(x)\pi \sqrt{1+f'(x)^2}dx$$ I split the integral into 2 parts and factored out the $A$ to obtain the simplified integral above. Edit: Mike's answer seems to be working; however, as the partial fraction decomposition was very lengthy I used Wolfram's partial fraction decomposer tool and got the following result: decomposition into partial fractions result
|
|
{
"source": [
"https://math.stackexchange.com/questions/3383771",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/711737/"
]
}
|
3,390,870 |
The Fibonacci sequence is $0, 1, 1, 2, 3, 5, 8, 13, 21, 34,\ldots$ , where each term after the first two is the sum of the two previous terms. Can we find the next Fibonacci number if we are given any Fibonacci number? For example, if $n = 8$ then the answer should be $13$ because $13$ is the next Fibonacci number after $8$ .
|
The ratio of any two consecutive entries in the Fibonacci sequence rapidly approaches $\varphi=\frac{1+\sqrt5}2$ . So if you multiply your number by $\frac{1+\sqrt5}2$ and round to the nearest integer, you will get the next term unless you're at the very beginning of the sequence.
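For instance, a minimal Python sketch of this recipe (the function name and the use of `round` are my own choices; as noted above, it fails at the very beginning of the sequence):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, ~1.6180339887

def next_fibonacci(n: int) -> int:
    """Return the Fibonacci number following n, assuming n is a
    Fibonacci number >= 2 (the rounding trick fails before that)."""
    return round(n * PHI)

print(next_fibonacci(8))   # 13
print(next_fibonacci(34))  # 55
```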
|
{
"source": [
"https://math.stackexchange.com/questions/3390870",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/670843/"
]
}
|
3,400,721 |
On this MIT lecture , the difference between the heat equation and the wave equation includes a signal travelling infinitely fast in the heat equation, while it travels at finite speed in the wave equation. I guess I don't get the idea of "signal", because in the heat equation there is a partial derivative with respect to time of the function that assigns a temperature to each point in space at each time, and this partial derivative with respect to time, which is not infinite, would seem to be the speed. So what is it exactly that travels infinitely fast? After a Google search it seems as though this is the "paradox of instantaneous heat propagation"; it gets into other equations, and even has a relativistic counterpart. In other words, I couldn't find any entry-level explanations.
|
Say you have a one dimensional heat problem. (Adding dimensions does not change the results, but does make drawings harder.) Initially, the line is at temperature $0$ . At time $t = 0$ , you drop some heat on the origin, say $f(x,0) = \delta(x)$ and then let the dynamics run. (Here, $\delta$ is the Dirac delta function . This is a distribution. It can be defined as the limit of a Gaussian distribution as the standard deviation decreases to zero -- the distribution is zero everywhere except at $x=0$ and the integral of the distribution over all of space is $1$ .) For all time afterwards, the distribution of heat is a Gaussian distribution with positive (and increasing) standard deviation. This distribution is nonzero everywhere. This means that immediately after time zero, the heat at every point in the universe jumps to a positive amount. That is, the information that heat was added at the origin is instantaneously transmitted to every point in the universe.
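Concretely (assuming the normalization $u_t = u_{xx}$, i.e. diffusion constant $1$), the solution with initial data $\delta(x)$ is the heat kernel $$u(x,t)=\frac{1}{\sqrt{4\pi t}}\,e^{-x^2/4t},\qquad t>0,$$ which is strictly positive at every $x$ for every $t>0$, however small.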
|
{
"source": [
"https://math.stackexchange.com/questions/3400721",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/152225/"
]
}
|
3,403,076 |
I'm going through my first year of teaching AP Calculus. One of the things I like to do is to impress upon my students why the topics I introduce are interesting and relevant to the big picture of understanding the nature of change. That being said, while I know that the Mean Value Theorem is one of the central facts in the study of calculus, I'm not really clear on why. I feel that it's a bit like IVT for the derivatives of a continuous and differentiable function. But I feel that the only thing I did with it when I studied Calc is to identify the point where the tangent was parallel to the secant of the endpoints. If the class had been a little smaller or I had been a little bolder, I might have raised my hand and asked the professor "So what?" But I didn't, so here we are. To be clear, I am not at all arguing that MVT is not critical, so I don't plan on the answers being opinion-based. But can you discuss some uses of MVT that justify the lofty place it has in the curriculum?
|
When I teach the MVT in a Calculus class, I do three things: a) Show the one real-world example I know and which everyone gets: The police have two radar checkpoints on a highway, say at kilometre $11$ and at kilometre $20$ . The speed limit is $70$ km/h. They measure a truck going through the first checkpoint, at 11.11am, at $65$ km/h, and going through the second checkpoint at 11.17am, at $67$ km/h. They issue a speeding ticket. Why? Let the class think about this. Every time I've taught this, someone realised after a short while that the truck passed $9$ km in $6$ minutes, so its average speed was $90$ km/h. Then someone says something like: you cannot go at an average speed of $90$ km/h without ever going at a speed of $90$ km/h (and certainly not without ever going more than $70$ km/h). This is totally common sense, but also it is exactly the MVT. Draw a graph of the position function, realise that the numbers $65$ and $67$ were just red herrings (tangent slopes at the endpoints, irrelevant to the argument), discuss whether there is some way out: Can the function have discontinuities? Well, a jump discontinuity would be a wormhole the truck fell through, or more realistically some shortcut off-highway, which would be illegal too. Points where the derivative does not exist? Actually yes, if the truck braked somewhere, but it cannot have done that more than finitely many times, and then we break down the problem into subintervals. Turns out: No, even sharp braking cannot create a "sharp turn" of the function under standard assumptions of physics, see comments by users @leftaroundabout and @llama. b) Mention that aside from that, it is a "workhorse theorem" which we never see but which makes the entire curve sketching routine work. How do you prove Positive derivative means increasing function : with the MVT. How do you prove Derivative $0$ on an interval means constant : with the MVT. Of course we never think of the proofs of those, we just use them as "well-known", but without the MVT, they would not be there. c) Related to b), it comes up crucially in the Fundamental Theorem later, compare Arturo Magidin's answer. I point it out when I'm there. Added : As this answer seems to get a lot of attention, I want to put in one more thing, part of which I try to get across in class when the MVT is up. d) The derivative is a cool thing because it carries a lot of information about the original function, but in a subtle way. To the non-initiated, the graphs of an $f$ and $f'$ would most often look totally unrelated. But the initiated, i.e. your calculus class, at this point should already "get" intuitively "hmm, $f'$ is very negative around here, so $f$ should decrease with a steep slope in this neighbourhood". Now the MVT is the one theorem which attaches actual numbers to this intuition: it is the first result which gives an explicit (albeit subtle) relation between values of $f$ and values of $f'$ . That is why it underlies the proofs of all the fancy machinery that, later, gives seemingly much stronger relations between $f$ and $f'$ , like Curve Sketching, the Fundamental Theorem, Taylor Series, and even L'Hôpital's rules (thanks @JavaMan for pointing out this one). They get all the limelight, but in a way, they all are refined versions of repeated applications of the MVT plus special conditions.
Further update: Since the "speeding" application of the MVT keeps getting mentioned everywhere (and of course I don't even remember where I got it from originally), I googled a little and see that it's been around for quite a while. This educational video of the MAA's from 1966 is almost of historical value (although I hardly understand the voice-over due to its very American accent). As for the question whether this is actually done, thanks to User Bracco23 for providing one source from Italy in a comment. Here is another one from Scotland: http://news.bbc.co.uk/2/hi/uk_news/scotland/4681507.stm The internet has more hearsay and debates: 1 2 3 . Cf. also this answer .
|
{
"source": [
"https://math.stackexchange.com/questions/3403076",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
3,403,819 |
I happened to watch the video here,
which gives a solution to the definite integral below using the power series approach. The answer is $\frac{\pi^2}{6}$ , given by: $$\int_0^1 \frac{\ln x}{x-1}dx=\int_{-1}^0 \frac{\ln(1+u)}{u}du=\sum_{n=0}^{\infty}\frac{1}{(n+1)^2}=\frac{\pi^2}{6},$$ where the power series expansion of the function $\ln(1+u)$ is used. I tried for some time, but could not find another approach. Does anyone know any alternative methods to evaluate the above definite integral without using the infinite series expansion? Any comments, or ideas, are really appreciated.
|
Here is a way to integrate without resorting to power series. Note \begin{align}
\int_0^1\frac{\ln x}{1-x}dx& =\frac43\int_0^1 \frac{\ln x }{1-x}dx -\frac13\int_0^1 { \frac{\ln x }{1-x} } \overset{x\to x^2}{dx} \\
&= \frac43\int_0^1 \frac{\ln x}{1-x^2}dx
= \frac23\int_0^\infty \frac{\ln x}{1-x^2}dx=\frac23J(1)
\end{align} where $ J(\alpha) =-\frac 12 \int_0^\infty \frac{\ln (1-\alpha^2 + \alpha^2 x^2)}{x^2-1}dx $ $$ J'(\alpha) =-\int_0^\infty \frac{\alpha dx}{1-\alpha^2 + \alpha^2 x^2} = -\frac{\pi/2}{\sqrt{1-\alpha^2}}$$ Thus $$ \int_0^1\frac{\ln x}{1-x}dx = \frac23\int_0^1 J'(\alpha) d\alpha =-\frac{\pi}{3}\int_0^1 \frac{d\alpha}{\sqrt{1-\alpha^2}}= -\frac{\pi^2}{6}$$
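As a quick numerical sanity check (a sketch assuming SciPy is available; the integrand's singularity at $x=0$ is integrable and the point $x=1$ is removable):

```python
import numpy as np
from scipy.integrate import quad

# Integrable log singularity at x = 0; removable point at x = 1.
value, abs_err = quad(lambda x: np.log(x) / (1 - x), 0, 1)
print(value)          # ~ -1.6449340668
print(-np.pi**2 / 6)  # ~ -1.6449340668
```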
|
{
"source": [
"https://math.stackexchange.com/questions/3403819",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/11211/"
]
}
|
3,403,822 |
Let $T_n$ be the permutation matrix that interchanges rows of an $n \times n$ matrix in the following way: row $j$ is moved to row $j + 1$ for $j \in \{1, 2, \dots, n − 1\}$ and the last row is moved to the first. Find $\det(T_3)$ . I really don't know where to start with this problem. I tried to think of a $3 \times 3$ matrix and just follow the interchanges, but I'm not sure if that's the right way of solving this problem. Any help is appreciated.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3403822",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/635433/"
]
}
|
3,405,684 |
I have been recently reading about the Weierstrass function, a function that is continuous everywhere but differentiable nowhere. It made me think of a similar puzzle involving functions: find $f: \mathbb R \to \mathbb R$ such that $f$ can be computed anywhere, is well defined, but is continuous nowhere. I first thought of maybe mapping the reals on to a fractal and doing something with that point but that’s just a fuzzy idea and I doubt one could compute it everywhere. In my research I could find no such function that is defined for all real numbers, both rational and irrational. If anyone has a proof this is impossible (or even just an idea of how you might prove that), or an example of a function that has those properties, that would be great.
|
First off, the "majority" of functions (where majority is defined properly) have this property, but are insanely hard to describe. An easy example, though, of a function $f:\mathbb R\to\mathbb R$ with the aforementioned property is $$f(x)=\begin{cases}x&x\in\mathbb Q\\x+1&x\notin\mathbb Q\end{cases}$$ This example has the added benefit of being a bijection!
|
{
"source": [
"https://math.stackexchange.com/questions/3405684",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/617984/"
]
}
|
3,405,690 |
I am having trouble solving this problem. An ambulance goes up and down a street of length $L$ at constant speed. At some moment an accident happens at a random point of the street [meaning that the distance between this point and the beginning of the street is uniformly distributed on $(0,L)$ ]. Assuming that the position of the ambulance at that moment is, independently of the accident, itself uniformly distributed on $(0,L)$ , calculate the distribution of its distance from the point of the accident. Now, it's clear that I have to calculate the distribution of $Z=|X-Y|$ with $X \perp Y \sim U(0,L)$ but I'm having a hard time handling the absolute value and setting the limits of integration. Any ideas? Thanks in advance to everyone!
|
First off, the "majority" of functions (where majority is defined properly) have this property, but are insanely hard to describe. An easy example, though, of a function $f:\mathbb R\to\mathbb R$ with the aforementioned property is $$f(x)=\begin{cases}x&x\in\mathbb Q\\x+1&x\notin\mathbb Q\end{cases}$$ This example has the added benefit of being a bijection!
|
{
"source": [
"https://math.stackexchange.com/questions/3405690",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/592540/"
]
}
|
3,406,925 |
There's an RPG (Role-Playing Game) called Numenera , set on the Earth a billion years in the future, which is covered in the partially-functional and generally weird and mysterious technological ruins of a billion years of advanced civilizations. In one of its sourcebooks, it lists possible fragmentary transmissions that a player character might receive from its global "datasphere", one of which is listed as the following: An irrational number that may be a four-dimensional equivalent of $\pi$ . When I saw this, my first thought was "I'm pretty sure that's probably just pi multiplied by a constant", followed by "If it's not, I'm sure mathematicians have already worked it out." When I did a Google search, I couldn't find anything obvious. So, what is the equivalent to pi for a four-dimensional hypersphere?
|
You're close . The "volume" of a $4$ -dimensional ball is given by $$
V = \frac{\pi^2}{2}R^4
$$ and its "surface area" is given by $$
S = 2 \pi^2 R^3.
$$ If we take the $n$ -dimensional equivalent of $\pi$ to be the ratio between the volume of the $n$ -ball and $R^n$ (the volume of the $n$ -cube with side length $R$ ), then the 4-D equivalent of $\pi$ is $\frac{\pi^2}{2}$ . More generally, we would have (for positive integers $k$ ) $$
\begin{align}\pi_{2k} &= \frac{\pi^k}{k!}, \\
\pi_{2k+1} &= \frac{2^{k+1}\pi^k}{(2k+1)!!} = \frac{2(k!)(4\pi)^k}{(2k+1)!}.\end{align}
$$ where $\pi_n$ is the $n$ -dimensional equivalent of $\pi$ . Because $\pi$ is known to be transcendental, we can conclude that these are all irrational (and transcendental as well).
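A short Python sketch checking these values against the closed form $V_n=\pi^{n/2}/\Gamma(n/2+1)$ for the unit ball (the function name is my own):

```python
import math

def pi_n(n: int) -> float:
    """n-dimensional analogue of pi: the volume of the unit n-ball,
    computed via the closed form pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

print(pi_n(2))  # pi      ~ 3.14159...
print(pi_n(3))  # 4*pi/3  ~ 4.18879...
print(pi_n(4))  # pi^2/2  ~ 4.93480...
```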
|
{
"source": [
"https://math.stackexchange.com/questions/3406925",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/514365/"
]
}
|
3,414,219 |
I have trouble understanding a whole array of things in complex analysis, which I have basically tracked to the statement "real and imaginary parts of a complex analytic function are not independent." Because of that, I don't really understand the Cauchy-Riemann equations, the fact that for an analytic function, if its real part is constant, then the whole function is constant, and other fundamental things, such as Cauchy's Integral formula, Maximum modulus principle, etc. (the last two just make zero sense to me.) The thing is, I pretty much understand the proofs, starting from the beginning, when we define differentiability of a complex function. I don't have any problems with the introduction of complex numbers as well, and different identities. But I just don't have any intuition for why things are like that, and it's very frustrating, because I always feel like I don't understand complex numbers at all, and just do some standard exercises in class, relying on proven facts that I just assume to be true as a starting point. But as soon as I go and try to understand the meaning of things we in the class work with, I just immediately stop understanding anything. Can anyone help me understand why the real and imaginary parts of a complex function are not independent?
|
It's really just a question of the definition of the derivative. If $z=x+yi,$ $f(z)=u(x,y)+iv(x,y)$ can be any pair of functions $u,v.$ But if $f$ is differentiable, then in $$f'(z)=\lim_{h\to 0}\frac{f(z+h)-f(z)}{h}\tag{1}$$ $h$ can approach $0$ in many different ways, since $h$ is complex. For example, you can have $h\to 0$ on the real line. Then: $$f'(z)=\frac{\partial u}{\partial x} +i\frac{\partial v}{\partial x}$$ But if $h\to 0$ along the imaginary axis, then: $$\begin{align}f'(z)&=\frac{1}{i}\left(\frac{\partial u}{\partial y}+i\frac{\partial v}{\partial y}\right)\\
&=\frac{\partial v}{\partial y}-i\frac{\partial u}{\partial y}
\end{align}$$ So for the limit to be independent of the path along which $h\to 0$, you must have at minimum that $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\\\frac{\partial v}{\partial x}=-\frac{\partial u}{\partial y}\tag{2}$$ So for (1) to be true, we need $u,v$ to satisfy the differential equations in (2). It turns out that $(2)$ (together with continuity of the partial derivatives) is enough to ensure that $(1)$ converges to a single value, but that is not 100% obvious. The equations in (2) are called the Cauchy-Riemann equations . Another way of looking at it is, given a function $f:\mathbb R^2\to\mathbb R^2$ mapping $\begin{pmatrix}x\\y\end{pmatrix}\mapsto \begin{pmatrix}u(x,y)\\v(x,y)\end{pmatrix}$ there is the standard matrix derivative from multivariable calculus: $$Df\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}\frac{\partial u}{\partial x}&\frac{\partial u}{\partial y}\\\frac{\partial v}{\partial x}&\frac{\partial v}{\partial y}\end{pmatrix}\tag{3}$$ For small vectors $$\mathbf h=\begin{pmatrix}h_1\\h_2\end{pmatrix}$$ you get $f\left(\begin{pmatrix}x\\y\end{pmatrix}+\mathbf h\right)\approx f\begin{pmatrix}x\\y\end{pmatrix}+Df\begin{pmatrix}x\\y\end{pmatrix}\mathbf h.$ In particular, $Df$ is in some sense the "best" matrix, $\mathbf A,$ for estimating $f(\mathbf v+\mathbf h)\approx f(\mathbf v)+\mathbf A\mathbf h.$ Now, these matrices are not complex numbers. But an interesting fact is that the set of matrices of the form: $$\begin{pmatrix}a&-b\\b&a\end{pmatrix}\tag{4}$$ forms a ring isomorphic to the ring of complex numbers. Specifically, the above matrix corresponds to $a+bi.$ We also have that: $$\begin{pmatrix}a&-b\\b&a\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=
\begin{pmatrix}ax-by\\bx+ay\end{pmatrix}$$ compare that with: $$(a+bi)(x+yi)=(ax-by)+(ay+bx)i.$$ So these matrices (4) act on $(x,y)^T$ the same way that $a+bi$ acts on $x+yi$ by multiplication. The Cauchy-Riemann equations (2) just mean that $Df\begin{pmatrix}x\\y\end{pmatrix}$ is an example of (4) - that is, when the Cauchy-Riemann equations are true for $u,v$ then the multi-variate derivative (3) can be thought of as a complex number. So we see that when we satisfy Cauchy-Riemann, $Df\begin{pmatrix}x\\y\end{pmatrix}\cdot\mathbf h$ can be seen as multiplication of complex numbers, $f'(z)$ and $h=h_1+h_2i.$ Then you have: $$f(z+h)\approx f(z)+f'(z)h.$$ where $f'(z)$ is not just the best estimating complex number for this approximation, but also $f'(z)$ is the best linear operation on $h$ for this estimation. So complex analysis takes the vector function $f$ and asks, "when does it make sense to think of the derivative of the $\mathbb R^2\to\mathbb R^2$ function as a complex number?" That is exactly when Cauchy-Riemann is true. In the general case $f:\mathbb R^2\to\mathbb R^2,$ we can't really take the second derivative and get an estimate $f(z+h)\approx f(z)+Df(z)\cdot h +\frac{1}{2}D^2f(z)\cdot h^2+\cdots.$ We can't get easy equivalents to power series approximations of $f.$ But when $Df$ satisfies Cauchy-Riemann, we can think of $Df$ as a complex-valued function. So complex analysis is a subset of the real analysis of functions $\mathbb R^2\to\mathbb R^2$ such that the derivative matrix $Df$ can be thought of as a complex number. This set of functions turns out to have a lot of seemingly magical properties. This complex differentiability turns out to be a fairly strong property of the functions we study. The niceness of the Cauchy-Riemann equations gives us some truly lovely results.
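As a concrete illustration, here is a small finite-difference sketch (the sample point and step size are arbitrary choices of mine) verifying (2) for the analytic function $f(z)=z^2$, where $u=x^2-y^2$ and $v=2xy$:

```python
# Finite-difference check of the Cauchy-Riemann equations for
# f(z) = z^2, i.e. u(x, y) = x^2 - y^2 and v(x, y) = 2xy.
def u(x, y): return x**2 - y**2
def v(x, y): return 2 * x * y

x0, y0, h = 0.7, -1.3, 1e-6
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
v_x = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
v_y = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

print(abs(u_x - v_y) < 1e-8, abs(v_x + u_y) < 1e-8)  # True True
```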
|
{
"source": [
"https://math.stackexchange.com/questions/3414219",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/716102/"
]
}
|
3,425,688 |
There's a trick I've been using to solve a common class of limit problems for a while now. I've never seen it taught in a textbook, but I once wrote out a few lines of work to justify it to myself in one of my notebooks. Here is a sample problem to illustrate my technique: $$\lim_{x\to\infty}\sqrt{x^2+x}-x=\lim_{x\to\infty}\sqrt{x^2+x+\frac14}-x=\lim_{x\to\infty}\left(x+\frac12\right)-x=\frac12$$ It's such a shortcut compared to rationalization or however you're "supposed" to solve that, and I'm quite certain that it's valid. But I'm starting to feel a little leery posting this as a solution to MSE problems since I don't quite remember the few lines of justification all those years ago. Could someone please provide a proof that $$\lim_{x\to\infty}\sqrt{x^2+2\alpha x}-\sqrt{x^2+2\alpha x+\alpha^2}=0$$ or whatever equivalent formulation you would prefer? I'm sure that delta-epsilon drudgery is not necessary at all. (If nobody gets to this by the end of the day, I'll self-answer just to have something to link to.) Thanks!
|
It need not be $\alpha^2$ . Adding any constant $\beta$ doesn't change the limit: $$\lim\limits_{x\to\infty}\sqrt{x^2+2\alpha x+\beta}-\sqrt{x^2+2\alpha x}=\lim\limits_{x\to\infty}\dfrac{\beta}{\sqrt{x^2+2\alpha x+\beta}+\sqrt{x^2+2\alpha x}}= 0$$
|
{
"source": [
"https://math.stackexchange.com/questions/3425688",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
3,427,744 |
I saw this today, I checked in Mathematica and the integral comes out to $\pi$ , but I have no idea how to solve it. FREE Wi-Fi: The Wi-Fi password is the first $10$ digits of the answer. $$\int_{-2}^2\left(x^3\cos\frac x2+\frac12\right)\sqrt{4-x^2}\ dx$$
|
The integrand is the sum of an odd and even function, and only the latter contributes, so it's $\int_0^2\sqrt{4-x^2}dx$ . This is a quarter of the area of a radius- $2$ circle, i.e. $\pi$ .
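A quick numerical confirmation (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (x**3 * np.cos(x / 2) + 0.5) * np.sqrt(4 - x**2)
value, abs_err = quad(f, -2, 2)
print(value)  # ~ 3.1415926535..., so the password is 3141592653
```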
|
{
"source": [
"https://math.stackexchange.com/questions/3427744",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/486986/"
]
}
|
3,427,749 |
Given $A= \begin{bmatrix} 0 &1&0&0\\0&0&1&0\\0&0&0&1\\1&0&0&0 \end{bmatrix}$ Is there any shortcut method to find the characteristic and minimal polynomials of $A$ ?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3427749",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/557708/"
]
}
|
3,433,841 |
When I talk about my research with non-mathematicians who are, however, interested in what I do, I always start by asking them basic questions about the primes. Usually, they start getting reeled in if I ask them if there's infinitely many or not, and often the conversation stays on this question. Almost everyone guesses there are infinitely many, although they "thin out", and seem to say it's "obvious": "you keep finding them never mind how far along you go" or "there are infinitely many numbers so there must also always be primes". When I say that's not really an argument, they may surrender the point, but I can see they're not super convinced either. What I would like is to present them with another sequence which also "thins out" but which is finite . Crucially, this sequence must also be intuitive enough that non-mathematicians (as in, people not familiar with our terminology) can grasp the concept in a casual conversation. Is there such a sequence?
|
An example would be the narcissistic numbers , which are the natural numbers whose decimal expansion can be written with $n$ digits and which are equal to the sum of the $n$ th powers of their digits. For instance, $153$ is a narcissistic number, since it has $3$ digits and $$153=1^3+5^3+3^3.$$ Of course, any natural number smaller than $10$ is a narcissistic number, but there are only $79$ more of them, the largest of which is $$115\,132\,219\,018\,763\,992\,565\,095\,597\,973\,971\,522\,401.$$
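A brute-force Python sketch confirming the small ones (searching the full range, which runs up to $39$ digits, this way is hopeless, of course):

```python
def is_narcissistic(m: int) -> bool:
    digits = [int(d) for d in str(m)]
    return m == sum(d ** len(digits) for d in digits)

print([m for m in range(1, 10**6) if is_narcissistic(m)])
# [1, 2, ..., 9, 153, 370, 371, 407, 1634, 8208, 9474,
#  54748, 92727, 93084, 548834]
```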
|
{
"source": [
"https://math.stackexchange.com/questions/3433841",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/653050/"
]
}
|
3,435,144 |
I'm currently relearning Taylor series and yesterday I thought about something that left me puzzled. As far as I understand, whenever you take the Taylor series of any function $f(x)$ around a point $x = a$ , the function is exactly equal to its Taylor series, that is: $$ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n $$ For example, if we take $f(x) = e^x$ and $x = 0$ , we obtain: $ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} $ My doubt is: the only variables in the Taylor series formula are $f(a), f'(a), f''(a),$ etc., that is, the successive derivatives of the function $f$ evaluated at one point $x = a$ . But the Taylor series of $f(x)$ determines the whole function! How is it possible that the successive derivatives of the function evaluated at a single point determine the whole function? Does this mean that if we know the values of $f^{(n)}(a)$ , then $f$ is uniquely determined? Is there an intuition as to why the successive derivatives of $f$ at a single point encode the necessary information to determine $f$ uniquely? Maybe I'm missing a key insight and all my reasoning is wrong; if so, please tell me where my mistake is. Thanks!
|
You're right, in general $f$ is not determined by its derivatives at one single point. Functions satisfying this condition are called analytic. But not all smooth functions are analytic, for example $$x\mapsto\left\{\begin{array}{c}e^{-\frac{1}{x^2}}, x>0\\0, x\leq 0\end{array}\right.$$ is a smooth function and the derivatives at zero are all zero, hence the Taylor series developed at zero does not determine the function. Furthermore the exact statement of Taylor's theorem is quite different from what you said. It is as follows: If $f\in C^{k+1}(\mathbb{R})$ , then $$f(x)=\sum_{n=0}^k f^{(n)}(a)(x-a)^n\frac{1}{n!} + f^{(k+1)}(\xi)\frac{1}{(k+1)!}(x-a)^{k+1}$$ for some $\xi$ between $a$ and $x$ . If you now take $k\rightarrow\infty$ it is in general not clear that this error term converges to zero.
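One can even see this numerically; here is a little Python sketch of the example above (every Taylor polynomial at $0$ is identically zero, yet $f$ is visibly nonzero for $x>0$):

```python
import math

def f(x: float) -> float:
    # Smooth but not analytic at 0: all derivatives vanish there.
    return math.exp(-1 / x**2) if x > 0 else 0.0

for x in (0.5, 0.2, 0.1):
    print(x, f(x))  # nonzero, while the zero Taylor series predicts 0
```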
|
{
"source": [
"https://math.stackexchange.com/questions/3435144",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/722891/"
]
}
|
3,436,145 |
To reiterate the question, basically there is some number $n$ that is divisible by all the integers $1, \dots, 200$ , except for two consecutive numbers in that range. The goal is to find what those two consecutive integers are. The answer isn't trivial though: since $n$ needs to be divisible by all those numbers, it is difficult to find two numbers next to each other such that the multiples of those numbers aren't less than $200$ and such that those cannot be prime factorized into numbers that are in the prime factorization of $n$ . I have tried doing this computationally, but the LCM of all the numbers in the range (less two of them) is ginormous and checking the divisibility condition doesn't seem to work on my computer. The problem would be simple if the two numbers didn't have to be consecutive, since we could just select two prime numbers, but since one must be even, this is not possible. I am trying to think of properties of divisibility that could help, but haven't found anything that worked yet. For example, I was looking for a prime such that the number before or after it is the square of a prime number. This way, we could say that the prime number itself is omitted from $n$ and that there is only one factor of the square root of the other number in $n$ . I am not sure if that would definitely work, but regardless I couldn't find those numbers. I tried another perfect square and a prime number, $196$ and $197$ , but there must be enough factors to make two $14$ s in $n$ , so that doesn't work either. I am not experienced at all in number theory or discrete math, this is just a brainteaser I have heard. (Also for reference, I do not know the answer to reverse engineer something from). Any help would be appreciated! Thanks!
|
Excellent question! The answer is $127$ and $128$ ... but why? If you wanted to find a number divisible by $1,2,3,4$ you might first multiply these numbers and say $24$ . However, you soon realize $4$ is already a multiple of $2$ ; you can use just $3\times4$ to get $12$ . Therefore, you need only multiply the largest powers of the primes that occur among the integers from $2$ to $200$ to get a number that is divisible by all of the integers from $1$ to $200$ . If you do this, you will find the number is $2^7\cdot3^4\cdot5^3\cdot7^2\cdot11^2\cdot13^2\cdot17\cdot19\cdot23\cdot29\cdot\ldots$ (the rest of the primes up to $199$ ) = a very large number. Next we need to find a restriction to eliminate two consecutive numbers. One of the two numbers must be even. The only way to remove an even number from the above calculation without modifying any of the other primes is to reduce the power of $2^7$ to $2^6$ ; this removes the number $128$ from the list. Since $127$ is also a prime number, it can also be removed from the list without affecting any of the other primes in the list... I hope this helps.
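A Python sketch of this argument (assuming Python 3.9+ for `math.lcm`): divide out one factor of $2$ (so $2^7$ drops to $2^6$, losing $128$) and the single factor of $127$:

```python
from math import lcm

n = lcm(*range(1, 201)) // (2 * 127)
print([m for m in range(1, 201) if n % m != 0])  # [127, 128]
```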
|
{
"source": [
"https://math.stackexchange.com/questions/3436145",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/463358/"
]
}
|
3,438,635 |
Removing finitely many points from an open set in $\mathbb{R}^n$ gives an open set. Is this true in general for any space? My intuition is that this is the case, however, how does one (dis)prove this? The only idea that comes to mind is that, since singletons are closed and unions of closed sets are closed, a union of singletons is closed. Now, let $U$ be an open set. Then the complement of $U$ is closed, and any point $x$ removed from $U$ is a singleton that is "unioned" with the closed complement of $U$ to form a bigger closed set $C$ . $C$ is the complement of $U\setminus\{x\},$ so $U\setminus\{x\}$ is open. One can thus repeat this argument inductively for any finite number of removed points. Does this make sense?
|
If you allow arbitrary topologies ("any general space"), then the answer is no . For any set $M$ with at least two elements you can define a topology $\mathcal T=\{\emptyset, M\}$ (i.e. only the empty set and $M$ are open sets in $M$ ). Then, by removing one point from $M$ , you get a set which is not open. Edit: Let me add the following: Proposition. Let $(M,\mathcal T)$ be a topological space. Then the following two statements are equivalent: For every $x\in M$ , $\{x\}$ is closed (i.e. $M\setminus\{x\}\in\mathcal T$ ), For every open set $O\in \mathcal T$ and point $x\in M$ , we have that $O\setminus\{x\}$ is also open. Proof. 1. $\implies$ 2.: Suppose that 1. is true and let $O\in \mathcal T$ , $x\in M$ . Then $M\setminus\{x\}$ is open and hence $$O\setminus\{x\}=O\cap\big(M\setminus\{x\}\big)\in\mathcal T.$$ 2. $\implies$ 1.: Suppose that 2. is true and let $x\in M$ . Then, since $M$ is open, $M\setminus\{x\}$ is open too, so $\{x\}$ is closed. $\square$
|
{
"source": [
"https://math.stackexchange.com/questions/3438635",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/436899/"
]
}
|
3,440,647 |
Start with any positive fraction $\frac{a}{b}$ .
First add the denominator to the numerator: $$\frac{a}{b} \rightarrow \frac{a+b}{b}$$ Then add the (new) numerator to the denominator: $$\frac{a+b}{b} \rightarrow \frac{a+b}{a+2b}$$ So $\frac{2}{5} \rightarrow \frac{7}{5} \rightarrow \frac{7}{12}$ . Repeating this process appears to map every fraction to $\phi$ and $\frac{1}{\phi}$ : $$
\begin{array}{ccccccccccc}
\frac{2}{5} & \frac{7}{5} & \frac{7}{12} & \frac{19}{12} & \frac{19}{31} & \frac{50}{31} & \frac{50}{81} & \frac{131}{81} & \frac{131}{212} & \frac{343}{212} & \frac{343}{555} \\
0.4 & 1.40 & 0.583 & 1.58 & 0.613 & 1.61 & 0.617 & 1.62 & 0.618 & 1.62 & 0.618 \\
\end{array}
$$ Another example: $$
\begin{array}{ccccccccccc}
\frac{11}{7} & \frac{18}{7} & \frac{18}{25} & \frac{43}{25} & \frac{43}{68} & \frac{111}{68} & \frac{111}{179} & \frac{290}{179} & \frac{290}{469} & \frac{759}{469} & \frac{759}{1228} \\
1.57143 & 2.57 & 0.720 & 1.72 & 0.632 & 1.63 & 0.620 & 1.62 & 0.618 & 1.62 & 0.618 \\
\end{array}
$$ Q . Why?
|
Instead of representing $\frac{a}{b}$ as a fraction, represent it as the vector $\left( \begin{array}{c} a \\ b \end{array} \right)$ . Then, all you are doing to generate your sequence is repeatedly multiplying by the matrix $\left( \begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array} \right)$ . One of the eigenvectors of this matrix is $\left( \begin{array}{c} \frac{\sqrt{5}-1}{2} \\ 1 \end{array} \right)$ , which has a slope equal to the "golden ratio". This is a standard example of a linear discrete dynamical system, and asymptotic convergence to an eigenvector is one of the typical things that can happen. You can also guess at the long-term behavior of the system by looking at its vector field. https://kevinmehall.net/p/equationexplorer/#%5B-100,100,-100,100%5D&v%7C(x+y)i+(x+2y)j%7C0.1 In this case you see everything that starts in the first quadrant diverges to infinity along the path of the eigenvector I mentioned before. For your sequence, you started at $\left( \begin{array}{c} 2 \\ 5 \end{array} \right)$ , which lies in the first quadrant. Side note: There is nothing particularly special about the golden ratio, the matrix above, or the starting point of $\left( \begin{array}{c} 2 \\ 5 \end{array} \right)$ for this sequence. You can change the starting point to be in the third quadrant (both coordinates negative) if you want to diverge in the opposite direction, and you can change the matrix if you want to diverge along a differently sloped eigenvector.
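A short Python sketch of the iteration (each multiplication by the matrix performs one full numerator-then-denominator step):

```python
import numpy as np

A = np.array([[1, 1],
              [1, 2]])
v = np.array([2.0, 5.0])  # the starting fraction 2/5

for _ in range(8):
    v = A @ v
    print(v[0] / v[1])  # approaches (sqrt(5)-1)/2 ~ 0.6180339887
```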
|
{
"source": [
"https://math.stackexchange.com/questions/3440647",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/237/"
]
}
|
3,440,648 |
Prove that $$ \lim_{x \to 0} \frac{ \sqrt{x + \sqrt{ x + \sqrt{x}}} }{\sqrt[8]{x}}= 1$$ Let $x=t^8$ with $t>0$ . Then we get $$
\lim \frac{ \sqrt { t^8 + \sqrt{ t^8 + \sqrt{t^8}}}}{ t} = \lim \frac{ \sqrt{ t^8 + \sqrt{ t^8 + t^4}}}{ t}
$$ and I can't go further than that.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3440648",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
3,444,198 |
I am beginning real analysis and got stuck on the first page (Peano Postulates). It reads as follows, at least in my textbook. Axiom 1.2.1 (Peano Postulates) . There exists a set $\Bbb N$ with an element $1 \in \Bbb N$ and a function $s:\Bbb N \to \Bbb N$ that satisfies the following three properties. a. There is no $n \in \Bbb N$ such that $s(n) = 1$ . b. The function $s$ is injective. c. Let $G \subseteq \Bbb N$ be a set. Suppose that $1 \in G$ , and that $g \in G \Rightarrow s(g) \in G$ . Then $G = \Bbb N$ . Definition 1.2.2. The set of natural numbers, denoted $\Bbb N$ , is the set the existence of which is given in the Peano Postulates. My question is: From my understanding of the postulates, we could construct an infinite set which satisfies the three properties. For example, the odd numbers $\{1,3,5,7, \ldots \}$ , or the powers of 5 $\{1,5,25,125, \ldots \}$ , could be constructed (with a different $s(n)$ , of course, since $s(n)$ is not defined in the postulates anyway). How do these properties uniquely identify the set of the natural numbers?
|
Yes, you can find other sets on which a successor function is defined that satisfies all the Peano axioms. What makes the natural numbers unique is that you can use the Peano postulates to prove that when you have two such sets you can build a bijection between them that maps one successor function to the other. That means the sets are really "the same" - the elements just have different names. So you might as well use the traditional names $ 1, 2, 3,\ldots$ .
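A tiny illustration of this uniqueness-up-to-isomorphism in Python, using the two alternative systems from the question (the odds under $n\mapsto n+2$ and the powers of $5$ under $n\mapsto 5n$); matching up iterates of the two successor functions builds the bijection, which is exactly what the uniqueness proof does:
def iterates(one, successor, count):
    # list the first `count` elements generated from `one` by the successor
    out, n = [], one
    for _ in range(count):
        out.append(n)
        n = successor(n)
    return out

odds    = iterates(1, lambda n: n + 2, 6)   # [1, 3, 5, 7, 9, 11]
powers5 = iterates(1, lambda n: 5 * n, 6)   # [1, 5, 25, 125, 625, 3125]
iso = dict(zip(odds, powers5))              # the (partial) bijection
# it maps one successor function to the other: iso[n + 2] == 5 * iso[n]
assert all(iso[n + 2] == 5 * iso[n] for n in odds[:-1])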
|
{
"source": [
"https://math.stackexchange.com/questions/3444198",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/718315/"
]
}
|
3,447,575 |
I am teaching the foundations of trig and find it a bit weird that tangent is called that. I've never questioned it before, but what I keep finding online is the phrase 'tangent of an angle'. Is anyone able to explain, maybe using some visual intuition, why we call it the tangent of an angle? Especially in the context of the unit circle. Does it relate to the definition of a tangent to a curve?
|
I can't say for certain that this is the origin of the name of the function. But if you construct a line that is tangent to the unit circle, using either of two standard constructions (shown as figures in the original answer), then the red segment has a measure of $\tan x$ . For instance, the tangent line to the circle at $(1,0)$ meets the line through the origin at angle $x$ at the point $(1,\tan x)$ , so for an acute angle $x$ the segment of that tangent line from $(1,0)$ to the intersection has length $\tan x$ . In the second figure, the green segment has measure $\cot x$ .
|
{
"source": [
"https://math.stackexchange.com/questions/3447575",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/674589/"
]
}
|
3,449,350 |
I need to justify the following equation is true: $$
\begin{vmatrix}
a_1+b_1x & a_1x+b_1 & c_1 \\
a_2+b_2x & a_2x+b_2 & c_2 \\
a_3+b_3x & a_3x+b_3 & c_3 \\
\end{vmatrix} = (1-x^2)\cdot\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3 \\
\end{vmatrix}
$$ I tried splitting the determinant of the first matrix into the sum of two, so the first would not have $b$'s and the second wouldn't have $a$'s. Then I'd multiply by $\frac 1x$ in the first column of the second matrix and the first column of the second, so I'd have $x^2$ times the sum of the determinants of the two matrices. I could then subtract column 1 from column 2 in both matrices, and we'd have a column of zeros in both, hence the determinant is zero in both, and times $x^2$ it would still be zero, so I didn't prove anything. What did I do wrong?
|
\begin{align}
&\phantom {=}\,\ \begin{vmatrix}
a_1+b_1x & a_1x+b_1 & c_1 \\
a_2+b_2x & a_2x+b_2 & c_2 \\
a_3+b_3x & a_3x+b_3 & c_3
\end{vmatrix} \\
&=
\begin{vmatrix}
a_1 & a_1x+b_1 & c_1 \\
a_2 & a_2x+b_2 & c_2 \\
a_3 & a_3x+b_3 & c_3
\end{vmatrix}
+ \begin{vmatrix}
b_1x & a_1x+b_1 & c_1 \\
b_2x & a_2x+b_2 & c_2 \\
b_3x & a_3x+b_3 & c_3
\end{vmatrix} \\&= \begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix} + x \begin{vmatrix}
b_1 & a_1x & c_1 \\
b_2 & a_2x & c_2 \\
b_3 & a_3x & c_3
\end{vmatrix} \\&= \begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix} + x^2 \begin{vmatrix}
b_1 & a_1 & c_1 \\
b_2 & a_2 & c_2 \\
b_3 & a_3 & c_3
\end{vmatrix} \\&= 1\cdot \begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix} + (-1) x^2 \begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix} \\&=
(1-x^2)\cdot\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3 \\
\end{vmatrix}.
\end{align}
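For readers who want to check the identity mechanically, a short symbolic verification in Python with sympy:
import sympy as sp

a1, a2, a3, b1, b2, b3, c1, c2, c3, x = sp.symbols('a1 a2 a3 b1 b2 b3 c1 c2 c3 x')
M = sp.Matrix([[a1 + b1*x, a1*x + b1, c1],
               [a2 + b2*x, a2*x + b2, c2],
               [a3 + b3*x, a3*x + b3, c3]])
N = sp.Matrix([[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]])
print(sp.simplify(M.det() - (1 - x**2) * N.det()))   # 0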
|
{
"source": [
"https://math.stackexchange.com/questions/3449350",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/717967/"
]
}
|
3,450,380 |
I am currently learning matlab and linear algebra side by side and I stumbled upon this example from mathworks A = [1 2 0; 0 4 3];
b = [8; 18];
x = A\b
x = 3×1
0
4.0000
0.6667
which in my mind translates to $$
A = \left[
\begin{matrix}
1 & 2 & 0 \\
0 & 4 & 3
\end{matrix}\right]
B = \left[
\begin{matrix}
8 \\ 18
\end{matrix}\right]
x = \left[
\begin{matrix}
a \\ b \\ c
\end{matrix}\right]
$$ $$
Ax = \left[
\begin{matrix}
1 & 2 & 0 \\
0 & 4 & 3
\end{matrix}\right] \times \left[
\begin{matrix}
a \\ b \\ c
\end{matrix}\right]
= \left[
\begin{matrix}
a + 2b \\ 4b + 3c
\end{matrix}\right]
$$ which boils down to $$
\left[
\begin{matrix}
a + 2b \\ 4b + 3c
\end{matrix}\right] = \left[
\begin{matrix}
8\\ 18
\end{matrix}\right] \Rightarrow \begin{matrix}a + 2b = 8 \\4b + 3c = 18\end{matrix}
$$ which is a system of two equations in three unknowns ($a$, $b$ and $c$), which I thought was impossible! Yet there is a solution $$
x = \left[
\begin{matrix}
0 \\ 4 \\ 2/3
\end{matrix}\right]
$$ How can I solve an impossible system (three unknowns and two equations) using linear algebra?
|
If there are fewer equations than unknowns, usually there are many solutions. It is not impossible, but indeterminate. An extreme example is this: one unknown, but no equation!
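A short sketch of that indeterminacy in Python with numpy (the three values of $b$ are arbitrary choices): from $a + 2b = 8$ and $4b + 3c = 18$, pick $b$ freely and solve for the other two unknowns. MATLAB's backslash simply returns one particular member of this family; $b = 4$ reproduces the answer MATLAB printed.
import numpy as np

A = np.array([[1., 2., 0.], [0., 4., 3.]])
for b in [0.0, 1.0, 4.0]:
    x = np.array([8 - 2 * b, b, (18 - 4 * b) / 3])
    print(x, A @ x)   # A @ x is [8, 18] for every choice of b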
|
{
"source": [
"https://math.stackexchange.com/questions/3450380",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/728831/"
]
}
|
3,453,270 |
From Linear map , the sixth example : The translation $x \rightarrow x+1$ is not a linear transformation. Why? What about $x \rightarrow x +dx$ ? Is this translation a linear transformation? Does it matter if the transformation is not linear?
|
OP's transformations are affine transformations . Whether they are called linear transformations depends on context and conventions. Within the context of linear algebra , a linear transformation maps the zero vector into the zero vector. Then OP's transformations are generically not linear. In other contexts/conventions, linear & affine transformations are the same thing.
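A two-line sanity check in Python of the linear-algebra convention (the sample inputs are arbitrary):
T = lambda x: x + 1
print(T(0))                    # 1, but a linear map must send 0 to 0
print(T(2 + 3), T(2) + T(3))   # 6 vs 7, so additivity fails too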
|
{
"source": [
"https://math.stackexchange.com/questions/3453270",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/715050/"
]
}
|
3,453,274 |
If we consider $\vec{E}$ the electric field in $\mathbb{R}^n$ and we have: $$\int_{R}^{\infty}\vec{E}(\vec{r})\,d\vec{r}=0$$ where $R$ is in $\mathbb{R}^n$ and $\vec{r}$ is in $\mathbb{R}^n$ , with $$\vec{E}(\vec{r})=\frac{1}{4\pi \epsilon_0 \mid \vec{r}\mid^2}\vec{e_r},$$ then do we have that $\vec{E}=0$ everywhere if we make the assumption that $E(\vec{r})=E(r)$ ?
Thanks
|
|
{
"source": [
"https://math.stackexchange.com/questions/3453274",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/666401/"
]
}
|
3,455,580 |
I had just learned in measure theory class about the monotone convergence theorem in this version: For every monotonically increasing sequence of functions $f_n$ from measurable space $X$ to $[0, \infty]$ , $$
\text{if}\quad
\lim_{n\to \infty}f_n = f,
\quad\text{then}\quad
\lim_{n\to \infty}\int f_n \, \mathrm{d}\mu = \int f \,\mathrm{d}\mu
.
$$ I tried to find out why this theorem applies only to the Lebesgue integral, but I couldn't find a counterexample for Riemann integrals, so I would appreciate your help. (I guess that $f$ might not be Riemann integrable in some cases, but I want a concrete example.)
|
Riemann integrable functions (on a compact interval) are also Lebesgue integrable and the two integrals coincide. So the theorem is surely valid for Riemann integrals also. However the pointwise increasing limit of a sequence of Riemann integrable functions need not be Riemann integrable. Let $(r_n)$ be an enumeration of the rationals in $[0,1]$ , and let $f_n$ be as follows: $$f_n(x) =
\begin{cases}
1 & \text{if $x \in \{ r_0, r_1, \dots, r_{n-1} \}$} \\
0 & \text{if $x \in \{ r_n, r_{n+1}, \dots \}$} \\
0 & \text{if $x$ is irrational} \\
\end{cases}$$ Then the limit function is nowhere continuous, hence not Riemann integrable.
|
{
"source": [
"https://math.stackexchange.com/questions/3455580",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/730086/"
]
}
|
3,456,402 |
Consider a divergent series that tends to infinity such as $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$ . The limit of this series is unbounded, and I have often seen people say that the sum 'equals infinity' as a shorthand for this. However, is it acceptable to write $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots = \infty$ in formal mathematics, or is it better to denote that the limit is equal to infinity? If so, how does one do this?
|
Yes - it is both very common and entirely correct to do so. There is a bit of formal trickery here because $\infty$ is not a number, but you can do analysis with it anyways - meaning limits and that sort of thing. In particular, there is a set called the affinely extended reals which is basically the real numbers $\mathbb R$ along with two new objects $\infty$ and $-\infty$ , one at each 'end'. This is a topological space , meaning that you can take limits in it, but be careful that some things like $∞-∞$ , $0·∞$ , $0/0$ and $∞/∞$ are undefined. Consider that, for real numbers $x$ , the definition of a sequence $s_n$ converging to $x$ is as follows: For any $\varepsilon >0$ , there exists some $N$ such that if $n>N$ then $|s_n-x|<\varepsilon$ . This can be rewritten as saying: For any open interval $I$ containing $x$ , there exists some $N$ such that for all $n > N$ we have $s_n\in I$ . The idea behind either of these definitions is that if we choose some "neighborhood" of $x$ - consisting of $x$ and at least some positive radius around $x$ - the sequence eventually is constrained in that neighborhood. More formally, a neighborhood of a real number is any set $S$ containing an open interval around $x$ . Then, you can define convergence to $x$ as follows: For any neighborhood $I$ of $x$ , there exists some $N$ such that for all $n >N $ we have $s_n\in I$ . To define limits to $\infty$ and $-\infty$ , one just needs to define their neighborhoods. In particular, $\infty$ is meant to be the "upper end" of the real line - and being close to $\infty$ means that a number is very large. So one defines a neighborhood of $\infty$ to be any set $I$ containing an interval of the form $(C,\infty]$ for some $C\in\mathbb R$ . Then, we say $\lim_{n\rightarrow\infty} s_n = \infty$ if for every neighborhood $I$ of $\infty$ , there exists some $N$ such that if $n>N$ then $s_n\in I$ . This is equivalent to saying that $s_n$ converges to $\infty$ if, for every $C$ , there exists an $N$ such that if $n>N$ then $s_n > C$ - which is the usual definition you find in textbooks (but note that it is actually a theorem - a consequence of the definition of $\infty$ !) - and that in any context that you might allow a statement like $\lim_{n\rightarrow\infty}s_n = \infty$ , you might as well be working in the extended reals. Then, since infinite sums are just limits of partial sums, it is perfectly rigorous to write $$\sum_{n=1}^{\infty}\frac{1}{n} = \infty$$ and to know that this truly means that the left hand side evaluates to $\infty$ , not to think that this is some special statement where equality is not equality. This is actually very common in real analysis (the branch of mathematics dealing with limits, continuity, differentiability, and all that stuff) - especially in subfields like measure theory and sometimes in the theory of metric spaces as well. However, it is also important to know that many people do not share the view that $\infty$ is always a perfectly valid object, defined by its neighborhoods. So, even though you would technically be right to write such an equality, it might not go over well with your audience nonetheless - and you should keep your audience in mind whenever you write anything because "formal correctness" is no substitute for "understood by your audience" - and you will often encounter examples of things which are technically correct, but might confuse or annoy your audience nonetheless.
(Sidenote: The limit $n\rightarrow\infty$ in the subscript $\lim_{n\rightarrow\infty}s_n$ , as you might notice, is also defined by the neighborhoods: we get to restrict $s_n$ by forcing $n$ to lie in some neighborhood of $\infty$ that we get to choose - which is what's going on when we say that there's some $N$ so that if $n>N$ , blah blah blah)
|
{
"source": [
"https://math.stackexchange.com/questions/3456402",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/623665/"
]
}
|
3,456,667 |
There must be an error in my "proof" since it is evident that the sum of two irrational numbers may be rational, but I am struggling to spot it. A hint would be appreciated. The "proof" is by contradiction: Assume that the sum of two irrational numbers $a$ and $b$ is rational.
Then we can write $$ a + b = \frac{x}{y} $$ $$ \implies a + b + a - a = \frac{x}{y} $$ $$ \implies 2a + (b - a) = \frac{x}{y} $$ $$ \implies 2a = \frac{x}{y} + (-1)(b + (-1)(a)) $$ From our assumption that the sum of two irrational numbers is rational, it follows that $(b + (-1)(a))$ is rational. Therefore, the right side is rational, being the sum of two rational numbers. But the left side, $2a$ , is irrational, because the product of a nonzero rational and an irrational number is irrational. This is a contradiction; since assuming that the sum of two irrational numbers is rational leads to a contradiction, the sum of two irrational numbers must be irrational.
|
To say that it is not true that all swans are white does not mean that all swans are non-white; it only means that at least one swan is non-white. Similarly, to say that it is not true that every sum of two irrational numbers is irrational does not mean that every sum of two irrational numbers is rational; it only means that at least one sum of two irrational numbers is rational. You start by assuming, not that the sum of (every) two irrational numbers is rational, but rather that the sum of two irrational numbers $a$ and $b$ is rational, i.e. that there is one instance of two irrational numbers whose sum is rational. That assumption is true. For example: If $a=\pi$ and $b=4-\pi,$ then the sum of the two irrational numbers $a$ and $b$ is the rational number $4.$ And the sum of the two irrational numbers $a$ and $-b$ is the irrational number $2\pi-4.$ The fact that the sum of two irrational numbers $a$ and $b$ is rational does not mean that the sum of the two irrational numbers $a$ and $-b$ is rational, nor that any other sum of two irrational numbers is rational.
|
{
"source": [
"https://math.stackexchange.com/questions/3456667",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/574302/"
]
}
|
3,456,797 |
The professor teaching a class I am taking wants me to find the eigenvalues and the eigenvectors of the matrix below. $$\begin{bmatrix}-5 & 5\\4 & 3\end{bmatrix}$$ I have succeeded in getting the eigenvalues, which are $\lambda= \{ 5,-7 \}$ .
When finding the eigenvector for $\lambda= 5$ , I get $\begin{bmatrix}1/2\\1 \end{bmatrix}$ . However, the correct answer is $\begin{bmatrix}1\\2 \end{bmatrix}$ . I have tried doing this question using multiple online matrix calculators. One of them gives me $\begin{bmatrix}1/2\\1 \end{bmatrix}$ , and the other gives me $\begin{bmatrix}1\\2 \end{bmatrix}$ . The online calculator that gave me $\begin{bmatrix}1\\2 \end{bmatrix}$ explains that $y=2$ , hence $\begin{bmatrix}1/2·2\\1·2 \end{bmatrix} = \begin{bmatrix}1\\2 \end{bmatrix}$ . What I do not understand is: why must $y$ equal $2$ ? Is it because there cannot be a fraction in an eigenvector?
|
By definition, an eigenvalue $\lambda$ and one corresponding eigenvector $v$ must satisfy the following equation: $$Av = \lambda v.$$ Now, consider the vector $$w = \alpha v,$$ where $\alpha \neq 0$ . Then, notice that: $$Aw = A(\alpha v) = \alpha (Av) = \alpha (\lambda v) = \lambda (\alpha v) =\lambda w.$$ Therefore: $$Aw = \lambda w,$$ and hence $w$ is another eigenvector associated to the eigenvalue $\lambda$ . In general, it is not true that there is only one eigenvector associated to the eigenvalue $\lambda$ . Instead, there is a linear subspace , also known as the eigenspace associated to $\lambda$ . In other words, there are infinitely many eigenvectors to $\lambda$ , which belong to a certain eigenspace. Given one eigenvector (say $v$ ), then all the multiples of $v$ except for $0$ (i.e. $w = \alpha v$ with $\alpha \neq 0$ ) are also eigenvectors.
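A quick numerical confirmation in Python with numpy, using the matrix from the question:
import numpy as np

A = np.array([[-5., 5.], [4., 3.]])
for v in (np.array([0.5, 1.0]), np.array([1.0, 2.0])):
    print(A @ v, 5 * v)   # both vectors satisfy A v = 5 v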
|
{
"source": [
"https://math.stackexchange.com/questions/3456797",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/724505/"
]
}
|
3,456,822 |
Suppose I have a function $f(x)=x$ And I can rewrite the function as follows : $$x=e^{\ln{x}}$$ Rewrite it again in Maclaurin series form: $$e^{\ln{x}}=\sum_{n=0}^{\infty} \frac{\ln^n{x}}{n!}$$ So we have $$x=\sum_{n=0}^{\infty} \frac{\ln^n{x}}{n!}$$ When I plug $x=1$ into the equation, then: $$\begin{align}
1&=\sum_{n=0}^{\infty} \frac{\ln^n{1}}{n!}\\
&=\sum_{n=0}^{\infty} \frac{(\ln{1})^n}{n!}\\
&=\sum_{n=0}^{\infty} \frac{0^n}{n!}\\
&=\sum_{n=0}^{\infty} \frac{0}{n!}\\
&=\sum_{n=0}^{\infty} 0\\
&=0\\
\\
\therefore 1&=0
\end{align}$$ Why can this happen? Please tell me. Thanks.
|
The mistake is in the step $$\sum_{n=0}^{\infty} \frac{0^n}{n!}=\sum_{n=0}^{\infty} \frac{0}{n!}.$$ In the context of power series the convention is $0^0=1$ (this is exactly what makes $e^y=\sum_{n\ge 0}y^n/n!$ work at $y=0$), so the $n=0$ term equals $\frac{0^0}{0!}=1$ , while every term with $n\ge 1$ vanishes. Keeping that term, $$\sum_{n=0}^{\infty} \frac{\ln^n 1}{n!}=1+0+0+\cdots=1,$$ which is consistent with $x=1$ , and there is no contradiction.
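Incidentally, Python happens to follow the same convention (0.0**0 evaluates to 1.0), so summing the series at $x=1$ term by term returns $1$:
from math import factorial, log
print(sum(log(1)**n / factorial(n) for n in range(20)))   # 1.0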
|
{
"source": [
"https://math.stackexchange.com/questions/3456822",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/664855/"
]
}
|
3,466,870 |
Suppose $$a^2 = \sum_{i=1}^k b_i^2$$ where $a, b_i \in \mathbb{Z}$ , $a>0, b_i > 0$ (and $b_i$ are not necessarily distinct). Can any positive integer be the value of $k$ ? The reason I am interested in this: in an irreptile tiling where the smallest piece has area $A$ , we have $a^2A = \sum_{i=1}^k b_i^2A$ , where we have $k$ pieces scaled by $b_i$ to tile the big figure, which is scaled by $a$ . I am wondering what constraints there are on the number of pieces. Here is an example tiling that realizes $4^2 = 3^2 + 7 \cdot 1^2$ , so $k = 8$ .
|
This holds far more generally. OP is the special case $S$ = integer squares, which is closed under multiplication $\,a^2 b^2 = (ab)^2,\,$ and has an element that is a sum of $\,2\,$ others, e.g. $\,5^2 = 4^2+3^2 $ . Theorem $ $ If $\,S\,$ is a set of integers $\rm\color{#0a0}{closed}$ under multiplication then $\qquad\qquad \begin{align}\phantom{|^{|^|}}\forall\,n\ge 2\!:\text{ there is a }\,t_n\in S\,&\ \text{that is a sum of $\,n\,$ elements of $\,S$ }\\[.1em]
\iff\! \text{ there is a }\,t_2\in S\,&\ \text{that is a sum of $\,2\,$ elements of $\,S\!$}\\
\end{align}$ Proof $\ \ (\Rightarrow)\ $ Clear. $\ (\Leftarrow)\ $ We induct on $n$ . The base case $\,n = 2\,$ is true by hypothesis, i.e. we are given that $\,\color{#c00}{a = b + c}\,$ for some $\,a,b,c\in S.\,$ If the statement is true for $\,n\,$ elements then $$\begin{align}
s_0 &\,=\, s_1\ \ +\ s_2\ + \cdots +s_n,\ \ \ \ \,{\rm all}\ \ s_i\,\in\, S\\
\Rightarrow\ s_0a &\,=\, s_1 a + s_2 a + \cdots +s_n\color{#c00} a,\ \ \color{#0a0}{\rm all}\ \ s_ia\in S \\[.1em]
&\,=\, s_1 a + s_2 a + \cdots + s_n \color{#c00}b + s_n \color{#c00}c \end{align}\qquad\qquad$$ so $\,s_0 a\in S\,$ is a sum of $\,n\!+\!1\,$ elements of $S$ , completing the induction. Remark $ $ A comment asks for further examples. Let's consider some "minimal" examples. $S$ contains $\,a,b,c\,$ with $\,a = b + c\:$ so - being closed under multiplication - $\,S\,$ contains all products $\,a^i b^j c^k\ne 1$ . But these products are already closed under multiplication so we can take $S$ to be the set of all such products. Let's examine how the above inductive proof works in this set. $$\begin{align} \color{#c00}a &= b + c\\
\smash{\overset{\times\ a}\Longrightarrow}\qquad\qquad\, a^2 = ab+\color{#c00}ac\, &= b(a+c)+c^2\ \ \ {\rm by\ substituting}\,\ \color{#c00}a = b+c\\
\smash{\overset{\times\ a}\Longrightarrow}\ \ a^3 = b(a^2+ac)+ \color{#c00}ac^2 &= b(a^2+ac+c^2)+c^3 \\[.4em]
{a^{n}} &\ \smash{\overset{\vdots_{\phantom{|^|}\!\!}}= \color{#0a0}b (a^{n-1} + \cdots + c^{n-1}) + c^{n}\ \ \text{ [sum of $\,n\!+\!1\,$ terms]}}\\[.2em]
{\rm by}\ \ \ a^{n}-c^{n} &= (\color{#0a0}{a\!-\!c}) (a^{n-1} + \cdots + c^{n-1})\ \ \ {\rm by}\ \ \color{#0a0}{b = a\!-\!c} \end{align}\quad\ \ \ $$ So the proof's inductive construction of an element that is a sum of $n+1$ terms boils down here to writing $\,a^n\,$ that way using the above well known factorization of $\,a^n-c^n\,$ via the Factor Theorem. By specializing $\,a,b,c\,$ one obtains many examples, e.g. using $\,5^2,4^2,3^2$ as in the OP then $S$ is set of squares composed only of those factors, and the $\,n\!+\!1\,$ element sum constructed is $$ 25^n =\, 9^n + 16(25^{n-1}+ \cdots + 9^{n-1})$$
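Here is a small Python sketch of the proof's construction, starting from $5^2=4^2+3^2$ , i.e. $a=25$ , $b=16$ , $c=9$ in the set $S$ of squares (the cutoff of $8$ terms is an arbitrary choice):
from math import isqrt

a, b, c = 25, 16, 9      # 25 = 16 + 9, all perfect squares
terms = [b, c]           # a sum of 2 elements of S
for n in range(3, 8):
    # multiply the current representation by a, then split the last term via a = b + c
    terms = [t * a for t in terms[:-1]] + [terms[-1] * b, terms[-1] * c]
    assert sum(terms) == a ** (n - 1) and len(terms) == n
    assert all(isqrt(t) ** 2 == t for t in terms)   # every term is still a square
    print(a ** (n - 1), "= sum of", n, "squares:", terms)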
|
{
"source": [
"https://math.stackexchange.com/questions/3466870",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/67933/"
]
}
|
3,466,976 |
There's: https://www.cs.kent.ac.uk/people/staff/sjt/TTFP/ttfp.pdf And a few other books, but they don't detail how one goes about implementing, say, another Lean, Coq, or Isabelle. How did the original authors of those systems figure out how to implement things? Where's that book? Also, there's all of these: http://freecomputerbooks.com/mathLogicBooks.html But you can't expect me to search through every book in search of how to implement things. So I was wondering if anyone here already knows what book to pick.
|
There is no such book. If someone wants to implement a proof assistant based on type theory now, they can look at a) tutorial implementations b) papers c) source code of existing systems. The issue with a) is that it only covers a small fraction of real-world functionality and commonly presents naive solutions which don't scale and differ greatly from real-world implementation. The issue with b) is that it's scattered, contains lots of outdated and irrelevant information, requires expertise to navigate, and it's also incomplete with respect to real-world systems. The issue with c) is that it's even harder to navigate and contains many legacy hacks and suboptimal solutions. It is also the case that the core implementations of the real-world systems are very different. E.g. Coq elaborates induction to fixpoints and case splits, Agda to recursive definitions and case trees and Lean to eliminators. There is also no clear consensus on design philosophy and in particular on how big a "kernel" should be and what it should include. Nonetheless, I can do a quick survey on references. Be warned that it is highly opinionated and reflects my personal take on the subject. 1. Basic type checking and conversion checking A modern view on this matter is that Coquand's semantic type checking algorithm is the preferred basic algorithm because of its performance and simplicity. Bidirectional type checking is another ubiquitous foundational principle. All of the sources below use both. http://davidchristiansen.dk/tutorials/nbe/ detailed and beginner-friendly. Code available in Racket in the previous link, there's also a Haskell version. Coquand's original paper has more technical parts, but the basic idea is readable. Includes Haskell appendix. My minimal self-contained Haskell implementation. Another small implementation , in OCaml, with additional explanation . 2. Basic unification and inference Here, the core ideas used in the larger systems are pattern unification and contextual metavariables. See my introduction . 3. Advanced unification, elaboration, implicit arguments Chapter 3 of Ulf Norell's thesis is good introduction to implicit arguments and constraint postponing. I have again a small implementation for implicit arguments (not for postponing though). Abel & Pientka on advanced unification features, most of which are implemented in Agda. This is a bit dense and short on implementation details. Gundry & McBride significantly overlaps with the previous reference, and is helpful as an alternative take. Ziliani & Sozeau on unification. Describes a version which is not the one actually implemented in Coq, but which is fairly close. Contains many implementation-relevant details. 4. Inductive types, induction, termination checking At this point, approaches diverge and literature becomes sparse. Coq, Agda and Lean have greatly different implementations. Mini-TT has simple inductive types and dependent case splits. Less expressive than Coq/Agda but good as starting point. Reference on the calculus of inductive constructions used in Coq. This is fairly detailed, and it should be possible to do an implementation based on this. The most recent piece on Agda pattern matching. Also partly describes inductive types in Agda. Fairly technical. The third main approach is compiling to eliminators, which is used in Lean AFAIK. The Lean elaboration paper lists references on elaborating to eliminators. On termination checking, the main reference is this , which is very similar to the one used in Agda. 
I'm not familiar with the Coq termination checker, I believe the basic principles are the same, although the Agda one is more powerful/complicated. In Lean, termination checking is implicit in the compilation to eliminators. 5. Modules, type classes, automation Unfortunately, I run out of expertise at this point, and I'm not sure what are the good references and approaches. The CiC manual and Norell's thesis can be used to copy the Coq or Agda module implementation. For tactics, reflection and type classes, I imagine the best would be to dig into Coq/Agda/Idris/Lean docs and sources and learn how they work. I haven't mentioned Idris so far, because it's similar to Agda, and its unique features are fairly experimental and more relevant to programming than to proof writing. I'm mentioning error reflection here specifically which I think is highly useful in any system.
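To make the "basic type checking" references of section 1 a bit more concrete, here is a toy bidirectional checker for the simply typed lambda calculus in Python; the AST encoding is my own invention for this sketch and does not match any of the systems above:
def infer(ctx, term):
    tag = term[0]
    if tag == 'var':                      # variables: look up in the context
        return ctx[term[1]]
    if tag == 'app':                      # application: infer the function, check the argument
        fty = infer(ctx, term[1])
        assert fty[0] == '->', "applying a non-function"
        check(ctx, term[2], fty[1])
        return fty[2]
    if tag == 'ann':                      # annotation: the switch from checking to inference
        check(ctx, term[1], term[2])
        return term[2]
    raise TypeError("cannot infer a type for " + tag + "; add an annotation")

def check(ctx, term, ty):
    if term[0] == 'lam':                  # lambdas are checked, never inferred
        assert ty[0] == '->', "a lambda must have an arrow type"
        check({**ctx, term[1]: ty[1]}, term[2], ty[2])
    else:                                 # otherwise fall back to inference plus comparison
        assert infer(ctx, term) == ty, "type mismatch"

# the identity function, annotated at the base type 'b':
ident = ('ann', ('lam', 'x', ('var', 'x')), ('->', 'b', 'b'))
print(infer({}, ident))                   # ('->', 'b', 'b')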
|
{
"source": [
"https://math.stackexchange.com/questions/3466976",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/26327/"
]
}
|
3,476,022 |
I was watching this Mathologer video ( https://youtu.be/YuIIjLr6vUA?t=1652 ) and he says at 27:32 First, suppose that our initial chunk is part of a parabola, or if you like a cubic, or any polynomial. If I then tell you that my mystery function is a polynomial, there's always going to be exactly one polynomial that continues our initial chunk . In other words, a polynomial is completely determined by any part of it. [...] Again, just relax if all this seems a little bit too much. So he didn't give a proof of the theorem in bold text – I think this is very important. I understand that there always exists a polynomial of degree $n$ that passes through a set of $n+1$ points (i.e. there are finitely many custom points to be passed by, the chunk has to be discrete, like $(1,1),(2,2),(3,3),(4,5)$ ). But there also exists some polynomial of degree $m$ ( $m\ne n$ ) that passes through the same set of points. But how do I prove that there exists one and only one polynomial that passes through a set of infinitely many points?
|
If $p$ and $q$ are polynomials agreeing on infinitely many points, then $p-q$ is a polynomial that’s 0 on infinitely many points. But if a polynomial $f$ of degree $n$ is $0$ on more than $n$ points, then it’s zero everywhere. (If it has zeroes $a_1, \ldots a_n$ , then by repeated division it’s of the form $c(x-a_1)\cdots(x-a_n)$ ; if it’s zero at some other point as well, then we get $c=0$ .) So a polynomial that’s 0 on infinitely many points is 0 everywhere. So, going back to the beginning, if $p$ and $q$ are polynomials agreeing on infinitely many points, then $p-q$ is zero everywhere, i.e. $p=q$ .
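A numerical illustration in Python with numpy (the sample points and the quadratic are arbitrary choices): forcing a cubic through four points sampled from a quadratic leaves no freedom, and its cubic coefficient comes out (numerically) zero, just as the uniqueness argument predicts.
import numpy as np

xs = np.array([0., 1., 2., 3.])
ys = xs**2 + 1              # four samples of p(x) = x^2 + 1
q = np.polyfit(xs, ys, 3)   # the unique polynomial of degree <= 3 through them
print(q)                    # ~[0, 1, 0, 1]: it is just p again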
|
{
"source": [
"https://math.stackexchange.com/questions/3476022",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/678463/"
]
}
|
3,476,040 |
I want to show that $f: \mathbb{R}^3 \backslash \{0\} \to \mathbb{R}^2, f(x)=\begin{pmatrix} \sin |x|_2 \\ |x|_2^3+1 \end{pmatrix}$ is differentiable. I believe I have to show that $f_1$ and $f_2$ are both differentiable and their derivatives both continuous. Is it enough to say that $\partial _1 f_1=\cos |x|_2$ and $\partial _2 f_2 = 3|x|_2^2$ which are both continuous on $\mathbb{R}^2$ ? I feel like this is not enough...
|
Write $f$ as a composition. The map $r(x)=|x|_2=\sqrt{x_1^2+x_2^2+x_3^2}$ is smooth on $\mathbb{R}^3 \backslash \{0\}$ , since the square root is smooth away from $0$ , and $t\mapsto \sin t$ and $t\mapsto t^3+1$ are smooth on $\mathbb{R}$ . Each component of $f$ is therefore a composition of differentiable maps, hence differentiable, and $f$ is differentiable because each of its components is. If you prefer the partial-derivative criterion, note that your formulas are not quite right: by the chain rule $\partial_i f_1=\cos(|x|_2)\,\frac{x_i}{|x|_2}$ and $\partial_i f_2=3|x|_2\,x_i$ for $i=1,2,3$ , and all six partial derivatives are continuous on $\mathbb{R}^3 \backslash \{0\}$ , which is enough to conclude that $f$ is (continuously) differentiable there.
|
{
"source": [
"https://math.stackexchange.com/questions/3476040",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/771714/"
]
}
|
3,479,061 |
I recently took a master's course on functional analysis and I was wondering why we 'skip' the metric structure on vector spaces. I have seen that $$\{\text{vector spaces}\}\supsetneq\{\text{topological vector spaces}\}\supsetneq\{\text{locally convex vector spaces}\}\supsetneq\{\text{normed vector spaces}\}\supsetneq\{\text{inner product spaces}\}.$$ Isn't it natural to include vector spaces endowed with a metric (that may not be induced by a norm) in this sequence? I can't seem to find any literature about it.
|
If the metric is not related to the vector space structure, there is not much to talk about. As you say, we could require that the metric is translation invariant. And there is another operation on a vector space, which is multiplication by scalars: does it make any sense to say that $2x$ is not at twice the distance from the origin as $x$ is? So you want to assume that the metric scales with scalar multiplication. That is, the metric satisfies $d(x,y)=d(x+z,y+z)$ and $d(\lambda x,\lambda y)=|\lambda|\,d(x,y)$ . With those two assumptions, $\|x\|=d(x,0)$ is a norm that induces the metric $d$ . So, there is very little room for endowing a vector space with a meaningful metric that does not come from a norm but still somehow interacts naturally with the vector space structure. And if the metric does not match with the vector space structure, then you have no reason to pay attention to the topology and the vector space structure at the same time.
|
{
"source": [
"https://math.stackexchange.com/questions/3479061",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/701529/"
]
}
|
3,481,048 |
Is the number $$(11!)!+11!+1$$ a prime number ? I do not expect that a probable-prime-test is feasible, but if someone actually wants to let it run, this would of course be very nice. The main hope is to find a factor to show that the number is not prime. If we do not find a factor, it will be difficult to check the number for primality. I highly expect a probable-prime-test to reveal that the number is composite. "Composite" would be a surely correct result. Only if the result were "probable prime" would slight doubts remain, but I would be confident with such a test anyway. Motivation : $(n!)!+n!+1$ can only be prime if $\ n!+1\ $ is prime. This is because a non-trivial factor of $\ n!+1\ $ would also divide $\ (n!)!+n!+1\ $ . The cases $\ n=2,3\ $ are easy , but the case $\ n=11\ $ is the first non-trivial case. We only know that there is no factor up to $\ p=11!+1\ $ What I want to know : Can we calculate $$(11!)!\mod \ p$$ for $\ p\ $ having $\ 8-12\ $ digits with a trick ? I ask because pari/gp takes relatively long to calculate this residue directly. So, I am looking for an acceleration of this trial division.
|
I let $p_1=1+11!$ for convenience. By Wilson's theorem if there's a prime $p$ that divides $1+11!+(11!)! = p_1 + (p_1-1)!$ then $$(p-1)!\equiv -1\pmod p$$ And also $$(p_1-1)!\equiv -p_1\pmod p$$ So $$(p-1)(p-2)\cdots p_1\cdot(p_1-1)!\equiv -1\pmod p$$ $$(p-1)(p-2)\cdots p_1\cdot p_1\equiv 1\pmod p$$ This way I was able to check all the primes from $p_1$ to 74000000 in 12 hours. This gives a 3.4% chance of finding a factor according to big prime country's heuristic. The algorithm has bad asymptotic complexity because to check a prime $p$ you need to perform $p-11!$ modular multiplications so there's not much hope of completing the calculation. Note that I haven't used that $p_1$ is prime, so maybe that can still help somehow. Here's the algorithm in c++: // compile with g++ main.cpp -o main -lpthread -O3
#include <iostream>
#include <vector>
#include <string>
#include <boost/process.hpp>
#include <thread>
namespace bp = boost::process;
const constexpr unsigned int p1 = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 + 1; // 11!+1
const constexpr unsigned int max = 100'000'000; // maximum to trial divide
std::vector<unsigned int> primes;
unsigned int progress = 40;
void trial_division(unsigned int n) { // check the primes congruent to 2n+1 mod 16
for(auto p : primes) {
if(p % 16 != (2 * n + 1)) continue;
uint64_t prod = 1;
for(uint64_t i = p - 1; i >= p1; --i) {
prod = (prod * i) % p;
}
if((prod * p1) % p == 1) {
std::cout << p << "\n";
}
if(n == 0 && p > progress * 1'000'000) {
std::cout << progress * 1'000'000 << "\n";
++progress;
}
}
}
int main() {
bp::ipstream is;
bp::child primegen("./primes", std::to_string(p1), std::to_string(max), bp::std_out > is);
// this is https://cr.yp.to/primegen.html
// the size of these primes don't really justify using such a specialized tool, I'm just lazy
std::string line;
while (primegen.running() && std::getline(is, line) && !line.empty()) {
primes.push_back(std::stoi(line));
} // building the primes vector
// start 8 threads, one for each core for on my computer, each checking one residue class mod 16
// By Dirichlet's theorem on arithmetic progressions they should progress at the same speed
// the 16n+1 thread owns the progress counter
std::thread t0(trial_division, 0);
std::thread t1(trial_division, 1);
std::thread t2(trial_division, 2);
std::thread t3(trial_division, 3);
std::thread t4(trial_division, 4);
std::thread t5(trial_division, 5);
std::thread t6(trial_division, 6);
std::thread t7(trial_division, 7);
t0.join();
t1.join();
t2.join();
t3.join();
t4.join();
t5.join();
t6.join();
t7.join();
} I only need to multiply integers of the order of $11!$ so standard 64 bit ints suffice. EDIT: Divisor found! $1590429889$ So first of all, the Wilson's theorem trick slows down instead of speeding up after $2p_1$ . Secondly, the trial division function is nearly infinitely parallelizable, which means that it's well suited to being computed on a GPU. My friend wrote an implementation that can be found here . This can be run on CUDA compatible Nvidia GPUs. Finding the factor took about 18 hours on an Nvidia GTX Titan X Pascal.
|
{
"source": [
"https://math.stackexchange.com/questions/3481048",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/82961/"
]
}
|
3,481,290 |
I am studying Lie theory and just thought of this random question out of curiosity. Can any manifold be turned into a Lie group? More precisely, given a manifold $G$ , can we always construct (or prove the existence of) some smooth map $m:G\times G\to G$ that makes $G$ into a Lie group? If not, is there an easy counterexample? I could imagine a construction going something like this: pick an arbitrary point $e\in G$ to be the identity, and define $m(e,g)=m(g,e)=g$ for all $g\in G$ . Then we already have the elements of the Lie algebra given as the tangent space at the identity $T_eG$ , and maybe we can use these to extend $m$ to all of $G$ ?
|
There is an easy counterexample: $S^2$ cannot be given a Lie group structure (this is a consequence of the hairy ball theorem). The problem with your construction is that it doesn't offer how to define $m(g,h)$ for any two nonidentity elements $g$ and $h$ .
|
{
"source": [
"https://math.stackexchange.com/questions/3481290",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/477746/"
]
}
|
3,484,145 |
Scrolling through old discrete mathematics exams I came across this "choose the correct answer" question: $16!$ is: a). $20 \; 922 \; 789 \; 888 \; 000$ b). $18 \; 122 \; 471 \; 235 \; 500$ c). $17 \; 223 \; 258 \; 843 \; 600$ Would you show me what your thinking process for solving this problem would look like? The ultimate goal is to find the correct answer; how you get to it does not matter, except that you have to invest only a reasonable amount of time, and calculators or other devices are not allowed.
|
$16!$ is divisible by $125$ since it's divisible by $5\times10\times15$ , and by $8$ , since it's divisible by $2\times4$ . Therefore, $16!$ must be a multiple of $1000$ , and the only acceptable choice is a).
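The reasoning, checked by brute force in Python (no mental arithmetic needed here):
from math import factorial
print(factorial(16))         # 20922789888000, i.e. option a)
print(factorial(16) % 1000)  # 0, as the divisibility argument predicts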
|
{
"source": [
"https://math.stackexchange.com/questions/3484145",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/617563/"
]
}
|
3,484,292 |
Let $S$ be a set and $F_S$ be the equivalence classes of all words that can be built from members of $S$ . Then $F_S$ is called the free group over $S$ . I don't understand the motivation for this definition. Since each word $w$ in $F_S$ is a finite product of elements of $S$ , it uniquely identifies an element $s\in S$ , so if $S$ were a group then clearly $S$ and $F_S$ would be isomorphic. What makes the free group an interesting object? I assume it is the case when $S$ is not a group, but instead some arbitrary set closed under some binary operation. I suppose the most general type of set we could define a free group over would be a magma , then? What group axiom should we leave out to construct an interesting (nontrivial) example of a free group? I suppose it would be associativity, but I am not sure.
|
The idea of a free group is a way of turning sets (without any structure) into groups . So, a word in $F_S$ does not uniquely identify an element of $S$ , because there is no way given to multiply elements of $S$ . In particular the word "set" comes with no implications of any operations on that set. A somewhat modern way to express the motivation behind free group is via the following axiom: The free group $F_S$ has the property that there is a function $i:S\rightarrow F_S$ given by sending each element of $S$ to the word consisting of that element alone and that, for any function $i':S\rightarrow G$ , there is a unique homomorphism $f:F_S\rightarrow G$ such that $i'=f\circ i$ . I highlight the kinds of maps involved in this definition, to note that the free group converts the set theoretic notion of a function into the group theoretic homomorphism . This definition is terse and takes some time to appreciate, but let's think about a particular example: Consider the free group $F_1$ on a single element $\{x\}$ . There has to be some element in this group called $x$ , since we must be able to embed $x$ into our group. Moreover, we can work out that some equation in $x$ holds in $F_1$ if it holds for every $\bar x \in G$ - which is exactly what the homomorphisms capture in the definition. So, for instance $$x\cdot x^{-1} = e$$ must be true, because it is true of every group and so must $$x\cdot (x\cdot x) = (x\cdot x)\cdot x.$$ On the contrary, $$x\cdot x = e$$ must not be true in $F_1$ because it is not true of the element $1$ in $\mathbb Z/3\mathbb Z$ for instance. We can get the suspicion that $F_1\cong \mathbb Z$ through examples, since every product involving only $x$ and $x^{-1}$ reduces to $x^n$ for some $n$ . This is what is meant by saying that an equation follows from the group axioms: that means that it is true in every group. In fact, once we've proved that the definition, if it identifies a group, identifies a group uniquely up to isomorphism (not too terrible of an exercise, but not obvious - it's good to think of why $F_1$ can't be $\mathbb Q$ ), we can see that setting $F_1=\mathbb Z$ and selecting $x=1\in \mathbb Z$ , the definition above is indeed satisfied by constructing morphisms $f:F_S\rightarrow G$ that take $n\in\mathbb Z$ to $i'(x)^n$ . Things work out similarly if we have multiple elements; for instance, we would see that equations such as $$x\cdot y = y\cdot x$$ do not hold in every group $G$ and for every $x,y\in G$ , and in fact, the only equations that do hold are those that only involve easy cancellations, as in the free group - and to prove that, we just note that the set of words under this cancellation law is a group in which only these relations hold and that the group axioms imply that these relations must hold in every group. Formally, this gives another notion of a free group: The free group on a set $S$ is the set of expressions built from multiplication and inversion using elements of $S$ (and an added identity element), where two expressions are considered equivalent if their equivalence follows from the group axioms (i.e. holds in every group). A nice generalization of this is that you can then go on to define group presentations like $$D_{16}\cong\langle x,y | xy = y^{-1}x, y^2 = e, x^8 = e\rangle$$ similarly to be a group where an equation holds if and only if it follows from the group axioms and the given relations. 
Equivalently, you can define it as a group $G$ with identified elements $x,y$ such that, for any group $H$ and any elements $\bar x,\bar y\in H$ satisfying all the desired relations, there is a unique homomorphism $f:G\rightarrow H$ taking $x$ to $\bar x$ and $y$ to $\bar y$ - and with a bit more work, you can see that this is also just the quotient of the free group on $\{x,y\}$ by the normal subgroup generated by the set $\{xyx^{-1}y,y^2,x^8\}$ . Okay, but your question is somewhat implicitly asking about what happens when $S$ was already a group - since then, when we have a word in $S$ , we already know how to multiply it together. This leads to an interesting thought: $F_S$ is a group built by forgetting how to multiply, then putting a new multiplication rule generated by this set. In fact, the prior definition leads us to a nice fact: there is a homomorphism $\epsilon:F_S\rightarrow S$ that takes a free group on a group back into the group. This is, in category theory parlance, called the counit , but that's not so important. This map $\epsilon$ not the identity nor is it ever an isomorphism - for instance, if we started with the trivial group $(\{e,\},\cdot)$ and take the free group, we get that the free group on $\{e\}$ is $\mathbb Z$ with members of the form $e^n$ - all of which, when multiplied out, give $e$ . So, somehow, the members of this free group are "unevaluated" expressions in the prior group. What this also tells you is that, since $\epsilon$ is clearly surjective, it must be that $S$ is a quotient group of $F_S$ - telling us that every group is a quotient of some free group. What's really neat about these maps is that you can define the notion of a group by thinking about them carefully. In particular, if we know how to take free groups, but even then forget about how to multiply Let $S$ be a set and let $FS$ be the set of reduced words over $S$ . A group $G$ is a set $G$ along with a map $f:FG\rightarrow G$ such that, treating $g$ as a one letter word in $FG$ , we have $f(g)=g$ and for any element $\omega$ of $FFG$ (i.e. a reduced word whose letters are each reduced words), the following processes yield the same result: (1) take the word $\omega$ and apply $f$ to each letter in the word, yielding a word in $FG$ after reduction. Then apply $f$ again to this word. (2) append all the reduced words in $\omega$ together to get a reduced word in $FG$ . Apply $f$ to this. For instance, if we want to define the group $\mathbb Z/2\mathbb Z$ using this definition, we would start with the set $G=\{e,x\}$ and then define a map $f:FG\rightarrow G$ by saying $f(w)$ is $e$ if an even number of $x$ 's appear in $w$ and is $x$ otherwise. It's clear that $f(e)=e$ and $f(x)=x$ for the first axiom. For the second, we would consider words like $$(ex)\cdot(xx)^{-1}\cdot(xe)$$ and note that, apply $f$ to each "letter" (parenthesized expression) in this word gives $$x\cdot e^{-1} \cdot x$$ which gives $e$ when we apply $f$ . If we instead concatenated the word together first and cancelled, we would get $$exx^{-1}x^{-1}xe\rightarrow ee$$ which then, applying $f$ , gives $e$ . One can figure out that this process really does define a group, so we can then say that a group is precisely a rule for transforming words in $F_S$ back into $S$ . This process generalizes into the notion of an algebra over a monad, but that's more category theory nonsense we don't need to worry about. 
To finish, it's worth considering what happens when we take away some axioms of groups; if you remove inverses, then you just get that the free monoid on a set $S$ is just the set $S^*$ of all words in $S$ , under the operation of concatenation - where you still have relations like $$(xy)z = x(yz)$$ but almost nothing else. If you get rid of associativity and identity, you end up with the free magma on a set... which is just the set of all fully parenthesized expressions with one operator over that set (i.e. the set of rooted ordered binary trees whose leaves are labeled with the set and where the operation is taking two trees and building a new one whose root's left child is the left argument and right child is the right argument). A bit more illuminating is actually to add structure. For instance, we can get a ring by sensibly axiomatizing addition and multiplication - and then the free ring on a single element set $\{x\}$ is every expression one can write in terms of $x$ and the terms $0$ and $1$ with multiplication, addition and negation - so expressions like $1+x+x\cdot (x+1)$ . These all reduce down to some polynomial with integer coefficients - and one can prove that the free ring on a single element is just $\mathbb Z[x]$ : the ring of polynomials with integer coefficients. This also has the significance that you can essentially evaluate these polynomials in any ring by looking at the homomorphism that takes $x$ to what you want to evaluate at, then seeing where that same homomorphism takes the polynomial in question. For instance, if you want to evaluate $x^3-2$ at $\sqrt{2}\in\mathbb R$ , you can send $x$ to $\sqrt{2}$ and see that $x^3-2$ must go to $2\sqrt{2}-2$ . There are also some examples where "freeness" doesn't work out; for instance, a field has multiplication, division, addition, and subtraction. There is no free field on any set however, since, for instance, the equation $$1+1=0$$ is not true in every field, so couldn't hold in any free field - however, then we're confronted with the fact that we can't map any field in which this doesn't hold into any field in which it does, because, for instance, $\frac{1}2$ makes no sense in $\mathbb Z/2\mathbb Z$ . You will also find examples where you start with some structure, and then freely add some more structure - this is most common with rings (e.g. you can start with a monoid for multiplication and extend it to a ring), but it also can apply to groups - for instance, you can start with a group and freely "extend" to an abelian group (giving the process of abelianization) or start with a monoid and turn it into the freest group possible. There's also some analogous notions in fields such as topology - in general, these ideas fall under the general category of adjunction from category theory (but let's still not worry about that).
|
{
"source": [
"https://math.stackexchange.com/questions/3484292",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/38584/"
]
}
|
3,485,718 |
I'm wondering if it is possible to axiomatize associativity using a set of equations in only two variables. Suppose we have a signature consisting of one binary operation $\cdot$ . Is it possible to find a set $\Sigma$ of equations containing only variables $x$ and $y$ , such that the equational theory generated by axioms $\Sigma$ is equal to the equational theory generated by axiom $(x\cdot y)\cdot z = x\cdot (y\cdot z)$ ? Or in other words, such that the variety defined by equations $\Sigma$ is equal to the variety defined by equation $(x\cdot y)\cdot z = x\cdot (y\cdot z)$ ? EDIT: We can take as $\Sigma$ the set of all equations in variables $x$ and $y$ that are entailed by equation $(x\cdot y)\cdot z = x\cdot (y\cdot z)$ . For example $(x\cdot y)\cdot x = x\cdot (y\cdot x)$ is one of the many equations contained in $\Sigma$ . The question is whether $\Sigma$ in turn entails $(x\cdot y)\cdot z = x\cdot (y\cdot z)$ . EDIT2: As per Milo Brandt's comment, for any three terms $p(x,y)$ , $q(x,y)$ , $r(x,y)$ containing at most variables $x, y$ , equation $p\cdot (q\cdot r)=(p\cdot q)\cdot r$ is in $\Sigma$ . Thus, for any algebra $A$ in the variety defined by equations $\Sigma$ , every subalgebra of $A$ generated by two elements is associative. So, in a sense, $A$ is "locally associative".
|
To answer the question negatively, it suffices to find an algebra $(A, \cdot)$ such that each subalgebra generated by two elements is associative, but such that $A$ itself is non-associative. Let $A=\{a,b,c\}$ , and let $\cdot$ be defined by $$ab=ba=bb=b\\ bc=cb=cc=c\\ ca=ac=aa=a.$$ Every subset of $A$ of size $2$ is the domain of a subalgebra, isomorphic to $(\{0,1\},\max)$ , which is associative. But $A$ is not associative, since $(ab)c = bc = c$ , but $a(bc) = ac = a$ .
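A brute-force verification of this counterexample in Python (the multiplication table is exactly the one above):
from itertools import product

mul = {('a','b'):'b', ('b','a'):'b', ('b','b'):'b',
       ('b','c'):'c', ('c','b'):'c', ('c','c'):'c',
       ('c','a'):'a', ('a','c'):'a', ('a','a'):'a'}

def associative(S):
    return all(mul[mul[x,y], z] == mul[x, mul[y,z]]
               for x, y, z in product(S, repeat=3))

# every two-element subset is closed under mul and associative...
for S in [('a','b'), ('b','c'), ('a','c')]:
    assert all(mul[x,y] in S for x, y in product(S, repeat=2))
    assert associative(S)

# ...but the full algebra is not: (ab)c = c while a(bc) = a
print(associative(('a','b','c')))                     # False
print(mul[mul['a','b'],'c'], mul['a', mul['b','c']])  # c a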
|
{
"source": [
"https://math.stackexchange.com/questions/3485718",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/134012/"
]
}
|
3,498,785 |
In maths and sciences, I see the phrases "function of" and "with respect to" used quite a lot. For example, one might say that $f$ is a function of $x$ , and then differentiate $f$ "with respect to $x$ ". I am familiar with the definition of a function and of the derivative, but it's really not clear to me what a function of something is, or why we need to say "with respect to". I find all this a bit confusing, and it makes it hard for me to follow arguments sometimes. In my research, I've found this , but the answers here aren't quite what I'm looking for. The answers seemed to discuss what a function is, but I know what a function is. I am also unsatisfied with the suggestion that $f$ is a function of $x$ if we just label its argument as $x$ , since labels are arbitrary. I could write $f(x)$ for some value in the domain of $f$ , but couldn't I equally well write $f(t)$ or $f(w)$ instead? To illustrate my confusion with a concrete example: consider the cumulative amount of wax burnt, $w$ as a candle burns. In a simple picture, we could say that $w$ depends on the amount of time for which the candle has been burning, and so we might say something like " $w$ is a function of time". In this simple picture, $w$ is a function of a single real variable. My confusion is, why do we actually say that $w$ is a function of time? Surely $w$ is just a function on some subset of the real numbers (depending specifically on how we chose to define $w$ ), rather than a function of time? Sure, $w$ only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a time as its argument, but why does that mean it is a function of time ? There's nothing stopping me from putting any old argument (provided $w$ is defined at that point) into $w$ , like the distance I have walked since the candle was lit. Sure, we can't really interpret $w$ in the same way if I did this, but there is nothing in the definition of $w$ which stops me from doing this. Also, what happens when I do some differentiation on $w$ . If I differentiate $w$ "with respect to time", then I'd get the time rate at which the candle is burning. If I differentiate $w$ "with respect to" the distance I have walked since the candle was lit, I'd expect to get either zero (since $w$ is not a function of this), or something more complicated (since the distance I have walked is related to time). I just can't see mathematically what is happening here: ultimately, no matter what we're calling our variables, $w$ is a function of a single variable, not of multiple, and so shouldn't there be absolutely no ambiguity in how to differentiate $w$ ? Shouldn't there just be "the derivative of $w$ ", found by differentiating $w$ with respect to its argument (writing "with respect to its argument" is redundant!). Can anyone help clarify what we mean by "function of" as opposed to function, and how this is important when we differentiate functions "with respect to" something? Thanks!
|
As a student of math and physics, this has been one of the biggest annoyances for me; I'll give my two cents on the matter. Throughout my entire answer, whenever I use the term "function", it will always mean in the usual math sense (a rule with a certain domain and codomain blablabla). I generally find two ways in which people use the phrase "... is a function of ..." The first is as you say: " $f$ is a function of $x$ " simply means that for the remainder of the discussion, we shall agree to denote the input of the function $f$ by the letter $x$ . This is just a notational choice as you say, so there's no real math going on. We just make this choice of notation to in a sense "standardize everything". Of course, we usually allow for variants on the letter $x$ . So, we may write things like $f(x), f(x_0), f(x_1), f(x'), f(\tilde{x}), f(\bar{x})$ etc. The way to interpret this is as usual: this is just the result obtained by evaluating the function $f$ on a specific element of its domain. Also, you're right that the input label is completely arbitrary, so we can say $f(t), f(y), f(\ddot{\smile})$ whatever else we like. But again, often times it might just be convenient to use certain letters for certain purposes (this can allow for easier reading, and also reduce notational conflicts); and as much as possible it is a good idea to conform to the widely used notation, because at the end of the day, math is about communicating ideas, and one must find a balance between absolute precision and rigour and clarity/flow of thought. btw as a side remark, I think I am a very very very nitpicky individual regarding issues like: $f$ vs $f(x)$ for a function, I'm also always careful to use my quantifiers properly etc. However, there have been a few textbooks I glossed over, which are also extremely picky and explicit and precise about everything; but while what they wrote was $100 \%$ correct, it was difficult to read (I had to pause often etc). This is as opposed to some other books/papers which leave certain issues implicit, but convey ideas more clearly. This is what I meant above regarding balance between precision and flow of thought. Now, back to the issue at hand. In your third and fourth paragraphs, I think you have made a couple of true statements, but you're missing the point. (one of) the job(s) of any scientist is to quantitatively describe and explain observations made in real life. For example, you introduced the example of the amount of wax burnt, $w$ . If all you wish to do is study properties of functions which map $\Bbb{R} \to \Bbb{R}$ (or subsets thereof), then there is clearly no point in calling $w$ the wax burnt or whatever. But given that you have $w$ as the amount of wax burnt, the most naive model for describing how this changes is to assume that the flame which is burning the wax is kept constant and all other variables are kept constant etc. Then, clearly the amount of wax burnt will only depend on the time elapsed. From the moment you start your measurement/experiment process, at each time $t$ , there will be a certain amount of wax burnt off, $w(t)$ . In other words, we have a function $w: [0, \tau] \to \Bbb{R}$ , where the physical interpretation is that for each $t \in [0, \tau]$ , $w(t)$ is the amount of wax burnt off $t$ units of time after starting the process. Let's for the sake of definiteness say that $w(t) = t^3$ (with the above domain and codomain). 
"Sure, $w$ only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a (real number in the domain of definition, which we interpret as) time as its argument" True. "...Sure, we can't really interpret $w$ in the same way if I did this, but there is nothing in the definition of w which stops me from doing this." Also true. But here's where you're missing the point. If you didn't want to give a physical interpretation of what elements in the domain and target space of $w$ mean, why would you even talk about the example of burning wax? Why not just tell me the following: Fix a number $\tau > 0$ , and define $w: [0, \tau] \to \Bbb{R}$ by $w(t) = t^3$ . This is a perfectly self-contained mathematical statement. And now, I can tell you a bunch of properties of $w$ . Such as: $w$ is an increasing function For all $t \in [0, \tau]$ , $w'(t) = 3t^2$ (derivatives at end points of course are interpreted as one-sided limits) $w$ has exactly one root (of multiplicity $3$ ) on this interval of definition. (and many more other properties). So, if you want to completely forget about the physical context, and just focus on the function and its properties, then of course you can do so. Sometimes, such an abstraction is very useful as it removes any "clutter". However, I really don't think it is (always) a good idea to completely disconnect mathematical ideas from their physical origins/interpretations. And the reason that in the sciences people often assign such interpretations is because their purpose is to use the powerful tool of mathematics to quantitatively model an actual physical observation. So, while you have made a few technically true statements in your third and fourth paragraphs, I believe you've missed the point of why people assign physical meaning to certain quantities. For your fifth paragraph however, I agree with the sentiment you're describing, and questions like this have tortured me. You're right that $w$ is a function of a single variable (where in this physical context, we interpret the arguments as time). If you now ask me how does $w$ change in relation to the distance I have started to walk, then I completely agree that there is no relation whatsoever. But what is really going on is a terrible, annoying, confusing abuse of notation, where we use the same letter $w$ to have two differnent meanings. Physicists love such abuse of notation, and this has confused me for so long (and it still does from time to time). Of course, the intuitive idea of why the amount of wax burnt should depend on distance is clear: the further I walk, the more time has passed, and hence the more max has burnt. So, this is really a two step process. To formalize this, we need to introduce a second function $\gamma$ (between certain subsets of $\Bbb{R}$ ), where the interpretation is that $\gamma(x)$ is the time taken to walk a distance $x$ . Then when we (by abuse of language) say $w$ is a function of distance, what we really mean is that The composite function $w \circ \gamma$ has the physical interpretation that for each $x \in \text{domain}(\gamma)$ , $(w \circ \gamma)(x)$ is the amount of wax burnt when I walk a distance $x$ . Very often, this composition is not made explicit. In the Leibniz chain rule notation \begin{align}
\dfrac{dw}{dx} &= \dfrac{dw}{dt} \dfrac{dt}{dx}
\end{align} where on the LHS $w$ is miraculously a function of distance, even though on the RHS (and initially) $w$ was a function of time, what is really going on is that the $w$ on the LHS is a complete abuse of notation. And of course, the precise way of writing it is $(w \circ \gamma)'(x) = w'(\gamma(x)) \cdot \gamma'(x)$ . In general, whenever you initially have a function $f$ "as a function of $x$ " and then suddenly it becomes a "function of $t$ ", what is really meant is that we are given two functions $f$ and $\gamma$ ; and when we say "consider $f$ as a function of $x$ ", we really mean to just consider the function $f$ , but when we say "consider $f$ as a function of time", we really mean to consider the (completely different) function $f \circ \gamma$ . Summary: if the arguments of a function suddenly change interpretations (e.g. from time to distance or really anything else) then you immediately know that the author is being sloppy/lazy about explicitly mentioning that there is a hidden composition.
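To make the two-step process completely concrete, here is a minimal worked instance (the constant walking speed $v$ is an assumption introduced purely for illustration): take $w(t) = t^3$ as above, and let $\gamma(x) = x/v$ be the time needed to walk a distance $x$ at constant speed $v$ . Then $$(w \circ \gamma)(x) = \left(\frac{x}{v}\right)^3, \qquad (w \circ \gamma)'(x) = 3\left(\frac{x}{v}\right)^2 \cdot \frac{1}{v} = w'(\gamma(x)) \cdot \gamma'(x),$$ which is exactly the content of the Leibniz formula once the hidden composition is written out.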
|
{
"source": [
"https://math.stackexchange.com/questions/3498785",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/739254/"
]
}
|
3,499,645 |
In the mathematics book I have, there is a sub-chapter called "Practical procedure to resolve inequalities" that states: Given a polynomial $P(x)$ that has real, simple roots, and finding the solutions to the equation $P(x) = 0$ , afterwards sorting the solutions $x_1, x_2, ..., x_n$ , then the sign of $P$ over an interval $(x_i, x_{i + 1})$ is the opposite of its neighboring intervals $(x_{i - 1}, x_i)$ and $(x_{i + 1}, x_{i + 2})$ . I've plotted functions of the form $$a\prod_{i = 1}^{n}(x - a_i), \space a, a_1, a_2, ..., a_n \in [0, \infty), \space a_i \ne a_j \space \forall i, j \in \{1, 2, ..., n\} $$ What's an intuitive way of thinking about this and why it happens?
|
Firstly, to simplify the problem, start by re-numbering all the $a_{i}$ ’s from least to greatest. Think of the behavior at $x=a_n$ : notice how the polynomial will look like $$(\text{pos numb})(\text{pos numb})\cdots(\text{pos numb})(x-a_{n})(\text{neg numb})(\text{neg numb})\cdots(\text{neg numb})$$ Recall that in a product of several numbers, the sign of the product is determined by whether the number of negative factors is even or odd: odd makes the product negative, even makes it positive. Then, imagine changing $x$ . Whenever it’s below $a_{n}$ , there will be one more negative factor than when it’s just barely above $a_{n}$ . Therefore, the sign must change when passing through $x=a_{n}$ .
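To see this sign bookkeeping in a concrete case (the polynomial is my own illustrative choice), take $P(x) = (x-1)(x-3)(x-5)$ : at $x=0$ the factor signs are $(-)(-)(-)$ , so $P<0$ ; at $x=2$ they are $(+)(-)(-)$ , so $P>0$ ; at $x=4$ they are $(+)(+)(-)$ , so $P<0$ ; and at $x=6$ they are $(+)(+)(+)$ , so $P>0$ . Exactly one factor flips sign as $x$ crosses each root, so the sign of $P$ alternates from one interval to the next.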
|
{
"source": [
"https://math.stackexchange.com/questions/3499645",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/690200/"
]
}
|
3,505,997 |
I want a correct and feasible answer to this question.
So does anyone have any creative ideas to prove this equation? $A$ and $B$ are $3\times3$ matrices. $\det(AB - BA) = \dfrac{1}{3}\operatorname{Trace}\left((AB - BA)^3\right)$ Answer: We can write and compute both sides to prove it but this is not a good solution!
|
This follows easily from Cayley-Hamilton theorem. Since $M=AB-BA$ has zero trace, by Cayley-Hamilton theorem, $M^3=cM+dI_3$ where $c$ is some scalar and $d=\det(M)$ . Therefore $\operatorname{tr}(M^3)=3d$ and the result follows.
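For anyone who wants to see the identity in action before proving it, here is a minimal numerical check (a sketch assuming numpy; random sampling is of course no substitute for the proof above):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))
    M = A @ B - B @ A                    # commutator, so trace(M) == 0
    lhs = np.linalg.det(M)
    rhs = np.trace(M @ M @ M) / 3
    print(np.isclose(lhs, rhs))          # True for every sample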
|
{
"source": [
"https://math.stackexchange.com/questions/3505997",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/721429/"
]
}
|
3,506,017 |
I have seen plenty of epsilon delta examples, but am not sure how to apply them to this problem. The question states
"Using the $\epsilon$ − δ definition of limits, show that $\lim\limits_{x, y \to (0,0)} xy\frac{x^2-y^2}{x^2+y^2}=0$ .
I know how to prove a limit exists by showing delta > epsilon, but nothing at this level. Any help is appreciated, thank you.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3506017",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/741229/"
]
}
|
3,506,020 |
Conjecture . For every natural number $n \in \Bbb{N}$ , there exists a finite set of consecutive numbers $C\subset \Bbb{N}$ containing $n$ such that $\sum\limits_{c\in C} c$ is a prime power. A list of the first few numbers in $\Bbb{N}$ has several different covers by such consecutive number sets. [ASCII diagram: two alternative covers of the integers $1$ through $43$ and beyond by blocks of consecutive numbers, one cover with block sums $3, 7, 5, 13, 8, 19, 11, 25, 29, 16, 37, 41, 49, 53, 61, 67, 73, \ldots$ and the other with block sums $11, 17, 31, 43, 81, 59, 71, 3^5, \ldots$ , each sum a prime power.] Has this been proved already?
|
For any odd prime $p$ , there are $p$ consecutive integers centred on $p$ that sum to $p^2$ . $2+3+4=3^2$ $3+4+5+6+7=5^2$ $4+5+6+7+8+9+10=7^2$ etc. Let $p_n$ be the $n$ -th prime. Then, using Bertrand's postulate in the form $$p_{n+1}<2p_n$$ we know that the above sums for consecutive primes overlap. Finally, we note that $1+2=3$ to complete the proof. I don't know if this has been shown before, but the proof seems straightforward.
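A brute-force check of the statement for small $n$ is easy to run; here is a minimal Python sketch (the search bound of $200$ is an arbitrary illustrative choice, and the code is independent of the proof above):

def is_prime_power(m):
    if m < 2:
        return False
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            return m == 1        # prime power iff p was the only prime factor
        p += 1
    return True                  # m itself is prime

def covered(n, bound=200):
    # look for consecutive integers a..b containing n whose sum is a prime power
    for a in range(1, n + 1):
        for b in range(n, bound):
            if is_prime_power((b - a + 1) * (a + b) // 2):
                return True
    return False

print(all(covered(n) for n in range(1, 100)))    # True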
|
{
"source": [
"https://math.stackexchange.com/questions/3506020",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/26327/"
]
}
|
3,506,928 |
Given a square $n \times n$ matrix $A$ that satisfies $$\sum\limits_{k=0}^n a_k A^k = 0$$ for some coefficients $a_0, a_1, \dots, a_n,$ can we deduce that its characteristic polynomial is $\sum\limits_{k=0}^n a_k x^k$ ?
|
The answer is no. You are given a degree $n$ polynomial $p(x)$ that an $n\times n$ matrix $A$ satisfies. This is not enough information to find the characteristic polynomial $c_A(x)$ , although you will be able to narrow it down to finitely many possibilities. Let's look at an example to see why. Suppose your friend picks the matrix $A$ . Suppose he doesn't tell you $A$ , but he does tell you $p(x)$ and asks you to guess $c_A(x)$ . Suppose your friend picked $$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$ and told you that $p(x)=x^3-6x^2+11x-6$ . You could then reason that $p(x)=(x-1)(x-2)(x-3)$ . Which would mean that the minimal polynomial $m_A(x)$ must be one of the following: $(x-1)$ , $(x-2)$ , $(x-3)$ , $(x-1)(x-2)$ , $(x-1)(x-3)$ , $(x-2)(x-3)$ , $(x-1)(x-2)(x-3)$ . Hence the characteristic polynomial $c_A(x)$ must be one of the following: $(x-1)^3$ , $(x-2)^3$ , $(x-3)^3$ , $(x-1)^2(x-2)$ , $(x-1)(x-2)^2$ , $(x-1)^2(x-3)$ , $(x-1)(x-3)^2$ , $(x-2)^2(x-3)$ , $(x-2)(x-3)^2$ , $(x-1)(x-2)(x-3)$ . Since your friend picked $$A=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$ the characteristic polynomial is $c_A(x)=(x-1)^2(x-2)$ , but you can't prove that, because for all you know he may have picked $$B=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$ which also satisfies $p(x)=x^3-6x^2+11x-6=(x-1)(x-2)(x-3)$ .
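Here is the same point verified numerically (a minimal sketch assuming numpy; both candidate matrices satisfy the given $p$ , yet their characteristic polynomials differ):

import numpy as np

A = np.diag([1.0, 1.0, 2.0])
B = np.diag([1.0, 2.0, 3.0])

def p(M):
    # p(x) = x^3 - 6x^2 + 11x - 6, evaluated at a matrix
    return M @ M @ M - 6 * (M @ M) + 11 * M - 6 * np.eye(3)

print(np.allclose(p(A), 0), np.allclose(p(B), 0))   # True True
print(np.poly(A))   # [ 1. -4.  5. -2.]  i.e. (x-1)^2 (x-2)
print(np.poly(B))   # [ 1. -6. 11. -6.]  i.e. (x-1)(x-2)(x-3)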
|
{
"source": [
"https://math.stackexchange.com/questions/3506928",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/608434/"
]
}
|
3,511,223 |
I have a circle of radius r a square of length l The centre of the square is currently rotating around the circle in a path described by a circle of radius $(r + \frac{l}{2})$ However, the square overlaps the circle at e.g. 45°. I do not want the square to overlap with the circle at all, and still smoothly move around the circle. The square must not rotate, and should remain in contact with the circle i.e. the distance of the square from the circle should fluctuate as the square moves around the circle. Is there a formula or algorithm for the path of the square? [edit] Thanks for all your help! I've implemented the movement based on Yves solution with help from Izaak's source code : I drew a diagram to help me visualise the movement as well (the square moves along the red track):
|
As said elsewhere, the trajectory of the center is made of four line segments and four circular arcs obtained by shifting the quarters of the original circle. But it is not enough to know the shape of the trajectory; we also need to know the distance as a function of time, assuming that the center rotates at constant angular speed (not the contact point). For convenience, the square is of side $2l$ . For a contact on the left side of the square, we intersect the line $$d\,(\cos\theta,\sin\theta)$$ where $\theta=\omega t$ , with the straight part of the trajectory, $$x=r+l$$ and we obtain the point $$(r+l,\tan\theta(r+l))$$ and the distance $$\color{green}{d=\frac{r+l}{\cos\theta}}.$$ For a contact on the bottom left corner , we intersect the same line with the shifted circle $$\left(x-l\right)^2+\left(y-l\right)^2=r^2$$ and this gives us the quadratic equation in $d$ $$\color{green}{d^2-2ld(\cos\theta+\sin\theta)+2l^2-r^2=0}.$$ You need to repeat the reasoning for the other sides and corners of the square. Finally, the switch from contact by the side to contact by the corner occurs at an angle $\theta$ such that $$\begin{cases}d\cos\theta=r+l,\\d\sin\theta=l,\end{cases}$$ i.e. $$\color{green}{\tan\theta=\frac l{r+l}}.$$
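For readers who want to trace the center's motion numerically, here is a minimal Python sketch of the distance function (the symmetry reduction to one octant and the choice of the larger quadratic root are my own reading of the derivation above, so treat this as an illustration; it assumes $l \le r$ so the corner-contact root is real):

from math import cos, sin, tan, pi, sqrt

def center_distance(theta, r, l):
    """Distance from the circle's center to the square's center
    (square of side 2*l, never rotating), at polar angle theta."""
    phi = theta % (pi / 2)                 # the picture repeats every 90 degrees
    phi = min(phi, pi / 2 - phi)           # and is symmetric about 45 degrees
    if tan(phi) <= l / (r + l):
        return (r + l) / cos(phi)          # contact along a side
    s = cos(phi) + sin(phi)                # contact at a corner: larger root of
    return l * s + sqrt(l*l*s*s - 2*l*l + r*r)   # d^2 - 2*l*d*s + 2l^2 - r^2 = 0

print(center_distance(0.0, 1.0, 0.5))      # 1.5, i.e. r + l
print(center_distance(pi / 4, 1.0, 0.5))   # ~1.7071, corner contact at 45 degrees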
|
{
"source": [
"https://math.stackexchange.com/questions/3511223",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/742405/"
]
}
|
3,511,229 |
I don't understand how to find a logarithm of Jordan block. Wikipedia says, that we can use Mercator series, but I don't know why it works for matrices - I know only the proof for real numbers. In some other books I've only found explanation on the level "by contruction we have that $ e^B=A $ " and the elements of matrix $ B $ . Can anybody explain formally how to find this logarithm? Edit . By finding the logarithm I mean not just deriving the formula for it, but rigorous proof that this formula is correct. If $ A $ is a Jordan block of the form $ \lambda I + N $ , where $ N $ is nilpotent and $ I $ is identity matrix, I know that $$ B=\log{A}=I\log{\lambda}+\sum_{j=1}^{n-1}\frac{(-1)^{j+1}}{j\lambda^j}N^j. $$ Unfortunately, I don't understand how to prove that $ e^B=A. $ I tried to do it by the definition of exponential, but I wasn't successful.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3511229",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/655905/"
]
}
|
3,521,499 |
Why does one need topological spaces if one has metric spaces? What is the motivation of the abstract theory of topological spaces? For me, the theory of metric spaces is quite natural. But I do wonder why there is the need of generalizing the whole theory... For instance, what are examples of topological spaces that are not metric spaces that really show that the theory of topological spaces is useful. There should be a strong reason, pathological examples don't suffice.
|
Metrics are often irrelevant. Even when working with metric spaces, it is not uncommon to phrase an argument purely in the language of open sets - and I have not infrequently seen mathematicians write proofs relying heavily on a metric and a complicated analytical argument when a simpler topological proof would suffice. Essentially, a topological space is a weaker structure than a metric space with a lot of the same logic. Metrics are sometimes unnatural. There is a lot of study in topology where one works in metrizable spaces, but where there's no clear candidate for which metric to use - and it doesn't matter because we only care about topology. A common example of this is that the extended reals $\mathbb R\cup \{-\infty,\infty\}$ is metrizable, being homeomorphic to $[0,1]$ , but it's really inconvenient to actually use the metric because every metric greatly distorts the ends of the real line - it's almost universally easier to think about the space in terms of open sets, noting that "close to $\infty$ " means "bigger than some value" and things like that - where there's still some clear metric-like ideas, but we don't have to warp the line to do it. A lot of spaces fall into this category: for instance, projective space, one-point compactifications, and Cartesian products. Similarly, in a simplicial or CW complex, there tends to be a possibility of defining a metric, but we really don't care about it because we're more interested in the combinatorial structure of the connections or the topological properties than any idea of distance. Some important (categorical) constructions don't work with metrics. A broader reason that metrics are not often used is because there's not really a good category of metric spaces. There is no notion of, for instance, an initial topology or an infinite product space - but these are extremely important in functional analysis. For instance, the Banach-Alaoglu theorem is really critical in functional analysis, especially in combination with theorems about duals such as the Riesz representation theorem , but these deal with the weak-* topology, which is usually not metrizable - and they often reason about these topologies via Tychonoff's theorem which simply has no analog in the theory of metric spaces. These theorems relate to incredibly important spaces that might have some nice properties (like Hausdorff or compact), but also fail others (like metrizable or even first countable). There are also wonderful things such as the Stone-Cech compactification which have surprising universal properties - but lead to incredibly badly behaved spaces which really cannot fit into the theory of metric spaces. Some useful topological spaces really aren't at all like metric spaces. Examples such as the Zariski topology or the order topology on a poset often greatly contrast with the usual intuition behind topology - and allow familiar topological reasoning on an unfamiliar object. These spaces, however, are often not compatible with the theory of metric spaces, so there is no convenient flow of ideas that way. This isn't to say that metric spaces are not useful, but they are good at describing spaces in which distance is a notion we want to think about. They are not so good as a basis for thinking about shapes and spaces more generally, where we might intentionally ignore distances to allow us to think about deformations and such.
|
{
"source": [
"https://math.stackexchange.com/questions/3521499",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
3,521,509 |
Would the following be correct?: Let $(X_1\times X_2, T_1\times T_2)$ be the product topology of $(X_1,T_1)$ & $(X_2,T_2)$ , defined as the least fine topology that makes the projections $$p_1:X_1\times X_2\rightarrow X_1: (x_1, x_2)\rightarrow x_1$$ & $$p_2:X_1\times X_2\rightarrow X_2:(x_1,x_2)\rightarrow x_2$$ continuous. By a previous lemma, $$T_1\times T_2=\{ p_1^{-1}(U_1)\cap p^{-1}_2(U_2): U_1\in T_1, U_2\in T_2\} .$$ Now, $p^{-1}_1(U_1)=U_1\times X_2$ and $p^{-1}_2(U_2)=X_1\times U_2$ implies $p_1^{-1}(U_1)\cap p_2^{-1}(U_2)=U_1\times U_2$ and thus $$T_1\times T_2=\{ U_1\times U_2:U_1\in T_1, U_2\in T_2\} .$$ The reason I find this confusing is that I do not see a mistake in my reasoning, yet the author of my book merely claims that any element $U\in T_1\times T_2$ is of the form $$\bigcup _{\alpha ,\beta} \{U^{\alpha}_1\times U^{\beta}_2\}$$ for $U^{\alpha}_1\in T_1$ , $U^{\beta}_2\in T_2$ , which doesn't contradict my reasoning, but if I was correct then certainly the author would mention the result I've shown, yet he doesn't. I would appreciate any help.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3521509",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/362026/"
]
}
|
3,530,539 |
I was sitting in analysis yesterday and, naturally, we took the limit of some expression. It occurred to me that "taking the limit" of some expression abides by the rules of a linear transformation $$\lim_{x \rightarrow k}\ c(f(x)+g(x)) = c \lim_{x \rightarrow k} f(x) + c\ \lim_{x \rightarrow k} g(x),$$ and (my group theory is virtually nonexistent) appears also to be a homomorphism: $$\lim_{x \rightarrow k} (fg)(x) = \left(\lim_{x \rightarrow k} f(x)\right)\left(\lim_{x \rightarrow k} g(x)\right), $$ etc. Anyway, my real question is, what mathematical construct is the limit?
|
In general, let $X, Y$ be topological spaces, and $x_0$ a non-isolated point of $X$ . Then strictly speaking, " $\lim_{x\to x_0} f(x) = L$ " is a relation between functions $f : X \to Y$ and points $L \in Y$ (the equality notation being misleading in general). Now, if $Y$ is a Hausdorff topological space, it happens that this relation is what is known as a partial function : for any $f : X \to Y$ , there is at most one $L \in Y$ such that $\lim_{x\to x_0} f(x) = L$ . Now, for any relation $R \subseteq (X \to Y) \times Y$ which is a partial function, we can define a corresponding function $\{ f \in (X \to Y) \mid \exists y \in Y, (f, y) \in R \} \to Y$ by sending $f$ satisfying this condition to the unique $y$ with $(f, y) \in R$ . Then that somewhat justifies the "equality" in the notation $\lim_{x\to x_0} f(x) = L$ , though you still need to keep in mind that it is a partial function where $\lim_{x\to x_0} f(x)$ is not defined for all $f$ . (This part relates to the answer by José Carlos Santos.) Building on top of this, in the special case of $Y = \mathbb{R}$ , we can put a ring structure on $X \to Y$ by pointwise addition, pointwise multiplication, etc. Then $\{ f : X \to \mathbb{R} \mid \exists L \in \mathbb{R}, \lim_{x\to x_0} f(x) = L \}$ turns out to be a subring of $X \to \mathbb{R}$ , and the induced function from this subring to $\mathbb{R}$ is a ring homomorphism. (More generally, this will work if $Y$ is a topological ring. Similarly, if $Y$ is a topological vector space, then the set of $f$ with a limit at $x_0$ is a linear subspace of $X \to Y$ and the limit gives a linear transformation; if $Y$ is a topological group, you get a subgroup of $X \to Y$ and a group homomorphism; and so on.)
|
{
"source": [
"https://math.stackexchange.com/questions/3530539",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/675015/"
]
}
|
3,540,068 |
Description In the game of Yahtzee, 5 dice are rolled to determine a score. One of the resulting rolls is called a Yahtzee. To roll a Yahtzee you must have 5 of a kind (5 1's or 5 2's or 5 3's, etc.). In the game of Yahtzee you can only have 5 dice. However, for the purpose of this question I want to entertain adding more dice to the equation. Therefore I'd like to define a Yahtzee as follows: To roll a Yahtzee you must have exactly 5 of a kind, no more and no less (5 1's or 5 2's or 5 3's, etc.). Examples Let's look at some rolls with 6 dice. The following would be a Yahtzee:
1 1 1 1 1 4
6 3 3 3 3 3
5 5 3 5 5 5
The following would not be a Yahtzee:
1 1 1 3 3 3
1 1 1 1 5 3
1 1 1 1 1 1
- Note that the last roll does technically contain 5 1's; however, because the roll as an entirety contains 6 1's, this is not a Yahtzee.
Let's look at some rolls with 12 dice. The following would be a Yahtzee:
1 1 2 1 2 1 4 4 1 3 6 2
1 1 1 1 1 2 2 2 2 2 3 3
1 1 1 1 1 2 2 2 2 2 2 2
- Note that the first roll is a Yahtzee with 5 1's; this roll is to illustrate that order doesn't matter.
- Note that the second roll has 2 Yahtzees; this is a roll that counts as a Yahtzee.
- Note that the third roll has a Yahtzee with 1's but has 7 2's. This roll is a Yahtzee because it contains exactly 5 1's. The 7 2's do not nullify this roll.
The following would not be a Yahtzee:
1 1 1 2 2 2 3 3 3 4 4 4
1 1 1 1 1 1 6 6 6 6 6 6
- Note that the last roll has 6 1's and 6 6's. Because exactly 5 of one number (no more, no less) is not present, this roll does not contain a Yahtzee.
The Question What is the optimal number of dice to roll a Yahtzee in one roll? A more generalized form of the question is as follows: Given $n$ dice, what is the probability of rolling a Yahtzee of length $y$ in one roll?
|
By inclusion-exclusion, the full probability of Yahtzee is: $$\frac{1}{6^n}\sum_{k=1}^{\min(6,n/5)} (-1)^{k+1} \binom{6}{k} (6-k)^{n-5k} \prod_{j=0}^{k-1} \binom{n-5j}{5}.$$ If you prefer, write the product with a multinomial: $$\prod_{j=0}^{k-1} \binom{n-5j}{5}=\binom{n}{5k}\binom{5k}{5,\dots,5}.$$ Looks like $n=29$ is the uniquely optimal number of dice: $$\begin{matrix}
n &p\\
\hline
28 &0.71591452705020 \\
29 &0.71810623718825 \\
30 &0.71770441391497 \\
\end{matrix}$$ Here is the SAS code I used: proc optmodel;
set NSET = 1..100;
num p {n in NSET} =
(1/6^n) * sum {k in 1..min(6,n/5)} (-1)^(k+1)
* comb(6,k) * (if k = 6 and n = 5*k then 1 else (6-k)^(n-5*k))
* prod {j in 0..k-1} comb(n-5*j,5);
print p best20.;
create data outdata from [n] p;
quit;
proc sgplot data=outdata;
scatter x=n y=p;
refline 29 / axis=x;
xaxis values=(0 20 29 40 60 80 100);
run;
|
{
"source": [
"https://math.stackexchange.com/questions/3540068",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/147080/"
]
}
|
3,544,034 |
In this question I plotted the number of numbers with $n$ prime factors. It appears that the further out on the number line you go, the further ahead the count of numbers with $3$ prime factors gets. The charts show the number of numbers with exactly $n$ prime factors, counted with multiplicity: (Please ignore the 'Divisors' in the chart legend, it should read 'Factors') My question is: will the line for numbers with $3$ prime factors be overtaken by another line, or do 'most numbers have $3$ prime factors'? If it is indeed the case that most numbers have $3$ prime factors, what is the explanation for this?
|
Yes, the line for numbers with $3$ prime factors will be overtaken by another line. As shown & explained in Prime Factors: Plotting the Prime Factor Frequencies , even up to $10$ million, the most frequent count is $3$ , with the mean being close to it. However, it later says For $n = 10^9$ the mean is close to $3$ , and for $n = 10^{24}$ the mean is close to $4$ . The most common # of prime factors increases, but only very slowly, and with the mean having "no upper limit". OEIS A $001221$ 's closely related (i.e., where multiplicities are not counted) Number of distinct primes dividing n (also called omega(n)) says The average order of $a(n): \sum_{k=1}^n a(k) \sim \sum_{k=1}^n \log \log k.$ - Daniel Forgues , Aug 13-16 2015 Since this involves the log of a log, it helps explain why the average order increases only very slowly. In addition, the Hardy–Ramanujan theorem says ... the normal order of the number $\omega(n)$ of distinct prime factors of a number $n$ is $\log(\log(n))$ . Roughly speaking, this means that most numbers have about this number of distinct prime factors. Also, regarding the statistical distribution, you have the Erdős–Kac theorem which states ... if $ω(n)$ is the number of distinct prime factors of $n$ (sequence A001221 in the OEIS , then, loosely speaking, the probability distribution of $$\frac {\omega (n)-\log \log n}{\sqrt {\log \log n}}$$ is the standard normal distribution . To see graphs related to this distribution, the first linked page of Prime Factors: Plotting the Prime Factor Frequencies has one which shows the values up to $10$ million.
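If you want to reproduce the counts behind such charts yourself, here is a minimal Python sketch using a smallest-prime-factor sieve (the bound of $10^6$ is an arbitrary illustrative choice):

from collections import Counter

N = 10**6
spf = list(range(N + 1))                 # spf[n] = smallest prime factor of n
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:                      # p is prime
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

Omega = [0] * (N + 1)                    # prime factors counted with multiplicity
for n in range(2, N + 1):
    Omega[n] = Omega[n // spf[n]] + 1

print(Counter(Omega[2:]).most_common(5)) # 3 should top the list at this bound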
|
{
"source": [
"https://math.stackexchange.com/questions/3544034",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/405780/"
]
}
|
3,550,990 |
Let's consider Euler's constant $\gamma$ , i.e., $$\gamma=\lim_{n\to \infty}\left(\sum_{k=1}^n\frac{1}{k}-\ln(n)\right).$$ Prove the following approximation: $$\sum_{k=1}^{m-1}\frac{1}{k}-\ln(m)+\frac{1}{2m}+\frac{1}{12m^2}\approx \gamma.$$ The above approximation can be found in many places, e.g. John D. Cook's blog, and appears in the asymptotics chapter of Concrete Mathematics as a non-trivial exercise on Euler's summation formula . While there are more efficient algorithms that estimate Euler's constant, this approximation also gives one way to estimate large values of the harmonic numbers (as mentioned in John's blog).
|
We have \begin{align}
\sum\limits_{k = 1}^m {\frac{1}{k}} - \log m &= \sum\limits_{k = 1}^m {\frac{1}{k}} - \log \prod\limits_{k = 2}^m {\frac{k}{{k - 1}}}
\\
&= \sum\limits_{k = 1}^m {\frac{1}{k}} - \sum\limits_{k = 2}^m {\log \frac{k}{{k - 1}}} \\&= 1 + \sum\limits_{k = 2}^m {\left[ {\frac{1}{k} - \log \frac{k}{{k - 1}}} \right]} \\ &= 1 + \sum\limits_{k = 2}^\infty {\left[ {\frac{1}{k} - \log \frac{k}{{k - 1}}} \right]} - \sum\limits_{k = m + 1}^\infty {\left[ {\frac{1}{k} - \log \frac{k}{{k - 1}}} \right]}
\\
&= 1 + \sum\limits_{k = 2}^\infty {\left[ {\frac{1}{k} + \log \left( {1 - \frac{1}{k}} \right)} \right]} - \sum\limits_{k = m + 1}^\infty {\left[ {\frac{1}{k} + \log \left( {1 - \frac{1}{k}} \right)} \right]} .
\end{align} By Taylor's theorem $$
\frac{1}{k} + \log \left( {1 - \frac{1}{k}} \right) = - \frac{1}{{2k^2 }} + \mathcal{O}\!\left( {\frac{1}{{k^3 }}} \right),
$$ whence the infinite series is convergent and we can write $$
\sum\limits_{k = 1}^m {\frac{1}{k}} - \log m = \gamma - \sum\limits_{k = m + 1}^\infty {\left[ {\frac{1}{k} + \log \left( {1 - \frac{1}{k}} \right)} \right]} ,
$$ with some constant $\gamma$ . By Taylor's formula, $$
\frac{1}{k} + \log \left( {1 - \frac{1}{k}} \right) = - \sum\limits_{j = 2}^\infty {\frac{1}{{jk^j }}} ,
$$ hence \begin{align}
\sum\limits_{k = 1}^m {\frac{1}{k}} - \log m - \gamma = \sum\limits_{k = m + 1}^\infty {\sum\limits_{j = 2}^\infty {\frac{1}{{jk^j }}} } = \sum\limits_{j = 2}^\infty {\frac{1}{j}\sum\limits_{k = m + 1}^\infty {\frac{1}{{k^j }}} } = \sum\limits_{j = 2}^\infty {\frac{1}{{j!}}\sum\limits_{k = m + 1}^\infty {\frac{{(j - 1)!}}{{k^j }}} } .
\end{align} By the Euler integral $$
\frac{{(j - 1)!}}{{k^j }} = \int_0^{ + \infty } {e^{ - kt} t^{j - 1} dt} ,
$$ whence, using the geometric series and the Taylor series of the exponential function, \begin{align}
\sum\limits_{k = 1}^m {\frac{1}{k}} - \log m - \gamma &= \sum\limits_{j = 2}^\infty {\frac{1}{{j!}}\sum\limits_{k = m + 1}^\infty {\int_0^{ + \infty } {e^{ - kt} t^{j - 1} dt} } } \\& = \sum\limits_{j = 2}^\infty {\frac{1}{{j!}}\int_0^{ + \infty } {\frac{{e^{ - (m + 1)t} }}{{1 - e^{ - t} }}t^{j - 1} dt} }
\\ &= \int_0^{ + \infty } {\frac{{e^{ - (m + 1)t} }}{{1 - e^{ - t} }}\frac{1}{t}\sum\limits_{j = 2}^\infty {\frac{{t^j }}{{j!}}} dt} \\ &= \int_0^{ + \infty } {\frac{{e^{ - mt} }}{{e^t - 1}}\frac{{e^t - t - 1}}{t}dt} \\ & = \int_0^{ + \infty } {e^{ - mt} \left( {1 - \frac{t}{{e^t - 1}}} \right)\frac{1}{t}dt} .
\end{align} Now for $0<t<2\pi$ , $$
\left( {1 - \frac{t}{{e^t - 1}}} \right)\frac{1}{t} = \frac{1}{2} - \sum\limits_{n = 1}^\infty {\frac{{B_{2n} }}{{(2n)!}}t^{2n - 1} } ,
$$ with $B_n$ being the Bernoulli numbers. Noting that our function tends to zero at infinity and employing Taylor's theorem, we have that $$
\left| {\left( {1 - \frac{t}{{e^t - 1}}} \right)\frac{1}{t} - \left( {\frac{1}{2} - \sum\limits_{n = 1}^{N - 1} {\frac{{B_{2n} }}{{(2n)!}}t^{2n - 1} } } \right)} \right| \le C_N t^{2N - 1}
$$ for $t>0$ and each positive $N$ with a suitable positive constant $C_N$ . Therefore, using the Euler integral, \begin{align}
\sum\limits_{k = 1}^m {\frac{1}{k}} - \log m - \gamma &= \int_0^{ + \infty } {e^{ - mt} \left( {\frac{1}{2} - \sum\limits_{n = 1}^{N - 1} {\frac{{B_{2n} }}{{(2n)!}}t^{2n - 1} } } \right)dt} + \mathcal{O}(1)\int_0^{ + \infty } {e^{ - mt} t^{2N - 1} dt}
\\ &= \frac{1}{2}\int_0^{ + \infty } {e^{ - mt} dt} - \sum\limits_{n = 1}^{N - 1} {\frac{{B_{2n} }}{{(2n)!}}\int_0^{ + \infty } {e^{ - mt} t^{2n - 1} dt} } \\ &\quad \, + \mathcal{O}(1)\int_0^{ + \infty } {e^{ - mt} t^{2N - 1} dt}
\\ &= \frac{1}{{2m}} - \sum\limits_{n = 1}^{N - 1} {\frac{{B_{2n} }}{{2n}}\frac{1}{{m^{2n} }}} + \mathcal{O}\! \left( {\frac{1}{{m^{2N} }}} \right).
\end{align} Re-arranging and subtracting $1/m$ from both sides gives $$
\sum\limits_{k = 1}^{m - 1} {\frac{1}{k}} = \log m + \gamma - \frac{1}{{2m}} - \sum\limits_{n = 1}^{N - 1} {\frac{{B_{2n} }}{{2n}}\frac{1}{{m^{2n} }}} + \mathcal{O}\!\left( {\frac{1}{{m^{2N} }}} \right).
$$ Taking $N=2$ yields your approximation.
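As a quick numerical check of the final approximation (a minimal sketch; the choice $m=1000$ is arbitrary):

from math import log

gamma = 0.5772156649015329
m = 1000
H = sum(1.0 / k for k in range(1, m))           # sum_{k=1}^{m-1} 1/k
approx = H - log(m) + 1 / (2 * m) + 1 / (12 * m**2)
print(approx - gamma)   # tiny (~1e-13), consistent with the O(m^{-4}) error
                        # term plus double-precision rounding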
|
{
"source": [
"https://math.stackexchange.com/questions/3550990",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/195296/"
]
}
|
3,556,440 |
This is a beginner's question. A complex number is an element of R², that is an ordered pair (a,b) , the numbers a and b being elements of R. A complex number can be written : a + ib . I know that a special kind of addition can be defined for complex numbers. But it seems to me that in " a+ ib" the " + " sign does not denote complex addition. It can't denote real addition either, for ( but I may be wrong here) , unless b=0 , ib is not a real number. Hence my question : what does the " + " sign denote in " a+ib "?
|
There is indeed a very annoying abuse of notation here. The short version is that the " $+$ " in " $a+bi$ " - in the context of defining the complex numbers - is being used as a purely formal symbol; that said, after having made sense of the complex numbers it can be conflated with complex addition. An actually formal way to construct $\mathbb{C}$ from $\mathbb{R}$ is the following: A complex number is an ordered pair $(a,b)$ with $a,b\in\mathbb{R}$ . We define complex addition and complex multiplication by $$(a,b)+_\mathbb{C}(c,d)=(a+c,b+d)$$ and $$(a,b)\times_\mathbb{C}(c,d)=(a\times c-b\times d, a\times d+b\times c)$$ respectively. Note that we're using the symbols " $+$ ," " $-$ ," and " $\times$ " here in the context of real numbers - we're assuming those have already been defined (we're building $\mathbb{C}$ from $\mathbb{R}$ ). We then introduce some shorthand: for real numbers $a$ and $b$ , the expression " $a+bi$ " is used to denote $(a,b)$ , " $a$ " is shorthand for $(a,0)$ , and " $bi$ " is shorthand for $(0,b)$ . We then note that " $a+bi=a+bi$ ," in the sense that $$a+bi=(a,b)=(a,0)+_\mathbb{C}(0,b)=a+_\mathbb{C}bi$$ (cringing a bit as we do so). Basically, what's happening in the usual construction of the complex numbers is that we're overloading the symbol " $+$ " horribly; this can in fact be untangled, but you're absolutely right to view it with skepticism (and it's bad practice in general to construct a new object so cavalierly). This old answer of mine explains how properties of $\mathbb{C}$ can be rigorously proved from such a rigorous construction, and may help clarify things. Additionally, it's worth noting that this sort of notational mess isn't unique to the complex numbers - the same issue can crop up with the construction of even very simple field extensions (see this old answer of mine ).
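The construction above is concrete enough to execute directly; here is a minimal Python sketch of the ordered-pair definition (the class name Cx is just an illustrative label):

from dataclasses import dataclass

@dataclass(frozen=True)
class Cx:
    a: float    # first coordinate, the "real part"
    b: float    # second coordinate, the "imaginary part"

    def __add__(self, other):    # (a,b) +_C (c,d) = (a+c, b+d)
        return Cx(self.a + other.a, self.b + other.b)

    def __mul__(self, other):    # (a,b) x_C (c,d) = (ac-bd, ad+bc)
        return Cx(self.a * other.a - self.b * other.b,
                  self.a * other.b + self.b * other.a)

i = Cx(0, 1)
print(i * i)    # Cx(a=-1, b=0), i.e. "i^2 = -1", with no mysterious '+' anywhere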
|
{
"source": [
"https://math.stackexchange.com/questions/3556440",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
]
}
|
3,557,171 |
Ramanujan found the following formula: $$\large \sum_{n=1}^\infty \frac{n^{13}}{e^{2\pi n}-1}=\frac 1{24}$$ I let $e^{2\pi n}-1=\left(e^{\pi n}+1\right)\left(e^{\pi n}-1\right)$ to try partial fraction decomposition and turn the sum into telescoping, but methinks it doesn't lead anywhere and only makes things hairy. How does one go about proving this? Thanks.
|
Suppose we seek to evaluate $$S = \sum_{n\ge 1} \frac{n^{13}}{e^{2\pi n}-1}.$$ This sum may be evaluated using harmonic summation techniques. Introduce the sum $$S(x; p) = \sum_{n\ge 1} \frac{n^{4p+1}}{e^{nx}-1}$$ with $p$ a positive integer and $x\gt 0.$ The sum term is harmonic and may be evaluated by inverting its Mellin
transform. Recall the harmonic sum identity $$\mathfrak{M}\left(\sum_{k\ge 1} \lambda_k g(\mu_k x);s\right) =
\left(\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} \right) g^*(s)$$ where $g^*(s)$ is the Mellin transform of $g(x).$ In the present case we have $$\lambda_k = k^{4p+1}, \quad \mu_k = k
\quad \text{and} \quad
g(x) = \frac{1}{e^x-1}.$$ We need the Mellin transform $g^*(s)$ of $g(x)$ which is $$\int_0^\infty \frac{1}{e^{x}-1} x^{s-1} dx
= \int_0^\infty \frac{e^{-x}}{1-e^{-x}} x^{s-1} dx
\\ = \int_0^\infty \sum_{q\ge 1} e^{-q x} x^{s-1} dx
= \sum_{q\ge 1} \int_0^\infty e^{-q x} x^{s-1} dx
\\= \Gamma(s) \sum_{q\ge 1} \frac{1}{q^s}
= \Gamma(s) \zeta(s).$$ It follows that the Mellin transform $Q(s)$ of the harmonic sum $S(x,p)$ is given by $$Q(s) = \Gamma(s) \zeta(s) \zeta(s-(4p+1))
\\ \text{because}\quad
\sum_{k\ge 1} \frac{\lambda_k}{\mu_k^s} =
\sum_{k\ge 1} k^{4p+1} \frac{1}{k^s}
= \zeta(s-(4p+1))$$ for $\Re(s) > 4p+2.$ The Mellin inversion integral here is $$\frac{1}{2\pi i} \int_{4p+5/2-i\infty}^{4p+5/2+i\infty} Q(s)/x^s ds$$ which we evaluate by shifting it to the left for an expansion about
zero. The two zeta function terms cancel the poles of the gamma function
term and we are left with just $$\begin{align}
\mathrm{Res}(Q(s)/x^s; s=4p+2) & = \Gamma(4p+2) \zeta(4p+2) / x^{4p+2}
\quad\text{and}\\
\mathrm{Res}(Q(s)/x^s; s=0) & = \zeta(0) \zeta(-(4p+1)).
\end{align}$$ Computing these residues we get $$(4p+1)! \frac{B_{4p+2} (2\pi)^{4p+2}}{2(4p+2)! \times x^{4p+2}}
= \frac{B_{4p+2} (2\pi)^{4p+2}}{2\times (4p+2) \times x^{4p+2}}$$ and $$- \frac{1}{2} \times -\frac{B_{4p+2}}{4p+2}.$$ This shows that $$S(x;p) = \frac{B_{4p+2} (2\pi)^{4p+2}}{(8p+4)\times x^{4p+2}}
+ \frac{B_{4p+2}}{8p+4}
+ \frac{1}{2\pi i} \int_{-1/2-i\infty}^{-1/2+i\infty} Q(s)/x^s ds.$$ To treat the integral recall the duplication formula of the gamma
function: $$\Gamma(s) =
\frac{1}{\sqrt\pi} 2^{s-1}
\Gamma\left(\frac{s}{2}\right)
\Gamma\left(\frac{s+1}{2}\right).$$ which yields for $Q(s)$ $$\frac{1}{\sqrt\pi} 2^{s-1}
\Gamma\left(\frac{s}{2}\right)
\Gamma\left(\frac{s+1}{2}\right)
\zeta(s) \zeta(s-(4p+1))$$ Furthermore observe the following variant of the functional equation
of the Riemann zeta function: $$\Gamma\left(\frac{s}{2}\right)\zeta(s)
= \pi^{s-1/2} \Gamma\left(\frac{1-s}{2}\right)
\zeta(1-s)$$ which gives for $Q(s)$ $$\frac{1}{\sqrt\pi} 2^{s-1}
\pi^{s-1/2}
\Gamma\left(\frac{s+1}{2}\right)
\Gamma\left(\frac{1-s}{2}\right)
\zeta(1-s)\zeta(s-(4p+1))
\\ =
\frac{1}{\sqrt\pi} 2^{s-1}
\pi^{s-1/2}
\frac{\pi}{\sin(\pi(s+1)/2)}
\zeta(1-s)\zeta(s-(4p+1))
\\ =
2^{s-1}
\frac{\pi^s}{\sin(\pi(s+1)/2)}
\zeta(1-s)\zeta(s-(4p+1)).$$ Now put $s=4p+2-u$ in the remainder integral to get $$- \frac{1}{x^{4p+2}}
\frac{1}{2\pi i} \int_{4p+5/2+i\infty}^{4p+5/2-i\infty}
2^{4p+1-u}
\\ \times \frac{\pi^{4p+2-u}}{\sin(\pi(4p+3-u)/2)}
\zeta(u-(4p+1))\zeta(1-u) x^u du
\\ = \frac{2^{4p+2} \pi^{4p+2}}{x^{4p+2}}
\frac{1}{2\pi i} \int_{4p+5/2-i\infty}^{4p+5/2+i\infty}
2^{u-1}
\\ \times \frac{\pi^{u}}{\sin(\pi(4p+3-u)/2)}
\zeta(u-(4p+1))\zeta(1-u) (x/\pi^2/2^2)^u du.$$ Now $$\sin(\pi(4p+3-u)/2) = \sin(\pi(1-u)/2+\pi (2p+1))
\\ = - \sin(\pi(1-u)/2) = \sin(\pi(-1-u)/2)
= - \sin(\pi(u+1)/2).$$ We have shown that $$\bbox[5px,border:2px solid #00A000]
{S(x;p) = \frac{B_{4p+2} (2\pi)^{4p+2}}{(8p+4)\times x^{4p+2}}
+ \frac{B_{4p+2}}{8p+4}
- \frac{(2\pi)^{4p+2}}{x^{4p+2}} S(4\pi^2/x;p)}.$$ In particular we get $$S(2\pi; p) = \frac{B_{4p+2}}{8p+4}.$$ The sequence in $p$ starting from $p=1$ is $${\frac{1}{504}},{\frac{1}{264}},1/24,
{\frac{43867}{28728}},{\frac{77683}{552}},
{\frac{657931}{24}},{\frac{1723168255201}{171864}},
\ldots$$ We thus have for $p=3$ as per request from OP $$\bbox[5px,border:2px solid #00A000]{
\sum_{n\ge 1} \frac{n^{13}}{e^{2\pi n}-1}
= \frac{1}{24}.}$$ References, as per request, are: Flajolet and Sedgewick, Mellin transform asymptotics , INRIA RR 2956 and Szpankowski, Mellin Transform and its applications , from Average Case Analysis of Algorithms on Sequences .
|
{
"source": [
"https://math.stackexchange.com/questions/3557171",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/477343/"
]
}
|
3,564,828 |
I'm currently learning calculus (high school senior), and I am not comfortable with the idea that the limit of the sums of rectangles actually converges to the area under the curve. I know it looks like it does, but how do we know for sure? Couldn't the tiny errors beneath/over the curve accumulate as we add more and more rectangles? What's troubling me is the whole Pi = 4 thing with the staircase approximating a circle pointwise, and how it's wrong and the perimeter of the staircase shape does not approach the circumference of the circle, even though pointwise it does approach a circle. So how are the increasingly many, increasingly small errors in Riemann sums any different? How do we know the error in each step decreases faster than the number of errors increases? I would really like to see a proof of this. Thanks so much!
|
This actually is a very good question and something that really takes a two-prong answer to fully do it justice. PART 1. You are right to be skeptical and, I'd say, this is yet another of those times when the "standard" maths curriculum and expositions do a really great job of making something easy, hard. You should trust your gut here, not your teacher. The answer is simple: no, you cannot "prove" this, unless you have an independent formal definition of "area" that is separate from the Riemann integral and comprehensive enough to handle these situations on its own. Such a definition can be made: it's called Lebesgue Measure , but it takes a bit more mathematical machinery than is available at this point in the exposition to make it work. Basically, Lebesgue measure is a function that takes a single input argument which is an entire set of points on the plane, i.e. $S$ , interpreted as a solid area and not merely the boundary, and tells you its area, $\mu(S)$ . This $S$ would be, in your case, the solid plane figure that is colored in on the graphs in your calculus textbook as being the "area under the curve". There are no integrals involved in its definition, though, and as I said, we'd need to introduce a considerable amount of new machinery to build it. But if you do that, then take my word that you can prove that $$\mu(S) = \int_{a}^{b} f(x)\ dx$$ where the right-hand side is a Riemann integral and the $S$ is as I described for this particular situation, whenever said Riemann integral exists. ADD: As @Paramanand Singh mentions in the comments, there are simpler ways to define the area that may be more digestible at this point, though they do not cover as many cases as the Lebesgue measure. The Borel measure and Jordan pseudo-measure are two such options and I could attempt to describe them here if you wish; or, you could ask another question in the vein of "What is a simple, integral-free definition of the area of a complicated plane figure that is digestible at or near the level of introductory calculus?" and I could then answer it with one or both of these. PART 2. This, of course, leads one to what a better way to introduce the integral should be, given we cannot at this stage do the necessary proof. And moreover, even if we could, it would lead one to scratch one's head as to just why exactly we care to be creating this idea of "Riemann integral" in the first place when we already have a perfectly good working construct for area. And so, what I'd say is that a superior approach is to say that the Riemann integral is an explicit method to reconstruct a function from its derivative and, to make this clearer, we also need a better intuitive understanding of what a "derivative" means beyond the "tangent line" business that, while actually not bad at all, is itself also ruined by a poor explanation - something I could go into in more detail, but I want to keep the focus on the problem at hand. As Deane Yang mentioned in what was one of the posts that had a great influence in shaping my present attitude towards maths and especially maths education, here: a better intuitive model for the "derivative" is that it is a kind of "sensitivity measurement": if I say that the derivative of a real-valued function of a real variable, $f$ , has the value at the point $x$ of $f'(x)$ , what that means intuitively is that if I "wiggle" $x$ back and forth a little bit, i.e. $\Delta x$ , about this value, and I then watch the output value of $f$ , i.e.
$f(x)$ , as though $f(x)$ were some instrument with a readout and $x$ a dial we could turn back and forth, then this $f(x)$ will likewise "wiggle" some other amount, i.e. $\Delta y$ , and that $$\Delta y \approx f'(x)\ \Delta x$$ provided $\Delta x$ is small - the accuracy of the approximation becoming as good as we like it if we make $\Delta x$ suitably smaller than whatever value we've been using so far: hence why we need to pass to a limit, a concept that, once more, can use some further elucidation. Or, to turn it around, $f'(x)$ is the "best" number to represent how much the output changes proportionally to the input, so long as we keep the input change small enough. The Riemann integral, then, is the answer to this question: given the derivative $f$ , give me a procedure to find a function $F$ that has it as its derivative, with the initial information that $F(a) = 0$ for some selected point $a$ . That is, it is in effect a constructive way to solve what in differential equations terminology would be called the initial-value problem , or IVP, $$\frac{dF}{dx} = f, \quad F(a) = 0,$$ and it proceeds as follows. We are given only the starting information that $F(a) = 0$ , and that $F' = f$ . So suppose we are to construct the value of $F$ at a new point $b$ for which $b > a$ . How may we try this, given what we've already discussed? So now, think about what I just said about the meaning of the derivative, and ask yourself this question: I know that $F'$ here is how sensitive $F$ is to a small change. So suppose I were to now do a Zeno-like manoeuvre and hop a small amount $\Delta x$ from $a$ rightward along the real number line to $a + \Delta x$ . What then should we guess for $F(a + \Delta x)$ ? Well, if you got what I just mentioned, then you should come to the following: since $F'(a)$ is proportionately how much $F$ will respond to a small change in its input around $a$ , and what we are doing is exactly that, then to make such a small change from $a$ to $a + \Delta x$ , we should likewise shift $F(a)$ to $F(a) + (F'(a) \Delta x)$ , so that $$\begin{align}
F(a + \Delta x) &\approx F(a) + [F'(a)\ \Delta x]\\
&= F(a) + [f(a)\ \Delta x]\end{align}$$ . And then, we can do the same thing, and make another small "wiggle" from $a + \Delta x$ to $[a + \Delta x] + \Delta x$ (i.e. $a + 2\Delta x$ ), and we get $$\begin{align}F([a + \Delta x] + \Delta x) &\approx F(a + \Delta x) + [F'(a + \Delta x)\ \Delta x]\\
&= F(a) + [f(a)\ \Delta x] + [f(a + \Delta x)\ \Delta x]\end{align}$$ and if you continue on this way all the way until we get to $b$ , or at least as close as possible, you see we have $$F(b) \approx \sum_{n=0}^{N-1} f(a + n\Delta x)\ \Delta x$$ or, letting $x_i := a + i\Delta x$ , $$F(b) \approx \sum_{i=0}^{N-1} f(x_i)\ \Delta x$$ . Moreover, we can generalize this a bit further still to allow for irregular steps, which increases the flexibility a bit for, say, mildly discontinuous input functions $f$ , and so we get $$F(b) \approx \sum_{i=0}^{N-1} f(x_i)\ \Delta x_i$$ and we're almost there; all it takes now is a limit to get to... $$F(b) = \lim_{||\Delta|| \rightarrow 0} \sum_i f(x_i)\ \Delta x_i$$ and if we introduce a bit of notation for this new concept now... $$\int_{a}^{b} f(x)\ dx := \lim_{||\Delta|| \rightarrow 0} \sum_i f(x_i)\ \Delta x_i$$ which is...? And by the way, is the Fundamental Theorem of Calculus much of a "surprise" now, or almost tautological, something that was by design, not a mystery to be solved? (This is a pattern I also find crops up elsewhere, where the best motivating reason for something is put after and not before - e.g. Cayley's theorem in abstract algebra.) That is, the real surprise is not that we can use the Riemann sum to find an antiderivative - that's its whole point - but that this sum also can describe an area, and that is, indeed, much less trivial to prove.
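Since the whole point above is that the Riemann sum is a procedure for rebuilding $F$ from $f$ , it is easy to run; here is a minimal Python sketch (the test case $f = \cos$ , so that $F(b) = \sin b$ , is my own illustrative choice):

from math import cos, sin

def reconstruct(f, a, b, n):
    """Rebuild F(b), given F(a) = 0 and F' = f, by n equal 'wiggles'."""
    dx = (b - a) / n
    F = 0.0
    for i in range(n):
        F += f(a + i * dx) * dx      # F gains about f(x_i) * dx on each hop
    return F

for n in (10, 100, 1000):
    print(n, reconstruct(cos, 0.0, 1.0, n) - sin(1.0))   # error shrinks ~ 1/n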
|
{
"source": [
"https://math.stackexchange.com/questions/3564828",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/755586/"
]
}
|
3,566,541 |
From a foundations/logic perspective, for one to prove Pythagoras' theorem, one would require the definition $$d(\boldsymbol x, \boldsymbol y) = \sqrt{(x_1-y_1)^2 + \cdots + (x_n - y_n)^2}$$ of distance in the first place, which embeds Pythagoras' theorem within it, so to call it a theorem seems a bit circular (although you could nitpick and prove the case for a right-angled triangle which is rotated and whose right-angle is not aligned with the coordinate axes). But basically Pythagoras' theorem is already encoded in the definition. So my question is, when we see "proofs" of Pythagoras such as the famous one below, what are we proving exactly? Or, more precisely, what axioms are we building off so that this is considered a proof? Is there some logical framework in which this can be considered a real proof? My guess is that this is simply something to aid our intuition and is based off our perception of the real world, and is in fact circular. Or perhaps it's based off something vague like Euclid's axioms. Edit: For clarity, I am mainly interested in whether the typical proofs we see are actually doing anything, in the modern sense. Allegedly there are hundreds of proofs of Pythagoras' theorem, some quite clever, but are they meaningful in any modern way?
|
Any mathematical system, and geometry in particular, is logically just a sequence of deductions from a set of axioms. The Pythagorean Theorem follows from Euclid's axioms for geometry. That was true in Euclid's time even though the axioms he used are not "logically sound" by modern standards. It's still true today when you use contemporary axioms. In fact, the Pythagorean Theorem is just one of many theorems in geometry that are equivalent to the famous fifth postulate on parallel lines - see https://www.cut-the-knot.org/triangle/pythpar/PTimpliesPP.shtml . If you study any of the non-Euclidean geometries in which the parallel postulate fails, the Pythagorean Theorem will fail there too. To connect the geometric content of the Pythagorean Theorem to the notion of distance between points given in the usual coordinate system in the plane you have to define a coordinate system. Doing that requires the parallel postulate. If your geometry starts from the usual coordinate system then you have implicitly assumed the parallel postulate in such a way that the Pythagorean Theorem does seem obvious, so not in need of the many proofs in the literature.
|
{
"source": [
"https://math.stackexchange.com/questions/3566541",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/301095/"
]
}
|
3,566,552 |
The coordinates of the feet of the perpendiculars from the vertices of a triangle to the opposite sides are $D(20,25)$ , $E(8,16)$ , and $F(8,9)$ . How many such triangles are there? What I tried: we know that the point of intersection of the perpendiculars from the vertices to the opposite sides is the orthocenter of the triangle, but I did not understand how that fact is useful here. Please help me solve it.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3566552",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/14096/"
]
}
|
3,570,121 |
Take these two questions: "Given objects $X$ , is there always a group $(X, e, *)$ with those objects?” (Ans: yes iff Axiom of Choice.) "Take a group $G$ , its automorphism group ${\rm Aut}(G)$ , the automorphism group of that ${\rm Aut}({\rm Aut}(G))$ , ${\rm Aut}({\rm Aut}({\rm Aut}(G)))$ , etc.: does this automorphism tower terminate (count it as terminating when successive groups are iso)?" (Ans, Hamkins: Yes, but the very same group can lead to towers with wildly different heights in different set theoretic universes.) Now, these two questions should be readily understood by a student who has just met a small amount of group theory in an introductory course, though their answers depend on set theoretic ideas going far beyond the little bit that appears in their introductory text (e.g. Alan Beardon's first year Cambridge text Algebra and Geometry ). The question arising: What other questions in group theory are there that would also strike a near-beginning student as simple and natural, and similarly involve more or less significant amounts of set theory in their answers?
|
One of my favourite results, told to me by my advisor over coffee once. Let $G$ be an abelian group. We say that it has a norm if there is a function $\nu\colon G\to\Bbb R$ whose behaviour is what you'd expect from "norm". Say that a norm is discrete if its range in $\Bbb R$ is a discrete set. Exercise. If $G$ is a free-abelian group, then it has a discrete norm. Difficult theorem. If $G$ has a discrete norm, then it is free-abelian. The only known proof uses Shelah's compactness theorem for singular cardinals. So quite significant heavy machinery from set theory and model theory combined.
|
{
"source": [
"https://math.stackexchange.com/questions/3570121",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/35151/"
]
}
|
3,598,325 |
I need some help in untangling and solving the following exercise: Let the curve $c:[a,b] \to \mathbb{R}^2, t \mapsto (t, y(t))$ be a
solution for the ODE $$ y'(x) = f(x, y(x)). $$ Justify the
"Physicist's method" (no offense intended) of rearranging the equation $\frac{dy}{dx} = f(x,y)$ through formal multiplication of $dx$ to $$
dy = f(x,y)dx, $$ by showing that both differential forms agree in
every point $c(t)$ on the tangent space. As far as I understand the situation, we are considering the two differential forms $$\begin{equation}
dy: \mathbb{R}^2 \to {\bigwedge}^1(T_p\mathbb{R}^2)\\
f(x,y)dx: \mathbb{R}^2 \to {\bigwedge}^1(T_p\mathbb{R}^2)\\
\end{equation}
$$ Here $dy$ is a constant differential form, in the sense that $dy(p)(x,y) = y$ independent of $p=(v,w) \in \mathbb{R}^2$ . However, in general $f(x,y)dx$ is not a constant differential form.
Now, if we choose a point on the curve $c$ , say $p = c(t_0) \in c([a,b])$ then we have $$
f(p)dx = f(c(t_0))dx = y'(t_0)dx,
$$ since $c$ is a solution to the ODE given above. I don't know how to proceed from this point. How can we argue that this equals $dy$ ?
|
The equality in $dy = f\, dx$ is very misleading, because strictly speaking it's not true. To see why, note that $dy$ is a differential $1$-form defined on $\Bbb{R}^2$, which means for each $p \in \Bbb{R}^2$, $dy_p : T_p \Bbb{R}^2 \to \Bbb{R}$ is linear. Similarly, $dx$ is also a differential $1$-form on $\Bbb{R}^2$. Let's for the sake of concreteness say $f: \Bbb{R}^2 \to \Bbb{R}$ is defined on all of $\Bbb{R}^2$, so that $f \, dx$ is still a $1$-form on $\Bbb{R}^2$. So, if we just write $dy = f \, dx$, this means that the $1$-form on the LHS must equal the $1$-form on the RHS. But this is just not the case, because it amounts to saying that $dy$ and $dx$ are linearly dependent over the module $C^{\infty}(\Bbb{R}^2)$. Just to really drive this point home, let's fix a point $p \in \Bbb{R}^2$; then, if that equality were true, it would mean $dy_p = f(p)\, dx_p$, where the equality is as elements of $T_p^*(\Bbb{R}^2)$ (the dual of the tangent space, i.e. the cotangent space). But this is of course absurd, because if you evaluate both sides on the tangent vector $\dfrac{\partial}{\partial y}\bigg|_{p} \in T_p\Bbb{R}^2$, you'll get the absurd equality $1 = 0$. Yet again, the statement $dy = f \, dx$ is kind of like saying the row vector $(0 , 1)$ equals $\lambda \cdot (1,0)$ for some $\lambda \in \Bbb{R}$... which is plain wrong. Now that I've hopefully convinced you that the equation taken literally is false, how do we interpret it? Well, the last sentence of your question gives a clue: it says "... by showing that both differential forms agree in every point c(t) on the tangent space." But the tangent space of what? $\Bbb{R}^2$? Clearly not, as I've just shown above. What is actually meant is that these two differential forms agree at every point $c(t) \in \Bbb{R}^2$, when restricted to the (one-dimensional) subspace $T_{c(t)}\left(\text{image}(c) \right) \subset T_{c(t)} \Bbb{R}^2$. But what is the tangent space to the image of $c$? It shouldn't be too hard to convince yourself that if you write $c(t) = (t, c_2(t))$ then the tangent space to the image equals the linear span of the (non-zero) vector \begin{align}
\xi_{c(t)} :=\dfrac{\partial}{\partial x}\bigg|_{c(t)} + c_2'(t) \dfrac{\partial}{\partial y}\bigg|_{c(t)} \in T_{c(t)} \Bbb{R}^2
\end{align} (i.e. $c(t) = (t, c_2(t))$ implies $c'(t) = (1, c_2'(t))$, so the tangent space is just the
span of this vector). So, we have to show that for all $t \in [a,b]$ and for all $\zeta_{c(t)} \in T_{c(t)} \left( \text{image}(c)\right)$ , \begin{align}
dy_{c(t)}(\zeta_{c(t)}) &= f(c(t)) \cdot dx_{c(t)}(\zeta_{c(t)})
\end{align} But notice that since the tangent space to the image is one-dimensional, it suffices to verify equality when evaluated on the basis vector $\xi_{c(t)}$ defined above; i.e. it's enough to prove \begin{align}
dy_{c(t)}(\xi_{c(t)}) &= f(c(t)) \cdot dx_{c(t)}(\xi_{c(t)}).
\end{align} This is straightforward: \begin{align}
dy_{c(t)}(\xi_{c(t)}) &= dy_{c(t)}\left(
\dfrac{\partial}{\partial x}\bigg|_{c(t)} + c_2'(t) \dfrac{\partial}{\partial y}\bigg|_{c(t)}\right) \\
&= c_2'(t) \\
&= f(c(t)) \tag{$c$ solves the ODE} \\
&= f(c(t)) \cdot 1 \\
&= f(c(t))\cdot dx_{c(t)}(\xi_{c(t)}).
\end{align} So, this completes the proof. Note that another way of stating the equality is that $c^*(dy) = c^*(f \, dx)$; i.e. when you pull back the two $1$-forms on $\Bbb{R}^2$ via the curve $c$, you get two $1$-forms, but now defined on $[a,b]$; and it is these forms that are equal.
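To spell out that last remark in coordinates (a short check using only the definitions already given above, with $t$ the coordinate on $[a,b]$): $$c^*(dy) = d(y\circ c) = c_2'(t)\,dt, \qquad c^*(f\,dx) = (f\circ c)\,d(x\circ c) = f(c(t))\,dt,$$ and these two pulled-back forms on $[a,b]$ coincide precisely because $c_2'(t) = f(c(t))$, i.e. because $c$ solves the ODE.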
|
{
"source": [
"https://math.stackexchange.com/questions/3598325",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/123308/"
]
}
|
3,598,326 |
I can apply the needed theorem to get me to the fact that it only has one Sylow $3$ -Subgroup but I donʻt know how to find exactly what it is. I have the multiplication table computed so help in regards using the table specifically would be beneficial.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3598326",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/668300/"
]
}
|
3,598,343 |
Prove that if an injective map $f:A\longrightarrow B$ exists, there is also a surjective map from $A$ onto a subset of $B$ . Say we are given an injective map $f: S \longrightarrow N$ . It is easy to see that $f$ is surjective onto some subset of $N$ . Does it even need proving, or is it enough to say, 'it simply follows'?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3598343",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/726039/"
]
}
|
3,599,410 |
The usual definition of a category states: a category $\mathbf{C}$ consists of: A collection $\text{ob}(\mathbf{C})$ of objects A collection $\text{arr}(\mathbf{C})$ of arrows Some rules on the behaviour of these two types of objects Leaving aside what collection here means, I realized that I have never seen a clear definition of what an arrow in a category is.
Surely, in the typical categories like $\textbf{Set}$ , $\textbf{Top}$ or $\textbf{Grp}$ arrows are functions that ... But there are categories whose arrows are not "functions". If you have a poset $(P,\le)$ then you can get a category whose objects are elements of $P$ and such that there is an arrow $x\to y$ if and only if $x\le y$ in $P$ . Okay, but what kind of "entity" is the arrow there? Is there a precise, rigorous definition, or do we just have to accept that there are objects and arrows and assume that they follow the required rules, just like we assume there are some things called "numbers" that follow the rules of arithmetic? I hope I made myself clear. Thanks!
|
There are already many excellent answers, but I want to add another perspective, already partly found in other answers, but I hope distinct enough to stand on its own. I like to explain by analogy. Consider the question, "What is a vector?" What is a vector? Well, you might get any of the following informal definitions as a response: (a) a list of numbers, (b) a quantity with magnitude and direction, (c) a quantity that transforms like a vector under a change of coordinates. (I think I might know some physicists who would take issue with me calling (c) informal, but oh well). And you might then ask, well ok, but what's a formal definition of a vector? Let's think of some examples of vectors. The elements of $\Bbb{R}^3$, the elements of $\Bbb{R}[x]$, continuous functions from $X$ to $\Bbb{R}$, where $X$ is a topological space. These seem like fairly different objects, but the
common factor here is that a vector is simply an element of a set $V$ with a specified vector space structure. I.e., the formal definition of vector is simply an element of a vector space. Why is this useful as a definition? Well, all of the properties of vectors are already encoded in the definition of vector space. So if I tell you that $v,w$ are vectors in $V$ , and $r\in\Bbb{R}$ is a scalar, then you know that $v+w$ is also a vector, and that $rv$ is a vector, and that $r(v+w)=rv+rw$ . All the properties of a vector that we might find interesting are encoded in the vector space axioms. Note also that part of this means that it's meaningless to say $v$ is a vector on its own. It's only meaningful to say that $v$ is a vector of some vector space $V$ . This is good, because as an element $v$ might belong to many different vector spaces with different structures, but depending on the ambient vector space structure, $v$ might behave completely differently. Bringing it back to arrows Similarly, if I say $f:X\to Y$ is an arrow of a category $\mathcal{C}$ . The rigorous definition of arrow here is simply that $f$ belongs to the collection of arrows $\operatorname{Arr}(\mathcal{C})$ , and that the domain of $f$ is $X$ and the codomain of $f$ is $Y$ . All of the other interesting properties of arrows (for example that I could compose $f:X\to Y$ with an arrow $g:Y\to Z$ to get an arrow $g\circ f:X\to Z$ ) are already encoded in the axioms of the category $\mathcal{C}$ , and there's no need to say anything further to define arrows. Edit: Since comments are not permanent, I just want to edit in the link from Ethan Bolker's comment, to an excellent answer with a similar viewpoint to this one in reply to a similar (in spirit) question about "what actually is a polynomial?" The second paragraph in particular really captures what I wanted to say in my answer, (paraphrasing Ethan's answer) what really matters isn't what something actually is , but rather how it behaves.
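To make the poset example from the question completely concrete, here is a minimal Python sketch (the names `identity` and `compose` are mine, purely for illustration): in the category built from a poset, an arrow $x\to y$ can literally be taken to be the pair $(x,y)$ with $x\le y$ (here, with divisibility as the order); composition and identities are then forced, and the category axioms hold because the order is reflexive and transitive.

```python
from itertools import product

# Poset: divisibility on {1, 2, 3, 6}. Objects are the elements;
# an arrow x -> y is, by definition, just the pair (x, y) with x | y.
objects = [1, 2, 3, 6]
arrows = [(x, y) for x, y in product(objects, repeat=2) if y % x == 0]

def identity(x):
    return (x, x)  # the identity arrow at x

def compose(g, f):
    """g after f; requires cod(f) == dom(g), and returns another pair."""
    (x, y1), (y2, z) = f, g
    assert y1 == y2, "arrows are not composable"
    return (x, z)  # a legitimate arrow: x | y and y | z imply x | z

# Spot-check the category axioms on this instance:
f, g = (1, 2), (2, 6)
assert compose(g, f) == (1, 6)
assert compose(f, identity(1)) == f and compose(identity(2), f) == f
print(sorted(arrows))
```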
|
{
"source": [
"https://math.stackexchange.com/questions/3599410",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/365761/"
]
}
|
3,599,893 |
I had this idea to build a model of Earth in Minecraft. In this game, everything is built on a 2D plane of infinite length and width. But, I wanted to make a world such that someone exploring it could think that they could possibly be walking on a very large sphere. (Stretching or shrinking of different places is OK.) What I first thought about doing was building a finite rectangular model of the world, like a Mercator projection, and tessellating this model infinitely throughout the plane. Someone starting in the US could swim eastwards in a straight line across the Atlantic, walk across Africa and Asia, continue through the Pacific and return to the US. This would certainly create a sense of 3D-ness. However, if you travel north from the North Pole, you would wind up immediately at the South Pole. That wouldn't be right. After thinking about it, I hypothesized that an explorer of this model might conclude that they were walking on a donut-shaped world, since that would be the shape of a map where the left was looped around to the right (making a cylinder), and then the top was looped to the bottom. For some reason, by simply tessellating the map, I was creating a hole in the world. Anyway, to solve this issue, I thought about where one ends up after travelling north from various parts of the world. Going north from Canada, and continuing to go in that direction, you end up in Russia and you face south. The opposite is true as well: going north from Russia, you end up in Canada pointing south. Thus, I started to modify the tessellation to properly connect opposing parts of Earth at the poles. When going north of a map of Earth, the next (duplicate) map would have to be rotated 180 degrees to reflect the fact that one is facing south after traversing the North Pole. This was OK. However, to properly connect everything, the map also had to be flipped about the vertical axis. On a globe, if Alice starts east of Bob and they together walk north and cross the North Pole, Alice still remains east of Bob. So, going north from a map, the next map must be flipped to preserve the east/west directions that would have been otherwise rotated into the wrong direction. Now the situation is hopeless. After an explorer walks across the North Pole in this Minecraft world, he finds himself in a mirrored world. If the world were completely flat, it would feel as if walking north will take you from the outside of a 3D object to its inside. Although I now think that it is impossible to trick an explorer walking on an infinite plane into thinking he is on a sphere-like world, a part of me remains unconvinced. Is it really impossible? Also, how come a naive tessellation introduces a hole? And finally, if an explorer were to roam the world where crossing a pole flips everything, what would he conclude the shape of the world to be?
|
What you want to do is not possible because there is no flat sphere. That is, there is no way to put a metric on a topological sphere such that the curvature is everywhere zero. This can be shown using the Gauss-Bonnet theorem : the global curvature (by which I mean the integral of the curvature on the whole sphere) is equal to ( $2\pi$ times) the Euler characteristic, which for a sphere is $2$ (and not $0$ ). On the other hand, it is very well-known to gamers that there are flat tori: you just teleport on the other side when you hit a wall. This is illustrated by the fact that the Euler characteristic of a torus is $0$ , so there can be a flat metric on a torus (and indeed you can define one by expressing the torus as a quotient of the plane).
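For reference, the computation behind this argument is short. Gauss-Bonnet says that for a closed surface $S$ with Gaussian curvature $K$, $$\int_S K\,dA = 2\pi\,\chi(S),\qquad \chi(\text{sphere})=2,\quad \chi(\text{torus})=0,$$ so a flat metric ($K\equiv 0$) would force $\chi(S)=0$: impossible for a sphere, but allowed (and actually realized) for a torus.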
|
{
"source": [
"https://math.stackexchange.com/questions/3599893",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/248182/"
]
}
|
3,599,899 |
Let $f(x) := \ln(1+e^{-x})$ and $$u_n := \sum_{k=1}^n \frac{(-1)^{k-1}}{k^2}.$$ Given that for $x>0$ $$ \tag{1} \sum_{k=1}^n \frac{(-1)^{k-1}}{k}e^{-kx} - \frac{e^{-(n+1)x}}{n+1} \le f(x) \le \sum_{k=1}^n \frac{(-1)^{k-1}}{k}e^{-kx} + \frac{e^{-(n+1)x}}{n+1} $$ it is asked to prove that $$u_n - \dfrac{1}{(n+1)^2} \le \int_0^n f(x) dx \le u_n + \frac{1}{(n+1)^2}.$$ I tried to integrate the first double inequality from $0$ to $n$ but I get extra terms of unknown sign. For the left side, for instance, I get $$ \tag{2} \left( u_n - \dfrac{1}{(n+1)^2} \right) + \left(\dfrac{e^{-(n+1)n}}{(n+1)^2} - \displaystyle\sum_{k=1}^n \dfrac{(-1)^{k-1}}{k^2} e^{-nk} \right) \le \displaystyle\int_0^n f(x) dx. $$ I tried to prove that the second term is positive (a sufficient condition) but in vain. Thanks for any advice.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3599899",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/649402/"
]
}
|
3,599,901 |
Let $(z_n)$ be a sequence of complex numbers with $$\lim_{n\rightarrow \infty}|z_n|=0.$$ This implies that $$|\lim_{n\rightarrow \infty} z_n|=0,$$ and therefore $$\lim_{n\rightarrow \infty}z_n=0,$$ so $(z_n)$ converges to $0=0+0i$. Conversely, suppose $(z_n)$ converges to $0$, i.e., $$\lim_{n\rightarrow \infty}z_n=0.$$ Does this imply $$\lim_{n\rightarrow \infty}|z_n|=0\,?$$
|
|
{
"source": [
"https://math.stackexchange.com/questions/3599901",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/561195/"
]
}
|
3,602,374 |
In many areas of math one can talk about types of closure: Subsets of sets with binary operations can be closed under that binary operation, subsets of topological spaces can be closed, sets of ordinals can be closed. There seems to be a common thread between many of these: the intersection of these structures is always a structure of the same kind. For example, if $A,B\subseteq(S,*)$ are closed under the binary operation $*$ , then so is their intersection. Often we can even say more: the arbitrary intersection of subgroups is a subgroup, etc. The intersection of $<\kappa$ club subsets of $\kappa$ is club, although the "ub" is unimportant - the intersection can probably be arbitrary if we only require closure. The intersection of $\sigma$ -algebras is a $\sigma$ -algebra, although I think this is just a consequence of the binary operation example. Filters have the finite intersection property. The intersection of closed sets in a topological space is closed (one might simply see this as a consequence of De Morgan, but I think it is similar to the other examples when viewing closed sets as those which contain all their limit points as opposed to complements of open sets). Many examples of these kinds are very, very easy to prove, often following straight from the definitions. So much so that I might hesitate to make any comment on them in the first place, were it not for my inability to formally identify what exactly it is in all these structures that forces them to have this intersection-closure property. And maybe it's nothing at all, and I'm just cherry picking (after all, several structures aren't closed under intersection, like open sets, cardinality, etc). So my question: Is there a generalized "closedness" property which encompasses these examples as well as several others? Maybe the property is more general than intersection of sets? I gave several set-theoretic examples but that is only due to my mathematical exposure, and I'm not just asking about those in set theory. Maybe there are even equivalent notions of "intersection" and "closure" outside of a set-theoretic context. Edit: As user yoyostein mentioned, maybe there is a categorical perspective on this. At the risk of exposing my severe lack of expertise: my thoughts are to define a "categorical inclusion morphism" generalizing the inclusion morphism from a subset to a set. Then fixing $A,B$ we take the category whose objects $(f_{1},f_{2},X)$ consist of these inclusion maps $f_{1}:X\rightarrow A$ , $f_{2}:X\rightarrow B$ and whose morphisms are the usual commutative diagrams. Then $A\cap B$ would be final in this category, and so these "closed" structures would really be those for which this intersection construction exists in their respective categories. Any chance this is going anywhere? In the answer here user Stahl gives a categorical explanation for why this is the case for many algebraic structures. Unfortunately, I'm not familiar enough with category theory to tell if what Stahl has written generalizes to "less algebraically-motivated" structures like topological spaces or club sets (actually, I think those are topological), but I'd guess in many cases the properties of the categories he's mentioning hold elsewhere like in $\mathsf{Top}$ .
|
Many of these examples can be generalized by the notion of closure. Say in your universe $U$ you have a mapping $\operatorname{cl}: \mathcal{P}(U) \rightarrow \mathcal{P}(U)$ with the properties that i) $A \subseteq \operatorname{cl}(A)$ for all $A$ ii) if $A \subseteq B$ then $\operatorname{cl}(A) \subseteq \operatorname{cl}(B)$ (monotonicity). Then define the "closed" sets $S$ to be those for which $\operatorname{cl}(S) = S$ . Usually $\operatorname{cl}(S)$ is thought of as the object 'generated' by $S$ . For example, other than the usual closure from topology, $\operatorname{cl}$ could be the span of vectors, or the subgroup/subring/submodule/ $\sigma$ -subalgebra etc. generated by $S$ ; or the connected component $S$ belongs to, or the convex hull of $S$ . We want to be able to combine elements of $S$ in various ways, and by taking $\operatorname{cl}(S)$ we add in all the extra elements of $U$ needed to do whatever it is we need, but no more. I claim that if $A,B$ are closed then $A \cap B$ is closed. Let $A,B$ be closed; then $$\operatorname{cl}(A \cap B) \subseteq \operatorname{cl}(A)$$ $$\operatorname{cl}(A \cap B) \subseteq \operatorname{cl}(B)$$ by (ii), implying $\operatorname{cl}(A \cap B) \subseteq \operatorname{cl}(A) \cap \operatorname{cl}(B)$ ; by definition, $\operatorname{cl}(A) = A$ and $\operatorname{cl}(B) = B$ , so $\operatorname{cl}(A \cap B) \subseteq A \cap B$ . Furthermore, $$\operatorname{cl}(A \cap B) \supseteq A \cap B$$ by (i); so $$\operatorname{cl}(A \cap B) = A \cap B.$$ So $A \cap B$ is closed. And the same proof works for showing intersections of arbitrary families of closed sets are closed. Conversely, if we have a family of 'closed' objects $\mathcal{F} \subseteq \mathcal{P}(U)$ that is closed under intersection, then we can define $\operatorname{cl}(A) = \bigcap \{S \in \mathcal{F} \mid S \supseteq A\}$ (reading the empty intersection as $U$). In this case, $\operatorname{cl}$ clearly obeys (i) and (ii), and $\mathcal{F} = \{A \mid \operatorname{cl}(A) = A\}$ .
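Here is a small Python sketch of the converse construction (the universe $\Bbb Z_{12}$ and the helper names are my own choices, purely for illustration): take the intersection-closed family of all subgroups of $\Bbb Z_{12}$ and recover $\operatorname{cl}$ from it by intersecting supersets.

```python
from math import gcd
from functools import reduce

# Toy universe: Z_12 under addition mod 12. Every subgroup is cyclic,
# so listing the subgroups generated by 1..12 gives the whole family,
# and this family is closed under intersection.
U = frozenset(range(12))
family = {frozenset(range(0, 12, gcd(d, 12))) for d in range(1, 13)}

def cl(A):
    """cl(A) = intersection of all closed sets (here: subgroups) containing A,
    reading the empty intersection as U."""
    return reduce(frozenset.intersection, (S for S in family if S >= A), U)

print(sorted(cl({4})))         # [0, 4, 8]: the subgroup generated by 4
print(sorted(cl({2, 3})))      # gcd(2, 3) = 1, so all of Z_12
assert cl(cl({4})) == cl({4})  # the closed sets are exactly the fixed points
```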
|
{
"source": [
"https://math.stackexchange.com/questions/3602374",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/614400/"
]
}
|
3,609,101 |
It’s well known that the geometric mean of a set of positive numbers is less sensitive to outliers than the arithmetic mean. It’s easy to see this by example, but is there a deeper theoretical reason for this? How would I go about “proving” that this is true? Would it make sense to compare the variances of the GM and AM of a sequence of random variables?
|
The geometric mean is the exponential of the arithmetic mean of a log-transformed sample. In particular, $$\log\left( \biggl(\prod_{i=1}^n x_i\biggr)^{\!1/n}\right) = \frac{1}{n} \sum_{i=1}^n \log x_i,$$ for $x_1, \ldots, x_n > 0$ . So this should provide some intuition as to why the geometric mean is insensitive to right outliers, because the logarithm is a very slowly increasing function for $x > 1$ . But what about when $0 < x < 1$ ? Doesn't the steepness of the logarithm in this interval suggest that the geometric mean is sensitive to very small positive values--i.e., left outliers? Indeed this is true. If your sample is $(0.001, 5, 10, 15),$ then your geometric mean is $0.930605$ and your arithmetic mean is $7.50025$ . But if you replace $0.001$ with $0.000001$ , this barely changes the arithmetic mean, but your geometric mean becomes $0.165488$ . So the notion that the geometric mean is insensitive to outliers is not entirely precise.
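A quick Python sketch reproducing the numbers above (the first two samples are the ones from the text; the third, with a right outlier, is my own addition for contrast): the arithmetic mean chases the right outlier while the geometric mean barely moves, and vice versa for the left outlier.

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the arithmetic mean of the logs, exactly as in the identity above
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

samples = [
    (0.001, 5, 10, 15),      # the example from the text
    (0.000001, 5, 10, 15),   # push the left outlier further out
    (0.001, 5, 10, 1500),    # a right outlier instead
]
for xs in samples:
    print(xs, round(geometric_mean(xs), 6), round(arithmetic_mean(xs), 5))
```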
|
{
"source": [
"https://math.stackexchange.com/questions/3609101",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/208270/"
]
}
|
3,609,574 |
You are at point $C$ inside square $ABCD$ in the Cartesian plane, in which $A=(0,0), B=(0,1), C=(1,1), D=(1,0)$ . You want to get to vertex $A$ . However, your “speed” in $\frac{\text{units}}{\text{sec}}$ is everywhere equal to your y-coordinate. Can you get from $C$ to $A$ in finite time? If you can, what is the minimal time required for the journey? A friend asked me this as a challenge recently out of what I think was a calculus textbook. I haven’t found any concrete way of resolving the question one way or another (or even modeling it properly), but most of my intuition says that the voyage should not be possible in finite time. Specifically, it seems to me that this problem is somehow related to the fact that the harmonic series, as well as the integral of the harmonic series, diverges. On the other hand, perhaps this problem is like Zeno’s paradoxes - an infinite number of decreasing steps adding up to something finite. On solving the problem itself, I know that one can reduce the question of whether it can be done in finite time to whether going straight down from $B$ to $A$ can be done in finite time. On minimizing the time taken (if it exists), I have no idea how to test the infinitely many possible paths from $(1,1)$ to $(0,0)$ for their “speed”s, although I conjecture that paths that are nowhere concave up, such as $y=\sqrt{x}$ , should always be faster.
|
Another answer: To get from height $y$ to height $\frac y2$ , you need to travel a distance of at least $\frac y2$ at speeds at most $y$ . This takes at least half a second. So no matter how close you are to $0$ , you still have at least half a second of travel left.
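Summing this over the dyadic levels makes the divergence explicit: getting from $y=1$ down to $y=0$ requires passing through every height $2^{-k}$, so the total time $T$ satisfies $$T \;\ge\; \sum_{k=0}^{\infty}\bigl(\text{time from } y=2^{-k} \text{ to } y=2^{-k-1}\bigr)\;\ge\;\sum_{k=0}^{\infty}\frac{2^{-k}/2}{2^{-k}}\;=\;\sum_{k=0}^{\infty}\frac12\;=\;\infty,$$ since on each stage you must cover a distance of at least $2^{-k-1}$ at speed below $2^{-k}$.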
|
{
"source": [
"https://math.stackexchange.com/questions/3609574",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/664819/"
]
}
|
3,611,969 |
I am looking for two convex polygons $P,Q \subset \Bbb R^2$ such that $P$ does not tile the plane, $Q$ does not tile the plane, but if we allowed to use $P,Q$ together, then we can tile the plane. Here I do not require the tilings to be lattice tilings, or even periodic tilings. I allow tilings by congruent copies of $P$ and/or of $Q$ , i.e. I am allowing rotations and reflections! I haven't found any example, and maybe there could be none.
|
There is a tiling of the plane made from regular heptagons and irregular pentagons. We know that regular heptagons cannot tile the plane. The irregular pentagon has four equal sides and one shorter side. A tiling of the plane by these pentagons would require two pentagons to share the short side (as they do in the image), but the resulting angle cannot then be tiled by other pentagons, so this irregular pentagon does not tile the plane. Image via: https://twitter.com/gregegansf/status/1003181379469758464 I think the reference is to this paper: https://erikdemaine.org/papers/Sliceform_Symmetry/paper.pdf
|
{
"source": [
"https://math.stackexchange.com/questions/3611969",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/309286/"
]
}
|
3,616,290 |
If $\tau_{1}$ and $\tau_{2}$ are two topologies on a set $\Omega$ such that $\tau_{1}$ is weaker than $\tau_{2}$ (i.e. $\tau_{1}\subset\tau_{2}$ ) and $\tau_{1}$ is metrizable, is it then true that $\tau_{2}$ is also metrizable? My guess is that it is not true, since we do not know what the sets in $\tau_{2}$ look like, let alone whether they can contain 'open metric balls' or not. But maybe it is possible to adapt the metric so that the sets in $\tau_{2}$ can contain 'open metric balls'. Also, it may be the case that the converse is easier to work with: If $\tau_{2}$ is not metrizable, then $\tau_{1}$ is not metrizable. I find it very hard to think of any (counter)examples. Any help or hints would be greatly appreciated!
|
No, take $X=\Bbb R$ and $\tau_1$ the usual topology, clearly metrisable, and $\tau_2$ the Sorgenfrey (aka "lower limit") topology (generated by the sets of the form $[a,b)$ ). Then $\tau_1 \subsetneq \tau_2$ but $\tau_2$ is not metrisable, for several reasons, the most accessible of which (and the reason it's covered in so many topology textbooks and papers) are: its square is not normal, and it is separable but does not have a countable base. See Wikipedia for more info. Another example is $\mathbb{R}^\omega$ in the (metrisable) product topology and the finer box topology (which is not even first countable). Also Munkres' $\Bbb R_K$ space, which is $\Bbb R$ in the usual topology but with one extra closed set $K=\{\frac{1}{n}: n \in \Bbb N^+\}$ and is not even regular (while all metrisable spaces are normal and perfectly normal), is another elementary example (see 2nd edition, p. 197/198).
|
{
"source": [
"https://math.stackexchange.com/questions/3616290",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/701529/"
]
}
|
3,619,098 |
Maybe the question is very trivial in a sense. So, it doesn't work for anyone. A few years ago, when I was a seventh-grade student, I had found a quadratic formula for myself. Unfortunately, I didn't have the chance to show it to my teacher at that time and later I saw that it was "trivial". I saw this formula again by chance while mixing my old notebooks. I wonder if this simple formula is used somewhere. The original method Let's remember the original method first: $$\color{#c00}{ax^2+bx+c=0, ~~\text {}~a\neq 0} \\ 4a^2 x^2+4abx+4ac =0 \\ 4a^2 x^2+4abx=-4ac \\ 4a^2 x^2+4abx+b^2=b^2-4ac \\ \left(2ax+b \right)^2 =b^2-4ac \\ 2ax+b= \pm \sqrt{b^2-4ac} \\ x_{1,2}= \dfrac{\pm\sqrt{b^2-4ac} -b}{2a} \\ \bbox[5px,border:2px solid #C0A000] {x_{1,2}=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}}$$ In fact, the "meat" of this method is as follows: $$\color{#c00}{{ax^2+bx+c=0, ~~\text {}~a\neq 0}}\\x^2+\dfrac{b}{a}x+ \dfrac{c}{a}=0 \\\left (x+ \dfrac{b}{2a} \right)^2- \left (\dfrac{b}{2a} \right)^2+\dfrac{c}{a}=0 \\ \left (x+ \dfrac{b}{2a} \right)^2=\dfrac{b^2}{4a^2}-\dfrac {c}{a} \\ \left (x+ \dfrac{b}{2a} \right)^2=\dfrac{b^2-4ac}{4a^2} \\ x+ \dfrac{b}{2a}= \dfrac{\pm\sqrt{b^2-4ac}}{2a} \\ \bbox[5px,border:2px solid #C0A000] {x_{1,2}=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}}$$ Construction of the general formula Now, we know that if one of the roots for $ax^2+bx+c=0$ is $x = 0,$ then our equation is equivalent to $ax^2 + bx = 0.$ No special formula is required to solve the last equation. In this sense, I am setting off by accepting that $x \neq0.$ $$\color{#c00}{ax^2+bx+c=0, ~~\text {}~a\neq 0} \\ a+\dfrac {b}{x} +\dfrac{c}{x^2}=0 \\ \dfrac{c}{x^2}+\dfrac {b}{x} +a=0 \\ \dfrac{4c^2}{x^2}+\dfrac{4bc}{x}+4ac=0 \\ \dfrac{4c^2}{x^2}+\dfrac{4bc}{x}=-4ac \\ \dfrac{4c^2}{x^2}+\dfrac{4bc}{x}+b^2=b^2-4ac\\ \left( \dfrac {2c}{x}+b \right)^2=b^2-4ac \\ \dfrac {2c}{x}+b= \pm\sqrt{b^2-4ac} \\ \dfrac {2c}{x}=-b\pm\sqrt{b^2-4ac} \\ \color{#c00}{\bbox[5px,border:2px solid #C0A000]{x_{1,2}= \dfrac{2c}{-b\pm\sqrt{b^2-4ac}}}}$$ Proof of the general formula Let's rewrite the well-known general formula as follows: $$\dfrac{-b\color{red}{\pm}\sqrt{b^2-4ac}}{2a}=\dfrac{-b\color{red}{\mp}\sqrt{b^2-4ac}}{2a}$$ If we accept $c\neq0$ , then we have: $
\dfrac{2c}{-b\color{blue}{\pm}\sqrt{b^2-4ac}}=\dfrac{-b\color{red}{\mp}\sqrt{b^2-4ac}}{2a}\\
\begin{align}
\Longleftrightarrow \left(-b\color{blue}{\pm}\sqrt{b^2-4ac}\right) \times \left(-b\color{red}{\mp}\sqrt{b^2-4ac}\right) &=4ac\\
\Longleftrightarrow -\left(b\color{blue}{\mp}\sqrt{b^2-4ac}\right) \times \left( -\left(b\color{red}{\pm}\sqrt{b^2-4ac}\right)\right)&=4ac\\
\Longleftrightarrow \left(b\color{blue}{\mp}\sqrt{b^2-4ac}\right) \times \left(b\color{red}{\pm}\sqrt{b^2-4ac}\right)&=4ac\\
\Longleftrightarrow b^2-\left(b^2-4ac\right)&=4ac\\
\Longleftrightarrow 4ac&=4ac.
\end{align}$ Insufficient point of the formula Since we have accepted $x \neq 0$ before, this formula cannot work completely for $c = 0.$ If $c=0$ , then we have: $x_1=\dfrac {0}{-2b}=0,$ which implies that one of the roots is correct; $x_2=\dfrac {0}{0}=\text{undefined},$ which implies that the second root is not recovered. Curious points of the formula These are interesting points for an untutored person like me. On the other hand, they are trivial. If the $\Delta$ $\left(\text{Discriminant}\right)$ is zero, then there is exactly one real root, sometimes called a repeated or double root. Writing $\Delta=b^2-4ac$ (or $D=b^2-4ac$), if $D=0$ then we have: From the formula $\color{blue}{x_{1,2}=\dfrac{-b\pm\sqrt{D}}{2a}}$ , $$\color{blue}{x}=x_1=x_2=\dfrac{-b}{2a}=\color{blue}{-\dfrac{b}{2a}}$$ From the formula $\color{#c00}{x_{1,2}= \dfrac{2c}{-b\pm\sqrt{D}}}$ , $$\color{#c00}{x}=x_1=x_2=\dfrac{-2c}{b}=\color{#c00}{-\dfrac{2c}{b}}$$ and both are equal: $$\begin{align} \color{blue}{x}=x_1=x_2=\color{blue}{-\dfrac{b}{2a}} \color{black}{=} \color{#c00}{-\dfrac{2c}{b}}\Longrightarrow b^2=4ac \Longrightarrow b^2-4ac=0.\end{align}$$ The original formula does not work for $a = 0$ . However, the alternative formula does work when $a = 0$ . The important point is that we should be careful not to make the denominator zero. In other words: If $a=0$ and $b>0$ then we write: $$x=\dfrac{2c}{-b-\sqrt{b^2}}=-\dfrac {c}{b}$$ If $a=0$ and $b<0$ then we write: $$x=\dfrac{2c}{-b+\sqrt{b^2}}=-\dfrac {c}{b}$$ My question Maybe in some special cases, can this formula be more useful than the usual one? (I assume the formula I found here is correct.)
|
This is a very useful formula for when you want to accurately find the roots of a quadratic equation in which $a$ might be very small using finite precision arithmetic (e.g. on a computer). It's something I have used occasionally in programming. Sometimes it is called the "Citardauq formula" since it's sort of the quadratic formula, but backwards. When $a$ is really small and $b$ is positive, the formula $$\frac{-b +\sqrt{b^2 - 4ac}}{2a}$$ might involve adding $-b$ and $\sqrt{b^2-4ac}$ , which is about $b$ - meaning that most of the significant figures cancel with each other - and this causes a loss of significance in a floating point calculation (bad). Worse, then you go and divide this small result by $2a$ , which means that if you were using a fixed point calculation, you've now suffered a loss of significance - either way, you could end up keeping track of lots of digits in the intermediate values and still get an inaccurate answer. Plus, this gives the impression that the exact value of $a$ matters a ton since we divided by it, but if $b$ is really large and $a$ really small, the root of the quadratic closer to $0$ might not depend very much on $a$ - the quadratic would basically be linear near $0$ - despite what this formula suggests. (Of course, this formula accurately depicts the other root: if $a$ is small, its exact value does massively influence where the farther root is.) On the other hand, the equivalent value $$\frac{2c}{-b - \sqrt{b^2 - 4ac}}$$ likely suffers from neither problem: the value of $\sqrt{b^2-4ac}$ is not cancelling with $-b$ but rather adding to it, which does not cause an undue loss of precision - and we are probably not dividing two small numbers, unless $c$ and $b$ were both small. Note that you can mix and match these formulas, noting that the $+$ case of one is the $-$ case of the other for the $\pm$ term. This form also makes clear what happens in the limiting case where $a$ goes to $0$ - it just decays to $\frac{c}{-b}$ - and sometimes the root of a quadratic that you care about is mostly determined by this linear term anyways (e.g. if you wanted to know when a ball thrown quickly at the ceiling would hit it - the other formula references this time off of when the ball would reach its apex, which may be long after it would reach the ceiling. This formula respects that the answer is just "a bit longer than if there were no gravity"). As a result of numerical stability, it tends to not be unreasonable to list the roots of a quadratic with $b>0$ as: $$\frac{2c}{-b - \sqrt{b^2 - 4ac}} \text{ and }\frac{-b -\sqrt{b^2 - 4ac}}{2a}$$ since these forms avoid the loss of precision that happens when adding a term near $b$ to $-b$ . For negative $b$ , you would want to flip the sign of the added radical to avoid the cancellation. This is also sort of cute because it makes the fact that the product of the roots is $\frac{c}a$ more obvious, whereas the usual formula emphasizes that their sum is $\frac{-b}a$ . It's worthy of note that you can also derive this formula by starting with $$ax^2+bx+c=0$$ and dividing by $x^2$ to get $$a+b(1/x)+c(1/x)^2 = 0,$$ which is a quadratic in $1/x$ . Solving for $1/x$ using the usual formula and then reciprocating gives the formula you list. Generally, if you reverse the order of the coefficients in a polynomial, you reciprocate its roots, which is an often useful abstract fact.
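A quick numerical illustration of that cancellation (a toy Python sketch with hypothetical function names, assuming real roots and $a,b\neq 0$):

```python
import math

def roots_naive(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

def roots_stable(a, b, c):
    # choose the sign of the radical that avoids cancelling with -b,
    # then recover the other root via the "Citardauq" form c/q
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2
    return (q / a, c / q)

a, b, c = 1e-12, 1.0, -1.0  # nearly linear: the small root should be ~ -c/b = 1
print(roots_naive(a, b, c))   # small root polluted by cancellation in -b + d
print(roots_stable(a, b, c))  # small root accurate to ~machine precision
```

On IEEE doubles the naive small root comes out wrong around the fifth significant digit here, while the Citardauq-based one is accurate essentially to machine precision.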
|
{
"source": [
"https://math.stackexchange.com/questions/3619098",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/460967/"
]
}
|
3,619,102 |
Does this series diverge? I know it doesn't make sense to split it into two sums, since both $\sum \frac{1}{n}$ and $\sum \frac{1}{\sqrt{n}}$ are divergent, and I don't know how to test the difference of two divergent series. $$\sum_{n=1}^\infty \left(\frac{1}{n}-\frac{1}{\sqrt{n}}\right)$$ Do I combine the terms into one fraction and go from there, like this? $$\sum_{n=1}^\infty \frac{\sqrt{n}-n}{n^{1.5}}$$
|
|
{
"source": [
"https://math.stackexchange.com/questions/3619102",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/762122/"
]
}
|
3,620,230 |
Yesterday I posted this conjecture, but then deleted it thinking it was false. (It turns out Python doesn't denote $a^b$ by a^b , but rather by a**b .) Conjecture: Denote by $G$ Catalan's constant ; then $$G=\cfrac{1}{1+\cfrac{1^4}{8+\cfrac{3^4}{16+\cfrac{5^4}{24+\cfrac{7^4}{32+\cfrac{9^4}{40+\ddots}}}}}}$$ Given the connection $G$ has with the number $8$ shown here , as well as this continued fraction reaching nearly the first five decimal places of $G$ after around $200$ iterations (vinculums), I am confident this is true. However, I do not know how to code a continued fraction on Python or Pari/GP (a friend of mine gave it a go, but also to no avail) up to an iteration $n$ without having to write it out manually, which is really tedious. Here is some Python code from a friend, computing this fraction up to $12$ iterations to be $\approx 0.9151$ , reaching the first three decimal places of $G$ . The only 'local' behaviour I can state about continued fractions is that most of them are convergent, and that they all converge via oscillation at each iteration. But, more importantly, I'd like to know: if this is true, can it be shown from here that $G$ is irrational (or even transcendental, if you are willing)? I am aware this is an unsolved problem, which was what inspired me to write $G$ in another closed form. Any thoughts? Thank you in advance.
|
The accepted answer is misleading. The continued fraction may well be found in that reference, but this is not a result from 2002, but rather a trivial consequence of Euler's continued fraction formula from 1748. You should take a look at the wikipedia page: https://en.wikipedia.org/wiki/Euler%27s_continued_fraction_formula Euler's continued fraction formula $$a_0 + a_0 a_1 + \ldots + a_0 \cdots a_n =
\frac{a_0}{\displaystyle{1 - \frac{a_1}{\displaystyle{1 + a_1 - \frac{a_2}{\ldots (1 + a_{n-1}) - \frac{a_n}{1 + a_n}}}}}}$$ Now exactly as in the worked example in the wikipedia page for $\tan^{-1}(x)$ , you get the completely formal identity: $$\sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}} {(2n+1)^2}
= x + x \left(\frac{-x^2}{3^2}\right) + x \left(\frac{-x^2}{3^2}\right) \left(\frac{-3^2 x^2}{5^2}\right)
+ x \left(\frac{-x^2}{3^2}\right) \left(\frac{-3^2 x^2}{5^2}\right)\left(\frac{-5^2 x^2}{7^2}\right)+ \ldots$$ $$=\frac{x}{\displaystyle{1 + \frac{x^2}{\displaystyle{9 - x^2 + \frac{(9x)^2}{25 - 9 x^2 + \displaystyle{ \frac{(25 x)^2}{49 - 25 x^2 + \ldots }}}}}}}$$ The case $x=1$ is your example. You can plug in $x=i$ if you want to get a continued fraction for $\pi^2/8$ . There are literally thousands of completely trivial continued fractions one can create in this way; take any infinite sum and just formally write out the corresponding Euler continued fraction, clearing denominators in the obvious way. None of those should be considered anything more than a corollary of Euler's result (given the evaluation of the initial sum). Of course, in this case, the evaluation of the initial sum is that it is $G$ by definition. (And no, this doesn't give anywhere near good enough convergents to say anything about the rationality or otherwise of $G$ .)
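Since the question also asked how to evaluate such a fraction programmatically, here is a minimal Python sketch (the function name is my own): evaluate the truncation from the bottom up; the partial numerators are $(2k-1)^4$ and the partial denominators $8k$. With $12$ levels it should reproduce the $\approx 0.9151$ quoted in the question, and the convergence is indeed slow.

```python
def catalan_cf(n_levels: int) -> float:
    """Evaluate the truncated continued fraction
    1/(1 + 1^4/(8 + 3^4/(16 + 5^4/(24 + ...)))) from the bottom up."""
    tail = 0.0
    for k in range(n_levels, 0, -1):
        tail = (2 * k - 1) ** 4 / (8 * k + tail)
    return 1.0 / (1.0 + tail)

G = 0.9159655941772190  # Catalan's constant, for comparison

for n in (12, 200, 10000):
    approx = catalan_cf(n)
    print(n, approx, abs(approx - G))
```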
|
{
"source": [
"https://math.stackexchange.com/questions/3620230",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/477343/"
]
}
|
3,633,109 |
In Linear Algebra, we consider characteristic polynomials. Is the characteristic polynomial we consider in Linear Algebra a polynomial or a polynomial function? I think it is a polynomial function. I am reading "Introduction to Linear Algebra" (in Japanese) by Kazuo Matsuzaka. In this book, the characteristic polynomial of a linear map $F$ is defined by $\det(A - \lambda I)$ , where $A$ is a matrix which represents $F$ . And in this book, the author defines a determinant only for a matrix whose elements belong to some field $K$ . If $\det(A - \lambda I)$ is a polynomial, then the elements of $A - \lambda I$ are polynomials too. But the author didn't define a determinant for a matrix whose elements are polynomials.
|
Nice question! In many cases, that distinction is irrelevant, but in some cases it matters. And, when it matters, you are not right: it is a polynomial, not a polynomial function. For instance, polynomials have degrees, whereas polynomial functions don't (for instance, over $\mathbb F_2$ the polynomial function $x\mapsto x^2+x$ is the null function, but the polynomial $x^2+x$ still has degree $2$ , whereas the null polynomial has no degree, or degree $-\infty$ by convention). And the degree of the characteristic polynomial of an $n\times n$ matrix is $n$ .
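A concrete way to see the distinction (a tiny Python sketch; representing a polynomial by its coefficient list is the point, since that list is the data a polynomial carries beyond the function it induces):

```python
# Over F_2, the polynomial x^2 + x (coefficient list [0, 1, 1]) is a nonzero
# polynomial, yet the function it induces on F_2 = {0, 1} is identically zero.
coeffs = [0, 1, 1]  # c0 + c1*x + c2*x^2, with all arithmetic mod 2

def evaluate(coeffs, x, p=2):
    return sum(c * x**i for i, c in enumerate(coeffs)) % p

print([evaluate(coeffs, x) for x in range(2)])  # [0, 0] -> the zero *function*
print(any(coeffs))                              # True  -> a nonzero *polynomial*
```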
|
{
"source": [
"https://math.stackexchange.com/questions/3633109",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/384082/"
]
}
|
3,636,009 |
Antoine's necklace is an embedding of the Cantor set in $\mathbb{R}^3$ constructed by taking a torus, replacing it with a necklace of smaller interlinked tori lying inside it, replacing each smaller torus with a necklace of interlinked tori lying inside it, and continuing the process ad infinitum ; Antoine's necklace is the intersection of all iterations. A number of sources claim that this necklace ' cannot fall apart ' (e.g. here ). Given that the necklace is totally disconnected this obviously has to be taken somewhat loosely, but I tried to figure out exactly what is meant by this. Most sources seem to point to this paper (which it must be noted contains some truly remarkable images, e.g. Figure 12). There the authors make the same point that Antoine's necklace 'cannot fall apart'. Nevertheless, all they seem to show in the paper is that it cannot be separated by a sphere (every sphere with a point of the necklace inside it and a point of the necklace outside it has a point of the necklace on it). It seems to me to be a reasonably trivial exercise to construct a geometrical object in $\mathbb{R}^3$ which cannot be separated by a sphere, and yet can still 'fall apart'. In the spirit of the construction of Antoine's necklace, these two interlinked tori cannot be separated by a sphere (any sphere containing a point of one torus inside it will contain a point of that torus on its surface), but this seems to have no relation to the fact that they cannot fall apart - if we remove a segment of one of the tori the object still cannot be separated by a sphere, and yet can fall apart macroscopically. The fact mentioned here that the complement of the necklace is not simply connected, and the fact mentioned here that there are loops that cannot be unlinked from the necklace, shouldn't impact whether it can be pulled apart either, as both are true of our broken rings. My question is this: could you let me know either: how I have misunderstood separation by a sphere (so that it may still be relevant to an object being able to fall apart), what property Antoine's necklace does satisfy so that it cannot fall apart (if I have missed this), or what is actually meant when it is said to be unable to fall apart (if I have misunderstood this)?
|
if we remove a segment of one of the tori the object still cannot be separated by a sphere, and yet can fall apart macroscopically. Well, sure it can. The key observation is that "sphere" means "homeomorphic image of a sphere" in this context. Turn the ring with the part removed (the "C") and pull the intact ring (the "O") through the missing piece. Then you have two separate components, the C and the O, which it is easy to see could be placed inside and outside of a sphere. Let's put the C on the inside and the O on the outside. Now, think about shrinking that sphere very tightly (but not touching) around the C, then putting the C back where it was. You've separated the two by the homeomorphic image of a sphere. Think about if we tried to do this with two Os, like your first figure. If the Os weren't interlocked, sure, it's easy, just like when the C and the O were separated. But if two Os are interlocked, there isn't any way to fit a sphere around one of them in the same way as you could with the C. And if you can't do it with only two Os, you certainly can't do it with Antoine's Necklace. Thus, it will stay together.
|
{
"source": [
"https://math.stackexchange.com/questions/3636009",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/284973/"
]
}
|
3,640,770 |
On a closed interval (e.g. $[-\pi, \pi]$ ), $\cos{x}$ has finitely many zeros. Thus I wonder if we could fit a finite degree polynomial $p:\mathbb{R} \to \mathbb{R}$ perfectly to $\cos{x}$ on a closed interval such as $[-\pi, \pi]$ . The Taylor series is $$\cos{x} = \sum_{i=0}^{\infty} (-1)^i\frac{x^{2i}}{(2i)!} = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!}-\dots$$ Graphing $\cos{x}$ and $1-\frac{x^2}{2}$ in Desmos shows an approximation that is clearly imperfect on $[-\pi,\pi]$ . Using a degree 8 polynomial (the first 5 terms of the Taylor series above) looks more promising, but upon zooming in very closely the approximation is still imperfect. There is no finite degree polynomial that equals $\cos{x}$ on all of $\mathbb{R}$ (although I do not know how to prove this either), but can we prove that no finite degree polynomial can perfectly equal $\cos{x}$ on any closed interval $[a,b]\subseteq \mathbb{R}$ ? Would it be as simple as proving that the remainder term in Taylor's Theorem cannot equal 0? But this would only prove that no Taylor polynomial can perfectly fit $\cos{x}$ on a closed interval...
|
Yes, it is impossible. Pick any point in the interior of the interval, and any polynomial. If you differentiate the polynomial repeatedly at that point, you will eventually get only zeroes. This doesn't happen for the cosine function, which instead repeats in an infinite cycle of length $4$ . Thus the cosine function cannot be a polynomial on a domain with non-empty interior.
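A quick symbolic check of this argument (a sketch in Python using sympy; the degree-8 polynomial is the Taylor polynomial from the question):

```python
import sympy as sp

x = sp.symbols('x')
# Degree-8 Taylor polynomial of cos(x), as in the question.
p = 1 - x**2/2 + x**4/sp.factorial(4) - x**6/sp.factorial(6) + x**8/sp.factorial(8)
f = sp.cos(x)

# Differentiate both nine times: the polynomial is annihilated,
# while cos keeps cycling with period 4.
for _ in range(9):
    p, f = sp.diff(p, x), sp.diff(f, x)

print(p)  # 0
print(f)  # -sin(x)
```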
|
{
"source": [
"https://math.stackexchange.com/questions/3640770",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/711748/"
]
}
|
3,641,205 |
I know what "surface area" means for: a 2d shape a cylinder or cone but I don't know what it actually means for a sphere. For a 2d shape Suppose I'm given a 2d shape, such as a rectangle, or a triangle, or a drawing of a puddle. I can cut out a 1cm by 1cm piece of paper, and trace that piece of paper on the shape. Many full 1 cm squares will be traced on the shape, and there will likely be many partial squares traced on the edges of the shape. Suppose I can accept that I can "combine" the partial squares into full squares. Then I count the total number of full squares, to find the surface area. For a cone or cylinder I can convert a paper cone into two 2d shapes. The bottom of the cone is a circle. I can then cut the curved (ie not-bottom) part of the cone using scissors, and unfold that part into a flat 2d shape. Similarly, I can convert a cylinder into flat 2d shapes: two circles and a rectangle. For a sphere But the above methods for understanding surface area don't work for a sphere. I can't lay a 1 cm by 1 cm piece of paper onto a sphere in a flat way. I can't even trace a square centimetre onto the sphere using that piece of paper! People might say, "suppose you have an orange, and you peel the orange. Then you can lay the peel flat onto the table, into a flat 2d shape". But they're lying! The orange peel can never be mashed down perfectly flat onto the table! So, I don't know what "surface area of a sphere" even means, if you cannot measure it using flat square pieces of paper! What does "surface area of a sphere" even mean?
|
This is actually an interesting question. It involves how to define "area" on a curved surface. The examples you have provided are surfaces that are developable (can be flattened onto a plane) after a few cuts, and you can compute the flattened area. You can never do this to a sphere, because no matter how small a patch of a sphere is, it can never be flattened onto a plane. The idea is to break the sphere down into small patches such that each is flat enough, compute the area of each patch as if it were flat, and then add up the areas of the patches. Mathematically, suppose $S$ is a sphere. The above procedure is stated as: Break up $S$ into patches $P_1,\dots,P_n$ , where each $P_i$ is a patch that is flat enough, and $n$ is the number of patches you have. Compute $\operatorname{Area}(P_i)$ as if each $P_i$ were flat. As suggested by levap, one way to do this is to project each patch onto one of its tangent planes. Note that I am not saying this is the only way to approximate a patch, and I am also not saying that every way that would seem correct at first glance really is correct; see Update 2 for an example, and there's also discussion about this in the comments. Use $\operatorname{Area}(P_1)+\dots+\operatorname{Area}(P_n)$ as an approximation of the area of $S$ . If the patches are small enough, then the approximation should be a good one. But if you want better precision, use smaller patches and do the above again. To make the math precise (I can't guarantee that a third-grade student can understand this): as you take smaller and smaller patches, the value of the approximation above should tend to a fixed number, which is the mathematical definition of the area. P.S. For a visualization of this approximation, you can search online for sphere parametrization , or simply think of a football (soccer ball). Update 1: Thanks to Leander , we have a visualization: One might notice that this visualization is slightly different from cutting up a sphere; it takes sample points on the sphere and attaches triangles to these sample points. I want to remark that there is no essential difference between this and my method. The idea is the same: approximation. Update 2: A comment (by Tanner Swett) mentions that the method of using a polygon mesh may be flawed. Indeed, the example of the Schwarz lantern shows that some pathological choices of the polygon mesh may produce a limit different from the surface area. The following explanation should be helpful: As I mentioned in step 2 above, if we are not careful with how we approximate the areas of the patches, the approximation may not work. The Schwarz lantern is an example where a careful choice of the approximating triangles can lead to the following result: if $T$ is a triangle we use to approximate a patch $P$ , then it is possible that ${\rm Area}(T)/{\rm Area}(P)\to a\neq1$ . To illustrate this, consider a single triangle on the Schwarz lantern: We assume the cylinder has total height $1$ and radius $1$ . We take $n+1$ axial slices, and on each slice $m$ points. The area enclosed by the red curves is a patch on the cylinder, and the triangle enclosed by the blue dashed lines is the one used to approximate the patch. Let $P$ and $T$ denote the patch and the triangle respectively. We see that the ratio of the bottom edges of $P$ and $T$ tends to $1$ as $m\to\infty$ . What really makes a difference is the ratio of their heights.
Suppose along the vertical direction the height of $P$ is $$h=1/n$$ Then the height of the triangle is $$h_T=\sqrt{1/n^2+a^2}$$ By a simple computation we know $a=1-\cos(\pi/m)\approx(\pi^2/m^2)/2$ . Therefore, $$h_T/h=\sqrt{1+\frac{\pi^4n^2}{m^4}}$$ If $n$ has higher order than $m^2$ , then the limit is bigger than $1$ , and consequently ${\rm Area}(T)/{\rm Area}(P)\not\to1$ . This problem is less likely to occur in practice: if you actually cut the cylinder into patches, you'd use $h$ instead of $h_T$ to estimate the area. But again, it is hard to make precise what kind of approximation is acceptable without using the language of calculus.
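The height ratio is easy to tabulate numerically. Here is a minimal sketch in plain Python (the slice counts are arbitrary illustrative choices): with $n=m$ the ratio tends to $1$, while with $n=m^3$ it blows up, exactly the pathological regime described above.

```python
import math

def height_ratio(n, m):
    # h = 1/n is the patch height; h_T = sqrt(1/n**2 + a**2) with a = 1 - cos(pi/m)
    a = 1 - math.cos(math.pi / m)
    return n * math.sqrt(1.0 / n**2 + a**2)  # = h_T / h

for m in (10, 100, 1000):
    print(m, height_ratio(m, m), height_ratio(m**3, m))
```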
|
{
"source": [
"https://math.stackexchange.com/questions/3641205",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/192945/"
]
}
|
3,641,218 |
$$\displaystyle\sum_{n=1}^\infty \dfrac{(-2)^{n+1}}{x^n}$$ for which values of $x\neq 0$ does the series converge? I cannot apply the ratio test because the general term of the series is not eventually $>0$ or $<0$ (on its tail). And I observe that if $x>0$ or $x<0$ only the alternating parity changes: $+$ terms become $-$ and vice versa, and that does not affect the convergence. So I want to apply the alternating series test, and if $$\lim\limits_{n\to \infty}\dfrac{(-2)^{n+1}}{x^n}<\infty$$ then $x$ must have absolute value less than 1, so is $|x|<1$ the solution? How can I show it properly?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3641218",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/342943/"
]
}
|
3,645,139 |
Having a bit of a problem calculating the volume of a take-away box: I originally wanted to use integration to measure it by rotating around the x-axis, but realised that when folded the top becomes a square, and the whole thing becomes rather irregular. Since the circumference varies along the height, I won't be able to measure it like I planned. Is there any method or formula that can be used to measure a shape like this, or do I just have to approximate a cylinder and approximate a box and add those two together?
|
As the shape of the solid is not clearly defined, I'll make the simplest assumption: the lateral surface is made of lines connecting every point $P$ of the square base to the point $P'$ of the circular base with the same longitude $\theta$ (see figure below). In that case, if $r$ is the radius of the circular base, $2a$ the side of the square base, and $h$ the distance between the bases,
a section of the solid at height $x$ (red line in the figure) is formed by points $M$ having a radial distance $OM=\rho(\theta)$ from the axis of the solid given by: $$
\rho(\theta)={a\over\cos\theta}{x\over h}+r\left(1-{x\over h}\right),
\quad\text{for}\quad -{\pi\over4}\le\theta\le{\pi\over4}.
$$ A quarter of the solid then has a volume given by (writing $s$ for the radial integration variable, to avoid a clash with the base radius $r$): $$
{V\over4}=\int_0^h\int_{-\pi/4}^{\pi/4}\int_0^{\rho(\theta)}s\,ds\,d\theta\,dx=
\frac{h}{12} \left(4 a^2+\pi r^2 + 2ar \ln{\sqrt2+1\over\sqrt2-1}\right).
$$
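As a sanity check on the closed form, here is a small numerical sketch in Python (the dimensions $a=1$, $r=1.5$, $h=2$ are arbitrary test values, not from the question), comparing a midpoint-rule evaluation of the triple integral with the formula:

```python
import math

a, r, h = 1.0, 1.5, 2.0  # arbitrary test dimensions

def rho(theta, x):
    return (a / math.cos(theta)) * (x / h) + r * (1 - x / h)

# Midpoint rule for V/4 = integral over x in (0,h), theta in (-pi/4, pi/4)
# of rho(theta, x)**2 / 2.
N = 800
quarter = 0.0
for i in range(N):
    x = (i + 0.5) * h / N
    for j in range(N):
        theta = -math.pi / 4 + (j + 0.5) * (math.pi / 2) / N
        quarter += 0.5 * rho(theta, x) ** 2 * (h / N) * (math.pi / 2 / N)

closed = (h / 12) * (4 * a**2 + math.pi * r**2
                     + 2 * a * r * math.log((math.sqrt(2) + 1) / (math.sqrt(2) - 1)))
print(quarter, closed)  # the two numbers agree to several decimal places
```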
|
{
"source": [
"https://math.stackexchange.com/questions/3645139",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/644909/"
]
}
|
3,650,200 |
In Rudin's Principles of Mathematical Analysis 1.1, he first shows that there is no rational number $p$ with $p^2=2$ . Then he creates two sets: $A$ is the set of all positive rationals $p$ such that $p^2<2$ , and $B$ consists of all positive rationals $p$ such that $p^2>2$ . He shows that $A$ contains no largest number and $B$ contains no smallest. And then in 1.2, Rudin remarks that what he has done above is to show that the rational number system has certain gaps. His remarks confused me. My questions are: If he has shown that there is no rational number $p$ with $p^2=2$ , doesn't this already give the conclusion that the rational number system has "gaps" or "holes"? Why did he need to set up the second argument about the two sets $A$ and $B$ ? How does the second argument, that " $A$ contains no largest number and $B$ contains no smallest", show gaps in the rational number system? My intuition does not work here. Or does it have nothing to do with intuition?
|
It depends on what you consider a “gap” in the rational numbers. As long as this is not a formally defined concept, we’re just talking about our everyday, geometrically informed conceptions of gaps. The mere fact that a certain equation doesn’t have a rational solution doesn’t seem like a basis for identifying a “gap”. The equation $x^2=-1$ also has no solution in the rational numbers, and this fact also gives rise to an extension of the number system (to the complex numbers, in this case), but it doesn’t fit with our everyday notion of a gap to call this deficiency a “gap”. This corresponds to the fact that when we fill the need to solve the equation $x^2=2$ by introducing irrational numbers, we depict them on the same axis as the rational numbers, between rational numbers, whereas when we fill the need to solve the equation $x^2=-1$ by introducing imaginary numbers, we depict them along a different axis. So the mere fact that some equation can’t be solved does not indicate a gap in the number system, if by “gap” we mean anything like what we mean by it in everyday language (where a “gap” would certainly be depicted along the same axis as the things between which it lies). By contrast, the fact that you can split the rational numbers into two sets, with all numbers in one set greater than all numbers in the other but without a number that marks the boundary, does seem to suggest that there “should” be a number at the boundary, so that, in a sense not too removed from our everyday use of the word, there is a gap at the boundary.
|
{
"source": [
"https://math.stackexchange.com/questions/3650200",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/780980/"
]
}
|
3,650,360 |
Why are partial differential equations so named, i.e., elliptic, hyperbolic, and parabolic? I do know the condition under which a general second order partial differential equation becomes one of these, but I don't understand why they are so named. Does it have anything to do with ellipses, hyperbolas and parabolas?
|
A general 2nd order linear PDE in two variables is written $$Au_{xx} + 2Bu_{xy} + Cu_{yy} + Du_x + Eu_y + F = 0$$ and $A,B,C,D,E,F$ can be functions depending on $x$ and $y$ . We say a PDE is elliptic, hyperbolic or parabolic if \begin{align}
B^2 - AC &= 0, &\text{parabolic} \\
B^2 - AC &>0, &\text{hyperbolic} \\
B^2 - AC &<0, &\text{elliptic}
\end{align} Note that if $A,B,C,D,E,F$ depend on $x$ or $y$ , there can be regions where the PDE is elliptic, hyperbolic or parabolic, and different techniques are used to solve each type. If the coefficients are constant, the naming comes from considering the polynomial equation $$Ax^2 + 2Bxy + Cy^2 + Dx + Ey + F = 0$$ Depending on the sign of $B^2 - AC$ , this forms an ellipse, hyperbola or parabola in $\mathbb{R}^2$ . This can be extended to higher dimensions as well with hyperboloids, paraboloids, or ellipsoids.
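The discriminant test is mechanical enough to fit in a few lines; here is a sketch in Python classifying the three classical constant-coefficient examples (the assignment of Laplace, wave, and heat to these coefficient triples is standard):

```python
def classify(A, B, C):
    """Classify A*u_xx + 2B*u_xy + C*u_yy + (lower order) = 0 via B^2 - AC."""
    d = B * B - A * C
    if d == 0:
        return "parabolic"
    return "hyperbolic" if d > 0 else "elliptic"

print(classify(1, 0, 1))   # Laplace:  u_xx + u_yy = 0  -> elliptic
print(classify(1, 0, -1))  # wave:     u_xx - u_yy = 0  -> hyperbolic
print(classify(1, 0, 0))   # heat:     u_xx - u_y  = 0  -> parabolic
```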
|
{
"source": [
"https://math.stackexchange.com/questions/3650360",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/781078/"
]
}
|
3,656,344 |
This might be silly, but here it goes. Let $P,S>0$ be positive real numbers that satisfy $\frac{S}{n} \ge \sqrt[n]{P}$ . Does there exist a sequence of positive real numbers $a_1,\dots,a_n$ such that $S=\sum a_i,P=\prod a_i$ ? Clearly, $\frac{S}{n} \ge \sqrt[n]{P}$ is a necessary condition, due to the AM-GM inequality. But is it sufficient? For $n=2$ , the answer is positive, as can be seen by analysing the discriminant of the associated quadratic equation. (In fact, the solvability criterion for the quadratic, namely the non-negativity of the discriminant, is equivalent to the AM-GM inequality for the sum and the product). What about $n\ge 3$ ?
|
If we choose $$
(a_1, \ldots, a_{n-1}, a_n) = (a, \ldots, a, \frac{P}{a^{n-1}})
$$ for some $a > 0$ then $\prod a_i = P$ is satisfied, and we need that $$
f(a) = \sum a_i = (n-1)a + \frac{P}{a^{n-1}} = S \, .
$$ This equation has a solution because $f$ is continuous on $(0, \infty)$ with $$
\min_{a > 0} f(a) = f(\sqrt[n]P) = n \sqrt[n]P \le S
$$ and $$
\lim_{a \to \infty} f(a) = + \infty \, .
$$ Remarks about a possible generalization: Maclaurin's inequalities state the following: If $a_1, \ldots, a_n$ are positive real numbers, and the “averages” $S_1, \ldots, S_n$ are defined as $$
S_k = \frac{1}{\binom n k} \sum_{1 \le i_1 < \ldots < i_k \le n}
a_{i_1}a_{i_2} \cdots a_{i_k}
$$ then $$
S_1 \ge \sqrt{S_2} \ge \sqrt[3]{S_3} \ge \ldots \ge \sqrt[n]{S_n} \, .
$$ In particular, $$
\frac 1 n (a_1 + \ldots + a_n) = S_1 \ge \sqrt[n]{S_n} = \sqrt[n]{a_1 \cdots a_n}
$$ so that this is a generalization of the inequality between the geometric and arithmetic means. Therefore a natural generalization of the above question would be: Let $S_1, \ldots, S_n$ be given positive real numbers with $$
S_1 \ge \sqrt{S_2} \ge \sqrt[3]{S_3} \ge \ldots \ge \sqrt[n]{S_n} \, .
$$ Are there positive real numbers $a_1, \ldots, a_n$ such that $$
S_k = \frac{1}{\binom n k} \sum_{1 \le i_1 < \ldots < i_k \le n}
a_{i_1}a_{i_2} \cdots a_{i_k}
$$ for $1 \le k \le n$ ? This is equivalent to asking if the polynomial $$
p(x) = x^n - \binom n 1 S_1 x^{n-1} + \binom n 2 S_2 x^{n-2} + \ldots + (-1)^n \binom n n S_n
$$ has $n$ positive real zeros. Unfortunately, this generalization does not hold. The following counterexample is from SIKLOS, STEPHEN. “Maclaurin's Inequalities: Reflections on a STEP Question.” The Mathematical Gazette, vol. 96, no. 537, 2012, pp. 499–507. JSTOR, https://www.jstor.org/stable/24496873 . We choose $n=3$ and $$
S_1 = \frac 3 2, \, S_2 = 2, \, S_3 = 1 \, .
$$ The condition on the $S_k$ is satisfied (with strict inequalities), but a simple analysis shows that the polynomial $$
p(x) = x^3 - 3 S_1 x^2 + 3 S_2 x - S_3 = x^3 - \frac 9 2 x^2 + 6 x - 1
$$ has one real (positive) zero and two non-real zeros. It is therefore not possible to find real numbers $a_1, a_2, a_3$ such that $$
\frac{a_1+a_2+a_3}{3} = S_1, \, \frac{a_1a_2 + a_1 a_3 + a_2 a_3}{3} = S_2,\,
a_1 a_2 a_3 = S_3 \, .
$$
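Both halves of this answer are easy to check numerically. The sketch below (Python with numpy and scipy; the values $n=5$, $S=10$, $P=20$ are arbitrary choices satisfying $S/n \ge \sqrt[n]{P}$) root-finds the construction from the first part and exhibits the non-real roots of Siklos' counterexample:

```python
import numpy as np
from scipy.optimize import brentq

# Part 1: solve f(a) = (n-1)a + P/a**(n-1) = S on [P**(1/n), S].
n, S, P = 5, 10.0, 20.0            # arbitrary values with S/n >= P**(1/n)
f = lambda a: (n - 1) * a + P / a ** (n - 1) - S
a = brentq(f, P ** (1.0 / n), S)    # f < 0 at the minimizer, f > 0 at a = S
xs = [a] * (n - 1) + [P / a ** (n - 1)]
print(sum(xs), np.prod(xs))         # approximately 10 and 20

# Part 2: Siklos' counterexample x^3 - (9/2)x^2 + 6x - 1 has two non-real roots.
print(np.roots([1, -4.5, 6, -1]))
```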
|
{
"source": [
"https://math.stackexchange.com/questions/3656344",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/104576/"
]
}
|
3,674,276 |
How to solve the following problem from Hall and Knight's Higher Algebra ? Suppose that \begin{align}
a&=zb+yc,\tag{1}\\
b&=xc+za,\tag{2}\\
c&=ya+xb.\tag{3}
\end{align} Prove that $$\frac{a^2}{1-x^2}=\frac{b^2}{1-y^2}=\frac{c^2}{1-z^2}.\tag{4}$$ (I suppose that $x,y,z$ are real numbers whose moduli are not equal to $1$ .) I discovered this problem from chapter 3 of Prelude to Mathematics by W. W. Sawyer. Sawyer thought that this problem arose from the sine law: let $a,b,c$ be respectively the lengths of the edges opposite to three vertices $A,B,C$ of a triangle. Define $x=\cos A$ and define $y,z$ analogously. Now equalities $(1)-(3)$ simply relate $a,b$ and $c$ to each other by the cosines of the angles and $(4)$ is just a rewrite of the sine law $$
\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}.
$$ However, the algebraic version $(4)$ looks more general. For example, it does not state that $a,b,c$ must be positive or that they must satisfy the triangle inequality. Sawyer wrote that this isn't a hard problem, but he didn't provide any solution. I can prove $(4)$ using linear algebra. Suppose that $(a,b,c)\ne(0,0,0)$ (otherwise $(4)$ is obvious). Rewrite $(1)-(3)$ in the form of $M\mathbf a=0$ : $$\begin{bmatrix}-1&z&y\\ z&-1&x\\ y&x&-1\end{bmatrix}\begin{bmatrix}a\\ b\\ c\end{bmatrix}=0.$$ Since $x^2,y^2,z^2\ne1$ , $M$ has rank $2$ and $D=\operatorname{adj}(M)$ has rank $1$ . Hence all columns of $D$ are parallel to $(a,b,c)^T$ and $\frac{d_{11}}{d_{21}}=\frac{d_{12}}{d_{22}}=\frac{a}{b}$ . Since $M$ is symmetric, $D$ is symmetric too. Therefore $\frac{1-x^2}{1-y^2}=\frac{d_{11}}{d_{22}}=\frac{d_{11}d_{12}}{d_{21}d_{22}}=\frac{a^2}{b^2}$ , i.e. $\frac{a^2}{1-x^2}=\frac{b^2}{1-y^2}$ . As this problem comes from Hall and Knight's book, I think there should be a more elementary solution. Any ideas?
|
Let $a=0$ . Thus, $$xc=b$$ and $$xb=c,$$ which gives $$x^2bc=bc$$ or $$(x^2-1)bc=0$$ and since $x^2\neq1,$ we obtain $bc=0$ and from here $$a=b=c=0,$$ which gives that our statement is true (each side equals $0$). By symmetry, the same argument applies if $b=0$ or $c=0$, so we may now assume $abc\neq0$ . Thus, $$\frac{zb}{a}+\frac{yc}{a}=1$$ and $$\frac{xc}{b}+\frac{za}{b}=1.$$ Multiplying these two equations gives $$z^2+\frac{xyc^2}{ab}+\frac{xzc}{a}+\frac{yzc}{b}=1$$ or $$\frac{1-z^2}{c^2}=\frac{xy}{ab}+\frac{yz}{bc}+\frac{zx}{ca},$$ and by symmetry $$\frac{a^2}{1-x^2}=\frac{b^2}{1-y^2}=\frac{c^2}{1-z^2}=\frac{1}{\frac{xy}{ab}+\frac{yz}{bc}+\frac{zx}{ca}}.$$
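A numerical spot check (Python with numpy) is reassuring here. Note that the homogeneous system only has a nontrivial solution when $\det M=0$, i.e. $x^2+y^2+z^2+2xyz=1$; taking $x=\cos A$, $y=\cos B$, $z=\cos C$ for the angles of a triangle, as in Sawyer's interpretation, guarantees this. The angles below are arbitrary:

```python
import numpy as np

A, B, C = 0.7, 1.1, np.pi - 0.7 - 1.1   # arbitrary triangle angles
x, y, z = np.cos(A), np.cos(B), np.cos(C)

M = np.array([[-1,  z,  y],
              [ z, -1,  x],
              [ y,  x, -1]])

# A null vector of M gives (a, b, c) satisfying the three relations.
_, _, Vt = np.linalg.svd(M)
a, b, c = Vt[-1]

print(a**2 / (1 - x**2), b**2 / (1 - y**2), c**2 / (1 - z**2))  # all three agree
```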
|
{
"source": [
"https://math.stackexchange.com/questions/3674276",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/594946/"
]
}
|
3,679,761 |
I am currently reading through Rudin's Principles of Mathematical Analysis and I am learning about fields and their properties. Note that this is the initial chapter - I am just starting off. I was wondering which field property enables us to multiply on both sides of an equation and still preserve equality. There is a very clear proposition stated in the book that gives me this for inequalities: $$
\text{If} \ \ x > 0 \ \ \text{and} \ \ y < z \ \ \text{then} \ \ xy<xz.
$$ However, the only proposition that seems useful for this in the case of equalities, is stated as an implication and not an equivalence: $$
\text{If} \ x\not= 0 \ \ \text{and} \ \ xy=xz \ \ \text{then} \ \ y=z.
$$ Any help would be much appreciated.
|
This isn't a field property, it's a property of the underlying logical framework within which we're defining fields in the first place. Specifically, the main property is that if $a=b$ then any sentence involving $a$ is equivalent to the same sentence gotten by replacing some of the $a$ s with $b$ s; we also use the simpler property that " $=$ " is reflexive. From this we can argue: Suppose $a=b$ . By reflexivity we have $ma=ma$ . Now by the substitution property we can substitute $b$ for the second $a$ in $ma=ma$ , which gives us $$ma=mb$$ as desired. That logical framework is often swept under the rug. Some people find this helpful since it means that they don't have to worry about such "basic" facts and can focus on more interesting stuff. Others find this annoying since hiding assumptions really goes against the whole point of the "axiomatic" turn that the definition of fields is part of in the first place. Personally, I lean on the side of not sweeping this sort of thing under the rug, but that reflects my own logician-y biases. Aside from basic rules for equality, our logical rules also tell us how to manipulate statements in general. E.g. the fact that you can prove "Every $x$ has property $P$ " by introducing an arbitrary $x$ and showing it has property $P$ is just the rule of universal generalization . However, there are some subtleties around this logical framework itself. Basically, "naive" mathematical reasoning takes place in second-order (or similar) logic, but that's truly terrible when we actually look at it. First-order logic turns out to be the right way to go, but with a twist: we study (for example) fields within the large first-order theory $\mathsf{ZFC}$ , the latter of which serves as a general all-purpose framework for conducting mathematics.
|
{
"source": [
"https://math.stackexchange.com/questions/3679761",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/537648/"
]
}
|
3,680,708 |
I know there is a lot of topic regarding this on the internet, and trust me, I've googled it. But things are getting more and more confused for me. From my understanding, The gradient is the slope of the most rapid descent. Modifying your position by descending along this gradient will most rapidly cause your cost function to become minimal (the typical goal). Could anyone explain in simple words (and maybe with an example) what the difference between the Jacobian, Hessian, and the Gradient?
|
Some good resources on this would be any introductory vector calculus text. I'll try to be as consistent as I can with Stewart's Calculus, perhaps the most popular calculus textbook in North America. The Gradient Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a scalar field. The gradient $\nabla f: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a vector such that $(\nabla f)_j = \partial f/ \partial x_j$ . Because every point in $\text{dom}(f)$ is mapped to a vector, $\nabla f$ is a vector field . The Jacobian Let $\operatorname{F}: \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a vector field. The Jacobian can be considered as the derivative of a vector field. Considering each component of $\operatorname{F}$ as a single function (like $f$ above), the Jacobian is a matrix in which the $i^{th}$ row is the gradient of the $i^{th}$ component of $\operatorname{F}$ . If $\mathbf{J}$ is the Jacobian, then $$\mathbf{J}_{i,j} = \dfrac{\partial \operatorname{F}_i}{\partial x_j}$$ The Hessian Simply, the Hessian is the matrix of second order mixed partials of a scalar field. $$\mathbf{H}_{i, j}=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}$$ In summation: Gradient: vector of first order derivatives of a scalar field. Jacobian: matrix of gradients for the components of a vector field. Hessian: matrix of second order mixed partials of a scalar field. Example Squared error loss $f(\beta_0, \beta_1) = \sum_i (y_i - \beta_0 - \beta_1x_i)^2$ is a scalar field. We map every pair of coefficients to a loss value. The gradient of this scalar field is $$\nabla f = \left< -2 \sum_i( y_i - \beta_0 - \beta_1x_i), -2\sum_i x_i(y_i - \beta_0 - \beta_1x_i) \right>$$ Now, each component of $\nabla f$ is itself a scalar field. Take gradients of those and set them to be rows of a matrix and you've got yourself the Jacobian $$ \left[\begin{array}{cc}
\sum_{i=1}^{n} 2 & \sum_{i=1}^{n} 2 x_{i} \\
\sum_{i=1}^{n} 2 x_{i} & \sum_{i=1}^{n} 2 x_{i}^{2}
\end{array}\right]$$ The Hessian of $f$ is the same as the Jacobian of $\nabla f$ . It would behoove you to prove this to yourself. Resources: Calculus: Early Transcendentals by James Stewart, or earlier editions, as well as Wikipedia which is surprisingly good for these topics.
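Here is a small numerical sketch (Python with numpy; the synthetic data is an arbitrary assumption, not part of the answer) confirming that the Hessian of the squared-error loss equals the Jacobian of its gradient:

```python
import numpy as np

rng = np.random.default_rng(0)             # arbitrary synthetic data
xd = rng.normal(size=20)
yd = 2.0 + 3.0 * xd + rng.normal(size=20)

def grad(b0, b1):
    r = yd - b0 - b1 * xd                   # residuals
    return np.array([-2 * r.sum(), -2 * (xd * r).sum()])

# Hessian from the closed form above (constant in beta):
H = np.array([[2 * len(xd),       2 * xd.sum()],
              [2 * xd.sum(),  2 * (xd ** 2).sum()]])

# Finite-difference Jacobian of the gradient at an arbitrary point:
eps = 1e-6
J = np.column_stack([(grad(1 + eps, 1) - grad(1, 1)) / eps,
                     (grad(1, 1 + eps) - grad(1, 1)) / eps])
print(np.allclose(H, J, rtol=1e-4))         # True
```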
|
{
"source": [
"https://math.stackexchange.com/questions/3680708",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/787560/"
]
}
|
3,687,703 |
A topological space is zero dimensional if, and only if, it has a basis consisting of sets which are both open and closed (that is, "clopen"). This definition is according to "Counterexamples in Topology" by Steen and Seebach, 2nd ed. 1978. I see that "zero dimensional" is tied in with various levels of being "disconnected", e.g. extremally disconnected, totally separated, totally disconnected, scattered, etc. (although the property of being "zero dimensional" is itself strictly independent of all disconnectivity properties). I understand (vaguely) the concept of a "dimension" in the context of manifolds: a space of $n$ dimensions has a boundary of $n - 1$ dimensions. I also note that a set is clopen if, and only if, it has no boundary. So would that be the basis of this terminology? Is there a source which states this definitively?
|
This notion of zero-dimensionality is based on the small inductive dimension $\operatorname{ind}(X)$ which can be defined for lots of spaces. It's defined inductively, based on the intuition that the boundaries of open sets in two-dimensional spaces (which are boundaries of disks, so circles in the plane, e.g.) have a dimension that is one lower than that of the space itself (this works nicely, at least intuitively, for the Euclidean spaces), setting up a recursion: We define $\operatorname{ind}(X) = -1$ iff $X=\emptyset$ (!) and a space has $\operatorname{ind}(X) \le n$ whenever $X$ has a base $\mathcal{B}$ of open sets such that $\operatorname{ind}(\partial O) \le n-1$ for all $O \in \mathcal{B}$ , where $\partial A$ denotes the boundary of a set $A$ . Finally, $\operatorname{ind}(X) = n$ holds if $\operatorname{ind}(X) \le n$ holds and $\operatorname{ind}(X)\le n-1$ does not hold. Also, $\operatorname{ind}(X)=\infty$ if there is no $n$ for which $\operatorname{ind}(X) \le n$ . It's clear that this is a topological invariant (homeomorphic spaces have the same dimension w.r.t. $\operatorname{ind}$ ) and zero-dimensional (i.e. $\operatorname{ind}(X)=0$ ) exactly means that there is a base of open sets with empty boundary (from the $-1$ clause!) and $\partial O=\emptyset$ means that $O$ is clopen. Note that $\Bbb Q$ and $\Bbb P = \Bbb R\setminus \Bbb Q$ are both zero-dimensional in $\Bbb R$ and that $\Bbb R$ is one-dimensional (i.e. $\operatorname{ind}(\Bbb R)=1$ ) as boundaries of $(a,b)$ are $\{a,b\}$ , which is zero-dimensional (discrete), etc. In dimension theory more dimension functions have been defined as well, e.g. the large inductive dimension $\operatorname{Ind}(X)$ , which is a variant of $\operatorname{ind}(X)$ , and the (Lebesgue) covering dimension $\dim(X)$ , which has a different flavour and is about refinements of open covers and the order of those covers. For separable metric spaces however it can be shown that all $3$ that were mentioned are the same (i.e. give the same (integer) value). There are also metric-based definitions (fractal dimensions) which have more possible values, but these are not topological invariants, only metric ones. Outside of metric spaces, we can have gaps between the dimension functions and stuff gets hairy quickly. See Engelking's book "Theory of dimensions, finite and infinite" for a taste of this field. So in summary: the name "comes" (can be justified) from the small inductive dimension definition $\operatorname{ind}(X)$ , but the name itself for that special class (clopen base) is older I think, and other names (like Boolean space etc) have been used too. It's a nice way to be very disconnected, giving a lot of structure.
|
{
"source": [
"https://math.stackexchange.com/questions/3687703",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/466895/"
]
}
|
3,687,717 |
I want to calculate the natural logarithm of this matrix $$A=\begin{bmatrix}0&0&0&0\\0&\frac{1}{2}&\frac{i}{2}&0\\0&-\frac{i}{2}&\frac{1}{2}&0\\0&0&0&0\end{bmatrix}$$ After calculating the eigenvalues and eigenvectors I find: $\lambda_1=1,\lambda_2=0,\lambda_3=0,\lambda_4=0$ . These values are the diagonal elements in $D$ of $A=M D M^{-1}$ . Here $M$ is the modal matrix and $D$ the diagonal matrix looking like: $$D=diag(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$$ Now taking the natural logarithm of the matrix would require me to take the natural logarithm of the elements of $D$ . So what is my result? Should I just say that three of the elements are undefined, or that the logarithm of this matrix doesn't exist/diverges? Is there some prerequisite on $A$ that I did not take into account?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3687717",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/744027/"
]
}
|
3,689,229 |
Consider the following definite integral: $$I=\int^{0}_{-1}x\sqrt{-x}dx \tag{1}$$ With the substitution $x=-u$ , I got $I=-\frac{2}{5}$ (which seems correct). But I then tried a different method by first taking out $\sqrt{-1}=i$ from the integrand: $$I=i\int^{0}_{-1}x\sqrt{x}dx=\frac{2i}{5}[x^{\frac{5}{2}}]^{0}_{-1}=\frac{2i}{5}{(0-(\sqrt{-1})^5})=-\frac{2i^6}{5}=+\frac{2}{5} \tag{2}$$ which is clearly wrong. I understand that $x\sqrt{x}$ is not even defined within $(-1,0)$ , but why can't we use the same 'imaginary approach' ( $\sqrt{-1}=i$ ) to treat this undefined part of the function (i.e. the third equality in $(2)$ ). I can't find a better way of phrasing my question so it may seem gibberish, but why is $(2)$ just invalid?
|
I had difficulty understanding the previous answer so am offering an expanded version. Taking your first step, you write $\sqrt{-x} = i\sqrt{x}$ . Now try that with $x=-1$ . It gives a contradiction, $$1 = \sqrt{1} = i \sqrt{-1} = i^2 = -1.$$ It is not really fixed if you use the alternative sign for $\sqrt{-1}$ because you obtain $$ 1 = \sqrt{1} = -i \sqrt{-1} = (-i) \times (-i) = -1 $$ Only if you take different signs for the imaginary part at each square root do you get the answer you want. Underlying this is a general point about complex valued functions. By convention for real $ x \geqslant 0$ , $\sqrt{x}$ is always taken to be the positive root. When $x < 0$ there is no natural convention and $\sqrt{x} $ could be either one of $\pm i\sqrt{-x}$ . The difficulty arises because there cannot be a consistent choice for the root of a negative number that at the same time satisfies the desirable identity $\sqrt{xy} = \sqrt{x}\sqrt{y}$ . That is because in complex analysis the square root $\sqrt{z}$ has a branch point (that is, it is badly behaved) at $z=0$ and it cannot be extended to a well behaved function across the whole complex plane.
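The failure of $\sqrt{xy}=\sqrt{x}\sqrt{y}$ for negative arguments is visible in two lines of Python (cmath takes the principal branch):

```python
import cmath

print(cmath.sqrt(-1) * cmath.sqrt(-1))  # (-1+0j): i*i
print(cmath.sqrt((-1) * (-1)))          # (1+0j):  sqrt(1)
```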
|
{
"source": [
"https://math.stackexchange.com/questions/3689229",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/677074/"
]
}
|
3,691,332 |
The diagram shows 12 small circles of radius 1 and a large circle, inside a square. Each side of the square is a tangent to the large circle and four of the small circles. Each small circle touches two other circles. What is the length of each side of the square? The answer is 18. CONTEXT: This question came up in a Team Maths Challenge I did back in November. No one on our team knew how to do it and we ended up guessing the answer (please understand that time was scarce and we did several other questions without guessing!). I just remembered this question and thought I'd have a go, but I am still struggling with it. There are no worked solutions online (only the answer), so I am reaching out to this website as a final resort. Any help would be greatly appreciated. Thank you!
|
Join the center of the bigger circle (radius assumed to be $r$ ) to the midpoints of the sides of the square. It’s easy to see that $ABCD$ is a square as well. Now, join the center of the big circle to the center of one of the smaller circles ( $P$ ). Then $BP=r+1$ . Further, if we draw a vertical line through $P$ , it intersects $AB$ at a point distant $r-1$ from $B$ . Lastly, the perpendicular distance from $E$ to the bottom side of the square is equal to $AD=r$ . Take away three radii to obtain $EP=r-3$ . Using Pythagoras’ Theorem, $$(r-1)^2 +(r-3)^2 =(r+1)^2 \\ r^2-10r+9=0 \implies r=9 \text{ or } 1,$$ but clearly $r\ne 1$ , and so the side of the square is $2r=18$ .
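The final quadratic can be checked symbolically, e.g. with a one-liner in sympy:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# Solve (r-1)^2 + (r-3)^2 = (r+1)^2, i.e. r^2 - 10r + 9 = 0.
print(sp.solve(sp.Eq((r - 1)**2 + (r - 3)**2, (r + 1)**2), r))
# [1, 9]; r = 1 is geometrically impossible here, so the side is 2r = 18.
```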
|
{
"source": [
"https://math.stackexchange.com/questions/3691332",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/792587/"
]
}
|
3,691,336 |
Prove by contradiction that: $A∩B=B∩A$ . My Solution: Assume to the contrary that $A∩B ≠ B∩A$ .... (i) Consequently, $B∩A≠ A∩B$ ...... (ii). Since for all sets $A$ and $B$ , $A=B → B=A$ . Then, (i) and (ii) as a whole suggest $A∩B≠A∩B$ . The foregoing statement contradicts the fact that for every set $A$ , $A = A$ . By such an antilogy we've shown that $B∩A = A∩B$ . I recently enrolled as a math major, and I am trying to develop my proving skills before the first day.
|
|
{
"source": [
"https://math.stackexchange.com/questions/3691336",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/792608/"
]
}
|
3,692,052 |
The integers and the integers modulo a prime are both groups under addition. What about the computer representation of integers (e.g. int64)? It's closed under addition, since a sum that is too large wraps around to the negatives. It also inherits the other group properties from the integers (associativity, identity, inverse). So int64 seems like a finite group, but am I missing anything?
|
If you just let overflows happen without doing anything about it, and in particular with 2's complement representation (or unsigned), a computer's $n$ -bit integer representation becomes the integers modulo $2^n$ . So yes, you're entirely right: It is a finite group (and with multiplication becomes a finite ring). (As a side note, working with and thinking about 2's complement became a lot easier for me once I realized this. No one really told me during my education, so for ages I was stuck having to remember all the details in the algorithm for taking negatives, i.e. actually taking the 2's complement. Now that I have the algebraic knowledge of what's actually going on, I can just deduce the algorithm on the fly whenever I need it.) It's not entirely obvious to check explicitly that they satisfy, say, associativity when overflow is in the picture. It's easier to set up the obvious bijection with the integers modulo $2^n$ and show that addition stays the same, and prove the group properties that way.
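To see the wrap-around concretely, here is a short Python sketch using numpy's fixed-width integers (numpy normally warns on overflow, so the warning is silenced):

```python
import numpy as np

a = np.int64(2**62)
with np.errstate(over='ignore'):
    s = a + a                     # 2^63 wraps around to -2^63
print(s)                          # -9223372036854775808

# The same value, computed as arithmetic in Z/2^64 and mapped back
# to the signed representative in [-2^63, 2^63):
m = (2**62 + 2**62) % 2**64
print(m - 2**64 if m >= 2**63 else m)   # -9223372036854775808
```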
|
{
"source": [
"https://math.stackexchange.com/questions/3692052",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/28855/"
]
}
|
3,692,082 |
A polynomial of degree at most 7 is such that it leaves remainders –1 and 1 upon division by $(x-1)^4$ and $(x+1)^4$ respectively. Find the sum of roots of this polynomial. Now as we have to find a sum, I think it's pointing towards using Vieta's formulas. From the remainder theorem, we get $f(x) = g_1(x)(x-1)^4-1$ and $f(x) = g_2(x)(x+1)^4+1$ where $g_{1,2}(x)$ is a polynomial of degree at most 3. But from this point, I get no more ideas. Assuming a cubic for $g(x)$ and then using the binomial theorem on $(x-1)^4$ is too long and doesn't get me anywhere. Please help. The above approach is followed here , but there it's manageable because the powers are small, so is there an alternate, elegant way?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3692082",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/675350/"
]
}
|
3,698,864 |
Wikipedia's article on parametric statistical models ( https://en.wikipedia.org/wiki/Parametric_model ) mentions that you could parameterize all probability distributions with a one-dimensional real parameter, since the set of all probability measures & $\mathbb{R}$ share the same cardinality. This fact is mentioned in the cited text (Bickel et al, Efficient and Adaptive Estimation for Semiparametric Models), but not proved or elaborated on. This is pretty neat to me. (If I'd been forced to guess, I would have guessed the set of possible probability distributions to be bigger, since pdfs are functions $\mathbb{R}\rightarrow\mathbb{R}$ , and we're counting probability distributions that don't have a density, too. It's got to be countable additivity constraining the number of possible distributions, but how?) Where could I go to find a proof of this, or is it straightforward enough to outline in an answer here? Does its proof depend on AC or the continuum hypothesis? We need some kind of condition on the cardinality of the sample space that neither Wikipedia or Bickel mention, right (if it's too big, then the number of degenerate probability distributions is too big)?
|
A probability on $\mathbb{R}$ , be it continuous or not, is given by its CDF $x \mapsto\mathbb{P}(X \leq x)$ . A CDF is right-continuous, and the set of right-continuous functions has the cardinality of $\mathbb{R}$ . To see this, you can for instance argue that the values of such a function are given by its values at the rational points, so it has at most the cardinality of a countable product of copies of $\mathbb{R}$ , which has the cardinality of $\mathbb{R}$ as well.
|
{
"source": [
"https://math.stackexchange.com/questions/3698864",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/373774/"
]
}
|
3,705,806 |
I was reading on improper integrals, and came across : $$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{A \to -\infty}
\int_A^Cf(x)\,dx + \lim_{B \to \infty} \int_C^B f(x)\,dx$$ My question is a rather silly one: Is it any different if I write the above as : $$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a \to \infty} \int_{-a}^a f(x)\,dx$$ If it's correct, is there any specific reason why it is written that way? Or is looking at it in that way any more beneficial than my way? If it's not, why is it incorrect to write it this way? (Any counter-example also appreciated)
|
Your question is not silly; it's important to be clear on the definitions of these things. It is indeed different if you write the above like that. Consider for instance an arbitrary odd function $f : \mathbb R \to \mathbb R$ (a function is called odd if $f(-x) = -f(x)$ ). Then, using your definition, $$
\int_{-\infty}^{\infty} f(x) \, dx = \lim_{a \to \infty} \int_{-a}^{a} f(x) \, dx = \lim_{a \to \infty} \int_{-a}^0 f(x) \, dx + \int_0^a f(x) \, dx = 0,
$$ which implies that the integral from $-\infty$ to $\infty$ of $f$ is $0$ , no matter how pathological it is. Clearly we don't want to consider arbitrary odd $f$ to integrate to $0$ over the entire real line. Take as an example $f(x) = x$ . The concern becomes especially obvious if we consider $$
\lim_{a \to \infty} \int_{-a}^{2a} x \, dx = \lim_{a \to \infty} \frac 3 2 a^2 = \infty \neq 0,
$$ so the "rate" at which the upper and lower bounds move changes the answer. The method of splitting up the integral into two improper integrals is thus used as a convention; it doesn't have this problem. In fact, you can prove that if splitting up the integral yields a convergent result, then so will your method. In essence, the method of splitting up the integral prevents "too many" things from converging. What you are suggesting does in fact have a name; it is the Cauchy principal value of the improper integral. This is useful in some special cases but certainly should not be used all the time for the reasons given above.
|
{
"source": [
"https://math.stackexchange.com/questions/3705806",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/773140/"
]
}
|
3,706,717 |
I know that rationals, being a countable set, have zero Lebesgue measure. I think one way to prove it is to find an open set containing the rationals that has measure less than $\epsilon$ for every $\epsilon >0$ fixed. You can do it by taking the rational points in sequence and taking intervals of length $\epsilon/2^n$ . Then the union of these intervals has measure less than or equal to $\epsilon$ . However I was wondering: how can I explain this intuitively? If one thinks of a dense subset, such as $\mathbb{Q}$ in $\mathbb{R}$ , one thinks of something that is "so close" to the original set that it is indistinguishable, in a certain way. I think the most intuitive explanation would be that when you take those intervals, you are "scaling down" their lengths faster than how a given sequence of rational points approaches a non-rational one. But this may sound a bit confusing or tricky, so I was wondering: is there a simple, intuitive, possibly graphical way of explaining to someone with very little background in math why rationals have measure zero?
|
This is a really hard question; I think in general intuition for this sort of thing tends to come with experience, as you get used to the concepts. Having said that, I'll try to articulate the way that I think about it. I guess the way of viewing $\mathbb{Q}$ as a subset of $\mathbb{R}$ is a load of dots on a continuous line. Obviously these dots are very close together (in fact the whole thing is nonsense because they're dense in $\mathbb{R}$ ), but intuitively the mental image does help to capture some of the relevant properties, particularly with an eye to the Lebesgue measure. I would suggest constructing this set in steps, according to increasing denominator. Start with $\mathbb{Z}$ . It seems pretty clear to me that this should have measure zero, since the dots are spaced out, and hence they occupy an "infinitely small" proportion of $\mathbb{R}$ . Rigorously, we can prove that $\mathbb{Z}$ has measure zero by putting an interval of width $\epsilon 2^{-\lvert n \rvert}$ around each $n$ . For each $n\geq 1$ , define $S_n = \{\frac{a}{b}\mid a,b\in\mathbb{Z}, b \leq n\}$ to be the set of rational numbers with denominator at most $n$ . Thus, $\mathbb{Z} = S_1$ . For each $n$ , the elements of $S_n$ have some minimum gap between them (namely the reciprocal of the lowest common multiple of the denominators less than or equal to $n$ ), hence the same argument we used for $\mathbb{Z}$ shows that $S_n$ has measure zero for each $n$ . At each step, we have a set of measure zero. If we continue this process infinitely, we will eventually reach every rational number (i.e. for every rational number $x$ , there is a finite $n$ with $x \in S_n$ ), so in some sense $\mathbb{Q}$ is the "limit" of these null sets, and hence it is itself null. We can certainly make this "some sense" rigorous, since $\mathbb{Q}$ is the countable union of the $S_n$ , but I'm not sure that's useful for the intuition. Obviously what I've done here is not very sophisticated, but I think it is a bit easier to visualise than just invoking countability of $\mathbb{Q}$ , since we are actually "zooming in" on $\mathbb{Q}$ in an explicit way.
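The covering argument can be made tangible with a few lines of Python: however many rationals we enumerate, the total length of the covering intervals never exceeds $\epsilon$ (the cutoff on denominators below is an arbitrary choice):

```python
eps = 0.01
total, k = 0.0, 0
# Cover the rationals p/q in (0,1) with q <= 9 (repeats only over-cover),
# giving the k-th one an interval of length eps / 2**k.
for q in range(2, 10):
    for p in range(1, q):
        k += 1
        total += eps / 2**k
print(k, total, total < eps)   # total = eps*(1 - 2**(-k)), always below eps
```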
|
{
"source": [
"https://math.stackexchange.com/questions/3706717",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/363048/"
]
}
|
3,706,767 |
I've been attempting to show that: $$\sum_{k=1}^n {k+1\choose 2}{2n+1\choose n+k+1}={n\choose 1}4^{n-1}\\
\sum_{k=2}^n {k+2\choose 4}{2n+1\choose n+k+1}={n\choose 2}4^{n-2}$$ Can anyone give some direction?
|
|
{
"source": [
"https://math.stackexchange.com/questions/3706767",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/558586/"
]
}
|
3,706,934 |
Berggren and the Borwein brothers in "Pi: A Source Book" showed a mysterious approximation for $\pi$ with astonishingly high accuracy: $$ \left(\frac{1}{10^5}\sum_{n={-\infty}}^\infty e^{-n^2/10^{10}}\right)^2 \approx \pi.$$ In particular, this formula gives $42$ billion (!) correct digits of $\pi$ . I have two questions: (1) Why is this approximation so accurate? (2) Is it possible to prove without computation that this formula is not an identity?
|
This isn't actually so mysterious after all; it's just a high precision approximation of the identity $$\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi},$$ by using very small rectangles to approximate area. By considering these rectangles, you can also see that the areas cannot be equal, since some area will be left over between the rectangles and the graph of $e^{-x^2}$ .
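A scaled-down version of the sum is already convincing in double precision. In the Python sketch below, replacing $10^5$ by $10^2$ is my own choice to keep the sum short; the terms beyond $|n|\approx 2000$ are below $e^{-400}$ and can safely be dropped.

```python
import math

# Riemann sum of exp(-x^2) with step 1/100: (step * sum exp(-(n*step)^2))^2 ~ pi.
s = sum(math.exp(-n * n / 1e4) for n in range(-2000, 2001))
print((s / 100) ** 2)  # 3.141592653589793...
print(math.pi)
```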
|
{
"source": [
"https://math.stackexchange.com/questions/3706934",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/791148/"
]
}
|