2,919,728
Just wondering if there’s a more systematic approach for those problems using combinatorics (combinations or permutations). For instance: the probability that the sum of the numbers of 3 rolled dice is 8. Thanks
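A brute-force check of the specific instance (my sketch, not part of the thread): enumerating all $6^3$ outcomes shows $21$ of them sum to $8$, which matches the stars-and-bars count $\binom{7}{2}$ with no upper-bound violations, so the probability is $21/216 = 7/72$.

```python
# Brute force (my sketch, not from the thread): count outcomes of three fair
# dice that sum to 8, then convert to an exact probability.
from fractions import Fraction
from itertools import product

hits = sum(1 for dice in product(range(1, 7), repeat=3) if sum(dice) == 8)
prob = Fraction(hits, 6 ** 3)
print(hits, prob)  # 21 7/72
```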
You'll never have to prove something is a vector space in real life. You may want to prove something is a vector space, because vector spaces have a simply enormous amount of theory proven about them, and the non-trivial fact that you wish to establish about your specific object might boil down to a much more general, well-known result about vector spaces. Here's my favourite example. It's still a little artificial, but I came by it through simple curiosity, rather than any course, or the search for an example. Consider the logic puzzle here. It's a classic. You have a $5 \times 5$ grid of squares, coloured black or white. Every time you press a square, it changes the square and the (up to four) adjacent squares from black to white or white to black. Your job is to press squares in such a way that you end up with every square being white. So, my question to you is, can you form any configuration of white and black squares by pressing these squares? Put another way, is any $5 \times 5$ grid of black and white squares a valid puzzle with a valid solution? Well, it turns out that this can be easily answered using linear algebra. We form a vector space of $5 \times 5$ matrices whose entries are in the field $\mathbb{Z}_2 = \lbrace 0, 1 \rbrace$. We represent white squares with $0$ and black squares with $1$. Such a vector space is finite, and contains $2^{25}$ vectors. Note that every vector is its own additive inverse (as is the case for any vector space over $\mathbb{Z}_2$). Also note that the usual standard basis, consisting of matrices with $0$ everywhere, except for a single $1$ in one entry, forms a basis for our vector space. Therefore, the dimension of the space is $25$. Pressing each square corresponds to adding one of $25$ vectors to the current vector. 
For example, pressing the top left square will add the vector $$\begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$ We are trying to find, therefore, a linear combination of these $25$ vectors that will sum to the current vector (remember $-v = v$ for all our vectors $v$). So, the question I posed to you boils down to asking whether these $25$ vectors span the $25$-dimensional space. Due to standard results in finite-dimensional linear algebra, this is equivalent to asking whether the set of $25$ vectors is linearly independent. The answer is, no, they are not linearly independent. In particular, if you press the buttons highlighted in the following picture, you'll obtain the white grid again, i.e. the additive identity. Therefore, we have a non-trivial linear combination of the vectors, so they are linearly dependent, and hence not spanning. That is, there must exist certain configurations that cannot be attained, i.e. there are invalid puzzles with no solution. I found the linear dependency while playing the game myself, by noticing some of the asymmetries of the automatically generated solutions, even when the problem itself was symmetric. Proving the linear dependence is as easy as showing the above picture. I still don't know of an elegant way to find an example of a puzzle that can't be solved though! So, my proof is somewhat non-constructive, and very easy if you know some linear algebra, and are willing to prove that a set is a vector space.
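The dependence can also be machine-checked. A minimal Python sketch (mine, not part of the original answer): build the $25$ press vectors over $\mathbb{Z}_2$ and compute the rank of the matrix they form by Gaussian elimination; a rank below $25$ means they do not span.

```python
# Sketch (not part of the original answer): build the 25 press vectors over
# Z_2 and compute the rank of the matrix they form, via elimination mod 2.

def press_vector(r, c, n=5):
    """Flat 0/1 toggle pattern produced by pressing square (r, c)."""
    v = [0] * (n * n)
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:
            v[rr * n + cc] = 1
    return v

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2); rows are encoded as Python ints."""
    basis = {}  # pivot bit index -> reduced row with that leading bit
    for row in rows:
        b = int("".join(map(str, row)), 2)
        while b:
            hi = b.bit_length() - 1
            if hi in basis:
                b ^= basis[hi]  # eliminate the leading bit and keep reducing
            else:
                basis[hi] = b
                break
    return len(basis)

presses = [press_vector(r, c) for r in range(5) for c in range(5)]
print(rank_gf2(presses))  # 23: two short of 25, so some grids are unreachable
```

The rank comes out to $23$, so the null space has dimension $2$: there are $2^2 = 4$ press patterns that do nothing, and only $2^{23}$ of the $2^{25}$ grids are solvable.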
{ "source": [ "https://math.stackexchange.com/questions/2919728", "https://math.stackexchange.com", "https://math.stackexchange.com/users/583476/" ] }
2,920,725
Just for fun (inspired by the sub-problem described and answered here): Let's pick three points on a circle, say $A,B,C$. Move one point ($A$ for example) until the triangle becomes isosceles ($A'BC$) with all angles acute: Now we have a triangle with sides $AB$ and $AC$ equal. Pick any of the two, say $AC$, and move $B$ until the triangle becomes isosceles again, with all angles acute: Now we have a triangle with sides $AB$ and $BC$ equal. Pick any of the two, say $BC$, and move $A$ until the triangle becomes isosceles again, with all angles acute: Repeat the same process an infinite number of times. Can we prove that the end result is always an equilateral triangle? It looks so, but I might be wrong. I have checked several initial configurations and always ended up with something looking like an equilateral triangle.
Think about what happens to the maximum difference between angles over time. For simplicity, let's start with an isosceles triangle with angles $x,y,y$. This triangle has "maximum angle difference" $\vert y-x\vert$. Then when we move one of the $y$-angled points, our new triangle will have angles $$y, {x+y\over 2}, {x+y\over 2}$$ since the angle of the point being moved doesn't change. The maximum difference of angles in this new triangle is $$\left\vert {y\over 2}-{x\over 2}\right\vert={1\over 2}\vert y-x\vert.$$ So each time we perform this transformation, the maximum angle difference goes down by a factor of two. Whatever the initial value $\vert y-x\vert$ was, this means that the maximum angle difference goes to zero,$^*$ which in turn means that in the limit the angles are equal. $^*$This is because it's a geometric sequence with ratio in $(-1,1)$ (namely, ${1\over 2}$): if $r\in(-1,1)$ then for any $a$ we have $$\lim_{n\rightarrow\infty}ar^n=0.$$ Note that it would not have been enough to simply know that the maximum angle difference decreases, since not every decreasing sequence goes to zero!
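A quick numerical sketch of the recurrence (mine, not the answerer's): the map sends the isosceles triangle $(x, y, y)$ to $(y, \frac{x+y}{2}, \frac{x+y}{2})$, preserving $x + 2y = 180$ and halving the gap at every step.

```python
# Iterate the angle map from the answer: (x, y, y) -> (y, (x+y)/2, (x+y)/2).
# The invariant x + 2y = 180 is preserved, and the gap |y - x| halves.
x, y = 80.0, 50.0  # any starting isosceles triangle with acute angles, degrees
for _ in range(50):
    prev_gap = abs(y - x)
    x, y = y, (x + y) / 2
    assert abs(abs(y - x) - prev_gap / 2) < 1e-9  # gap halves each step
print(round(x, 6), round(y, 6))  # 60.0 60.0
```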
{ "source": [ "https://math.stackexchange.com/questions/2920725", "https://math.stackexchange.com", "https://math.stackexchange.com/users/401277/" ] }
2,922,494
I came across the following problem today. Flip four coins. For every head, you get $\$1$ . You may reflip one coin after the four flips. Calculate the expected returns. I know that the expected value without the extra flip is $\$2$ . However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds $\$\frac{1}{2}$ to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is $\$\frac{79}{32}$ and I have no idea where this comes from.
Your temptation is right and your gut is wrong. You do get an extra $\frac12$ if you got tails at least once. The probability that you don't have a tail to reflip is $\frac1{16}$, so you get an extra $\frac12\left(1-\frac1{16}\right)=\frac{15}{32}$. This added to the base expectation of $2 = \frac{64}{32}$ gives $\frac{79}{32}$.
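The value $\frac{79}{32}$ can be confirmed by exact enumeration of all $16$ equally likely outcomes (my sketch, not part of the answer):

```python
# Enumerate the 16 equally likely outcomes of four fair flips; whenever a
# tails is available, reflipping it adds an expected 1/2 to the payout.
from fractions import Fraction
from itertools import product

expected = Fraction(0)
for flips in product((0, 1), repeat=4):  # 1 = heads, worth $1
    heads = sum(flips)
    value = Fraction(heads)
    if heads < 4:                        # a tails exists to reflip
        value += Fraction(1, 2)
    expected += value * Fraction(1, 16)
print(expected)  # 79/32
```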
{ "source": [ "https://math.stackexchange.com/questions/2922494", "https://math.stackexchange.com", "https://math.stackexchange.com/users/320332/" ] }
2,922,749
I came across this fact a very long time ago, but I still can't tell why this is happening. Given the standard calculator's numpad: 7 8 9 4 5 6 1 2 3 if you dial any rectangular shape, going only in right angles and each shape consisting of 4 points, then the dialed number is always divisible by 11 without a remainder. Examples of the shapes: 1254, 3179, 2893, 8569, 2871, and so on. It is not allowed to use zero, only digits 1..9. UPDATE: The part of the question below proved to be an error on my part because I did not double-check what the Programmer calculator was showing, and it turned out that it was rounding the results. !!! See the accepted answer for an interesting follow-up to this, which actually expands the use case to a working hexadecimal "keypad", and the other answers for even more various interesting approaches, discovering different aspects of this problem !!! The same rule also works even for the Programmer calculator layout on MS Windows 10 which looks like this: A B 7 8 9 C D 4 5 6 E F 1 2 3 All valid rectangular shapes, for example, A85C, E25C, 39BF, and so on, being divided by 11 still give an integer result! Initially I was thinking that it's just somehow tied to picking digits from the triplets and being just another peculiarity of the decimal base and number 11 and started looking this way, but discovering that it works for the hexadecimal base and even with the hex part of the keyboard layout not obeying exactly the pattern of the decimal part layout, I'm lost. What law is this fun rule based on?
Since you are concerned only with rectangular patterns, you have four digit numbers in a certain base $l$, and you want to check divisibility by $11$, where $11 = l+1$. When you have a four digit number in base $l$, write it as $al^3 + bl^2 + cl + d$, where $0 \leq a,b,c,d < l$. (This represents the $l$-base number $\overline{abcd}$). Now, we have a cute fact: $l^3 + 1$ is a multiple of $l+1$, since $l^3+1 = (l^2 - l + 1)(l+1)$. Furthermore, $l^2 - 1 = (l-1)(l+1)$. Therefore, we make the following rewrite: $$ al^3 + bl^2 + cl + d = a(l^3 + 1) + b(l^2 - 1) + c(l+1) - (a - b + c - d) \\ = (l+1)(...) + ((b+d)-(a+c)) $$ where $l+1 = 11$ in base $l$. Therefore, the remainder when $\overline{abcd}$ is divided by $11$ is $(b+d) - (a+c)$. When you consider four numbers ($1 \to 9$, since I found problems in the letters for hex) in a rectangle and form a four digit number out of them now, can you see why this number $(b+d) - (a+c)$ is in fact zero, therefore giving your desired result? Now that we have pointed out this rectangular pattern, I noted above that some counterexamples did exist for the MS layout of hexadecimal numbers. The issue there was a fairly simple one: the "matrix" of entries did not satisfy the property that $a+c = b+d$ for $a,b,c,d$ going CW/CCW around any $2\times 2$ subrectangle of entries of the matrix. If $l-1$ is not a prime, then we can actually arrange a "matrix" of $l-1$ entries which is not a single column or row, but satisfies this "rectangle property", as we can call it. For this, write $l-1 = ab$ where $a,b \neq 1$, and arrange an $a \times b$ matrix of entries, which we fill in the following fashion: start with $1$ in the bottom-left corner, proceed towards the right filling in $2,3,\ldots$ until you hit the end, then return to the left of the row above, fill in the next number, and repeat until you fill the whole matrix. For $10$, this procedure yields the conventional keyboard pattern. 
For $15 = 5 \times 3$, it would yield $$ \begin{pmatrix} B&C &D & E& F\\ 6&7&8&9&A\\ 1&2&3&4&5 \\ \end{pmatrix} $$ which indeed will satisfy the property that any rectangle has $11$ in that base as a divisor. For example, $A8DF$, $1496$ and $CE97$ are all multiples of $11$. Note that more is true: in fact, every parallelogram, read CW or CCW, leads to a multiple of eleven.
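Both keypads can be machine-checked. A sketch (mine, not the answerer's): enumerate every axis-aligned rectangle, read its corners in either direction from any starting corner, and test divisibility by $l+1$ ($11$ for the numpad; $11_{16} = 17$ for the hex pad above).

```python
# Check every rectangle on a keypad: the 4-digit base-l number read around
# its corners (either direction, any starting corner) is divisible by l + 1.

def rectangles_divisible(pad, base):
    rows, cols = len(pad), len(pad[0])
    for r1 in range(rows):
        for r2 in range(r1 + 1, rows):
            for c1 in range(cols):
                for c2 in range(c1 + 1, cols):
                    cyc = [pad[r1][c1], pad[r1][c2], pad[r2][c2], pad[r2][c1]]
                    for digits in (cyc, cyc[::-1]):       # CW and CCW
                        for s in range(4):                # any starting corner
                            d = digits[s:] + digits[:s]
                            n = ((d[0] * base + d[1]) * base + d[2]) * base + d[3]
                            if n % (base + 1):
                                return False
    return True

numpad = [[7, 8, 9], [4, 5, 6], [1, 2, 3]]
hexpad = [[0xB, 0xC, 0xD, 0xE, 0xF],
          [0x6, 0x7, 0x8, 0x9, 0xA],
          [0x1, 0x2, 0x3, 0x4, 0x5]]
print(rectangles_divisible(numpad, 10))  # True: divisible by 11
print(rectangles_divisible(hexpad, 16))  # True: divisible by 0x11 = 17
```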
{ "source": [ "https://math.stackexchange.com/questions/2922749", "https://math.stackexchange.com", "https://math.stackexchange.com/users/240262/" ] }
2,926,782
There are two statements which to me seem rather symmetric: Let $A$ be a ring, $M$ an $A$-module, and $f : M \to M$ a module homomorphism. If $M$ is Noetherian and $f$ is surjective, then $f$ is injective. If $M$ is Artinian and $f$ is injective, then $f$ is surjective. The proofs also seem symmetric in a sense: in the first case one constructs the increasing chain of submodules $0 \subset \ker f \subset \ker f^2 \subset \dots$ which is strict when $f$ is surjective but not injective. In the second case one uses the injectivity of $f$ to construct the decreasing chain of submodules $M \supset im \, f \supset im \, f^2 \supset \dots$ which is strict when $f$ is injective but not surjective. However, some symmetry is lost in the assertion of the last part ("which is strict when $f$ is __ but not __"). In the first case I use the fact that $\ker f^n = \ker f^{n+1}$ implies that $f$ is injective on $im \, f^n = M$. In the second case I use the fact that $M \supsetneq im \, f$ would imply that $im \, f^n \supsetneq im \, f^{n+1}$, because injective maps preserve strict inclusions. My question is, is there a way to prove one of the statements in the appropriate category/framework such that the other follows from some kind of formulaic reversal of arrows? This is definitely more of a soft question because I'm not sure what this might mean, but the two situations seem symmetric enough that this might be plausible.
Yes! These are both special cases of a general statement: If $M$ is a Noetherian object in an abelian category and $f:M\to M$ is an epimorphism, then $f$ is a monomorphism. Here an object is "Noetherian" if every ascending chain of subobjects stabilizes. The proof is exactly the same as in the case of modules: look at the ascending chain $0 \subset \ker f \subset \ker f^2 \subset \dots$ (though it takes a little more work to prove this chain is strictly ascending in an abstract abelian category than in the case of modules). Now, how does this imply the Artinian version? Well, the opposite category of $A$ -modules is also an abelian category, so we can apply the result in that category. What does it mean for $M$ to be a Noetherian object in the opposite category of $A$ -modules? Well, a subobject is a monomorphism $N\to M$ (up to isomorphism), which would be an epimorphism $M\to N$ in the original category. But such an epimorphism is determined (up to isomorphism) by its kernel, which is a subobject of $M$ . So subobjects of $M$ in the opposite category are naturally in bijection with subobjects in the original category. However, this bijection reverses the inclusion order on subobjects. Indeed, suppose $N\to M$ and $P\to M$ are two subobjects of $M$ in the opposite category, with $N$ contained in $P$ . That means we can factor the map $N\to M$ as $N\to P\to M$ . In the original category, then, this means we can factor the quotient map $M\to N$ as $M\to P\to N$ . This is possible if and only if the kernel of $M\to N$ contains the kernel of $M\to P$ . In other words, $N$ is contained in $P$ as subobjects in the opposite category iff the subobject in the original category corresponding to $P$ is contained in the subobject in the original category corresponding to $N$ . This means that $M$ is Noetherian in the opposite category iff $M$ is Artinian in the original category, since the order on subobjects has been reversed. 
Applying the result in the opposite category, we conclude that if $M$ is Artinian and if $f:M\to M$ is a monomorphism, then $f$ is an epimorphism.
{ "source": [ "https://math.stackexchange.com/questions/2926782", "https://math.stackexchange.com", "https://math.stackexchange.com/users/92774/" ] }
2,926,784
I'm trying to plot this in the complex plane: $$C = \{z \in\mathbb C | z \neq 0,\arg(z^2) \in \left[0, \pi/4\right)\}$$ My work so far: Let $z = re^{(i\theta)}$ $z^2 = r^2(\cos(2\theta) + i\sin(2\theta))$ I know how to plot in the complex plane, but I'm not really sure how to specifically plot this function. Thanks in advance for your help!
{ "source": [ "https://math.stackexchange.com/questions/2926784", "https://math.stackexchange.com", "https://math.stackexchange.com/users/594288/" ] }
2,926,788
I'm in my first few weeks of taking a theoretical course at my school and was wondering what is wrong with my answer to this question. I've been told to show that the language L = {x | x has even length and ends with b} over the alphabet {a,b} is regular. I know I can prove this by showing a DFA that accepts that language, over that alphabet. I came up with this solution: (the letters are in color blocks because the background is black and with a transparent image they wouldn't show up on a black background). https://imgur.com/a/0ZVqpAZ This seems to work on strings I've tested such as: ab abbb ababab bbaabbababab etc. However, my textbook provides a solution with 4 states instead of 3, so I'm wondering where I have gone wrong here? Is there a way to easily check which strings wouldn't work for this DFA? Testing every string out there seems impossible.
{ "source": [ "https://math.stackexchange.com/questions/2926788", "https://math.stackexchange.com", "https://math.stackexchange.com/users/529959/" ] }
2,926,803
Let $A$ be a ring, and let $I_1, I_2, \dots, I_n$ be ideals. I need to show that $I_1 I_2 \dots I_n \subset I_1 I_2 \dots I_{n-1}$. I know that $I_1 I_2 \dots I_n \subset I_1 \cap I_2 \cap \dots \cap I_n$, and also $I_1 I_2 \dots I_{n-1} \subset I_1 \cap I_2 \cap \dots \cap I_{n-1}$. Clearly, $I_1 \cap I_2 \cap \dots \cap I_{n} \subset I_1 \cap I_2 \cap \dots \cap I_{n-1}$, but how can I conclude that $I_1 I_2 \dots I_n \subset I_1 I_2 \dots I_{n-1}$?
{ "source": [ "https://math.stackexchange.com/questions/2926803", "https://math.stackexchange.com", "https://math.stackexchange.com/users/497860/" ] }
2,929,008
I'm asking this question because I was unable to find an answer elsewhere as most questions are about the summation of different irrational numbers, which is not what this question is about. Here, I'm interested in demonstrating that the result of the summation of the same irrational number is always irrational: $\sum_{i=1}^n a$ , where $n$ is a non-negative integer $>0$ and $a$ is an irrational constant.
It is trivially so. If you sum the irrational number $x$ a total of $n$ times ($n$ being an integer, of course), you end up with $nx$. If $nx=\frac ab$ with $a,b$ integers, then $x=\frac{a}{nb}$ would be rational, contradicting the assumption that $x$ is irrational.
{ "source": [ "https://math.stackexchange.com/questions/2929008", "https://math.stackexchange.com", "https://math.stackexchange.com/users/121047/" ] }
2,930,970
I recently came up with this theorem: For any complex polynomial $P$ degree $n$ : $$ \sum\limits_{k=0}^{n+1}(-1)^k\binom{n+1}{k}P(a+kb) = 0\quad \forall a,b \in\mathbb{C}$$ Basically, if $P$ is quadratic, $P(a) - 3P(a+b) + 3P(a+2b) - P(a+3b) = 0$ (inputs of $P$ are consecutive terms of any arithmetic sequence). This can be generalized to any other degrees. Has this been discovered? If yes, what's the formal name for this phenomenon? Is it significant/Are there important consequences of this being true? Can this be generalized to non-polynomials?
In brief: this is well-known, but definitely important. It's easiest to write this in terms of the finite difference operator $\Delta$ : $\Delta P(x)=P(x+1)-P(x)$ . You use $P(x+b)$ instead of $P(x+1)$ , but it's easy to see that these two things are equivalent; to keep things consistent with your notation, I'll write $\Delta_b$ for your operator. The most important feature of the $\Delta_b$ operator is how it affects the degree of a polynomial: Theorem: for any nonconstant polynomial $P(x)$ , the degree of $\Delta_b P(x)$ is one less than the degree of $P(x)$ . Proof outline : Note that the degree of $\Delta_b P(x)$ is no greater than the degree of $P(x)$ . Now, write $P(x) = a_dx^d+Q(x)$ , where $Q(x)$ is a polynomial of degree $d-1$ or less. Then $P(x+b) =a_d(x+b)^d+Q(x+b)$ , so $\Delta_b P(x) = a_d\left((x+b)^d-x^d\right)+\Delta_b Q(x)$ ; by the binomial theorem $(x+b)^d=x^d+{d\choose 1}bx^{d-1}+\ldots$ , so $(x+b)^d-x^d={d\choose 1}bx^{d-1}+\ldots$ is a polynomial of degree at most $d-1$ , and thus $\Delta_bP(x)$ is the sum of two polynomials of degree at most $d-1$ (namely, $a_d\left((x+b)^d-x^d\right)$ and $\Delta_b Q(x)$ ), so it's of degree at most $d-1$ itself. (It's slightly more challenging to prove that the degree of $\Delta_bP(x)$ is exactly $d-1$ when $b \neq 0$ , but this can also be shown.) Why does this matter? Because it can be shown by induction that your sum is exactly the result of applying the $\Delta_b$ operator $d+1$ times, where $d$ is the degree of the polynomial; since each application of $\Delta_b$ reduces the degree by one, then $(\Delta_b)^dP(x)$ is a polynomial of degree zero — a constant — and thus $(\Delta_b)^{d+1}P(x)$ will be identically zero. This is exactly your identity. Now, you may know that the derivative of a polynomial of degree $d$ is also a polynomial of degree $d-1$ . 
It turns out that this isn't a coincidence; $\Delta$ is very similar to a derivative in many ways, with the Newton polynomials ${x\choose d}=\frac1{d!}x(x-1)(x-2)\cdots(x-d+1)$ playing the role of the monomial $x^d$ with respect to the derivative. For more details, I suggest starting with Wikipedia's page on finite difference calculus. In fact, we can also prove the converse (and this answers the question about generalizing to non-polynomials in the negative). I'll work in terms of $\Delta$, rather than $\Delta_b$, but again all the results generalize readily. Note that $\Delta^n P(x)$ only depends on the values of $P(x+i)$ for $i$ an integer between $0$ and $n$; thus, a function can take arbitrary values for $0\lt x\lt1$ and still satisfy the identity; we can't say much about general points. However, it does constrain the values at integers: Theorem: suppose that $\Delta^{d+1}f(x)\equiv 0$ identically. Then there exists a polynomial $P(x)$ of degree $d$ such that $f(n)=P(n)$ for all integers $n$. The proof works by induction. For simplicity's sake, I'll consider all functions as being on $\mathbb{Z}$ now, and not consider non-integer values at all. Note first of all that if $\Delta f(x)=g(x)$, then $f(n)=f(0)+\sum_{i=0}^{n-1}g(i)$. (Proof by induction: the case $n=1$ is true by definition, since $g(0)=\Delta f(0)=f(1)-f(0)$ implies that $f(1)=f(0)+g(0)$. Now, assuming it's true for $n=k$, at $n=k+1$ we have $f(k+1)=f(k)+g(k)$ $=f(0)+\sum_{i=0}^{k-1}g(i)+g(k)$ $=f(0)+\sum_{i=0}^kg(i)$.) In particular, if $\Delta f(x)\equiv 0$ identically, then $f(n)=f(0)$ for all integers $n$; $f()$ is constant on $\mathbb{Z}$. This gives us the base case for our induction; to induct we just need to show that if $\Delta f(x)$ is a polynomial of degree $d$, then $f(x)$ is a polynomial of degree $d+1$. But suppose for concreteness that $\Delta f(x)=P(x)=\sum_{i=0}^da_ix^i$. 
Then $f(n)=f(0)+\sum_{k=0}^{n-1}P(k)$ $=f(0)+\sum_{k=0}^{n-1}\left(\sum_{i=0}^da_ik^i\right)$ $=f(0)+\sum_{i=0}^da_i\left(\sum_{k=0}^{n-1}k^i\right)$ . Now, for each $i$ the sum $\sum_{k=0}^{n-1}k^i$ in parentheses in this last expression is known to be a polynomial of degree $i+1$ (see e.g. https://en.wikipedia.org/wiki/Faulhaber%27s_formula ), so the whole expression is a polynomial of degree $d+1$ , as was to be proved.
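The identity from the question can be checked directly with exact rational arithmetic. A sketch (mine, not the answerer's); the particular cubic $P$ and the values of $a, b$ are arbitrary choices:

```python
# Check the alternating-sum identity: for a degree-n polynomial P,
# sum_{k=0}^{n+1} (-1)^k C(n+1, k) P(a + k b) vanishes, while stopping one
# difference earlier does not.
from fractions import Fraction
from math import comb

def alt_sum(P, a, b, n):
    return sum((-1) ** k * comb(n + 1, k) * P(a + k * b) for k in range(n + 2))

P = lambda x: 3 * x**3 - 5 * x**2 + 7 * x - 2   # degree 3
a, b = Fraction(5, 7), Fraction(-2, 3)          # arbitrary rational a, b
print(alt_sum(P, a, b, 3))  # 0
print(alt_sum(P, a, b, 2))  # nonzero: three differences don't kill a cubic
```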
{ "source": [ "https://math.stackexchange.com/questions/2930970", "https://math.stackexchange.com", "https://math.stackexchange.com/users/422749/" ] }
2,932,482
What is the probability that a sheepdog performs at least $1$ of these tasks successfully? My approach is to subtract the probability of performing at most $1$ of these tasks successfully from the probability of performing all $4$ tasks successfully. $P(\text{fetch})=.9, P(\text{drive})=.7, P(\text{herd})=.84, P(\text{separate})=.75$ . The complement of these four probabilities is, $.1,.3,.16,$ and $,.25$ , respectively. So the probability that the sheepdog performs all four tasks successfully is simply, $(.9)(.7)(.84)(.75)$ . The probability that the sheepdog performs at most $1$ task successfully can be split into $4$ cases. Either the sheepdog performs the fetch task (and not the other 3) successfully, performs the drive task, performs the herd task, or performs the separate task. This would look like: $(.9)(.3)(.16)(.25)+(.1)(.7)(.16)(.25)+(.1)(.3)(.84)(.25)+(.1)(.3)(.16)(.75)$ Subtracting this from the case in which the sheepdog performs all four tasks would yield: $(.9)(.7)(.84)(.75)-[(.9)(.3)(.16)(.25)+(.1)(.7)(.16)(.25)+(.1)(.3)(.84)(.25)+(.1)(.3)(.16)(.75)]$ . Is this correct?
The complement of "at least $1$" is not "at most $1$". The complement is that none of the tasks is performed successfully. Hence just compute $$1-\prod_{i=1}^4 (1-p_i).$$
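Numerically, with the probabilities from the question (my sketch):

```python
# Probability of at least one success = 1 - P(every task fails).
p = [0.9, 0.7, 0.84, 0.75]
prob_all_fail = 1.0
for pi in p:
    prob_all_fail *= 1 - pi
print(1 - prob_all_fail)  # ≈ 0.9988
```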
{ "source": [ "https://math.stackexchange.com/questions/2932482", "https://math.stackexchange.com", "https://math.stackexchange.com/users/432062/" ] }
2,934,741
Question: Solve: $xy+x+y=23\tag{1}$ $yz+y+z=31\tag{2}$ $zx+z+x=47\tag{3}$ My attempt: By adding all three we get $$\sum xy +2\sum x =101$$ Multiplying $(1)$ by $z$, $(2)$ by $x$, and $(3)$ by $y$ and adding them all gives $$3xyz+ 2\sum xy =31x+47y+23z$$ Then, from the above two equations, after eliminating the $\sum xy$ term we get $$35x+51y+27z=202+3xyz$$ After that, subtracting $(1)\times 3z$ from the equation just above (to eliminate the $3xyz$ term) gives $$35x +51y-3z(14+x+y)=202\implies (x+y)[35-3z]+16y-42z=202$$ I tried pairwise subtraction of $(1),(2)$ and $(3)$ but it also seems not to work. Please give me some hint so that I can proceed, or provide the answer.
Hint: Put $$X=x+1$$ $$Y=y+1$$ $$Z=z+1$$ Then we have $$XY=24$$ $$YZ=32$$ $$ZX=48$$ Can you take it from there?
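Taking the hint the rest of the way (my sketch, not part of the answer): multiplying the three transformed equations gives $(XYZ)^2 = 24 \cdot 32 \cdot 48 = 36864$, so $XYZ = \pm 192$, and dividing by each pairwise product recovers $X, Y, Z$.

```python
# Finish from the hint: (XYZ)^2 = 24 * 32 * 48, so XYZ = ±192; dividing by
# each pairwise product recovers X, Y, Z, and x = X - 1 etc.
from math import isqrt

XY, YZ, ZX = 24, 32, 48
root = isqrt(XY * YZ * ZX)  # 192
solutions = []
for XYZ in (root, -root):
    X, Y, Z = XYZ // YZ, XYZ // ZX, XYZ // XY
    x, y, z = X - 1, Y - 1, Z - 1
    # verify against the original system
    assert x*y + x + y == 23 and y*z + y + z == 31 and z*x + z + x == 47
    solutions.append((x, y, z))
print(solutions)  # [(5, 3, 7), (-7, -5, -9)]
```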
{ "source": [ "https://math.stackexchange.com/questions/2934741", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,936,156
I was watching a video and the lecturer discusses the function $$\frac{1}{1+x^2} = \sum_{n=0}^{\infty} {(-1)^n x^{2n}}$$ $$|x| < 1$$ explaining that the radius of convergence for this Taylor series centered at $x=0$ is 1 because it is being affected by $i$ and $-i$ . Then, he goes on to talk about how real analysis is a glimpse into complex analysis. In the same video, the lecturer also provides the following example where a complex function is defined by using real Taylor series: \begin{align} e^{ix} &= \cos(x) +i \sin(x) \\ &= \sum_{n=0}^\infty \frac{(ix)^n}{n!} \\ &= \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{2n!} + i \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!} \end{align} Can someone help elaborate by what the lecturer probably meant? What is the connection between real analysis and complex analysis? I understand that there are two different types of analyticity: real analytic and complex analytic. Are they connected?
This is an interesting question, but one that might be hard to address completely. Let me see what I can do to help. Real Power Series: The easiest way to address the connection between the two subjects is through the study of power series, as has already been alluded to in your question. A power series is a particular kind of infinite sum (there are many different kinds of these) of the form $$ f(x) = \sum_{k=0}^{\infty}a_k x^k. $$ They get their name from the fact that we are adding together powers of $x$ with different coefficients. In real analysis the argument of such a function (the " $x$ " in $f(x)$ ) is taken to be a real number. And depending on the coefficients that are being multiplied with the powers of $x$ , we get different intervals of convergence (intervals on which the sequence of partial sums converges.) For example, if $a_k$ is the $k$ th Fibonacci number, then the radius of convergence ends up being $1/\phi$ , where $\phi = (1+\sqrt{5})/2$ is the "golden ratio". The most common kind of power series that come up in calculus (and real analysis) are Taylor series or Maclaurin series . These series are created to represent (a portion of) some differentiable function. Let me try to make this a little more concrete. Pick some function $f(x)$ that is infinitely differentiable at a fixed value, say at $x=a$ . The Taylor series corresponding to $f(x)$ , centered at $a$ , is given by $$ \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}(x-a)^k. $$ A Maclaurin series is just a Taylor series where $a=0$ . After playing around with the series a bit, you may notice a few things about it. When $x=a$ , the only non-zero term in the series is the first one: the constant $f(a)$ . This means that no matter what the radius of convergence is for the series, it will at least agree with the original function at this one point.
Taking derivatives of the power series and evaluating them at $x=a$ , we see that the power series and the original function $f(x)$ have the same derivatives at $a$ . This is by construction, and not a coincidence. If the radius of convergence is positive, then on the interval of convergence the series will converge to the original function $f(x)$ . In this way, we can say that $f(x)$ "equals" its Taylor series, understanding that this equality may only hold on some interval centered at $a$ , and perhaps not on the entire domain that $f(x)$ is defined on. You've already seen such an example with $(1+x^2)^{-1}$ . In some extreme cases, such as with $\sin x$ and $\cos x$ as mentioned above, this equality ends up holding for ALL real values of $x$ , and so the convergence is on all of $\mathbb{R}$ and there is no harm completely equating the function with its Taylor series. Even if the radius of convergence of a particular Taylor series (centered at $a$ ) is finite, it does not mean you cannot take other Taylor series (centered at other values than $a$ ) that also have a positive radius of convergence. For example, even though the Maclaurin series for $(1+x^2)^{-1}$ does not converge outside of the interval of radius 1, you can compute the Taylor series centered at $a=1$ , \begin{align} \frac{1}{1+x^2} &= \sum_{k=0}^{\infty} \left( \frac{1}{k!} \frac{d^k}{dx^k}\left(\frac{1}{1+x^2}\right)\Bigg|_{x=1}(x-1)^k \right) \\ &= \frac{1}{2}-\frac{1}{2}(x-1)+\frac{1}{4}(x-1)^2-\frac{1}{8}(x-1)^4+\cdots, \quad \text{for $|x-1|<\sqrt{2}$}. \end{align} You will then find that this new power series converges on a slightly larger radius than 1 (that's the $\sqrt{2}$ mentioned above), and that the two power series (one centered at 0, the other centered at 1) overlap and agree for certain values of $x$ . The big take away that you should have about power series and Taylor series is that they are one and the same.
Define a function by a power series, and then take its Taylor series centered at the same point; you will get the same series. Conversely, any infinitely-differentiable function that has a Taylor series with positive radius of convergence is uniquely determined by that power series. This is where complex analysis begins to come into play... Complex Power Series: Complex numbers have many similar properties to real numbers. They form an algebra (you can do arithmetic with them); the only number you still cannot divide by is zero; and the absolute value of a complex number still tells you the distance of that number from $0$ . In particular there is nothing stopping you from defining power series of complex numbers, with $z=x+iy$ : $$ f(z) = \sum_{k=0}^{\infty}c_k z^k. $$ The only difference now is that the coefficients $c_k$ can be complex numbers, and the radius of convergence now refers to the radius of a circle (as opposed to the radius of an interval). Things may seem exactly like in the real-valued situation, but there is more lurking beneath the surface. For starters, let me define some new vocabulary terms. We say that a complex function is complex differentiable at some fixed value $z=w$ if the following limit exists: $$ \lim_{z\to w}\frac{f(z)-f(w)}{z-w}. $$ If this limit exists, we denote the value of the limit by $f'(w)$ . This should look familiar, as it is the same limit definition for real-valued derivatives. In the same way that we could create Taylor series for real-valued functions, we can also create Taylor series (centered at $z=w$ ) for complex-valued functions, provided the functions have an infinite number of derivatives at $w$ (sometimes referred to as being holomorphic at $w$ ): $$ \sum_{k=0}^{\infty}\frac{f^{(k)}(w)}{k!}(z-w)^k. $$ If the Taylor series defined above has a positive radius of convergence then we say that $f$ is analytic at $w$ . This vocabulary is also used in the case of real-valued Taylor series.
Perhaps the biggest surprise in complex analysis is that the following conditions are all equivalent : $f(z)$ is complex differentiable at $z=w$ (more precisely, on some neighborhood of $w$ ). $f(z)$ is holomorphic at $w$ . That is, $f$ has an infinite number of derivatives at $z=w$ . In real analysis this condition is sometimes referred to as being "smooth". $f(z)$ is analytic at $z=w$ . That is, its Taylor series converges to $f(z)$ with some positive radius of convergence. This means that being differentiable in the complex sense is a much harder thing to accomplish than in the real sense. Consider the contrast with the "real-valued" equivalents of the points made above: In real analysis there are a number of functions with only finitely many derivatives. For example, $x^2 \sin(1/x)$ has only one derivative at $x=0$ , and $f'(x)$ is not even continuous , let alone twice differentiable. There are also real-valued functions that are smooth (infinitely-differentiable) yet do not have a convergent Taylor series. An example is the function $e^{-1/x^2}$ which is smooth for every $x$ , but for which every order of derivative at $x=0$ is equal to zero; this means every coefficient in the Maclaurin series is zero, so the series converges to the zero function — not to $e^{-1/x^2}$ — on any interval around $0$ . These kinds of pathologies do not occur in the complex world: one derivative is as good as an infinite number of derivatives; differentiability at one point translates to differentiability on a neighborhood of that point. Laurent Series: A natural question that one might ask is what dictates the radius of convergence for a power series? In the real-valued case things seemed to be fairly unpredictable. However, in the complex-valued case things are much more elegant. Let $f(z)$ be differentiable at some point $z=w$ . Then the radius of convergence for the Taylor series of $f(z)$ centered at $w$ will be the distance to the nearest complex number at which $f(z)$ fails to be differentiable. Think of it like dropping a pebble into a pool of water.
The ripples will extend radially outward from the initial point of differentiability, all the way until the circular edge of the ripple hits the first " singularity " -- a point where $f(z)$ fails to be differentiable. Take the complex version of our previous example, $$ f(z) = \frac{1}{1+z^2}. $$ This is a rational function, and will be smooth for all values of $z$ where the denominator is non-zero. Since the only roots of $z^2 + 1$ are $z = i$ and $z = -i$ , then $f(z)$ is differentiable/smooth/analytic at all values $w\neq \pm i$ . This is precisely why the radius of convergence for the real-valued Maclaurin series is 1, as you've already noted: the shortest distance from $z=0$ to $z=\pm i$ is 1. The real-valued Maclaurin series is just a "snapshot" or "sliver" of the complex-valued Taylor series centered at $z=0$ . This is also why the radius of convergence for the real-valued Taylor series increases when you move away from zero; the distance to $\pm i$ becomes greater, and so the complex-valued Taylor series can converge on a larger disk. So now should come the question: when exactly does a complex function fail to be differentiable? Without going into too many details from complex analysis, suffice it to say that complex functions fail to be differentiable when one of three things occurs: The function has a "singularity" (think division by zero). The function is defined in terms of the complex conjugate, $\bar{z}$ . For example, $f(z) = |z|^2 = z\,\bar{z}$ . The function involves logarithms or non-integer exponents (these two ideas are actually related). Number 2. is perhaps the most egregious of the three issues, and it means that functions like $f(z) = |z|$ are actually differentiable nowhere . This is in stark contrast to the real-valued version $f(x) = |x|$ , which is differentiable everywhere except at $x=0$ where there is a "corner". Number 3.
is actually not too bad, and it turns out that these kinds of functions are usually differentiable everywhere except along certain rays or line segments. To get into it further, however, will take us too far off course. Number 1. is the best case scenario, and is the focus of this section of our discussion. Essentially, singularities are places where division by zero has occurred, and the extent to which something has "gone wrong" can be quantified. Let me try to elaborate. Consider the previous example of $f(z) = (1+z^2)^{-1}$ . Again, since the denominator can factor into the product $(z+i)(z-i)$ , then this means we could "erase" the singularity at $z=i$ by multiplying the function by a copy of $z-i$ . In other words if $$ g(z) = (z-i)f(z) = \frac{z-i}{1+z^2}, $$ then $g(z) = \frac{1}{z+i}$ for all $z\neq i$ , and $\lim_{z\to i}g(z) = \frac{1}{2i} = -\frac{i}{2}$ exists, and is no longer a singularity. Similarly, if $f(z) = 1/(1+z^2)^3$ , then we again have singularities at $z=\pm i$ . This time, however, multiplying $f(z)$ by only one copy of $z-i$ will not remove the singularity at $z=i$ . Instead, we would need to multiply by three copies to get $$ g(z) = (z-i)^3 f(z) = \frac{(z-i)^3}{(1+z^2)^3}, $$ which again means that $g(z) = \frac{1}{(z+i)^3}$ for all $z\neq i$ , and that $\lim_{z\to i}g(z) = \frac{1}{(2i)^3} = \frac{i}{8}$ exists. Singularities like this --- ones that can be "removed" through the use of multiplication of a finite number of linear terms --- are called poles . The order of the pole is the minimum number of linear terms needed to remove the singularity. The real-valued Maclaurin series for $\sin x$ is given by $$ \sin x = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!}x^{2k+1}, $$ and has an infinite radius of convergence. This means that the complex version $$ \sin z = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 - \cdots $$ also has an infinite radius of convergence (such functions are called entire ) and hence no singularities.
From here it's easy to see that the function $(\sin z)/z$ is analytic as well, with Taylor series $$ \frac{\sin z}{z} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}z^{2k} = 1 - \frac{1}{3!}z^2 + \frac{1}{5!}z^4 - \cdots $$ However, a function like $(\sin z)/z^3$ is not analytic at $z=0$ , since dividing $\sin z$ by $z^3$ would give us the following expression: $$ \frac{\sin z}{z^3} = \frac{1}{z^2} - \frac{1}{3!} + \frac{1}{5!}z^2 - \cdots $$ But notice that if we were to subtract the term $1/z^2$ from both sides we would be left again with a proper Taylor series $$ \frac{\sin z}{z^3} - \frac{1}{z^2} = \frac{\sin z - z}{z^3} = -\frac{1}{3!} + \frac{1}{5!}z^2 - \frac{1}{7!}z^4 + \cdots $$ This idea of extending the idea of Taylor series to include terms with negative powers of $z$ is what is referred to as a Laurent series . A Laurent series is a power series in which the powers of $z$ are allowed to take on negative values, as well as positive: $$ f(z) := \sum_{k = -\infty}^{\infty} c_k z^k $$ In this way we can expand complex functions around singular points in a fashion similar to expanding around analytic points. A pole, it turns out, is a singular point for which there are a finite number of terms with negative powers, such as with $(\sin z)/z^3$ . If, however, an infinite number of negative powers are needed to fully express a Laurent series, then this type of singular point is called an essential singularity . An excellent example of such a function can be made by taking an analytic function (one with a Taylor series) and replacing $z$ by $1/z$ : \begin{align} \sin(1/z) &= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}(1/z)^{2k+1} \\ &= \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}z^{-(2k+1)} \\ &= \frac{1}{z} - \frac{1}{3!\,z^3} + \frac{1}{5!\,z^5} - \cdots \end{align} These kinds of singularities are quite severe and the behavior of complex functions around such a point is rather erratic. This also explains why the real-valued function $e^{-1/x^2}$ was so pathological.
The Taylor series for $e^z$ is given by $$ e^z = \sum_{k=0}^{\infty}\frac{1}{k!}z^k $$ and so \begin{align} e^{-1/z^2} &= \sum_{k=0}^{\infty}\frac{1}{k!}(-1/z^2)^k \\ &= \sum_{k=0}^{\infty}\frac{1}{k!}(-1)^k z^{-2k} \\ &= \sum_{k=-\infty}^{0}\frac{1}{(-k)!}(-1)^{-k} z^{2k}. \end{align} Hence there is an essential singularity at $z=0$ , and so even though the real-valued version is smooth at $x=0$ , there is no hope of differentiability in a disk around $z=0$ .
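As an added numerical illustration of the "distance to the nearest singularity" rule (not part of the original answer): using the partial-fraction decomposition $\frac{1}{1+z^2} = \frac{1}{2i}\left(\frac{1}{z-i} - \frac{1}{z+i}\right)$ , the Taylor coefficients of $1/(1+x^2)$ centered at $a=1$ can be computed with complex arithmetic, and the partial sums converge at $x = 2.3$ — inside the disk $|x-1|<\sqrt{2}$ even though $|x|>1$ puts $x$ outside the Maclaurin disk.

```python
def coeff(k):
    # k-th Taylor coefficient of 1/(1+x^2) centered at a = 1,
    # read off from the geometric series of each partial fraction
    return ((-1) ** k / 2j) * ((1 - 1j) ** -(k + 1) - (1 + 1j) ** -(k + 1))

def partial_sum(x, n):
    return sum(coeff(k) * (x - 1) ** k for k in range(n))

x = 2.3                          # |x - 1| = 1.3 < sqrt(2), but |x| > 1
exact = 1 / (1 + x ** 2)
approx = partial_sum(x, 300)
assert abs(approx.imag) < 1e-9   # the coefficients are (numerically) real
assert abs(approx.real - exact) < 1e-8

# first coefficients: 1/2 and -1/2, matching f(1) and f'(1)
assert abs(coeff(0) - 0.5) < 1e-12 and abs(coeff(1) + 0.5) < 1e-12
```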
{ "source": [ "https://math.stackexchange.com/questions/2936156", "https://math.stackexchange.com", "https://math.stackexchange.com/users/386180/" ] }
2,936,741
$7$ fishermen caught exactly $100$ fish and no two had caught the same number of fish. Prove that there are three fishermen who have captured together at least $50$ fish. Try: Suppose $k$ th fisher caught $r_k$ fishes and that we have $$r_1<r_2<r_3<r_4<r_5<r_6<r_7$$ and let $r(ijk) := r_i+r_j+r_k$ . Now suppose $r(ijk)<49$ for all triples $\{i,j,k\}$ . Then we have $$r(123)<r(124)<r(125)<r(345)<r(367)<r(467)<r(567)\leq 49$$ so $$300\leq 3(r_1+\cdots+r_7)\leq 49+48+47+46+45+44+43= 322$$ and no contradiction. Any idea how to resolve this? Edit: Actually we have from $r(5,6,7)\leq 49$ that $r(4,6,7)\leq 48$ and $r(3,6,7)\leq 47$ and then $r(3,4,5)\leq r(3,6,7) - 4 \leq 43$ and $r(1,2,5)\leq r(3,4,5)-4\leq 39$ and $r(1,2,4)\leq 38$ and $r(1,2,3)\leq 37$ so we have: $$300\leq 49+48+47+43+39+38+37= 301$$ but again no contradiction.
Let's work with the lowest four numbers instead of the other suggestions. Supposing there is a counterexample, then the lowest four must add to at least $51$ (else the highest three add to $50$ or more). Since $14+13+12+11=50$ the lowest four numbers would have to include one number at least equal to $15$ to get a total as big as $51$ . Then the greatest three numbers must be at least $16+17+18=51$ , which is a contradiction to the assumption that there exists a counterexample. The examples $18+17+15+14+13+12+11=100$ and $19+16+15+14+13+12+11=100$ show that the bound is tight.
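As an added sanity check (not a replacement for the proof): a brute-force enumeration over all ways seven distinct non-negative catches can total $100$ confirms that the three largest always sum to at least $50$ , and that $50$ is attained. (Allowing a catch of $0$ only enlarges the search space.)

```python
def distinct_partitions(total, k, lo=0):
    """Yield increasing k-tuples of distinct integers >= lo summing to total."""
    if k == 1:
        if total >= lo:
            yield (total,)
        return
    v = lo
    # smallest remaining sum for value v is v + (v+1) + ... + (v+k-1)
    while k * v + k * (k - 1) // 2 <= total:
        for rest in distinct_partitions(total - v, k - 1, v + 1):
            yield (v,) + rest
        v += 1

top3_sums = [sum(p[4:]) for p in distinct_partitions(100, 7)]
assert min(top3_sums) == 50   # the bound holds and is tight
```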
{ "source": [ "https://math.stackexchange.com/questions/2936741", "https://math.stackexchange.com", "https://math.stackexchange.com/users/463553/" ] }
2,936,768
Trying to apply Cavalieri's method of indivisibles to calculate the volume of a cylinder with radius $R$ and height $h$ , I get the following paradoxical argument. A cylinder with radius $R$ and height $h$ can be seen as a solid obtained by rotating a rectangle with height $h$ and base $R$ about its height. Therefore, the volume of the cylinder can be thought as made out of an infinity of areas of such rectangles of infinitesimal thickness rotated for $360^\circ$ ; hence, the volume $V$ of the cylinder should the area of the rectangle $A_\text{rect} = R \cdot h$ multiplied by the circumference of the rotation circle $C_\text{circ} = 2\pi R$ : \begin{align} V = A_\text{rect} \cdot C_\text{circ} = 2 \pi R^2 \cdot h \end{align} Of course, the right volume of a cylinder with radius $R$ and height $h$ is \begin{align} V = A_\text{circ} \cdot h = \pi R^2 \cdot h \end{align} where $A_\text{circ} = \pi R^2$ is the area of the base circle of the cylinder. Question: Where is the error in my previous argument based on infinitesimals?
The rotation contributes to the volume by $2\pi R$ only for the side of the rectangle that is opposite to its rotation axis. The circumference covered by each point of the rectangle depends on its distance from the rotation axis. Keeping an infinitesimal approach, think of the rectangle as made out of an infinity of vertical lines of infinitesimal thickness $dr$ at a distance $r$ (with $0 \leq r \leq R$ ) from the rotation axis. Every vertical line (or infinitesimal rectangle) rotates for $2\pi r$ , so its contribution to the volume of the cylinder is $dV = h \cdot dr \cdot 2 \pi r$ . Therefore, the volume of the cylinder is the infinite sum of such infinitesimal volumes, i.e. \begin{align} V = \int dV = 2 \pi h \int_0^R r dr = 2 \pi h \left[\frac{r^2}{2} \right]^R_0 = \pi R^2 h. \end{align}
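As an added numerical illustration: approximating the "infinite sum of shells" by a finite Riemann sum reproduces $\pi R^2 h$ . The midpoint rule is even exact here up to rounding, since the integrand $2\pi r h$ is linear in $r$ .

```python
import math

def cylinder_volume_shells(R, h, n=100_000):
    """Sum the volumes 2*pi*r*h*dr of n thin cylindrical shells (midpoint rule)."""
    dr = R / n
    return sum(2 * math.pi * ((i + 0.5) * dr) * h * dr for i in range(n))

R, h = 3.0, 5.0
exact = math.pi * R ** 2 * h
assert abs(cylinder_volume_shells(R, h) - exact) < 1e-6 * exact
```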
{ "source": [ "https://math.stackexchange.com/questions/2936768", "https://math.stackexchange.com", "https://math.stackexchange.com/users/584090/" ] }
2,936,783
I failed to find any step-by-step demonstration of the following equality: $|e^z| = e^x$ Feedback: I was doing something really stupid: In the calculus of the module I was using $i^2$ , resulting in $|e^x|\sqrt{\cos^2y - \sin^2y}$
Write $z = x + iy$ . Then $e^z = e^x e^{iy} = e^x(\cos y + i \sin y)$ , so $$|e^z| = e^x\,|\cos y + i \sin y| = e^x \sqrt{\cos^2 y + \sin^2 y} = e^x.$$ Note that the modulus of a complex number $a + bi$ is $\sqrt{a^2 + b^2}$ , not $\sqrt{a^2 + (bi)^2}$ : the imaginary unit is not squared inside the square root. Using $i^2 = -1$ there is exactly what produced the incorrect $\sqrt{\cos^2 y - \sin^2 y}$ in your attempt.
{ "source": [ "https://math.stackexchange.com/questions/2936783", "https://math.stackexchange.com", "https://math.stackexchange.com/users/58925/" ] }
2,937,751
So my friend sent me this really interesting problem. It goes: Evaluate the following expression: $$ \sum_{a=2}^\infty \sum_{b=1}^\infty \int_{0}^\infty \frac{x^{b}}{e^{ax} \ b!} \ dx .$$ Here is my approach: First evaluate the integral: $$ \frac{1}{b!} \int_0^\infty \frac{x^b}{e^{ax}}\ dx.$$ This can be done using integration by parts and we get: $$ \frac{1}{b!} \frac{b}{a} \int_0^\infty \frac{x^{b-1}}{e^{ax}}\ dx.$$ We can do this $ b $ times until we get: $$ \frac{1}{b!} \frac{(b)(b-a).....(b-b+1)}{a^b} \int_0^\infty \frac{x^{b-b}}{e^{ax}}\ dx.$$ and hence we end up with: $$ \frac{1}{b!} \frac{b!}{a^b}\qquad\left(\frac{-1 \ e^{-ax}}{a}\Big|_0^\infty\right) = \frac{1}{a^{b+1}}.$$ Now we can apply the sum of GP to infinity formula and we get: $$ \sum_{a=2}^\infty \sum_{b=1}^\infty \frac{1}{a^{b+1}} = \sum_{a=2}^\infty \frac{\frac{1}{a^{2}}}{1-\frac{1}{a}}.$$ This is a telescoping series and we end up with $$ \frac{1}{a-1} = \frac{1}{2-1} = 1.$$ Do you guys have any other ways of solving this problem? Please do share it here.
Since $\frac{x^b}{e^{ax} b!}$ is non-negative, Tonelli's theorem for iterated integrals/sums allows us to interchange integrals and sums without worry. Then: \begin{align} &\sum_{a=2}^\infty \sum_{b=1}^\infty \int_{0}^\infty \frac{x^{b}}{e^{ax} \ b!} \ dx \\ &=\int_{0}^\infty \sum_{a=2}^\infty e^{-ax} \sum_{b=1}^\infty \frac{x^{b}}{ \ b!} \ dx \\ &= \int_{0}^\infty \underbrace{\left(\sum_{a=2}^\infty (e^{-x})^a\right)}_{\text{geometric series}} \overbrace{\left(\sum_{b=0}^\infty \frac{x^{b}}{ \ b!}-1\right)}^{\text{series definition of $e^x$}} \ dx \\ &= \int_{0}^\infty \frac{1}{e^x(e^x-1)}(e^{x}-1)dx \\ &= \int_0^\infty e^{-x} dx \\&= 1.\end{align}
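As an added numerical cross-check of both steps (the inner integral equals $1/a^{b+1}$ , and the remaining double sum telescopes to $1$ ):

```python
import math

def inner_integral(a, b, upper=60.0, n=200_000):
    # midpoint-rule approximation of the integral of x^b e^(-a x) / b! over [0, inf)
    h = upper / n
    total = sum(((i + 0.5) * h) ** b * math.exp(-a * (i + 0.5) * h) for i in range(n))
    return total * h / math.factorial(b)

# the inner integral evaluates to 1 / a^(b+1)
for a, b in [(2, 1), (2, 3), (3, 2)]:
    assert abs(inner_integral(a, b) - 1 / a ** (b + 1)) < 1e-5

# summing 1/a^(b+1) over b gives 1/(a(a-1)), which telescopes: up to a = A it is 1 - 1/A
A = 500
partial = sum(1 / (a * (a - 1)) for a in range(2, A + 1))
assert abs(partial - (1 - 1 / A)) < 1e-12
```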
{ "source": [ "https://math.stackexchange.com/questions/2937751", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,937,783
I've started my topology classes like one month ago, so please excuse me if this is a silly question to ask, but I need to understand it to move forward. I have an exercise that states: Let $(X,\tau)$ be a topological space. Consider $Y=U\sqcup\{a\}$ , and $\tau'=\{X\sqcup \{a\}| X\in\tau\}$ . Prove that $(Y,\tau')$ is a topology. Okay, so here on $\tau'$ we're taking an open set of $\tau$ . That's what they want us to understand from that $X\in\tau$ . But two exercises below that, we have another one: Let $A$ and $B$ subspaces of a topological space $X$ (that is, $A$ and $B \subset (X,\tau)$ ). Prove the following $i)$ $Int(Int(A))=Int(A)$ , $Int(A\cap B)=Int(A)\cap Int(B), \dots$ And more equalities that I'm asked to prove (7 more), but that's not the point on here. What I can understand now is that they want me to take two arbitrary subsets, open or closed . So what does it mean to take a set from a topology? It's always open? Or can be open or closed? Sorry if I didn't explained it well, I'll answer further question if something can't be understood. Thanks for your time.
By definition, the elements of a topology $\tau$ are precisely the open sets of the space: writing $U \in \tau$ means exactly that $U$ is open. So in your first exercise, taking $X \in \tau$ means taking an open set of the original space. In the second exercise, however, $A$ and $B$ are arbitrary subsets of $X$ — nothing is assumed about them. A subset of a topological space need not be open or closed (and it can be both, like $\emptyset$ and $X$ itself). That is exactly why the interior operator appears there: $\operatorname{Int}(A)$ is the largest open set contained in $A$ , and it is defined for every subset $A$ , open or not. In short: "a set from the topology" is always open; "a subset of the space" can be anything.
{ "source": [ "https://math.stackexchange.com/questions/2937783", "https://math.stackexchange.com", "https://math.stackexchange.com/users/120603/" ] }
2,937,788
In Exercise 1.16 in Hall's book Lie Group, Lie Algebras, and Representations , we are asked to show that if $A$ commutes with every matrix $X$ of the form \begin{align} \begin{pmatrix}x_1 & x_2+ix_3 \\ x_2-ix_3 & -x_1\end{pmatrix}, \end{align} (where $x_1,x_2,x_3\in\mathbb{R}$ ) then $A$ commutes with every element of $M_2(\mathbb{C})$ . It is possible to write $A=\begin{pmatrix}\alpha & \beta \\ \gamma & \delta\end{pmatrix}$ and use the commutativity to exploit restrictions on $\alpha,\beta,\gamma,\delta$ . However, is there any other way to prove this without explicitly investigating $\alpha,\beta,\gamma,\delta$ ? Thanks in advance for any comment, hint and answer.
Yes. Notice that the given matrices are exactly the real linear combinations $x_1 \sigma_3 + x_2 \sigma_1 + x_3 \sigma_2$ of the Pauli matrices. If $A$ commutes with each $\sigma_j$ , then $A$ commutes with every complex linear combination of them, since $[A, cX + dY] = c[A,X] + d[A,Y]$ . Moreover $A$ trivially commutes with the identity $I$ . Since $\{I, \sigma_1, \sigma_2, \sigma_3\}$ is a basis of $M_2(\mathbb{C})$ over $\mathbb{C}$ (four linearly independent matrices in a four-dimensional space), $A$ commutes with every element of $M_2(\mathbb{C})$ — without ever writing out the entries of $A$ .
{ "source": [ "https://math.stackexchange.com/questions/2937788", "https://math.stackexchange.com", "https://math.stackexchange.com/users/179461/" ] }
2,937,799
From my understanding, a sequence $\{a_{n}\}$ is said to converge to $a$ if for all $\epsilon > 0$ , there exists an index $N$ such that for all $n \geq N$ , we have $$|a_{n} - a| < \epsilon.$$ But, why is this different from the following definition: There exists an index $N$ for every $\epsilon > 0$ , such that for all $n \geq N$ , we have $$|a_{n} - a| < \epsilon $$ Pretty much, I switched the $\forall \epsilon > 0$ quantifier with the $\exists N$ quantifier, and the definition becomes invalid, but why?
I am not sure why the other current answers are interpreting your second definition as invalid, because as a native English speaker I interpret it to mean exactly the same (logically speaking) as the first definition. In particular, you did not switch the quantifiers, even though the surface text appears to have them switched. You wrote: There exists an index $N$ for every $\epsilon > 0$ , such that for all $n \geq N$ , we have ... This means that there is some $N$ for every $\epsilon > 0$ , not necessarily the same $N$ for all $\epsilon > 0$ . So it conveys the same logical structure as the other definition, whereby for any given $\epsilon > 0$ there is an index $N$ such that ... It would become invalid if you had the following English phrasing (which switches the quantifiers): There exists an index $N$ such that, for every $\epsilon > 0$ and for all $n \geq N$ , we have ... But you did not use that phrasing, so that does not imply anything about yours. Just for fun, here is a quote (apocryphally attributed to Albert Einstein) using exactly that phrase structure: Stay away from negative people. They have a problem for every solution. It is clearly understood by everyone to mean: Stay away from negative people, because for every solution they will find some problem with it.
{ "source": [ "https://math.stackexchange.com/questions/2937799", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,939,900
I was solving the exercises in Discrete Mathematics and its applications book. Determine whether each of these statements is true or false. {0} ⊂ {0} {∅} ⊆ {∅} I thought both 1 and 2 are true, but when I checked the answers I found that 1 is false and 2 is true. I got confused and distracted because I don't know the difference between them.
If the book distinguishes between $\subset$ and $\subseteq$ , then most likely the former symbol denotes proper inclusion , so $\{0\}\subset\{0\}$ is false. The latter symbol instead will denote inclusion (with possible equality). However it's very common to find $\subset$ denoting inclusion (with possible equality), so one always has to check or try and infer from the context. Don't take Wikipedia pages as revealed truth. It's so common that $\subset$ denotes nonstrict inclusion that somebody uses $\subsetneq$ to denote proper inclusion, for safety.
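As a side note, Python's built-in sets use the same distinction, which makes a handy mnemonic: `<` is proper inclusion and `<=` is inclusion with possible equality.

```python
A = {0}
assert not (A < A)   # proper inclusion fails: a set is not a proper subset of itself
assert A <= A        # inclusion with possible equality holds

# {∅} ⊆ {∅}: model the inner empty set with frozenset() so it can be a set element
B = {frozenset()}
assert B <= B
assert not (B < B)
```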
{ "source": [ "https://math.stackexchange.com/questions/2939900", "https://math.stackexchange.com", "https://math.stackexchange.com/users/515971/" ] }
2,940,671
So I know that if you roll a standard pair of dice, your chances of getting Snake Eyes (double 1s) is $1$ in $36$ . What I'm not sure of is how to do the math to figure out your chances of rolling Snake Eyes at least once during a series of rolls. I know if I roll the dice $36$ times it won't lead to a $100\%$ chance of rolling Snake Eyes, and while I imagine it's in the upper nineties, I'd like to figure out exactly how unlikely it is.
The probability of hitting it at least once is $1$ minus the probability of never hitting it. Every time you roll the dice, you have a $35/36$ chance of not hitting it. If you roll the dice $n$ times, then the only way to have never hit it is to have missed every single time. The probability of not hitting with $2$ rolls is thus $35/36\times 35/36$ , the probability of not hitting with $3$ rolls is $35/36\times 35/36\times 35/36=(35/36)^3$ and so on till $(35/36)^n$ . Thus the probability of hitting it at least once is $1-(35/36)^n$ where $n$ is the number of throws. After $164$ throws, the probability of hitting it at least once is over $99\%$ .
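As an added check, a few lines of Python reproduce the computation and confirm that $164$ is the smallest number of throws reaching a $99\%$ chance:

```python
p_miss = 35 / 36

def p_at_least_once(n):
    return 1 - p_miss ** n

assert abs(p_at_least_once(1) - 1 / 36) < 1e-12

# smallest number of throws giving at least a 99% chance of snake eyes
n = 1
while p_at_least_once(n) < 0.99:
    n += 1
assert n == 164
```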
{ "source": [ "https://math.stackexchange.com/questions/2940671", "https://math.stackexchange.com", "https://math.stackexchange.com/users/11634/" ] }
2,943,102
The problem at hand is, find the solutions of $x$ in the following equation: $$ (x^2−7x+11)^{x^2−7x+6}=1 $$ My friend who gave me this questions, told me that you can find $6$ solutions without needing to graph the equation. My approach was this: Use factoring and the fact that $z^0=1$ for $z≠0$ and $1^z=1$ for any $z$ . Factorising the exponent, we have: $$ x^{2}-7x+6 = (x-1)(x-6) $$ Therefore, by making the exponent = 0, we have possible solutions as $x \in \{1,6\} $ Making the base of the exponent = $1$ , we get $$ x^2-7x+10 = 0 $$ $$ (x-2)(x-5)$$ Hence we can say $x \in \{2, 5\} $ . However, I am unable to compute the last two solutions. Could anyone shed some light on how to proceed?
Denote $a=x^2-7x+11.$ The equation becomes $a^{a-5}=1,$ or equivalently* $$a^a=a^5,$$ which has in $\mathbb{R}$ the solutions $a\in \{ {5,1,-1}\}.$ Solving the corresponding quadratic equations we get the solutions $x\in \{1,6,2,5,3,4\}.$ *Note added: $a=0$ is excluded in both equations.
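As an added check, the six roots are easy to verify mechanically:

```python
for x in [1, 2, 3, 4, 5, 6]:
    base = x * x - 7 * x + 11       # takes the values 5, 1, -1, -1, 1, 5
    exponent = x * x - 7 * x + 6    # takes the values 0, -4, -6, -6, -4, 0
    assert abs(base ** exponent - 1) < 1e-12   # e.g. x = 3 gives (-1) ** (-6) == 1
```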
{ "source": [ "https://math.stackexchange.com/questions/2943102", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,946,885
We want to evaluate $$\lim_{x \to -8}\frac{\sqrt{1-x}-3}{2+\sqrt[3]{x}}.$$ The solving process can be written as follows: \begin{align*}\lim_{x \to -8}\frac{\sqrt{1-x}-3}{2+\sqrt[3]{x}}&=\lim_{x \to -8}\left[\frac{(\sqrt{1-x}-3)(\sqrt{1-x}+3)}{(2+\sqrt[3]{x})(4-2\sqrt[3]{x}+\sqrt[3]{x^2})}\cdot \frac{4-2\sqrt[3]{x}+\sqrt[3]{x^2}}{\sqrt{1-x}+3}\right]\\&=\lim_{x \to -8}\left[\frac{-(x+8)}{x+8}\cdot \frac{4-2\sqrt[3]{x}+\sqrt[3]{x^2}}{\sqrt{1-x}+3}\right]\\&=-\lim_{x \to -8} \frac{4-2\sqrt[3]{x}+\sqrt[3]{x^2}}{\sqrt{1-x}+3}\\&=-2.\end{align*} But when I input this lim\frac{\sqrt{1-x}-3}{2+\sqrt[3]{x}} as x to -8 into Wolfram|Alpha, it gives the limit $0$ . Why is Wolfram|Alpha making a mistake here?
WolframAlpha understands the expression $\sqrt[3]{x}$ for negative $x$ in a different way than you expect: it interprets it as the principal (complex) cube root, so the denominator does not vanish at $x=-8$ and the quotient tends to $0$ . Use surd to force the real cube root. Try this: lim\frac{\sqrt{1-x}-3}{2+surd(x,3)} as x to -8
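As a side note, Python's `**` operator behaves the same way as WolframAlpha here: a negative base with exponent $1/3$ yields the principal complex cube root, not $-2$ . The helper `cbrt` below is a hand-rolled real cube root, not a standard-library function.

```python
import math

def cbrt(x):
    # real cube root, negative inputs allowed
    return math.copysign(abs(x) ** (1 / 3), x)

# principal root of -8: the complex number 1 + sqrt(3) i, NOT -2
principal = (-8) ** (1 / 3)
assert isinstance(principal, complex)
assert abs(principal - (1 + math.sqrt(3) * 1j)) < 1e-9

# with the real cube root, the limit -2 appears numerically
def f(x):
    return (math.sqrt(1 - x) - 3) / (2 + cbrt(x))

assert abs(f(-8 + 1e-6) + 2) < 1e-3
```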
{ "source": [ "https://math.stackexchange.com/questions/2946885", "https://math.stackexchange.com", "https://math.stackexchange.com/users/560634/" ] }
2,947,054
What is $3^{3^{3}}?$ Plugging $3^{3^{3}} $ into the calculator gives 7625597484987. I believe because this implies that $3^{3^{3}}=3^{27}$ , is this true? And plugging $(3^{3})^{3}$ gives 19683, because $ (3^{3})^{3}=3^{3}\times 3^{3}\times 3^{3}=3^{9}=19683$ So which one is the correct answer, and why?
Unlike addition and multiplication, exponentiation is not associative : $(a+b)+c=a+(b+c)$ $(a\times b)\times c=a\times (b\times c)$ but ( $a$ ^ $b$ )^ $c\ne a\!$ ^( $b$ ^ $c$ ), more commonly written as: $\left(a^b\right)^c \ne a^{\left( b^c \right)}$ This means there's no risk in simply writing " $a+b+c$ " or " $a \times b \times c$ " since the order in which you perform the operations doesn't matter in both cases. For exponentiation this is not the case and writing " $a$ ^ $b$ ^ $c$ " is ambiguous, but we do have: $$\color{blue}{\left(a^b\right)^c = a^{bc}} \ne a^{\left( b^c \right)}$$ Because we have this property (in blue), it's common to interpret $a^{b^c}$ as $a^{\left( b^c \right)}$ but if you want to avoid confusion, you can always add the parentheses.
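As a side note, programming languages must pick a convention too. Python's `**` is right-associative, i.e. it follows the $a^{\left(b^c\right)}$ reading:

```python
assert (3 ** 3) ** 3 == 3 ** 9 == 19683
assert 3 ** 3 ** 3 == 3 ** 27 == 7625597484987   # right-associative: 3 ** (3 ** 3)
assert 3 ** 3 ** 3 != (3 ** 3) ** 3              # exponentiation is not associative
```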
{ "source": [ "https://math.stackexchange.com/questions/2947054", "https://math.stackexchange.com", "https://math.stackexchange.com/users/601853/" ] }
2,947,788
I am working on a problem from probability theory and am a little bit stuck. I know that the formula for $\operatorname{Var}(X + Y)$ is $$\operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y)$$ Does this mean that for $\operatorname{Var}(X - Y)$ it is just: $$\operatorname{Var}(X) - \operatorname{Var}(Y) - 2\operatorname{Cov}(X,Y)$$ ?
It will be $\text{Var}(X) + \text{Var}(Y) - 2\text{Cov}(X,Y)$ : writing $X - Y = X + (-Y)$ and applying the formula you quoted, $$\text{Var}(X-Y) = \text{Var}(X) + \text{Var}(-Y) + 2\text{Cov}(X,-Y),$$ and then $\text{Var}(-Y) = \text{Var}(Y)$ while $\text{Cov}(X,-Y) = -\text{Cov}(X,Y)$ .
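The identity is easy to sanity-check numerically; a minimal sketch using population (divide-by-$n$) moments computed by hand:

```python
# Numerical check of Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y).
def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

X = [2.0, 4.0, 7.0, 1.0, 5.0]
Y = [1.0, 3.0, 6.0, 2.0, 8.0]
D = [x - y for x, y in zip(X, Y)]

lhs = var(D)
rhs = var(X) + var(Y) - 2 * cov(X, Y)
assert abs(lhs - rhs) < 1e-12
```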
{ "source": [ "https://math.stackexchange.com/questions/2947788", "https://math.stackexchange.com", "https://math.stackexchange.com/users/594558/" ] }
2,947,790
Two events $A$ and $B$ are independent if : $P(A|B) = P(A)$ However, I was wondering if there was an intuitive way to test for independence - rather than having to check this condition every time.
{ "source": [ "https://math.stackexchange.com/questions/2947790", "https://math.stackexchange.com", "https://math.stackexchange.com/users/602053/" ] }
2,948,351
Using $\Delta$ for set symmetric difference, $A \Delta B$ is all the elements in exactly one of the sets but not all of them. $A \Delta B \Delta C $ is all the elements in exactly one of the sets or all of them. I appreciate there is an even number of sets in the first example and an odd number in the second (and associativity implies no order ambiguity), but what is symmetric about set symmetric difference?
A function in two variables $f(x,y)$ is called symmetric if $f(x,y)=f(y,x)$ . It is easy to see that $A\mathbin{\triangle}B=B\mathbin{\triangle}A$ , exactly because being in exactly in one of $A$ or $B$ is the same as being exactly in one of $B$ and $A$ . This is in contrast to set difference, where $A\setminus B$ is generally not the same as $B\setminus A$ .
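In Python's set type, `^` is symmetric difference and `-` is set difference, which makes the contrast easy to see:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

# Symmetric difference is symmetric in its arguments...
assert A ^ B == B ^ A == {1, 2, 5}

# ...while plain set difference is not:
assert A - B == {1, 2}
assert B - A == {5}
assert A - B != B - A
```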
{ "source": [ "https://math.stackexchange.com/questions/2948351", "https://math.stackexchange.com", "https://math.stackexchange.com/users/416252/" ] }
2,951,249
I know that not every function has a power series expansion. Yet what I don't understand is that for every $C^{\infty}$ function there is a sequence of polynomials $(P_n)$ such that $P_n$ converges uniformly to $f$ . That's to say: $$\forall x \in [a,b], f(x) = \lim_{n \to \infty} \sum_{k = 0}^{\infty} a_{k,n}x^k$$ But then, because it converges uniformly, why can't I say that $$\forall x \in [a,b], f(x) = \sum_{k = 0}^{\infty} \lim_{n \to \infty} a_{k,n}x^k,$$ and so $f$ has a power series expansion with coefficients $\lim_{n \to \infty} a_{k,n}$ ?
Short answer: the sequence of polynomials guaranteed by the Stone–Weierstrass theorem need not be constructible by appending terms of higher and higher order. The early coefficients can keep changing as the sequence grows, so the $P_n$ are not the partial sums of a power series, and uniform convergence of the $P_n$ to $f$ tells you nothing about convergence of the individual coefficients $a_{k,n}$ .
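One concrete way to watch the early coefficients drift, assuming we build the approximating sequence from Bernstein polynomials (one standard constructive proof of Weierstrass's theorem): in the Newton forward-difference form, $B_n(f)(x)=\sum_j \binom{n}{j}\Delta^j f(0)\,x^j$ with $\Delta$ the forward difference of step $1/n$. For $f(x)=|x-\tfrac12|$ on $[0,1]$, the degree-3 coefficients of $B_4 f$ and $B_8 f$ already disagree:

```python
from fractions import Fraction
from math import comb

def bernstein_power_coeff(f, n, j):
    """Coefficient of x**j in the degree-n Bernstein polynomial of f,
    via the identity B_n(f)(x) = sum_j C(n,j) * Delta^j f(0) * x^j,
    where Delta is the forward difference with step 1/n."""
    delta = sum((-1) ** (j - i) * comb(j, i) * f(Fraction(i, n))
                for i in range(j + 1))
    return comb(n, j) * delta

f = lambda x: abs(x - Fraction(1, 2))

c4 = bernstein_power_coeff(f, 4, 3)
c8 = bernstein_power_coeff(f, 8, 3)
print(c4, c8)  # 2 and 0 -- the x^3 coefficient changes as n grows
assert c4 == 2 and c8 == 0

# So the B_n are NOT partial sums of one fixed power series:
# individual coefficients keep moving as n increases.
```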
{ "source": [ "https://math.stackexchange.com/questions/2951249", "https://math.stackexchange.com", "https://math.stackexchange.com/users/543418/" ] }
2,960,199
Very easy question, but I wasn't actively doing it and got it wrong. I was looking at it and didn't know which step was wrong: all the individual steps looked correct, but the final solution is wrong. I'm fully aware the solution is wrong, and that plugging the "roots" back in fails. $$ 3x^2 - 6x - 9 = 0 $$ $$ 9 = 3x(x-2) $$ $$ x_1 = \frac{3}{9} \qquad x_2 = 11 $$ I should have factored etc etc, but I'm curious to know which step is incorrect & why.
I think you meant to write $3x^2−6x−9=0$ . Then your first steps are fine: $$3x^2−6x=9$$ $$3x(x-2)=9$$ At this point, I think you erroneously separated this equation into two equations $$3x=9\quad\mathrm{or}$$ $$x-2=9$$ . If you have two numbers $a$ and $b$ where $a\cdot b=9$ , you cannot conclude that $a=9$ or $b=9$ . On the other hand, (*) if you have two numbers $a$ and $b$ where $a\cdot b=0$ , you can conclude that $a=0$ or $b=0$ . Let's do it right. $$3x^2−6x−9=0$$ $$x^2−2x−3=0$$ $$(x-3)(x+1)=0$$ Now apply (*), to get $$(x-3)=0\quad\mathrm{or}\quad(x+1)=0.$$
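A quick check of which candidate roots actually satisfy the equation (note that $x=3$ from the faulty split happens to be right by coincidence):

```python
def p(x):
    return 3 * x**2 - 6 * x - 9

# The correct roots, from the factorization 3(x - 3)(x + 1):
assert p(3) == 0
assert p(-1) == 0

# The "roots" from the faulty split "3x = 9 or x - 2 = 9":
# x = 3 is right only by coincidence; x = 11 is not a root.
assert p(11) != 0          # p(11) = 288
```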
{ "source": [ "https://math.stackexchange.com/questions/2960199", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533193/" ] }
2,960,201
Let S be a smooth closed surface in three-dimensional xyz-space, n be the unit outward normal vector on S, and r be the distance between the origin and a point (x, y, z). Solve the following problems. (1) Evaluate $\nabla (\frac 1r)$ (2) Evaluate $\nabla^2 (\frac 1r)$ (3) Evaluate the integral $\int_S \nabla (\frac 1r)\cdot n\,dS$ when the origin is located outside S. (4) Evaluate the integral $\int_S \nabla (\frac 1r)\cdot n\,dS$ when the origin is located inside S. First of all, I have no idea what topic this problem is about. I have tried learning about divergence, differentials, and projection from a point to a plane, but nothing seems to be applicable to this problem. Edited on 30th Oct 2018. Here is my current solution. $r = \sqrt{x^2 + y^2 + z^2}$ , so $\frac 1r = \frac 1 {\sqrt{x^2 + y^2 + z^2}}$ . Answer No. 1: $\nabla (\frac 1r) = {\frac {\partial}{\partial x}}\left(\frac 1r\right)\hat i + {\frac {\partial}{\partial y}}\left(\frac 1r\right)\hat j + {\frac {\partial}{\partial z}}\left(\frac 1r\right)\hat k = - \frac {x \hat i + y \hat j + z \hat k}{{(x^2 + y^2 + z^2)}^{\frac 32}} = - \frac 1{r^3} \langle x,y,z\rangle$ . Answer No. 2: $\nabla \cdot \nabla \frac 1r = \operatorname{div} \nabla \frac 1r = \nabla^2 \frac1r = 0$ (for $r \neq 0$ ). This is as far as I've got. Thank you for all the help and hints! But I still couldn't find a solution to the remaining questions. I know that in question 4 we could apply the divergence theorem, but what about question 3, when the origin is outside S? I assume we can't apply the divergence theorem since the region is not within the closed surface anymore. Is that right? Any kind of help would be gladly appreciated!
{ "source": [ "https://math.stackexchange.com/questions/2960201", "https://math.stackexchange.com", "https://math.stackexchange.com/users/605482/" ] }
2,963,241
I am seeking an easily comprehended, convincing explanation that ${RP}^2$ is topologically the same as gluing the circle boundary of a disk to the edge of a Möbius band.
Let $D$ be the closed unit disk in $\mathbb{R}^2$ , and $D/\sim$ the disk with antipodal points on the boundary identified, which is homeomorphic to $\mathbb{RP}^2$ . Now decompose $D$ into an annulus $A$ and a smaller disk, so that attaching a disk to $A$ along the inner circle gives you $D$ . So, attaching a disk to $A/ \sim$ along the inner circle will give you $(D/\sim) \cong \mathbb{RP}^2$ . If we can show that $A/\sim$ is homeomorphic to a Möbius band, we're done. Here's how we do that. (The image is from the Oxford Part A Topology lecture notes)
{ "source": [ "https://math.stackexchange.com/questions/2963241", "https://math.stackexchange.com", "https://math.stackexchange.com/users/237/" ] }
2,965,193
Basically the question is asking us to prove that given any integers $$x_1,x_2,x_3,x_4,x_5$$ Prove that 3 of the integers from the set above, suppose $$x_a,x_b,x_c$$ satisfy this equation: $$x_a^2 + x_b^2 + x_c^2 = 3k$$ So I know I am suppose to use the pigeon hole principle to prove this. I know that if I have 5 pigeons and 2 holes then 1 hole will have 3 pigeons. But what I am confused about is how do you define the hole? Do I just say that the container has a property such that if 3 integers are in it then those 3 integers squared sum up to a multiple of 3?
Any square integer must be congruent to either 0 or 1 mod 3. So for each of the 5 squares, we put it into hole 0 if it is congruent to 0 and into hole 1 if it is congruent to 1. Then take three squares from the hole with at least 3 squares and add them together. You will get either: $0+0+0\equiv 0$ or $1+1+1\equiv0$ mod 3.
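Since a square's residue mod 3 is determined by the residue of its base, the claim can be verified exhaustively over residue tuples; a brute-force sketch:

```python
from itertools import product, combinations

# A square is congruent to 0 or 1 (mod 3), determined by the residue of
# the base, so it suffices to check every 5-tuple of residues mod 3.
for residues in product(range(3), repeat=5):
    squares = [r * r % 3 for r in residues]
    # Some 3 of the 5 squares must sum to a multiple of 3:
    assert any(sum(t) % 3 == 0 for t in combinations(squares, 3)), residues
```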
{ "source": [ "https://math.stackexchange.com/questions/2965193", "https://math.stackexchange.com", "https://math.stackexchange.com/users/601483/" ] }
2,965,640
It seems easy to grasp that number of ways of choosing $n$ items from $n$ items is 1. But I am unable to understand why is it 1 for choosing 0 items.
When you phrase it in plain English, the answer isn't necessarily totally clear. It might seem reasonable to argue that there are $0$ ways of choosing $0$ things from $n$ , since choosing $0$ things isn't really a choice. However, when mathematicians talk about "the number of ways to choose $0$ from $n$ ," we mean something a bit more specific. A few ways of describing what we mean: The number of subsets of an $n$ -element set with $0$ elements (and we always assume the empty set counts) The number of functions from a $0$ -element set to an $n$ -element set, up to permutations of the domain (there's exactly one function from the empty set to any set). The coefficient of $x^0y^n$ in the expansion of the binomial $(x+y)^n$ . All of these agree that there is one way to choose $0$ out of $n$ things, and all of these perspectives are mathematically very useful. Thus the perspective that there is a way to choose $0$ out of $n$ things is universal in mathematics. This wasn't always so obvious to everyone: for instance, I have heard of an early 20th century mathematician who insisted in his books that all intersections be nonempty to be defined. It seems to me that this would be consistent with refusing to accept the empty set as a subset, and with saying there are $0$ ways to choose $0$ things from $n$ . But this would make for lots of inconvenient circumlocutions, so we've settled pretty firmly on the other solution.
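The "number of subsets with $0$ elements" perspective can be counted directly; note that `itertools.combinations` agrees there is exactly one way:

```python
from itertools import combinations
from math import comb

n = 5
subsets_of_size_0 = list(combinations(range(n), 0))
print(subsets_of_size_0)   # [()] -- exactly one: the empty subset
assert len(subsets_of_size_0) == 1
assert comb(n, 0) == 1

# Consistency with the binomial theorem: sum_k C(n,k) = 2^n counts
# ALL subsets of an n-element set, including the empty one.
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```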
{ "source": [ "https://math.stackexchange.com/questions/2965640", "https://math.stackexchange.com", "https://math.stackexchange.com/users/606951/" ] }
2,965,740
I am having difficulty determining is the solution for the following problem: $$\displaystyle \lim_{x \rightarrow \infty}\left( x \times 0 \right)$$ To clarify, this question assumes ${0}$ is a constant and is absolutely zero ("true zero"), and not another figure approaching or is approximately zero ("near zero"). Thus, the question is not asking what "near zero" times "near infinity" is. I know that ${\infty *0}$ is undefined, however my difficulty is that I'm unsure whether the answer to the problem is undefined because ${\infty *0}$ is undefined. From my understanding, a limit does not ever 'reach' infinity - it only approaches infinity, thus there are a rational amount of numbers. As ${x\cdot 0=0}$ , when x is not ${\infty}$ , it seems to me that in all cases of $x$ approaching infinity the answer could also be ${0}$ .
Note that for any $x$ we have $x\cdot 0=0$ and therefore $$\lim_{x\to\infty} (x\cdot 0) =\lim_{x\to\infty} 0=0$$
{ "source": [ "https://math.stackexchange.com/questions/2965740", "https://math.stackexchange.com", "https://math.stackexchange.com/users/606971/" ] }
2,966,876
Are rational points dense on every circle in the coordinate plane? First thing first I know that rational points are dense on the unit circle. However, I am not so sure how to show that rational points are not dense on every circle. How would one come about answering this. Any hits are appreciate it.
They're not. No two different circles centered at the origin contain any of the same points. There are uncountably many circles (specifically, one for each positive real number, corresponding to the radius), while there are only countably many rational points, so most circles contain no rational points at all. We can find some more specific examples. Any rational point $(a, b)$ on a circle of radius $r$ centered at the origin satisfies $a^2+b^2=r^2$ , so in particular $r^2$ must be rational. But there are also radii whose squares are rational, even integers, with no rational points. Suppose $r^2$ is an integer and $(a,b)$ is a rational point on the circle. Clearing denominators — multiplying through by a common denominator $c^2$ — shows that $c^2r^2$ is a sum of two integer squares. An integer is a sum of two squares if and only if its prime factorization contains no odd power of a prime congruent to $3$ mod $4$ , and multiplying by the perfect square $c^2$ doesn't change which exponents are odd, so $r^2$ itself must be a sum of two squares. Choosing $r^2$ not to be a sum of two squares therefore gives circles with no rational points at all: for example, $x^2+y^2=3$ has none.
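A small search illustrates the example: a rational point $(a/c,\,b/c)$ on $x^2+y^2=3$ would give an integer solution of $a^2+b^2=3c^2$. The search only rules out points up to its bound, of course; the mod-4 argument is what shows none exist at all.

```python
# Search for integer solutions of a^2 + b^2 = 3*c^2 (which rational
# points on x^2 + y^2 = 3 would produce after clearing denominators).
BOUND = 50
solutions = [
    (a, b, c)
    for c in range(1, BOUND)
    for a in range(0, 2 * BOUND)
    for b in range(0, 2 * BOUND)
    if a * a + b * b == 3 * c * c
]
assert solutions == []   # none found below the bound

# By contrast x^2 + y^2 = 2 has the rational point (1, 1):
assert 1 * 1 + 1 * 1 == 2 * 1 * 1
```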
{ "source": [ "https://math.stackexchange.com/questions/2966876", "https://math.stackexchange.com", "https://math.stackexchange.com/users/390495/" ] }
2,967,587
I am sort of confused regarding differentiable functions, continuous derivatives, and continuous functions. And I just want to make sure I'm thinking about this correctly. (1) If you have a function that's continuous everywhere, then this doesn't necessarily mean its derivative exists everywhere, correct? e.g., $$f(x) = |x|$$ has an undefined derivative at $x=0$ (2) So this above function, even though its continuous, does not have a continuous derivative? (3) Now say you have a derivative that's continuous everywhere, then this doesn't necessarily mean the underlying function is continuous everywhere, correct? For example, consider $$ f(x) = \begin{cases} 1 - x \ \ \ \ \ x<0 \\ 2 - x \ \ \ \ \ x \geq 0 \end{cases} $$ So its derivative is -1 everywhere, hence continuous, but the function itself is not continuous? So what does a function with a continuous derivative say about the underlying function?
A function may or may not be continuous. If it is continuous, it may or may not be differentiable. $f(x) = |x|$ is a standard example of a function which is continuous, but not (everywhere) differentiable. However, any differentiable function is necessarily continuous. If a function is differentiable, its derivative may or may not be continuous. This is a bit more subtle, and the standard example of a differentiable function with discontinuous derivative is a bit more complicated: $$ f(x) = \cases{x^2\sin(1/x) & if $x\neq 0$\\ 0 & if $x = 0$} $$ It is differentiable everywhere, $f'(0) = 0$ , but $f'(x)$ oscillates wildly between (a little less than) $-1$ and (a little more than) $1$ as $x$ comes closer and closer to $0$ , so it isn't continuous.
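The oscillation is easy to see numerically: for $x\neq 0$ the product and chain rules give $f'(x) = 2x\sin(1/x)-\cos(1/x)$, and at $x_n = 1/(n\pi)$ the cosine term dominates.

```python
import math

# f(x) = x^2 sin(1/x) for x != 0, f(0) = 0.  For x != 0:
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# At x_n = 1/(n*pi), sin(1/x) vanishes and cos(1/x) = (-1)^n, so
# f'(x_n) is about -(-1)^n: the derivative keeps jumping between
# roughly -1 and +1 arbitrarily close to 0.
even = fprime(1 / (1000 * math.pi))   # cos(1000*pi) = +1  ->  about -1
odd  = fprime(1 / (1001 * math.pi))   # cos(1001*pi) = -1  ->  about +1
assert abs(even + 1) < 1e-2
assert abs(odd - 1) < 1e-2

# Yet f'(0) exists and equals 0: the difference quotient at 0 is
# h*sin(1/h), which is squeezed between -|h| and |h|.
h = 1e-9
assert abs((h * h * math.sin(1 / h)) / h) <= abs(h)
```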
{ "source": [ "https://math.stackexchange.com/questions/2967587", "https://math.stackexchange.com", "https://math.stackexchange.com/users/512679/" ] }
2,969,828
Given the empty function $\varnothing: \emptyset \to X$ where $X$ is a set, is $\varnothing$ a differentiable function? If so, what is its derivative? Also, is the empty set a differentiable manifold? If so, what is its dimension?
Yes, the empty set is a smooth manifold (it is covered by the empty collection of coordinate charts!). It has every dimension. (That is, for any $n$ , it is true that $\emptyset$ is a manifold of dimension $n$ . Note that there is not really a unified definition of "manifold" but rather a separate definition of "manifold of dimension $n$ " for each $n$ , so there is nothing wrong with a single object satisfying the definition for multiple different values of $n$ .) For any smooth manifold $X$ , the empty function $\emptyset\to X$ is smooth. After all, this just means that it gives a smooth function in every pair of coordinate charts from atlases on the domain and codomain, which is vacuously true since the empty set is an atlas for $\emptyset$ . (Or, in the context of just open subsets of Euclidean space, if we consider $\emptyset$ as an open subset of $\mathbb{R}^m$ , then the empty function $\emptyset\to\mathbb{R}^n$ is smooth because it is vacuously infinitely differentiable at every point in the domain. Its derivative is then the empty function $\emptyset\to \mathbb{R}^{n\times m}$ .)
{ "source": [ "https://math.stackexchange.com/questions/2969828", "https://math.stackexchange.com", "https://math.stackexchange.com/users/400350/" ] }
2,969,840
I know that every number below 79 can be expressed as a sum of at most 18 fourth powers of positive integers, and that 79 is the smallest number to require 19 terms. What are some other numbers that require a minimum of 19 terms? Repetition is allowed.
{ "source": [ "https://math.stackexchange.com/questions/2969840", "https://math.stackexchange.com", "https://math.stackexchange.com/users/608192/" ] }
2,970,514
Here's a question relating to graph theory that was asked in the International Kangaroo Maths Contest 2017 . I am a college student and have very little background in graph theory. The question goes like this: We have $10$ islands that have been connected by $15$ bridges. What is the smallest number of bridges that may be eliminated so that it becomes impossible to get from $A$ to $B$ by bridge? Answer: The answer given in the key is $3$ , but I can't see how that might be. What I did: First of all I translated the problem into graph terms. We have $10$ vertices connected by $15$ edges. What is the smallest number of edges that can be removed to make it impossible to find a path from $A$ to $B$ ? I considered that we can do this by 'isolating' either $A$ or $B$ , which may be done by removing all the edges incident to one of them. The number of edges incident to $A$ is $4$ while $B$ has $5$ , so the smallest number obtainable this way is $4$ . So what is wrong with this method? How can the answer, $3$ , be reached? Thanks for the attention.
MathFun123's answer shows that it is possible to remove three edges in this way, but not that three is the minimum. To see that you can't do it with less than three, consider the following. There are three completely separate paths from A to B shown. In order to cut off B from A, you need to remove at least one red edge to destroy the red path, and similarly at least one blue edge and at least one green edge. So at least three are needed.
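The "three disjoint paths imply at least three removals" argument is Menger's theorem, the edge version of max-flow min-cut for undirected graphs. A sketch computing the minimum cut as a unit-capacity max flow; since the contest figure isn't reproduced here, the graph below is a made-up layout with the same key property (three edge-disjoint $A$-$B$ routes):

```python
from collections import deque

def min_edge_cut(n, edges, s, t):
    """Minimum number of edges whose removal disconnects t from s,
    computed as max-flow with unit capacities (Menger's theorem)."""
    cap = {}
    for u, v in edges:                      # undirected: capacity 1 each way
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
    adj = {u: set() for u in range(n)}
    for u, v in cap:
        adj[u].add(v)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:        # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:        # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Hypothetical bridge layout: three edge-disjoint routes from
# island 0 ("A") to island 5 ("B"), plus a couple of extra bridges.
edges = [(0, 1), (1, 5), (0, 2), (2, 3), (3, 5), (0, 4), (4, 5),
         (1, 2), (3, 4)]
assert min_edge_cut(6, edges, 0, 5) == 3
```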
{ "source": [ "https://math.stackexchange.com/questions/2970514", "https://math.stackexchange.com", "https://math.stackexchange.com/users/468087/" ] }
2,971,315
I have 2 groups of people. I'm working with the data about their age. I know the means, the standard deviations and the number of people. I don't know the data of each person in the groups. Group 1 : Mean = 35 years old; SD = 14; n = 137 people Group 2 : Mean = 31 years old; SD = 11; n = 112 people I want to combine those 2 groups to obtain a new mean and SD. It's easy for the mean, but is it possible for the SD? I do not know the distribution of those samples, and I can't assume those are normal distributions. Is there a formula for distributions that aren't necessarily normal?
Continuing on from BruceET's explanation, note that if we are computing the unbiased estimator of the standard deviation of each sample, namely $$s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar x)^2},$$ and this is what is provided, then note that for samples $\boldsymbol x = (x_1, \ldots, x_n)$ , $\boldsymbol y = (y_1, \ldots, y_m)$ , let $\boldsymbol z = (x_1, \ldots, x_n, y_1, \ldots, y_m)$ be the combined sample, hence the combined sample mean is $$\bar z = \frac{1}{n+m} \left( \sum_{i=1}^n x_i + \sum_{j=1}^m y_i \right) = \frac{n \bar x + m \bar y}{n+m}.$$ Consequently, the combined sample variance is $$s_z^2 = \frac{1}{n+m-1} \left( \sum_{i=1}^n (x_i - \bar z)^2 + \sum_{j=1}^m (y_i - \bar z)^2 \right),$$ where it is important to note that the combined mean is used. In order to have any hope of expressing this in terms of $s_x^2$ and $s_y^2$ , we clearly need to decompose the sums of squares; for instance, $$(x_i - \bar z)^2 = (x_i - \bar x + \bar x - \bar z)^2 = (x_i - \bar x)^2 + 2(x_i - \bar x)(\bar x - \bar z) + (\bar x - \bar z)^2,$$ thus $$\sum_{i=1}^n (x_i - \bar z)^2 = (n-1)s_x^2 + 2(\bar x - \bar z)\sum_{i=1}^n (x_i - \bar x) + n(\bar x - \bar z)^2.$$ But the middle term vanishes, so this gives $$s_z^2 = \frac{(n-1)s_x^2 + n(\bar x - \bar z)^2 + (m-1)s_y^2 + m(\bar y - \bar z)^2}{n+m-1}.$$ Upon simplification, we find $$n(\bar x - \bar z)^2 + m(\bar y - \bar z)^2 = \frac{mn(\bar x - \bar y)^2}{m + n},$$ so the formula becomes $$s_z^2 = \frac{(n-1) s_x^2 + (m-1) s_y^2}{n+m-1} + \frac{nm(\bar x - \bar y)^2}{(n+m)(n+m-1)}.$$ This second term is the required correction factor.
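The formula can be checked against a direct computation on raw data; a sketch using the stdlib `statistics` module (`statistics.variance` is the $n-1$ sample variance, matching the derivation above):

```python
from statistics import mean, variance   # variance = sample variance (n-1)

def combined_variance(n, mx, sx2, m, my, sy2):
    """Sample variance of the pooled data, from the two groups' summaries."""
    return (((n - 1) * sx2 + (m - 1) * sy2) / (n + m - 1)
            + n * m * (mx - my) ** 2 / ((n + m) * (n + m - 1)))

# Check the formula against a direct computation on raw data:
x = [35.0, 21.0, 49.0, 28.0, 42.0, 35.0]
y = [31.0, 20.0, 42.0, 31.0]
direct = variance(x + y)
pooled = combined_variance(len(x), mean(x), variance(x),
                           len(y), mean(y), variance(y))
assert abs(direct - pooled) < 1e-9

# Applied to the question's two groups (SDs 14 and 11):
s = combined_variance(137, 35.0, 14.0 ** 2, 112, 31.0, 11.0 ** 2) ** 0.5
print(round(s, 2))   # about 12.87
```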
{ "source": [ "https://math.stackexchange.com/questions/2971315", "https://math.stackexchange.com", "https://math.stackexchange.com/users/608602/" ] }
2,973,819
Here is a magic trick I saw. My question is how the magician and his partner did it. Given a standard French deck with $52$ cards: a person from the audience chooses five cards at random from the deck and gives them to the partner (the partner works with the magician), without showing them to the magician. Then the partner (who sees the five cards) chooses four of the five cards and gives them to the magician one by one (so the order of the four cards matters). From that, the magician knows the fifth card. The partner and the magician can't communicate during the trick. How did they do it? I thought that among the five cards there will be two with the same suit (Spades, Hearts, Diamonds or Clubs), and one of these two cards will be the fifth, and the other will be the first card to give to the magician...
There is in fact a solution which uses your idea: that some suit will be repeated twice. Say that among the $13$ ranks within a suit, ordered $A, 2, 3, 4, 5, 6, 7, 8, 9, 10, J, Q, K$ , each rank "beats" the next six ranks, wrapping around when we get to the end. For example, $A$ beats $2,3,4,5,6,7$ , and $10$ beats $J, Q, K, A, 2, 3$ . If you have two cards of the same suit, exactly one of them beats the other one. So you should pass, in order: A card of the same suit as the missing card, which beats the missing card. This leaves six possibilities for the missing card. The remaining three cards, in an order that encodes which of the six possibilities it is. For the second step, we should order all $52$ cards in the deck somehow; for instance, say that $\clubsuit < \diamondsuit < \heartsuit < \spadesuit$ and $A < 2 < 3 < \dots < Q < K$ . Then the order of the last three cards is one of "Low, Middle, High", "Low, High, Middle", and so on through "High, Middle, Low". Just remember some correspondence between these six possibilities and the values $+1, +2, +3, +4, +5, +6$ , and add the value you get to the rank of the first card (wrapping around from $K$ to $A$ of the same suit). For example, say that the correspondence we chose in the second step is \begin{array}{ccc|c} \text{Low} & \text{Middle} & \text{High} &+1 \\ \text{Low} & \text{High} & \text{Middle} &+2 \\ \text{Middle} & \text{Low} & \text{High} &+3 \\ \text{Middle} & \text{High} & \text{Low} &+4 \\ \text{High} & \text{Low} & \text{Middle} &+5 \\ \text{High} & \text{Middle} & \text{Low} &+6 \end{array} and you draw the cards $\{4\clubsuit, 5\spadesuit, 5\diamondsuit, A\clubsuit, J\spadesuit\}$ . We have two possibilities for the repeated suit, so let's choose $\spadesuit$ . In the cyclic order in that suit, $5\spadesuit$ beats $J\spadesuit$ , so the first card we pass is $5\spadesuit$ . We want to encode the offset $+6$ , which is the ordering High, Middle, Low. 
So we pass that ordering: after $5\spadesuit$ we pass $5\diamondsuit, 4\clubsuit, A\clubsuit$ in that order, because $5\diamondsuit > 4\clubsuit > A\clubsuit$ .
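This scheme is known as the Fitch Cheney five-card trick. A compact implementation sketch of the encoding described above, with cards as `(rank, suit)` pairs, ranks 0–12 ($A$ through $K$) and suits 0–3; the total order on cards and the offset-to-permutation table are arbitrary conventions the two performers agree on (here: Python's tuple order and `itertools.permutations` order, which differ from the answer's table but work the same way):

```python
from itertools import combinations, permutations

def encode(hand):
    """Choose the hidden card and the ordered four cards to pass."""
    # Find two cards of the same suit; one "beats" the other within a
    # cyclic distance of 1..6 around the 13 ranks.
    for a, b in combinations(hand, 2):
        if a[1] == b[1]:
            if (b[0] - a[0]) % 13 <= 6:
                first, hidden = a, b
            else:
                first, hidden = b, a
            break
    offset = (hidden[0] - first[0]) % 13          # always in 1..6
    rest = sorted(c for c in hand if c not in (first, hidden))
    perm = list(permutations(rest))[offset - 1]   # order encodes the offset
    return [first, *perm], hidden

def decode(four):
    first, three = four[0], four[1:]
    offset = list(permutations(sorted(three))).index(tuple(three)) + 1
    return ((first[0] + offset) % 13, first[1])

# The hand from the example: 4♣, 5♠, 5♦, A♣, J♠
hand = [(3, 0), (4, 3), (4, 1), (0, 0), (10, 3)]
passed, hidden = encode(hand)
assert decode(passed) == hidden
```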
{ "source": [ "https://math.stackexchange.com/questions/2973819", "https://math.stackexchange.com", "https://math.stackexchange.com/users/519742/" ] }
2,973,825
When you are checking to see whether $\sum_{k=1}^{n} k^2$ is equal to $\sum_{k=0}^{n-1} (k+1)^2$ , can someone explain what is going on here? Thanks. (I'm looking for a fairly simple way to work the problem without writing out the sums, which may help me understand what is going on.)
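For intuition, the substitution $j = k-1$ (so $k = j+1$) just relabels the same terms: both sums add $1^2, 2^2, \dots, n^2$. A direct check for small $n$:

```python
# The two sums contain exactly the same terms, with the index shifted
# by one: j = k - 1 runs over 0..n-1 while k = j + 1 runs over 1..n.
for n in range(1, 20):
    left = sum(k ** 2 for k in range(1, n + 1))       # k = 1 .. n
    right = sum((j + 1) ** 2 for j in range(0, n))    # j = 0 .. n-1
    assert left == right
```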
{ "source": [ "https://math.stackexchange.com/questions/2973825", "https://math.stackexchange.com", "https://math.stackexchange.com/users/565964/" ] }
2,973,835
Let $P(x)=x^2 +2013x+1$ . Show that $P(P(\cdots P(x))\cdots)=0$ (i.e. $P$ is $n$ -times nested) has at least one real root for any $n$ . For $n = 1$ this is obvious. Next, for $n = 2$ we get the fourth order polynomial $$x^4 + 4026 x^3 + 4054184 x^2 + 4056195 x + 2015$$ and by substituting $y = x + 2013/2$ we can eliminate the cubic term (since $P$ is symmetric about $x=-2013/2$ , the linear term in $y$ drops out as well). Rearranging yields the quartic equation $$y^4 - \frac{4048139}{2} y^2 + \frac{16387413154661}{16} = 0$$ It can be solved, but further substitutions quickly become intractable and don't lead anywhere. Can anyone help? Thanks.
{ "source": [ "https://math.stackexchange.com/questions/2973835", "https://math.stackexchange.com", "https://math.stackexchange.com/users/572375/" ] }
2,975,907
Why do we use von Neumann ordinals, $$ 0 = \emptyset $$ $$ n+1 = n \cup \{n\} $$ and not Zermelo ordinals? $$ 0 = \emptyset $$ $$ n+1 = \{ n \} $$
There is no real, deep, fundamental reason. You can find a bijection between the set of Von Neumann naturals and Zermelo naturals, so anything you can do with the one set you can do with the other. However, Von Neumann naturals are more convenient in practice for a lot of reasons. For one, the element we call $n$ also has exactly $n$ elements. That means that we can use the actual set $n$ as a cardinality, defining "the set $A$ has $n$ elements" to mean that there is a bijection between $A$ and $n$ . For another, a convenient way to define the ordinals is to say that they are the transitive sets which are linearly ordered by $\in$ . Then the Von Neumann naturals are precisely the finite ordinals, which is a natural and important way to think about the finite ordinals.
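Both encodings can be played with directly using frozensets, and the practical difference shows up immediately:

```python
def von_neumann(n):
    """Von Neumann numerals: 0 = {}, n+1 = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}
    return s

def zermelo(n):
    """Zermelo numerals: 0 = {}, n+1 = {n}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})
    return s

# The Von Neumann numeral n has exactly n elements, so it can double
# as a cardinality; the Zermelo numeral n has just 1 element for n >= 1.
assert len(von_neumann(5)) == 5
assert len(zermelo(5)) == 1

# Von Neumann numerals are linearly ordered by membership:
assert von_neumann(3) in von_neumann(4)
assert von_neumann(2) in von_neumann(4)
```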
{ "source": [ "https://math.stackexchange.com/questions/2975907", "https://math.stackexchange.com", "https://math.stackexchange.com/users/609813/" ] }
2,976,181
What is the smallest integer greater than 1 such that $\frac12$ of it is a perfect square and $\frac15$ of it is a perfect fifth power? I have tried multiplying every perfect square (up to 400) by two and checking whether the result works, but still nothing. I don't know what to do at this point.
The number is clearly a multiple of $5$ and $2$ . We look for the smallest, so we assume that it has no more prime factors. So let $n=2^a5^b$ . Since $n/2$ is a square, $a-1$ and $b$ are even. Since $n/5$ is a fifth power, $a$ and $b-1$ are multiples of $5$ . Then $a=5$ and $b=6$ , so the answer is $n = 2^5 \cdot 5^6 = 500000$ .
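A brute-force search over all integers (not just those of the form $2^a 5^b$) confirms that allowing other prime factors doesn't produce anything smaller:

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

def is_fifth_power(m):
    r = round(m ** 0.2)
    return any((r + d) ** 5 == m for d in (-1, 0, 1))

# Smallest n > 1 with n/2 a perfect square and n/5 a perfect fifth power:
answer = next(n for n in range(2, 10 ** 6)
              if n % 2 == 0 and is_square(n // 2)
              and n % 5 == 0 and is_fifth_power(n // 5))
assert answer == 2 ** 5 * 5 ** 6 == 500000   # 500000/2 = 500^2, 500000/5 = 10^5
```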
{ "source": [ "https://math.stackexchange.com/questions/2976181", "https://math.stackexchange.com", "https://math.stackexchange.com/users/609335/" ] }
2,978,505
My friend and I were discussing some mathematical philosophy and how the number systems were created when we reached a question. Why can we multiply two different numbers like this? Say we had to multiply $13\times 34$ . One may break this up like $(10+3)\times (30+4)$ . Applying distributive property here will give us the answer of 442. We can also choose to multiply this as $(6+7)\times (22+12)$ . Intuitively, we can hypothesize that this should give us the same answer as $13\times 34$ . How can we prove that our answer will be equal regardless of how we break up the numbers?
There are two ways I see to answer this question. One is from an axiomatic standpoint, where numbers are merely symbols on paper that are required to follow certain rules. The other uses the interpretation of multiplication as computing area. The former would take a good 5-10 pages to build up from the Peano axioms. For the latter, you can draw a rectangle 13 units by 34 units. Break one side into 10 units and 3 units, and the other side into 30 units and 4 units. At this point, you should see that this decomposes the rectangle into four pieces, corresponding to the four terms you get from the distributive rule. The various ways to compute $13 \cdot 34$ are all just ways to decompose the rectangle, and at the end of the day they all compute the same number: the area of the rectangle.
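The distributive rule also has a mechanical shadow that is easy to test; a tiny illustrative Python check (not from the answer):

```python
def expand(parts_a, parts_b):
    """Multiply two sums term by term, exactly as the distributive rule prescribes."""
    return sum(a * b for a in parts_a for b in parts_b)

# Different ways of breaking up 13 and 34 all give the same product:
assert expand((10, 3), (30, 4)) == 442   # (10+3)(30+4)
assert expand((6, 7), (22, 12)) == 442   # (6+7)(22+12)
assert expand((10, 3), (30, 4)) == 13 * 34
```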
{ "source": [ "https://math.stackexchange.com/questions/2978505", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ] }
2,981,007
I could not find any information on this online so I thought I'd make a question about this. If we take the Fibonacci sequence $F_n = F_{n-1} + F_{n-2}$ , is this growing exponentially? Or perhaps if we consider it as a function $F(x) = F(x-1) + F(x-2)$ , is $F(x)$ an exponential function? I know Fibonacci grows rather quick, but is there a proof that shows whether it is exponential or not?
The Fibonacci Sequence does not take the form of an exponential $b^n$ , but it does exhibit exponential growth. Binet's formula for the $n$ th Fibonacci number is $$F_n=\frac{1}{\sqrt{5}}\bigg(\frac{1+\sqrt 5}{2}\bigg)^n-\frac{1}{\sqrt{5}}\bigg(\frac{1-\sqrt 5}{2}\bigg)^n$$ which shows that, for large values of $n$ , the Fibonacci numbers behave approximately like the exponential $F_n\approx \frac{1}{\sqrt{5}}\phi^n$ , where $\phi=\frac{1+\sqrt 5}{2}$ .
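A numeric check of the growth claim (my addition, not part of the answer): the recurrence and the closed form agree, and the second term of Binet's formula quickly becomes negligible.

```python
import math

def fib(n):
    """n-th Fibonacci number via the recurrence F_n = F_{n-1} + F_{n-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

for n in range(1, 40):
    binet = (phi**n - psi**n) / math.sqrt(5)
    assert round(binet) == fib(n)                   # Binet's formula
    assert round(phi**n / math.sqrt(5)) == fib(n)   # |psi| < 1, so this term dominates
```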
{ "source": [ "https://math.stackexchange.com/questions/2981007", "https://math.stackexchange.com", "https://math.stackexchange.com/users/532170/" ] }
2,981,986
this might be a silly question, but I was wondering: PA cannot prove its consistency by the incompleteness theorems, but we can "step outside" and exhibit a model of it, namely $\mathbb{N}$ , so we know that PA is consistent. Why can't we do this with ZFC? I have seen things like "if $\kappa$ is [some large cardinal] then $V_{\kappa}$ models ZFC", but these stem from an "if". Is this a case of us not having been able to do this yet, or is there a good reason why it is simply not possible?
The problem is that, unlike the case for PA, essentially all accepted mathematical reasoning can be formalized in ZFC. Any proof of the consistency of ZFC must come from a system that is stronger (at least in some ways), so we must go outside ZFC-formalizable mathematics, which is most of mathematics. This is just like how we go outside of PA-formalizable mathematics to prove the consistency of PA (say, working in ZFC), except that PA-formalizable mathematics is a much smaller and relatively uncontroversial subset of mathematics. Thus it is a common view that the consistency of PA is a settled fact whereas the consistency of ZFC is conjectural. (As I mentioned in a comment, as a gross oversimplification, “settled mathematical truth (amongst mainstream classical mathematicians)” roughly corresponds to “provable in ZFC”... Con PA is whereas Con ZFC is not.) As for proving the consistency of ZFC, as you mention, we could go to a stronger system where there is an axiom giving the existence of inaccessible cardinals. Working in Morse-Kelley is another possibility. In any case, you are in the same position with the stronger system as you were with ZFC: you cannot use it to prove its own consistency, and since you’ve made stronger assumptions, you’re at a higher “risk” of inconsistency.
{ "source": [ "https://math.stackexchange.com/questions/2981986", "https://math.stackexchange.com", "https://math.stackexchange.com/users/173841/" ] }
2,984,282
For example, if I have 456, How can I split this and then let each column value be a separate number? The only way I can think of doing this would be to subtract '100' n times until that column is '0' and storing the number of subtractions, leaving '4' then repeating for the other columns. Is there a quicker way?
You can use the operation "modulo", which gives the remainder after division by a number. So 456 modulo 10 is 6: that is the last digit (the ones digit). Then divide 456 by 10, discarding the remainder, to get 45. Now 45 modulo 10 gives 5, the tens digit. Dividing 45 by 10 without remainder leaves 4, the hundreds digit. Repeating this extracts one digit per step, from right to left, which is much quicker than repeated subtraction.
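In code, these two operations are the remainder (`%`) and whole-number division (`//`); a short hypothetical Python version of the procedure:

```python
def digits(n):
    """Return the decimal digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    out = []
    while n > 0:
        n, d = divmod(n, 10)  # d = n modulo 10; n = n divided by 10 without remainder
        out.append(d)
    return out[::-1]

assert digits(456) == [4, 5, 6]
```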
{ "source": [ "https://math.stackexchange.com/questions/2984282", "https://math.stackexchange.com", "https://math.stackexchange.com/users/608583/" ] }
2,987,574
Suppose we have a real function $ f: \mathbb{R} \to \mathbb{R}$ that is two times differentiable and we draw its graph $\{(x,f(x)), x \in \mathbb{R} \} $ . We know, for example, that when the first derivative is not continuous at a point we then have an "corner" in the graph. How about a discontinuity of $f''(x)$ at a point $x_0$ though? Can we spot that just by drawing the graph of $f(x)$ -not drawing the graph of the first derivative and noticing it has an "corner" at $x_0$ , that's cheating. What about discontinuities of higher derivatives, which will of course be way harder to "see"?
Seeing the second derivative by eye is rather impossible as the examples in the other answers show. Nonetheless there is a good way to visualize the second derivative as curvature , which gives a good way to see the discontinuities if some additional visual aids are available. Whereas the first derivative describes the slope of the line that best approximates the graph, the second derivative describes the curvature of the best approximating circle. The best approximating circle to the graph of $f:\mathbb{R}\to\mathbb{R}$ at a point $x$ is the circle tangent to $(x,f(x))$ with radius equal to $((1+f'(x)^2)^{3/2})/\left\vert f''(x) \right\vert$ . This leaves exactly two possible circles, and the sign of $f''(x)$ determines which is the correct one. That is, whether the circle should be above ( $f''(x)>0$ ) or below ( $f''(x)<0$ ) the graph. Note that the case when $f''(x)=0$ corresponds to a circle with "infinite" radius, i.e., just a line. As a demonstration, consider the difference between some of the functions mentioned before. For $x\mapsto x^3$ , the second derivative is continuous, and the best approximating circle varies in a continuous manner ("continuous" here needs to be interpreted with care, since the radius passes through the degenerate $\infty$ -case when it switches sign) For $x\mapsto \begin{cases}x^2,&x>0\\-x^2,&x\leq 0\end{cases}$ the best approximating circle has a visible discontinuity at $x=0$ , where the circle abruptly hops from one side to the other. The pictures above were made with Sage . 
The code used to create the second one is below to play around with:

def tangent_circle(f,x0,df=None):
    if df is None:
        df = f.derivative(x)
    r = (1+df(x=x0)^2)^(3/2)/df.derivative(x)(x=x0)
    tang = df(x=x0)
    unitnormal = vector(SR,(-tang,1))/sqrt(1+tang^2)
    c = vector(SR,(x0,f(x0)))+unitnormal*r
    return c,r

def plot_curvatures(f,df=None,plotrange=(-1.2,1.2),xran=(-1,1),framenum=50):
    fplot = plot(f,(x,plotrange[0],plotrange[1]),aspect_ratio=1,color="black")
    xmin = xran[0]
    xmax = xran[1]
    pts = [xmin+(xmax-xmin)*(k/(framenum-1)) for k in range(framenum)]
    return [fplot+circle((x0,f(x0)),0.03,fill=True,color="blue")+circle(*tangent_circle(f,x0,df),color="blue") for x0 in pts]

f = lambda x: x^2 if x>0 else -x^2
df = 2*abs(x)
frames = plot_curvatures(f,df)
animate(frames,xmin=-1.5,xmax=1.5,ymin=-1,ymax=1)
{ "source": [ "https://math.stackexchange.com/questions/2987574", "https://math.stackexchange.com", "https://math.stackexchange.com/users/311060/" ] }
2,987,994
I'm looking for definite integrals that are solvable using the method of differentiation under the integral sign (also called the Feynman Trick) in order to practice using this technique. Does anyone know of any good ones to tackle?
Since this became quite popular I will mention here about an introduction to Feynman's trick that I wrote recently. It also contains some exercises that are solvable using this technique. My goal there is to give some ideas on how to introduce a new parameter as well as to describe some heuristics that I tend to follow when using Feynman's trick, hoping that it can serve as a good starting point. In case you are already familiar with Feynman's trick or prefer to evaluate some integrals directly, here is a brief list: $$I_1=\int_0^\frac{\pi}{2} \ln(\sec^2x +\tan^4x)dx$$ $$I_2=\int_0^\infty \frac{\ln\left({1+x+x^2}\right)}{1+x^2}dx$$ $$I_3=\int_0^\frac{\pi}{2}\ln(2+\tan^2x)dx$$ $$I_4=\int_0^\infty \frac{x-\sin x}{x^3(x^2+4)} dx$$ $$I_5=\int_0^\frac{\pi}{2}\arcsin\left(\frac{\sin x}{\sqrt 2}\right)dx$$ $$I_6=\int_0^\frac{\pi}{2} \ln\left(\frac{2+\sin x}{2-\sin x}\right)dx$$ $$I_7=\int_0^\frac{\pi}{2} \frac{\arctan(\sin x)}{\sin x}dx $$ $$I_8=\int_0^1 \frac{\ln(1+x^3)}{1+x^2}dx $$ $$I_9=\int_0^{\infty} \frac{x^{4/5}-x^{2/3}}{\ln(x)(1+x^2)}dx$$ $$I_{10}=\int_{0}^{1} \int_{0}^{1} \frac{1}{(1+x y) \ln (x y)}dxdy$$ $$I_{11}=\int_0^1\frac{\ln(1+x-x^2)}{x}dx$$ $$I_{12}=\int_0^1 \frac{\ln(1-x+x^2)}{x(1-x)}dx$$ $$I_{13}=\int_0^\infty \log \left(1-2\frac{\cos 2\theta}{x^2}+\frac{1}{x^4} \right)dx$$ $$I_{14}=\int_0^{\infty} \exp\left(-\left(4x+\frac{9}{x}\right)\right) \sqrt{x}dx$$ $$I_{15}=\int_0^\frac{\pi}{2}\arctan\left(\frac{2\sin x}{2\cos x -1}\right)\frac{\sin\left(\frac{x}{2}\right)}{\sqrt{\cos x}}dx$$ $$I_{16}=\int_0^1\int_0^1 \frac{ x\ln x\ln y}{(1-xy)\ln(xy)}dxdy$$ $$I_{17}=\int_1^{2}\frac{\cosh^{-1} x}{\sqrt{4-x^2}}dx$$ $$I_{18}=\int_0^t\frac{1}{\sqrt{x^3}} \exp\left({-\frac{(a-bx)^2}{2x}}\right) dx$$ $$I_{19}=\int_{0}^{\frac{\pi}4} \ln(\sin{x}+\cos{x}+\sqrt{\sin(2x)})dx$$ $$I_{20}=\int_0^1 \frac{\ln^2 x\ln(1+x)}{1+x^2}dx$$
{ "source": [ "https://math.stackexchange.com/questions/2987994", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,988,348
$$\begin{cases} 2x_1+5x_2-8x_3=8\\ 4x_1+3x_2-9x_3=9\\ 2x_1+3x_2-5x_3=7\\ x_1+8x_2-7x_3=12 \end{cases}$$ From my elementary row operations, I get that it has no solution. (Row operations are to be read from top to bottom.) $$\left[\begin{array}{ccc|c} 2 & 5 & -8 & 8 \\ 4 & 3 & -9 & 9 \\ 2 & 3 & -5 & 7 \\ 1 & 8 & -7 & 12 \end{array}\right] \overset{\overset{\large{R_1\to R_1-R_3}}{{R_2\to R_2-2R_3}}}{\overset{R_3\to R_3-2R_4}{\large\longrightarrow}} \left[\begin{array}{ccc|c} 0 & 2 & -3 & 1 \\ 0 & -3 & 1 & -5 \\ 0 & -13 & 9 & -17 \\ 1 & 8 & -7 & 12 \end{array}\right] \overset{\overset{\large{R_3\,\leftrightarrow\, R_4}}{R_2\,\leftrightarrow\, R_3}}{\overset{R_1\,\leftrightarrow\,R_2}{\large\longrightarrow}} \left[\begin{array}{ccc|c} 1 & 8 & -7 & 12 \\ 0 & 2 & -3 & 1 \\ 0 & -3 & 1 & -5 \\ 0 & -13 & 9 & -17 \end{array}\right]$$ $$\overset{R_4\to R_4-R_3}{\large\longrightarrow} \left[\begin{array}{ccc|c} 1 & 8 & -7 & 12 \\ 0 & 2 & -3 & 1 \\ 0 & -3 & 1 & -5 \\ 0 & 10 & 8 & -12 \end{array}\right] \overset{\overset{\large{R_3\to R_3+R_2}}{R_4\to R_4-5R_2}}{\large\longrightarrow} \left[\begin{array}{ccc|c} 1 & 8 & -7 & 12 \\ 0 & 2 & -3 & 1 \\ 0 & -1 & -2 & -4 \\ 0 & 0 & 23 & -17 \end{array}\right] \overset{\overset{\large{R_2\to R_2+2R_3}}{R_3\to-R_3}}{\large\longrightarrow}$$ $$\left[\begin{array}{ccc|c} 1 & 8 & -7 & 12 \\ 0 & 0 & -7 & -7 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 23 & -17 \\ \end{array}\right] \overset{R_2\,\leftrightarrow\,R_3}{\large\longrightarrow} \left[\begin{array}{ccc|c} 1 & 8 & -7 & 12 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & -7 & -7 \\ 0 & 0 & 23 & -17 \\ \end{array}\right]$$ However, the answer in the book $(3, 2, 1)$ fits the system. Was there an arithmetical mistake, or do I misunderstand something fundamentally?
Hint : Try inputing the solution $(3,2,1)$ into every step. That will allow you to identify the step where you went wrong.
{ "source": [ "https://math.stackexchange.com/questions/2988348", "https://math.stackexchange.com", "https://math.stackexchange.com/users/540252/" ] }
2,988,516
I was wondering how many definitions of exponential functions can we think of. The basic ones could be: $$e^x:=\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ also $$e^x:=\lim_{n\to\infty}\bigg(1+\frac{x}{n}\bigg)^n$$ or this one: Define $e^x:\mathbb{R}\rightarrow\mathbb{R}\\$ as unique function satisfying: \begin{align} e^x\geq x+1\\ \forall x,y\in\mathbb{R}:e^{x+y}=e^xe^y \end{align} Can anyone come up with something unusual? (Possibly with some explanation or references).
The exponential function is the unique solution of the initial value problem $y'(x)=y(x) , \quad y(0)=1$ .
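This characterization even gives a way to compute the function: integrate the initial value problem numerically. A rough Euler-method illustration (my sketch, not part of the answer; the step count is arbitrary):

```python
import math

def exp_ivp(x, steps=200_000):
    """Euler's method for y' = y, y(0) = 1, evaluated at the point x."""
    h = x / steps
    y = 1.0
    for _ in range(steps):
        y += h * y  # y(t+h) ≈ y(t) + h·y'(t), and here y' = y
    return y
```

As `steps` grows, `exp_ivp(1.0)` converges to $e \approx 2.71828$.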
{ "source": [ "https://math.stackexchange.com/questions/2988516", "https://math.stackexchange.com", "https://math.stackexchange.com/users/504245/" ] }
2,990,086
When I try to find the derivative of $f(x) = \sqrt[3]{x} \sin(x)$ at $x=0$ , using the formal definition of first derivative, I get this: $$ f'(0) = \lim\limits_{x \to 0} \frac{\sqrt[3]{x} \sin(x)-0}{x-0},$$ which gives zero. However, when I use derivative rules I get that: $$ f'(x) = {\sin(x) \frac{1}{3\sqrt[3]{x^2}}+\cos(x)\sqrt[3]{x}} $$ and thus $f'(0)$ doesn't exist. Why does this happen? what's the reason behind it?
The rule $(fg)'=f'g+fg'$ works where $f$ and $g$ are differentiable. And $\sqrt[3]x$ is not differentiable at $x=0$ .
{ "source": [ "https://math.stackexchange.com/questions/2990086", "https://math.stackexchange.com", "https://math.stackexchange.com/users/403497/" ] }
2,991,075
Given any triangle $\triangle ABC$ , and given one of its side, we can draw two lines perpendicular to that side passing through its two vertices. If we do this construction for each side, we obtain the points $D,E,F$ where two of these perpendicular lines meet at the minimum distance to each side. These three points can be used to build three triangles on each side of the starting triangle. The conjecture is that The sum of the areas of the triangles $\triangle AFB$ , $\triangle BDC$ , and $\triangle CEA$ is equal to the area of $\triangle ABC$ . This is likely an obvious and very well known result. But I cannot find an easy proof of this. Therefore I apologize for possible triviality, and I thank you for any suggestion.
Draw the orthocenter. You get three parallelograms which immediately provide the answer.
{ "source": [ "https://math.stackexchange.com/questions/2991075", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
2,993,855
In simple terms, could someone explain why there is not a logical connective for ‘because’ in propositional logic like there is for ‘and’ and ‘or’? Is this because the equivalent of ‘because’ is the argument of the form ‘if p , then q ’, or am I missing something? Please illustrate your answer with example(s) if possible.
It is because 'because' is not truth-functional. That is, knowing the truth-values of $P$ and $Q$ does not tell you the truth-value of ' $P$ because of $Q$ ' For example, the two statements 'Grass is green' and 'Snow is white' are both true, but 'Grass is green because snow is white' is an invalid argument, and hence, as a statement as to the validity of that argument, a false statement. On the other hand,'Grass is green because grass is green' is a true statement as to the validity of this as an argument, but yet again it involves two true statements. This shows that with $P$ and $Q$ both being true, the statement ' $P$ because of $Q$ ' can either be true or false, and hence it is not truth-functional.
{ "source": [ "https://math.stackexchange.com/questions/2993855", "https://math.stackexchange.com", "https://math.stackexchange.com/users/45635/" ] }
2,994,429
Suppose some number $n \in \mathbb{N}$ is divisible by $144$ . $$\implies \frac{n}{144}=k, \space \space \space k \in \mathbb{Z} \\ \iff \frac{n}{36\cdot4}=k \iff \frac{n}{36}=4k$$ Since any whole number times a whole number is still a whole number, it follows that $n$ must also be divisible by $36$ . However, what I think I have just shown is: $$\text{A number }n \space \text{is divisble by} \space 144 \implies n \space \text{is divisible by} \space 36 \space (1)$$ Is that the same as saying: $$\text{For a number to be divisible by 144 it has to be divisible by 36} \space (2)$$ In other words, are statements (1) and (2) equivalent?
Yes, it's the same. $A\implies B$ is equivalent to "if we have $A$ , we must have $B$ ". And your proof looks fine. Good job. If I were to offer some constructive criticism, it would be of the general kind: In number theory, even though we call the property "divisible", we usually avoid division whenever possible. Of the four basic arithmetic operations it is the only one which makes integers into non-integers. And number theory is all about integers. Therefore, " $n$ is divisible by $144$ ", or " $144$ divides $n$ " as it's also called, is defined a bit backwards: There is an integer $k$ such that $n=144k$ (This is defined for any number in place of $144$ , except $0$ .) Using that definition, your proof becomes something like this: If $n$ is divisible by $144$ , then there is an integer $k$ such that $n=144k$ . This gives $$ n=144k=(36\cdot4)k=36(4k) $$ Since $4k$ is an integer, this means $n$ is also divisible by $36$ .
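The backwards-style definition also makes the claim trivial to machine-check (a toy illustration of mine, not part of the answer):

```python
# "144 divides n" means n = 144*k for some integer k; then n = 36*(4*k),
# so 36 divides n as well.
def divisible(n, d):
    return n % d == 0

for k in range(1, 2000):
    n = 144 * k
    assert n == 36 * (4 * k)   # the identity used in the proof
    assert divisible(n, 36)
```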
{ "source": [ "https://math.stackexchange.com/questions/2994429", "https://math.stackexchange.com", "https://math.stackexchange.com/users/605695/" ] }
2,995,979
You have two identical perfectly square pieces of paper. The area of each paper is 1000 units. On each paper, draw 1000 convex, non-overlapping polygons with all polygons having the same area (exactly 1 unit). Obviously, the polygons are covering both papers completely and edges of paper also serve as edges of some polygons). Polygons may have different shapes and number of sides and the drawing on the first paper is completely different from the drawing on the second paper. Now put the first paper on top of the second and align paper edges perfectly. Prove that it is always possible to punch a hole in all 2000 polygons with 1000 needles (each needle goes through both papers). What have I tried? This problem came from my son who likes to torture his father with difficult problems brought back from his math school. My first try was to steal his clever analysis book while he was sleeping and find the right page in the answer section. Alas, this problem had no solution, which basically means that it's either too simple (and I'm too stupid) or it's too difficult. So I decided to read some theory and discovered that I had some pretty huge gaps in my math education. This problem is definitely about functions. You have a set of 1000 polygons on one side and a set of 1000 polygons on the other side. I have to prove that there is a bijective function between these two sets. Needles are just lines connecting the dots. However, all my attempts to construct such function ended miserably. I guess there has to be some clever theorem than can be applied to problems like this one but I would have to read a pretty thick book to find it. Thanks for the hint.
The clever theorem that can be applied to questions like this one is Hall's theorem . Construct a bipartite graph, with the vertices on one side being the polygons on the first sheet of paper, and the vertices on the other side being the polygons on the second sheet of paper. Draw an edge between two polygons whenever they overlap. A perfect matching in this graph is a one-to-one pairing of the polygons on the two sheets such that any two polygons that are paired overlap somewhere. If we find a perfect matching, we can punch the holes: for every pair of polygons in the perfect matching, poke a needle through some part of their overlapping region. Hall's theorem says that a perfect matching is guaranteed to exist if, for every set $S$ of vertices on one side, the set $N(S)$ of vertices adjacent to some vertex in $S$ satisfies $|N(S)| \ge |S|$ . In other words, if you pick any $k$ polygons on one sheet of paper, there will be at least $k$ polygons on the other sheet of paper adjacent to at least one of the ones you picked. This follows from looking at the areas. An individual polygon has area $1$ , so the $k$ polygons you chose on one sheet have total area $k$ . On the other sheet, covering that same region requires at least $k$ polygons, since each one contributes at most area $1$ .
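Hall's condition guarantees a perfect matching exists, and a standard augmenting-path search finds one. The sketch below is my own illustration with a made-up overlap graph (not part of the answer); each needle then goes through the pair `(match[v], v)`.

```python
def perfect_matching(overlaps, n):
    """Kuhn's augmenting-path algorithm for a bipartite graph.

    overlaps[u] lists the sheet-2 polygons overlapping sheet-1 polygon u.
    Returns match with match[v] = partner of sheet-2 polygon v, or None
    if no perfect matching exists (i.e. Hall's condition fails somewhere).
    """
    match = [-1] * n

    def augment(u, seen):
        for v in overlaps[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    for u in range(n):
        if not augment(u, set()):
            return None
    return match

# A made-up 4-polygon overlap pattern with a perfect matching:
print(perfect_matching([[0, 1], [1, 2], [2, 3], [3, 0]], 4))  # → [0, 1, 2, 3]
```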
{ "source": [ "https://math.stackexchange.com/questions/2995979", "https://math.stackexchange.com", "https://math.stackexchange.com/users/401277/" ] }
2,996,541
When discussing with my son a few of the many methods to calculate the digits of $\pi$ (15 yo school level), I realized that the methods I know more or less (geometric approximation, Monte Carlo and basic series) are all convergent but none of them explicitly states that the $n$ -th digit calculated at some point is indeed a true digit (that it will not change in further calculations). To take an example, the Gregory–Leibniz series gives us, for each step: $$ \begin{align} \frac{4}{1} & = 4\\ \frac{4}{1}-\frac{4}{3} & = 2.666666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5} & = 3.466666667...\\ \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7} & = 2.895238095... \end{align} $$ The integer part has changed four times in four steps. Why would we know that $3$ is the correct first digit? Similarly in Monte Carlo: the larger the sample, the better the result but do we mathematically know that "now that we tried [that many times] , we are mathematically sure that $\pi$ starts with $3$ ". In other words: does each of the techniques to calculate $\pi$ (or at least the major ones) have a proof that a given digit is now correct? if not, what are examples of the ones which do and do not have this proof? Note: The great answers so far (thank you!) mention a proof on a specific technique, and/or a proof that a specific digit is indeed the correct one. I was more interested to understand if this applies to all of the (major) techniques (= whether they all certify that this digit is guaranteed correct) . Or that we have some which do (the ones in the two first answers for instance) and others do not (the further we go, the more precise the number but we do not know if something will not jump in at some step and change a previously stable digit. When typing this in and thinking on the fly, I wonder if this would not be a very bad technique in itself, due to that lack of stability)
Note that $\pi=6\arcsin\left(\frac12\right)$ . So, since $$\arcsin(x)=\sum_{n=0}^\infty \frac1{2^{2n}}\binom{2n}n\frac{ x^{2n+1}}{2n+1},$$ you have $$\pi=\sum_{n=0}^\infty\frac6{2^{4n+1}(2n+1)}\binom{2n}n.$$ Now, for each $N\in\mathbb{Z}^+$ , let $$S_N=\sum_{n=0}^N\frac6{2^{4n+1}(2n+1)}\binom{2n}n\text{ and let }R_N=\sum_{n=N+1}^\infty\frac6{2^{4n+1}(2n+1)}\binom{2n}n.$$ Then: $(\forall N\in\mathbb{Z}^+):\pi=S_N+R_N$ ; the sequence $(S_N)_{N\in\mathbb{Z}_+}$ is strictly increasing and $\lim_{N\to\infty}S_N=\pi$ . In particular, each $S_N$ is a better approximation of $\pi$ than the previous one. Since $$(\forall n\in\mathbb N):\binom{2n}n<4^n=2^{2n},$$ you have $$R_N<\sum_{n=N+1}^\infty\frac6{2^{2n+1}}=\frac1{4^N}.$$ So, taking $N=0$ , you get that $\pi=S_0+R_0$ . But $S_0=3$ and $R_0<1$ . So, the first digit of $\pi$ is $3$ . If you take $N=3$ , then $\pi=S_3+R_3$ . But $S_3\approx3.14116$ and $R_3<0.015625$ . So, the second digit is $1$ . And so on…
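The partial sums and the explicit tail bound in this answer are easy to evaluate directly; a short numeric side check (my addition) of $|\pi - S_N| < 4^{-N}$:

```python
import math

def S(N):
    """Partial sum S_N of the series for 6*arcsin(1/2)."""
    return sum(6 * math.comb(2 * n, n) / (2 ** (4 * n + 1) * (2 * n + 1))
               for n in range(N + 1))

# Each S_N underestimates pi by less than 4^(-N), so digits get certified:
for N in range(16):
    assert 0 < math.pi - S(N) < 4.0 ** (-N)
```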
{ "source": [ "https://math.stackexchange.com/questions/2996541", "https://math.stackexchange.com", "https://math.stackexchange.com/users/96639/" ] }
2,998,175
$\sqrt{6 +\sqrt{6 +\sqrt{6 + \cdots}}}$ . This is the famous question. I have to calculate it's value. I found somewhere to the solution to be putting this number equal to a variable $x$ . That is, $\sqrt{6 +\sqrt{6 +\sqrt{6 + \cdots}}} = x$ . Then we square both the sides. $6 +{\sqrt{6 +\sqrt{6 + \cdots}}} = x^2$ . Then we replace the square root thing with $x$ . $6 + x = x^2$ and solve the equation to get the answer as 3. But I have a doubt that what type of number is this? Is it a real number or not? And if it isn't, how can we perform mathematical operations on it?
Why care must be taken Equating any such infinite expression to a real number must be done with a hint of caution, because the expression need not have a real value. For example, setting $x = 1-1+1-1+\cdots$ is not correct, since one sees that $1-x = 1+(-1+1-1+\cdots) = x$ , so $x = \frac 12$ , which is absurd from an addition point of view: whenever you truncate the series, it always has value either $1$ or $0$ , so where does $\frac 12$ come from? With this logic, it is safe to say $1-1+1-\cdots$ does not evaluate to any finite real number. However, once we can confirm that the result of such an expression is real and well defined, then we can play with it as we do with real numbers. Ok, so what about this one? To confirm that $\sqrt{6+\sqrt{6+\sqrt{6+\cdots}}}$ is a finite real number, we need the language of sequences. I won't go very far in, but essentially, if we define a sequence of reals by $x_1 = \sqrt 6$ and $x_{n+1} = \sqrt{6+x_n}$ , then $x_2 = \sqrt{6+\sqrt 6}$ , $x_3 = \sqrt{6+\sqrt{6+\sqrt 6}}$ , and eventually, $x_n$ resembles more and more the expression that we are given to evaluate. EDITED: I have modified the steps required for showing that $x_n$ is a convergent sequence, to the real number $3$ . It is easy to see that $x_n$ is bounded: it is clearly positive for all $n$ , and can be shown to be bounded above by $3$ by induction. That $x_n$ is an increasing sequence can also be shown easily. Any bounded increasing sequence is convergent. Once convergence is shown, we can then assume that $\lim x_n = L$ exists, and then use continuity to take limits on $x_{n+1} = \sqrt{6+x_n}$ to see that $L = \sqrt{6+L}$ . Squaring gives $L^2 = 6+L$ , i.e. $L = 3$ or $L = -2$ ; but $L \geq 0$ must hold. Thus, $L=3$ is the limit, and hence the value of the expression. Versatility of sequences To add to this, sequences also offer versatility.
A similar question may be asked: what is: $$ 6+\cfrac{6}{6+\cfrac{6}{6+\cfrac{6}{6+\cdots}}} $$ What we do here is the same: use the language of sequences, by defining $x_1 = 6$ and $x_{n+1} = 6 + \frac{6}{x_n}$ . Once again, we can check convergence, i.e. that this quantity is a finite real number (but on this occasion, the sequence rather oscillates around the point of convergence before converging). Next, we can use limits to deduce that if $L$ is the value then it satisfies $L = 6+ \frac 6L$ , which gives one reasonable candidate, $3+\sqrt{15}$ . So, this expression is actually equal to $3+\sqrt{15}$ . It's not easy all the time! However, the approach using sequences doesn't always give immediate rewards. For example, you could ask for the following: $$ \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}} $$ which also looks like a nested radical. Can we find a sequence which, for large $n$ , looks like this expression? Try to write one down, which you can work with. Anyway, the answer to the above expression is $3$ ! To see this, we need to use "reverse nesting": $$ 3 = \sqrt{9} = \sqrt{1+2\cdot 4} = \sqrt{1+2\sqrt{16}} = \sqrt{1+2\sqrt{1+15}} \\ = \sqrt{1+2\sqrt{1+3\sqrt{25}}} = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{36}}}} \\= \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{49}}}}} =\cdots $$ And just breathe in, breathe out. Ramanujan, class ten I believe. EDIT The nested radical method is wrong , from a rigorous point of view, for the reason pointed out in the comments. However, there is a rigorous proof here . The proof by Yiorgos Smyrlis is brilliant. Note that the "nested radical" method can be used for the earlier problem as well, by $3 = \sqrt{6+3} = \sqrt{6+\sqrt{6+3}} = \cdots$ , but this is unrigorous, only providing intuition. You can try to see if something intuitive can be derived for the continued fraction.
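Both limits discussed in this answer can be sanity-checked by simply iterating the recurrences (an illustrative snippet, not part of the original):

```python
import math

def iterate(step, x0, n=200):
    """Apply the recurrence step n times starting from x0."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

# x_{n+1} = sqrt(6 + x_n): the fixed point solves L^2 = 6 + L with L >= 0, so L = 3.
radical = iterate(lambda x: math.sqrt(6 + x), math.sqrt(6))

# x_{n+1} = 6 + 6/x_n: the fixed point solves L^2 = 6L + 6 with L > 0, so L = 3 + sqrt(15).
fraction = iterate(lambda x: 6 + 6 / x, 6.0)

assert abs(radical - 3) < 1e-12
assert abs(fraction - (3 + math.sqrt(15))) < 1e-12
```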
{ "source": [ "https://math.stackexchange.com/questions/2998175", "https://math.stackexchange.com", "https://math.stackexchange.com/users/614620/" ] }
3,000,024
Q: A wall, rectangular in shape, has a perimeter of 72 m. If the length of its diagonal is 18 m, what is the area of the wall ? The answer given to me is area of 486 m 2 . This is the explanation given to me Is it possible to have a rectangle of diagonal 18 m and area greater than the area of a square of side 18 m ?
The area of the square built on the diagonal must be at least twice the area of the rectangle: with sides $l$ and $w$ , $d^2 = l^2 + w^2 \ge 2lw$ . In particular, a rectangle with diagonal $18$ m has area at most $\frac{18^2}{2}=162$ m 2 , which is less than the $324$ m 2 of a square of side $18$ m, so the claimed area of $486$ m 2 is impossible.
{ "source": [ "https://math.stackexchange.com/questions/3000024", "https://math.stackexchange.com", "https://math.stackexchange.com/users/616188/" ] }
3,002,706
I was reading about proof by infinite descent, and proof by minimal counterexample. My understanding of it is that we assume the existance of some smallest counterexample $A$ that disproves some proposition $P$ , then go onto show that there is some smaller counterexample to this which to me seems like a mix of infinite descent and 'reverse proof by contradiction'. My question is, how do we know that there might be some counterexample? Furthermore, are there any examples of this?
Consider, for instance, the statement Every $n\in\mathbb{N}\setminus\{1\}$ can be written as a product of prime numbers (including the case in which there's a single prime number appearing only once). Suppose otherwise. Then there would be a smallest $n\in\mathbb{N}\setminus\{1\}$ that could not be expressed as a product of prime numbers. In particular, this implies that $n$ cannot be a prime number. Since $n$ is also different from $1$ , it can be written as $a\times b$ , where $a,b\in\{2,3,\ldots,n-1\}$ . Since $n$ is the smallest counterexample, neither $a$ nor $b$ are counterexamples and therefore both of them can be written as a product of prime numbers. But then $n(=a\times b)$ can be written in such a way too.
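The minimal-counterexample argument is really an induction in disguise, and it translates directly into a recursive procedure (a sketch of mine, not part of the answer): if $n$ has no proper divisor it is prime, otherwise write $n = a\times b$ and recurse on both factors.

```python
from math import prod

def prime_factorization(n):
    """Write n >= 2 as a list of primes, mirroring the proof's case split."""
    for a in range(2, n):
        if n % a == 0:                 # n = a*b with both factors in {2, ..., n-1}
            return prime_factorization(a) + prime_factorization(n // a)
    return [n]                         # no proper divisor: n is itself prime

# Every n in range really is the product of the primes returned:
for n in range(2, 300):
    assert prod(prime_factorization(n)) == n
```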
{ "source": [ "https://math.stackexchange.com/questions/3002706", "https://math.stackexchange.com", "https://math.stackexchange.com/users/522095/" ] }
3,009,635
I recently learnt a Japanese geometry temple problem. The problem is the following: Five squares are arranged as the image shows. Prove that the area of triangle T and the area of square S are equal. This is problem 6 in this article. I am thinking about law of cosines, but I have not been able to prove the theorem. Any hints would be appreciated.
We will, first of all, prove a very interesting property $\mathbf{Lemma\;1}$ Given two squares PQRS and PTUV (as shown on the picture), the triangles $\Delta STP$ and $\Delta PVQ$ have equal area. $\mathbf {Proof}$ Denote by $\alpha$ the angle SPT and by $[...]$ the area of the polygon "...". Hence $$[\Delta STP]=\frac{\overline {PS}\cdot\overline {PT}\cdot \sin(\alpha)}{2}$$ $$[\Delta PVQ]=\frac{\overline {QP}\cdot\overline {PV}\cdot\sin\Bigl(360°-(90°+90°+\alpha)\Bigr)}{2}=\frac{\overline {QP}\cdot\overline {PV}\cdot\sin\Bigl(180°-\alpha\Bigr)}{2}=\frac{\overline {QP}\cdot\overline {PV}\cdot\sin(\alpha)}{2}$$ Since $\overline {PS}=\overline {PQ}$ and $\overline {PT}=\overline {PV}$ $$[\Delta STP]=[\Delta PVQ]$$ Now, back to the problem. Let $\overline {AB}=a$ and $\overline {IJ}=b$ . Note first of all that $$\Delta BEC \cong \Delta EIF$$ See why? $\mathbf {Hint:}$ It is obvious that $\overline {CE}=\overline {EF}$ . Use the properties of right triangles in order to show that all angles are equal. Thus $${(\overline{CE})^2}={a^2}+{b^2}=S$$ Note furthermore that $$[\Delta BEC]=[\Delta EIF]=\frac{ab}{2}$$ By Lemma 1: $$[\Delta DCG]=[\Delta BEC]=\frac{ab}{2}=[\Delta EIF]=[\Delta GFK]$$ The area of the polygon AJKGD is thus $$[AJKGD]=[ABCD]+[CEFG]+[FIJK]+4[\Delta DCG]=2\Bigl({a^2}+{b^2}\Bigr)+2ab$$ The area of the trapezoid AJKD is moreover $$[AJKD]=\frac{(a+b)(2a+2b)}{2}={a^2}+2ab+{b^2}$$ Finally $$T=[\Delta DKG]=[AJKGD]-[AJKD]={a^2}+{b^2}=S \Rightarrow S=T$$
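Lemma 1 can be spot-checked with coordinates (my illustration, not part of the answer; I assume both squares are labelled counterclockwise, with S adjacent to P in PQRS and V adjacent to P in PTUV):

```python
import math

def tri_area(p, q, r):
    """Unsigned area of triangle pqr via the shoelace formula."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def rot90(v):
    """Rotate a vector 90 degrees counterclockwise about the origin."""
    return (-v[1], v[0])

p = (0.0, 0.0)                            # the shared vertex P
for theta in (0.3, 1.0, 2.2, 4.0):        # angle between the two squares
    a, b = 2.0, 3.5                       # side lengths of the squares
    q = (a, 0.0)                          # Q; in square PQRS, S = rot90(Q)
    s = rot90(q)
    t = (b * math.cos(theta), b * math.sin(theta))  # T; in square PTUV, V = rot90(T)
    v = rot90(t)
    assert math.isclose(tri_area(s, t, p), tri_area(p, v, q))  # Lemma 1
```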
{ "source": [ "https://math.stackexchange.com/questions/3009635", "https://math.stackexchange.com", "https://math.stackexchange.com/users/587091/" ] }
3,009,766
Give an example of a sequence in $\mathbb{R}$ which has no subsequence which is a Cauchy sequence. I can find out a sequence that is not a Cauchy sequence such as $\{\ln(n)\}$ once $|\ln(n)-\ln(n+1)|=0$ but $|\ln(n)-\ln(2n)|=|\ln(\frac{1}{2})|>\epsilon$ $\forall \epsilon<\ln(\frac{1}{2})$ I can still find a subsequence of the type ${\ln(2n)}_{2n\in\mathbb{N}}$ such that $|\ln(2n)-\ln(2n+1)|=0$ Question: What should I do to get a sequence that has no Cauchy subsequence? Thanks in advance!
Take $a_n=n$. Then for any subsequence $(a_{n_k})$ we have $|a_{n_k}-a_{n_{k-1}}|=|n_k-n_{k-1}|\geq 1$, so it has no Cauchy subsequence.
{ "source": [ "https://math.stackexchange.com/questions/3009766", "https://math.stackexchange.com", "https://math.stackexchange.com/users/400711/" ] }
3,013,709
I thought maybe we can start with congruent triangle and try to cut it similar to how we create a Sierpinski's Triangle? However, the number of smaller triangles we get is a power of $4$ so it does not work. Any ideas?
The decomposition is possible because $2005 = 5\cdot 401$ and both $5$ and $401$ are primes of the form $4k+1$. This allows $2005$ to be written as a sum of two squares. $$2005 = 22^2 + 39^2 = 18^2+41^2$$ For any integer $n = p^2 + q^2$ that can be written as a sum of two squares, consider a right-angled triangle $ABC$ with $$AB = p\sqrt{n}, AC = q\sqrt{n}\quad\text{ and }\quad BC = n$$ Let $D$ on $BC$ be the foot of the altitude from $A$. It is easy to see $\triangle DBA$ and $\triangle DAC$ are similar to $\triangle ABC$ with $$BD = p^2, AD = pq\quad\text{ and }\quad CD = q^2$$ One can split $\triangle DBA$ into $p^2$ and $\triangle DAC$ into $q^2$ triangles with sides $p, q$ and $\sqrt{n}$. As an example, following is a subdivision of a triangle into $13 = 2^2 + 3^2$ congruent triangles. In the literature, this is known as a biquadratic tiling of a triangle. For more information about subdividing triangles into congruent triangles, look at the answers in this MO post, in particular the list of papers by Michael Beeson there. The construction described here is based on what I have learned from one of Michael's papers.
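The two representations of $2005$ used above can be recovered by brute force; here is a small sketch (mine, not part of the original answer):

```python
def two_square_reps(n):
    """All (p, q) with 0 < p <= q and p^2 + q^2 == n."""
    reps = []
    p = 1
    while 2 * p * p <= n:
        q2 = n - p * p
        q = round(q2 ** 0.5)
        if q * q == q2:
            reps.append((p, q))
        p += 1
    return reps

print(two_square_reps(2005))  # [(18, 41), (22, 39)]
# Each representation n = p^2 + q^2 yields a split into p^2 + q^2 = n congruent triangles.
```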
{ "source": [ "https://math.stackexchange.com/questions/3013709", "https://math.stackexchange.com", "https://math.stackexchange.com/users/614287/" ] }
3,014,981
I am a senior in high school so I know I am simply misunderstanding something but I don't know what, please have patience. I was tasked to find the derivative for the following function: $$ y = \frac{ (4x)^{1/5} }{5} + { \left( \frac{1}{x^3} \right) } ^ {1/4} $$ Simplifying: $$ y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{1 ^ {1/4}}{x ^ {3/4}} } $$ $$ y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{\pm 1}{x ^ {3/4}} } $$ because $1^{1/n} = \pm 1$ when $n$ is even. $$ y = \frac{ 4^{1/5} }{5} x^{1/5} \pm { x ^ {-3/4} } $$ Taking the derivative using the power rule: $$ \frac{dy}{dx} = \frac{ 4^{1/5} }{25} x^{-4/5} \pm \frac{-3}{4} { x ^ {-7/4} } $$ which is the same as $$ \frac{dy}{dx} = \frac{ 4^{1/5} }{25} x^{-4/5} \mp \frac{3}{4} { x ^ {-7/4} } $$ And that is the part that I find difficult to understand. I know that I should be adding the second term (I graphed it multiple times to make sure), but I cannot catch my error and my teacher didn't want to discuss it. So I know I am doing something wrong because one function cannot have more than one derivative.
The $(\cdot)^{\frac{1}{4}}$ operation has to be understood as a function. A function can only have one image for any argument. Depending upon how you interpret the fourth root, the image could be positive or negative. But once you set how you interpret your function (positive or negative valued), you have to stick with that interpretation throughout. When you write $y = \frac{ 4^{1/5} }{5} x^{1/5} \pm { \frac{1}{x ^ {3/4}} }$ , you are working with both interpretations simultaneously. In other words, when you differentiate, you don't get two derivatives for one function, rather two derivatives corresponding to two different functions, one $y = \frac{ 4^{1/5} }{5} x^{1/5} + { \frac{1}{x ^ {3/4}} }$ , and the other, $y = \frac{ 4^{1/5} }{5} x^{1/5} - { \frac{1}{x ^ {3/4}} }$ .
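A quick numerical check (my own sketch, not part of the answer) makes the same point: under the principal-root interpretation $\left(1/x^3\right)^{1/4}=x^{-3/4}$ for $x>0$, the function has exactly one derivative, and it matches the "$-\frac34 x^{-7/4}$" branch:

```python
import math

def f(x):
    # y = (4x)^(1/5)/5 + (1/x^3)^(1/4), principal (positive real) roots, x > 0
    return (4*x)**0.2 / 5 + (1/x**3)**0.25

def fprime(x):
    # closed form under the same principal-branch interpretation
    return 4**0.2 / 25 * x**-0.8 - 0.75 * x**-1.75

x0, h = 2.0, 1e-6
central = (f(x0 + h) - f(x0 - h)) / (2*h)   # central-difference estimate
print(central, fprime(x0))
assert abs(central - fprime(x0)) < 1e-7
```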
{ "source": [ "https://math.stackexchange.com/questions/3014981", "https://math.stackexchange.com", "https://math.stackexchange.com/users/619996/" ] }
3,016,718
This is a popular problem spreading around. Solve for the shaded reddish/orange area. (more precisely: the area in hex color #FF5600 ) $ABCD$ is a square with a side of $10$ , $APD$ and $CPD$ are semicircles, and $ADQB$ is a quarter circle. The problem is to find the shaded area $DPQ$ . I was able to solve it with coordinate geometry and calculus, and I verified the exact answer against a numerical calculation on Desmos . Ultimately the result is 4 terms and not very complicated. So I was wondering: Is there was a way to solve this using trigonometry? Perhaps there is a way to decompose the shapes I am not seeing. A couple of years ago there was a similar "Find the shaded area" problem for Chinese students . I was able to solve that without calculus, even though it was quite an involved calculation. Disclosure: I run the YouTube channel MindYourDecisions. I plan to post a video on this topic. I'm okay posting only the calculus solution, but it would be nice to post one using only trigonometry as many have not taken calculus. I will give proper credit to anyone that helps, thanks! Update : Thanks for everyone's help! I prepared a video for this and presented 3 methods of solving it (the short way like Achille Hui's answer, a slightly longer way like David K and Seyed's answer, and a third way using calculus). I thanked those people in the video on screen, see around 1:30 in this link: https://youtu.be/cPNdvdYn05c .
The area can be simplified to $75\tan^{-1}\left(\frac12\right) - 25 \approx 9.773570675060455$. It comes down to finding the areas of lenses $DP$ and $DQ$ and taking the difference. What you need is the area of the lens formed by intersecting two circles, one centered at $(a,0)$ with radius $a$, another centered on $(0,b)$ with radius $b$. It is given by the expression $$\begin{align}\Delta(a,b) \stackrel{def}{=} & \overbrace{a^2\tan^{-1}\left(\frac{b}{a}\right)}^{I} + \overbrace{b^2\tan^{-1}\left(\frac{a}{b}\right)}^{II} - ab\\ = & (a^2-b^2) \tan^{-1}\left(\frac{b}{a}\right) + \frac{\pi}{2} b^2 - ab \end{align} $$ In the above expression, $I$ is the area of the circular sector spanned by the lens at $(a,0)$ (as a convex hull), $II$ is the area of the circular sector spanned by the lens at $(0,b)$ (as a convex hull), and $ab$ is the area of the union of these two sectors, a right-angled kite with sides $a$ and $b$. Applying this to the problem at hand, we get $$\begin{align}\verb/Area/(DPQ) &= \verb/Area/({\rm lens}(DQ)) - \verb/Area/({\rm lens}(DP))\\[5pt] &= \Delta(10,5) - \Delta(5,5)\\ &= \left((10^2-5^2)\tan^{-1}\left(\frac12\right) + 5^2\cdot\frac{\pi}{2} - 5\cdot 10\right) - \left( 5^2\cdot\frac{\pi}{2} - 5^2\right)\\ &= 75\tan^{-1}\left(\frac12\right) - 25 \end{align} $$
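The $\Delta(a,b)$ formula can be double-checked against a crude grid count of the lens area; a sketch (mine, with a deliberately loose tolerance for the grid estimate):

```python
import math

def lens(a, b):
    # Delta(a,b): lens of circle radius a centered (a,0) and circle radius b centered (0,b)
    return (a*a - b*b) * math.atan2(b, a) + math.pi/2 * b*b - a*b

def lens_grid(a, b, n=1000):
    # midpoint-rule cell count over the box [0, 2b] x [0, 2b], which contains the lens for b <= a
    h = 2*b / n
    inside = 0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if (x - a)**2 + y*y <= a*a and x*x + (y - b)**2 <= b*b:
                inside += 1
    return inside * h * h

area = lens(10, 5) - lens(5, 5)           # shaded region DPQ
print(area, 75*math.atan(0.5) - 25)       # both are approximately 9.7736
assert abs(area - (75*math.atan(0.5) - 25)) < 1e-9
assert abs(lens_grid(10, 5) - lens(10, 5)) < 0.3
```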
{ "source": [ "https://math.stackexchange.com/questions/3016718", "https://math.stackexchange.com", "https://math.stackexchange.com/users/84271/" ] }
3,019,500
So I was watching a video that features astronomer and topologist Cliff Stoll talking about how figures that aren't quadrilaterals can have all their angles equal 90 degrees on different surfaces. For example, on a sphere, you can create a triangle that has all of its angles equal $90^\circ$ . On a pseudosphere, you can create a pentagon that has all of its angles equal $90^\circ$ . Now, here's my question. Is there a surface where a hexagon with this property is possible?
You would need a surface of negative curvature. It is best to use a hyperbolic plane for this, where you can easily fit any regular n-gon with given angles as long as the sum of its external angles is greater than 360 degrees. The problem is that the hyperbolic plane does not fit in Euclidean space. The pseudosphere is a small fragment of the hyperbolic plane. You can draw a right-angled hexagon on the pseudosphere only if you allow it to wrap over itself. (Edit: actually I am not completely sure about this; see here, you get a pseudosphere by cutting the part covered with white dots; it appears that a hexagon is slightly larger than the area covered by the pseudosphere, but I am not sure. Should be possible to prove.) You can also draw it on a Dini's surface -- that is basically an unrolled pseudosphere where you have several layers, and thus you avoid the intersection problem. But it would be hard to see anything because it is rolled very tightly. See here. Less smooth, but probably the best way would be to use something similar to a hyperbolic crochet. See our computer simulation (arrow keys to rotate).
{ "source": [ "https://math.stackexchange.com/questions/3019500", "https://math.stackexchange.com", "https://math.stackexchange.com/users/620797/" ] }
3,019,506
I am stuck on this problem during my review for my stats test. I know I have to use the convolution formula, and I understand that: $f_{U_1}(U_1) = 1$ for $0≤U_1≤1$ $f_{U_2}(U_2) = 1$ for $0≤U_2≤1$ but I do not know how to continue on from there. How do I use the convolution formula in this question? Thanks
{ "source": [ "https://math.stackexchange.com/questions/3019506", "https://math.stackexchange.com", "https://math.stackexchange.com/users/543185/" ] }
3,019,535
I am currently working on understanding trig identities. A question has me stumped, and no matter how I look at it, it never leads to the proof. I believe I am making a mistake when dividing multiple fractions. $$ \frac{\csc x + \cot x}{\tan x + \sin x} = \cot x\csc x $$ For my first step I break up the $\csc x$ and $\cot x$ in the numerator and add them together to make: $$\frac{\frac{1+\cos x}{\sin x\cos x}}{\tan x+\sin x}$$ I then simplify further and end up at: $$ \frac{\cos x+\cos^2 x}{\sin^2 x\cos^2 x} $$ From here on I don't see any identities, or possible ways to decompose this further.
{ "source": [ "https://math.stackexchange.com/questions/3019535", "https://math.stackexchange.com", "https://math.stackexchange.com/users/392187/" ] }
3,019,766
$$\sum_{n=1}^\infty\frac{\cot(\pi n\sqrt{61})}{n^3}=-\frac{16793\pi^3}{45660\sqrt{61}}.$$ Prove it converges and, evaluate the series. For the first part of the question, I prove it converges by considering the irrationality measure, $$|\sin (n\pi\sqrt{61})|=|\sin (n\pi\sqrt{61}-m\pi)|\ge \frac{2}{\pi}|n\pi\sqrt{61}-m\pi|\ge 2n\left|\sqrt{61}-\frac{m}{n}\right|>\frac{C}{n},$$ so $$\left|\frac{\cot(\pi n\sqrt{61})}{n^3}\right|\le\left|\frac{1}{n^3\sin(\pi n\sqrt{61})}\right|<\frac{C}{n^2}.$$ How to evaluate the series? I have found a related question . I apologize for incorrect information in the previous post.
In general $$F(\alpha) = \sum_{n=1}^\infty \frac{\cot(n\pi \alpha)}{n^3}$$ converges and can be explicitly calculated when $\alpha$ is a quadratic irrational. The convergence in this case is easily seen as $\alpha$ has irrationality measure $2$. More precisely, $F(\alpha)/\pi^3 \in \mathbb{Q}(\alpha)$ when $\alpha$ is quadratic irrational. The procedure below also works when $n^3$ is replaced by any $n^{2k+1}$. Let $$g(z) = \frac{\cot(z\pi \alpha)\cot(z\pi)}{z^3}$$ Then $g$ has simple poles at non-zero integer multiples of $1$ and $1/\alpha$, and a fifth-order pole at $0$. Let $R_N$ denote a large rectangle with corners at $N(\pm 1 \pm i)$. Then contour integration gives $$\tag{1}\sum_{\substack{n\in R_N \\ n\neq 0}} \frac{\cot(n\pi\alpha)}{\pi n^3} + \sum_{\substack{n/\alpha\in R_N \\ n\neq 0}} \frac{\alpha^2\cot(n\pi/\alpha)}{\pi n^3}-\frac{\pi ^2 \left(\alpha^4-5 \alpha^2+1\right)}{45 \alpha} = \frac{1}{2\pi i} \int_{R_N} g(z)dz$$ I claim there exists a sequence of integers $N_1, N_2, \cdots$ such that the RHS tends to $0$. Note that $\cot(z\pi)$ is uniformly bounded on the annulus $R_{N+3/4} - R_{N+1/4}$ when $N$ is an integer. Hence by equidistribution of $n\alpha$ modulo $1$, we can find integers $N_i$ such that both $\cot(z\pi\alpha)$ and $\cot(z\pi)$ are uniformly bounded on $R_{N_i+3/4} - R_{N_i+1/4}$. Since we already know the series converges, from $(1)$: $$\tag{2}F(\alpha) + \alpha^2F(\frac{1}{\alpha}) = \underbrace{\frac{\pi ^3 \left(\alpha^4-5 \alpha^2+1\right)}{90 \alpha}}_{\rho(\alpha)}$$ Note that obviously $F(\alpha+1)=F(\alpha)$.
Let the continued fraction expansion of $\alpha$ be given by $$\alpha = [a_0;a_1,a_2,\cdots]$$ Successive complete quotients are denoted by: $$\zeta_0 = [a_0;a_1,a_2,\cdots]\qquad \zeta_1 = [a_1;a_2,a_3,\cdots]\qquad \zeta_2 = [a_2;a_3,a_4,\cdots]$$ Then $(2)$ and periodicity imply for $k\geq 0$: $$\tag{3} F(\zeta_{k+1}) + \zeta_{k+1}^2 F(\zeta_k) = \rho(\zeta_{k+1})$$ If the continued fraction of $\alpha$ is of the form $$\alpha = [a_0;a_1,\cdots,a_m,\overline{b_1,\cdots,b_r}]$$ then $\zeta_{m+r+1} = \zeta_{m+1}$, so we eventually enter a cycle. $(3)$ gives a system of $m+r+1$ linear equations (by setting $k=0,\cdots,m+r$), with $m+r+1$ variables: $F(\zeta_0), F(\zeta_1),\cdots,F(\zeta_{m+r})$. $$\begin{cases} F(\zeta_1) + \zeta_1^2 F(\zeta_0) &= \rho(\zeta_1) \\ F(\zeta_2) + \zeta_2^2 F(\zeta_1) &= \rho(\zeta_2) \\ \cdots \\ F(\zeta_{m+1}) + \zeta_{m+1}^2 F(\zeta_{m+r}) &= \rho(\zeta_{m+1}) \end{cases}$$ Solving it gives the value of $F(\zeta_0)=F(\alpha)$. For $\alpha = \sqrt{61} = [7;\overline{1,4,3,1,2,2,1,3,4,1,14}]$, we have $$\begin{aligned} \zeta_0 = \sqrt{61} \qquad \zeta_1 &= \frac{1}{12}(7+\sqrt{61}) \\ \zeta_2 = \frac{1}{3}(5+\sqrt{61}) \qquad \zeta_3 &= \frac{1}{4}(7+\sqrt{61})\\ \zeta_4 = \frac{1}{9}(5+\sqrt{61}) \qquad \zeta_5 &= \frac{1}{5}(4+\sqrt{61})\\ \zeta_6 = \frac{1}{5}(6+\sqrt{61}) \qquad \zeta_7 &= \frac{1}{9}(4+\sqrt{61})\\ \zeta_8 = \frac{1}{4}(5+\sqrt{61}) \qquad \zeta_9 &= \frac{1}{3}(7+\sqrt{61}) \\ \zeta_{10} = \frac{1}{12}(5+\sqrt{61}) \qquad \zeta_{11} &= 7+\sqrt{61} \end{aligned}$$ (so that $\zeta_{12}=\zeta_1$), and solving the above system gives the result. A few examples: for $\alpha = (1+\sqrt{5})/2$, the continued fraction has period $1$; direct substitution into $(2)$ gives $$\sum_{n=1}^\infty \frac{\cot(n\pi \frac{1+\sqrt{5}}{2})}{n^3} = -\frac{\pi ^3}{45 \sqrt{5}}$$ The complexity of the result increases as the period of $\alpha$ increases.
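The whole procedure is easy to run numerically for $\alpha=\sqrt{61}$. The sketch below (mine; it assumes numpy is available) computes the complete quotients with the exact integer surd recurrence $\zeta_k=(m_k+\sqrt{61})/d_k$, builds the cyclic linear system $(3)$, and recovers the value $-\frac{16793\pi^3}{45660\sqrt{61}}$ stated in the question:

```python
import numpy as np

N, PERIOD = 61, 11          # sqrt(61) = [7; 1,4,3,1,2,2,1,3,4,1,14 repeating]

# Complete quotients zeta_k = (m_k + sqrt(N)) / d_k via the exact surd recurrence
# (m and d stay integers, so no floating-point drift accumulates step to step).
m, d = 0, 1
zetas = []
for _ in range(PERIOD + 2):               # zeta_0 .. zeta_12
    z = (m + np.sqrt(N)) / d
    zetas.append(z)
    a = int(z)                            # partial quotient floor(zeta_k)
    m, d = a*d - m, (N - (a*d - m)**2) // d

assert abs(zetas[PERIOD + 1] - zetas[1]) < 1e-9   # periodicity: zeta_12 == zeta_1

def rho_over_pi3(a):                      # rho(alpha)/pi^3 = (a^4 - 5a^2 + 1)/(90a)
    return (a**4 - 5*a**2 + 1) / (90*a)

# System (3): x_{k+1} + zeta_{k+1}^2 x_k = rho(zeta_{k+1}), k = 0..11, with x_12 = x_1;
# unknowns x_k = F(zeta_k)/pi^3.
n = PERIOD + 1
A, b = np.zeros((n, n)), np.zeros(n)
for k in range(n):
    nxt = k + 1 if k + 1 < n else 1
    A[k, nxt] += 1.0
    A[k, k] += zetas[k + 1]**2
    b[k] = rho_over_pi3(zetas[k + 1])
x = np.linalg.solve(A, b)
print(x[0] * np.pi**3)                    # F(sqrt(61)), approximately -1.46009
assert np.isclose(x[0], -16793 / (45660 * np.sqrt(61)))
```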
For $\alpha = \sqrt{211}$, which has period $26$: $$\sum_{n=1}^\infty \frac{\cot(n\pi \sqrt{211})}{n^3} = \frac{128833758679 \pi ^3}{383254107060 \sqrt{211}}$$ For $\alpha = \sqrt{1051}$, with period $50$: $$\sum_{n=1}^\infty \frac{\cot(n\pi \sqrt{1051})}{n^3} = \frac{47332791433774124737806821 \pi ^3}{589394448213331173141730140 \sqrt{1051}}$$ When $\alpha$ is not a "pure" quadratic irrational, the result involves a "constant term" (because of the non-trivial automorphism of $\mathbb{Q}(\alpha)$): $$\sum_{n=1}^\infty \frac{\cot(n\pi(\frac{1}{4}+\frac{\sqrt{7}}{3}))}{n^3} = \frac{13 \pi ^3}{288}+\frac{104771 \pi ^3}{1244160 \sqrt{7}}$$ The closed form here follows immediately by noting $\csc x = \cot (x/2) - \cot x$. I wrote some Mathematica code to evaluate this sum. The command cotsum[Sqrt[61]] evaluates the sum in the question. You can try other quadratic irrationals as well. This algorithm can be made more efficient, but I don't have much motivation to optimize it.

cotsum[x_] /; QuadraticIrrationalQ[x] :=
 Module[{a1 = x, list, l, r, i, nlist, solution, output, equation, string},
  list = ContinuedFraction[a1];
  l = Length[list] - 1;
  r = Length[list[[l + 1]]];
  Global`f[a_] := (1 - 5 a^2 + a^4)/90/a;
  i = 1;
  string = "{";
  While[i < l + r + 1, string = string <> "x" <> ToString[i] <> ","; i++];
  string = StringTake[string, StringLength[string] - 1] <> "}";
  Do[Evaluate[ToExpression["a" <> ToString[i + 1]]] =
    FromContinuedFraction[Drop[list, i]], {i, 1, l - 1}];
  nlist = list[[l + 1]];
  Do[Evaluate[ToExpression["a" <> ToString[i + l + 1]]] =
    FromContinuedFraction[{Flatten[Append[Drop[nlist, i], Take[nlist, i]]]}],
   {i, 0, r - 1}];
  equation = Table[ToExpression[
     "x" <> ToString[i + 1] <> "+a" <> ToString[i + 1] <> "^2*x" <>
      ToString[i] <> "==f[a" <> ToString[i + 1] <> "]"], {i, 1, r + l - 1}];
  equation = Append[equation, ToExpression[
     "x" <> ToString[l + 1] <> "+a" <> ToString[l + 1] <> "^2*x" <>
      ToString[r + l] <> "==f[a" <> ToString[l + 1] <> "]"]];
  solution = Solve[equation, ToExpression[string]];
  Clear["a*"];
  output = (ToExpression[string][[1]] /. Flatten[solution])*Pi^3;
  Clear[f];
  FullSimplify[output]]
{ "source": [ "https://math.stackexchange.com/questions/3019766", "https://math.stackexchange.com", "https://math.stackexchange.com/users/394456/" ] }
3,023,500
Let $F_n$ denote the $n$ th Fibonacci number , adopting the convention $F_1=1$ , $F_2=1$ and so on. Consider the $n\times n$ matrix defined by $$\mathbf M_n:=\begin{bmatrix}F_1&F_2&\dots&F_n\\F_{n+1}&F_{n+2}&\dots&F_{2n}\\\vdots&\vdots&\ddots&\vdots\\F_{n^2-n+1}&F_{n^2-n+2}&\dots&F_{n^2}\end{bmatrix}.$$ I have the following conjecture: Conjecture. For all integers $n\geq3$ , $\det\mathbf M_n=0$ . I have used some Python code to test this conjecture for $n$ up to $9$ , but I cannot go further. Note that $\det\mathbf M_1=\det\mathbf M_2=1$ . Due to the elementary nature of this problem I have to assume that it has been discussed before, perhaps even on this site. But I couldn't find any reference on it, by Googling or searching here. Can someone shed light onto whether the conjecture is true, and a proof of it if so?
Here's a hint: what's the relationship between $F_{k+1}+F_{k+2}$ and $F_{k+3}$? What does that say about the 1st, 2nd, and 3rd columns of this matrix?
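Spelling the hint out computationally (my own sketch, using exact rational arithmetic so there is no floating-point doubt): since $F_{k+1}+F_{k+2}=F_{k+3}$, column 3 equals column 1 plus column 2, so $\det \mathbf M_n = 0$ for every $n \ge 3$.

```python
from fractions import Fraction

def fib(n):                     # F_1 = F_2 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def M(n):                       # M_n[i][j] = F_{i*n + j + 1}, 0-indexed
    return [[fib(i*n + j + 1) for j in range(n)] for i in range(n)]

def det(mat):
    # exact Gaussian elimination over the rationals
    m = [[Fraction(v) for v in row] for row in mat]
    size, result = len(m), Fraction(1)
    for c in range(size):
        piv = next((r for r in range(c, size) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            result = -result
        result *= m[c][c]
        for r in range(c + 1, size):
            factor = m[r][c] / m[c][c]
            for k in range(c, size):
                m[r][k] -= factor * m[c][k]
    return result

for n in range(3, 8):
    rows = M(n)
    assert all(row[0] + row[1] == row[2] for row in rows)   # col1 + col2 == col3
    assert det(rows) == 0
assert det(M(1)) == 1 and det(M(2)) == 1
print("det M_n = 0 for n = 3..7; det M_1 = det M_2 = 1")
```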
{ "source": [ "https://math.stackexchange.com/questions/3023500", "https://math.stackexchange.com", "https://math.stackexchange.com/users/496634/" ] }
3,028,345
I was doing a consistency check for some calculations I'm performing for my master's thesis (roughly, about a problem in discrete Bayesian model selection), and it turns out that my choice of priors is only consistent if this identity is true: $$\sum_{k=0}^{N}\left[ \binom{k+j}{j}\binom{N-k+j}{j}\right] = \binom{N+2j+1}{2j+1}$$ Now, this seems to work for small numbers, but I searched for it a lot in the literature and I can't find it. (I have a physics background, so probably my knowledge of the proper "literature" is the problem here.) Nor can I prove it! Has anyone of you seen this before? Can this be rewritten equivalently in some more commonly seen way? Can it be proven right... or wrong? Thanks in advance! :)
Your identity is correct. I don't know a reference offhand, but here is a proof. The right side, $\binom{N+2j+1}{2j+1}$ , is the number of bitstrings of length $N+2j+1$ consisting of $N$ zeroes and $2j+1$ ones. The sum on the left counts the same set of bitstrings. Namely, for $0\le k\le N$ , the term $\binom{k+j}j\binom{N-k+j}j$ is the number of those bitstrings in which the middle one, with $j$ ones on either side, is in the $k+j+1^\text{st}$ position; i.e., it has $k$ zeroes and $j$ ones to the left, $N-k$ zeroes and $j$ ones to the right. P.S. I found your identity in László Lovász, Combinatorial Problems and Exercises , North-Holland, 1979 (the first edition), where it is Exercise 1.42(i) on p. 18, with hint on p. 96 and solution on p. 172. Lovász gives the identity in the following (more general) form: $$\sum_{k=0}^m\binom{u+k}k\binom{v-k}{m-k}=\binom{u+v+1}m.$$ If we set $m=N$ , $u=j$ , $v=N+j$ , this becomes $$\sum_{k=0}^N\binom{j+k}k\binom{N+j-k}{N-k}=\binom{N+2j+1}N$$ which is plainly equivalent to your identity $$\sum_{k=0}^N\binom{k+j}j\binom{N-k+j}j=\binom{N+2j+1}{2j+1}.$$
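Both forms of the identity are cheap to verify exhaustively for small parameters (a quick sketch of mine using the standard-library `math.comb`):

```python
from math import comb

# sum_{k=0}^{N} C(k+j, j) C(N-k+j, j) == C(N+2j+1, 2j+1)
for Nv in range(0, 15):
    for j in range(0, 9):
        lhs = sum(comb(k + j, j) * comb(Nv - k + j, j) for k in range(Nv + 1))
        assert lhs == comb(Nv + 2*j + 1, 2*j + 1)

# Lovasz's form: sum_{k=0}^{m} C(u+k, k) C(v-k, m-k) == C(u+v+1, m), checked for v >= m
for mv in range(0, 11):
    for u in range(0, 8):
        for v in range(mv, 12):
            lhs = sum(comb(u + k, k) * comb(v - k, mv - k) for k in range(mv + 1))
            assert lhs == comb(u + v + 1, mv)
print("both identities verified on small ranges")
```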
{ "source": [ "https://math.stackexchange.com/questions/3028345", "https://math.stackexchange.com", "https://math.stackexchange.com/users/518213/" ] }
3,028,346
Let $P(X)=X^4+1 \in \mathbb{Q[X]}$ . Find the splitting field for $P$ over $\mathbb{C}$ and determine the degree of it over $\mathbb{C}$ . My attempt: Roots of $P$ are $\alpha_1 = \sqrt{i},\alpha_2=-\sqrt{i},\alpha_3=i^{3/2},\alpha_4=-i^{3/2}$ Now the splitting field is $\mathbb{Q}(\sqrt{i},i)$ . Since, $i$ has minimal polynomial of degree 2, $\sqrt{i}$ has minimal polynomial of degree 4, thus $[\mathbb{Q}(\sqrt{i},i):\mathbb{Q}]=[\mathbb{Q}(\sqrt{i}):\mathbb{Q}]=4$ Is there a more elegant argument? Can the roots of $P$ be expressed in a better form (analogue to roots of unity for $X^n-1$ )?
{ "source": [ "https://math.stackexchange.com/questions/3028346", "https://math.stackexchange.com", "https://math.stackexchange.com/users/239181/" ] }
3,028,868
(Don't be alarmed by the title; this is a question about mathematics, not programming.) In the documentation for the decimal module in the Python Standard Library, an example is given for computing the digits of $\pi$ to a given precision:

def pi():
    """Compute Pi to the current precision.

    >>> print(pi())
    3.141592653589793238462643383

    """
    getcontext().prec += 2  # extra digits for intermediate steps
    three = Decimal(3)      # substitute "three=3.0" for regular floats
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n+na, na+8
        d, da = d+da, da+32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s               # unary plus applies the new precision

I was not able to find any reference for what formula or fact about $\pi$ this computation uses, hence this question. Translating from code into more typical mathematical notation, and using some calculation and observation, this amounts to a formula for $\pi$ that begins like: $$\begin{align}\pi &= 3+\frac{1}{8}+\frac{9}{640}+\frac{15}{7168}+\frac{35}{98304}+\frac{189}{2883584}+\frac{693}{54525952}+\frac{429}{167772160} + \dots\\ &= 3\left(1+\frac{1}{24}+\frac{1}{24}\frac{9}{80}+\frac{1}{24}\frac{9}{80}\frac{25}{168}+\frac{1}{24}\frac{9}{80}\frac{25}{168}\frac{49}{288}+\frac{1}{24}\frac{9}{80}\frac{25}{168}\frac{49}{288}\frac{81}{440}+\frac{1}{24}\frac{9}{80}\frac{25}{168}\frac{49}{288}\frac{81}{440}\frac{121}{624}+\frac{1}{24}\frac{9}{80}\frac{25}{168}\frac{49}{288}\frac{81}{440}\frac{121}{624}\frac{169}{840}+\dots\right) \end{align}$$ or, more compactly, $$\pi = 3\left(1 + \sum_{n=1}^{\infty}\prod_{k=1}^{n}\frac{(2k-1)^2}{8k(2k+1)}\right)$$ Is this a well-known formula for $\pi$? How is it proved? How does it compare to other methods, in terms of how quickly it converges, numerical stability issues, etc.? At a glance I didn't see it on the Wikipedia page for List of formulae involving π or on the MathWorld page for Pi Formulas.
That is the Maclaurin series of $\arcsin(x)$, evaluated at $x=1/2$ and multiplied by $6$: since $\arcsin\left(\frac12\right)=\frac{\pi}{6}$, the series sums to $6\arcsin\left(\frac12\right)=\pi$.
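Numerically the product form converges geometrically (the term ratio tends to $1/4$), and a few dozen terms already reproduce $\pi$ to double precision; a quick sketch:

```python
import math

# pi = 3 * (1 + sum_{n>=1} prod_{k=1}^{n} (2k-1)^2 / (8k(2k+1)))
s, term = 1.0, 1.0
for k in range(1, 60):
    term *= (2*k - 1)**2 / (8*k*(2*k + 1))   # each ratio is < 1/4
    s += term
approx = 3 * s
print(approx, math.pi)
assert abs(approx - math.pi) < 1e-12
assert abs(6 * math.asin(0.5) - math.pi) < 1e-12   # the arcsin connection
```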
{ "source": [ "https://math.stackexchange.com/questions/3028868", "https://math.stackexchange.com", "https://math.stackexchange.com/users/205/" ] }
3,030,191
This is a question about a mathematical concept, but I think I will be able to ask the question better with a little bit of background first. Warren Buffett famously provided 2 rules to investing: Rule No. 1: Never lose money. Rule No. 2: Never forget rule No. 1. I initially took this quote as tongue-in-cheek. Duh, of course you don't want to lose money. But after better educating myself in the world of investing I see this quote more as words from the wise sensei of investing. It means more than just be careful, or be conservative. Losing money can destroy a portfolio because there is a mathematical disadvantage. Consider two hedge fund managers: Mr. Turtle and Mr. Hare. Mr. Turtle is steady, he doesn't have high returns, but he also doesn't lose money. Mr. Hare is aggressive, getting huge returns, but occasionally losing money. Here are their returns over the past 5 years Mr. Hare has a higher average rate of return. Further, he has made (significantly) more money in 4 out of the 5 years. Mr. Turtle, however, has made his clients more money overall in the same timeframe. This seems counter-intuitive. I would think you would want to maximize your average rate of return at all costs, but it's not that simple. Why? Why does one negative input have such a significant impact on an exponential growth function? Why doesn't the average rate of growth always lead to the largest possible result? How does one explain this (hopefully in layman's terms)?
There are two things I should point out. One is that the arithmetic mean doesn't properly measure annual growth rate. The second is why. The correct calculation for average annual growth is geometric mean. Let $r_1,r_2,r_3,\ldots,r_n$ be the yearly growth of a particular investment/portfolio/whatever. Then if you invest $P$ into this investment, after $n$ years your final amount of money is $Pr_1r_2\cdots r_n$ . The (yearly) average growth rate of this investment is the number $r$ such that if the investment grew at a constant rate of $r$ every year then after $n$ years we'd have the same amount as we actually ended up with. In other words it is $r$ such that $Pr_1r_2\cdots r_n=Pr^n$ . Thus we have $$r=\sqrt[n]{r_1r_2r_3\cdots r_n},$$ which is the geometric mean, not the arithmetic mean. If we use the geometric mean, we see that Turtle's average yearly growth is $\sqrt[5]{1.39}\approx 1.07$ , and Hare's average yearly growth rate is $\sqrt[5]{1.36}\approx 1.06$ , which is more in line with our expectations. Why doesn't the arithmetic mean behave as expected? Well, let's look at something over two years. Say its arithmetic mean growth is 1. Then the growth rate for one year will be $1+x$ and the other year will be $1-x$ . Multiplying these together, we see that total growth is $1-x^2$ . In other words, actual growth is always less than or equal to that predicted by the arithmetic mean (this is true for $n$ years as well, see the AM-GM inequality ). Note further that the actual growth is closer to that predicted by the arithmetic mean when the individual annual growth rates are closer together. Thus if you are more consistent (your annual growth rates are closer together) then your arithmetic mean growth rate will closely approximate your true average annual growth rate as in Turtle's case. 
On the other hand, if your annual growth rates are more spread out, then your true average annual growth rate will be much lower than the arithmetic average growth rate (as in Hare's case).
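The yearly table in the question is an image, so the growth factors below are invented stand-ins (mine, purely illustrative). They show the effect the answer describes: volatile returns drag the geometric mean well below the arithmetic mean, while steady returns keep the two close.

```python
# Hypothetical annual growth factors (1.25 means +25%, 0.70 means -30%).
turtle = [1.07, 1.06, 1.07, 1.06, 1.07]
hare   = [1.25, 1.30, 1.22, 1.28, 0.70]

def arith_mean(rates):
    return sum(rates) / len(rates)

def geom_mean(rates):
    product = 1.0
    for r in rates:
        product *= r
    return product ** (1.0 / len(rates))

for name, rates in [("Turtle", turtle), ("Hare", hare)]:
    print(name, round(arith_mean(rates), 4), round(geom_mean(rates), 4))
    assert geom_mean(rates) <= arith_mean(rates) + 1e-12   # AM-GM inequality

# Hare wins on arithmetic mean, but its gap between arithmetic and true
# (geometric) growth is far larger than Turtle's:
assert arith_mean(hare) > arith_mean(turtle)
assert arith_mean(hare) - geom_mean(hare) > arith_mean(turtle) - geom_mean(turtle)
```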
{ "source": [ "https://math.stackexchange.com/questions/3030191", "https://math.stackexchange.com", "https://math.stackexchange.com/users/425855/" ] }
3,031,937
Can an uncountable group have only a countable number of subgroups? Please give examples if any exist! Edit: I want a group having uncountable cardinality but having a countable number of subgroups. By countable number of subgroups, I mean the collection of all subgroups of a group is countable.
No. Suppose $G$ is an uncountable group. Every element $g$ of $G$ belongs to a countable subgroup of $G$ , namely the cyclic subgroup $\langle g\rangle$ . Thus $G$ is the union of all of its countable subgroups. Since a countable union of countable sets is countable, $G$ must have uncountably many countable subgroups.
{ "source": [ "https://math.stackexchange.com/questions/3031937", "https://math.stackexchange.com", "https://math.stackexchange.com/users/517603/" ] }
3,031,968
As I was looking through Spotify, I noticed that I listened to 2040 minutes of music this year. I did as follows: 2040=204*10, and 60=6*10. Thus 2040/60= 204/6. Intuitively, I'm not able to see why both of these actually equal each other. I mean, the closest I'm able to get to an answer is to assume that this division problem is a ratio, and that I'm trying to find how many minutes listened to 1 "real" minute. However, this makes no physical sense. Can someone please provide some intuition? Thanks
{ "source": [ "https://math.stackexchange.com/questions/3031968", "https://math.stackexchange.com", "https://math.stackexchange.com/users/533510/" ] }
3,031,984
I have two questions regarding the development of mathematics: 1) Is there an example where in mathematics, a collaboration has led to the discovery of another result? I already know something like the Polymath project, or the Hilbert program and the Hamilton's program. 2) Is there an example of how individual secrecy and lone effort in mathematics has led to a breakthrough discovery without much contact with the math community? The ones I am aware of are like Andrew Wiles, and Ramanujan. I just want some more examples cause I was quite curious with regards to certain approaches to mathematical development.
{ "source": [ "https://math.stackexchange.com/questions/3031984", "https://math.stackexchange.com", "https://math.stackexchange.com/users/488748/" ] }
3,031,988
Consider the boolean hypercube $\{0,1\}^N$. For a set $I \subseteq \{1,2,\ldots,N\}$, we define the parity function $h_I$ as follows. For a binary vector $x = (x_1, x_2, \ldots, x_N) \in \{0,1\}^N$, $$h_I(x) = \bigg(\sum_{i\in I}x_i\bigg) \bmod 2$$ What is the VC dimension of the class of all such parity functions, $H_{N\text{-parity}} = \{h_I : I\subseteq \{1,2,\ldots, N\}\}$? [Courtesy: Shai Ben-David et al.]
{ "source": [ "https://math.stackexchange.com/questions/3031988", "https://math.stackexchange.com", "https://math.stackexchange.com/users/624430/" ] }
3,035,639
If we say fields $A$ and $B$ are isomorphic, does that just mean they are isomorphic as rings, or is there something else?
In a sense, yes, that is what it means. But not really. When we say two structures $S$ and $T$ of a certain type are isomorphic, we mean that there is a bijection $\varphi:S\rightarrow T$ which preserves the structure. So, for instance, if $\circ$ is a binary operation in the structure, then for $x,y\in S$ , we have $\varphi(x\circ y)=\varphi(x)\circ \varphi(y)$ . It turns out that preserving the ring structure is enough to preserve the field structure; a field is just a commutative ring in which every nonzero element has a multiplicative inverse, so the property of being a field is preserved if the operations $+$ and $\times$ are preserved. Thus two fields are isomorphic if and only if they are isomorphic when considered as rings. But this is a contingent fact, and it's not really what we mean when we say that two fields are isomorphic. I realise that this view verges on philosophy, and I wouldn't defend it to the death. I am just trying to give an idea of what mathematicians are thinking of when they say isomorphic .
{ "source": [ "https://math.stackexchange.com/questions/3035639", "https://math.stackexchange.com", "https://math.stackexchange.com/users/64460/" ] }
3,035,983
Here are some facts about myself: In 2017, I was $15$ years old. Canada, my country, was $150$ years old. When will be the next time that my country's age will be a multiple of mine? I've boiled this down to an equation. With $n$ being the number of years before this will happen and $m$ being any integer, $$\frac{150+n}{15+n}=m$$ How would you find $n$ ?
You want $\frac{150+n}{15+n}=m$ , and clearing denominators gives us $$150+n=(15+n)m.$$ Subtracting $15+n$ from both sides gives us $$135=(15+n)(m-1).$$ Now you are looking for the smallest $n\ge 1$ for which such an $m$ exists, i.e. the smallest $n\ge 1$ for which $15+n$ divides $135$ .
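The last step — searching for a divisor of $135$ larger than $15$ — is easy to automate. A small sketch (the function name and defaults are mine):

```python
def next_multiple_year(country_age=150, my_age=15):
    """Smallest n >= 1 such that (country_age + n) is a multiple of (my_age + n).

    Equivalently, since (150 + n) mod (15 + n) = 135 mod (15 + n), we are
    looking for the smallest n >= 1 with 15 + n a divisor of 135.
    """
    n = 1
    while (country_age + n) % (my_age + n) != 0:
        n += 1
    return n

print(next_multiple_year())   # 12: at age 27, Canada will be 162 = 6 * 27
```

The divisors of $135$ exceeding $15$ are $27, 45, 135$, so the answers are $n = 12, 30, 120$.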
{ "source": [ "https://math.stackexchange.com/questions/3035983", "https://math.stackexchange.com", "https://math.stackexchange.com/users/550374/" ] }
3,036,169
So I tried looking around for this question, but I didn't find much of anything - mostly unrelated-but-similarly-worded stuff. So either I suck at Googling or whatever but I'll get to the point. So far in my coursework, it seems like we've mostly taken for granted that $(\mathbb{R},+,\cdot)$ is a field. I'm not doubting that much, that would seem silly. However, my question is: how would one prove this? In particular, how would one prove that $(\mathbb{R},+)$ and $(\mathbb{R}\setminus \{0\}, \cdot)$ are closed under their respective operations? I understand the definition of closure, but to say "a real number plus/times a real number is a real number" seems oddly circular since, without demonstrating that, it essentially invokes the assumptions we're trying to prove. Obviously, there's something "more" to the definition of "real number" that would make proving this possible. Though I'm not sure what property would be used for this. One thought I dwelled on for a while was instead looking at what the real numbers are not . For example, they are numbers lacking those "imaginary" components you see in their higher-dimensional generalizations - the complex numbers ( $i$ ), quaternions ( $i,j,k$ ), and so on. But that didn't seem quite "right" to me? Like I'm not sure if it's really wrong, it just irked me in some way. Like it's simple enough to say "a real number is any complex number with a zero imaginary component," take two real numbers, show their imaginary parts sum/multiply to zero, and thus we have a real number. Maybe it's just a personal issue? Like I said - I'm not saying it's inherently wrong (it might be, though, I don't know - if it is, I would like to know why). Maybe it's just the whole idea of "defining a number by what's it's not " that bugs me. Like I said, I'm not really sure, and I think I'm rambling/unclear enough as it is, so I'll get straight to the point. 
In short, how does one properly demonstrate, if not in the above way, $$a,b \in \mathbb{R} \Rightarrow (a+b)\in \mathbb{R}$$ $$a,b\in \mathbb{R \setminus \{0\}} \Rightarrow (a\cdot b) \in \mathbb{R \setminus \{0\}}$$ (And again, I'm not at all doubting that these are true. I'm just curious as to how one would demonstrate these facts in the most appropriate manner since I don't believe it's come up in my coursework and I've been curious on how one would prove it for a few days now.)
As I seem to be so fond of saying, it all comes down to definitions. Specifically, before you can answer the question "Why are the real numbers closed under addition and multiplication?", one needs to answer the question "What is a real number?" There are lots of ways of doing this; I'll outline two common approaches. Axiomatic Approaches The real numbers can be defined axiomatically, i.e. we simply list the properties that we want the real numbers to have, then work with those properties directly. The "usual" axiomatization is to define the real numbers to be the (unique, up to isomorphism) totally ordered, Dedekind-complete field. Basically, if we let $\mathbb{R}$ denote the set of real numbers, then $\mathbb{R}$ is defined by the properties that: $\mathbb{R}$ is a field; that is, there are two binary operations $+$ and $\cdot$ defined on $\mathbb{R}$ which obey the usual field axioms; $\mathbb{R}$ is totally ordered; that is, there is a relation $\le$ defined on $\mathbb{R}$ allowing us to compare any two real numbers and (importantly) this order is compatible with the field operations in the sense that $x \le y \implies x+z \le y+z$ , and $0\le x,y \implies 0 \le xy$ . $\mathbb{R}$ is Dedekind-complete; that is, every nonempty subset of $\mathbb{R}$ that is bounded above has a least upper bound. If $\mathbb{R}$ is defined in this manner, then the answer to your question is trivial: $\mathbb{R}$ is a field because we have defined it to be so, and fields are closed under addition and multiplication. These properties are "baked in" to the definition of the real numbers. It is worth noting that other axiomatizations are possible. For example, Tarski's axiomatization , which defines $\mathbb{R}$ as a linearly ordered abelian group with some other nice properties. Under this axiomatization, $\mathbb{R}$ is, by definition, closed under addition, but no multiplication operation is defined a priori . 
I am not as familiar with this construction, but the above cited Wikipedia article suggests that Tarski was able to define a multiplication operation and show that it behaved as expected, making $\mathbb{R}$ into a field. I suspect that one could also go the route of defining the group $\mathbb{R}$ in terms of the Tarski axioms, then obtaining the field $\mathbb{R}$ as the field of fractions. A More Constructive Approach (In the interest of brevity, the following discussion is very loosey-goosey. It can mostly be made rigorous, but I'm not sure that it would be terribly useful to do so.) The downside of an axiomatic approach is that you are kind of asserting the existence of a mathematical object which has the properties that you want. It might be more satisfying to build the real numbers from more basic building blocks. We still need some axioms as a starting point, but we can start from a much more primitive place, say the Peano axioms or the ZFC axioms for set theory. From the Peano axioms , we get the natural numbers immediately, so let's start there. From this point of view, $\mathbb{N}$ is a fairly simple object: about the only thing we get from the definition is a successor operation and zero, which gives us an order on $\mathbb{N}$ , but not a lot else. However, from these, we can build up $\mathbb{N}$ as a monoid (there is an additive structure with identity, but no inverses). From the natural numbers, we can define the integers: first, we define an equivalence relation on $\mathbb{N}\times \mathbb{N}$ by declaring that $(a,b)\sim (c,d)$ if $a+d = c+b$ . The integers are then elements of the set $\mathbb{N}\times\mathbb{N}\,/\,{\sim}$ , i.e. equivalence classes modulo this relation. The basic idea is that $(a,b)$ represents the number $a-b$ (so $(0,b)$ is the integer $-b$ , for example). Since subtraction and negative numbers are not defined by default, everything gets done in terms of addition. When the dust settles, we get a group $(\mathbb{Z},+)$ . 
This group has all the nice additive properties which we expect from the integers, so we can now forget that the integers are "really" equivalence classes of ordered pairs of natural numbers. Note that $\mathbb{N}$ embeds nicely into $\mathbb{Z}$ , and that the order on $\mathbb{N}$ can be extended to $\mathbb{Z}$ . So, now we have an ordered group. Hoo-ray! On the integers, it is now possible to define a multiplication operation which is compatible with the addition operation and the order relation. For example, we can define multiplication here as repeated addition, then use some fancy induction. The problem with the integers is that most integers don't have multiplicative inverses, so we need to patch that up. We do this in much the same way that we got the integers. This time, we define an equivalence relation on pairs of integers $(p,q)$ with $q \ne 0$ by declaring that $(p,q)\sim (r,s)$ if $ps = qr$ , then mod out by this relation. This gives us the set $\mathbb{Q} := \mathbb{Z}\times(\mathbb{Z}\setminus\{0\})\,/\,{\sim}$ , i.e. the field of fractions of $\mathbb{Z}$ . Again, $\mathbb{Z}$ can be embedded into the rationals in a way that preserves all of the operations as well as the order. Finally, to get the real numbers, we want some kind of notion of completeness. The usual constructions are via Dedekind cuts or Cauchy sequences . At the end of the day, the two constructions can be shown to be equivalent, but, as a guy who does analysis on metric spaces, I am more comfortable with the construction via Cauchy sequences. Since this is the heart of your question, I'll be a little more detailed here. Let $|\cdot| : \mathbb{Q} \to \mathbb{Q}_{\ge 0}$ denote the usual absolute value on $\mathbb{Q}$ . What we would really like to do is use the absolute value to define a metric on $\mathbb{Q}$ , but we don't have the real numbers in hand yet, so this would be a little circular. 
However, the absolute value does define a valuation on $\mathbb{Q}$ , which is morally the same thing as a norm which can induce the moral equivalent of a metric. Definitions: Let $(a_n)_{n=1}^{\infty}$ be a sequence of rational numbers. We say that $(a_n)$ is Cauchy if for any rational number $\varepsilon > 0$ , there exists a natural number $N$ such that $m,n \ge N$ implies that $|a_m - a_n| < \varepsilon$ . We say that $(a_n)$ converges to a limit $L\in\mathbb{Q}$ if for any rational $\varepsilon> 0$ , there exists some $N$ such that $n\ge N$ implies that $|a_n-L|<\varepsilon$ . We would like to live in a world where Cauchy sequences converge to limits. However, there are examples of Cauchy sequences in $\mathbb{Q}$ which do not converge in $\mathbb{Q}$ . Consider, for example, the sequence of partial sums $$ S_N := \sum_{n=0}^{N} \frac{1}{n!}, $$ which is Cauchy but has no rational limit. To rectify this situation, we once again build an equivalence relation. Definition: We say that a sequence $(a_n)$ is a null sequence if $a_n$ converges to $0$ . That is, for any rational $\varepsilon > 0$ , there exists some $N$ such that $n \ge N$ implies that $|a_n| < \varepsilon$ . Now we declare an equivalence relation on the set of Cauchy sequences. Specifically, we say that $(a_n) \sim (b_n)$ if the sequence $(a_n - b_n)$ is a null sequence. It is a little bit of work to show that this is an equivalence relation, but it works out. Hoo-ray! We can now define $\mathbb{R}$ to be $\mathscr{C}(\mathbb{Q})\,/\,{\sim}$ , where $\mathscr{C}(\mathbb{Q})$ denotes the set of Cauchy sequences. A real number is then an equivalence class of Cauchy sequences. To finish the construction, we need an order relation, as well as addition and multiplication operations. Since your question is about the additive and multiplicative closure of $\mathbb{R}$ , I'll ignore the order relation and note only that addition and multiplication are defined pointwise, i.e. 
Definition: Let $a = [(a_n)]$ and $b = [(b_n)]$ be two equivalence classes of Cauchy sequences represented by $(a_n)$ and $(b_n)$ , respectively. Then define $a+b := [(a_n + b_n)]$ , and $a\cdot b := [(a_n \cdot b_n)]$ . To show that the reals are closed, it is necessary to show that $(a_n + b_n)$ and $(a_n \cdot b_n)$ are both Cauchy (and, strictly speaking, that the result does not depend on the choice of representatives). This isn't too bad to do, so I'll leave it as an exercise. Once the exercise is completed, the question is answered. :)
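The exercise can at least be probed experimentally with exact rational arithmetic. In this sketch (my own illustration, not a proof), two Cauchy sequences in $\mathbb{Q}$ — the partial sums converging to $e$, and Heron iterates converging to $\sqrt{2}$ — are added and multiplied pointwise, and the successive gaps of the resulting sequences visibly shrink:

```python
from fractions import Fraction

def e_partial_sums(k):
    """First k partial sums of sum 1/n!, a Cauchy sequence in Q."""
    out, term, total = [], Fraction(1), Fraction(0)
    for n in range(k):
        total += term
        out.append(total)
        term /= (n + 1)
    return out

def sqrt2_heron(k):
    """First k Heron iterates for sqrt(2), another Cauchy sequence in Q."""
    out, x = [], Fraction(2)
    for _ in range(k):
        x = (x + 2 / x) / 2
        out.append(x)
    return out

a = e_partial_sums(8)
b = sqrt2_heron(8)
s = [x + y for x, y in zip(a, b)]   # pointwise sum
p = [x * y for x, y in zip(a, b)]   # pointwise product

for seq in (s, p):
    gaps = [abs(seq[i + 1] - seq[i]) for i in range(len(seq) - 1)]
    # successive differences shrink, as they must for a Cauchy sequence
    assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))
```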
{ "source": [ "https://math.stackexchange.com/questions/3036169", "https://math.stackexchange.com", "https://math.stackexchange.com/users/597568/" ] }
3,038,712
What proportion of positive integers have two factors that differ by 1? This question occurred to me while trying to figure out why there are 7 days in a week. I looked at 364, the number of whole days closest to a year (there are about 365.2422 days in a year, iirc). Since $364 = 2\cdot 2 \cdot 7 \cdot 13$ , the possible numbers that evenly divide 364 are 2, 4, 7, 13, 14, 26, 28, and larger. Given this, 7 looks reasonable - 2 and 4 are too short and 13 is too long. Anyway, I noticed that 13 and 14 are there, and wondered how often this happens. I wasn't able to figure out a nice way to specify the probability (as in a Hardy-Littlewood product), and wasn't able to do it from the inverse direction (i.e., sort of a sieve with n(n+1) going into the array of integers). Ideally, I would like an asymptotic function f(x) such that $\lim_{n \to \infty} \dfrac{\text{number of such integers that are} \ge 2 \text{ and } \le nx}{n} =f(x) $ or find $c$ such that $\lim_{n \to \infty} \dfrac{\text{number of such integers that are} \ge 2 \text{ and } \le n}{n} =c $ . My guess is that, in the latter case, $c = 0$ or 1, but I have no idea which is true. Maybe it's $1-\frac1{e}$ . Note: I have modified this to not allow 1 as a divisor.
Every even number has consecutive factors: $1$ and $2$ . No odd number has, because all its factors are odd. The probability is $1/2$ .
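The claim — with $1$ allowed as a factor, as in this answer (the question's final note excludes it, which changes the problem) — is easy to confirm by brute force:

```python
def has_consecutive_factors(n):
    """True if some d >= 1 satisfies d | n and (d+1) | n."""
    return any(n % d == 0 and n % (d + 1) == 0 for d in range(1, n))

# Exactly the even numbers qualify (via d = 1), so the density is 1/2.
hits = [n for n in range(2, 1001) if has_consecutive_factors(n)]
assert hits == list(range(2, 1001, 2))
```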
{ "source": [ "https://math.stackexchange.com/questions/3038712", "https://math.stackexchange.com", "https://math.stackexchange.com/users/13079/" ] }
3,038,718
If we have a function $w=f(x,y)$ and we change the variables using polar coordinates $x=r\cos\theta$ and $y=r\sin\theta$ we get a new function $f(r,\theta)$ . In order to find the partial derivative $$\frac{\partial w}{\partial x}=\frac{\partial w}{\partial r}\frac{\partial r}{\partial x}+\frac{\partial w}{\partial \theta}\frac{\partial \theta}{\partial x}$$ we use the inverse mapping $$r=(x^2+y^2)^{1/2}$$ and $$\theta=\tan^{-1}\frac{y}{x}$$ and we get $$\frac{\partial r}{\partial x}=\frac{\partial}{\partial x}(x^2+y^2)^{1/2}=\cos\theta$$ etc... Why do we have to use the inverse mapping in order to find the derivative? Why can't we simply do $$x=r\cos\theta\Rightarrow r=\frac{x}{\cos\theta}$$ and then differentiate that? Doing so is a mistake but why?
{ "source": [ "https://math.stackexchange.com/questions/3038718", "https://math.stackexchange.com", "https://math.stackexchange.com/users/134044/" ] }
3,039,113
In general, the intersection of subgroups/subrings/subfields/sub(vector)spaces will still be subgroups/subrings/subfields/sub(vector)spaces. However, the union will (generally) not be. Is there a "deep" reason for this?
I wouldn't call it "deep", but here's an intuitive reasoning. Intersections have elements that come from both sets, so they have the properties of both sets. If, for each of the component sets, there is some element(s) guaranteed to exist within that set, then such element(s) must necessarily exist in the intersection. For example, if $A$ and $B$ are closed under addition, then any pair of elements $x,y\in A\cap B$ is in each of $A$ and $B$ , so the sum $x+y$ must be in each of $A$ and $B$ , and so $x+y\in A\cap B$ . This line of reasoning holds for basically any "structure" property out there, simply by virtue of the fact that all elements come from a collection of sets that simultaneously have that property. Unions, on the other hand, have some elements from only one set or the other. In a sense, these elements only have one piece of the puzzle, i.e. they only have the properties of one set rather than both. Even if the statement of those properties is the same, like "closure under addition", the actual mechanics of those properties is different from set to set, and may not be compatible. Given $x\in A$ and $y\in B$ , we have $x,y\in A\cup B$ , but there's no reason to believe that $x+y \in A\cup B$ . Sometimes it's simply not true, such as $\Bbb{N}\cup i\Bbb{N}$ , where $i\Bbb{N} = \{ z \in \Bbb{C} \ | \ z = in \text{ for some } n\in\Bbb{N} \}$ . In this case, the closure under addition which is guaranteed for each of the component sets is not compatible with one another, so you get sums like $1+i$ which isn't in either set. On the other hand, sometimes you do have sets with compatible structure, such as $\Bbb{N}\cup -\Bbb{N}$ (considering $0\in\Bbb{N}$ ), where any sum of elements from this union still lies in the union.
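A finite analogue of the $\Bbb{N}\cup i\Bbb{N}$ example is easy to check by machine. Here is a tiny sketch (my own example, not from the answer) with two subgroups of $\Bbb{Z}_{12}$: their intersection stays closed under addition, their union does not.

```python
def closed_under_addition(subset, modulus):
    return all((a + b) % modulus in subset for a in subset for b in subset)

H1 = {0, 2, 4, 6, 8, 10}   # multiples of 2 in Z_12
H2 = {0, 3, 6, 9}          # multiples of 3 in Z_12

assert closed_under_addition(H1 & H2, 12)        # {0, 6}: still a subgroup
assert not closed_under_addition(H1 | H2, 12)    # 2 + 3 = 5 lies in neither
```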
{ "source": [ "https://math.stackexchange.com/questions/3039113", "https://math.stackexchange.com", "https://math.stackexchange.com/users/238417/" ] }
3,039,137
My question is about a discrete random variable $Y$ and some random variable $X$ , but if the expressions below also have meaning for continuous $Y$ , please give that case your attention too. What can I say about $E[E[X|Y]]$ ? I know that $E[X|Y]$ is a random variable, so it's not the trivial case of computing the expected value of just a number. And what about $E[E[X|Y]|Y]$ ? Does something like this have a meaning? If so, then for some function $g$ (for simplicity, assume $g$ has suitable range and is continuous), is it true to say that: $$E[g(Y)E[X|Y]|Y]=g(Y)E[E[X|Y]|Y]$$ because of a theorem that I have seen: $$E[g(Y)X|Y]=g(Y)E[X|Y]$$
{ "source": [ "https://math.stackexchange.com/questions/3039137", "https://math.stackexchange.com", "https://math.stackexchange.com/users/476043/" ] }
3,042,059
I am reading Christopher D. Manning's Foundations of Statistical Natural Language Processing , which gives an introduction to probability theory where it talks about $\sigma$ -fields. It says: "The foundations of probability theory depend on the set of events $\mathscr{F}$ forming a $\sigma$ -field." I understand the definition of a $\sigma$ -field, but what are these foundations of probability theory, and how are these foundations dependent upon a $\sigma$ -field?
Probability when there are only finitely many outcomes is a matter of counting. There are $36$ possible results from a roll of two dice and $6$ of them sum to $7$ so the probability of a sum of $7$ is $6/36$ . You've measured the size of the set of outcomes that you are interested in. It's harder to make rigorous sense of things when the set of possible results is infinite. What does it mean to choose two numbers at random in the interval $[1,6]$ and ask for their sum? Any particular pair, like $(1.3, \pi)$ , will have probability $0$ . You deal with this problem by replacing counting with integration. Unfortunately, the integration you learn in first year calculus ("Riemann integration") isn't powerful enough to derive all you need about probability. (It is enough to show that the probability that your two rolls total exactly $7$ is $0$ , and to find the probability that the total is at least $7$ .) For the definitions and theorems of rigorous probability theory (those are the "foundations" you ask about) you need "Lebesgue integration". That requires first carefully specifying the sets whose probabilities you are going to ask about - and not every set is allowed, for technical reasons without which you can't make the mathematics work the way you want. It turns out that the collection of sets whose probability you are going to ask about carries the name " $\sigma$ -field " or "sigma algebra". (It's not a field in the arithmetic sense.) The essential point is that it's closed under countable set operations. That's what the " $\sigma$ " says. Your text may not provide a formal definition - you may not need it for NLP applications.
{ "source": [ "https://math.stackexchange.com/questions/3042059", "https://math.stackexchange.com", "https://math.stackexchange.com/users/87558/" ] }
3,042,183
I have a feeling this will be a duplicate question. I have had a look around and couldn't find it, so please advise if so. Here I wish to address the definite integral: \begin{equation} I = \int_{0}^{\infty} \frac{e^{-x^2}}{x^2 + 1}\:dx \end{equation} I have solved it using Feynman's Trick, however I feel it's limited and am hoping to find other methods to solve it. Without using residues, what are some other approaches to this integral? My method: \begin{equation} I(t) = \int_{0}^{\infty} \frac{e^{-tx^2}}{x^2 + 1}\:dx \end{equation} Here $I = I(1)$ and $I(0) = \frac{\pi}{2}$ . Take the derivative under the integral sign with respect to $t$ to achieve: \begin{align} I'(t) &= \int_{0}^{\infty} \frac{-x^2e^{-tx^2}}{x^2 + 1}\:dx = -\int_{0}^{\infty} \frac{x^2e^{-tx^2}}{x^2 + 1}\:dx \\ &= -\left[\int_{0}^{\infty} \frac{\left(x^2 + 1 - 1\right)e^{-tx^2}}{x^2 + 1}\:dx \right] \\ &= -\int_{0}^{\infty} e^{-tx^2}\:dx + \int_{0}^{\infty} \frac{e^{-tx^2}}{x^2 + 1}\:dx \\ &= -\frac{\sqrt{\pi}}{2}\frac{1}{\sqrt{t}} + I(t) \end{align} And so we arrive at the differential equation: \begin{equation} I'(t) - I(t) = -\frac{\sqrt{\pi}}{2}\frac{1}{\sqrt{t}} \end{equation} Which, with $I(0) = \frac{\pi}{2}$ , yields the solution: \begin{equation} I(t) = \frac{\pi}{2}e^t\operatorname{erfc}\left(\sqrt{t}\right) \end{equation} Thus, \begin{equation} I = I(1) = \int_{0}^{\infty} \frac{e^{-x^2}}{x^2 + 1}\:dx = \frac{\pi}{2}e\operatorname{erfc}(1) \end{equation} Addendum: Using the exact method I've employed, you can extend the above integral into a more generalised form: \begin{equation} I = \int_{0}^{\infty} \frac{e^{-kx^2}}{x^2 + 1}\:dx = \frac{\pi}{2}e^k\operatorname{erfc}(\sqrt{k}) \end{equation} Addendum 2: Whilst we are generalising: \begin{equation} I = \int_{0}^{\infty} \frac{e^{-kx^2}}{ax^2 + b}\:dx = \frac{\pi}{2b}e^\Phi\operatorname{erfc}(\sqrt{\Phi}) \end{equation} Where $\Phi = \frac{kb}{a}$ and $a,b,k \in \mathbb{R}^{+}$
You can use Plancherel's theorem. Note that $$ 2I = \int_{-\infty}^{\infty} \frac{e^{-x^2}}{x^2 + 1}dx. $$ Let $f(x) = e^{-x^2}$ and $g(x) = \frac{1}{1+x^2}$ . Then we have $$ \widehat{f}(\xi) = \sqrt{\pi}e^{-\pi^2\xi^2}, $$ and $$ \widehat{g}(\xi) = \pi e^{-2\pi|\xi|}. $$ By Plancherel's theorem, we have $$\begin{eqnarray} \int_{-\infty}^{\infty} f(x)g(x)dx&=&\int_{-\infty}^{\infty} \widehat{f}(\xi)\widehat{g}(\xi)d\xi\\&=&\pi^{\frac{3}{2}}\int_{-\infty}^{\infty}e^{-\pi^2\xi^2-2\pi|\xi|}d\xi\\ &=&2\pi^{\frac{3}{2}}\int_{0}^{\infty}e^{-\pi^2\xi^2-2\pi\xi}d\xi\\ &=&2\pi^{\frac{3}{2}}e\int_{\frac{1}{\pi}}^{\infty}e^{-\pi^2\xi^2}d\xi\\ &=&2\pi^{\frac{1}{2}}e\int_{1}^{\infty}e^{-\xi^2}d\xi = \pi e \operatorname{erfc}(1). \end{eqnarray}$$ This gives $I = \frac{\pi}{2}e \operatorname{erfc}(1).$
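As a numerical sanity check on the closed form (not part of the proof), composite Simpson's rule on a truncated interval is enough, since the integrand decays like $e^{-x^2}$:

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Truncating at x = 10 is harmless: the discarded tail is below e^(-100).
numeric = simpson(lambda x: math.exp(-x * x) / (x * x + 1.0), 0.0, 10.0)
closed_form = 0.5 * math.pi * math.e * math.erfc(1.0)

assert abs(numeric - closed_form) < 1e-9     # both ~ 0.6716
```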
{ "source": [ "https://math.stackexchange.com/questions/3042183", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ] }
3,047,389
I'm brushing up on some higher level maths for a programming project. I was going along and then I realized that I have absolutely no idea how square roots can be computed. I mean sure, I've memorized a lot of perfect squares. But I wouldn't be able to get an exact answer from some arbitrary number like 432,434, would I? I Googled how to calculate square roots and everything always points to it basically being based on algorithms which have a degree of error because they're more or less guesses. I can't seem to find out why it's impossible to get an exact square root though. Like why can't you plug in $x$ to a function $f(x)$ and get the exact square root? Very curious about this.
The square roots of integers that are not perfect squares are irrational numbers, which means that they have an infinite, non-repeating decimal expansion. So it would be a little tedious to write down the exact value... Square roots can be computed, among other ways, by the so-called Heron's method (known in antiquity), which is of the iterative "guess and refine" type, but converges extremely fast.
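Heron's method is only a few lines of code; a sketch (the stopping rule is my own choice):

```python
import math

def heron_sqrt(s, tol=1e-12):
    """Approximate sqrt(s) by Heron's iteration x <- (x + s/x) / 2."""
    if s < 0:
        raise ValueError("negative input")
    if s == 0:
        return 0.0
    x = s if s >= 1 else 1.0          # any positive starting guess works
    while abs(x * x - s) > tol * s:   # stop when x*x is relatively close to s
        x = (x + s / x) / 2
    return x

r = heron_sqrt(432434)                # the question's "arbitrary" example
print(r)                              # ~ 657.597..., and r*r recovers 432434
```

Each iteration roughly doubles the number of correct digits, which is why the "guessing" converges so quickly in practice.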
{ "source": [ "https://math.stackexchange.com/questions/3047389", "https://math.stackexchange.com", "https://math.stackexchange.com/users/564548/" ] }
3,047,976
In our analysis course, we just defined the following: Let $g := (g_1, \ldots, g_n) \colon [a, b] \to \mathbb{R}^n$ , where $g_1, \ldots, g_n \colon [a,b] \to \mathbb{R}$ are integrable. Then we call the integral of $g$ over $[a,b]$ \begin{equation*} \int_{a}^{b} g(t) \ \text{dt} := \begin{pmatrix} \int_{a}^{b} g_1(t) \ \text{dt} \\ \vdots \\ \int_{a}^{b} g_n(t) \ \text{dt} \end{pmatrix}. \end{equation*} I came across this definition again at the beginning of measure theory, when we stated: We ultimately want to integrate functions $\mathbb{R}^n \to \mathbb{R}^m$ , but because of the above definition we can, without loss of generality, restrict ourselves to the case $m = 1$ . My Question What is the intuition behind this definition, why does it make sense, if you will, ''on a deeper level''?
The Riemann integral, say, is based on sums. The sums of vectors are defined component-wise. And all norms on $\mathbb R^n$ are topologically equivalent. Therefore, this is exactly what you would end up with anyway for Riemann integrals if you defined them by analogy instead of component-wise explicitly.
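The definition is easy to mirror in code: integrate each component with the same one-dimensional rule. A sketch (Simpson's rule is my arbitrary choice of quadrature), using $g(t) = (\cos t, \sin t)$ on $[0, \pi/2]$, whose component-wise integral is $(1, 1)$:

```python
import math

def integrate_vector(g, a, b, n=10_000):
    """Component-wise composite Simpson's rule for g: [a, b] -> R^m."""
    h = (b - a) / n
    m = len(g(a))
    total = [g(a)[j] + g(b)[j] for j in range(m)]
    for i in range(1, n):
        w = 4 if i % 2 else 2
        gi = g(a + i * h)
        for j in range(m):
            total[j] += w * gi[j]
    return [t * h / 3 for t in total]

result = integrate_vector(lambda t: (math.cos(t), math.sin(t)), 0.0, math.pi / 2)
print(result)        # ~ [1.0, 1.0]
```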
{ "source": [ "https://math.stackexchange.com/questions/3047976", "https://math.stackexchange.com", "https://math.stackexchange.com/users/545914/" ] }
3,048,245
This is a variant of Prime number building game . Player $A$ begins by choosing a single-digit prime number. Player $B$ then appends any digit to that number such that the result is still prime, and players alternate in this fashion until one player loses by being unable to form a prime. For instance, play might proceed as follows: $A$ chooses 5 $B$ chooses 3, forming 53 $A$ loses since there are no primes of the form 53x Is there a known solution to this game? It seems like I might be able to try a programmatic search...or might some math knowledge help here?
The game is trivial to brute force; there just aren't very many possibilities. Assuming I have not made a mistake brute-forcing it by hand (with the aid of a computer to test for primality), the second player to move can win via the following strategy (this is not the only winning strategy): If the first player starts with $2$ , move to $29$ , and then all the moves are forced until you win at $29399999$ If the first player starts with $3$ , move to $37$ . If they then move to $373$ , move to $3733$ and you will win no matter what (at either $37337999$ or $373393$ ). If they instead move to $379$ , you move to either $3793$ or $3797$ and win immediately. If the first player starts with $5$ , move to $53$ and win. If the first player starts with $7$ , move to $71$ and then every move is forced until you win at $719333$ . As a heuristic for why it should not be surprising that the game is so limited, note that by the prime number theorem, there are about $\frac{N}{\log N}$ primes less than $N$ , so the probability of a random $n$ -digit number being prime is about $\frac{1}{\log(10^n)}=\frac{1}{n\log(10)}$ . Assuming that the primality of a number is independent from the primality of a number obtained by adding a digit at the end (which seems like a reasonable heuristic assumption), this gives that there are about $\frac{10}{\log(10)}$ $1$ -digit numbers that are valid positions in this game, and then $\frac{10}{\log(10)}\cdot\frac{10}{2\log(10)}$ $2$ -digit numbers, and in general $\frac{10^n}{n!\log(10)^n}$ $n$ -digit numbers. Adding up all the valid positions (including the empty string at the start) gives about $$\sum_{n=0}^\infty\frac{10^n}{n!\log(10)^n}=e^{10/\log(10)}\approx 77$$ total positions. In fact, this heuristic estimate is not far from the actual value, which is $84$ .
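Given how small the game tree is, the brute force is easy to reproduce. A sketch with plain trial division (positions never exceed eight digits, so this is fast); `mover_wins` is my own naming:

```python
from functools import lru_cache

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

@lru_cache(maxsize=None)
def mover_wins(p):
    """True if the player about to append a digit to prime p can force a win."""
    children = [10 * p + d for d in range(10) if is_prime(10 * p + d)]
    if not children:
        return False                  # no legal move: the mover loses
    return any(not mover_wins(c) for c in children)

# Whichever starting prime A picks, B (the next mover) can force a win:
assert all(mover_wins(p) for p in (2, 3, 5, 7))
# ...and 53 is indeed a winning reply to 5: the next mover is stuck.
assert not mover_wins(53)
```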
{ "source": [ "https://math.stackexchange.com/questions/3048245", "https://math.stackexchange.com", "https://math.stackexchange.com/users/227485/" ] }
3,049,428
This problem was brought up by my mother from a corporate party along with a question on how it worked. There was a showman who asked the audience to tell him a number from $10$ to $99$ (if I'm not mistaken). The number $83$ was named, after which he took a piece of paper and quickly put down a matrix (note he did that fast): $$ \begin{bmatrix} 8 & 11 & 63 & 1\\ 62 & 2 & 7 & 12\\ 3 & 65 & 9 & 6 \\ 10 & 5 & 4 & 64 \end{bmatrix} $$ If you take a closer look, every row, column and diagonal has the sum of $83$ . Moreover, the four corners of the matrix also sum to $83$ , as do $2\times 2$ blocks such as the top-left one: $$ \begin{bmatrix} \color\red{8} & \color\red{11} & 63 & 1\\ \color\red{62} & \color\red{2} & 7 & 12\\ 3 & 65 & 9 & 6 \\ 10 & 5 & 4 & 64 \end{bmatrix} $$ Also the central square is $83$ in sum as well: $$ \begin{bmatrix} 8 & 11 & 63 & 1\\ 62 & \color\red{2} & \color\red{7} & 12\\ 3 & \color\red{65} & \color\red{9} & 6 \\ 10 & 5 & 4 & 64 \end{bmatrix} $$ Clearly the numbers $1,2,3,4,5,6,7,8,9,10,11,12$ are filled in in a circular manner, and then the consecutive numbers $62, 63, 64, 65$ are as well. I'm not very familiar with linear algebra, so my questions are: What rule did he use to build it? Can we construct a matrix with the same properties given a random number in some range? Is it possible to build a similar one for a $5\times 5$ , $6\times 6$ or $N\times N$ matrix?
This is all based on the amazing matrix $$\pmatrix{8&11&14&1\\13&2&7&12\\3&16&9&6\\10&5&4&15}$$ Note that this works for $34$ . For $34+n$ you just replace $13,14,15,16$ with $13+n,14+n,15+n,16+n$ . I do not know what the showman's backup plan for numbers below $34$ is. EDIT: There is a Numberphile video on YouTube on this exact trick.
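The base matrix above makes the construction mechanical. A quick sketch (function names are my own): shift the four largest entries $13,14,15,16$ by $n-34$ , then check every sum the showman exploited.

```python
BASE = [[8, 11, 14, 1],
        [13, 2, 7, 12],
        [3, 16, 9, 6],
        [10, 5, 4, 15]]

def square_for(n):
    """Square with magic sum n (for n >= 34): shift the entries 13..16 by n - 34."""
    d = n - 34
    return [[v + d if v >= 13 else v for v in row] for row in BASE]

def all_sums(m):
    rows = [sum(r) for r in m]
    cols = [sum(c) for c in zip(*m)]
    diags = [sum(m[i][i] for i in range(4)), sum(m[i][3 - i] for i in range(4))]
    corner_block = m[0][0] + m[0][1] + m[1][0] + m[1][1]  # top-left 2x2 block
    center_block = m[1][1] + m[1][2] + m[2][1] + m[2][2]  # central 2x2 block
    return rows + cols + diags + [corner_block, center_block]

m = square_for(83)
print(m[0])              # [8, 11, 63, 1] -- the showman's first row
print(set(all_sums(m)))  # {83}
```

This works because every row, column, diagonal and highlighted $2\times 2$ block of the base matrix contains exactly one of $13,14,15,16$ , so each sum increases by exactly $n-34$ .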
{ "source": [ "https://math.stackexchange.com/questions/3049428", "https://math.stackexchange.com", "https://math.stackexchange.com/users/53017/" ] }
3,049,628
On the calculus exam, we were given the function $$f(x) = \frac{2x^2+x-1}{x^2-1} $$ and asked to find the intervals on which the function is increasing and decreasing. In the post-exam solutions PDF, it was written that $f(x)$ is increasing on $$(-\infty; -1), (-1; 1), (1; +\infty)$$ and it was specifically noted that writing the interval of increase as $$(-\infty; -1) \cup (-1; 1) \cup (1; +\infty)$$ is wrong and unacceptable; however, I fail to understand why. So, why is the union-sign notation incorrect?
The function is decreasing (not increasing, as you said) on each of those three intervals separately. But it is not decreasing on their union because, for example, $f(5)>f(0)$ . Consider a similar case: my bank balance is decreasing from December 9 through December 14, and from December 16 through December 21. Is it therefore decreasing on the entire interval? No, because on December 15 I received my salary, so my bank balance increased sharply between December 14 and 16.
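A few sample values make this concrete (a quick sketch; the test points are mine):

```python
def f(x):
    return (2 * x * x + x - 1) / (x * x - 1)

# decreasing within the middle interval (-1, 1)...
print(f(-0.5), f(0), f(0.5))  # 1.333..., 1.0, 0.0

# ...yet not decreasing on the union: crossing x = 1 the value jumps back up
print(f(0) < f(5))  # True, since f(0) = 1.0 and f(5) = 2.25
```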
{ "source": [ "https://math.stackexchange.com/questions/3049628", "https://math.stackexchange.com", "https://math.stackexchange.com/users/404827/" ] }
3,050,696
The following integral was proposed by Cornel Ioan Valean and appeared as Problem $12054$ in the American Mathematical Monthly earlier this year. Prove $$\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx=\frac{\pi^3}{16}$$ I made some small attempts at it, such as writing: $$I=\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx\overset{ x\to \tan \frac{x}{2}}=-\frac12 {\int_0^\frac{\pi}{2}\frac{x\ln(1-\sin x)}{\sin x} dx}$$ And with Feynman's trick we obtain: $$J(t)=\int_0^\frac{\pi}{2} \frac{x\ln(1-t\sin x)}{\sin x}dx\Rightarrow J'(t)=\int_0^\frac{\pi}{2} \frac{x}{1-t\sin x}dx$$ But I don't see a way to obtain a closed form for the above integral. Also from here we have the following relation: $$\int_0^1 \frac{\arctan x \ln(1+x^2)}{x} dx =\frac23 \int_0^1 \frac{\arctan x \ln(1+x)}{x}dx$$ Thus we can rewrite the integral as: $$I=\frac23 \int_0^1 \frac{\arctan x \ln(1+x)}{x}dx -2\int_0^1 \frac{\arctan x \ln(1-x)}{x}dx$$ Another option might be to rewrite: $$\ln\left(\frac{1+x^2}{(1-x)^2}\right)= \ln\left(\frac{1+x}{1-x}\right)+\ln\left(\frac{1+x^2}{1-x^2}\right)$$ $$\Rightarrow I= \int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x}{1-x}\right)dx+\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{1-x^2}\right)dx$$ And now to use the power expansion of the log functions to obtain: $$\small I=\sum_{n=0}^\infty \frac{2}{2n+1}\int_0^1 \frac{\arctan x}{x} \, \left(x^{2n+1}+x^{4n+2}\right)dx=\sum_{n=0}^\infty \frac{2}{2n+1}\int_0^1\int_0^1 \frac{\left(x^{2n+1}+x^{4n+2}\right)}{1+y^2x^2}dydx$$ This seems like an awesome integral and I would like to learn more, so I am searching for more approaches. Would any of you who have already solved it and submitted the answer to the AMM, or who know how to solve this integral, kindly share the solution here? Edit: In the meantime I found a nice solution by Roberto Tauraso here and another impressive approach due to Yaghoub Sharifi here .
Another approach: perform integration by parts, \begin{align*} I&=\int_0^1 \frac{\arctan x}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)\,dx\\ &=\Big[\ln (x) \ln\left(\frac{1+x^2}{(1-x)^2}\right)\arctan x\Big]_0^1 -\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\int_0^1 \frac{2(1+x)\ln (x)\arctan (x)}{(1-x)(1+x^2)}dx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-2\int_0^1 \frac{(1+x)\ln (x)\arctan (x)}{(1-x)(1+x^2)}dx\\ \end{align*} For $x\in [0;1]$ define the function $R$ by, \begin{align*} R(x)=\int_0^x \frac{(1+t)\ln t}{(1-t)(1+t^2)}dt=\int_0^1 \frac{x(1+tx)\ln (tx)}{(1-tx)(1+t^2x^2)}dt\\ \end{align*} Observe that, \begin{align*} R(1)=\int_0^1 \frac{t\ln t}{1+t}dt+\int_0^1 \frac{\ln t}{1-t}dt \end{align*} Perform integration by parts, \begin{align*} I&=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-2\Big[R(x)\arctan x\Big]_0^1+2\int_0^1\frac{R(x)}{1+x^2}dx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\frac{\pi}{2}R(1)+2\int_0^1 \int_0^1 \frac{x(1+tx)\ln (tx)}{(1-tx)(1+t^2x^2)(1+x^2)}dtdx\\ &=-\int_0^1 \frac{\ln x}{1+x^2}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx-\frac{\pi}{2}R(1)+\int_0^1 \ln x\left[\frac{1}{1+x^2}\ln\left(\frac{1+t^2x^2}{(1-tx)^2}\right)\right]_{t=0}^{t=1} dx+\\ &\int_0^1 \ln t\left[\frac{1}{1+t^2}\ln\left(\frac{1+x^2}{(1-tx)^2}\right)+\frac{2\arctan (tx)}{1-t^2}-\frac{2t\arctan x}{1+t^2}-\frac{2t\arctan x}{1-t^2}\right]_{x=0}^{x=1} dt\\ &=-\frac{\pi }{2}R(1)+\ln 2\int_0^1 \frac{\ln t}{1+t^2}dt-2\int_0^1 \frac{\ln (1-t)\ln t}{1+t^2}dt+2\int_0^1 \frac{\ln t\arctan t}{1-t^2}dt-\\ &\frac{\pi}{2} \int_0^1 \frac{t\ln t}{1+t^2}dt-\frac{\pi}{2} \int_0^1\frac{t\ln t}{1-t^2} dt\\ \end{align*} For $x\in [0;1]$ define the function $S$ by, \begin{align*} S(x)=\int_0^x \frac{\ln t}{1-t^2}dt=\int_0^1 \frac{x\ln(tx)}{1-t^2x^2} dt \end{align*} Perform integration by parts, \begin{align*} \int_0^1 \frac{\ln x\arctan x}{1-x^2}dx&=\Big[S(x)\arctan x\Big]_0^1-\int_0^1 \frac{S(x)}{1+x^2}dx\\ &=\frac{\pi}{4}S(1)-\int_0^1 \int_0^1 \frac{x\ln(tx)} {(1-t^2x^2)(1+x^2)} dtdx\\ &=\frac{\pi}{4}S(1)-\frac{1}{2}\int_0^1 \left[ \frac{\ln x}{1+x^2}\ln\left(\frac{1+tx}{1-tx} \right)\right]_{t=0}^{t=1} dx-\\ &\frac{1}{2}\int_0^1 \left[ \frac{\ln t}{1+t^2}\ln\left(\frac{1+x^2}{1-t^2x^2} \right)\right]_{x=0}^{x=1}dt\\ &=\frac{\pi}{4}S(1)-\frac{\ln 2}{2}\int_0^1 \frac{\ln t}{1+t^2}dt+\int_0^1 \frac{\ln(1-x)\ln x}{1+x^2}dx \end{align*} Therefore, \begin{align*}I&=\pi\int_0^1\frac{2t\ln t}{t^4-1} dt\end{align*} Perform the change of variable $y=t^2$ , \begin{align*}I&=\frac{1}{2}\pi \int_0^1 \frac{\ln y}{y^2-1}dy\\ &=\frac{1}{2}\pi\times \frac{3}{4}\zeta(2)\\ &=\frac{\pi^3}{16} \end{align*}
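The closed form is easy to sanity-check numerically. A midpoint-rule sketch in pure Python (the grid size is my choice; the logarithmic singularity at $x=1$ is integrable, so a fine uniform grid already agrees to several digits — this is a check, not a proof):

```python
import math

def integrand(x):
    return math.atan(x) / x * math.log((1 + x * x) / (1 - x) ** 2)

N = 200_000  # midpoints avoid the endpoints x = 0 and x = 1 entirely
total = sum(integrand((i + 0.5) / N) for i in range(N)) / N
print(total, math.pi ** 3 / 16)  # both approximately 1.9379
```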
{ "source": [ "https://math.stackexchange.com/questions/3050696", "https://math.stackexchange.com", "https://math.stackexchange.com/users/515527/" ] }
3,050,984
I understand the fundamentals of algebra, but have very limited knowledge of geometry and trigonometry. I wish to learn calculus at this point. Is it reasonable to begin learning calculus and pick up these other concepts as I encounter them, rather than learning the prerequisites in the standard fixed progression (i.e. Algebra I -> Algebra II -> Geometry -> Trigonometry, etc.)? How difficult will it be to learn these concepts without the prescribed linear progression?
Here's a description of how Peter Scholze (a Fields medalist) learns math: At 16, Scholze learned that a decade earlier Andrew Wiles had proved the famous 17th-century problem known as Fermat’s Last Theorem, which says that the equation $x^n + y^n = z^n$ has no nonzero whole-number solutions if $n$ is greater than two. Scholze was eager to study the proof, but quickly discovered that despite the problem’s simplicity, its solution uses some of the most cutting-edge mathematics around. “I understood nothing, but it was really fascinating,” he said. So Scholze worked backward, figuring out what he needed to learn to make sense of the proof. “To this day, that’s to a large extent how I learn,” he said. “I never really learned the basic things like linear algebra, actually — I only assimilated it through learning some other stuff.” So, I'd say that this backtracking approach to learning math is fine if you enjoy it and it comes naturally to you. But be sure to aim for true, deep understanding of the topics you're interested in rather than rote memorization.
{ "source": [ "https://math.stackexchange.com/questions/3050984", "https://math.stackexchange.com", "https://math.stackexchange.com/users/556557/" ] }
3,052,273
If this question is ill-defined or is otherwise of poor quality, then I'm sorry. What, if anything , is the sum of all complex numbers? If anything at all, it's an uncountable sum, that's for sure. I'm guessing some version of the Riemann series theorem would mean that there is no such thing as the sum of complex numbers, although - and I hesitate to add this - I would imagine that $$\sum_{z\in\Bbb C}z=0\tag{$\Sigma$}$$ is, in some sense, what one might call the "principal value" of the sum. For all $w\in\Bbb C$ , we have $-w\in\Bbb C$ with $w+(-w)=0$ , so, if we proceed naïvely, we could say that we are summing $0$ infinitely many times ${}^\dagger$ , hence $(\Sigma)$ . We have to make clear what we mean by "sum", though, since, of course, with real numbers, one can define all sorts of different types of infinite sums. I'm at a loss here. Has this sort of thing been studied before? I'd be surprised if not. Please help :) $\dagger$ I'm well aware that this is a bit too naïve. It's not something I take seriously.
Traditionally, the sum of a sequence is defined as the limit of the partial sums; that is, for a sequence $\{a_n\}$ , $\sum{a_n}$ is that number $S$ so that for every $\epsilon > 0$ , there is an $N$ such that whenever $m > N$ , $|S - \sum_{n = 0}^ma_n| < \epsilon$ . There's no reason we can't define it like that for uncountable sequences as well: let $\mathfrak{c}$ be the cardinality of $\mathbb{C}$ , and let $\{a_{\alpha}\}$ be a sequence of complex numbers where the indices are ordinals less than $\mathfrak{c}$ . We define $\sum{a_{\alpha}}$ as that value $S$ so that for every $\epsilon > 0$ , there is a $\beta < \mathfrak{c}$ so that whenever $\gamma > \beta$ , $|S - \sum_{\alpha = 0}^{\gamma}a_{\alpha}| < \epsilon$ . Note that this requires us to recursively define transfinite summation, to make sense of that sum up to $\gamma$ . But here's the thing: taking $\epsilon$ to be $1$ , then $1/2$ , then $1/4$ , and so on, we get a sequence of "threshold" $\beta$ corresponding to each one; call $\beta_n$ the $\beta$ corresponding to $\epsilon = 1/2^n$ . This is a countable sequence. Inconveniently, $\mathfrak{c}$ has uncountable cofinality (by König's theorem): any countable increasing sequence of ordinals less than $\mathfrak{c}$ must be bounded strictly below $\mathfrak{c}$ . So that means there's some $\beta_{\infty}$ that's below $\mathfrak{c}$ but greater than every $\beta_n$ . But by definition, that means that all partial sums past $\beta_{\infty}$ are less than $1/2^n$ away from $S$ for every $n$ . So they must be exactly equal to $S$ . And that means that we must be only adding $0$ from that point forward. This is a well-known result that I can't recall a reference for: the only uncountable sequences that have convergent sums are those which consist of countably many nonzero terms followed by nothing but zeroes. In other words, there's no sensible way to sum over all of the complex numbers and get convergence.
{ "source": [ "https://math.stackexchange.com/questions/3052273", "https://math.stackexchange.com", "https://math.stackexchange.com/users/104041/" ] }
3,052,746
I want to solve $2x = \sqrt{x+3}$ , which I have tried as below: $$\begin{equation} 4x^2 - x -3 = 0 \\ x^2 - \frac14 x - \frac34 = 0 \\ x^2 - \frac14x = \frac34 \\ \left(x - \frac12 \right)^2 = 1 \\ x = \frac32 , -\frac12 \end{equation}$$ This, however, is incorrect. What is wrong with my solution?
You made a mistake when completing the square. $$x^2-\frac{1}{4}x = \frac{3}{4} \color{red}{\implies\left(x-\frac{1}{2}\right)^2 = 1}$$ This is easy to spot since $(a\pm b)^2 = a^2\pm2ab+b^2$ , which means the coefficient of the linear term becomes $-2\left(\frac{1}{2}\right) = -1 \color{red}{\neq -\frac{1}{4}}$ . This means something isn’t correct... Note that the equation is rewritten such that $a = 1$ , so you need to add $\left(\frac{b}{2}\right)^2$ to both sides and factor. (In other words, divide the coefficient of the linear term $x$ by $2$ and square the result, which will then be added to both sides.) $$b = -\frac{1}{4} \implies \left(\frac{b}{2}\right)^2 = \frac{1}{64}$$ This gives $$x^2-\frac{1}{4}x+\color{blue}{\frac{1}{64}} = \frac{3}{4}+\color{blue}{\frac{1}{64}}$$ Factoring the perfect square trinomial yields $$\left(x-\frac{1}{8}\right)^2 = \frac{49}{64}$$ And you can probably take it on from here. Edit : As has been noted in the other answers (I should have clarified this as well), squaring introduces the possibility of extraneous solutions, so always check your solutions by plugging the values obtained into the original equation. By squaring, you’re solving $$4x^2 = x+3$$ which is actually $$2x = \color{blue}{\pm}\sqrt{x+3}$$ so your negative solution will satisfy this new equation but not the original one, since that one is $$2x = \sqrt{x+3}$$ with no $\pm$ .
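The whole procedure — square, solve the quadratic, then reject extraneous roots against the original equation — can be sketched as follows (variable names are mine):

```python
import math

# 2x = sqrt(x + 3)  -->  squared: 4x^2 - x - 3 = 0
a, b, c = 4.0, -1.0, -3.0
disc = b * b - 4 * a * c
candidates = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1.0, -1.0)]
print(candidates)  # [1.0, -0.75], matching x = 1/8 +/- 7/8

# keep only roots of the ORIGINAL (unsquared) equation
solutions = [x for x in candidates
             if x + 3 >= 0 and math.isclose(2 * x, math.sqrt(x + 3))]
print(solutions)   # [1.0]; x = -3/4 only solves 2x = -sqrt(x + 3)
```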
{ "source": [ "https://math.stackexchange.com/questions/3052746", "https://math.stackexchange.com", "https://math.stackexchange.com/users/389593/" ] }
3,052,775
Problem Denote $$f(x)=\int_x^{x+1}\cos t^2 {\rm d}t.$$ Prove $\lim\limits_{x \to +\infty}f(x)=0.$ Proof Assume $x>0$ . Making the substitution $t=\sqrt{u}$ , we have ${\rm d}t=\dfrac{1}{2\sqrt{u}}{\rm d}u$ , so $$f(x)=\int_x^{x+1}\cos t^2 {\rm d}t=\int_{x^2}^{(x+1)^2} \frac{\cos u}{2\sqrt{u}}{\rm d}u.$$ Since $u\mapsto \dfrac{1}{2\sqrt{u}}$ is positive and decreasing on $[x^2,(x+1)^2]$ , the second mean value theorem for integrals (Bonnet's form) supplies a $\xi$ with $x^2 \leq \xi\leq (x+1)^2$ such that $$f(x)=\frac{1}{2\sqrt{x^2}}\int_{x^2}^{\xi}\cos u\,{\rm d}u=\frac{\sin \xi-\sin x^2}{2x}.$$ (The first mean value theorem cannot be applied directly here, because $\cos u$ changes sign on the interval.) Therefore, $$0\leq |f(x)|\leq \frac{|\sin \xi|+|\sin x^2|}{2x}\leq \frac{1}{x}\to 0 \quad (x \to +\infty),$$ which implies $f(x)\to 0$ as $x \to +\infty$ , by the squeeze theorem.
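The bound $|f(x)|\le 1/x$ obtained above can be probed numerically. A midpoint-rule sketch (the step count is my choice; the grid must be fine because $\cos t^2$ oscillates with frequency roughly $2x$ on $[x,x+1]$):

```python
import math

def f(x, steps=100_000):
    # midpoint approximation of the integral of cos(t^2) over [x, x + 1]
    h = 1.0 / steps
    return h * sum(math.cos((x + (i + 0.5) * h) ** 2) for i in range(steps))

for x in (5.0, 20.0, 100.0):
    print(x, abs(f(x)), 1.0 / x)  # |f(x)| stays within the 1/x envelope
```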
{ "source": [ "https://math.stackexchange.com/questions/3052775", "https://math.stackexchange.com", "https://math.stackexchange.com/users/560634/" ] }