Coin sequence paradox from Martin Gardner's book: "An event less frequent in the long run is likely to happen before a more frequent event!" How can I show that THTH is more likely to turn up before HTHH, with probability 9/14, even though the expected waiting time of THTH is 20 while that of HTHH is 18? I would be very thankful if you could show me how to calculate the probability of turning up earlier, and the waiting time. Thank you!
Here is a nontrivial and somewhat advanced solution. Consider an idealized gambling game in which a dealer Alice tosses a fair coin repeatedly. After she has made her $(n-1)$-th toss (or just at the beginning of the game if $n = 1$), the $n$-th player Bob joins the game. He bets $2^0\$$ that the $n$-th coin is $T$. If he loses, he leaves the game. Otherwise he wins $2^1\$$, which he bets that the $(n+1)$-th coin is $H$. If he loses, he leaves the game. Otherwise he wins $2^2\$$, which he bets that the $(n+2)$-th coin is $T$. This goes on until he wins $2^4\$$ for $THTH$ and leaves the game, or the game stops. Now let $X^{(n)}_k$ be the random variable giving the $n$-th player's total winnings after the $k$-th coin toss. If we let $$X_n = X^{(1)}_n + \cdots + X^{(n)}_n,$$ then it is easy to see that $X_n - n$ is a martingale null at $0$. Thus if $S$ denotes the stopping time of the first occurrence of $THTH$, and if we assume that $\mathbb{E}[S] < \infty$, then $$0 = \mathbb{E}[X_{S} - S],$$ so we have $$\mathbb{E}[S] = \mathbb{E}[X_{S}] = 2^4 + 2^2 = 20.$$ Let $Y_n$ be the total winnings corresponding to $HTHH$, and $T$ be the stopping time of the first occurrence of $HTHH$. Then we have $$\mathbb{E}[T] = \mathbb{E}[Y_{T}] = 2^4 + 2^1 = 18.$$ Finally, let $U = S \wedge T$ be the minimum of $S$ and $T$. Then it is also a stopping time. Now let $p = \mathbb{P}(S < T)$ be the probability that $THTH$ precedes $HTHH$. Then $$ \mathbb{E}[U] = \mathbb{E}[X_{U}] = \mathbb{E}[X_{S}\mathbf{1}_{\{S < T\}}] + \mathbb{E}[X_{T}\mathbf{1}_{\{S > T\}}] = 20p + 0(1-p),$$ and likewise $$ \mathbb{E}[U] = \mathbb{E}[Y_{U}] = \mathbb{E}[Y_{S}\mathbf{1}_{\{S < T\}}] + \mathbb{E}[Y_{T}\mathbf{1}_{\{S > T\}}] = (2^3 + 2^1)p + 18(1-p).$$ Equating the two expressions for $\mathbb{E}[U]$ gives $20p = 10p + 18(1-p)$, and therefore $p = 9/14 \approx 64.2857 \%$.
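If it helps intuition, both claims can be sanity-checked by simulation. Below is a minimal Monte Carlo sketch in Python (not part of the martingale argument above); it estimates $\mathbb{P}(S<T)$ and the two expected waiting times directly from fair coin tosses.

```python
import random

def waiting_time(pattern, rng):
    """Toss a fair coin until `pattern` first appears; return the number of tosses."""
    s = ""
    while True:
        s += rng.choice("HT")
        if s.endswith(pattern):
            return len(s)

def thth_first(rng):
    """True if THTH appears strictly before HTHH in a fresh sequence of tosses."""
    s = ""
    while True:
        s += rng.choice("HT")
        if s.endswith("THTH"):
            return True
        if s.endswith("HTHH"):
            return False

rng = random.Random(0)
races = 100_000
p_hat = sum(thth_first(rng) for _ in range(races)) / races
waits = 20_000
e_thth = sum(waiting_time("THTH", rng) for _ in range(waits)) / waits
e_hthh = sum(waiting_time("HTHH", rng) for _ in range(waits)) / waits
print(p_hat)            # should be close to 9/14 ~ 0.6429
print(e_thth, e_hthh)   # should be close to 20 and 18
```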
Quadratic Diophantine equation in three variables How would one determine solutions to the following quadratic Diophantine equation in three variables: $$x^2 + n^2y^2 \pm n^2y = z^2$$ where $n$ is a known integer and $x$, $y$, and $z$ are unknown positive integers to be solved. Ideally there would be a parametric solution for $x$, $y$, and $z$. [Note that the expression $y^2 + y$ must be an integer from the series $\{2, 6, 12, 20, 30, 42, \ldots\}$ and so can be written as either $y^2 + y$ or $y^2 - y$ (e.g., 12 = $3^2 + 3$ and 12 = $4^2 - 4$). So I have written this as +/- in the equation above.] Thanks,
We will consider the more general equation: $X^2+qY^2+qY=Z^2$. Then, if we use the solutions of Pell's equation $p^2-(q+1)s^2=1$, solutions can be written in this form: $X=(-p^2+2ps+(q-1)s^2)L+qs^2$ $Y=2s(p-s)L+qs^2$ $Z=(p^2-2ps+(q+1)s^2)L+qps$ And more: $X=(p^2+2ps-(q-1)s^2)L-p^2-2ps-s^2$ $Y=2s(p+s)L-p^2-2ps-s^2$ $Z=(p^2+2ps+(q+1)s^2)L-p^2-(q+2)ps-(q+1)s^2$ Here $L$ is an arbitrary integer parameter, which we may choose freely.
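The formulas can be sanity-checked numerically. A minimal sketch, assuming the concrete Pell data $q=2$ with solutions $(p,s)=(2,1)$ and $(7,4)$ of $p^2-(q+1)s^2=1$ (chosen only for illustration):

```python
q = 2
for p, s in [(2, 1), (7, 4)]:
    assert p * p - (q + 1) * s * s == 1          # Pell equation p^2 - (q+1)s^2 = 1
    for L in range(-20, 21):
        # first parametric family from the answer above
        X = (-p * p + 2 * p * s + (q - 1) * s * s) * L + q * s * s
        Y = 2 * s * (p - s) * L + q * s * s
        Z = (p * p - 2 * p * s + (q + 1) * s * s) * L + q * p * s
        assert X * X + q * Y * Y + q * Y == Z * Z, (p, s, L)
print("X^2 + qY^2 + qY = Z^2 holds for all sampled parameters")
```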
Another residue theory integral I need to evaluate the following real convergent improper integral using residue theory (it is vital that I use residue theory, so other methods are not needed here). I also need to use the following contour (specifically a keyhole contour to exclude the branch cut): $$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$
Closed form for this type of integral: $$ \int_0^{\infty} x^{\alpha-1}Q(x)dx =\frac{\pi}{\sin(\alpha \pi)} \sum_{i=1}^{n} \,\text{Res}_i\big((-z)^{\alpha-1}Q(z)\big) $$ $$ I=\int_0^\infty \frac{\sqrt{x}}{x^3+1} dx \rightarrow \alpha-1=\frac{1}{2} \rightarrow \alpha=\frac{3}{2}$$ $$ g(z) =(-z)^{\alpha-1}Q(z) =\frac{(-z)^{\frac{1}{2}}}{z^3+1} =\frac{i \sqrt{z}}{z^3+1}$$ $$ z^3+1=0 \rightarrow \hspace{8mm }z^3=-1=e^{i \pi} \rightarrow \hspace{8mm }z_k=e^{i\frac{(2k+1) \pi}{3}} $$ $$z_k= \begin{cases} k=0 & z_1=e^{i \frac{\pi}{3}}=\frac{1}{2}+i\frac{\sqrt{3}}{2} \\ k=1 & z_2=e^{i \pi}=-1 \\k=2 & z_3=e^{i \frac{5 \pi}{3}}=\frac{1}{2}-i\frac{\sqrt{3}}{2} \end{cases}$$ $$R_1=\text{Residue}\big(g(z),z_1\big)=\frac{i \sqrt{z_1}}{(z_1-z_2)(z_1-z_3)}$$ $$R_2=\text{Residue}\big(g(z),z_2\big)=\frac{i \sqrt{z_2}}{(z_2-z_1)(z_2-z_3)}$$ $$R_3=\text{Residue}\big(g(z),z_3\big)=\frac{i \sqrt{z_3}}{(z_3-z_2)(z_3-z_1)}$$ $$ I=\frac{\pi}{\sin\left( \frac{3}{2} \pi\right)} (R_1+R_2+R_3) = \frac{\pi}{-1} \left(\frac{-1}{3}\right)=\frac{\pi}{3}$$
Matlab program:
syms x
f=sqrt(x)/(x^3+1);
int(f,0,inf)
ans = pi/3
Compute R1, R2, R3 with Matlab:
z1=exp(i*pi/3);
z2=exp(i*pi);
z3=exp(5*i*pi/3);
R1=i*sqrt(z1)/((z1-z2)*(z1-z3));
R2=i*sqrt(z2)/((z2-z1)*(z2-z3));
R3=i*sqrt(z3)/((z3-z2)*(z3-z1));
I=(-pi)*(R1+R2+R3);
Find $y$ to minimize $\sum (x_i - y)^2$ I have a finite set of numbers $X$. I want to minimize the following expression by finding the appropriate value for $y$: $$\sum\limits_{i=1}^n (x_i - y)^2$$
This is one of those problems where you just turn the crank and out pops the answer. The basic optimization technique of "set the derivative equal to zero and solve" to find critical points works in its simplest form without issue here. And as the others have mentioned, the special form of being quadratic allows you to apply the specialized techniques you've learned for dealing with quadratic equations.
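To make the crank-turning explicit: setting the derivative with respect to $y$ to zero gives $\frac{d}{dy}\sum_{i=1}^n (x_i - y)^2 = -2\sum_{i=1}^n (x_i - y) = 0$, i.e. $ny = \sum_{i=1}^n x_i$, so the unique critical point is the sample mean $y = \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$; the second derivative $2n > 0$ confirms it is the minimizer.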
Proof of greatest integer theorem: floor function is well-defined I have to prove that $$\forall x \in \mathbb{R},\exists !\,n \in \mathbb{Z} \text{ s.t. }n \leq x < n+1\;.$$ where $\exists !\,n $ means there exists a unique (exactly one) $n$. I'm done with proving that there is at least one such integer. I couldn't prove the "uniqueness" of the solution, and so I looked it up on the internet, and here's what I found: Let $\hspace{2mm}n,m \in \mathbb{Z} \text{ s.t. }n \leq x < n+1$ and $m \leq x < m+1$. Since $n \leq x \text{ and } -(m+1) < -x$, by adding both, $n + (-m-1) < (x-x) = 0$. And (some steps here) likewise, $(x-x) < n+m+1$. Now, can I add up inequalities like that, even when the book is about real analysis (and in which assumptions are supposed to be really minimal)? Or should I also prove those additions of inequalities? Thank you :D
The usual proof in the context of real analysis goes like this: Let $A= \{ n \in \mathbb Z : n \le x \}$. Then $A$ is not empty. Indeed, there is $n\in \mathbb N$ such that $n>-x$, because $\mathbb N$ is unbounded. But then $-n\in A$. Let $\alpha=\sup A$. Then there is $n\in A$ such that $\alpha-1<n\le\alpha$. But then $\alpha<n+1\le\alpha+1\le x+1$ and so $n\le x$ and $n+1\notin A$, that is, $n\le x < n+1$. If $m\in A$ then $m\le n$ because $m>n$ implies $m\ge n+1>x$. If $m<n$ then $m+1\le n\le x$ and $m$ cannot be a solution. Hence the solution is unique.
Name this paradox about most common first digits in numbers I remember hearing about a paradox (not a real paradox, more of a surprising oddity) about frequency of the first digit in a random number being most likely 1, second most likely 2, etc. This was for measurements of seemingly random things, and it didn't work for uniformly generated pseudorandom numbers. I also seem to recall there was a case in history of some sort of banking fraud being detected because the data, which was fudged, was not adhering to this law. It was also generalisable so that it didn't matter what base number system you used, the measurements would be distributed in an analogous way. I've googled for various things trying to identify it but I just can't find it because I don't know the right name to search for. I would like to read some more about this subject, so if anyone can tell me the magic terms to search for I'd be grateful, thanks!
It is Benford's Law
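For reference, Benford's Law predicts that the leading digit $d$ occurs with probability $\log_{10}(1 + 1/d)$, so 1 is most likely, then 2, and so on. A quick sketch of the resulting distribution:

```python
import math

for d in range(1, 10):
    print(d, round(math.log10(1 + 1 / d), 4))
# 1 -> 0.301, 2 -> 0.1761, 3 -> 0.1249, ..., 9 -> 0.0458
```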
Asymptotics of a solution Let $x(n)$ be the solution to the following equation $$ x=-\frac{\log(x)}{n} \quad \quad \quad \quad (1) $$ as a function of $n,$ where $n \in \mathbb N.$ How would you find the asymptotic behaviour of the solution, i.e. a function $f$ of $n$ such that there exist constants $A,B$ and $n_0\in\mathbb N$ so that it holds $$Af(n) \leq x(n) \leq Bf(n)$$ for all $n > n_0$ ?
Call $u_n:t\mapsto t\mathrm e^{nt}$, then $x(n)$ solves $u_n(x(n))=1$. For every $a$, introduce $$ x_a(n)=\frac{\log n}n-a\frac{\log\log n}n. $$ Simple computations show that, for every fixed $a$, $u_n(x_a(n))\cdot(\log n)^{a-1}\to1$ when $n\to\infty$. Thus, for every $a\gt1$, there exists some finite index $n(a)$ such that $x(n)\geqslant x_a(n)$ for every $n\geqslant n(a)$, and, for every $a\lt1$, there exists some finite index $n'(a)$ such that $x(n)\leqslant x_a(n)$ for every $n\geqslant n'(a)$. Finally, when $n\to\infty$, $$ nx(n)=\log n-\log\log n+o(\log\log n). $$ The assertion in your post holds with $f(n)=(\log n)/n$, $n_0=\max\{n(A),n'(B)\}$, $B=1$ and every $A\lt1$.
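A quick numerical sanity check of this asymptotic (a sketch, using scipy's bracketing root finder on $(0,1)$, where the solution lies):

```python
import math
from scipy.optimize import brentq

for n in (10, 100, 1000, 10**4, 10**5):
    x = brentq(lambda t: t + math.log(t) / n, 1e-12, 1.0)   # solves x = -log(x)/n
    print(n, n * x, math.log(n) - math.log(math.log(n)))    # n*x(n) vs log n - log log n
```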
Are $3$ and $11$ the only common prime factors in $\sum\limits_{k=1}^N k!$ for $N\geq 10$? The question was stimulated by this one. Here it comes: When you look at the sum $\sum\limits_{k=1}^N k!$ for $N\geq 10$, you'll always find $3$ and $11$ among the prime factors, due to the fact that $$ \sum\limits_{k=1}^{10}k!=3^2\times 11\times 40787. $$ Increasing $N$ will keep giving rise to the factors $3$ resp. $11$. Are $3$ and $11$ the only common prime factors in $\sum\limits_{k=1}^N k!$ for $N\geq 10$? I think one has to show that $\sum\limits_{k=1}^{N}k!$ has a factor of $N+1$, because the upcoming sum will then always share the factor $N+1$ as well. This happens for $$ \underbrace{1!+2!}_{\color{blue}{3}}+\color{blue}{3}! \text{ and } \underbrace{1!+2!+\cdots+10!}_{3^2\times \color{red}{11}\times 40787}+\color{red}{11}! $$
As pointed out in the comments, the case is trivial (at least if you know some theorems) once you fix $n>10$ rather than $n>p-1$ for a prime $p$. It follows from Wilson's Theorem that if the partial sum is a multiple of $p$ at index $n=p-2$, then it cannot be one at index $p-1$: adding $(p-1)!\equiv -1\pmod p$ pushes the sum out of being a multiple at that index. So any prime $p>12$ fails at one of the indices $p-2$ or $p-1$, both of which are $\geq 11$. That leaves us with $p<12$, and such a $p$ would have to divide the sum up to $(p-1)!$. Now $2$ is out, as the sum is odd; $5$ needs $1+2+6+24=33$ to be a multiple of $5$, and it isn't; lastly, $7$ needs $1+2+6+24+120+720=873$ to be a multiple of $7$, which would force $33$ to be a multiple of $7$ (since $873=840+33$), and it isn't.
Set of harmonic functions is locally equicontinuous (question reading in Trudinger / Gilbarg) I'm working through the book Elliptic Partial Differential Equations of Second Order by D. Gilbarg and N. S. Trudinger. Unfortunately I get stuck at some point. On page 23 they prove the following Theorem: Let $u$ be harmonic in $\Omega$ and let $\Omega'$ be any compact subset of $\Omega$. Then for any multi-index $\alpha$ we have $$\sup_{\Omega'}|D^\alpha u|\le \left(\frac{n|\alpha|}{d}\right)^{|\alpha|} \sup_{\Omega}|u|$$ where $d=\operatorname{dist}(\Omega',\partial\Omega)$. Now they conclude: An immediate consequence of the bound above is the equicontinuity on compact subdomains of the derivatives of any bounded set of harmonic functions. How could they conclude that? Let $\{u_i\}$ a family of of bounded harmonic functions: why are the $u_i$ equicontinuous on compact subdomains? Thanks for your help, hulik
If $\{u_i\}_{i\in \mathcal{I}}$ is a bounded family of harmonic functions defined in $\Omega$ (i.e., there exists $M\geq 0$ s.t. $|u_i(x)|\leq M$ for $x\in \Omega$) then inequality: $$\sup_{\Omega'}|D^\alpha u|\le \left(\frac{n|\alpha|}{d}\right)^{|\alpha|} \sup_{\Omega}|u|$$ with $|\alpha|=1$ implies: $$\sup_{\Omega'}|\nabla u_i|\le C(\Omega^\prime)\ \sup_{\Omega}|u_i| \leq C(\Omega^\prime)\ M$$ for each $i\in \mathcal{I}$ (here $C(\Omega^\prime)\geq 0$ is a suitable constant depending on $\Omega^\prime$). Therefore the family $\{u_i\}_{i\in \mathcal{I}}$ is equi-Lipschitz on each compact subdomain $\Omega^\prime \subseteq \Omega$, for: $$\forall i \in \mathcal{I},\quad |u_i(x)-u_i(y)|\leq C(\Omega^\prime)\ M\ |x-y|$$ for all $x,y\in \Omega^\prime$, and equi-continuity follows.
Proving an algebraic identity using the axioms of field I am trying to prove (based on the axioms of field) that $$a^3-b^3=(a-b)(a^2+ab+b^2)$$ So, my first thought was to use the distributive law to show that $$(a-b)(a^2+ab+b^2)=(a-b)\cdot a^2+(a-b)\cdot ab+(a-b)\cdot b^2$$ And then continuing from this point. My problem is that I'm not sure if the distributive law is enough to prove this identity. Any ideas? Thanks!
Indeed you need distributive, associative and commutative laws to prove your statement. In fact: $$\begin{split} (a-b)(a^2+ab+b^2) &= (a+(-b))a^2 +(a+(-b))ab+(a+(-b))b^2\\ &= a^3 +(- b)a^2+a^2b+(-b)(ab)+ab^2+(-b)b^2\\ &= a^3 - ba^2+a^2b - b(ab) +ab^2-b^3\\ &= a^3 - a^2b+a^2b - (ba)b +ab^2-b^3\\ &= a^3 -(ab)b +ab^2-b^3\\ &= a^3 -ab^2 +ab^2-b^3\\ &= a^3-b^3\; . \end{split}$$
closed form of a Cauchy (series) product I hope this hasn't been asked already, though I have looked around the site and found many similar answers. Given: Form the Cauchy product of the two series $a_k\;x^k$ and $\tfrac{1}{1-x}=1+x+x^2+\cdots$. So I come up with, $\sum_{n=0}^{\infty}\;c_n = \sum_{n=0}^{\infty}\;\sum_{k+l=n}\;a_l\;b_k = \cdots = \sum_{n=0}^{\infty}\;x^n\;\sum_{k=0}^n\;a_k = x^0\;(a_0)+x^1\;(a_0+a_1)+x^2\;(a_0+a_1+a_2)+\cdots$. It asks for what values of $x$ this would be valid: This is a funny question to me because it depends upon the coefficients in the power series, right? If I take the ratio test, I get $\lim_{k\to\infty}\;\bigg| \frac{a_{k+1}\;x^{k+1}}{a_k\;x^k}\bigg| = |x|\cdot \lim_{k\to\infty}\;\big| \frac{a_{k+1}}{a_k} \big|$. For this series to be convergent, doesn't this have to come to a real number, $L$ (not in $\mathbb{\bar{R}}$)? Therefore, $|x|<1/L$? I know that the other series, $\sum_{n=0}^{\infty}\;x^n$, converges for $x \in (-1,1)$. So for the product to be convergent, doesn't the requirement on $x$ have to be $|x| < \min\{1,1/L\}$? The reason I include this, other than the questions above, is that the question suggests using "this approach" to attain a closed form $\sum_{k=0}^{\infty}\;k\;x^k$, for $x \in (-1,1)$. By using the ratio test (my favorite), I'm pretty sure that for this to converge, $|x|<1$ - which is given. I tried writing out some of the terms but they do not seem to reach a point whereby future terms cancel (as they do in a series like $\sum_{k=1}^{\infty} \frac{1}{k\;(k+2)\;(k+4)}$). I've tried bounding (Squeeze) them but didn't get very far. Thanks for any suggestions!
Note that $\sum_{n\ge 0}nx^n$ is almost the Cauchy product of $\sum_{n\ge 0}x^n$ with itself: that Cauchy product is $$\left(\sum_{n\ge 0}x^n\right)^2=\sum_{n\ge 0}x^n\sum_{k=0}^n 1^2=\sum_{n\ge 0}(n+1)x^n\;.\tag{1}$$ If you multiply the Cauchy product in $(1)$ by $x$, you get $$x\sum_{n\ge 0}(n+1)x^n=\sum_{n\ge 0}(n+1)x^{n+1}=\sum_{n\ge 1}nx^n=\sum_{n\ge 0}nx^n\;,\tag{2}$$ since the $n=0$ term is $0$ anyway. Combining $(1)$ and $(2)$, we have $$\sum_{n\ge 0}nx^n=x\left(\sum_{n\ge 0}x^n\right)^2=x\left(\frac1{1-x}\right)^2=\frac{x}{(1-x)^2}\;,$$ with convergence for $|x|<1$ by the reasoning that you gave.
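A quick numerical check of the closed form (a sketch; the partial sums of $\sum_{n\ge0} nx^n$ should approach $x/(1-x)^2$ for $|x|<1$):

```python
for x in (0.3, -0.7):
    partial = sum(n * x**n for n in range(200))   # truncated series
    print(x, partial, x / (1 - x) ** 2)           # the two values agree closely
```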
Find the identity under a given binary operation I have two problems quite similar. The first: In $\mathbb{Z}_8$ find the identity of the following commutative operation: $$\overline{a}\cdot\overline{c}=\overline{a}+\overline{c}+2\overline{a}\overline{c}$$ I say: $$\overline{a}\cdot\overline{i}=\overline{a}+\overline{i}+2\overline{a}\overline{i} = \overline{a}$$ Since $\overline{a}$ is always cancelable in $\mathbb{Z}_8$ I can write: $$\overline{i}+2\overline{a}\overline{i} = \overline{0}$$ $$\overline{i}(\overline{1}+2\overline{a}) = \overline{0}$$ so $\overline{i}=\overline{0}$ whatever $\overline{a}$ is. Second question: In $\mathbb{Z}_9\times\mathbb{Z}_9$ find the identity of the following commutative operation: $$(\overline{a}, \overline{b})\cdot (\overline{c}, \overline{d})= (\overline{a}+ \overline{c}, \overline{8}\overline{b}\overline{d})$$ So starting from: $$(\overline{a}, \overline{b})\cdot (\overline{e_1}, \overline{e_2})= (\overline{a}+ \overline{e_1}, \overline{8}\overline{b}\overline{e_2})=(\overline{a}, \overline{b})$$ that is: $$\overline{a}+\overline{e_1}=\overline{a}\qquad (1)$$ $$\overline{8}\overline{b}\overline{e_2}=\overline{b}\qquad (2)$$ In (1) there's always a cancellable element for $\overline{a}$ since $\overline{-a}$ is always present in $\mathbb{Z}_9$. In (2) I should multiply both members by $\overline{8^{-1}}$ and $\overline{b^{-1}}$ to know exactly $\overline{e_2}$. This happens only if both numbers are invertible. It is easy to show that $\overline{8}$ is invertible, since $\gcd(8,9)=1$. But what about $\overline{b}$?
Hint $\ $ Identity elements $\rm\:e\:$ are idempotent $\rm\:e^2 = e\:$. Therefore $\rm(1)\ \ mod\ 8\!:\ \ e = e\cdot e = 2e+2e^2\ \Rightarrow\ e\:(1+2e) = 0\ \Rightarrow\ e = \ldots\:$ by $\rm n^2 \equiv 1\:$ for $\rm\:n\:$ odd $\rm(2)\ \ mod\ (9,9)\!:\ \ (a,b) = (a,b)^2 = (2a,-bb)\ \Rightarrow\ (-a, b\:(b+1)) = (0,0)\ \Rightarrow\ \ldots$
De Rham cohomology of $S^n$ Can you find the mistake in my computation of $H^{k}(S^{n})$? The sphere is a disjoint union of two spaces: $$S^{n} = \mathbb{R}^{n}\sqcup\mathbb{R^{0}},$$ so $$H^{k}(S^n) = H^{k}(\mathbb{R}^{n})\oplus H^{k}(\mathbb{R^{0}}).$$ In particular $$H^{0}(S^{n}) = \mathbb{R}\oplus\mathbb{R}=\mathbb{R}^{2}$$ and $$H^{k}(S^{n}) = 0,~~~k>0.$$ Where is the mistake? Thanks a lot!
You are wrong: $S^n$ is not the disjoint union $\mathbb R^n \sqcup \mathbb R^0$ - topologically. Although $S^n$ is $\mathbb R^n$ with one point at infinity, the topology of this point at infinity is very different from that of $\mathbb R^0$.
Existence theorems for problems with free endpoints? It is well known that the problem of minimizing $$ J[y] = \int_{0}^{1} \sqrt{y(x)^2 + \dot{y}(x)^2} dx $$ with $y \in C^2[0,1]$ and $y(0) = 1$ and $y(1) = 0$ has no solutions. However, if we remove the condition $y(1) = 0$ and instead let the value of $y$ at $x = 1$ be free, then an optimal solution does exist. An easy way to see this is to observe that $J[y]$ is really just the arc length of the plane curve with polar equation $r(\theta) = y(\theta)$. Clearly then, the function $y$ which traces out in that way the shortest line segment joining the point $(0,1)$ (given in polar coordinates) and the ray $\theta = 1$ is the (unique) solution to this new problem. Inspired by this little example, I wonder: are there results regarding the existence of solutions to variational problems with freedom at one or both endpoints and similar integrands?
Sure. You can take any smooth $f(x)$ with $f(0) = 0,$ then minimize $$ \int_0^1 \sqrt{1 + \left( f(\dot{y}(x)) \right)^2} \; dx $$ with $y(0) = 73.$ The minimizer is constant $y.$ More interesting is the free boundary problem for surface area. Given a wire frame that describes a nice curve $\gamma$ in $\mathbb R^3,$ once $\gamma$ is close enough to the $xy$ plane, there is a surface (topologically an annulus) with one boundary component being $\gamma$ and the other being a curve in the $xy$ plane, that minimizes the surface area among all such surfaces. The optimal surface meets the $xy$ plane orthogonally.
Prove the map has a fixed point Assume $K$ is a compact metric space with metric $\rho$ and $A$ is a map from $K$ to $K$ such that $\rho (Ax,Ay) < \rho(x,y)$ for $x\neq y$. Prove that $A$ has a unique fixed point in $K$. The uniqueness is easy. My problem is to show that there exists a fixed point. $K$ is compact, so every sequence has a convergent subsequence. Construct a sequence $\{x_n\}$ by $x_{n+1}=Ax_{n}$; $\{x_n\}$ has a convergent subsequence $\{ x_{n_k}\}$, but how to show there is a fixed point using $\rho (Ax,Ay) < \rho(x,y)$?
I don't have enough reputation to post a comment to reply to @андрэ 's question regarding where in the proof it is used that $f$ is a continuous function, so I'll post my answer here: Since we are told that $K$ is a compact set. $f:K\rightarrow K$ being continuous implies that the $\mathrm{im}(f) = f(K)$ is also a compact set. We also know that compact sets are closed and bounded, which implies the existence of $\inf_{x\in K} f(x)$. If it is possible to show that $f(K) \subseteq K$ is a closed set, then it is necessarily compact as well: A subset of a compact set is compact? However, I am not aware of how you would do this in this case without relying on continuity of $f$.
Modules with $m \otimes n = n \otimes m$ Let $R$ be a commutative ring. Which $R$-modules $M$ have the property that the symmetry map $$M \otimes_R M \to M \otimes_R M, ~m \otimes n \mapsto n \otimes m$$ equals the identity? In other words, when do we have $m \otimes n = n \otimes m$ for all $m,n \in M$? Some basic observations: 1) When $M$ is locally free of rank $d$, then this holds iff $d \leq 1$. 2) When $A$ is a commutative $R$-algebra, considered as an $R$-module, then it satisfies this condition iff $R \to A$ is an epimorphism in the category of commutative rings (see the Seminaire Samuel for the theory of these epis). 3) These modules are closed under the formation of quotients, localizations (over the same ring) and base change: If $M$ over $R$ satisfies the condition, then the same is true for $M \otimes_R S$ over $S$ for every $R$-algebra $S$. 4) An $R$-module $M$ satisfies this condition iff every localization $M_{\mathfrak{p}}$ satisfies this condition as an $R_{\mathfrak{p}}$-module, where $\mathfrak{p} \subseteq R$ is prime. This reduces the whole study to local rings. 5) If $R$ is a local ring with maximal ideal $\mathfrak{m}$ and $M$ satisfies the condition, then $M/\mathfrak{m}M$ satisfies the condition over $R/\mathfrak{m}$ (by 3). Now observation 1 implies that $M/\mathfrak{m}M$ has dimension $\leq 1$ over $R/\mathfrak{m}$, i.e. it is cyclic as an $R$-module. If $M$ was finitely generated, this would mean (by Nakayama) that $M$ is also cyclic. Thus, if $R$ is an arbitrary ring, then a finitely generated $R$-module $M$ satisfies this condition iff every localization of $M$ is cyclic. But there are interesting non-finitely generated examples, too (see 2). I don't expect a complete classification (this is already indicated by 2)), but I wonder if there is any nice characterization or perhaps even existing literature. It is a quite special property. Also note the following reformulation: Every bilinear map $M \times M \to N$ is symmetric.
The question has an accepted answer at MathOverflow, and perhaps it is time to leave the Unanswered list.
De Rham cohomology of $S^2\setminus \{k~\text{points}\}$ Am I right that the de Rham cohomology $H^k(S^2\setminus \{k~\text{points}\})$ of the $2$-dimensional sphere without $k$ points is $$H^0 = \mathbb{R}$$ $$H^2 = \mathbb{R}^{N}$$ $$H^1 = \mathbb{R}^{N+k-1}?$$ I received this using the Mayer–Vietoris sequence. And I want only to verify my result. If you know some elementary methods to compute the cohomology of this manifold, I am grateful to you. Calculation: Let $M = S^2$, $U_1$ a set consisting of $k$ $2$-dimensional open disks, and $U_2 = S^2\setminus \{k~\text{points}\}$. $$M = U_1 \cup U_2$$ each punctured point of $U_2$ is covered by a disk (which is contained in $U_1$). And $$U_1\cap U_2$$ is a set consisting of $k$ punctured disks (which are homotopic to $S^1$). Then the collection of dimensions in the Mayer–Vietoris sequence $$0\to H^0(M)\to\ldots\to H^2(U_1 \cap U_2)\to 0$$ is $$0~~~~~1~~~~~k+\alpha~~~~~k~~~~~0~~~~~\beta~~~~~k~~~~~1~~~~~\gamma~~~~~0~~~~~0$$ where $\alpha, \beta, \gamma$ are the dimensions of the $0$th, $1$st and $2$nd cohomology respectively. $$1 - (k+\alpha) + k = 0,$$ so $$\alpha = 1.$$ $$\beta - k + 1 - \gamma = 0,$$ so $$\beta = \gamma + (k-1).$$ So $$H^0 = \mathbb{R}$$ $$H^2 = \mathbb{R}^{N}$$ $$H^1 = \mathbb{R}^{N+k-1}$$ Thanks a lot!
It helps to use the fact that de Rham cohomology is a homotopy invariant, meaning we can reduce the problem to a simpler space with the same homotopy type. I think the method you are trying will work if you can straighten out the details, but if you're still having trouble then try this: $S^2$ with $1$ point removed is homeomorphic to the disk $D^2$. If we let $S_k$ denote $S^2$ with $k$ points removed, then $S_k$ is homeomorphic to $D_{k-1}$ (where $D_{k-1}$ denotes $D^2$ with $k-1$ interior points removed). Hint: Find a nicer space which is homotopy equivalent to $D_{k-1}$. Can you, for instance, make it $1$-dimensional? If you could, that would immediately tell you something about $H^2(S_k)$. If you get that far and know Mayer-Vietoris, you should be able to work out the calculation.
Finding probability $P(X+Y < 1)$ with CDF Suppose I have a Cumulative Distribution Function like this: $$F(x,y)=\frac { (x\cdot y)^{ 2 } }{ 4 } $$ where $0<x<2$ and $0<y<1$. And I want to find the probability of $P(X+Y<1)$. Since $x<1-y$ and $y<1-x$, I plug these back into the CDF to get this: $$F(1-y,1-x)=\frac { ((1-y)\cdot (1-x))^{ 2 } }{ 4 } $$ Because of the constraint where $0<x<2$ and $0<y<1$, I integrate according to the range of values: $$\int _{ 0 }^{ 1 }{ \int _{ 0 }^{ 2 }{ \frac { ((1-y)\cdot (1-x))^{ 2 } }{ 4 } dxdy } } =\frac { 1 }{ 18 } $$ This answer, however, is incorrect. My intuition for doing this is that because the two variables are somewhat dependent on each other to maintain the inequality of less than $1$, I want to "sum"(or integrate) all the probabilities within the possible range of values of $x$ and $y$ that satisfy the inequality. Somehow, the answer, which is $\frac{1}{24}$, doesn't seem to agree with my intuition. What have I done wrong?
We have the cumulative distribution function (CDF) $$F_{X,Y}(x,y)=\int_0^y\int_0^x f_{X,Y}(u,v)dudv=\frac{(xy)^2}{4}.$$ Differentiate with respect to both $x$ and $y$ to obtain the probability density function (PDF) $$f_{X,Y}(x,y)=\frac{\partial^2}{\partial x\,\partial y}\frac{(xy)^2}{4}=xy.$$ Finally, how do we parametrize the region given by $x+y<1$ inside the rectangle? Well, $x$ must be nonnegative so $y$ can be anything from $0$ to $1$, and simultaneously $x$ must be between $0$ and $1-y$; $$P(X+Y<1)=\int_0^1\int_0^{1-y} xy \; dx dy =\int_0^1 \frac{(1-y)^2}{2}ydy=\frac{1}{24}.$$
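The same computation can be reproduced symbolically; a small sketch using sympy (assumed available):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
F = (x * y) ** 2 / 4                                  # the given CDF on 0<x<2, 0<y<1
f = sp.diff(F, x, y)                                  # joint PDF: x*y
print(sp.integrate(f, (x, 0, 2), (y, 0, 1)))          # 1, so f is a valid density
print(sp.integrate(f, (x, 0, 1 - y), (y, 0, 1)))      # 1/24 = P(X + Y < 1)
```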
Number of ways to pair 2n elements from two different sets Say I have a group of 20 people, and I want to split them to pairs, I know that the number of different ways to do it is $\frac{(2n)!}{2^n \cdot n!}$ But let's say that I have to pair a boy with a girl? I got confused because unlike the first option the number of total elements (pairs) isn't $(2n)!$, and I failed to count it. I think that the size of each equivalence class is the same - $2^n \cdot n!$, I am just missing the number of total elements. Any ideas? Thanks!
If you have $n$ boys and $n$ girls, give them each a number and sort the pairs by wlog the boy's number. Then there are $n!$ possible orderings for the girls, so $n!$ ways of forming the pairs.
Solve $ x^2+4=y^d$ in integers with $d\ge 3$ Find all triples of integers $(x,y,d)$ with $d\ge 3$ such that $x^2+4=y^d$. I made some progress on the problem with Gaussian integers but still can't finish it. The problem is similar to Catalan's conjecture. NOTE: You can suppose that $d$ is a prime. Source: My head
See a similar question that I asked recently: Nontrivial Rational solutions to $y^2=4 x^n + 1$ This question might also be related to Fermat's Last Theorem.
How to come up with the gamma function? It always puzzles me, how the Gamma function's inventor came up with its definition $$\Gamma(x+1)=\int_0^1(-\ln t)^x\;\mathrm dt=\int_0^\infty t^xe^{-t}\;\mathrm dt$$ Is there a nice derivation of this generalization of the factorial?
Here is a nice paper by Detlef Gronau, "Why is the gamma function so as it is?". Concerning alternative possible definitions, see "Is the Gamma function mis-defined?"; another résumé of the story is provided by "Interpolating the natural factorial n!". Concerning Euler's work, Ed Sandifer's articles "How Euler did it" are of value too, in this case "Gamma the function".
interchanging integrals Why does $$\int_0^{y/2} \int_0^\infty e^{x-y} \ dy \ dx \neq \int_0^\infty \int_0^{y/2} e^{x-y} \ dx \ dy$$ The RHS is 1 and the LHS side is not. Would this still be a legitimate joint pdf even if Fubini's Theorem does not hold?
The right side, $$\int_0^\infty \int_0^{y/2} e^{x-y} \ dx \ dy,$$ refers to something that exists. The left side, as you've written it, does not. Look at the outer integral: $$ \int_0^\infty \cdots\cdots\; dy. $$ The variable $y$ goes from $0$ to $\infty$. For any particular value of $y$ between $0$ and $\infty$, the integral $\displaystyle \int_0^{y/2} e^{x-y}\;dx$ is something that depends on the value of $y$. The integral $\displaystyle \int_0^\infty \cdots\cdots dy$ does not depend on anything called $y$. But when you write $\displaystyle \int _0^{y/2} \int_\text{?}^\text{?} \cdots \cdots$ then that has to depend on something called $y$. What is this $y$? On the inside you've got $\displaystyle\int_0^\infty e^{x-y}\;dy$. Something like that does not depend on anything called $y$, but does depend on $x$. It's like what happens when you write $$ \sum_{k=1}^4 k^2. $$ What that means is $$ 1^2 + 2^2 + 3^2 + 4^2 $$ and there's nothing called "$k$" that it could depend on.
Is my proof correct: if $n$ is odd then $n^2$ is odd? Prove that for every integer $n,$ if $n$ is odd then $n^2$ is odd. I wonder whether my answer to the question above is correct. Hope that someone can help me with this. Using contrapositive, suppose $n^2$ is not odd, hence even. Then $n^2 = 2a$ for some integer $a$, and $$n = 2(\frac{a}{n})$$ where $\frac{a}{n}$ is an integer. Hence $n$ is even.
You need to show that $a/n$ is an integer. Try thinking about the prime factorizations of $a$ and $n$.
Is $BC([0,1))$ (the space of bounded real-valued continuous functions) separable? Is $BC([0,1))$ a subset of $BC([0,\infty))$? It is easy to prove the non-separability of $BC([0,\infty))$ and the separability of $C([0,1])$. It seems to me we can argue from the fact that any bounded continuous function of $BC([0,\infty))$ must also be in $BC([0,1))$ to somehow show $BC([0,1))$ is not separable, but $BC([0,1))$...
$BC([0,1))$ is not a subset of $BC([0,\infty))$; in fact, these two sets of functions are disjoint. No function whose domain is $[0,1)$ has $[0,\infty)$ as its domain, and no function whose domain is $[0,\infty)$ has $[0,1)$ as its domain. What is true is that $$\{f\upharpoonright[0,1):f\in BC([0,\infty))\}\subseteq BC([0,1))\;.$$ There is, however, a very close relationship between $BC([0,\infty))$ and $BC([0,1))$, owing to the fact that $[0,\infty)$ and $[0,1)$ are homeomorphic. An explicit homeomorphism is $$h:[0,\infty)\to[0,1):x\mapsto \frac2\pi\arctan x\;.$$ This implies that $BC([0,1))$ and $BC([0,\infty))$ are actually homeomorphic, via the map $$H:BC([0,1))\to BC([0,\infty)):f\mapsto f\circ h\;,$$ as is quite easily checked. Thus, one of $BC([0,\infty))$ and $BC([0,1))$ is separable iff the other is.
Help Understanding Why Function is Continuous I have read that because a function $f$ satisfies $$ |f(x) - f(y)| \leq |f(y)|\cdot|x - y| $$ then it is continuous. I don't really see why this is so. I know that if a function is "Lipschitz" there is some constant $k$ such that $$ |f(x) - f(y)| \leq k|x - y|. $$ But the first inequality doesn't really prove this because $|f(y)|$ depends on one of the arguments, so the function isn't necessarily Lipschitz. So, why does this first inequality imply $f$ is continuous?
You're right that the dependence on $y$ means this inequality isn't like the Lipschitz condition. But the same proof will show continuity in both cases. (In the Lipschitz case you get uniform continuity for free.) Here's how: Let $y\in\operatorname{dom} f$; we want to show $f$ is continuous at $y$. So let $\epsilon > 0$; we want to find $\delta$ such that, if $|x-y| < \delta$, then $|f(x) - f(y)| < \epsilon$. Let's choose $\delta$ later, when we figure out what it ought to be, and just write the proof for now: if $x$ is such that $|x-y| < \delta$ then $$ |f(x) - f(y)| < |f(y)|\cdot|x-y| < |f(y)|\delta = \epsilon $$ The first step is the hypothesis you've given; the second step is the assumption on $|x-y|$; the last step is just wishful thinking, because we want to end up with $\epsilon$ at the end of this chain of inequalities. But this bit of wishful thinking tells us what $\delta$ has to be to make the argument work: $\delta = |f(y)|^{-1}\epsilon$. (If $f$ were Lipschitz, the same thing would work with $|f(y)|$ replaced with $k$, and it would yield uniform continuity because the choice of $\delta$ wouldn't depend on $y$.) (Oh, and a technical matter: the condition you've stated only makes sense for $x\ne y$; otherwise the LHS is at least $0$ but the RHS is $0$, so the strict inequality cannot hold. But this doesn't affect the argument for continuity; you just assume at the right moment that $x\ne y$.)
Cauchy Sequence that Does Not Converge What are some good examples of sequences which are Cauchy, but do not converge? I want an example of such a sequence in the metric space $X = \mathbb{Q}$, with $d(x, y) = |x - y|$. And preferably, no use of series.
A fairly easy example that does not arise directly from the decimal expansion of an irrational number is given by $$a_n=\frac{F_{n+1}}{F_n}$$ for $n\ge 1$, where $F_n$ is the $n$-th Fibonacci number, defined as usual by $F_0=0$, $F_1=1$, and the recurrence $F_{n+1}=F_n+F_{n-1}$ for $n\ge 1$. It’s well known and not especially hard to prove that $\langle a_n:n\in\Bbb Z^+\rangle\to\varphi$, where $\varphi$ is the so-called golden ratio, $\frac12(1+\sqrt5)$. Another is given by the following construction. Let $m_0=n_0=1$, and for $k\in\Bbb N$ let $m_{k+1}=m_k+2n_k$ and $n_{k+1}=m_k+n_k$. Then for $k\in\Bbb N$ let $$b_k=\frac{m_k}{n_k}$$ to get the sequence $$\left\langle 1,\frac32,\frac75,\frac{17}{12},\frac{41}{29},\dots\right\rangle\;;$$ it’s a nice exercise to show that this sequence converges to $\sqrt2$. These are actually instances of a more general source of examples: the sequences of convergents of the continued fraction expansions of irrationals; the periodic ones, like this one, are probably easiest.
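A small numerical illustration of the first example (a sketch): the ratios $F_{n+1}/F_n$ are rational, so they form a Cauchy sequence in $\mathbb Q$, yet they close in on the irrational $\varphi$:

```python
from fractions import Fraction

phi = (1 + 5 ** 0.5) / 2
a, b = 1, 1                       # F_1, F_2
for n in range(2, 12):
    a, b = b, a + b               # step the Fibonacci recurrence
    r = Fraction(b, a)            # F_{n+1}/F_n, a rational number
    print(n, r, float(r) - phi)   # the gap to phi shrinks rapidly
```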
How to check convexity? How can I tell whether the function $$f(x,y)=\frac{y^2}{xy+1}$$ with $x>0$, $y>0$ is convex or not?
The book "Convex Optimization" by Boyd, available free online here, describes methods to check. The standard definition is if f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) for 0≤θ≤1 and the domain of x,y is also convex. So if you could prove that for your function, you would know it's convex. The Hessian being positive semi-definite, as mentioned in comments, would also show that the function is convex. See page 67 of the book for more.
How to prove $ \phi(n) = n/2$ iff $n = 2^k$? How can I prove this statement? $ \phi(n) = n/2$ iff $n = 2^k $ I'm thinking $n$ can be decomposed into its prime factors, then I can use the multiplicative property of the Euler phi function to get $\phi(n) = \phi(p_1)\cdots\phi(p_n) $. Then use the property $ \phi(p) = p - 1$. But I'm not sure if that's the proper approach for this question.
Edit: removed my full answer to be more pedagogical. You know that $\varphi(p) = p-1$, but you need to remember that $\varphi(p^k) = p^{k-1}(p-1).$ Can you take it from here?
Represent every natural number as a summation/subtraction of distinct powers of 3 I have seen this in a riddle where you have to choose 4 weights to weigh any weight from 1 to 40 kg. Some examples, $$8 = {3}^{2} - {3}^{0}$$ $$12 = {3}^{2} + {3}^{1}$$ $$13 = {3}^{2} + {3}^{1}+ {3}^{0}$$ Later I found it is also possible to use only 5 weights to weigh any weight between 1-121. $$100 = {3}^{4} + {3}^{3} - {3}^{2} + {3}^{0}$$ $$121 = {3}^{4} + {3}^{3} + {3}^{2} + {3}^{1} + {3}^{0}$$ Note: negative terms are allowed too, which is how I represent 8 and 100 above. I want to know if any natural number can be represented as such a sum/difference of distinct powers of 3. I know this is true for powers of 2. But is it really true for 3? What about the other numbers? Say $4, 5, 6, ... $
You can represent any number $n$ as $a_k 3^k + a_{k-1} 3^{k-1} + \dots + a_1 3 + a_0$, where $a_i \in \{-1,0,1\}$. This is called the balanced ternary system, and as Wikipedia says, one way to get balanced ternary from normal ternary is to add ..1111 to the number (formally) with carry, and then subtract ..1111 without carry. For a generalization, see here.
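A minimal sketch of the conversion to balanced ternary (digits are returned least-significant first; a remainder of $2$ becomes the digit $-1$ with a carry):

```python
def balanced_ternary(n):
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # use digit -1 and carry 1 into the next place
            digits.append(-1)
            n += 1
        else:                 # r is 0 or 1: keep it as the digit
            digits.append(r)
        n //= 3
    return digits

print(balanced_ternary(8))    # [-1, 0, 1]       i.e. 8 = 3^2 - 3^0
print(balanced_ternary(100))  # [1, 0, -1, 1, 1] i.e. 100 = 3^4 + 3^3 - 3^2 + 3^0
```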
A smooth function $f$ satisfies $\left|\operatorname{grad} f \right|=1$; then the integral curves of $\operatorname{grad} f$ are geodesics. $M$ is a Riemannian manifold; if a smooth function $f$ satisfies $\left| \operatorname{grad} f \right|=1,$ then prove that the integral curves of $\operatorname{grad} f$ are geodesics.
Well $\text{grad}(f)$ is the vector field such that $g(\text{grad}(f),-)=df$, therefore integral curves satisfy $$ \gamma'=\text{grad}(f)\Rightarrow g(\gamma',X)=df(X)=X(f) $$ Now let $X,Y$ be vector fields $$ XYf=Xg(\text{grad}(f),Y)= g(\nabla_X\text{grad}(f),Y)+g(\text{grad}(f),\nabla_XY)= g(\nabla_X\text{grad}(f),Y)+\nabla_XY(f) $$ and $$ YXf=Yg(\text{grad}(f),X)= g(\nabla_Y\text{grad}(f),X)+g(\text{grad}(f),\nabla_YX)= g(\nabla_Y\text{grad}(f),X)+\nabla_YX(f) $$ which gives after subtraction and vanishing torsion $$ [X,Y]f-\nabla_XY(f)+\nabla_YX(f)=0=g(\nabla_X\text{grad}(f),Y)-g(\nabla_Y\text{grad}(f),X) $$ It follows that $$ g(\nabla_X\text{grad}(f),Y)=g(\nabla_Y\text{grad}(f),X) $$ Now the easy part: substitute $X=\text{grad}(f)$ and conclude that for every $Y$ $$ g(\nabla_{\text{grad}(f)}\text{grad}(f),Y)=g(\nabla_Y\text{grad}(f),\text{grad}(f))=0 $$ The last equality holds because $g(\text{grad}(f),\text{grad}(f))=1$ is constant, so $$ 0=Yg(\text{grad}(f),\text{grad}(f))=2g(\nabla_Y\text{grad}(f),\text{grad}(f)) $$
Martingales, finite expectation I have some uncertainties about one of the requirements for a martingale, i.e. showing that $\mathbb{E}|X_n|<\infty,n=0,1,\dots$ when $(X_n,n\geq 0)$ is some stochastic process. In particular, in some solutions I find that, let's say, $\mathbb{E}|X_n|<n$ is accepted, for example here (2nd slide, example 1.2). So my question is: what is the way of thinking if $n$ goes to infinity? Why are we accepting $n$ as a boundary, or maybe I misunderstood something?
The condition $\mathbb E|X_n|\lt n$ is odd. What is required for $(X_n)$ to be a martingale is, in particular, that each $X_n$ is integrable (if only to be able to consider its conditional expectation), but nothing is required about the growth of $\mathbb E|X_n|$. Consider for example a sequence $(Z_n)$ of i.i.d. centered $\pm1$ Bernoulli random variables, and a real valued sequence $(a_n)$. Then $X_n=\sum\limits_{k=1}^na_kZ_k$ defines a martingale $(X_n)$ and $\mathbb E|X_n|\gt |a_n|$ by convexity. One sees that the growth of $n\mapsto\mathbb E|X_n|$ cannot be limited a priori.
Numbers are too large to show $65^{64}+64^{65}$ is not a prime I tried to find cycles of powers, but they are too big. Also $65^{n} \equiv 1 \pmod{64}$, so I don't know how to use that.
Hint $\rm\ \ x^4 +\: 64\: y^4\ =\ (x^2+ 8\:y^2)^2 - (4xy)^2\ =\ (x^2-4xy + 8y^2)\:(x^2+4xy+8y^2)$ Thus $\rm\ x^{64} + 64\: y^{64} =\ (x^{32} - 4 x^{16} y^{16} + 8 y^{32})\:(x^{32} + 4 x^{16} y^{16} + 8 y^{32})$ Below are some other factorizations which frequently prove useful for integer factorization. Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\,$ (aka Aurifeuillean). These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations: $$\begin{array}{rl} x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\\\ \frac{x^6 + 3^3}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\\\ \frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\\\ \frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\\\ \end{array}$$
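Applying the hint with $x=65$, $y=64$ (so that $x^{64}+64\,y^{64}=65^{64}+64\cdot 64^{64}=65^{64}+64^{65}$), the split can be confirmed directly with exact integer arithmetic; a quick sketch:

```python
x, y = 65, 64
A = x ** 32 - 4 * x ** 16 * y ** 16 + 8 * y ** 32
B = x ** 32 + 4 * x ** 16 * y ** 16 + 8 * y ** 32
assert A * B == 65 ** 64 + 64 ** 65   # the factorization from the hint
assert A > 1 and B > 1                # both factors are nontrivial, so the number is composite
print(A)
```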
Is $ f(x) = \left\{ \begin{array}{lr} 0 & : x = 0 \\ e^{-1/x^{2}} & : x \neq 0 \end{array} \right. $ infinitely differentiable on all of $\mathbb{R}$? Can anyone explicitly verify that the function $ f(x) = \left\{ \begin{array}{lr} 0 & : x = 0 \\ e^{-1/x^{2}} & : x \neq 0 \end{array} \right. $ is infinitely differentiable on all of $\mathbb{R}$ and that $f^{(k)}(0) = 0$ for every $k$?
For $x\neq 0$ you get: $$\begin{split} f^\prime (x) &= \frac{2}{x^3}\ f(x)\\ f^{\prime \prime} (x) &= 2\left( \frac{2}{x^6} - \frac{3}{x^4}\right)\ f(x)\\ f^{\prime \prime \prime} (x) &= 4\left( \frac{2}{x^9} - \frac{9}{x^7} +\frac{6}{x^5} \right)\ f(x) \end{split}$$ In the above equalities you can see a pattern, i.e.: $$\tag{1} f^{(n)} (x) = P_{3n}\left( \frac{1}{x}\right)\ f(x)$$ where $P_{3n}(t)$ is a polynomial of degree $3n$ in $t$. Formula (1) can be proved by induction. You have three base cases, hence you have only to prove the inductive step. So, assume (1) holds for $n$ and evaluate: $$\begin{split} f^{(n+1)} (x) &= \left( P_{3n}\left( \frac{1}{x}\right)\ f(x) \right)^\prime\\ &= -\frac{1}{x^2}\ \dot{P}_{3n} \left( \frac{1}{x}\right)\ f(x) + P_{3n} \left( \frac{1}{x}\right)\ f^\prime (x)\\ &= \left[ -\frac{1}{x^2}\ \dot{P}_{3n} \left( \frac{1}{x}\right) +\frac{2}{x^3}\ P_{3n} \left( \frac{1}{x}\right)\right]\ f(x)\\ &= \left[ -t^2\ \dot{P}_{3n}( t) +2t^3\ P_{3n}( t)\right]_{t=1/x}\ f(x) \end{split}$$ where the dot means derivative w.r.t. the dummy variable $t$; now the function $-t^2\ \dot{P}_{3n}( t) +2t^3\ P_{3n}( t)$ is a polynomial in $t$ of degree $3n+3=3(n+1)$, therefore: $$f^{(n+1)}(x) = P_{3(n+1)} \left( \frac{1}{x}\right)\ f(x)$$ as you wanted. Formula (1) proves that $f\in C^\infty (\mathbb{R}\setminus \{0\})$. Now, for each fixed $n$, you have: $$\lim_{x\to 0} f^{(n)}(x) = \lim_{x\to 0} P_{3n}\left( \frac{1}{x}\right)\ f(x) =0$$ since $f(x) = \text{o}(x^{3n})$ as $x\to 0$. Therefore, by an elementary consequence of Lagrange's Mean Value Theorem, every derivative of your function is differentiable also at $0$. Thus $f\in C^\infty (\mathbb{R})$.
Units and Nilpotents If $ua = au$, where $u$ is a unit and $a$ is a nilpotent, show that $u+a$ is a unit. I've been working on this problem for an hour; I tried to construct an element $x \in R$ such that $x(u+a) = 1 = (u+a)x$. After trying several elements and manipulating $ua = au$, I still couldn't find any clue. Can anybody give me a hint?
If $u=1$, then you could do it via the identity $$(1+a)(1-a+a^2-a^3+\cdots + (-1)^{n}a^n) = 1 + (-1)^{n}a^{n+1}$$ by selecting $n$ large enough. If $uv=vu=1$, does $a$ commute with $v$? Is $va$ nilpotent?
Compute $\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$ Compute $$\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$$ I did $$\lim_{x\to\infty} (\frac{x-2}{x+2})^x = \lim_{x\to\infty} \exp(x\cdot \ln(\frac{x-2}{x+2})) = \exp( \lim_{x\to\infty} x\cdot \ln(\frac{x-2}{x+2}))$$ But how do I continue? The hint is to use L'Hôpital's Rule. I tried changing to $$\exp(\lim_{x\to\infty} \frac{\ln(x-2)-\ln(x+2)}{1/x})$$ This is $$(\infty - \infty )/0 = 0/0$$ But I find that I can keep differentiating?
you can use $$\left( \frac{x-2}{x+2}\right)^x = \left(1 - \frac{4}{x+2}\right)^x$$ and $(1 + \frac ax)^x \to \exp(a)$, HTH, AB
Prove that the following integral is divergent $$\int_0^\infty \frac{7x^7}{1+x^7}\,dx$$ I'm really not sure how to even start this. Does anyone care to explain how this can be done?
The only problem is in $+\infty$. We have for $x\geq 1$ that $1+x^7\leq 2x^7$ so $\frac{7x^7}{1+x^7}\geq \frac 72\geq 0$ and $\int_1^{+\infty}\frac 72dt$ is divergent, so $\int_1^{+\infty}\frac{7x^7}{1+x^7}dx$ is divergent. Finally, $\int_0^{+\infty}\frac{7x^7}{1+x^7}dx$ is divergent.
Why do we look at morphisms? I am reading some lecture notes and in one paragraph there is the following motivation: "The best way to study spaces with a structure is usually to look at the maps between them preserving structure (linear maps, continuous maps differentiable maps). An important special case is usually the functions to the ground field." Why is it a good idea to study a space with structure by looking at maps that preserve this structure? It seems to me as if one achieves not much by going from one "copy" of a structured space to another copy.
There is no short and simple answer, as has already been mentioned in the comments. It is a general change of perspective that has happened during the 20th century. I think if you had asked a mathematician around 1900 what math is all about, he/she would have said: "There are equations that we have to solve" (linear or polynomial equations, differential and integral equations etc.). Then around 1950 you would have met more and more people saying "there are spaces with a certain structure and maps betweeen them". And today more and more people would add "...which together are called categories". It's essentially a shift towards a higher abstraction, towards studying Banach spaces instead of bunches of concrete spaces that happen to have an isomorphic Banach space structure, or studying an abstract group instead of a bunch of isomorphic representations etc. I'm certain all of this will become clearer after a few years of study.
What exactly do we mean when we say "linear" combination? I've noticed that the term gets abused a lot. For instance, suppose I have $c_1 x_1 + c_2 x_2 = f(x)$...(1) Equation (1) is what we call "a linear combination of $x_1$ and $x_2$". In ODEs, sometimes when we want to solve a homogeneous 2nd order ODE like $y'' + y' + y = 0$, we find the characteristic equation and solve for the roots and put it into whatever form necessary. But in all cases, the solution takes the form $c_1y_1 + c_2y_2 = y(t)$. The thing is that $y_1$ and $y_2$ themselves don't even have linear terms, so does it make sense to say $c_1y_1^2 +c_2y_2^2 = f(t)$ is a "quadratic" combination of $y_1$ and $y_2$?
It's a linear combination in the vector space of continuous (or differentiable or whatever) functions. $y_1$ and $y_2$ are vectors (that is, elements of the vector space in question) and $c_1$ and $c_2$ are scalars (elements of the field for the vector space, in this case $\mathbb{R}$). In linear algebra it does not matter what kind of elements vector spaces consist of (so these might be tuples in the case of $\mathbb{R}^n$, or linear operators, or just continuous functions, or something entirely different); what matters is that vector spaces satisfy the axioms, which are algebraic laws.
Sum of three consecutive cubes When I noticed that $3^3+4^3+5^3=6^3$, I wondered if there are any other times where $(a-1)^3+a^3+(a+1)^3$ equals another cube. That expression simplifies to $3a(a^2+2)$ and I'm still trying to find another value of $a$ that satisfies the condition (the only one found is $a=4$) Is this impossible? (It doesn't happen for $3 \leq a \leq 10000$) Is it possible to prove?
How about $$\left(-\frac{1}{2}\right)^3 + \left(\frac{1}{2}\right)^3 + \left(\frac{3}{2}\right)^3 = \left(\frac{3}{2}\right)^3 ?$$ After all, the OP didn't specify where $a$ lives... (by the way, there are infinitely many distinct rational solutions of this form!). Now for a more enlightened answer: no, there are no other integral solutions with $a\in\mathbb{Z}$, other than $a=0$ and $a=4$. Here is why (what follows is a sketch of the argument, several details would be too lengthy to write fully). Suppose $(a-1)^3+a^3+(a+1)^3=b^3$. Then $3a^3+6a=b^3$. Hence $(b:1:a)$ is a point on the elliptic curve $E:3Z^3+6ZY^2=X^3$ with origin at $(0:1:0)$. In particular, a theorem of Siegel tells us that there are at most finitely many integral solutions of $3a^3+6a=b^3$ with $a,b\in\mathbb{Z}$. Now the hard part is to prove that there are exactly $2$ integral solutions. With a change of variables $U=X/Z$ and $V=Y/Z$ followed by a change $U=x/6$ and $V=y/36$, we can look instead at the curve $E':y^2=x^3-648$. This curve has a trivial torsion subgroup and rank $2$, with generators $(18,72)$ and $(9,9)$. Moreover each point $(x,y)$ in $E'$ corresponds to a (projective) point $(x/6:y/36:1)$ on $E$, and a point $(X:Y:Z)$ on $E$ corresponds to a solution $a=Z/Y$ and $b=X/Y$. This means that $E$ is generated by $P_1=(3:2:1)$ and $P_2=(18:3:12)$ which correspond respectively to $(a,b)=(1/2,3/2)$ and $(4,6)$. The origin $(0:1:0)$ corresponds to $(a,b)=(0,0)$. Now it is a matter of looking through all $\mathbb{Z}$-linear combinations of $P_1$ and $P_2$ to see if any gives another $(a,b)$ integral. However, this is a finite search, because of the way heights of points work, and one can calculate a bound on the height for a point $(a,b)$ to have both integral coordinates. Once this bound is found, a search among a few small linear combinations of $P_1$ and $P_2$ shows that $(0,0)$ and $(4,6)$ are actually the only two possible integral solutions. Here is another rational solution, not as trivial as the first one I offered, that appears from $P_1-P_2$: $$\left(-\frac{10}{11}\right)^3 + \left(\frac{1}{11}\right)^3 + \left(\frac{12}{11}\right)^3 = \left(\frac{9}{11}\right)^3 $$
Proof that $\pi$ is rational I stumbled upon this proof of $\pi$ being rational (coincidentally, it's Pi Day). Of course I know that $\pi$ is irrational and there have been multiple proofs of this, but I can't seem to see a flaw in the following proof that I found here. I'm assuming it will be blatantly obvious to people here, so I was hoping someone could point it out. Thanks. Proof: We will prove that pi is, in fact, a rational number, by induction on the number of decimal places, N, to which it is approximated. For small values of N, say 0, 1, 2, 3, and 4, this is the case as 3, 3.1, 3.14, 3.142, and 3.1416 are, in fact, rational numbers. To prove the rationality of pi by induction, assume that an N-digit approximation of pi is rational. This number can be expressed as the fraction M/(10^N). Multiplying our approximation to pi, with N digits to the right of the decimal place, by (10^N) yields the integer M. Adding the next significant digit to pi can be said to involve multiplying both numerator and denominator by 10 and adding a number between between -5 and +5 (approximation) to the numerator. Since both (10^(N+1)) and (M*10+A) for A between -5 and 5 are integers, the (N+1)-digit approximation of pi is also rational. One can also see that adding one digit to the decimal representation of a rational number, without loss of generality, does not make an irrational number. Therefore, by induction on the number of decimal places, pi is rational. Q.E.D.
This "proof" shows that any real number is rational... The mistake here is that you are doing induction on the sequence $\pi_n$ of approximations. And with induction you can get information on each element of the sequence, but not on their limit. Or, put in another way, the proof's b.s. is on "therefore, by induction on the number of decimal places..."
Dynamic Optimization - Infinite dimensional spaces - Reference request Respected community members, I am currently reading the book "recursive macroeconomic theory" by Sargent and Ljungqvist. While reading this book I have realized that I do not always fully understand what is going on behind "the scenes". In particular, in chapter 8, the authors use the Lagrange method. This method is pretty clear to me in the finite dimensional case, i.e. optimization over $\mathbb{R}^n$. However here we are dealing with infinite dimensional problems. Why does the Lagrange method work here? Can someone point me to any good references? To clarify: I do understand how to apply the "recipe" given in the book, however I do not understand why it works. What are the specific assumptions needed in order to assure that a solution exists? That it is unique? This is the kind of questions that I would like to be able to answer rigorously. I hope I am clear enough about this, if not please let me know. Furthermore I really appreciate any help from you. Thank you for your time. Btw: If anyone has the time to look into the matter, the book is available for free here: elogica.br.inter.net/bere/boo/sargent.pdf
A fairly rigorous treatment with many economics applications is Stokey, Lucas and Prescott's (SLP) Recursive Methods in Economic Dynamics. This MIT OCW course gives good additional readings. I find the ones on transversality conditions very important. Standard mathematical treatments are Bertsekas's Dynamic Programming and Optimal Control and Puterman's Markov Decision Processes.
numerically solving differential equations $\frac{d^2 \theta}{dx^2} (1 + \beta \theta) + \beta \left(\frac{d \theta}{d x}\right)^2 - m^2 \theta = 0$ Boundary Conditions $\theta=100$ at $x = 0$, $\frac{d\theta}{dx} = 0$ at $x = 2$ $\beta$ and $m$ are constants. Please help me solve this numerically (using finite difference). The squared term is really complicating things! Thank You!
Choose an integer $N$, let $h=2/N$ and let $\theta_k$ be the approximation given by the finite difference method to the exact value $\theta(k\,h)$, $0\le k\le N$. We get the system of $N-1$ equations $$ \frac{\theta_{k+1}-2\,\theta_k+\theta_{k-1}}{h^2}(1+\beta\,\theta_k)+\beta\,\Bigl(\frac{\theta_k-\theta_{k-1}}{h}\Bigr)^2-m^2\,\theta_k=0,\quad 1\le k\le N-1\tag1 $$ complemented with two more coming from the boundary conditions: $$ \theta_0=100,\quad \theta_N-\theta_{N-1}=0. $$ I doubt that this nonlinear system can be solved explicitly. I suggest two ways of proceeding. The first is to solve the system numerically. The other is to apply a shooting method to the equation. Choose a starting value $\theta_N=a$. The system (1) can be solved recursively, obtaining at the end a value $\theta_0=\theta_0(a)$. If $\theta_0(a)=100$, you are done. If not, change the value of $a$ and repeat the process. Your first goal is to find two values $a_1$ and $a_2$ such that $\theta_0(a_1)<100<\theta_0(a_2)$. Then use the bisection method to approximate a value of $a$ such that $\theta_0(a)=100$.
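For the first route (solving the nonlinear system directly), here is a minimal sketch with a black-box root finder. The values $\beta=0.1$ and $m=1$ are assumptions chosen only for illustration, since the question leaves the constants unspecified, and fsolve may need a reasonable initial guess to converge.

```python
import numpy as np
from scipy.optimize import fsolve

beta, m = 0.1, 1.0          # assumed constants (not given in the question)
N = 100
h = 2.0 / N

def residuals(theta):
    F = np.empty(N + 1)
    F[0] = theta[0] - 100.0                      # boundary condition theta(0) = 100
    for k in range(1, N):                        # interior equations (1)
        d2 = (theta[k + 1] - 2 * theta[k] + theta[k - 1]) / h ** 2
        d1 = (theta[k] - theta[k - 1]) / h
        F[k] = d2 * (1 + beta * theta[k]) + beta * d1 ** 2 - m ** 2 * theta[k]
    F[N] = theta[N] - theta[N - 1]               # boundary condition dtheta/dx = 0 at x = 2
    return F

theta = fsolve(residuals, 100.0 * np.ones(N + 1))  # initial guess: constant 100
print(theta[0], theta[N])                          # 100 at x = 0, the free value at x = 2
```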
Number of distinct limits of subsequences of a sequence is finite? "The number of distinct limits of subsequences of a sequence is finite?" I've been mulling over this question for a while, and I think it is true, but I can't see how I might prove this formally. Any ideas? Thanks
No, the following is a counter-example: Let $E: \mathbb N \to \mathbb N^2$ be an enumeration of $\mathbb N^2$, and set $a_n = (E(n))_1$. Then $a_n$ contains a constant sub-sequence $a_{n_i} = k$ for every $k \in \mathbb N$.
Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$ Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$ Use proof by induction. I tried for $n=1$ and got $\frac{27}{9}=3$, but if I assume for $n$ and show it for $n+1$, I don't know what method to use.
${\displaystyle{\frac{1}{9}}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$ Proof by induction: For $n=1, {\displaystyle{\frac{1}{9}}(10^1+3 \cdot 4^1 + 5) = \frac{27}{9} = 3}$, so the result holds for $n=1$. Assume the result to be true for $n=m$, i.e. $\displaystyle{\frac{1}{9}(10^m+3 \cdot 4^m + 5)}$ is an integer. To show ${\displaystyle{\frac{1}{9}}(10^{m+1}+3 \cdot 4^{m+1} + 5)}$ is an integer. $$ \begin{align*} \displaystyle{\frac{1}{9}\left[(10^{m+1}+3 \cdot 4^{m+1} + 5) -(10^m+3 \cdot 4^m +5 )\right]} &= \displaystyle{\frac{1}{9}\left((10^{m+1}-10^m) +3\cdot (4^{m+1}-4^m) \right)} \\ &=\displaystyle{\frac{1}{9}\left[\left(10^m(10-1)+3 \cdot 4^m (4-1) \right)\right]}\\ &= \left(10^m+4^m\right) \end{align*} $$ which is an integer; adding it to the integer ${\displaystyle{\frac{1}{9}}(10^{m}+3 \cdot 4^{m} + 5)}$ (the induction hypothesis) shows that ${\displaystyle{\frac{1}{9}}(10^{m+1}+3 \cdot 4^{m+1} + 5)}$ is an integer.
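A quick empirical check of the statement (a sketch; the induction above is of course the actual proof):

```python
for n in range(1, 21):
    assert (10 ** n + 3 * 4 ** n + 5) % 9 == 0
print("10^n + 3*4^n + 5 is divisible by 9 for n = 1..20")
```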
How to prove a trigonometric identity $\tan(A)=\frac{\sin2A}{1+\cos 2A}$ Show that $$ \tan(A)=\frac{\sin2A}{1+\cos 2A} $$ I've tried a few methods, and it stumped my teacher.
First, lets develop a couple of identities. Given that $\sin 2A = 2\sin A\cos A$, and $\cos 2A = \cos^2A - \sin^2 A$ we have $$\begin{array}{lll} \tan 2A &=& \frac{\sin 2A}{\cos 2A}\\ &=&\frac{2\sin A\cos A}{\cos^2 A-\sin^2A}\\ &=&\frac{2\sin A\cos A}{\cos^2 A-\sin^2A}\cdot\frac{\frac{1}{\cos^2 A}}{\frac{1}{\cos^2 A}}\\ &=&\frac{2\tan A}{1-\tan^2A} \end{array}$$ Similarly, we have $$\begin{array}{lll} \sec 2A &=& \frac{1}{\cos 2A}\\ &=&\frac{1}{\cos^2 A-\sin^2A}\\ &=&\frac{1}{\cos^2 A-\sin^2A}\cdot\frac{\frac{1}{\cos^2 A}}{\frac{1}{\cos^2 A}}\\ &=&\frac{\sec^2 A}{1-\tan^2A} \end{array}$$ But sometimes it is just as easy to represent these identities as $$\begin{array}{lll} (1-\tan^2 A)\sec 2A &=& \sec^2 A\\ (1-\tan^2 A)\tan 2A &=& 2\tan A \end{array}$$ Applying these identities to the problem at hand we have $$\begin{array}{lll} \frac{\sin 2A}{1+\cos 2A}&=& \frac{\sin 2A}{1+\cos 2A}\cdot\frac{\frac{1}{\cos 2A}}{\frac{1}{\cos 2A}}\\ &=& \frac{\tan 2A}{\sec 2A +1}\\ &=& \frac{(1-\tan^2 A)\tan 2A}{(1-\tan^2 A)(\sec 2A +1)}\\ &=& \frac{(1-\tan^2 A)\tan 2A}{(1-\tan^2 A)\sec 2A +(1-\tan^2 A)}\\ &=& \frac{2\tan A}{\sec^2 A +(1-\tan^2 A)}\\ &=& \frac{2\tan A}{(\tan^2 A+1) +(1-\tan^2 A)}\\ &=& \frac{2\tan A}{2}\\ &=& \tan A\\ \end{array}$$ Lessons learned: As just a quick scan of some of the other answers will indicate, a clever substitution can shorten your workload considerably.
Proof by contrapositive Prove that if the product $ab$ is irrational, then either $a$ or $b$ (or both) must be irrational. How do I prove this by contrapositive? What is contrapositive?
The statement you want to prove is: If $ab$ is irrational, then $a$ is irrational or $b$ is irrational. The contrapositive is: If not($a$ is irrational or $b$ is irrational), then not($ab$ is irrational). A more natural way to state this (using DeMorgan's Law) is: If both $a$ and $b$ are rational, then $ab$ is rational. This last statement is indeed true. Since the truth of a statement and the truth of its contrapositive always agree, one can conclude the original statement is true, as well.
An example of an endomorphism Could someone suggest a simple $\phi\in $End$_R(A)$ where $A$ is a finitely generated module over ring $R$ where $\phi$ is injective but not surjective? I have a hunch that it exists but I can't construct an explicit example. Thanks.
Take $R=K[x]$, the polynomial ring in one variable over a field $K$, and let $A=R$, viewed as a module over itself (so $A$ is finitely generated, by $1$). Then let $\phi(f)=xf$. This is an $R$-module endomorphism which is injective, but has image $xK[x]\ne K[x]$.
Blowing up a singular point on a curve reduces its singular multiplicity by at least one Let $X$ be the affine plane curve given by $y^2=x^3$, and $O=(0,0)$. Then $X$ has a double singularity at $O$, since its tangent space at $O$ is the doubled $x$-axis. How do we see that, if $\widetilde{X}$ is the blow-up of $X$ at $O$, then $O$ is a nodal point of $\widetilde{X}$, i.e. the tangent space of $\widetilde{X}$ at $O$ consists of two distinct tangent lines?
Blowing up a cuspidal plane curve actually yields a nonsingular curve. So the multiplicity is actually reduced by more than one. This is e.g. Exercise 19.4.C in Vakil's notes "Foundations of Algebraic Geometry". One can compute this quite easily in local charts following e.g. Lecture 20 in Harris' Book "Algebraic Geometry". Recall that $\tilde X$ can be computed by blowing up the affine plane first and then taking the proper transform of the curve $X$. So: The blow-up of the affine plane is given by the points $z_1 W_2=z_2 W_1$ in $\mathbb A^2\times \mathbb P^1$ with coordinates $z_1, z_2$ on $\mathbb A^2$ and $W_1,W_2$ on $\mathbb P^1$. Taking Euclidean coordinates $w_1=W_1/W_2$ on $U_2=\{W_2\neq0\}$ yields an isomorphism from $U_2$ to $\mathbb A^2$ with coordinates $(z_2,w_1)$. We have $z_1^2-z_2^3=z_2^2w_1^2-z_2^3$ on $U_2$ and thus the proper transform $\tilde X$ is defined by the polynomial $w_1^2-z_2$. Hence it is smooth!
Graph Theory - How can I calculate the number of vertices and edges, if given this example The algorithms book The Algorithm Design Manual gives this description: Consider a graph that represents the street map of Manhattan in New York City. Every junction of two streets will be a vertex of the graph. Neighboring junctions are connected by edges. How big is this graph? Manhattan is basically a grid of 15 avenues each crossing roughly 200 streets. This gives us about 3,000 vertices and 6,000 edges, since each vertex neighbors four other vertices and each edge is shared between two vertices. If it says "The graph is a grid of 15 avenues each crossing roughly 200 streets", how can I calculate the number of vertices and edges? Although the description above gives the answers, I just can't understand the calculation. Can anyone explain it more simply? Thanks
Every junction between an avenue and a street is a vertex. As there are $15$ avenues and (about) $200$ streets, there are (about) $15*200=3000$ vertices. Furthermore, every vertex has an edge along an avenue and an edge along a street that connect it to two other vertices. Hence, there are (about) $2*3000 = 6000$ edges1. Does that answer your question? 1 With regards to edges, a visual way to imagine it would be to imagine that the avenues are going north-south and the streets are going east-west. Start with the junction/vertex in the northwesternmost corner. It is adjacent to two other vertices: one south along the avenue and one east along the street. Similarly, every vertex has a vertex to the south and a vertex to the east (as well as north and west for most of them, but those are irrelevant for this). Hence, there are two edges for every vertex.
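If you want the exact numbers rather than the back-of-the-envelope ones, here is a tiny Python count for an idealized $15\times200$ grid (the sizes are the ones quoted above):

```python
avenues, streets = 15, 200
vertices = avenues * streets
# each junction has an edge to its eastern and to its southern neighbour, when those exist
edges = avenues * (streets - 1) + (avenues - 1) * streets
print(vertices, edges)    # 3000 and 5785, i.e. roughly the 3000/6000 quoted in the book
```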
Is it mathematically correct to write $a \bmod n \equiv b$? This is not a technical question, but a question on whether we can use a particular notation while doing modular arithmetic. We write $a \equiv b \bmod n$, but is it right to write $a \bmod n \equiv b$?
It is often correct. $\TeX$ distinguishes the two usages: the \pmod control sequence is for "parenthesized" $\pmod n$ used to contextualize an equivalence, as in your first example, and the \bmod control sequence is for "binary operator" $\bmod$ when used like a binary operator (in your second example). But in the latter case, you should use $=$, not $\equiv$. $7\bmod4 = 3$, and the relation here is a numeric equality, indicated by $=$, not a modular equivalence, which would be indicated by $\equiv$.
Left or right edge in cubic planar graph Given a cubic planar graph, if I "walk" on one edge to get to a vertex, is it possible to know which of the other two edges is the left edge and which one is the right edge? Am I forced to draw the graph on paper, without edge crossings, and visually identify left and right edges?
My comment as an answer so it can be accepted: The answer is no: This can't be possible, since you could draw the mirror image instead, and then left and right edges would be swapped, so they can't be determined by the abstract graph alone.
Traveling between integers- powers of 2 Moderator Note: At the time that this question was posted, it was from an ongoing contest. The relevant deadline has now passed. Consider the integers. We can only travel directly between two integers with a difference whose absolute value is a power of 2 and every time we do this it is called a step. The distance $d$ between two integers is the minimum number of steps required to get from one to the other. Note however that we can travel backwards. For instance $d(2,17)$ is 2: $2+16=18 \rightarrow 18-1=17$. How can we prove that for any integer n, we will always have some $d(a,b)=n$ where$b>a$? If we are only able to take forward steps I know that the number of 1s in the binary representation of $b-a$ would be $d(a,b)$. However, we are able to take steps leftward on the number line...
It is easy to see that the function $s(n):=d(0,n)$ $\ (n\geq1)$ satisfies the following recursion: $$s(1)=1,\qquad s(2n)\ =\ s(n), \qquad s(2n+1)=\min\{s(n),s(n+1)\}+1 \ .$$ In particular $s(2)=s(4)=1$, $s(3)=2$. Consider now the numbers $$a_r:={1\over6}(4^r+2)\qquad (r\geq2)$$ satisfying the recursion $$a_2=3,\qquad a_{r+1}=4 a_r-1\quad (r\geq2).$$ The first few of these are $3$, $11$, $43$, $171$. I claim that $$s(a_r-1)=s(a_r+1)=r-1,\quad s(a_r)=r\qquad (r\geq2)\ .$$ The claim is true for $r=2$. Assume that it is true for $r$. Then $$s(2a_r-1)=\min\{s(a_r),s(a_r-1)\}+1=r,\qquad s(2a_r)=r$$ and therefore $$s(4a_r-2)=s(4a_r)=r,\quad s(4a_r-1)=\min\{s(2a_r),s(2a_r-1)\}+1=r+1\ .$$ The last line can be read as $$s(a_{r+1}-1)=s(a_{r+1}+1)=r, \qquad s(a_{r+1})=r+1\ .$$
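Here is a short Python check that implements the recursion for $s(n)$ stated above and confirms the claimed values for the first few $a_r$ (a numerical illustration only):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    # the recursion from the answer: s(1)=1, s(2n)=s(n), s(2n+1)=min(s(n), s(n+1))+1
    if n == 1:
        return 1
    if n % 2 == 0:
        return s(n // 2)
    k = n // 2
    return min(s(k), s(k + 1)) + 1

a = [3, 11, 43, 171, 683]                  # a_r = (4**r + 2)//6 for r = 2, ..., 6
print([s(x) for x in a])                   # [2, 3, 4, 5, 6]
print([(s(x - 1), s(x + 1)) for x in a])   # each pair equals (r-1, r-1)
```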
If $A$ is a subset of $B$, then the closure of $A$ is contained in the closure of $B$. I'm trying to prove something here which isn't necessarily hard, but I believe it to be somewhat tricky. I've looked online for the proofs, but some of them don't seem 'strong' enough for me or that convincing. For example, they use the argument that since $A\subset \overline{B} $, then $ \overline{A} \subset \overline{B} $. That, or they use slightly altered definitions. These are the definitions that I'm using: Definition #1: The closure of $A$ is defined as the intersection of all closed sets containing A. Definition #2: We say that a point x is a limit point of $A$ if every neighborhood of $x$ intersects $A$ in some point other than $x$ itself. Theorem 1: $ \overline{A} = A \cup A' $, where $A'$ = the set of all limit points of $A$. Theorem 2: A point $x \in \overline{A} $ iff every neighborhood of $x$ intersects $A$. Prove: If $ A \subset B,$ then $ \overline{A} \subset \overline{B} $ Proof: Let $ \overline{B} = \bigcap F $ where each $F$ is a closed set containing $B$. By hypothesis, $ A \subset B $; hence, it follows that for each $F \in \overline{B} $, $ A \subset F \subset \overline{B} $. Now that we have proven that $ A \subset \overline{B} $, we show $A'$ is also contained in $\overline{B} $. Let $ x \in A' $. By definition, every neighborhood of x intersects A at some point other than $x$ itself. Since $ A \subset B $, every neighborhood of $x$ also intersects $B$ at some other point other than $x$ itself. Then, $ x \in B \subset \overline{B} $. Hence, $ A \cup A' \subset \overline{B}$. But, $ A \cup A' = \overline{A}$. Hence, $ \overline{A} \subset \overline{B}.$ Is this proof correct? Be brutally honest, please. Critique as much as possible.
I think it's much simpler than that. By definition #1, the closure of A is a subset of any closed set containing A; and the closure of B is certainly a closed set containing A (because it contains B, which contains A). QED.
Find the value of $(-1)^{1/3}$. Evaluate $(-1)^{\frac{1}{3}}$. I've tried to answer it by letting it be $x$ so that $x^3+1=0$. But by this way, I'll get $3$ roots, how do I get the actual answer of $(-1)^{\frac{1}{3}}$??
Put it in polar form, writing $m_\theta$ for the complex number with modulus $m$ and argument $\theta$. We want $z=\sqrt[3]{-1}$, and $-1=1_{\pi}$. The $n$-th roots of $m_\theta$ have modulus $\sqrt[n]{m}$ and arguments $$\alpha_k=\dfrac{\theta+2k\pi}{n},\qquad k=0,1,\dots,n-1.$$ Here $n=3$, $m=1$ and $\theta=\pi$, so $\alpha_0=\pi/3$ ($60^\circ$), $\alpha_1=\pi$ ($180^\circ$), $\alpha_2=5\pi/3$ ($300^\circ$). So the answers are: $z_1=1_{\pi/3}$, $z_2=1_{\pi}=-1$, $z_3=1_{5\pi/3}$.
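As a quick numerical cross-check (Python's cmath, purely illustrative):

```python
import cmath

# the three cube roots of -1, from the polar formula above
roots = [cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 3) for k in range(3)]
for z in roots:
    print(z, z**3)    # each z**3 equals -1 up to rounding
```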
Suppose two $n\times n$ matrices, $A$ and $B$; how many possible solutions are there? Suppose I construct an $n\times n$ matrix $C$ by multiplying two $n\times n$ matrices $A$ and $B$, i.e. $AB = C$. Given $B$ and $C$, how many other $A$'s can also yield $C$, i.e. is it just the exact $A$, infinitely many other $A$'s, or no other $A$'s? There are no assumptions made about the invertibility of $A$ and $B$. In the case that $A$ and $B$ are invertible there exists only one such $A$.
In general, there could be infinitely many $A$. Given two solutions $A_1B=C$ and $A_2B=C$, we see that $(A_1-A_2)B=0$ So, if there is at least one solution to $AB=C$, you can see that there are as many solutions to $AB=C$ as there are to $A_0B=0$ Now if $B$ is invertible, the only $A_0$ is $A_0=0$. If $A_0B=0$ then $(kA_0)B=0$ for any real $k$, so if there is a non-zero root to $A_0B=0$ there are infinitely many. So you have to show that if $B$ is not invertible, then there is at least one matrix $A_0\neq 0$ such that $A_0B=0$. Indeed, the set of such $A_0$ forms a vector space which depends on the "rank" of $B$.
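A concrete illustration with NumPy (the matrices are made up; the point is only that $B$ is singular):

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 0.0]])           # rank 1, not invertible
A1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
A0 = np.array([[0.0, 5.0],
               [0.0, -7.0]])         # A0 @ B = 0: A0's first column and B's second column are zero
A2 = A1 + A0
print(np.allclose(A1 @ B, A2 @ B))   # True: two different A's give the same C
```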
Eigenvalue or Eigenvector for a bidiagonal $n\times n$ matrix Let $$J = \begin{bmatrix} a & b & 0 & 0 & \cdots & \cdots\\\\ 0 & a & b & 0 & \cdots & \cdots\\\\ \vdots & \vdots & \ddots & \cdots & \cdots & \cdots \\\\ \vdots & \vdots & \vdots & \ddots & \cdots & \cdots \\\\ \vdots & \vdots & \vdots &\ddots & a & b \\\\ \vdots & \vdots & \vdots & \vdots & 0 & a \\ \end{bmatrix}$$ I have to find eigenvalues and eigenvectors for $J$. My thoughts on this...

    a =
       2   3
       0   2
    octave-3.2.4.exe:2> b=[2,3,0;0,2,3;0,0,2]
    b =
       2   3   0
       0   2   3
       0   0   2
    octave-3.2.4.exe:3> eig(a)
    ans =
       2
       2
    octave-3.2.4.exe:4> eig(b)
    ans =
       2
       2
       2
    octave-3.2.4.exe:5>

I can see that the eigenvalue is $a$ for these matrices. Any idea how I can prove that the eigenvalue is the diagonal entry for any $n \times n$ matrix? Thanks!!! I figured out how to find the eigenvalues, but my eigenvector corresponding to the eigenvalue $a$ comes out to be a zero vector... If I try using MATLAB, the eigenvector matrix has column vectors with $1$ in the first row and zeros in the rest of the column vector... What am I missing? Can someone help me figure out that eigenvector matrix?
(homework) so some hints:

* The eigenvalues are the roots of ${\rm det}(A-xI) = 0.$
* The determinant of a triangular matrix is the product of all diagonal entries.
* How many diagonal entries does an $n\times n$ matrix have?
* How many roots does $(a - x)^n = 0$ have?
Stability of trivial solution for DE system with non-constant coefficient matrix Given the arbitrary linear system of DE's $$x'=A(t)x,$$ with the condition that the spectral bound of $A(t) $ is uniformly bounded by a negative constant, is the trivial solution always stable? All the $(2\times 2)$ matrices I've tried which satisfy the above property yield stable trivial solutions, which seems to suggest this might be the case in general. I can't think of a simple counterexample, so I'm asking if one exists. If there isn't what would be some steps toward proving the statement? This is indeed homework.
You can elongate a vector a bit over a short time using a constant matrix with negative eigenvalues, right? Now just do it and at the very moment it starts to shrink, change the matrix. It is not so easy to come up with an explicit formula (though some periodic systems will do it) but this idea of a counterexample is not hard at all ;).
Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$? Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$?
You have surjective morphisms $xL(n,K)\to PxL(n,K)$ (whose kernel consists of the multiples of the identity) for $x\in\{G,S\}$, $n\in\mathbb N$ and $K\in\{\mathbb C,\mathbb R\}$. You also have embeddings $SL(n,K)\to GL(n,K)$. Since the kernel of the composed morphism $SL(n,K)\to GL(n,K)\to PGL(n,K)$ contains (and in fact coincides with) the kernel of the morphism $SL(n,K)\to PSL(n,K)$ (it is the (finite) set of multiples of the identity in $SL(n,K)$), one may pass to the quotient to obtain a morphism $PSL(n,K)\to PGL(n,K)$, which is injective (because of "coincides with" above). The question is whether this morphism is surjective. The question amounts to the following: given any $g\in GL(n,K)$, does its image in $PGL(n,K)$ coincide with the image of some $g'\in SL(n,K)\subset GL(n,K)$? For that to happen, there should be a $\lambda\in K^\times$ such that $\lambda g\in SL(n,K)$, and since $\det(\lambda g)=\lambda^n\det g$, one is led to search for solutions $\lambda$ of $\lambda^n=(\det g)^{-1}$, where $(\det g)^{-1}$ could be any element of $K^\times$. It is easy to see how the solvability of this polynomial equation depends on $n$ and $K$.
first order differential equation I needed help with this Differential Equation, below: $$dy/dt = t + y, \text{ with } y(0) = -1$$ I tried $dy/(t+y) = dt$ and integrated both sides, but it looks like the $u$-substitution does not work out.
This equation is not separable. In other words, you can't write it as $f(y)\;dy=g(t)\;dt$. A differential equation like this can be solved by integrating factors. First, rewrite the equation as: $$\frac{dy}{dt}-y=t$$ Now we multiply the equation by an integrating factor so we can use the product rule, $d(uv)=udv+vdu.$ For this problem, that integrating factor would be $e^{-t}$. $$e^{-t}\frac{dy}{dt}-e^{-t}y=\frac d{dt}(e^{-t}y)=te^{-t}$$ $$e^{-t}y=\int te^{-t}dt=-te^{-t}+\int e^{-t}dt=-te^{-t}-e^{-t}+C$$ $$y=Ce^t-t-1$$ For this specific problem, we could also follow Iasafro's suggestion. $$z=y+t,\frac{dz}{dt}=\frac{dy}{dt}+1,\frac{dy}{dt}=\frac{dz}{dt}-1$$ $$\frac{dz}{dt}-1=z,\frac{dz}{dt}=z+1,\frac{dz}{z+1}=dt$$ As you can see, this substitution resulted in a separable equation, allowing you to integrate both sides.
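If you want to double-check the integrating-factor computation, here is a short symbolic verification with SymPy (just a cross-check, not part of the method):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(t).diff(t), t + y(t)), y(t), ics={y(0): -1})
print(sol)    # Eq(y(t), -t - 1), i.e. C = 0 for this initial condition
```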
Limit of $\arctan(x)/x$ as $x$ approaches $0$? Quick question: I came across the following limit: $$\lim_{x\rightarrow 0^{+}}\frac{\arctan(x)}{x}=1.$$ It seems like the well-known limit: $$\lim_{x\rightarrow 0}\frac{\sin x}{x}=1.$$ Can anyone show me how to prove it?
We can make use of L'Hopital's rule. Since $\frac{d}{dx}\arctan x=\frac{1}{x^2+1}$ and $\frac{d}{dx}x=1$, we have $$\lim\limits_{x\to0^+}\frac{\arctan x}{x}=\lim\limits_{x\to0^+}\frac{1}{x^2+1}=1.$$
Deriving even odd function expressions What is the logic/thinking process behind deriving an expression for even and odd functions in terms of $f(x)$ and $f(-x)$? I've been pondering about it for a few hours now, and I'm still not sure how one proceeds from the properties of even and odd functions to derive: $$\begin{align*} E(x) &= \frac{f(x) + f(-x)}{2}\\ O(x) &= \frac{f(x) - f(-x)}{2} \end{align*}$$ What is the logic and thought process from using the respective even and odd properties, $$\begin{align*} f(-x) &= f(x)\\ f(-x) &= -f(x) \end{align*}$$ to derive $E(x)$ and $O(x)$? The best I get to is: For even: $f(x)-f(-x)=0$ and for odd: $f(x)+f(-x)=0$ Given the definition of $E(x)$ and $O(x)$, it makes a lot of sense (hindsight usually is) but starting from just the properties. Wow, I feel I'm missing something crucial.
This is more intuitive if one views it in the special case of polynomials or power series expansions, where the even and odd parts correspond to the terms with even and odd exponents, e.g. bisecting into even and odd parts the power series for $\:\rm e^{{\it i}\:x} \:,\;$ $$\begin{align} \rm f(x) \ &= \ \rm\frac{f(x)+f(-x)}{2} \;+\; \frac{f(x)-f(-x)}{2} \\[.4em] \Rightarrow\quad \rm e^{\,{\large \it i}\,x} \ &= \ \rm\cos(x) \ +\ {\it i} \ \sin(x) \end{align}\qquad$$ Similarly one can perform multisections into $\rm\:n\:$ parts using $\rm\:n\:$'th roots of unity - see my post here for some examples and see Riordan's classic textbook Combinatorial Identities for many applications. Briefly, with $\rm\:\zeta\ $ a primitive $\rm\:n$'th root of unity, the $\rm\:m$'th $\rm\:n$-section selects the linear progression of $\rm\: m+k\:n\:$ indexed terms from a series $\rm\ f(x)\ =\ a_0 + a_1\ x + a_2\ x^2 +\:\cdots\ $ as follows $\rm\quad\quad\quad\quad a_m\ x^m\ +\ a_{m+n}\ x^{m+n} +\ a_{m+2\:n}\ x^{m+2n}\ +\:\cdots $ $\rm\quad\quad\, =\,\ \frac{1}{n} \big(f(x)\ +\ f(x\zeta)\ \zeta^{-m}\ +\ f(x\zeta^{\:2})\ \zeta^{-2m}\ +\:\cdots\: +\ f(x\zeta^{\ n-1})\ \zeta^{\ (1-n)m}\big)$ Exercisse $\;$ Use multisections to give elegant proofs of the following $\quad\quad\rm\displaystyle sin(x)/e^{x} \ \ $ has every $\rm 4k\,$'th term zero in its power series $\quad\quad\rm\displaystyle cos(x)/e^{x} \ \, $ has every $\rm 4k\!+\!2\,$'th term zero in its power series See the posts in this thread for various solutions and more on multisections. When you later study representation theory of groups you will learn that this is a special case of much more general results, with relations to Fourier and other transforms. It's also closely related to various Galois-theoretic results on modules, e.g. see my remark about Hilbert's Theorem 90 in the linked thread.
Find the angle in a triangle if the distance between one vertex and orthocenter equals the length of the opposite side Let $O$ be the orthocenter (intersection of heights) of the triangle $ABC$. If $\overline{OC}$ equals $\overline{AB}$, find the angle $\angle$ACB.
Position the circumcenter $P$ of the triangle at the origin, and let the vectors from the $P$ to $A$, $B$, and $C$ be $\vec{A}$, $\vec{B}$, and $\vec{C}$. Then the orthocenter is at $\vec{A}+\vec{B}+\vec{C}$. (Proof: the vector from $A$ to this point is $(\vec{A}+\vec{B}+\vec{C})-\vec{A} = \vec{B}+\vec{C}$. The vector coinciding with the side opposite vertex $A$ is $\vec{B}-\vec{C}$. Now $(\vec{B}+\vec{C})\cdot(\vec{B}-\vec{C}) = |\vec{B}|^2 - |\vec{C}|^2 = R^2-R^2 = 0$, where $R$ is the circumradius. So the line through $A$ and the head of $\vec{A}+\vec{B}+\vec{C}$ is the altitude to $BC$. Similarly for the other three altitudes.) Now the vector coinciding with $OC$ is $\vec{O}-\vec{C}=\vec{A}+\vec{B}$. Thus $|OC|=|AB|$ if and only if $$|\vec{A}+\vec{B}|^2 = |\vec{A}-\vec{B}|^2$$ if and only if $$\vec{A}\cdot\vec{A} + \vec{B}\cdot\vec{B} + 2\vec{A}\cdot\vec{B} = \vec{A}\cdot\vec{A} + \vec{B}\cdot\vec{B} - 2\vec{A}\cdot\vec{B}$$ if and only if $$4\vec{A}\cdot\vec{B} = 0$$ if and only if $$\angle APB = \pi /2$$ if and only if $$\boxed{\angle ACB = \pi/4 = 45^\circ}.$$
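A quick numeric sanity check of this vector argument (NumPy; the angles $0.3$ and $2.5$ are arbitrary choices that place $A$, $B$, $C$ on the unit circumcircle with $\angle APB=90^\circ$):

```python
import numpy as np

A = np.array([np.cos(0.3), np.sin(0.3)])
B = np.array([np.cos(0.3 + np.pi/2), np.sin(0.3 + np.pi/2)])    # so A . B = 0
C = np.array([np.cos(2.5), np.sin(2.5)])                         # any other point on the circle
O = A + B + C                                                    # the orthocenter
print(np.isclose(np.linalg.norm(O - C), np.linalg.norm(A - B)))  # True
u, v = A - C, B - C
angle = np.degrees(np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
print(angle)                                                     # ~45.0
```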
Hom of the direct product of $\mathbb{Z}_{n}$ to the rationals is nonzero. Why is $\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right)$ nonzero? Context: This is problem $2.25 (iii)$ of page $69$ Rotman's Introduction to Homological Algebra: Prove that $$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right) \ncong \prod_{n \geq 2}\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}_{n},\mathbb{Q}).$$ The right hand side is $0$ because $\mathbb{Z}_{n}$ is torsion and $\mathbb{Q}$ is not.
Let $G=\prod_{n\geq2}\mathbb Z_n$ and let $t(G)$ be the torsion subgroup, which is properly contained in $G$ (the element $(1,1,1,\dots)$ is not in $t(G)$, for example) Then $G/t(G)$ is a torsion-free abelian group, which therefore embeds into its localization $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, which is a non-zero rational vector space, and in fact generates it as a vector space. There is a non-zero $\mathbb Q$-linear map $(G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q$ (here the Choice Police will observe that we are using the axiom of choice, of course...). Composing, we get a non-zero morphism $$G\to G/t(G)\to (G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q.$$ Remark. If $H$ is a torsion-free abelian group, its finitely generated subgroups are free, so flat. Since $H$ is the colimit of its finitely generated subgroups, it is itself flat, and tensoring the exact sequence $0\to\mathbb Z\to\mathbb Q$ with $H$ gives an exact sequence $0\to H\otimes_{\mathbb Z}\mathbb Z=H\to H\otimes_{\mathbb Z}\mathbb Q$. Doing this for $H=G/t(G)$ shows $G/t(G)$ embeds in $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, as claimed above.
Residue of a pole of order 6 I am in the process of computing an integral using the Cauchy residue theorem, and I am having a hard time computing the residue of a pole of high order. Concretely, how would one compute the residue of the function $$f(z)=\frac{(z^6+1)^2}{az^6(z-a)(z-\frac{1}{a})}$$ at $z=0$? Although it is not needed here, $a$ is a complex number with $|a|<1$. Thanks in advance for any insight.
$$g(z)=\frac{1}{(z-a)(z-\frac{1}{a})}=\frac{\frac{1}{a-\frac{1}{a}}}{z-a}+\frac{\frac{-1}{a-\frac{1}{a}}}{z-\frac{1}{a}}$$ Since $|a|<1$, both terms can be expanded as geometric series about $z=0$ (valid for $|z|<|a|$): $$g(z)=\frac{\frac{1}{a-\frac{1}{a}}}{z-a}+\frac{\frac{1}{a-\frac{1}{a}}}{\frac{1}{a}-z}=\left(\frac{1}{a-\frac{1}{a}}\right) \left[ \frac{-\frac{1}{a}}{1-\frac{z}{a}}+\frac{a}{1-az} \right]$$ $$g(z)=\left(\frac{1}{a-\frac{1}{a}}\right) \left[ \frac{-1}{a} \sum_{n=0}^{\infty}\left(\frac{z}{a}\right)^n+a \sum_{n=0}^{\infty} (az)^n \right]$$ Now $$f(z)=\frac{(z^6+1)^2}{az^6}g(z)=\frac{z^{12}+2z^6+1}{az^6}g(z)=\left( \frac{z^6}{a} + \frac{2}{a} + \frac{1}{az^6} \right)g(z)$$ The residue at $z=0$ is the coefficient of $z^{-1}$ in this product. Since $g$ is analytic at $0$ (its expansion contains only nonnegative powers of $z$), the terms $\frac{z^6}{a}$ and $\frac{2}{a}$ contribute nothing to the coefficient of $z^{-1}$; only $\frac{1}{az^6}$ does, paired with the coefficient of $z^{5}$ in $g$. That coefficient is $$\frac{1}{a-\frac{1}{a}}\left( \frac{-1}{a}\cdot\frac{1}{a^5}+a\cdot a^5 \right)=\frac{a^{6}-a^{-6}}{a-\frac{1}{a}},$$ so the residue of the function at $z=0$ is $$\operatorname{Res}_{z=0}f(z)=\frac{1}{a}\cdot\frac{a^{6}-a^{-6}}{a-\frac{1}{a}}=\frac{a^{6}-a^{-6}}{a^{2}-1}.$$
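For reassurance, here is a SymPy cross-check of the residue for one numeric choice of $a$ (the value $a=1/2$ is an arbitrary choice with $|a|<1$):

```python
import sympy as sp

z = sp.symbols('z')
a = sp.Rational(1, 2)
f = (z**6 + 1)**2 / (a * z**6 * (z - a) * (z - 1/a))
print(sp.residue(f, z, 0))             # 1365/16
print((a**6 - a**-6) / (a**2 - 1))     # 1365/16, the closed form above
```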
Dimension of subspace of all upper triangular matrices If $S$ is the subspace of $M_7(R)$ consisting of all upper triangular matrices, then $dim(S)$ = ? So if I have an upper triangular matrix $$ \begin{bmatrix} a_{11} & a_{12} & . & . & a_{17}\\ . & a_{22} & . & . & a_{27}\\ . & . & . & . & .\\ 0 & . & . & . & a_{77}\\ \end{bmatrix} $$ It looks to me that this matrix can potentially have 7 pivots, therefore it is linearly independent and so it will take all 7 column vectors to span it. But that answer is marked as incorrect when I enter it so what am I missing here?
I guess the answer is $1+2+3+\cdots+7=28$, because the matrix units $E_{ij}$ with $i\le j$ (one for each entry on or above the diagonal) form a basis of $S$.
Can one use Mathematica or Maple online? Is it possible to use some of these algebra packages online? I have some matrices that I would like to know the characteristic polynomial of. Where could I send them to get a nicely factorised answer? My PC is very slow and it would be nice to use someone else's super powerful computer! Any suggestions?
For Mathematica, you can try Wolfram Alpha:

    factor the characteristic polynomial of [[0, 3], [1, 4]]

For Sage, you can try Sagenb.org. There, you can do

    import numpy
    n = numpy.array([[0, 3], [1, 4]], 'complex64')
    m = matrix(n)
    m.characteristic_polynomial().factor()

I'm not an expert on this in Sage, but the numpy appears to be necessary to get the polynomial to factor over the complex numbers. Wolfram Alpha is going to be a lot more user friendly. Notice I just typed in the description of what I wanted and I got it. But, it's not as powerful because it's not a full fledged computer algebra system. If you want to do several steps of calculations where you use a previous step in the next one, it's going to be very difficult or impossible. But, with Sage, which can be used online for free, or you can download it for free, it will be more complicated to use but more powerful as well, overall. So, it depends on what your needs are.
Given a simple graph and its complement, prove that either of them is always connected. I was tasked to prove that when given 2 graphs $G$ and $\bar{G}$ (complement), at least one of them is a always a connected graph. Well, I always post my attempt at solution, but here I'm totally stuck. I tried to do raw algebraic manipulations with # of components, circuit ranks, etc, but to no avail. So I really hope someone could give me a hint on how to approach this problem.
Let $G$ be a simple disconnected graph $\Longrightarrow\ \exists$ at least $2$ vertices, say $u$ and $v$, such that there does not exist a path between $u$ and $v$. $\Longrightarrow$ No vertex of $G$ is adjacent to both $u$ and $v$ (Why?). $u$ and $v$ are obviously not adjacent. $\Longrightarrow$ In $\bar{G}$ every vertex is adjacent to either $u$ or $v$ or both, and $u$ is adjacent to $v$. $\Longrightarrow$ Take any two vertices in $\bar{G}$, say $a$ and $b$; we see that $a$ is adjacent to either $u$ or $v$ or both. The same holds for $b$. Since $u$ and $v$ are adjacent, there must exist a path through either $u$ or $v$ or both that connects $a$ and $b$. $\Longrightarrow$ Hence, $\bar{G}$ is connected. The same holds for $G$ if $\bar{G}$ is disconnected. Thus, at least one of $G$ and $\bar{G}$ must be connected.
Compressed sensing, approximately sparse, Power law An x in $\mathbb{R}^n$ is said to be sparse if many of it's coefficients are zeroes. x is said to be compressible(approximately sparse) if many of its coefficients are close to zero.ie Let $x=(x_1,x_2,....x_n)$. Sort the absolute values of the coefficients in decreasing order with new indices as $|x_{(1)}|\geq|x_{(2)}|\geq,..,|x_{(n)}|$. x is said to be compressible if $|x_{(k)}|\leq C k^{-r},r>0$ for all $k=1,2,..,n$ where $C$ is a fixed constant. Coefficients $x_k$ don't follow any probability distribution. My questions: 1) With this definition, every $x\in\mathbb{R}^n$ satisfies this power law as I can always find a constant(because I have only finite values) $C$ such that above power law is satisfied. But this shouldn't be the case because not every $x$ in $\mathbb{R}^n$ is compressible. 2)How do I choose $C$ so that this definition makes sense? 3) Is this definition even right? What am I missing?
The bound looks just like the terms used in convergence tests for series: comparing with the $p$-series $\sum k^{-r}$ tells you whether the coefficients are summable. As I recall, the harmonic series $\sum 1/k$ doesn't converge, and small values of $r$ decay slowly ($1/\sqrt{k}$, for instance); I wouldn't bet any money that's a sparse system. Are you sure the condition is $r > 0$ and not $r > 1$? I should have made this a comment, sorry...
Getting the angles of a non-right triangle when all lengths are known I have a triangle for which I know the lengths of all the sides, but I need to know the angles. It needs to work with non-right triangles as well. I know it is possible, and I could have easily done this years ago when I was in trig, but it has completely slipped my mind. I'd prefer a solution that I can code into a function, or something that does not require constructing right triangles from it.
As has been mentioned in comments, the formula that you're looking for is the Law of Cosines. The three formulations are: $$c^2 = a^2 + b^2 - 2ab \cos(C)$$ $$b^2 = a^2 + c^2 -2ac \cos(B)$$ $$a^2 = b^2 + c^2 - 2bc \cos(A)$$ You can use this law to find all three angles. Alternatively, once you have one of the angles, you can find the next using the Law of Sines, which states that: $${\sin{A} \over a} = {\sin{B} \over b} = {\sin{C} \over c}$$ And of course, once you have solved for two of the angles, you need only subtract their sum from $180$ degrees to get the measure of the third.
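Since you want something you can code into a function, here is a small Python sketch along these lines (side $a$ is taken opposite angle $A$, and so on; the $3$-$4$-$5$ triangle at the end is just a test case):

```python
import math

def triangle_angles(a, b, c):
    # Law of Cosines for two angles, then the angle sum for the third
    A = math.acos((b*b + c*c - a*a) / (2*b*c))
    B = math.acos((a*a + c*c - b*b) / (2*a*c))
    C = math.pi - A - B
    return math.degrees(A), math.degrees(B), math.degrees(C)

print(triangle_angles(3, 4, 5))   # (36.87..., 53.13..., 90.0)
```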
Binomial coefficient $\sum_{k \leq m} \binom{m-k}{k} (-1)^k$ This is example 3 from Concrete Mathematics (Section 5.2 p.177 in 1995 edition). Although the proof is given in the book (based on a recurrence), I was trying to find an alternative solution, noticing that half of the terms are 0 (denote for simplicity $\frac{m}{2}=s$): $$ \sum_{k=0}^{m} \binom{m-k}{k} (-1)^k = \sum_{k=0}^{s}\binom{s+k}{s-k}(-1)^{s-k} $$ I think it would be good to absorb the $(-1)^k$ term into the binomial coefficient but can't think of much else. Wolfram Alpha doesn't give a closed expression for either this or the one in the book.
For a nice proof, see Arthur T. Benjamin and Jennifer J. Quinn: Proofs that Really Count, Identity 172, pp. 85-86. Sketch of the proof: Clearly, $\binom{m-k}k$ counts the number of ways that you can tile a $1×m$ board with $1×2$ dominoes (denoted by $d$) and $1×1$ squares (denoted by $s$) such that the number of dominoes is $k$ (and the number of squares is $m-2k$). Let $\mathcal E$ denote the set of $m$-tilings with an even number of dominoes, let $\mathcal O$ denote the set of $m$-tilings with an odd number of dominoes. With these notations, we have to calculate $|\mathcal E|-|\mathcal O|$. The answer is $0$ or $\pm1$, depending on the remainder of $m$ modulo $6$ (see sigma.z.1980's answer for the details). As a proof, we will give a(n almost) bijection between $\mathcal E$ and $\mathcal O$. Since I don't want to draw figures, I encode tilings in a sequence (from left to right), for example $sdd$ means a square followed by two dominoes. For a tiling (sequence) $S$, I denote the concatenation $S\dots S$ ($t$ times) by $S^t$, and let $S^0$ be the empty sequence. The size of a domino is $2$, the size of a square is $1$, and the size of a sequence is the sum of the sizes of its elements. With these notations, every tiling $S$ can be uniquely written as $S=(sd)^tR$ for some $t\in\Bbb N_0$, where $R$ is a tiling such that it doesn't start with $sd$. If the size of $R$ is at least $2$, then $R$ either starts with $d$ or with $ss$. If it starts with $d$, we replace this $d$ with $ss$; if it starts with $ss$, we replace this $ss$ with $d$. Note that we defined an involution $\phi$ on the set of $m$-tilings here and $\phi$ changes the parity of the number of dominoes, so we found the required correspondence between $\mathcal E$ and $\mathcal O$. If $m$ has the form $3l+2$, then $\phi(S)$ is defined for all tilings $S$, thus $|\mathcal E|-|\mathcal O|=0$. If $m$ has the form $6l+1$, then $\phi(S)$ is not defined for $S=(sd)^{2l}s$ (which is in $\mathcal E$), and it is defined everywhere else, thus $|\mathcal E|-|\mathcal O|=1$. The rest of the cases are analogous.
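If you want to confirm the pattern numerically, here is a tiny Python check of $|\mathcal E|-|\mathcal O|$ for small $m$, using the obvious recursion "the first tile is either a square or a domino" (an illustration, not part of the proof):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def signed_count(m):
    # sum over all square/domino tilings of a 1 x m board of (-1)^(number of dominoes)
    if m in (0, 1):
        return 1
    return signed_count(m - 1) - signed_count(m - 2)   # a square keeps the sign, a domino flips it

print([signed_count(m) for m in range(1, 13)])
# [1, 0, -1, -1, 0, 1, 1, 0, -1, -1, 0, 1]  -- period 6, always 0 or +/-1
```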
orthonormal basis in $l^{2}$ I need an orthonormal basis in $l^{2}$. One possible choice would be to take as such the sequences $\{1,0,0,0,...\}, \{0,1,0,0,...\}, \{0,0,1,0,...\}$, but I need a basis where only finitely many components of the basis vectors are zero. Does anyone know a way to construct such a basis? One possible vector for such a basis would be $\{1,1/2,1/3,1/4,...\}$ divided by its norm. However, I don't know how to find similar vectors that are orthogonal to this one and to each other.
How about this: consider $w = (1,1/2,1/3,1/4,\dots)$ (or any other vector with no zero entry), and complement it with all the standard basis vectors $e_1$, $e_2$, $\dots$ to a complete set and apply Gram-Schmidt orthonormalization?
Do two injective functions prove bijection? I'm trying to prove $|A| = |B|$, and I have two injective functions $f:A \to B$ and $g:B \to A$. Is this enough proof for a bijection, which would prove $|A| = |B|$? It seems logical that it is, but I can't find a definitive answer on this. All I found is this yahoo answer: One useful tool for proving that two sets admit a bijection between them is a theorem which says that if there is an injective function $f: A \to B$ and an injective function $g: B \to A$ then there is a bijective function $h: A \to B$. The theorem doesn't really tell you how to find $h$, but it does prove that $h$ exists. The theorem has a name, but I forget what it is. But he doesn't name the theorem name and the yahoo answers are often unreliable so I don't dare to base my proof on just this quote.
Yes, this is true; it is called the Cantor–Bernstein–Schroeder theorem.
Complex number: equation I would like a hint to solve this equation: $\forall n\geq 1$ $$\sum_{k=0}^{2^n-1}e^{itk}=\prod_{k=1}^{n}\{1+e^{it2^{k-1}}\} \qquad \forall t \in \mathbb{R}.$$ I went for induction but without too much success; I will keep trying, but if you have a hint... Many thanks.
Fix $n\in \mathbb{N}$ and $t\in \mathbb{R}$. * *If $t=0 \mod 2\pi$, your equality is obviously true, for it reduces to: $$\sum_{k=0}^{2^n -1} 1 =2^n = \prod_{k=1}^n 2$$ (remember that $e^{\imath\ t}$ is $2\pi$-periodic). *Now, assume $t\neq 0 \mod 2\pi$. Evaluate separately: $$\begin{split} (1-e^{\imath\ t})\ \sum_{k=0}^{2^n-1}e^{itk} &= \sum_{k=0}^{2^n-1}e^{itk} - \sum_{k=1}^{2^n}e^{itk}\\ &= 1-e^{\imath\ t\ 2^n} \end{split}$$ and: $$\begin{split} (1-e^{\imath\ t})\ \prod_{k=1}^{n} (1+e^{it2^{k-1}}) &= (1-e^{\imath\ t})\ (1+e^{\imath\ t})\ \prod_{k=2}^{n} (1+e^{it2^{k-1}})\\ &= (1-e^{\imath\ t\ 2})\ (1+e^{\imath\ t\ 2})\ \prod_{k=3}^{n} (1+e^{it2^{k-1}})\\ &= (1-e^{\imath\ t\ 4})\ (1+e^{\imath\ t\ 4})\ \prod_{k=4}^{n} (1+e^{it2^{k-1}})\\ &= \cdots\\ &= (1-e^{\imath\ t\ 2^{n-1}})\ (1+e^{\imath\ t\ 2^{n-1}})\\ &= 1-e^{\imath\ t\ 2^n}\; ; \end{split}$$ therefore you got: $$(1-e^{\imath\ t})\ \sum_{k=0}^{2^n-1}e^{itk} = (1-e^{\imath\ t})\ \prod_{k=1}^{n} (1+e^{it2^{k-1}})\; ,$$ which yields the desidered equality (because $1-e^{\imath\ t}\neq 0$).
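A quick numerical spot-check of the identity for a few values of $n$ and $t$ (Python, purely illustrative):

```python
import cmath

def lhs(n, t):
    return sum(cmath.exp(1j * t * k) for k in range(2**n))

def rhs(n, t):
    prod = 1
    for k in range(1, n + 1):
        prod *= 1 + cmath.exp(1j * t * 2**(k - 1))
    return prod

for n in (1, 2, 3, 4):
    for t in (0.0, 0.7, 2.5):
        assert abs(lhs(n, t) - rhs(n, t)) < 1e-9
print("identity holds numerically for the sampled n, t")
```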
What's the difference between tuples and sequences? Both are ordered collections that can have repeated elements. Is there a difference? Are there other terms that are used for similar concepts, and how are these terms different?
Using a basic set theoretic definition, a tuple (a, b, c, ..) represents an element of the Cartesian product of sets A x B x C ... In a vector space the tuple represents the components of a vector in terms of basis vectors. A sequence on the other hand represents a function (usually of the natural numbers) to some set A, and strictly speaking a sequence is then a subset of N x A. For numeric sequences, It makes sense to consider whether they are convergent. One could add the elements of a numeric sequence to get a series and consider if the corresponding series is convergent. I can't think of any equivalent concept for tuples even when they comprise numeric values.
Double Subsequences Suppose that $\{a_{n}\}$ and $\{b_{n}\}$ are bounded. Prove that $\{a_{n}b_{n}\}$ has a convergent subsequence. In class this is how my professor argued: By the Bolzano-Weierstrass Theorem, there exists a subsequence $\{a_{n_k}\}$ that converges to $a$. Since $\{b_n\}$ is bounded, $\{b_{n_k}\}$ is also bounded. So by the Bolzano-Weierstrass Theorem, there exists a subsequence of $\{b_{n_k}\}$ namely $\{b_{n_{{k_j}}}\}$ such that $\{b_{n_{{k_j}}}\}$ converges to $b$. In particular $\{a_{n_{{k_j}}}\}$ will converge to $a$. And note that $\{a_{n_{{k_j}}}b_{n_{{k_j}}}\}$ is a subsequence of $\{a_{n}b_{n}\}$. So $a_{n_{{k_j}}}b_{n_{{k_j}}} \to ab$. My question is why do we have to use so many subsequences. Is it wrong to argue as follows? $\{a_{n}\},\{ b_{n} \}$ are both bounded, so by the Bolzano-Weierstrass Theorem, both sequences have a convergent subsequence. Namely $a_{n_k} \to a$ and $b_{n_k} \to b$. Then note that $\{a_{n_k}b_{n_k}\}$ is a subsequence of $\{a_{n}b_{n}\}$ which converges to $ab$. And we are done.
The problem with your argument is that by writing $\{a_{n_k}\}$ and $\{b_{n_k}\}$ you are implicitly (and incorrectly) assuming that the convergent subsequences of $\{a_n\}$ and $\{b_n\}$ involve the same terms. In general, there is no reason why this should be the case. We need to introduce all of the subsequences that we do to get around this problem. EDIT: Here's a specific example. Take $a_n$ to be $0$ for $n$ odd, and $(-1)^{n/2}$ for $n$ even. Take $b_n$ to be $(-1)^{(n+1)/2}$ for $n$ odd, and $0$ for $n$ even. These are bounded sequences. Bolzano-Weierstrass guarantees that some subsequence converges (and indeed, there are plenty of convergent subsequences, with different limit points). One possibility would be to take the terms $a_n$ for $n = 0,4,8,16,...$ and $b_n$ for $n = 0,2,4,6,...$ In other words, $a_{n_k} = a_{4k}$, and $b_{n_k} = b_{2k}$. Therefore, your candidate subsequence $\{a_{n_k} b_{n_k}\}$ of $\{a_n b_n\}$ would be the sequence $\{a_{4k} b_{2k}\}$. But this doesn't make any sense, because the terms of the product sequence $\{a_n b_n\}$ are products where the indices on the $a$'s and $b$'s match.
Partial derivative involving trace of a matrix Suppose that I have a symmetric Toeplitz $n\times n$ matrix $$\mathbf{A}=\left[\begin{array}{cccc}a_1&a_2&\cdots& a_n\\a_2&a_1&\cdots&a_{n-1}\\\vdots&\vdots&\ddots&\vdots\\a_n&a_{n-1}&\cdots&a_1\end{array}\right]$$ where $a_i \geq 0$, and a diagonal matrix $$\mathbf{B}=\left[\begin{array}{cccc}b_1&0&\cdots& 0\\0&b_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&b_n\end{array}\right]$$ where $b_i = \frac{c}{\beta_i}$ for some constant $c>0$ such that $\beta_i>0$. Let $$\mathbf{M}=\mathbf{A}(\mathbf{A}+\mathbf{B})^{-1}\mathbf{A}$$ Can one express a partial derivative $\partial_{\beta_i} \operatorname{Tr}[\mathbf{M}]$ in closed form, where $\operatorname{Tr}[\mathbf{M}]$ is the trace operator?
Define some variables for convenience $$\eqalign{ P &= {\rm Diag}(\beta) \cr B &= cP^{-1} \cr b &= {\rm diag}(B) \cr S &= A+B \cr M &= AS^{-1}A \cr }$$ all of which are symmetric matrices, except for $b$ which is a vector. Then the function and its differential can be expressed in terms of the Frobenius (:) product as $$\eqalign{ f &= {\rm tr}(M) \cr &= A^2 : S^{-1} \cr\cr df &= A^2 : dS^{-1} \cr &= -A^2 : S^{-1}\,dS\,S^{-1} \cr &= -S^{-1}A^2S^{-1} : dS \cr &= -S^{-1}A^2S^{-1} : dB \cr &= -S^{-1}A^2S^{-1} : c\,dP^{-1} \cr &= c\,S^{-1}A^2S^{-1} : P^{-1}\,dP\,P^{-1} \cr &= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : dP \cr &= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : {\rm Diag}(d\beta) \cr &= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big)^T d\beta \cr }$$ So the derivative is $$\eqalign{ \frac{\partial f}{\partial\beta} &= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big) \cr &= \frac{1}{c}\,{\rm diag}\big(BS^{-1}A^2S^{-1}B\big) \cr &= \Big(\frac{b\circ b}{c}\Big)\circ{\rm diag}\big(S^{-1}A^2S^{-1}\big) \cr\cr }$$ which uses Hadamard ($\circ$) products in the final expression. This is the same as joriki's result, but with more details.
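Here is a small finite-difference check of the resulting gradient formula (NumPy/SciPy; the Toeplitz matrix, $c$ and $\beta$ below are made-up test values):

```python
import numpy as np
from scipy.linalg import toeplitz

n, c = 4, 2.0
A = toeplitz([1.0, 0.4, 0.2, 0.1])          # a symmetric Toeplitz example (made-up entries)
beta = np.array([0.5, 1.0, 1.5, 2.0])

def f(beta):
    S = A + np.diag(c / beta)               # B = c * P^{-1}
    return np.trace(A @ np.linalg.solve(S, A))

def grad(beta):
    Sinv = np.linalg.inv(A + np.diag(c / beta))
    Pinv = np.diag(1.0 / beta)
    return np.diag(c * Pinv @ Sinv @ A @ A @ Sinv @ Pinv)

eps = 1e-6
fd = np.array([(f(beta + eps * np.eye(n)[i]) - f(beta - eps * np.eye(n)[i])) / (2 * eps)
               for i in range(n)])
print(np.allclose(fd, grad(beta)))          # expected: True
```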
Formula for calculating residue at a simple pole. Suppose $f=P/Q$ is a rational function and suppose $f$ has a simple pole at $a$. Then a formula for calculating the residue of $f$ at $a$ is $$ \text{Res}(f(z),a)=\lim_{z\to a}(z-a)f(z)=\lim_{z\to a}\frac{P(z)}{\frac{Q(z)-Q(a)}{z-a}}=\frac{P(a)}{Q'(a)}. $$ In the second equality, how does the $Q(z)-Q(a)$ appear? I only see that it would equal $\lim_{z\to a}\frac{P(z)}{\frac{Q(z)}{z-a}}$.
Since the pole at $\,a\,$ is simple we have that $$Q(z)=(z-a)H(z)\,\,,\,H(z)\,\,\text{a polynomial}\,\,,\,P(a)\cdot H(a)\neq 0\,$$ Thus, as polynomials are defined and differentiable everywhere: $$Res_{z=a}(f)=\lim_{z\to a}\frac{P(z)}{H(z)}=\frac{P(a)}{H(a)}$$ and, of course, $$Q'(z)=H(z)+(z-a)H'(z)\xrightarrow [z\to a]{}H(a)$$
Theorem formulation "Given ..., then ..." or "For all ..., ..."? When formulating a theorem, which of the following forms would be preferred, and why? Or is there another even better formulation? Are there reasons for or against mixing them in one paper? Formulation 0: If $x\in X$, then (expression involving $x$). Formulation 1: Given $x$ in $X$, then (expression involving $x$). Formulation 2: For all $x$ in $X$ it holds that (expression involving $x$).
I guess I'm a simpleton, I always preferred a theorem stated memorably: Theorem 1. Even numbers are interesting. Compared to... Theorem 2. If $x$ is an even number, then $x$ is interesting. ...which is long-winded; or... Theorem 3. Given an even number $x$, $x$ is interesting. Theorem 3 also has odd consonance (writing "...$x$, $x$ is ..." seems like bad style). Theorem 4. For all even numbers $x$, we have $x$ be interesting. This also suffers from peculiar consonance ("we have $x$ be interesting" is quite alien, despite being proper grammar). Theorem 1 has a quick and succinct enunciation ("Even numbers are interesting"), the proof can begin with a specification "Let $x$ be an even number. Then [proof omitted]." Also we see theorem 1 has additional merit: it avoids needless symbols. There is one warning I should give: each theorem is different. Some theorems can be stated beautifully without symbols (e.g., theorem 1). Others cannot be coherently stated without symbols. There is not "iron law" on how theorems should be formulated; it's a case-by-case problem.
What is the inverse cycle of permutation? Given cyclic permutations, for example, $σ = (123)$, $σ_{2} = (45)$, what are the inverse cycles $σ^{-1}$, $σ_2^{-1}$? Regards.
Every permutation of $n>1$ elements can be expressed as a product of 2-cycles, and every 2-cycle (transposition) is its own inverse. Therefore the inverse of a permutation is just the product of its 2-cycles in reverse order: $(ab)^{-1} = b^{-1}a^{-1}$. In particular, $\sigma^{-1}=(123)^{-1}=(132)$ and $\sigma_2^{-1}=(45)^{-1}=(45)$.
What are the probabilities of getting a "Straight flush" in a poker game? I'm not much of a fan of Poker, but I'd like to study the game. What is the probability of getting a Straight flush in a Poker game, considering these factors?

* Number of players
* How the cards are dealt
* Who is the first player
If you are dealt five cards, there are $4\times10 =40$ possible straight flushes ($4\times 9 =36$ if you exclude royal flushes) out of the ${52 \choose 5}= 2598960$ possible hands. So the probability is $\dfrac{40}{2598960} = \dfrac{1}{64974} \approx 0.00001539\ldots$. The probability will increase if you can have more than five cards to choose from. The probability that somebody will have a straight flush will increase if the number of players increases. It may reduce if you might drop out of the betting before seeing all five cards.
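The same counting in a couple of lines of Python (straight flushes here include royal flushes):

```python
from math import comb

hands = comb(52, 5)
straight_flushes = 4 * 10          # 10 possible high cards (5 through Ace) in each of 4 suits
print(straight_flushes, hands, straight_flushes / hands)   # 40 2598960 ~1.539e-05
```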
Definition of the gamma function I know that the Gamma function with argument $(-\frac{1}{ 2})$ -- in other words $\Gamma(-\frac{1}{2})$ is equal to $-2\pi^{1/2}$. However, the definition of $\Gamma(k)=\int_0^\infty t^{k-1}e^{-t}dt$ but how can $\Gamma(-\frac{1}{2})$ be obtained from the definition? WA says it does not converge... 
The functional equation $\Gamma(z+1)=z\Gamma(z)$ allows you to define $\Gamma(z)$ for all $z$ with real part greater than $-1$, other than $z=0$: just set $\Gamma(z) = \Gamma(z+1)/z$, and the integral definition of $\Gamma(z+1)$ does converge. The value $\Gamma(\frac12)=\sqrt\pi$ is well known and can be derived from Euler's reflection formula. (Repeated use of the functional equation shows that $\Gamma$ can be defined for all complex numbers other than nonpositive integers.)
If $f$ continuous differentiable and $f'(r) < 1,$ then $x'=f(x/t)$ has no other solution tangent at zero to $\phi(t)=rt$ Suppose $f:\mathbb{R}\to\mathbb{R}$ is a continuous differentiable function such that $f(r)=r,$ for some $r.$ Then how to show that If $f'(r) < 1,$ then the problem $$x'=f(x/t)$$ has no other solution tangent at zero to $\phi(t)=rt, t>0$. Tangent here means $$\lim_{t\to 0^{+}}\frac{\psi(t)-\phi(t)}{t}=0$$ I could only prove that $\psi(0^+)=0,$ and $\psi'(0^+)=r.$ The problem was to use the fact that $f'(r) < 1.$
Peter Tamaroff gave a very good hint in comments. Here is what comes out of it. Suppose that $x$ is a solution tangent to $rt$ and not equal to it. Since solution curves do not cross, either (i) $x(t)>rt$ for all $t>0$, or (ii) $x(t)<rt$ for all $t>0$. I will consider (i), the other case being similar. By assumption, $x/t\to r$ as $t\searrow 0$. From $$f(x/t)=r+f'(r)(x/t-r)+o(x/t-r)$$ and $f'(r)<1$ we obtain $$t(x/t)'=f(x/t)-x/t = (f'(r)-1)(x/t-r)+o(x/t-r)$$ which is negative for small $t$. This means that $x/t$ increases as $t\searrow 0$, contradicting $x/t\to r$.
Proof by exhaustion: all positive integral powers of two end in 2, 4, 6 or 8 While learning about various forms of mathematical proofs, my teacher presented an example question suitable for proof by exhaustion: Prove that all $2^n$ end in 2, 4, 6 or 8 ($n\in\mathbb{Z},n>0$) I have made an attempt at proving this, but I cannot complete the proof without making assumptions that reduce the rigour of the answer. All positive integral powers of two can be represented as one of the four cases ($k\in\mathbb{Z},k>0$, same for $y$): * *$2^{4k}=16^k=10y+6$ *$2^{4k+1}=2*16^k=10y+2$ *$2^{4k+2}=4*16^k=10y+4$ *$2^{4k+3}=8*16^k=10y+8$ The methods of proving the four cases above were similar; here is the last one: $8*16^k=8*(10+6)^k$ Using binomial expansion, $8*(10+6)^k=8*\sum\limits_{a=0}^k({k \choose a}10^k6^{k-a})$ All of the sum terms where $a\neq0$ end in zero, as they are a multiple of $10^k$, and therefore, a multiple of 10. The sum term where $a=0$ is $6^k$, because ${k\choose0}=10^0=1$. Therefore, the result of the summation ends in six. Assuming that all positive integral powers of six end in six, and eight multiplied by any number ending in six ends in eight, all powers of two of the form $2^{4k+3}$ end in eight. That conclusion doesn't seem very good because of the two assumptions I make. Can I assume them as true, or do I need to explicitly prove them? If I do need to prove them, how can I do that?
Hint $\ $ mod $10,\:$ the powers of $\:2\:$ repeat in a cycle of length $4,\:$ starting with $2,\:$ since $$\rm 2^{K+4} = 2^K(1+15) = 2^K + 30\cdot2^{K-1}\equiv\: 2^K\ \ (mod\ 10)\quad for\quad K\ge 1$$ Now it suffices to prove by induction that if $\rm\:f:\mathbb N\to \mathbb N = \{1,2,3\ldots\}\:$ then $$\rm f(n+4)\: =\: f(n)\ \ \Rightarrow\ \ f(n)\in \{f(1),\:f(2),\:f(3),\:f(4)\}$$ Informally: once a cyclic recurrence begins to loop, all subsequent values remain in the loop. Similarly, suppose there are integers $\rm\:a,b,\:$ such that $\rm\: f(n+2)\ =\: a\:f(n+1) + b\:f(n)\:$ for all $\rm\:n\ge 1.\:$ Show that $\rm\:f(n)\:$ is divisible by $\rm\:gcd(f(1),f(2))\:$ for $\rm\:n\ge 1$.
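A one-line check of the length-4 cycle of last digits:

```python
print([2**n % 10 for n in range(1, 13)])   # [2, 4, 8, 6, 2, 4, 8, 6, 2, 4, 8, 6]
```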
Proving that a natural number is divisible by $3$ I am trying to show that $n^2 \bmod 3 = 0$ implies $n \bmod 3 = 0$. This is a part a calculus course and I don't know anything about numbers theory. Any ideas how it can be done? Thanks!
Hint $\rm\ (1+3k)^2 = 1 + 3\:(2k+3k^2)$ and $\rm\ \ \ (2+3k)^2 = 1 + 3\:(1+4k+3k^2)$ Said mod $3\!:\ (\pm1)^2 \equiv 1\not\equiv 0\ \ $ (note $\rm\: 2\equiv -1$)
Analysis Problem: Prove $f$ is bounded on $I$ Let $I=[a,b]$ and let $f:I\to {\mathbb R}$ be a (not necessarily continuous) function with the property that for every $x∈I$, the function $f$ is bounded on a neighborhood $V_{d_x}(x)$ of $x$. Prove that $f$ is bounded on $I$. Thus far I have that, For all $n∈I$ there exist $x_n∈[a,b]$ such that $|f(x_n)|>n$. By the Bolzano Weierstrass theorem since $I$ is bounded we have the sequence $X=(x_n)$ is bounded. This implies there is a convergent sub-sequence $X'=(x_{n_r})$ of $X$ that converges to $c$, $c∈[a,b]$. Since $I$ is closed and the element of $X'$ belongs to $I$, it follows from a previous theorem that I proved that $c∈I$. Here is where I get stuck, I want to use that the function $f$ is bounded on a neighborhood $V_{d_x}(x)$ somehow to show that $f$ is bounded on $I$. I'm not sure how to proceed. $f$ is bounded on $I$ means if there exist a d-neighborhood $V_d(c)$ of $c$ and a constant $M>0$ such that we have $|f(x)|\leq M$ for all $x$ in $A ∩ V_d(c)$. I would like to do try a proof by contradiction somehow.
If you cannot use the Heine-Borel theorem, argue via sup. Here's a sketch: Let $A= \{ x \in I : f \text{ is bounded in } [a,x] \}$. Then $A$ is not empty because $a\in A$. Also, $A$ is bounded above because $A\le b$. Prove that if $x\in A$ and $x<b$ then $x+h\in A$ for some $h>0$ using that $f$ is locally bounded at $x$. This means that no $x< b$ is an upper bound for $A$, which implies that $b=\sup A$. Finally, using that $f$ is locally bounded at $b$, argue that $b \in A$, thus proving that $f$ is bounded in $I$. This proof appears in Spivak's Calculus.
The preimage of $(-\infty,a]$ under $f$ is closed for $a \in \mathbb{R}$, then $f$ is semi-continuous. So I've been thinking about this for the last two hours, but I am stuck. Suppose $f:X \to \mathbb{R}$ where $X$ is a topological space. $f$ is said to be semicontinuous if for any $x \in X$ and $\epsilon > 0$, there is a neighborhood of $x$ such that $f(x) - f(x') < \epsilon$ for all $x'$ in the neighborhood of $x$. The question is the if $f^{-1}((-\infty,a])$ is closed for $a \in \mathbb{R}$, then $f$ is lower semi-continuous. I started with choosing an $x \in f^{-1}((-\infty,a])$ and letting $\epsilon > 0$. So far I don't know much characterization of closed sets in a topological space except it is the complements of open sets. Not sure if this is correct, but I approached this problem with the idea of nets. Since $f^{-1}(-\infty,a]$ is closed, then for each $x \in f^{-1}(-\infty,a]$, there's a net $\{x_i\}_{i \in I}$ such that it converges to $x$ (not sure if I'm allowed to do that). Pick any neighborhood of $x$ denote by $N_x$ (which will contain terms from $f^{-1}(-\infty,a]$), which contains an open set which has $x$. Let $f(x) = b$. Then $N_{x'}:=[N_x \backslash f^{-1}(-\infty, b)] \backslash [\mathrm{boundary \ of \ this \ set \ to \ the \ left}]$. So this gives me an open set such that it contains $x$ and such that $f(x) - f(x') < \epsilon$ for all $x' \in N_{x'}$. So $f$ must be semi-continuous. Not sure if there is more to know about closed sets in a topological space, except its complement is open. Any hint on how to think about this is appreciated.
I’m guessing from what you’ve written that your definition of lower semi-continuity is such that you want to start with an arbitrary $x_0\in X$ and $\epsilon>0$ and show that there is an open nbhd $U$ of $x_0$ such that $f(x)>f(x_0)-\epsilon$ for every $x\in U$. You know that for any $a\in\Bbb R$, $f^{-1}[(-\infty,a]]$ is closed, so its complement, which is $f^{-1}[(a,\infty)]$, must be open. Take $a=f(x_0)-\epsilon$, and let $U=f^{-1}[(a,\infty)]$. Does this $U$ meet the requirements?
What does proving the Collatz Conjecture entail? From the get go: i'm not trying to prove the Collatz Conjecture where hundreds of smarter people have failed. I'm just curious. I'm wondering where one would have to start in proving the Collatz Conjecture. That is, based on the nature of the problem, what's the starting point for attempting to prove it? I know that it can be represented in many forms as an equation(that you'd have to recurse over): $$\begin{align*} f(x) &= \left\{ \begin{array}{ll} n/2 &\text{if }n \bmod2=0 \\ 3n+1 &\text{if }n \bmod2=1 \end{array} \right.\\ \strut\\ a_i&= \left\{ \begin{array}{ll} n &\text{if }n =0\\ f(a_i-1)&\text{if }n>0 \end{array} \right.\\ \strut\\ a_i&=\frac{1}{2}a_{i-1} - \frac{1}{4}(5a_{i-1} + 2)((-1)^{a_i-1} - 1) \end{align*}$$ Can you just take the equation and go from there? Other ways I thought of would be attempting to prove for only odd or even numbers, or trying to find an equation that matches the graph of a number vs. its "Collatz length" I'm sure there's other ways; but I'm just trying to understand what, essentially, proving this conjecture would entail and where it would begin.
Proving this conjecture indirectly would entail two things:

* Proving that there is no number $n$ which increases indefinitely
* Proving that there is no number $n$ which loops indefinitely (besides the 4, 2, 1 loop)

If one does these things then you have an answer to the Collatz conjecture (and if you find a case of either of these things you have disproven the Collatz conjecture, obviously). Of course this is just one approach that comes to mind; there are other possible methods which are beyond my own knowledge.
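If you want to experiment, here is a small Python helper that follows a trajectory and counts steps; the cutoff `limit` is just a guard, since no divergent or non-trivially looping trajectory is known:

```python
def collatz_steps(n, limit=10**6):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:          # would only trigger on a counterexample or a huge excursion
            return None
    return steps

print([collatz_steps(n) for n in range(1, 11)])   # [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]
```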
How do I scale my error bars when I scale my data? I am plotting distributions of data with the standard deviation and median of my data. Now when I want to scale my median by another variable, how do I need to modify the standard deviation?
Let $X$ be some real-value random variable and $m$ be its median: $$ \mathsf P\{X\leq m\} = \mathsf P\{X>m\}. $$ Clearly, to scale median by the factor $\lambda> 0$ you just scale $X$ by the same factor since $$ \mathsf P\{\lambda X\leq \lambda m\} = \mathsf P\{\lambda X>\lambda m\}. $$ Note that although for the variance we have$\mathsf V[\lambda X] = \lambda^2 \mathsf V[X]$, the standard deviation scales with the same factor $\lambda$ being the square root of the variance: $$ \sigma[\lambda X] = |\lambda| \sigma[X]. $$
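A quick NumPy illustration that scaling the data by a factor $\lambda$ scales both the median and the standard deviation by $|\lambda|$ (the sample and the factor $3.5$ are arbitrary):

```python
import numpy as np

x = np.random.default_rng(1).normal(10.0, 2.0, 10_000)
lam = 3.5
print(np.median(lam * x), lam * np.median(x))   # equal (up to rounding)
print(np.std(lam * x), lam * np.std(x))         # equal (up to rounding)
```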
Multiplying exponents with variables inside Why is $$(-1)^n(2^{n+2}) = (-2)^{n+2} ?$$ My thinking is that $-1^n \times 2^{n+2}$ should be $-2^{2n+2}$ but clearly this is not the case. Why is the variable essentially ignored, is there a special case of multiplication I'm unaware of?
The exponent rules (for positive integer exponents, at any rate) are: * *$(a^n)^m = a^{nm}$ *$(ab)^n = a^nb^n$ *$a^na^m = a^{n+m}$. Here, $a$ and $b$ are any real numbers, and $n$ and $m$ are positive integers. (The rules are valid in greater generality, but one has to be careful with the values of $a$ and $b$; also, the 'explanation' below is not valid for exponents that are not positive integers.) To see these, remember what the symbols mean: $a^1 = a$, and $a^{n+1}=a^na$; that is, $a^k$ "means" $$a^k = \underbrace{a\times a\times\cdots\times a}_{k\text{ factors}}$$ The following can be proven formally with induction, but informally we have: $$\begin{align*} (a^n)^m &= \underbrace{a^n\times a^n\times\cdots\times a^n}_{m\text{ factors}}\\ &= \underbrace{\underbrace{a\times\cdots\times a}_{n\text{ factors}}\times\cdots \times \underbrace{a\times\cdots\times a}_{n\text{ factors}}}_{m\text{ products}}\\ &= \underbrace{a\times\cdots \times a\times a\times\cdots \times a\times\cdots \times a}_{nm\text{ factors}}\\ &= a^{nm} \end{align*}$$ Similarly, $$\begin{align*} (ab)^n &= \underbrace{(ab)\times (ab)\times\cdots\times (ab)}_{n\text{ factors}}\\ &= \underbrace{(a\times a\times\cdots \times a)}_{n\text{ factors}}\times\underbrace{(b\times b\times\cdots \times b)}_{n\text{ factors}}\\ &= a^nb^n, \end{align*}$$ and $$\begin{align*} a^{n+m} &= \underbrace{a\times a\times\cdots\times a}_{n+m\text{ factors}}\\ &= \underbrace{(a\times a\times \cdots \times a)}_{n\text{ factors}}\times\underbrace{(a\times a\times \cdots \times a)}_{m\text{ factors}}\\ &= a^na^m. \end{align*}$$ You have $$(-1)^n2^{n+2}.$$ Because the bases are different ($-1$ and $2$), you do not apply rule 3 above (which is what you seem to want to do); instead, you want to try to apply rule 2. You can't do that directly because the exponents are different. However, since $(-1)^2 = (-1)(-1) = 1$, we can first do this: $$(-1)^n2^{n+2} = 1(-1)^n2^{n+2} = (-1)^2(-1)^n2^{n+2};$$ then we apply rule 3 to $(-1)^2(-1)^n$ to get $(-1)^{2+n} = (-1)^{n+2}$, and now we have the situation of rule 2, so we get: $$(-1)^n2^{n+2} = (-1)^{n+2}2^{n+2} = \bigl( (-1)2\bigr)^{n+2} = (-2)^{n+2}.$$ (You seem to be trying to apply a weird combination of rules 2 and 3, to get that $a^nb^m = (ab)^{n+m}$; this is almost always false; the exponent rules don't let you do that)
Coercivity vs boundedness of operator The definition of coercivity and boundedness of a linear operator $L$ between two $B$ spaces looks similar: $\lVert Lx\lVert\geq M_1\lVert x\rVert$ and $\lVert Lx\rVert\leq M_2\lVert x\rVert$ for some constants $M_1$ and $M_2$. Thus in order to show the existence of a PDE $Lu=f$ one needs to show that it is coercive. However if my operator $L$ happen to be bounded and $M_2 \leq M_1$? What is the intuition behind those two concepts because they are based on computation of the same quantities and comparing the two?
With boundedness everything is clear, because it is well known that a linear operator $L$ is continuous iff $L$ is bounded. With continuity of $L$ you can solve the equation by sequential approximations. Moreover, you can apply the whole theory developed for continuous functions and, in particular, for continuous linear operators. Since continuity is a very natural condition when solving differential equations, we require $L$ to be bounded. As for coercivity, note that it, in particular, implies injectivity. Injectivity guarantees uniqueness of the solution $u$ of the equation $Lu=f$. But when you are solving such an equation, it is also desirable that the solution depend continuously on the right-hand side $f$. This property depends on $L$, and it is sufficient to require coercivity of $L$. Speaking functional-analytically, a bounded coercive operator is a linear homeomorphism between its domain and its range. Hence there is a "nice" correspondence between the data $f$ and the solution $u$.
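To make the last point concrete in finite dimensions (only an illustrative NumPy sketch with an arbitrary symmetric positive definite matrix, not a statement about any particular PDE): the optimal constants $M_1$ and $M_2$ are the extreme singular values, and coercivity gives $\lVert L^{-1}\rVert \le 1/M_1$, i.e. continuous dependence of the solution $u$ on the data $f$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
L = A @ A.T + np.eye(5)            # symmetric positive definite: bounded and coercive

svals = np.linalg.svd(L, compute_uv=False)
M2, M1 = svals.max(), svals.min()  # boundedness and coercivity constants

x = rng.standard_normal(5)
assert M1 * np.linalg.norm(x) <= np.linalg.norm(L @ x) <= M2 * np.linalg.norm(x) + 1e-10

# Continuous dependence of the solution on the data:
# ||u|| = ||L^{-1} f|| <= (1 / M1) * ||f||.
f = rng.standard_normal(5)
u = np.linalg.solve(L, f)
assert np.linalg.norm(u) <= (1.0 / M1) * np.linalg.norm(f) + 1e-10
print(f"M1 = {M1:.3f}, M2 = {M2:.3f}, 1/M1 = {1.0 / M1:.3f}")
```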
Evaluate $\int\limits_0^{\frac{\pi}{2}} \frac{\sin(2nx)\sin(x)}{\cos(x)}\, dx$ How to evaluate $$ \int\limits_0^{\frac{\pi}{2}} \frac{\sin(2nx)\sin(x)}{\cos(x)}\, dx $$ I don't know how to deal with it.
Method 1. Let $I(n)$ denote the integral. Then by the addition formulas for sine and cosine, $$\begin{align*} I(n+1) + I(n) &= \int_{0}^{\frac{\pi}{2}} \frac{[\sin((2n+2)x) + \sin(2nx)]\sin x}{\cos x} \; dx \\ &= \int_{0}^{\frac{\pi}{2}} 2\sin((2n+1)x) \sin x \; dx \\ &= \int_{0}^{\frac{\pi}{2}} [\cos(2nx) - \cos((2n+2)x)] \; dx \\ &= 0, \end{align*}$$ if $n \geq 1$. Thus we have $I(n+1) = -I(n)$ and, by the double angle formula for sine, $$I(1) = \int_{0}^{\frac{\pi}{2}} \frac{\sin (2x) \sin x}{\cos x} \; dx = \int_{0}^{\frac{\pi}{2}} 2 \sin^2 x \; dx = \frac{\pi}{2}.$$ Therefore we have $$I(n) = (-1)^{n-1} \frac{\pi}{2}.$$ Method 2. By the substitutions $x \mapsto \pi - x$ and $x \mapsto -x$, we find that $$\int_{0}^{\frac{\pi}{2}} \frac{\sin (2nx) \sin x}{\cos x} \; dx = \int_{\frac{\pi}{2}}^{\pi} \frac{\sin (2nx) \sin x}{\cos x} \; dx = \int_{-\frac{\pi}{2}}^{0} \frac{\sin (2nx) \sin x}{\cos x} \; dx.$$ Thus we have $$\begin{align*} I(n) & = \frac{1}{4} \int_{-\pi}^{\pi} \frac{\sin (2nx) \sin x}{\cos x} \; dx \\ & = \frac{1}{4} \int_{|z|=1} \frac{\left( \frac{z^{2n} - z^{-2n}}{2i} \right) \left( \frac{z - z^{-1}}{2i} \right)}{\left( \frac{z + z^{-1}}{2} \right)} \; \frac{dz}{iz} \\ & = \frac{i}{8} \int_{|z|=1} \frac{(z^{4n} - 1) (z^2 - 1)}{z^{2n+1}(z^2 + 1)} \; dz. \end{align*}$$ The last integrand has poles only at $z = 0$. (Note that the singularities at $z = \pm i$ are cancelled since the numerator also contains those factors.) Expanding partially, $$ \begin{align*} \frac{(z^{4n} - 1) (z^2 - 1)}{z^{2n+1}(z^2 + 1)} & = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{z^2 - 1}{z^{2n+1}(z^2 + 1)} \\ & = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{1}{z^{2n+1}} + \frac{2}{z^{2n+1}(z^2 + 1)} \\ & = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{1}{z^{2n+1}} + 2 \sum_{k=0}^{\infty} (-1)^{k} z^{2k-2n-1}. \end{align*}$$ Thus the residue of the integrand at $z = 0$ is $2 (-1)^n$, and therefore $$I(n) = \frac{i}{8} \cdot 2\pi i \cdot 2 (-1)^{n} = (-1)^{n-1}\frac{\pi}{2}.$$ Method 3. (Advanced Calculus) This method is just a sledgehammer method, but it reveals the interesting fact that even a nice integral with a nice value at each integer point can yield a very bizarre answer for non-integral arguments. By the substitution $x \mapsto \frac{\pi}{2} - x$, we have $$I(n) = (-1)^{n-1} \int_{0}^{\frac{\pi}{2}} \frac{\sin 2nx}{\sin x} \cos x \; dx.$$ Now, from a lengthy calculation, we find that for all $w > 0$, $$ \int_{0}^{\frac{\pi}{2}} \frac{\sin 2wx}{\sin x} \cos x \; dx = \frac{\pi}{2} + \left[ \log 2 - \psi_0 (1 + w) + \psi_0 \left( 1 + \frac{w}{2}\right) - \frac{1}{2w} \right] \sin \pi w.$$ Thus plugging in a positive integer $n$, we obtain $$ \int_{0}^{\frac{\pi}{2}} \frac{\sin 2nx}{\sin x} \cos x \; dx = \frac{\pi}{2},$$ which immediately yields the formula for $I(n)$.
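As a numerical sanity check of the closed form (a rough sketch with scipy.integrate.quad; the singularity of the integrand at $x = \pi/2$ is removable, so the quadrature copes with it):

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    f = lambda x: np.sin(2 * n * x) * np.sin(x) / np.cos(x)
    val, _ = quad(f, 0.0, np.pi / 2, limit=200)
    return val

for n in range(1, 8):
    exact = (-1) ** (n - 1) * np.pi / 2
    assert abs(I(n) - exact) < 1e-6, (n, I(n), exact)
print("I(n) = (-1)^(n-1) * pi / 2 confirmed numerically for n = 1..7")
```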
Let $p$ be a prime. Prove that $p$ divides $ab^p−ba^p$ for all integers $a$ and $b$. Let $p$ be a prime. Prove that $p$ divides $ab^p−ba^p$ for all integers $a$ and $b$.
$$ab^p-ba^p = ab(b^{p-1}-a^{p-1})$$ If $p|ab$, then $p|(ab^p-ba^p)$ and also if $p \nmid ab$, then gcd$(p,a)=$gcd$(p,b)=1, \Rightarrow b^{p-1} \equiv a^{p-1} \equiv 1\pmod{p}$ (by Fermat's little theorem). This further implies that $\displaystyle{p|(b^{p-1}-a^{p-1}) \Rightarrow p|(ab^p-ba^p)}$. Q.E.D.
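For a quick machine check of the divisibility over a small range (purely a sanity check, not part of the proof):

```python
# Check p | a*b**p - b*a**p for all a, b in a small window and small primes p.
primes = [2, 3, 5, 7, 11, 13]
for p in primes:
    for a in range(-10, 11):
        for b in range(-10, 11):
            assert (a * b**p - b * a**p) % p == 0, (p, a, b)
print("p divides a*b^p - b*a^p for all tested p, a, b")
```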
The Fundamental Theorem of Algebra and Complex Numbers We had a quiz recently in a linear algebra course, and one of the true/false questions states that The Fundamental Theorem of Algebra asserts that addition, subtraction, multiplication and division for real numbers can be carried over to complex numbers as long as division by zero is avoided. According to our teacher, the above statement is true. When I asked him about the reasoning behind it, he said something about the FTA asserting that the associative, commutative and distributive laws are valid for complex numbers, but I couldn't see this. Can someone explain whether the above statement is true and why? Thanks.
The statement is false. The Fundamental Theorem of Algebra asserts that any non-constant polynomial with complex coefficients has a root in the complex numbers. This does not state anything about the relationship between the complex numbers and the real numbers; and any proof of the FTA will certainly use the associativity and commutativity of addition and multiplication in the complex numbers, as well as multiplication's distributivity over addition, so the FTA can't imply those properties. The statements * *the associative, commutative and distributive laws are valid for complex numbers *addition, subtraction, multiplication and division for real numbers can be carried over to complex numbers as long as division by zero is avoided might be summarized by the statement "the complex numbers form a ring, which is a division algebra over the real numbers".
Sum of two closed sets in $\mathbb R$ is closed? Is there a counterexample for the claim in the question subject, that a sum of two closed sets in $\mathbb R$ is closed? If not, how can we prove it? (By sum of sets $X+Y$ I mean the set of all sums $x+y$ where $x$ is in $X$ and $y$ is in $Y$) Thanks!
It's worth mentioning that if one of the sets is closed and bounded (hence compact) and the other is closed, then the sum is closed. Since closedness can be characterized by sequences in $\Bbb{R}^n$, given a convergent sequence $(x_n)$ in $A+B$ we need to show that its limit still lies in $A+B$. Assume $A$ is compact and $B$ is closed. Write $x_n = a_n + b_n \to x$; compactness implies sequential compactness, hence $a_{n_k} \to a \in A$ for some subsequence. Now $x_{n_k} \to x$, which means the subsequence $b_{n_k} = x_{n_k} - a_{n_k} \to x - a$ converges; since $B$ is closed, $x - a \in B$. Hence $x = a + (x - a) \in A + B$, which means the sum is closed.
How to evaluate one improper integral Please show me the detailed solution to the question: Compute the value of $$\int_{0}^{\infty }\frac{\left( \ln x\right) ^{40021}}{x}dx$$ Thank you a million!
Since this is an exercise on improper integrals, it is natural to replace the upper and lower limits by $R$, $\frac{1}{R}$ respectively and define the integral to be the limit as $R \rightarrow \infty$ . Then write the integral as the sum of the integral from $\frac{1}{R}$ to $1$ and from $1$ to $R$. In the second integral make the usual transformation replacing $x$ by $\frac{1}{x}$. The two integrals cancel.
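Here is a rough numerical check of that cancellation, with a smaller odd exponent standing in for $40021$ (a sketch using scipy.integrate.quad; the exponent and the values of $R$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

k = 3                      # any odd exponent plays the role of 40021
for R in (10.0, 100.0):
    val, err = quad(lambda x: np.log(x) ** k / x, 1.0 / R, R, points=[1.0], limit=200)
    # The antiderivative is log(x)**(k+1) / (k+1); its values at 1/R and R coincide
    # when k is odd, so the symmetric truncation vanishes.
    assert abs(val) < 1e-6, (R, val)
print("symmetric truncations vanish, so the limit as R -> infinity is 0")
```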
What is the difference between Green's Theorem and Stokes Theorem? I don't quite understand the difference between Green's Theorem and Stokes Theorem. I know that Green's Theorem is in $\mathbb{R}^2$ and Stokes Theorem is in $\mathbb{R}^3$, and my lecture notes give Green's Theorem and Stokes Theorem as $$\int \!\! \int_{\Omega} \operatorname{curl} \underline{v} \, \mathrm{d}A = \int_{\partial \Omega} \underline{v} \, \mathrm{d} \underline{r}$$ and $$\int \!\! \int_\Omega \nabla \times \underline{v} \cdot \underline{n} \, \mathrm{d}A = \int_{\partial \Omega} \underline{v} \, \mathrm{d} \underline{r}$$ respectively. So why does being in $\mathbb{R}^3$ require the unit normal $\underline{n}$ to be dotted with the curl? Thanks!
Green's Theorem is a special case of Stokes's Theorem. Since your surface is in the plane and oriented counterclockwise, your normal vector is $n = \hat{k}$, the unit vector pointing straight up. Similarly, if you compute $\nabla \times v$, where $v\, dr = Mdx + Ndy$, you would get $\left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) \hat{k} = \text{curl}\, v \,\hat{k}$ as a result. When you form the dot product $(\nabla \times v) \cdot n$, you get exactly the scalar $\text{curl}\, v$. So even in $\mathbb{R}^2$ you are dotting the curl with the unit normal vector; but because the unit normal is aligned with the curl vector, the dot product is simply the magnitude of the curl. (In other words, in the dot product formula $a \cdot b = |a||b| \cos \theta$ we have $\theta = 0$, $a$ is your curl and $|b| = 1$.)
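Here is a small SymPy sketch of exactly this computation for a general planar field $v = (M(x,y), N(x,y), 0)$: the first two components of $\nabla \times v$ vanish and only the $\hat{k}$ component $N_x - M_y$ survives.

```python
from sympy import symbols, Function, S, diff

x, y, z = symbols('x y z')
M = Function('M')(x, y)        # planar field: components depend only on x and y
N = Function('N')(x, y)
P = S(0)                       # no z-component

curl_x = diff(P, y) - diff(N, z)
curl_y = diff(M, z) - diff(P, x)
curl_z = diff(N, x) - diff(M, y)

print(curl_x, curl_y)          # 0 0
print(curl_z)                  # Derivative(N(x, y), x) - Derivative(M(x, y), y)
```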
Primes modulo which a given quadratic equation has roots Given a quadratic polynomial $ax^2 + bx + c$, with $a$, $b$ and $c$ being integers, is there a characterization of all primes $p$ for which the equation $$ax^2 + bx + c \equiv 0 \pmod p$$ has solutions? I have seen it mentioned that it follows from quadratic reciprocity that the set is precisely the primes in some arithmetic progression, but the statement may require some tweaking. The set of primes modulo which $1 + \lambda = \lambda^2$ has solutions seems to be $$5, 11, 19, 29, 31, 41, 59, 61, 71, 79, 89, 101, 109, 131, 139, 149, 151, 179, 181, 191, 199, \dots$$ which are ($5$ and) the primes that are $1$ or $9$ modulo $10$. (Can the question also be answered for equations of higher degree?)
I never noticed this one before. $$ x^3 - x - 1 \equiv 0 \pmod p $$ has one root for odd primes $p$ with $(-23|p) = -1.$ $$ x^3 - x - 1 \equiv 0 \pmod p $$ has three distinct roots for odd $p$ with $(-23|p) = 1$ and $p = u^2 + 23 v^2 $ in integers. $$ x^3 - x - 1 \equiv 0 \pmod p $$ has no roots for odd $p$ with $(-23|p) = 1$ and $p = 3u^2 + 2 u v + 8 v^2 $ in integers (not necessarily positive integers). Here we go, no roots $\pmod 2,$ but a doubled root and a single $\pmod {23},$ as $$ x^3 - x - 1 \equiv (x - 3)(x-10)^2 \pmod {23}. $$ Strange but true. Easy to confirm by computer for primes up to 1000, say. The example you can see completely proved in books, Ireland and Rosen for example, is $x^3 - 2,$ often with the phrase "the cubic character of 2" and the topic "cubic reciprocity." $2$ is a cube for primes $p=2,3$ and any prime $p \equiv 2 \pmod 3.$ Also, $2$ is a cube for primes $p \equiv 1 \pmod 3$ and $p = x^2 + 27 y^2$ in integers. However, $2$ is not a cube for primes $p \equiv 1 \pmod 3$ and $p = 4x^2 +2 x y + 7 y^2$ in integers. (Gauss) $3$ is a cube for primes $p=2,3$ and any prime $p \equiv 2 \pmod 3.$ Also, $3$ is a cube for primes $p \equiv 1 \pmod 3$ and $p = x^2 + x y + 61 y^2$ in integers. However, $3$ is not a cube for primes $p \equiv 1 \pmod 3$ and $p = 7x^2 +3 x y + 9 y^2$ in integers. (Jacobi)
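Here is one way to run that computer check in Python (the Legendre symbol is computed by Euler's criterion, the roots are counted by brute force, and representability by the two quadratic forms is tested by a naive finite search):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def legendre(a, p):                 # (a|p) via Euler's criterion, p an odd prime
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def roots_count(p):                 # number of roots of x^3 - x - 1 mod p
    return sum((x**3 - x - 1) % p == 0 for x in range(p))

def represented(p, a, b, c):        # is p = a*u^2 + b*u*v + c*v^2 for some integers u, v?
    bound = int(p**0.5) + 2
    return any(a*u*u + b*u*v + c*v*v == p
               for u in range(-bound, bound + 1)
               for v in range(-bound, bound + 1))

for p in primes_up_to(1000):
    if p in (2, 23):                # the two special primes are excluded from the statement
        continue
    r = roots_count(p)
    if legendre(-23, p) == -1:
        assert r == 1, p
    elif represented(p, 1, 0, 23):  # p = u^2 + 23 v^2
        assert r == 3, p
    else:                           # p = 3u^2 + 2uv + 8v^2
        assert represented(p, 3, 2, 8) and r == 0, p
print("claims verified for all odd primes p < 1000, p != 23")
```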
About the exchange of $\sum$ and LM Given $f_i,g_i\in k[x_1,\cdots,x_n],1\leq i\leq s$, fix a monomial order on $k[x_1,\cdots,x_n]$, I was wondering whether there is an effective criterion to judge if this holds,$$\text{LM}(\sum_{i=1}^sf_ig_i)=\sum_{i=1}^s\text{LM}(f_ig_i),$$ where LM( ) is the leading monomial with respect to the fixed monomial order defined as follows, $$\text{LM}(f)=x^{\text{multideg}(f)}.$$ And $\text{multideg}(f)=\text{max}(\alpha\in\mathbb Z_{\geq 0}^{n}:a_{\alpha}\neq0),$ where $f=\sum_{\alpha}a_{\alpha}x^{\alpha}.$
In characteristic 0, assuming all terms are non-zero so that LM is defined, this only works if $s=1$: taking the sum of coefficients on both sides of the equation you obtain the equation $1=s$. So you can never exchange a non-trivial sum and LM. Note that only the image of LM needs to be of characteristic 0: this still holds for any $k$ if you view LM as a map from $k[x_1,\dots,x_n]$ to $\mathbb Z[x_1,\dots,x_n]$.
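A tiny SymPy illustration of this coefficient-count obstruction for $s = 2$ (a sketch only; it assumes SymPy's `LM` with its default monomial order, which does not matter here since only the single monomial $xy$ occurs):

```python
from sympy import symbols, LM, expand

x, y = symbols('x y')

# s = 2 with f1*g1 = f2*g2 = x*y:
f1, g1 = x, y
f2, g2 = y, x

lhs = LM(expand(f1*g1 + f2*g2))   # LM(2*x*y) = x*y
rhs = LM(f1*g1) + LM(f2*g2)       # x*y + x*y = 2*x*y
print(lhs, rhs, lhs == rhs)       # x*y  2*x*y  False
```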
There's a real between any two rationals, a rational between any two reals, but more reals than rationals? The following statements are all true: * *Between any two rational numbers, there is a real number (for example, their average). *Between any two real numbers, there is a rational number (see this proof of that fact, for example). *There are strictly more real numbers than rational numbers. While I accept each of these as true, the third statement seems troubling in light of the first two. It seems like there should be some way to find a bijection between reals and rationals given the first two properties. I understand that in between each pair of rationals there are infinitely many reals (in fact, I think there are $2^{\aleph_0}$ of them), but given that this is true it seems like there should also be in turn a large number of rationals between all of those reals. Is there a good conceptual or mathematical justification for why the third statement is true given that the first two are as well? Thanks! This has been bothering me for quite some time.
Here's an attempt at a moral justification of this fact. One (informal) way of understanding the difference between a rational number and a real number is that a rational number somehow encodes a finite amount of information, whereas an arbitrary real number may encode a (countably) infinite amount of information. The fact that the algebraic numbers (roots of polynomial equations with integer coefficients) are countable suggests that this perspective is not unreasonable. Naturally, when your objects are free to encode an infinite amount of information, you can expect more variety, and that is ultimately what causes the cardinality of $\mathbb{R}$ to exceed that of $\mathbb{N}$, as in Cantor's Diagonal Argument. However, because real numbers encode a countable amount of information, any two distinct real numbers disagree after some finite point, and that is why we may introduce a rational in the middle. All in all, this is seen to boil down to the way we constructed $\mathbb{R}$: as the set of limit points of rational cauchy sequences. This is because a limiting process is built out of "finite" steps, and so we can approximate the immense complexity of an uncountable set with a countable collection of finite objects.
Constructive proof need to know the solutions of the equations Observe the following equations: $2x^2 + 1 = 3^n$ has two solutions $(1, 1) ~\text{and}~ (2, 2)$ $x^2 + 1 = 2 \cdot 5^n$ has two solutions $(3, 1) ~\text{and}~ (7, 2)$ $7x^2 + 11 = 2 \cdot 3^n$ has two solutions $(1, 2) ~\text{and}~ (1169, 14)$ $x^2 + 3 = 4 \cdot 7^n$ has two solutions $(5, 1) ~\text{and}~ (37, 3)$ How can one determine the number of solutions (two, three, four, and so on), which depends on the equation? In particular, how can we prove that each of the above equations has only two solutions and no others? And how can we find the solutions by some particular method or approach?
All four of your equations (and many more) are mentioned in Saradha and Srinivasan, Generalized Lebesgue-Ramanujan-Nagell equations, available at http://www.math.tifr.res.in/~saradha/saradharev.pdf. The solutions are attributed to Bugeaud and Shorey, On the number of solutions of the generalized Ramanujan-Nagell equation, J Reine Angew. Math. 539 (2001) 55-74, MR1863854 (2002k:11041).
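For what it's worth, a finite brute-force search (only a sanity check over a bounded range of $n$, not a proof of completeness, which is what the cited papers provide) recovers exactly the solution pairs listed in the question. The helper `search` below is a hypothetical name; it solves $a x^2 + b = d\, c^n$ for $x$ by integer square root:

```python
from math import isqrt

def search(a, b, c, d, n_max=60):
    """Positive-integer solutions (x, n) of a*x^2 + b = d*c^n with n <= n_max."""
    sols = []
    for n in range(1, n_max + 1):
        rhs = d * c**n - b
        if rhs <= 0 or rhs % a:
            continue
        x = isqrt(rhs // a)
        if x > 0 and a * x * x + b == d * c**n:
            sols.append((x, n))
    return sols

print("2x^2 + 1 = 3^n     ->", search(2, 1, 3, 1))    # [(1, 1), (2, 2)]
print("x^2 + 1 = 2*5^n    ->", search(1, 1, 5, 2))    # [(3, 1), (7, 2)]
print("7x^2 + 11 = 2*3^n  ->", search(7, 11, 3, 2))   # [(1, 2), (1169, 14)]
print("x^2 + 3 = 4*7^n    ->", search(1, 3, 7, 4))    # [(5, 1), (37, 3)]
```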