A simple riddle related to addition of odd numbers
A direct approach: Any given integer is either odd or even. If $n$ is even, then it is equal to $2m$ for some integer $m$; and if $n$ is odd, then it is equal to $2m+1$ for some integer $m$. Thus, adding up nine odd integers looks like $${(2a+1)+(2b+1)+(2c+1)+(2d+1)+(2e+1)\atop +(2f+1)+(2g+1)+(2h+1)+(2i+1)}$$ (the integers $a,b,\ldots,i$ may or may not be the same). Grouping things together, this is equal to $$2(a+b+c+d+e+f+g+h+i+4)+1.$$ Thus, the result is odd. A simpler approach would be to prove these three simple facts: $$\begin{align*} \mathsf{odd}+\mathsf{odd}&=\mathsf{even}\\ \mathsf{odd}+\mathsf{even}&=\mathsf{odd}\\ \mathsf{even}+\mathsf{even}&=\mathsf{even} \end{align*}$$ Thus, starting from $$\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}+\mathsf{odd}$$ and grouping into pairs, $$\mathsf{odd}+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})+(\mathsf{odd}+\mathsf{odd})$$ we use our facts to see that this is $$\mathsf{odd}+\mathsf{even}+\mathsf{even}+\mathsf{even}+\mathsf{even}.$$ Grouping again, $$\mathsf{odd}+(\mathsf{even}+\mathsf{even})+(\mathsf{even}+\mathsf{even})$$ becomes $$\mathsf{odd}+\mathsf{even}+\mathsf{even}$$ becomes $$\mathsf{odd}+(\mathsf{even}+\mathsf{even})=\mathsf{odd}+\mathsf{even}= \mathsf{odd}$$
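A quick numerical sanity check of the parity claim (not a proof, just Python confirming that random sums of nine odd integers are always odd):

```python
import random

def sum_of_nine_odds(rng):
    """Sum of nine randomly chosen (possibly repeated, possibly negative) odd integers."""
    return sum(2 * rng.randint(-100, 100) + 1 for _ in range(9))

rng = random.Random(0)
# In Python, n % 2 == 1 holds for negative odd n as well.
all_odd = all(sum_of_nine_odds(rng) % 2 == 1 for _ in range(1000))
```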
Showing that the Gamma Function has Poles by "staying close" to its integral identity $\Gamma(z) = \int_0^\infty x^{z-1}e^{-x}dx$
The integral representation $$ \Gamma(z)=\int_{0}^{+\infty} x^{z-1} e^{-x}\,dx \tag{1}$$ holds for any $z:\text{Re}(z)>0$. Over this region the functional relation $\Gamma(z+1)=z\,\Gamma(z)$ is a straightforward consequence of integration by parts and Euler's product $$ \Gamma(s)=\lim_{n\to +\infty}\frac{n! n^s}{s(s+1)\cdots(s+n)}\tag{2}$$ is a consequence of the dominated convergence theorem (see my notes, pages 70+). By $(2)$ the $\Gamma$ function is non-vanishing over $\text{Re}(z)>0$ and by $(1)$ it is holomorphic. The functional relation provides an analytic continuation to the complex plane: for instance, since $\Gamma(z)=1+O(z-1)$ in a neighbourhood of $z=1$, the Laurent series of $\Gamma(z)$ centered at the origin is given by $\frac{1}{z}+O(1)$, since $\Gamma(z)=\frac{\Gamma(z+1)}{z}$. In particular the origin is a simple pole with residue $1$ for the $\Gamma$ function. The same principle leads to the fact that the singularities of the $\Gamma$ function are located at $\{0,-1,-2,-3,\ldots\}$ and these points are simple poles with residues $\{\frac{1}{0!},-\frac{1}{1!},\frac{1}{2!},-\frac{1}{3!},\ldots\}$.
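The residues $(-1)^n/n!$ can be checked numerically, since near $z=-n$ one has $(z+n)\Gamma(z)\to\operatorname{Res}_{z=-n}\Gamma$. A small sketch (using Python's `math.gamma`, which accepts negative non-integer arguments):

```python
import math

def gamma_residue(n, eps=1e-7):
    """Estimate Res_{z=-n} Gamma as (z + n) * Gamma(z) evaluated at z = -n + eps."""
    return eps * math.gamma(-n + eps)

estimates = [gamma_residue(n) for n in range(4)]
expected = [(-1) ** n / math.factorial(n) for n in range(4)]  # 1, -1, 1/2, -1/6
```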
If M is a linear subspace of X such that $M^⊥$ = {0}, then M is dense - counterexample
Your counter-example does not work because you are using complex scalars, and the real sequences do not form a linear subspace. Actually there is no such example. If $M$ is not dense, then the Hahn-Banach theorem gives a non-zero continuous linear functional which is $0$ on $M$. By the Riesz representation theorem this means there exists $y \in H$ such that $ \langle x, y \rangle=0$ for all $x \in M$ but $y \neq 0$. But then $y \in M^{\perp}$ and $y \neq 0$.
Fubini-Tonelli proof purely using complex analysis?
Yes, there is a version for Riemann integrals, too. A quite straightforward proof can be found in "The College Mathematics Journal", Vol. 33, No. 2, pp. 126-130; it is available online too. The main part is a clever application of the mean value theorem. Note also that the term Fubini-Tonelli is mostly used for the Lebesgue-integral "version" of the theorem.
Every subgroup of a quotient group is a quotient group itself
You're nearly there. Just use the set-theoretic fact that if $f : S \to T$ is a surjective function, and $B \subseteq T$, then $$ f( f^{-1}(B)) = B. $$ In your case, you obtain that $B = \pi(A) = A/N$.
How to find solution to this ODE with powerseries
Note that by factoring (a difference of squares) $n^2-1=(n-1)(n+1)$, so that the recursion cancels to $$ a_{n+2}=-\frac{n-1}{n+2}a_n=\frac{(n-1)(n-3)}{(n+2)n}a_{n-2}=... $$ For the even subsequence (using $a_2=\tfrac12 a_0$, which is the recursion at $n=0$) one can write $$ a_{2(k+1)}=-\frac{2k(2k-1)}{4(k+1)k}a_{2k}=(-1)^{k}\frac{(2k)(2k-1)\cdots2\cdot1}{4^{k}(k+1)k^2(k-1)^2\cdots2^2\cdot1}a_2 \\=(-1)^k\frac{(2k)!}{2^{2k+1}(k+1)!k!}a_0 =\frac{(-1)^k}{2^{2k+1}(k+1)}\binom{2k}{k}a_0 $$ etc.
Compute the following two cardinals: $\mathfrak{c}^{\mathfrak{c}^\mathfrak{c}}$ and $n^{\aleph _0} \aleph_0 ^n$
Since $(A^B)^C\cong A^{B\times C}$, we have that $$\mathfrak c^{\mathfrak c^{\mathfrak c}}=\left(2^{\aleph_0}\right)^{\left(2^{\aleph_0}\right)^{2^{\aleph_0}}}=2^{\left(\aleph_0\cdot2^{\left(\aleph_0\cdot 2^{\aleph_0}\right)}\right)}=2^{\aleph_0\cdot2^{2^{\aleph_0}}}=2^{2^{2^{\aleph_0}}}$$ Your assertion on $\aleph_0^n$ is essentially correct (though you forgot to mention that $\aleph_0^0=1$). However for $2\le n<\aleph_0$ we have that $n^{\aleph_0}=2^{\aleph_0}>\aleph_0$. Therefore, $$n^{\aleph_0}\cdot\aleph_0^n=\begin{cases}0&\text{if }n=0\\ \aleph_0&\text{if }n=1\\ 2^{\aleph_0}&\text{if }n\ge2\end{cases}$$
Question in munkres topology page 160
Look at the map $$g: [c,d]\to [a,b], x\mapsto \frac{(x-c)(b-a)}{d-c}+a$$ It is an example of the mentioned homeomorphism. So if $f: [a,b]\to X$ is a path from $x$ to $y$ then $$f\circ g : [c,d]\to X$$ is also a path from $x$ to $y$.
Section 4.2 in Loring Tu's Differential Geometry
You're completely correct. This is a slightly unfortunate sentence. Later in the book, Tu will introduce the more general notion of an affine connection $\nabla$ on $TM$. This is a gadget quite similar to $D$, in that it is a map $\nabla:\mathfrak{X}(M)\times \mathfrak{X}(M)\to \mathfrak{X}(M)$ which is written $\nabla(X,Y)=\nabla_X Y$ and "differentiates" $Y$ with respect to $X$. It moreover satisfies the properties of being $C^\infty(M)$-linear in $X$ and $\Bbb{R}$-linear in $Y$. The point of saying all of this is that for a general affine connection $\nabla$, we define the quantity $T(X,Y)=\nabla_X Y-\nabla_Y X-[X,Y]$ to be the torsion of $\nabla$, which is a tensor that eats a pair of vector fields and returns a vector field. The reason we want to introduce this terminology is that a Riemannian manifold $(M,g)$ has a unique torsion-free connection $\nabla$ compatible with the metric $g$. Compatibility here means that for all $X,Y,Z\in \mathfrak{X}(M)$, we have $$ X g(Y,Z)=g(\nabla_XY,Z)+g(Y,\nabla_X Z)\:\:\:\:\text{(a version of the product rule)}. $$ We call this the Levi-Civita connection and it shows us that a Riemannian manifold comes for free with a "canonical" choice of connection. This is in turn useful, because it gives us a notion of parallel transport of vector fields. Given a parametrized curve $\gamma:I\to M$, we say that a vector field $V$ along $\gamma$ is parallel with respect to $\nabla$ if $$ \nabla_{\gamma'(t)}V=0\:\:\:\text{(parallel transport equation)}. $$ If you look at Anonymous's answer here: https://mathoverflow.net/questions/20493/what-is-torsion-in-differential-geometry-intuitively they provide an example of a connection on $\Bbb{R}^3$ which is not the Levi-Civita connection (because it has nonzero torsion) and with respect to which the parallel translation rotates a vector as it "moves" along a curve. This perhaps explains the reason why it is called torsion.
$T(X,Y)\equiv 0$ means (roughly) that there is no twisting in the translation in some sense.
What is $\left(\delta_{ab}\right)^{-1}$?
Whether it makes sense to regard the value of expression as infinite will depend on the context; in some contexts "undefined" might be a better description. It usually makes sense to regard something as infinite when it approaches infinity, e.g. $1/r$ as $r$ approaches $0$ from above; but in this case there's no approach, and in particular there's no reason to prefer calling this $+\infty$ over calling it $-\infty$. This has nothing specific to do with the Kronecker symbol, which is just a convenient notation that captures the fact that this Wigner $3$-$j$ symbol vanishes unless $a=b$.
permutation group S13 question
Hint: Prove that if $\;(i_1\;i_2\;\ldots\;i_k)\;$ is a cycle in $\;S_n\;$ and we take any element $\;\sigma\in S_n\;$, then $$\sigma(i_1\;i_2\;\ldots\;i_k)\sigma^{-1}=(\sigma(i_1)\;\sigma(i_2)\;\ldots\;\sigma(i_k))$$ Further hint: if $\;c_1,\ldots,c_r\;$ are cycles (or, in fact, general permutations), then $$\sigma(c_1\cdot\ldots\cdot c_r)\sigma^{-1}=(\sigma c_1\sigma^{-1})(\sigma c_2\sigma^{-1})\cdot\ldots\cdot(\sigma c_r\sigma^{-1})$$
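The first hint can be checked by brute force in $S_{13}$; here is a small sketch with permutations of $\{0,\ldots,12\}$ encoded as tuples (the encoding and the sample cycle are my own, not from the question):

```python
from random import Random

def cycle_perm(cycle, n):
    """Permutation of range(n), as a tuple p with p[i] = image of i, given by one cycle."""
    p = list(range(n))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return tuple(p)

def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

n = 13
rng = Random(1)
perm = list(range(n))
rng.shuffle(perm)
sigma = tuple(perm)

c = (0, 4, 7, 2)  # an arbitrary cycle (i1 i2 i3 i4) in S_13
lhs = compose(compose(sigma, cycle_perm(c, n)), inverse(sigma))  # sigma c sigma^{-1}
rhs = cycle_perm(tuple(sigma[i] for i in c), n)                  # (sigma(i1) ... sigma(i4))
```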
Proof that a perfect set is uncountable
There is an alternative proof, using what is a consequence of Baire's Theorem: THM Let $(M,d)$ be a complete metric space with no isolated points. Then $(M,d)$ is uncountable. PROOF Assume $M$ is countable, and let $\{x_1,x_2,x_3,\dots\}$ be an enumeration of $M$. Since each singleton is closed, each $M_i=M\smallsetminus \{x_i\}$ is open for each $i$. Moreover, each of them is dense, since each point is an accumulation point of $M$. By Baire's Theorem, $\displaystyle\bigcap_{i\in\Bbb N} M_i$ must be dense, hence nonempty, but it is readily seen it is empty, which is absurd. $\blacktriangle$ COROLLARY Let $(M,d)$ be complete, $P$ a perfect subset of $M$. Then $P$ is uncountable. PROOF $(P,d\mid_P)$ is a complete metric space (being closed in a complete space) with no isolated points. $\blacktriangle$ ADD It might be interesting to note that one can prove Baire's Theorem using a construction completely analogous to the proof suggested in the post. THM Let $(X,d)$ be complete, and let $\langle G_n\rangle$ be a sequence of open dense sets in $X$. Then $G=\displaystyle \bigcap_{n\in\Bbb N}G_n$ is dense. PROOF We can construct a sequence $\langle F_n\rangle$ of closed sets as follows. Let $x\in X$, take $\epsilon >0$, and set $B=B(x,\epsilon)$. Since $G_1$ is dense, there exists $x_1\in B\cap G_1$. Since both $B$ and $G_1$ are open, there exists a ball $B_1=B(x_1,r_1)$ such that $$\overline{B_1}\subseteq B\cap G_1$$ Since $G_2$ is open and dense, there is $x_2\in B_1\cap G_2$ and again an open ball $B_2=B(x_2,r_2)$ such that $\overline{B_2}\subseteq B_1\cap G_2$, but we now ask that $r_2\leq r_1/2$. We then successively take $r_{n+1}<\frac{r_n}2$. Inductively, we see we can construct a sequence of closed bounded sets $F_n=\overline{B_n}$ such that $$F_{n+1}\subseteq F_n\\ \operatorname{diam}F_n\to 0$$ Since $X$ is complete, there exists $\alpha\in \displaystyle\bigcap_{n\in\Bbb N}F_n$.
But, by construction, we see that $\displaystyle\alpha\in \bigcap_{n\in\Bbb N}G_n\cap B(x,\epsilon)$. Thus $G$ is dense in $X$. $\blacktriangle$
Classifying Algebraic Structures as Fields
If you want the quotient to be a ring, then you must take the quotient by an ideal. The ideal generated by $2\pi$ is all of $\Bbb R$, so this quotient is the zero ring, which is not considered to be a field. There is nothing special about $2\pi$ here: every nonzero number generates all of $\Bbb R$ and so makes the quotient zero. If you try to mod out by zero, you will of course get just $\Bbb R$ back, and so that would be a field! If you have convinced yourself the thing you are modding out by is not all of $\Bbb R$, then unfortunately you are thinking of something which isn't an ideal, and so the quotient by this object will not even be a ring (and of course could not be a field, then).
Proof of equivalence theorem using equational calculus
The simplest way of solving this, is to treat it as a simplification problem. And here it seems most appropriate to simplify both sides separately, using the basic laws of logic.$ \newcommand{\calc}{\begin{align} \quad &} \newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}} \newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} } \newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & } \newcommand{\endcalc}{\end{align}} \newcommand{\Ref}[1]{\text{(#1)}} \newcommand{\then}{\Rightarrow} \newcommand{\when}{\Leftarrow} \newcommand{\true}{\text{true}} \newcommand{\false}{\text{false}} $ The left hand side is immediately equivalent to $\;\true\;$, by the law of the excluded middle. The right hand side we simplify as follows: $$\calc \tag{RHS} \Big((p \lor q) \land \lnot \big(\lnot p \land (\lnot q \lor \lnot r)\big)\Big) \;\lor\; (\lnot p \land \lnot q) \lor (\lnot p \land \lnot r) \op\equiv\hint{left part: simplify using DeMorgan} \Big((p \lor q) \land \big(p \lor (q \land r)\big)\Big) \;\lor\; (\lnot p \land \lnot q) \lor (\lnot p \land \lnot r) \op\equiv\hints{left part: extract common $\;p \lor {}\;$;}\hint{right part: extract common $\;\lnot p \land {}\;$} \big(p \lor (q \land q \land r)\big) \;\lor\; \big(\lnot p \land (\lnot q \lor \lnot r)\big) \op\equiv\hints{left part: simplify using idempotence of $\;\land\;$;}\hint{right part: simplify using DeMorgan} \big(p \lor (q \land r)\big) \;\lor\; \lnot \big(p \lor (q \land r)\big) \op\equiv\hint{excluded middle} \true \endcalc$$ So both sides of the original equivalence are $\;\true\;$, making the equivalence true as well.
Logic question involving prime numbers
$n^2+4n+3 = (n+1)(n+3)$; for every positive integer $n$ both factors are greater than $1$, so the number is composite, hence the conclusion.
Function to make values close to zero, zero leaving others as is
The definition of such a function isn't very far from your verbal description. I wouldn't bother formalizing it further, but if you must, you can define $f(u,v) = (g_x(u),g_x(v))$ where, for a threshold $x > 0$, $ g_x(t) = \begin{cases} 0 & |t| < x \\ t & \mbox{otherwise} \end{cases} $
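A minimal sketch in code, under the assumption that "close to zero" means $|t|<x$ for a chosen threshold $x>0$ (the names `f`, `g`, and `x` follow the answer's notation):

```python
def g(t, x):
    """Zero out values within distance x of zero; leave others as is."""
    return 0 if abs(t) < x else t

def f(u, v, x=0.01):
    """Apply the threshold componentwise to a pair."""
    return (g(u, x), g(v, x))
```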
Are there any formulas that include $\pi^n?$
Yes, though I'm assuming you're asking after interesting formulas. Some examples are $$\sum_{n=1}^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}$$ as seen from the Basel Problem. Similarly, $$\sum_{n=1}^{\infty} \frac{1}{n^4}=\frac{\pi^4}{90}$$ and so on. As mentioned by @imranfat, generally $$\sum_{k=1}^{\infty} \frac{1}{k^{2n}}=\zeta (2n)=(-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}$$ where $\zeta$ denotes the Riemann zeta function and $B_{2n}$ the Bernoulli numbers.
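Partial sums make these identities easy to check numerically; a quick sketch (the truncation point `N` is an arbitrary choice):

```python
import math

# Partial sums of sum 1/n^2 and sum 1/n^4 approach pi^2/6 and pi^4/90.
N = 200_000
s2 = sum(1 / n**2 for n in range(1, N + 1))  # tail is about 1/N
s4 = sum(1 / n**4 for n in range(1, N + 1))  # tail is about 1/(3 N^3)
```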
can you integrate the dirac delta over any domain?
An integral of the form \begin{equation} \int_1^\infty\delta(x)f(x)dx \end{equation} is just $0$. Intuitively, the spike of the Dirac delta is not in the domain, so the integral only picks up $0$. If you look closer at the integral you mention: \begin{equation} \int_1^\infty x^{-4}\delta[\sin (\pi x)]\cos(\pi x)\pi dx \end{equation} you will see that the argument in the delta is in fact oscillating between $-1$ and $1$. The value of the integral is the sum of $x^{-4}\cos(\pi x)\pi$ divided by $\pi$ over the points where $\sin (\pi x)=0$, that is, for $x=1,2,3,\ldots$: \begin{equation} \sum_{x=1}^\infty\frac{1}{x^4}\cos(\pi x)=\sum_{x=1}^\infty\frac{(-1)^x}{x^4} \end{equation} which is in fact different from what your source states. In fact I would have: \begin{equation} \int_1^\infty x^{-4}\delta[\sin (\pi x)] \pi dx = \sum_{x=1}^\infty\frac{1}{x^4} \end{equation} but the author justifies the cosine by saying it "comes from the derivative in the functional differential". Perhaps you know what it means, or somebody can correct my mistakes.
How to show all the terms in a sequence are greater than a number?
Since $\{x_n\}$ converges to $3$, given $\epsilon >0,$ there exists $N \in \mathbb{N} $ such that $ |x_n-3|< \epsilon $ for all $ n> N$. So take $\epsilon=1$. Then there exists $N \in \mathbb{N} $ such that $ |x_n-3|< 1 $ for all $ n> N$. $\Rightarrow -1<x_n-3 < 1$ for all $ n> N$. $\Rightarrow 2 < x_n$ for all $ n> N$.
Showing that $(2^a - 1)\bmod (2^b - 1) = 2^{a \; \bmod \; b} - 1 $
If you are working modulo $2^b-1$, you have $$2^b \equiv 1 \pmod{2^b-1}.$$ Suppose that $a=nb+c$. (That is, $a \equiv c \pmod{b}$.) Then you can simplify $$2^a = 2^{nb+c} = (2^b)^n\cdot 2^c \equiv 1^n\cdot 2^c \pmod{2^b-1}.$$ The result you are looking for follows by subtracting 1 from both sides.
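The identity is easy to test exhaustively over small exponents, since Python has arbitrary-precision integers:

```python
# Check (2^a - 1) mod (2^b - 1) == 2^(a mod b) - 1 over a range of exponents.
ok = all(
    (2**a - 1) % (2**b - 1) == 2**(a % b) - 1
    for b in range(1, 20)
    for a in range(0, 100)
)
```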
question about $\|f_n-f\|_p\to 0$ if and only if $\|f_n\|_p\to\|f\|_p$
The $\infty$ norm behaves much differently than the finite $p$ norms. Consider $f,f_n$ defined on $\mathbf R$ as follows: $f(x) = 1$ and $f_n(x) = \chi_{[-n,n]}(x)$. Then $\|f\|_\infty = 1$, $\|f_n\|_\infty = 1$, and $f_n(x) \to f(x)$ everywhere. But $\|f_n - f\|_\infty = 1$ for all $n$.
A card grouping game: find the expected number of groups
We can calculate the expected number of new piles formed, starting with the available run $1\ldots n$, with three slightly different initial conditions: No initial piles (call the expected number of piles $A_n$); An initial pile at $0$ (call the expected number of new piles $B_n$); Initial piles at $0$ and $n+1$ (call the expected number of new piles $C_n$). Clearly $A_1=A_2=1$ and $B_1=C_1=C_2=0$ and $B_2=1/2$. Beyond this, we can calculate each sequence recursively by considering how the first card can be drawn and what available runs are formed thereby. Newly formed available runs will evolve independently. With no initial piles, the result is a newly created pile and either a shortened run with one neighboring pile (if the drawn card is $1$ or $n$) or two runs, each with one neighboring pile: $$ A_n=1 + \frac{2}{n}B_{n-1}+\sum_{k=1}^{n-2}\frac{1}{n}\left(B_{k}+B_{n-1-k}\right)=1+\frac{2}{n}\sum_{k=1}^{n-1}B_k. $$ With an initial pile at $0$, the result is either a shortened run with one neighboring pile (if the drawn card is $1$), or a new pile and a shortened run with two neighboring piles (if the drawn card is $n$), or a new pile and a run with two neighboring piles and a run with one neighboring pile: $$ B_n=\frac{1}{n}B_{n-1}+\sum_{k=1}^{n-2}\frac{1}{n}\left(1+C_{k}+B_{n-1-k}\right)+\frac{1}{n}\left(1+C_{n-1}\right)=1-\frac{1}{n}+\frac{1}{n}\sum_{k=1}^{n-1}B_k+\frac{1}{n}\sum_{k=1}^{n-1}C_k. $$ Finally, with initial piles at $0$ and $n+1$, the result is either a shortened run with two neighboring piles (if the drawn card is $1$ or $n$), or a new pile and two runs with two neighboring piles: $$ C_n=\frac{2}{n}C_{n-1}+\sum_{k=1}^{n-2}\frac{1}{n}\left(1+C_{k}+C_{n-1-k}\right)=1-\frac{2}{n}+\frac{2}{n}\sum_{k=1}^{n-1}C_k. $$ We know that $C_2=0$; let's assume, as an inductive hypothesis, that $C_k=a(k-2)$ for $2 \le k < n$ (for some fixed $n > 2$, and with the value of $a$ to be chosen later).
Then $$ C_n=1-\frac{2}{n}+\frac{2}{n}\sum_{k=3}^{n-1}a(k-2)=1-\frac{2}{n}+\frac{2}{n}\left(\frac{1}{2}a(n-2)(n-3)\right)=\frac{(a(n-3)+1)(n-2)}{n}. $$ The choice $a=1/3$ gives the self-consistent result $C_n=\frac{1}{3}(n-2)$. We conclude by induction that $C_n=\frac{1}{3}(n-2)$ for all $n\ge 2$. In an exactly analogous way, we can prove by induction that $B_n=\frac{1}{2}+\frac{1}{3}(n-2)$ and (OP's result) that $$A_n=1+\frac{1}{3}(n-2)=\frac{1}{3}(n+1) \qquad {\text{for }} n\ge 2.$$
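The three recursions and the closed forms can be cross-checked with exact rational arithmetic; a sketch (the cutoff $n\le 30$ is arbitrary):

```python
from fractions import Fraction

def expected_piles(n_max):
    """Compute A_n, B_n, C_n exactly from the three recursions in the answer."""
    A = {1: Fraction(1), 2: Fraction(1)}
    B = {1: Fraction(0), 2: Fraction(1, 2)}
    C = {1: Fraction(0), 2: Fraction(0)}
    for n in range(3, n_max + 1):
        sB = sum(B[k] for k in range(1, n))
        sC = sum(C[k] for k in range(1, n))
        A[n] = 1 + Fraction(2, n) * sB
        B[n] = 1 - Fraction(1, n) + Fraction(1, n) * sB + Fraction(1, n) * sC
        C[n] = 1 - Fraction(2, n) + Fraction(2, n) * sC
    return A, B, C

A, B, C = expected_piles(30)
```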
Example function $f:(0,1) \to (0,1)$ such that $f^{-1}(y)$ is uncountable for all $y$.
$f(0.x_1x_2x_3x_4x_5\ldots)=0.x_1x_3x_5\ldots$
Left multiplication isometry?
This is almost tautological. "Bi-invariant" means the metric is invariant under both left and right translation. The resulting distance function satisfies, in particular, $$d(gx,gy)=d(x,y)$$ Hence left translation is an isometry (as is right translation).
Is this series convergent? (which could not be solved by comparison theorems)
\begin{align} 0<c_n&=\sum_{k=1}^{2n-1} \frac1{k^a}\left(\frac1{(2n-k)^a}-\frac1{(2n+1-k)^a}\right)\\ &<\sum_{k=1}^{n-1} \frac1{1^a}\left(\frac1{(2n-k)^a}-\frac1{(2n+1-k)^a}\right)+\sum_{k=n}^{2n-1} \frac1{n^a}\left(\frac1{(2n-k)^a}-\frac1{(2n+1-k)^a}\right)\\ &n-k\mapsto m,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 2n-k\mapsto p\\ &=\sum_{m=1}^{n-1} \frac1{1^a}\left(\frac1{(n+m)^a}-\frac1{(n+m+1)^a}\right)+\sum_{p=1}^{n} \frac1{n^a}\left(\frac1{p^a}-\frac1{(p+1)^a}\right)\\ &=\frac{1}{(n+1)^a}-\frac1{(2n)^a}+\frac1{n^{a}}-\frac{1}{n^a(n+1)^a}\\ &<\frac{2}{n^a} \end{align} Therefore $\sum_{n=1}^{\infty}c_n$ converges if $a>1$.
Total Variation Measure Defined by Measurable Function
Let $P = \{ x \in \Omega : f(x) \ge 0 \}$ and $N = \{ x \in \Omega : f(x) < 0 \}$. Then one can show directly from the definition that the pair $(P, N)$ is a Hahn decomposition of the space $\Omega$ with respect to the measure $\omega$. It is then easy to see that $\omega_+$ and $\omega_-$ defined by $$ \omega_+(W) = \int_{P \cap W} f \, d\mu = \int_{P \cap W} |f| \, d\mu, \quad \omega_-(W) = \int_{N \cap W} -f \, d\mu = \int_{N \cap W} |f| \, d\mu $$ form a Jordan decomposition of $\omega$. By the uniqueness of the Jordan decomposition and the definition of the total variation measure, we have $|\omega| = \omega_+ + \omega_-$, so $$ |\omega|(W) = \int_W |f| \, d\mu. $$
Number of pairs $(i,j)$ less than or equal to $m$
If $i>0$, $j\ge0$ and $i+j\le m$, then $(j,-i),(-i,-j),(-j,i)$ are distinct solutions; these four sets of solutions cover the entire solution space save for $(0,0)$. Thus the number of solutions is four times the number of solutions in a quadrant plus one or $4\left(\frac{m(m+1)}2\right)+1=2m^2+2m+1$.
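Reading the problem as counting integer pairs with $|i|+|j|\le m$ (my interpretation of the title), the count $2m^2+2m+1$ is easy to confirm by brute force:

```python
def lattice_count(m):
    """Brute-force count of integer pairs (i, j) with |i| + |j| <= m."""
    return sum(
        1
        for i in range(-m, m + 1)
        for j in range(-m, m + 1)
        if abs(i) + abs(j) <= m
    )
```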
Minimum value function
There are two ways to answer this question: Answer 1: (This was given by Nitin in the comments) We can write $$\min(a,b) = \frac{a+b - |b-a|}{2}$$ and $$\max(a,b) = \frac{a+b + |b-a|}{2}$$ To see where the first expression comes from, observe that if $b \ge a$, then $|b-a| = b-a$, so $\frac{a+b - |b-a|}{2} = \frac{a+b - (b-a)}{2} = a$, and if $b < a$, then $|b-a| = a-b$, so $\frac{a+b - |b-a|}{2} = \frac{a+b - (a-b)}{2} = b$. To get the second expression, notice that $\min(a,b) + \max(a,b) = a+b$, then rearrange and solve for $\max(a,b)$. If we want to take the minimum or maximum over larger (finite) sets of real numbers, we can use the identities $$\min(a_1, a_2, \cdots a_n) = \min(\min(a_1, a_2, \cdots, a_{n-1}), a_n)$$ $$\max(a_1, a_2, \cdots a_n) = \max(\max(a_1, a_2, \cdots, a_{n-1}), a_n)$$ Answer 2: The expressions $\min$ and $\max$ are already well defined functions. Many people get the idea in high school that a function is something that you can write out a 'rule' or 'formula' for, but this isn't correct. The definition of a function is an object which takes an input and returns an output (this can be made precise using set-theoretic ideas). The functions $\min$ and $\max$ take as input pairs of (real) numbers (or, if you like, nonempty finite sets of real numbers), and return the smallest/largest number in the pair (set), respectively.
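A quick sketch of Answer 1 in code, using integer inputs so the identities hold exactly (with floats, rounding could spoil exact equality):

```python
import random

def min2(a, b):
    """min via (a + b - |b - a|) / 2; exact for integers since the numerator is even."""
    return (a + b - abs(b - a)) // 2

def max2(a, b):
    """max via (a + b + |b - a|) / 2."""
    return (a + b + abs(b - a)) // 2

rng = random.Random(42)
pairs = [(rng.randint(-100, 100), rng.randint(-100, 100)) for _ in range(1000)]
ok = all(min2(a, b) == min(a, b) and max2(a, b) == max(a, b) for a, b in pairs)
```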
Finding the Galois group of a quintic
We know that the transitive automorphism (the one of order $5$) and the complex conjugation (the double transposition) are part of the Galois group of $x^5 + 3x^2 + 1$ over $\mathbb{Q}$. Unfortunately this isn't enough for us to distinguish whether the Galois group is $A_5$ or $S_5$, as the two aforementioned automorphisms generate a group isomorphic to $A_5$, but that doesn't mean that there isn't another automorphism of the roots which, combined with these two, generates $S_5$. So as I mentioned, combining all possible products of $(12345)$ and $(23)(45)$ won't help us much, except leaving us with the two candidates $A_5$ and $S_5$. But thankfully Dedekind's Theorem comes in very handy here. Factoring the polynomial in $\mathbb{Z}_{19}$ we have: $$x^5 + 3x^2 + 1 \equiv (x+2)(x+9)(x+13)(x^2+14x+16) \pmod{19}$$ Therefore by Dedekind's Theorem the Galois group of $x^5 + 3x^2 + 1$ over $\mathbb{Q}$ contains a permutation of cycle type $(1,1,1,2)$, which is a transposition. Now using the fact that a subgroup of $S_n$, for $n$ prime, that contains an $n$-cycle and a transposition is $S_n$ itself, we can conclude that the Galois group of $x^5 + 3x^2 + 1$ over $\mathbb{Q}$ is $S_5$.
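The factorization modulo $19$ can be verified without a computer-algebra system by multiplying the factors back together; a small sketch with coefficient lists (lowest degree first):

```python
def polymul_mod(p, q, m):
    """Multiply polynomials given as coefficient lists (lowest degree first), modulo m."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % m
    return r

# Factors of x^5 + 3x^2 + 1 modulo 19: (x+2)(x+9)(x+13)(x^2+14x+16).
factors = [[2, 1], [9, 1], [13, 1], [16, 14, 1]]
prod = [1]
for f in factors:
    prod = polymul_mod(prod, f, 19)
# x^5 + 3x^2 + 1 has coefficient list [1, 0, 3, 0, 0, 1].
```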
Finding the value of this complex integral
Suppose $f$ is an entire function such that $|f(z)|\leq M|z|$ for some real constant $M$. Then, by Cauchy's Integral Formula applied on the circle $\gamma_{\rho}$ of radius $\rho$ centered at $z$, $\displaystyle f^{\prime}(z) = \frac{1!}{2\pi i} \int_{\gamma_{\rho}} \frac{f(\zeta)}{(\zeta - z)^{2}}d\zeta$. So, $\displaystyle |f^{\prime}(z)| \leq \frac{1}{2\pi} \int_{\gamma_{\rho}}\frac{|f(\zeta)|}{|\zeta - z|^{2}}|d\zeta| \leq \frac{1}{2\pi}\int_{\gamma_{\rho}}\frac{M|\zeta|}{\rho^{2}}|d\zeta| \leq \frac{1}{2\pi}\cdot\frac{M(|z|+\rho)}{\rho^{2}}\cdot 2\pi\rho = \frac{M(|z|+\rho)}{\rho}$, since $|\zeta|\leq |z|+\rho$ on $\gamma_{\rho}$ and the circle has length $2\pi\rho$. Letting $\rho\to\infty$ gives $|f^{\prime}(z)|\leq M$ for every $z$. Thus, we have established that $f^{\prime}$ is bounded. It is certainly differentiable, since $f$ being once complex-differentiable implies that $f$ is infinitely differentiable. By Liouville's Theorem, then, $f^{\prime}(z)$ is constant. I.e., $f^{\prime}(z) = a$, for some $a \in \mathbb{C}$. Integrating this, we obtain that $f(z) = \int f^{\prime}(z) dz = \int a dz = az+b$, where $b \in \mathbb{C}$ is a constant of integration. Finally, $|f(0)|\leq M\cdot 0 = 0$ forces $f(0)=0$, so in fact $b=0$ and $f(z)=az$.
Summation related to arctan
Hint: $$\frac{2r}{2+r^2+r^4} = \frac{(r^2+r+1)-(r^2-r+1)}{1+(r^2+r+1)(r^2-r+1)}$$ Now you may proceed!
Verifying product rule
If you just crunch it out by components you can see that it works out. $$\begin{align}\vec\nabla\times(f\vec G)&=\left\langle\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right\rangle\times\left\langle fG_x,fG_y,fG_z\right\rangle\\ &=\left\langle\frac{\partial(fG_z)}{\partial y}-\frac{\partial(fG_y)}{\partial z},\frac{\partial(fG_x)}{\partial z}-\frac{\partial(fG_z)}{\partial x},\frac{\partial(fG_y)}{\partial x}-\frac{\partial(fG_x)}{\partial y}\right\rangle\\ &=\left\langle\frac{\partial f}{\partial y}G_z-\frac{\partial f}{\partial z}G_y,\frac{\partial f}{\partial z}G_x-\frac{\partial f}{\partial x}G_z,\frac{\partial f}{\partial x}G_y-\frac{\partial f}{\partial y}G_x\right\rangle\\ &\quad+f\left\langle\frac{\partial G_z}{\partial y}-\frac{\partial G_y}{\partial z},\frac{\partial G_x}{\partial z}-\frac{\partial G_z}{\partial x},\frac{\partial G_y}{\partial x}-\frac{\partial G_x}{\partial y}\right\rangle\\ &=\left\langle\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\right\rangle\times\left\langle G_x,G_y,G_z\right\rangle+f\left\langle\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right\rangle\times\left\langle G_x,G_y,G_z\right\rangle\\ &=\left(\vec\nabla f\right)\times\vec G+f\vec\nabla\times\vec G\end{align}$$ I'm not sure what product rule notation you are referring to. Can you clarify a bit?
question about bases of a given topology
Yes, there can be several different bases for a given topology. For example, on $\Bbb R$, we can generate the usual topology via open intervals of rational length, open intervals of irrational length, etcetera.
Disproving that if Z-Y is a subset of Z-X then X is a subset of Y
Consider $$Z=\{1,2,3\},\ Y=\{1,2,4\}\ \text{and}\ X=\{1,5\}.$$ Thus, $$Z-Y=\{3\}\ \text{and}\ Z-X=\{2,3\}\Rightarrow Z-Y\subset Z-X.$$ However, $X\not\subset Y$ (and, for that matter, $Y\not\subset X$).
Sketching a set in the complex plane?
$$ 2 < |z-1| < 5 \iff 2 < |z-1|, \text {and } |z-1| < 5$$ As you know $ |z-1| < 5$ is the set of points inside the circle of radius $5$ and centered at $z=1.$ On the other hand, the set $ 2 < |z-1| $ is the set of points outside the circle of radius $2$ and centered at $z=1.$ Thus $$ 2 < |z-1| < 5 $$ is the region between the two circles.
Open Cover of Compact Set Minus a Point on the Boundary
Let $X$ be a compact Hausdorff space and $p\in X$ a point such that $Y:=X-\{p\}$ is not closed. For each $x\in Y$ there exist disjoint open sets $x\in U_x,p\in V_x$. Then the $U_x$ cover $Y$. Assume there is a finite subcover $U_{x_1}\cup\ldots\cup U_{x_n}$. Then this subcover misses the open set $V_{x_1}\cap \ldots\cap V_{x_n}$, which contains $p$ and must be strictly larger than $\{p\}$ because $\{p\}$ is not open in $X$. Hence some point of $Y$ is left uncovered, so $Y$ is not compact.
Is the stable homotopy group of sphere a commutative ring? If not, are there easy examples?
If we take the basic approach of defining the sphere spectrum as a sequence $\mathbb{S}=(S^0,S^1,...)$ with a monoidal structure given by the smash product, this is only associative and commutative up to homotopy. From this point of view, the product $[f]\cdot [g]$ in $\pi_*^S$ is anticommutative in the sense that if we have representatives $f:S^{n+i}\to S^n$ and $g:S^{n+i+j}\to S^{n+i}$ for sufficiently large $n$, then $$[f]\cdot [g] = (-1)^{ij}([g]\cdot [f])$$ due to the interchange of suspension coordinates. As for your question about $\mathbb{S}$ being commutative (this is just additional info, not really relevant to your original question), you can introduce symmetric spectra (a spectrum with an action of the symmetric group on the coordinates). The symmetric sphere spectrum is commutative with a canonical isomorphism $S^n \wedge S^m \to S^{n+m}$. Here, you get a symmetric monoidal structure at the level of the maps before passing onto homotopy classes.
Verify whether or not a subset is a subspace of R3
You are correct. For example, $(1,1,1)$ is a member of $U$, but $(-1)\cdot(1,1,1)$ is not.
Series of Squares Under Triangle
Each square of side length $s$ has as "hat" a right triangle with horizontal leg $s$ and vertical leg $\frac sa$. Within a single instance of "square + hat", the square occupies a share of $\frac{s^2}{s^2+\frac12\cdot s\cdot \frac sa}=\frac{1}{1+\frac1{2a}}=\frac{2a}{2a+1}$. This proportion remains constant, no matter how many squares with hat we consider, hence also in the limit. We conclude that the total area occupied by the squares is $$ \frac{2a}{2a+1}\cdot \frac a2=\frac{a^2}{2a+1}.$$
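One concrete realization (an assumption on my part, chosen to be consistent with the answer's total triangle area $a/2$): take the triangle with horizontal leg $a$ and height $1$, so the hypotenuse has slope $1/a$ and the inscribed squares have sides $s_k=(a/(a+1))^k$. Summing $s_k^2$ then reproduces $a^2/(2a+1)$:

```python
def squares_area(a, terms=10_000):
    """Numerical sum of the squares' areas, with sides s_k = (a/(a+1))^k."""
    r = a / (a + 1)
    return sum((r ** k) ** 2 for k in range(1, terms + 1))
```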
Maximize two-variable linear function
How about using a Lagrange multiplier? Because there are inequality constraints we need to make some modifications, i.e. use the KKT conditions. The fact that the function is linear will make things easier. So after applying Lagrange multipliers the function should look like: $$F(x,y,\lambda,\lambda_1,\lambda_2) = ax + by + \lambda(x) + \lambda_1(y) - \lambda_2(cx + dy - N)$$ Now we take partial derivatives and set them to $0$: $$F_x = a + \lambda - \lambda_2c = 0$$ $$F_y = b + \lambda_1 - \lambda_2d = 0$$ Because the terms we've added must be equal to $0$, we now have to check $2^3 = 8$ distinct cases. You can read more here and apply these ideas depending on the constants $a,b,c,d,N$. Note that you can exclude the terms $\lambda(x) + \lambda_1(y)$, apply a Lagrange multiplier only to the last constraint, and then check whether the obtained solutions are in the bounded region (in this case $x,y\ge0$). But I recommend keeping them just to be doubly sure. If you decide to leave them out and you don't obtain a solution, then the stationary point is on the boundary of the region, and we check the cases $x=0$, $y=0$, and both $x=y=0$.
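Since the objective is linear and the feasible region $\{x,y\ge 0,\ cx+dy\le N\}$ is a triangle (assuming $c,d,N>0$), the maximum is attained at one of the three vertices, which gives a much shorter check than the $8$ KKT cases; a sketch:

```python
def maximize_linear(a, b, c, d, N):
    """Maximize ax + by subject to x >= 0, y >= 0, cx + dy <= N, with c, d, N > 0.

    A linear function on a bounded polygon attains its maximum at a vertex,
    so it suffices to compare the three corner points.
    """
    vertices = [(0, 0), (N / c, 0), (0, N / d)]
    return max(vertices, key=lambda v: a * v[0] + b * v[1])
```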
get largest eigenvalue with mix of real and complex numbers
$\lambda_{max}$ is an eigenvalue with the largest absolute value.
Rank of a matrix with real entries
Let $C:=\begin{pmatrix}1\\2\\3\\4\\5\end{pmatrix}$, $K_1:=\begin{pmatrix}2\\3\\4\\5\end{pmatrix}$ and $L:=\begin{pmatrix}1\\2\\3\\4\end{pmatrix}.$ The initial equation yields $$\tag{1} \forall s, \ \ \ A(L+sK_1)=C.$$ Taking two different values of $s$ in (1) and subtracting gives: $AK_1=0$, which means that $K_1 \in Ker(A)$. Thus, as $K_1 \neq 0$, $dim(Ker(A)) \geq 1$; thus $dim(range(A))=rank(A) \leq 3$ (by the rank-nullity theorem). Let us assume that $rank(A)<3$ and show that it leads to a contradiction. It would mean that $dim(Ker(A)) \geq 2$, by the rank-nullity theorem again. Thus, there would exist a second vector $K_2$ in the kernel of $A$, independent from $K_1$. By definition: $$\tag{2}AK_2=0.$$ Combining (1) and (2), we would have: $$\forall s, \forall t, \ \ \ A(L+sK_1+tK_2)=C.$$ In this way, the general solution to equation $AX=C$ would no longer be $X=L+sK_1$, but a larger set (a 2D or 3D affine subspace instead of an affine line). Contradiction with the above assumption. Thus $rank(A)=3$ (answer b)).
Help to understand a proof of Tao's book (analysis)
The main thing to recall is that Tao is assuming, to get a contradiction, that $x^2<2$ implies $(x+\varepsilon)^2<2$ for all $x$. Set $x=0$; one can show $0^2=0<2$, so $\varepsilon^2=(x+\varepsilon)^2<2$ holds. Now, assume $(n\varepsilon)^2<2$. Then set $x=n\varepsilon$, and note that since $x^2<2$, $(x+\varepsilon)^2<2$. That is, $((n+1)\varepsilon)^2<2$.
False theorem on particular case of Cauchy Sequences -- why is it wrong?
Your assertion about $\epsilon_1 = (m-n+1) \epsilon_0$ depends on $m$ and $n$! That is your mistake.
Find the divergence of $F=-(x,y)/r^2$ using the definition
Assuming you mean $\vec{F} = \frac{-x}{\sqrt{x^2 + y^2}} \hat i + \frac{-y}{\sqrt{x^2 + y^2}} \hat j$, we have div$\vec{F} = \nabla \cdot \vec{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y}$. Then we have div$\vec{F} = \frac{-y^2}{(x^2 + y^2)^{3/2}} + \frac{-x^2}{(x^2 + y^2)^{3/2}} = - \frac{x^2 + y^2}{(x^2 + y^2)^{3/2}} = - \frac{1}{\sqrt{x^2+y^2}} = -\frac{1}{r}$.
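A finite-difference sanity check of $\operatorname{div}\vec F = -1/r$ at a sample point (Python sketch; the step size $h$ is an arbitrary choice):

```python
import math

def F(x, y):
    r = math.hypot(x, y)
    return (-x / r, -y / r)

def divergence(x, y, h=1e-6):
    # central differences for dFx/dx + dFy/dy
    fx1, _ = F(x + h, y)
    fx0, _ = F(x - h, y)
    _, fy1 = F(x, y + h)
    _, fy0 = F(x, y - h)
    return (fx1 - fx0) / (2 * h) + (fy1 - fy0) / (2 * h)

x, y = 1.0, 2.0
assert abs(divergence(x, y) - (-1 / math.hypot(x, y))) < 1e-5
```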
How to find a condition which makes box topology have a better properties?
There is a unique topology on $X=\prod_{j\in J} X_j$ such that (1) for every $A$, $f:A\to X$ is continuous iff each $f_j:A\to X_j$ is continuous — namely, the product topology. Let $A$ be any directed set with a maximal element $1$. Consider the topology on $A$ generated by the sets of the form $(\lambda,1]$ and the singletons $\{\lambda\}$ with $\lambda\neq 1$. Now suppose (1) holds. (1) implies that $(\forall \{\lambda_\eta\in A\}_{\eta\in H})(\lambda_\eta \to \lambda\Rightarrow f(\lambda_\eta)\to f(\lambda))\Leftrightarrow(\forall \{\lambda_\eta\in A\}_{\eta\in H})(\forall j\in J)(\lambda_\eta\to \lambda\Rightarrow f_j(\lambda_\eta)\to f_j(\lambda))$ Fix a directed set $\Lambda$ (which has no maximal element) and a net $\{f_\lambda\}_{\lambda\in \Lambda}$. Equip $A=\Lambda\cup\{1\}$ with the above topology and define $f(\lambda)=f_\lambda$, $f(1)=f$. First assume $f_\lambda \to f$. For every $\{\lambda_\eta\in A\}_{\eta\in H}$, if $\lambda_\eta\to \lambda$, then either $\lambda=1$ and $\lambda_\eta\to 1$, or $\lambda\in \Lambda$ and $\lambda_\eta=\lambda$ for sufficiently large $\eta$. Either way, we have $(\forall \{\lambda_\eta\in A\}_{\eta\in H})(\lambda_\eta \to \lambda\Rightarrow f(\lambda_\eta)\to f(\lambda))$. This shows $f_\lambda(j)=f_j(\lambda)\to f_j(1)=f(j)$ for every $j$. The converse is similar.
Absolute series convergence of linear functional values implies a kind of sequence value series convergence in Banach spaces.
The first step in this kind of question is often to transform the given qualitative information into quantitative information (you will see what I mean by this). One possibility is to use the closed graph theorem. Let $$ \Phi : X^\ast \to \ell^1 , \phi \mapsto ( \phi (x_n))_n. $$ By assumption, this map is well-defined. It is easy to see that it has closed graph, so that it is bounded. Hence, we have shown $$ \sum |\phi(x_n)| \leq C $$ for all $\|\phi\|\leq 1$. Now, let $(a_n)_n\in c_0$ be arbitrary. We want to show that the sequence $y_n = \sum_{i=1}^n a_i x_i$ is Cauchy. Let $\epsilon>0$ and choose $N_0$ with $|a_n|\leq\epsilon$ for $n \geq N_0$. Then for $n>m>N_0$, $$ |\phi(y_n -y_m)| =|\sum_{i=m+1}^n a_i \phi(x_i)| \leq \epsilon \sum_{i=m+1}^n |\phi (x_i)| \leq C\epsilon $$ as soon as $\|\phi\|\leq 1$. As a consequence of the Hahn–Banach theorem, this yields $\|y_n -y_m\|\leq C\epsilon$ as desired.
Why is the Zariski topology coarser than standard topology
It's a good first step to understand the version with $\mathbb{C}$ in place of $\mathbb{C}^2$. Here we're looking at single-variable polynomials over $\mathbb{C}$, and these are relatively simple. In particular, we have a good understanding of $\{u: f(u)=0\}$ for such an $f$: it's either finite or all of $\mathbb{C}$. This means that any subset of $\mathbb{C}$ which is closed in the usual sense but does not satisfy this same "size condition" cannot be Zariski closed. For example: the unit disc $\{x+yi: x^2+y^2\le 1\}$ is closed in the usual sense and infinite but not all of $\mathbb{C}$. OK, now how can we lift this to $\mathbb{C}^2$? Well, there are various ways to do this, but one I quite like is to consider sections. Suppose $f(u,v)$ is a polynomial over $\mathbb{C}$ in two variables. Fix some $z\in\mathbb{C}$; we then get a single-variable polynomial $$g(u)=f(u,z).$$ What can we say about $\{u: g(u)=0\}$ (thinking about the previous part of this answer)? How does that give an example of a subset $A\subseteq\mathbb{C}^2$ which is closed in the usual topology but not in the Zariski topology? HINT: think about $[0,1]$ ...
Inflection point, understanding problem
I'm not a fan of it, but some teachers call the solutions to $f''(x)=0$ PPIs (Possible Points of Inflection). It's an actual inflection point only if the concavity changes at that point. But concavity can also change at a point where the second derivative fails to exist, which is the case in your problem. You can factor one $x$ out of the bottom of the first derivative, and you're left with a non-removable singularity at $x=0$. So the first derivative doesn't exist at $x=0$, and therefore neither does the second derivative. When graphing functions, I have my students collect all the points where $f'$ and $f''$ are zero or discontinuous. Those are the points where increasing can change to decreasing and vice versa, and where concave up can change to concave down. (I don't worry about naming the points so much.) But finally, the answer to your question is that the concavity only may change at one of these points. In your example, it doesn't.
How to iterate through all the possibilities in with this quantifier?
Following the approach in the link, you would write $(\forall y\, L(y,x) \wedge \forall z\, L(z,w)) \implies w=x$
What is the remainder when $P(x)$ is divided by $x^2-2x-35$?
Hint The remainder of $P(x)$ divided by $x^2-2x-35=(x-7)(x+5)$ is of the form $ax+b.$ That is: $$P(x)=(x^2-2x-35)Q(x)+ax+b.$$ Now, using that $P(7)=11$ and $P(-5)=-3$ we can get $a,b.$
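Carrying out the hint with exact arithmetic (a small Python check):

```python
from fractions import Fraction

# P(7) = 7a + b = 11 and P(-5) = -5a + b = -3
a = Fraction(11 - (-3), 7 - (-5))   # subtracting the equations: 12a = 14
b = 11 - 7 * a
assert a == Fraction(7, 6) and b == Fraction(17, 6)
assert -5 * a + b == -3             # the second condition holds too
```

So the remainder is $\frac{7x+17}{6}$.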
Differentiability of a piecewise (salt-pepper like) function
Note that $f(h)$ is either $0$ or $h^{2}$. Hence $\frac {f(h)} h$ is either $0$ or $h$. Of course this tends to $0$ as $ h \to 0$ so $f'(0)=0$.
Existence of a continuous function between $T_2$ and $T_1$
Suppose such an $r$ does exist. Recall that $\pi_1(T_1)\cong\mathbb{Z}\oplus\mathbb{Z}$ and $\pi_1(T_2)\cong\mathbb{Z}$ (because $T_2$ is homotopy equivalent to a circle). From induced maps, we have $$\pi_1(T_1) \stackrel{i_*}{\rightarrow} \pi_1(T_2) \stackrel{r_*}{\rightarrow}\pi_1(T_1)$$ should be equal to $Id_{\pi_1(T_1)}\colon \pi_1(T_1) \rightarrow\pi_1(T_1)$. Given the groups $\pi_1(T_1)$ and $\pi_1(T_2)$ above, can you see how such a factoring of the identity isomorphism is not possible? In particular, consider the kernel of $i_*$.
Proof related to a finite state machine
For each $i\in\Bbb N$, $\overset{i}\equiv$ is an equivalence relation on the set of states of the machine, so its equivalence classes form a partition $\pi_i$ of the state set, and $\overset{i+1}\equiv~=~\overset{i}\equiv$ if and only if $\pi_{i+1}=\pi_i$. If you’re defining these relations in the usual way, $s\overset{i+1}\equiv t$ implies $s\overset{i}\equiv t$, so the partition $\pi_{i+1}$ refines the partition $\pi_i$. That is, each member of $\pi_{i+1}$ is a union of members of $\pi_i$. Thus, if $\pi_{i+1}\ne\pi_i$, then $|\pi_{i+1}|>|\pi_i|$: $\pi_{i+1}$ breaks up the state set into more parts than $\pi_i$ does. But a partition of the state set can have at most $n$ parts, one for each state, so the number of parts can increase at most $n-1$ times, from $1$ to $n$. Thus, it must be the case that $\overset{n+1}\equiv~=~\overset{n}\equiv$.
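The argument can be watched in action with a small refinement loop (Python sketch; the toy machine below is hypothetical — four states, one input symbol):

```python
def refine_until_stable(states, symbols, delta, out):
    """Iteratively refine state-equivalence classes; return (labels, rounds)."""
    label = dict(out)                      # pi_1: split states by output alone
    rounds = 0
    while True:
        # signature: own class plus classes of all successors
        sig = {s: (label[s], tuple(label[delta[s, a]] for a in symbols))
               for s in states}
        relabel = {}
        for s in states:                   # canonical integer label per signature
            relabel.setdefault(sig[s], len(relabel))
        new = {s: relabel[sig[s]] for s in states}
        rounds += 1
        old_part = {frozenset(s for s in states if label[s] == c)
                    for c in set(label.values())}
        new_part = {frozenset(s for s in states if new[s] == c)
                    for c in set(new.values())}
        if new_part == old_part:           # partition stopped refining
            return new, rounds
        label = new

states = [0, 1, 2, 3]
symbols = ['a']
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 3, (3, 'a'): 3}
out = {0: 0, 1: 0, 2: 0, 3: 1}
part, rounds = refine_until_stable(states, symbols, delta, out)
assert rounds <= len(states)
```

Each round either strictly refines the partition or leaves it fixed, so stabilization takes at most $n$ rounds, just as the counting argument says.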
Closed form of a power series solution to a differential equation
You can try to "factor" out terms until they start to look similar enough. In your case you have a suspicion what it could look like, so we can try to give your power series this form: $$\frac{x^{2(n+1)+1}}{(n+1)!} \overset{m=n+1}{=} \frac{x^{2m+1}}{m!} = x \frac{(x^2)^m}{m!}$$ So we get $$\sum_{n=0}^\infty \frac{x^{2(n+1)+1}}{(n+1)!} = x \sum_{m=1}^\infty \frac{(x^2)^m}{m!} = x (\exp(x^2) - 1)$$ Note that this does require practice and you need to be able to recognize the power series. At the same time, keep in mind that most power series do not admit a closed form, at least the ones you will encounter outside of textbooks and problem sets. EDIT: Many similar techniques arise in the study of generating functions. There is a great free textbook by Herbert S. Wilf on this topic: https://www.math.upenn.edu/~wilf/DownldGF.html
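A quick numerical check of the closed form at a sample point:

```python
import math

x = 0.7
partial = sum(x ** (2 * (n + 1) + 1) / math.factorial(n + 1) for n in range(40))
# the truncated series should match x*(exp(x^2) - 1) to machine precision
assert abs(partial - x * (math.exp(x * x) - 1)) < 1e-12
```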
Complex Analysis. How to use cauchy intergral
If I understand your notation, by $[-2,2]$ you mean the circle of radius $2$ centered at $-2$. But the integrand's only pole inside it is then at $z=-1$. So you get $\oint_C f(z)/(z+1)\,dz$, where $f(z)=z/((z-1)(z-3))$. So the integral is equal to $2\pi if(-1)=2\pi i(-1/8)=-\pi i/4$.
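The residue computation can be verified by integrating numerically around the circle $z=-2+2e^{i\theta}$ (Python sketch):

```python
import cmath
import math

def g(z):
    return z / ((z - 1) * (z - 3) * (z + 1))

N = 20000
total = 0
for k in range(N):
    t = 2 * math.pi * k / N
    z = -2 + 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t) * (2 * math.pi / N)   # z'(t) dt
    total += g(z) * dz

assert abs(total - (-1j * math.pi / 4)) < 1e-6        # matches 2*pi*i*f(-1)
```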
Solve the differential equation $\frac{dx}{dt}=-\frac{1}{5}\sqrt{(x+1)}$
You didn't put $+C$ in the right place. You should have $C-\frac{t}{10}=\sqrt{x+1}$. Solve for $C$ here, then isolate $x$.
Number of true statements
You are correct in that the statements contradict one another, so at most one of the statements is true. If none of the statements were true, then all $100$ of them would be false. However, that would make the $100$th statement true, which is a contradiction. Hence the only possibility is that exactly one statement is true and $99$ are false. If this is the case, then the $99$th statement is true and the remaining $99$ are indeed false.
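A brute-force check over all possible counts of false statements confirms this (Python; statement $i$ reads "exactly $i$ of these statements are false"):

```python
consistent = []
for f in range(101):                      # candidate number of false statements
    truths = [f == i for i in range(1, 101)]
    if truths.count(False) == f:          # the candidate must match the actual count
        consistent.append(f)
assert consistent == [99]                 # only statement 99 can be true
```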
Using probability or moment generating functions to find the distribution when given the distribution with random parameters
Your calculation is correct as written. You got a PGF because you started with a PGF. You wrote $$g_X(t) = \operatorname{E}[t^X] = \ldots.$$ So if this is how you began, your result will be a PGF. It isn't because $\psi(g(t)) = g(t)$ or anything like that. Had you attempted to begin by finding $\psi_X(t) = \operatorname{E}[e^{tX}] = \ldots$, you would have either gotten stuck (if the computation became intractable), or ended up with an expression for the MGF of $X$.
$2$-cycles of the map. Suppose that $f(p)=q$ and $f(q)=p$
$2)$ follows from $1)$: if $f(p)=q$ and $f(q)=p$ with $f(x)=rx-x^3$, then $p$ and $q$ are solutions of $f(f(x))=x$. So in the equation $f(x_n)=rx_n-x_n^3$, substitute $rx-x^3$ for $x_n$ and you get $f(f(x))=r(rx-x^3)-(rx-x^3)^3=x$.
What is a sheaf of rings? (question regarding the definition)
Are you comfortable with category theory? A sheaf of rings $\mathcal{F}$ on a topological space $X$ is a functor from the category of open subsets of $X$, where the morphisms are inclusions, to the category of rings, where the objects are rings and the morphisms are ring homomorphisms, satisfying the standard sheaf axioms. In particular, for an open subset $U$ we have that $\mathcal{F}(U)$ is a ring, and if you have $V \hookrightarrow U$, both open subsets of $X$, then the induced morphism $\mathcal{F}(U) \rightarrow \mathcal{F}(V)$ is a ring homomorphism.
Easy way to draw conics with Bezier control points?
The steps are as follows: Draw the tangent lines at the end-points $\mathbf{P}_0$ and $\mathbf{P}_1$ of the hyperbola. Draw a line $L$ that's tangent to the hyperbola and parallel to the chord $\mathbf{P}_0\mathbf{P}_1$. Intersect the line $L$ with the two tangent lines to get the points $\mathbf{P}_r$ and $\mathbf{P}_s$. Compute (or construct) the points $$ \mathbf{P}_a = \mathbf{P}_0 + \tfrac43(\mathbf{P}_r - \mathbf{P}_0) $$ $$ \mathbf{P}_b = \mathbf{P}_1 - \tfrac43(\mathbf{P}_1 - \mathbf{P}_s) $$ In other words, you construct $\mathbf{P}_a$ so that $\mathbf{P}_r$ is three-quarters of the way along $\mathbf{P}_0\mathbf{P}_a$, and similarly for $\mathbf{P}_b$. The points $\mathbf{P}_a$ and $\mathbf{P}_b$ are the Bézier control points for the desired cubic curve. The points $\mathbf{P}_r$ and $\mathbf{P}_s$ are called the Timmer control points of the curve, after their inventor, Henry (Hank) Timmer. The idea was used in internal systems at Douglas Aircraft for many years, and eventually published as "Alternative representation for parametric cubic curves and surfaces." Computer-Aided Design, 12:25-28, 1980. In some ways, the Timmer control points are more intuitive controls than the Bézier points, but they are not well known, and I don't know of any software that uses them today. This construction will give you a fairly decent approximation of any conic section curve. Both the conic and the cubic will be tangent to the line $L$ at the mid-point $\mathbf{P}_m$ of $\mathbf{P}_r\mathbf{P}_s$. If the conic happens to be a parabola, the cubic will replicate it exactly. The technique works for circular arcs, in particular. Please refer to this question to learn more about the basic technique and a slight improvement that's possible (though probably not worth the trouble in your case). The picture below illustrates the construction
"Continuity" of stochastic integral wrt Brownian motion
Hint Let $\varepsilon>0$, $\delta>0$. We have $$\begin{align*} \mathbb{P} &\left( \left| \frac{1}{B_{\varepsilon}} \cdot \int_0^{\varepsilon} H_s \, dB_s - H_0 \right|>\delta \right) \\ &= \mathbb{P} \left( \left| \frac{1}{B_{\varepsilon}} \cdot \int_0^{\varepsilon} (H_s-H_0) \, dB_s \right|>\delta \right) \\ &\leq \mathbb{P} \left( \left| \frac{1}{B_{\varepsilon}} \cdot \int_0^{\varepsilon} (H_s-H_0) \, dB_s \right|>\delta, \left| \frac{\sqrt{\varepsilon}}{B_{\varepsilon}} \right| \leq K \right)+ \mathbb{P} \left( \left| \frac{\sqrt{\varepsilon}}{B_{\varepsilon}} \right| > K \right) \\ &\leq \mathbb{P} \left( \left| \frac{1}{\sqrt{\varepsilon}} \cdot \int_0^{\varepsilon} (H_s-H_0) \, dB_s \right|>\frac{\delta}{K} \right)+ \mathbb{P} \left( \frac{|B_{\varepsilon}|}{\sqrt{\varepsilon}} < \frac{1}{K} \right)\\ &=: I_1+I_2 \end{align*}$$ for any $K>0$. Since $\frac{B_{\varepsilon}}{\sqrt{\varepsilon}} \sim N(0,1)$, we can choose $K>0$ (independent of $\varepsilon$) such that $$I_2 \leq \frac{\varepsilon}{4}$$ For the first term $I_1$ apply Markov's inequality and Itô's isometry to show that it converges to zero as $\varepsilon \to 0$, using the continuity of $H$ at $0$. Remark A detailed proof can be found in Dean Isaacson, Stochastic Integrals and Derivatives (1969).
How can there be rotational degrees and more than one axis in space?
Rotation as such takes place within a 2D subspace. Everything that is orthogonal to that subspace serves as an "axis" to rotate around. So in a 2D space you can only rotate around a point. Within 3D you can rotate around a linear axis. Within 4D you can rotate around an orthogonal 2D subspace. And therefore you can have 2 independent rotations there at the same time, one within the one subspace and one within the orthogonal subspace. This then is what is called a Clifford rotation. Within 5D you can even rotate around a 3D subspace. Etc. --- rk
Formal demonstration of $\left|\thinspace\sin{\frac{1}{x}}\right|\leq 1;\thinspace x≠0$
HINT.- For all $x\ne0$ one has $1/x=y\in\mathbb R$ and it is well known that $|\sin(y)|\le1$
Equality of Hausdorff dimension
No, a single cover won't do. In order to show that $\dim_H (C) \geq \log_3 2$ using the result proved in class you have to show that $H_{\alpha} (C) >0$ for some $\alpha \geq \log_3 2$ and $H_{\alpha} (C) $ is defined as an infimum over all covers. For this you have to consider an arbitrary cover.
Circumcircle of triangle $ABC$
$PA^2+PB^2+PC^2$ is the moment of inertia of $\{A,B,C\}$ with respect to $P$. In an equilateral triangle the centroid $G$ and the circumcenter $O$ are the same point, hence by the parallel axis theorem, for any point $P$ on the circumcircle of an equilateral triangle $ABC$ we have $$ PA^2+PB^2+PC^2 = 3 OP^2 + OA^2+OB^2+OC^2 = 6R^2.$$
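A quick numerical confirmation of $PA^2+PB^2+PC^2=6R^2$ (Python, unit circumradius, a few sample points $P$ on the circumcircle):

```python
import math

R = 1.0
verts = [(R * math.cos(2 * math.pi * k / 3), R * math.sin(2 * math.pi * k / 3))
         for k in range(3)]                    # equilateral triangle on the circle
for t in (0.3, 1.7, 4.0):                      # arbitrary points on the circumcircle
    P = (R * math.cos(t), R * math.sin(t))
    s = sum((P[0] - vx) ** 2 + (P[1] - vy) ** 2 for vx, vy in verts)
    assert abs(s - 6 * R ** 2) < 1e-12
```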
Closure and interior of a connected set $A$
Your proof of the first assertion is incomplete. How do you know that $A\subset B$ or $A\subset C$? Yes, it is true, but you did not justify it. Your counterexample for the second assertion is correct. Another possibility would be $\{(x,y)\in\mathbb R^2\,|\,xy\geqslant0\}$, for instance.
Finding the limit of a quotient involving fractions
Hint: how about simplifying this by multiplying numerator and denominator by $4x$? Or try to simplify $\dfrac{1}{4(4+x)} + \dfrac{1}{x(4+x)}$?
How to prove two regular expressions are identical in mathematical way?
That's an extremely difficult problem. You could convert the regular expressions to DFAs and minimize them, but that's PSPACE-complete... I think procedures for determining equivalence of regular expressions are known to take at least exponential space and time and at most double exponential time, but I could be wrong.
let $y^2=4ax$ be a parabola and $x^2+y^2 +2bx=0$ be a circle. parabola and circle touch each other externally
Consider the equations $$y^2=4ax\tag 1$$ $$x^2+y^2 +2bx=0\tag 2$$ Replace $y^2$ from $(1)$ into $(2)$. This gives $$x^2+(4a+2b)x=0 \tag 3$$ the solutions of which being $$x_1=0 \qquad, \qquad x_2=-2(2a+b)$$ Using these values in $(1)$, we then have $$y_1=0 \qquad, \qquad y_{2,3}=\pm 2 \sqrt{2} \sqrt{-a (2 a+b)}$$
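Plugging in sample constants (hypothetical $a=1$, $b=-3$, chosen so that $-a(2a+b)\ge0$) confirms the intersection point lies on both curves:

```python
import math

a, b = 1.0, -3.0
x2 = -2 * (2 * a + b)                               # = 2
y2 = 2 * math.sqrt(2) * math.sqrt(-a * (2 * a + b))
assert abs(y2 ** 2 - 4 * a * x2) < 1e-12            # on the parabola y^2 = 4ax
assert abs(x2 ** 2 + y2 ** 2 + 2 * b * x2) < 1e-12  # on the circle
```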
On the existence of a certain type of function
$f'(x)\geq f(x)^2\ \forall x\in[0,\infty)\iff\frac{f'(x)}{f(x)^2}\geq1\ \forall x\in[0,\infty)\iff\frac{d}{dx}\left(-\frac{1}{f(x)}\right)\geq1$, which after integrating from $0$ to $x$ gives $-\frac{1}{f(x)}+\frac{1}{f(0)}\geq x$. Now $f$ is strictly increasing on $[0,\infty)$, so the LHS is bounded above. But the RHS is unbounded. Contradiction.
Hitting times of reversible markov chain with known steady state probabilites
The steady state distribution is not enough to determine the mean hitting time $E(T)$ of a given subset $A$. To see this in a simple case, assume there are $2n$ states and consider the simple random walks on the discrete circle $C_{2n}$ and on the complete graph $K_{2n}$. By symmetry, the steady state distribution is uniform in both cases. Choose $A=\{j\}$ and $i$ at distance $n$ from $j$ in $C_{2n}$. Then, $T$ for $C_{2n}$ is distributed like the first hitting time of $\pm n$ by a standard symmetric random walk on the discrete line starting from $0$, hence $E_{C_{2n}}(T)=n^2$. On the other hand, $T$ for $K_{2n}$ is distributed like the time of first success in an i.i.d. sequence of trials with probability of success $1/(2n-1)$ at each trial, hence $E_{K_{2n}}(T)=2n-1$. For every $n\ge2$, $E_{C_{2n}}(T)\ne E_{K_{2n}}(T)$.
Is this matrix always positive definite?
Recall that $\lambda_1>\lambda_n>0$. Since $A^{-1}B$ is similar to $A^{-1/2}BA^{-1/2}$, we deduce that $A^{-1/2}BA^{-1/2}\geq \lambda_nI_n$, that is equivalent to $B-\lambda_n A\geq 0$. In the same way, $\lambda_1A-B\geq 0$. We want to prove that, for every $t> 0$, $Y_t=({\lambda_1}^t-{\lambda_n}^t)B+(\lambda_1{\lambda_n}^t-\lambda_n{\lambda_1}^t)A> 0$. Note that $Y_t={\lambda_1}^t(B-\lambda_nA)+{\lambda_n}^t(\lambda_1A-B)\geq 0$. Assume that $x^TY_tx=0$; then $x^T(B-\lambda_nA)x=x^T(\lambda_1A-B)x=0$ and, therefore, $x\in\ker(B-\lambda_nA)\cap\ker(\lambda_1A-B)$. Thus $A^{-1}Bx=\lambda_nx=\lambda_1x$ and $x=0$; finally $Y_t>0$ as required.
Inequality for Integral of Vector Function
Yes, you can get it out of Cauchy-Schwarz: Let $\vec v = \displaystyle\int_a^b \vec r(t)dt$. If $\vec v=\vec 0$, there's nothing to prove. If not, we have $$\|\vec v\|^2 = \vec v \cdot \int_a^b \vec r(t)dt = \int_a^b \big(\vec v\cdot \vec r(t)\big)dt \le \int_a^b \|\vec v\|\|\vec r(t)\|dt = \|\vec v\| \int_a^b \|\vec r(t)\|dt.$$ Canceling a $\|\vec v\|$, we get what you want.
Rotational Volume of a sphere on the edge of a sphere.
Suggestions for organizing the calculation:
1. Calculate the volume enclosed by a sphere of radius $\rho > 0$ lying to one side of a plane at distance $a$ from the center.
2. The volume sought, the intersection of two solid balls, is a union of two pieces of the type in step 1.
The point is, calculating the integral in step 1 using a notationally-simple lower limit of $a$ (instead of using the crossing point $R - r^{2}/(2R)$) usefully encapsulates the algebraic messiness.
How to solve this recurrence $T(n) = 2T(n/2) + n\log n$
Let us take $n = 2^m$. Then we have the recurrence $$T(2^m) = 2T(2^{m-1}) + 2^m \log_2(2^m) = 2T(2^{m-1}) + m 2^m$$ Calling $T(2^m)$ as $f(m)$, we get that \begin{align} f(m) & = 2 f(m-1) + m 2^m\\ & = 2(2f(m-2) + (m-1)2^{m-1}) + m2^m\\ & = 4f(m-2) + (m-1)2^m + m2^m\\ & = 4(2f(m-3) +(m-2)2^{m-2}) + (m-1)2^m + m2^m\\ & = 8f(m-3) +(m-2)2^m + (m-1)2^m + m2^m\\ \end{align} Proceeding on these lines, we get that \begin{align} f(m) &= 2^m f(0) + 2^m (1+2+3+\cdots+m) = 2^m f(0) + \dfrac{m(m+1)}{2}2^m\\ & = 2^m f(0) + m(m+1)2^{m-1} \end{align} Hence, $T(n) = n T(1) + n \left(\dfrac{\log_2(n) (1+\log_2(n))}{2} \right) = \mathcal{\Theta}(n \log^2 n)$.
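The unrolled formula can be checked against the recurrence directly (Python, taking $T(1)=1$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(m):                          # T(2^m), with T(2^0) = T(1) = 1
    if m == 0:
        return 1
    return 2 * T(m - 1) + m * 2 ** m

# closed form: 2^m * T(1) + m(m+1) * 2^(m-1)
for m in range(1, 15):
    assert T(m) == 2 ** m + m * (m + 1) * 2 ** (m - 1)
```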
How does this game work? (Number game: subtract prime)
It sounds like the key point is that you can subtract $1$, $2$, or $3$, but nothing equivalent to $0$ mod 4. (In other words, the primes are a distraction.) Thus, if $N \not\equiv 1$ (mod 4), Bob can subtract either $1$, $2$, or $3$ from $N$ so that $N' \equiv 1$ (mod 4), where $N'$ is the new $N$. However, if $N \equiv 1$ (mod 4), then Bob is forced to make $N' \not\equiv 1$, and Alice can reply by making the next number equivalent to 1 (mod 4). (I'm assuming that the endgame is such that if you start with $N = 1$, you lose.)
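This can be confirmed by brute force under the assumed rules (moves subtract 1 or any prime; a player left on $N=1$ has no move and loses):

```python
def losing_positions(limit):
    primes = [p for p in range(2, limit) if all(p % q for q in range(2, p))]
    loses = {1: True}                      # N = 1: no legal move, so you lose
    for n in range(2, limit + 1):
        moves = [1] + [p for p in primes if p < n]
        # you win iff some move hands the opponent a losing position
        loses[n] = not any(loses[n - m] for m in moves)
    return [n for n in range(1, limit + 1) if loses[n]]

# Bob loses exactly when he receives a number congruent to 1 mod 4
assert losing_positions(200) == [n for n in range(1, 201) if n % 4 == 1]
```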
Additive group of rational numbers
This answer basically combines what Jack Schmidt has remarked and condenses Lubos Motl's answer. First we notice that $\mathbb{Z}[\frac{1}{p}]$ has a ring structure. This allows us to view $\mathbb{Z}[\frac{1}{p}]$ as a free $\mathbb{Z}[\frac{1}{p}]$-module of rank 1. We show that any endomorphism of $\mathbb{Z}[\frac{1}{p}]$ as an abelian group is in fact an endomorphism of $\mathbb{Z}[\frac{1}{p}]$-modules as well. For a commutative ring with unit, $\mathrm{End}_R(R)=R$ since an endomorphism only depends on the image of $1$. Lastly, $\mathrm{Aut}(\mathbb{Z}[\frac{1}{p}])$ consists of all invertible elements in $\mathrm{End}(\mathbb{Z}[\frac{1}{p}])$. In other words, we look for the units in the ring $\mathbb{Z}[\frac{1}{p}]$. One can easily show that the units are $p^{\mathbb{Z}}\times \langle \pm 1\rangle$.
Can the graph of $x^x$ have a real-valued plot below zero?
Although it doesn't deal directly with $x^x$, here's a WolframAlpha blog post that details how real and complex roots are currently treated by WolframAlpha. As far as $x^x$ goes, the plot does indeed show the real and imaginary parts of the principal value of $x^x$ on the same axis. Another approach is to plot the complex point itself in a plane that's perpendicular to the $x$-axis at the corresponding point. This leads to a spectacular image called the $x^x$-spindle, which was described in a great paper in Mathematics Magazine back in 1996. This looks like so: Using the fact that the complex logarithm is multi-valued, this can be generalized to obtain more threads on the spindle: It sounds like you've seen the claim that $(p/q)^{p/q}$ is defined for $p$ negative and $q$ odd and positive. Thus the graph might look something like so. From the complex perspective, the dots arise as spots where one of the spiral threads punctures the $x$-$z$ plane. Note that the Mathematica code for these images is all provided in this answer over on mathematica.SE.
Find $a$ and $b$ such that $f(x) = ax^3 + bx^2 - 28x + 15$ is divisible by $(x+3)$, etc
The remainder theorem states that the remainder when $P(x)$ is divided by $(x-a)$ is given by $P(a)$. Since $(x+3) = (x-(-3))$ divides $f(x)$, $f(-3) = 0$ which means that $a(-3)^3 + b(-3)^2 - 28(-3) + 15 = 0$ which will give you one equation for $a$ and $b$. Since the remainder when $f(x)$ is divided by $(x-3)$ is $-60$, $f(3) = -60$ meaning that $a(3)^3 + b(3)^2 - 28(3) + 15 = -60$ which will give you a second equation for $a$ and $b$, and you can solve them simultaneously to get the two values.
The Chaos Game and Benford's law: will I notice the bias?
Yes, you'll notice it! https://www.openprocessing.org/sketch/445696 Source code for posterity:

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight);
  background(255);
  stroke(0);
  speed = 100;
  // Set up the triangle
  radius = min(windowWidth, windowHeight) / 2;
  cx = windowWidth / 2;
  cy = windowHeight / 2;
  v1x = cx + radius * cos(0);
  v1y = cy + radius * sin(0);
  v2x = cx + radius * cos(2 * PI / 3);
  v2y = cy + radius * sin(2 * PI / 3);
  v3x = cx + radius * cos(4 * PI / 3);
  v3y = cy + radius * sin(4 * PI / 3);
  // Probabilities
  // https://en.wikipedia.org/wiki/Benford's_law#Mathematical_statement
  p123 = (log(1 + 1) + log(1 + 1/2) + log(1 + 1/3)) / log(10);
  p123456 = p123 + (log(1 + 1/4) + log(1 + 1/5) + log(1 + 1/6)) / log(10);
  // Current point
  px = v1x;
  py = v1y;
}

function draw() {
  for (i = 0; i < speed; ++i) {
    r = random(1);
    if (r < p123) {
      px = (px + v1x) / 2;
      py = (py + v1y) / 2;
    } else if (r < p123456) {
      px = (px + v2x) / 2;
      py = (py + v2y) / 2;
    } else {
      px = (px + v3x) / 2;
      py = (py + v3y) / 2;
    }
    point(px, py);
  }
}
```
Module of differentials of a finitely generated $R$-algebra
If $A:=R[x_1,..,x_r]$ is a polynomial ring over $R$ in $r$ variables, there is an isomorphism of $A$-modules $\Omega_{A/R} \cong \oplus_i Adx_i$. The sum is a direct sum, hence $\Omega_{A/R}$ is a free $A$-module of rank $r$ on the elements $dx_i$. It follows that $S\otimes_R \Omega_{A/R} \cong \oplus_i (S\otimes_R A)dx_i$, and since $S\otimes_R A$ and $S$ are non-isomorphic rings, it seems your formula is incorrect. If the elements $f_1,..,f_l$ generate your ideal $I$ and you want a formula for the $S$-module $\Omega_{S/R}$, the following holds: Formula 1. $\Omega_{S/R} \cong \oplus_i Sdx_i/C,$ where $C$ is the $S$-submodule generated by the differentials $df_i$ for all $i=1,..,l$. You may find details on this construction in a commutative algebra book where they discuss the module of derivations and differentials. There are "fundamental exact sequences" involving the module of differentials, and these sequences can be used to prove such results. Matsumura's book "Commutative Ring Theory" studies derivations and differentials and gives explicit proofs of the properties of this construction, including Formula 1.
How to build a graph with these properties?
If $\kappa'=1$, take two disjoint copies of $K_{\delta+2}$ (the complete graph on $\delta+2$ vertices), remove an edge $u_1v_1$ in the first one, an edge $x_1y_1$ in the second one, then add edge $u_1x_1$. Removing this edge or vertex $u_1$ or $x_1$ disconnects the graph, and each vertex has degree $\delta+1$ except vertices $v_1$ and $y_1$, where the degree is $\delta$. If $\delta=1$ this construction results in $P_6$, which shows us that the construction is not optimal, because $P_3$ would suffice. If $\kappa=1$, add another $\kappa'-1$ disjoint copies of the previously constructed graph and add edges $u_ix_j$ for all $i\neq j$, $1\leq i,j\leq \kappa'$. Then we need to remove $\kappa'$ edges at $u_i$ or $x_i$ or any of these vertices to disconnect the graph. The construction gets trickier for $\kappa>1$. First let's assume $\kappa=\kappa'$. Take again two disjoint copies of $K_{\delta+2}$. Select a (near-)perfect matching in each of these complete graphs. Since both complete graphs are of the same size, we have a bijection between the edges of the matchings. Select $\kappa$ (if $\delta$ is even) or $\kappa-1$ (if $\delta$ is odd) of these edge pairs. We change the adjacencies of each edge pair like before, i.e. if we have an edge pair ($uv$,$xy$) we remove these edges and add $ux$, resulting in a(n unchanged) degree of $\delta+1$ for $u$ and $x$ and a degree of $\delta$ for $v$ and $y$. If $\delta$ is odd, add an edge between the two unmatched vertices in each copy. This results in the graph with the desired properties. If $\kappa < \kappa'$, we extend our construction similarly as we did before. Take another $\kappa'-\kappa$ disjoint copies of the graph we just constructed. In each one and the original one, select one of the $\kappa$ edges we constructed in the previous step, which gives us $n:=\kappa'-\kappa+1$ edges $u_ix_i$. Like before, add all edges $u_ix_j$ with $i\neq j$ and $1\leq i,j\leq n$, which leads to the graph with the desired properties.
The idea of using perfect matchings in complete graphs I got from Akiyama in "Regular graphs containing a given graph" (1983).
Show that there exists a satisfactory assignment for the unstandard language of arithmetic $\{\textbf{0}, ', <_1\}$
It looks like you are trying to interpret everything in $\omega$. The idea would be to make sure the order works before anything else. So let the order be defined as is. Now the order looks like: $1<_{1}3<_{1}5<_{1}7<_{1}\cdots<_{1}2m-1<_{1}2m+1<_{1}\cdots<_{1}0<_{1}2<_{1}4<_{1}6<_{1}\cdots<_{1}2n<_{1}2n+2<_{1}\cdots$ Interpret $\bf{0}$ as $1$. Now interpret $m'$ by $m+2$. Now everything you want is satisfied!
Definition of Bounded Variation Function with vectorial arguments
No, the definition above only works for $f: [a,b] \rightarrow \mathbb{R}^n$, in other words, where the domain is a real interval. Because what is the supremum over? On the real line, the supremum over partitions of the interval is the appropriate supremum, but over $\mathbb{R}^m$, the notion of "partition" doesn't quite work (or, if it can be generalized in a way I'm unaware of, it must be rather complicated). To properly generalize to vector arguments, one needs to instead characterize the variation norm by weak derivatives. See Wikipedia for details.
How to parametize an intersection of sphere and a plane
Yes, it is an ellipse. But you can say more: it is a circle. Take a vector $v$ of the plane whose norm is $1$. Now, let$$w=v\times\left(\frac A{\sqrt{A^2+B^2+C^2}},\frac B{\sqrt{A^2+B^2+C^2}},\frac C{\sqrt{A^2+B^2+C^2}}\right).$$Then $w$ also belongs to the plane and its norm is also equal to $1$. So, both $v$ and $w$ belong to the unit sphere. Now, take the parametrization$$\begin{array}{ccc}[0,2\pi]&\longrightarrow&\Bbb R^3\\\theta&\mapsto&\cos(\theta)v+\sin(\theta)w.\end{array}$$
Is this condition sufficient to ensure the locally convexity of a function at a given point?
Take the function $$ f(x) = \min(\ (x-1)^2,\ (x+1)^2 \ ). $$ It is continuous, twice continuously differentiable outside zero, and its second derivative is positive wherever it exists, yet $f$ is not convex.
Two Proofs for Open Sets and Metric Subspaces
Both proofs are correct. The second proof looks more complicated because it strives to prove openness of sets directly from the definition (i.e. that a union of open sets is open). By appealing to the basis formulation of topology, the first proof saves a lot of effort.
Meaning of = in antisymmetric relation
In this case, The (partial order) relation R is defined: $$ x R y \Leftrightarrow score(x) \leq score(y), \,\,\, x,y\in \{Bob, John, Tom\} $$ After this, you are comparing this relation with another relation R': $$ x R' y \Leftrightarrow x = y, \,\,\, x,y\in \{Bob, John, Tom\} $$ which is not the same one. Here, R' is an equivalence relation.
Which number can I erase?
A prime $p>5$ can be erased if $m=2(p-1)$ has been erased, as $2+(p-1)=p+1$. Note that $$m=2\times(p-1)=4\times\frac{p-1}{2},$$ where $p-1$ is even because $p>5$ is prime. This factorization of $m$ shows that it can be erased if $\frac{p+5}{2}$ has been erased, because $$\frac{p-1}{2}+4=\frac{p+5}{2}+1,$$ where clearly $\frac{p+5}{2}<p$ because $p>5$. A composite number $n=uv$ with $u,v>1$ can be erased if $m=u+v-1$ has been erased, as $$1+(u+v-1)=u+v.$$ Of course $m<n$ because $u,v>1$. In particular, an integer $n>5$ can be erased if all integers less than $n$ have been erased, so you can use induction.
Wolfram double solution to $\int{x \cdot \sin^2(x) dx}$
We have that $\sin^2 x = \frac{1-\cos (2x)}{2}$. Therefore, $\frac14 \sin^2 x = \frac{1-\cos (2x)}{8} = \frac18 - \frac18 \cos (2x)$. The extra $\frac18$ is taken care of by the constant of integration. In other words, if two functions differ by a constant, they have the same derivative. This is something that happens often when integrating trigonometric functions because of the various identities.
$ S_n=\sum_{k=1}^n\frac{1}{k}$ then is $S_n$ bounded?
Yes, $$\sum_{k=1}^{2^n}\frac1k>\int_{1}^{2^n+1}\frac1x\,dx=\ln(2^n+1)>n\ln2>\frac{n}2$$ Actually you can prove the stronger inequality $$\sum_{k=1}^{2^n}\frac1k\ge 1+\frac{n}{2}$$
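The stronger bound $\sum_{k=1}^{2^n}\frac1k\ge 1+\frac n2$ is easy to check numerically for small $n$:

```python
s, k = 0.0, 0
for n in range(16):
    while k < 2 ** n:       # extend the harmonic sum up to 2^n terms
        k += 1
        s += 1.0 / k
    assert s >= 1 + n / 2   # the partial sums grow without bound
```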
Finding the unbiased estimator of the given parameter.
Let's consider the most natural estimate of the random area: $$\pi\frac1n\sum_{i=1}^nX_i^2.$$ The question is whether this is an unbiased estimate, assuming that $X_i=R+e_i$ where the $e_i$ are independent normal random variables with mean $0$ and variance $\sigma^2$. So we need to calculate the expectation of our estimate above. Using $E[e_i]=0$ and $E[e_i^2]=\sigma^2$, $$\pi E\left[\frac1n\sum_{i=1}^nX_i^2\right]=\pi\frac1nE\left[\sum_{i=1}^n(R+e_i)^2\right]=\pi R^2+2\pi RE[e_i]+\pi\sigma^2=\pi R^2+\pi\sigma^2.$$ So it seems that $$\pi\frac1n\sum_{i=1}^nX_i^2-\pi\sigma^2$$ is an unbiased estimate.
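A Monte Carlo sanity check of the bias calculation, with arbitrary illustrative values $R=2$, $\sigma=0.5$ (not taken from the question): the raw estimate concentrates around $\pi R^2+\pi\sigma^2$, so subtracting $\pi\sigma^2$ removes the bias.

```python
import math
import random

random.seed(0)
R, sigma, trials = 2.0, 0.5, 200_000  # illustrative parameter values

# average of pi * X_i^2 over many measurements X_i = R + e_i
est = sum(math.pi * (R + random.gauss(0.0, sigma)) ** 2
          for _ in range(trials)) / trials

expected = math.pi * R ** 2 + math.pi * sigma ** 2  # pi R^2 + pi sigma^2
print(est, expected)  # the two values are close
```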
L'Hopital's Rule on Limits with Trigonometric Functions
You could write this particular function as $$\exp[x\log(1-\cos(3x))],$$ and the problem simply reduces to taking the limit of $x\log(1-\cos(3x))$, since $\exp$ is continuous. But as $x$ goes to $+\infty,$ the argument of the $\log$ oscillates continuously between $0$ and $2$ because of the cosine, so the logarithm swings between arbitrarily negative values and $\log 2.$ The other factor, $x,$ in the meantime grows without bound. Combining these two behaviours, you see that you have a function which gets arbitrarily large in magnitude but fluctuates continuously between positive and negative values. So the limit does not exist. There is no need to use L'Hopital here. One uses it in cases where one has a limiting value of the form $0/0,$ and cases that can be reduced to this. No such thing happens here; there simply isn't any limiting value.
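The two-sided blow-up of the exponent $x\log(1-\cos(3x))$ can be seen numerically: along points where $\cos(3x)=-1$ the exponent is $x\log 2\to+\infty$, while just past the zeros of $1-\cos(3x)$ it is hugely negative.

```python
import math

def g(x):
    # the exponent x * log(1 - cos(3x)) of the rewritten function
    return x * math.log(1.0 - math.cos(3.0 * x))

# where cos(3x) = -1, the log equals log 2, so g(x) = x * log(2) -> +infinity
highs = [g((math.pi + 2 * math.pi * k) / 3) for k in (10, 100, 1000)]

# just past the zeros of 1 - cos(3x), the log is very negative
lows = [g(2 * math.pi * k / 3 + 1e-3) for k in (10, 100, 1000)]

print(highs)  # positive and growing
print(lows)   # negative and growing in magnitude
```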
Probability: Pair of socks problem
(a) is right. Generalising: the probability of going to class, with $n\geq 2$ matched pairs of socks in the drawer, is: $$p_n = 1 - \frac{(2n-2)(2n-4)}{(2n-1)(2n-2)} = \frac{3}{2n-1} \\ p_8 = \tfrac 1 5 = 0.2$$ (b) Your answer to this is cute, though correct. However, presumably the question assumes the student is able to draw at least three socks. Find $n$ where $p_n < 0.1$. Now can you do (c) in a similar manner?
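Reading the formula as the chance that three socks drawn blindly from $n$ pairs contain at least one matching pair (an assumption about the question, which is not quoted here), an exhaustive enumeration confirms $p_n = \frac{3}{2n-1}$:

```python
from fractions import Fraction
from itertools import combinations

def p(n):
    # draw 3 socks from n matched pairs; probability at least one pair comes out
    socks = [(pair, side) for pair in range(n) for side in (0, 1)]
    draws = list(combinations(socks, 3))
    good = sum(1 for d in draws if len({pair for pair, _ in d}) < 3)
    return Fraction(good, len(draws))

for n in range(2, 12):
    assert p(n) == Fraction(3, 2 * n - 1)  # matches the closed form
print(p(8))  # 1/5
```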
Who was the first to use the concept of "supremum"?
(To answer this question so that it doesn't stay in the unanswered list.) Maybe it was first defined by Bolzano in 1817, in Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwei Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege; see Hairer, Wanner - Analysis by Its History, p. 182. Also on page 175 of A History of Analysis, edited by Hans Niels Jahnke. – lhf
Algebra for moment generating function help
It may help to look at a case where $K$ is small and fixed, say $K = 3$. Then $$Y = (Y_1, Y_2, Y_3) \sim \operatorname{Multinomial}(n, \pi_1, \pi_2, \pi_3)$$ and $$\Pr[Y = (y_1, y_2, y_3)] = \frac{n!}{y_1! y_2! y_3!} \pi_1^{y_1} \pi_2^{y_2} \pi_3^{y_3}.$$ We have $$M_Y(t_1, t_2, t_3) = \operatorname{E}[e^{t_1 Y_1 + t_2 Y_2 + t_3 Y_3}] = \sum_{y_1 + y_2 + y_3 = n} e^{t_1 y_1 + t_2 y_2 + t_3 y_3} \frac{n!}{y_1! y_2! y_3!} \pi_1^{y_1} \pi_2^{y_2} \pi_3^{y_3}.$$ This is where you are able to follow so far. The next step simply rewrites the first factor as a product, and then combines this with the factor $\pi_1^{y_1} \pi_2^{y_2} \pi_3^{y_3}$; i.e., $$e^{t_1 y_1 + t_2 y_2 + t_3 y_3} = \prod_{i=1}^3 (e^{t_i})^{y_i}$$ so that $$e^{t_1 y_1 + t_2 y_2 + t_3 y_3} \pi_1^{y_1} \pi_2^{y_2} \pi_3^{y_3} = \prod_{i=1}^3 (e^{t_i})^{y_i} \pi_i^{y_i} = \prod_{i=1}^3 (e^{t_i} \pi_i)^{y_i}.$$ The final step is an application of the multinomial theorem (the generalization of the binomial theorem), which states $$(x_1 + x_2 + \cdots + x_k)^n = \sum_{c_1 + \cdots + c_k = n} \frac{n!}{c_1! c_2! \cdots c_k!} x_1^{c_1} x_2^{c_2} \cdots x_k^{c_k}.$$ In our case of $K = 3$, we have the trinomial theorem/expansion $$(e^{t_1} \pi_1 + e^{t_2} \pi_2 + e^{t_3} \pi_3)^n = \sum_{y_1 + y_2 + y_3 = n} \frac{n!}{y_1! y_2! y_3!} (e^{t_1} \pi_1)^{y_1} (e^{t_2} \pi_2)^{y_2} (e^{t_3} \pi_3)^{y_3}.$$ So all you need to note is that the LHS is the factorization of the RHS, and in general, the MGF for general $K$ is the $n$-th power of a sum in the $K$ quantities $e^{t_i}\pi_i$: $$M_Y(t_1, \ldots, t_K) = \left( \sum_{i=1}^K e^{t_i} \pi_i \right)^n.$$
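The closed form can be verified against a brute-force expectation over all multinomial outcomes (here with $K=3$, $n=4$, and arbitrary illustrative values of $\pi$ and $t$):

```python
import math
from itertools import product

def mgf_direct(n, pi, t):
    # E[exp(t . Y)] by enumerating every outcome with y_1 + ... + y_K = n
    K = len(pi)
    total = 0.0
    for y in product(range(n + 1), repeat=K):
        if sum(y) != n:
            continue
        coef = math.factorial(n)
        for yi in y:
            coef //= math.factorial(yi)  # multinomial coefficient n!/(y_1!...y_K!)
        total += coef * math.prod((p * math.exp(ti)) ** yi
                                  for p, ti, yi in zip(pi, t, y))
    return total

def mgf_closed(n, pi, t):
    # the closed form (sum_i pi_i e^{t_i})^n
    return sum(p * math.exp(ti) for p, ti in zip(pi, t)) ** n

n, pi, t = 4, (0.2, 0.3, 0.5), (0.1, -0.2, 0.3)
print(mgf_direct(n, pi, t), mgf_closed(n, pi, t))  # the two values agree
```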
Clarification of Solution of PDE Laplace Transform Problem
First I want to mention that you forgot the $\mathcal{L}$ symbol in some places. So we get \begin{align} &-\dfrac{\mathcal{L}\{f(t)\}}{\beta} = \dfrac{sA(s)}{c} - \dfrac{sB(s)}{c} \\ & \Rightarrow -\dfrac{\mathcal{L}\{f(t)\}}{\beta} = \dfrac{s}{c}(A(s) - B(s)) \\ & \Rightarrow \dfrac{c}{\beta} \dfrac{\mathcal{L}\{f(t)\}}{s} + A(s) = B(s) \end{align} Using this to eliminate $B(s)$ from the general solution leaves $$\dfrac{\partial}{\partial{x}} \mathcal{L} \{ u(x, t) \} = \dfrac{sA(s)}{c} e^{\frac{sx}{c}} - \left( \dfrac{c}{\beta} \dfrac{\mathcal{L}\{f(t)\}}{s} + A(s) \right) \dfrac{s}{c} e^{-\frac{sx}{c}}$$ Now we can show the equation. Substituting the expression for $B(s)$ in $\mathcal{L}\{u(x,t)\}$ and multiplying by $s$ gives $$ s\mathcal{L} \{ u(x, t) \} = s A(s) \left(e^{\frac{sx}{c}}+e^{-\frac{sx}{c}}\right) + \dfrac{c}{\beta} \dfrac{s\mathcal{L}\{f(t)\}}{s} e^{-\frac{sx}{c}}. $$ From the boundary condition $f(t) =-\beta \frac{\partial u(0,t)}{\partial x}$ for all $t>0$, and the initial condition $u(x,0)=0$ for all $x>0$, it follows that $\lim_{t\to 0}f(t)=0$. Then the initial value theorem implies $\lim_{s\to\infty}s\mathcal{L}\{f(t)\}=0$. This, together with the fact that $x$, $s$ and $c$ are positive, implies that $$\lim_{s\to\infty} \frac{c}{\beta} \frac{s\mathcal{L}\{f(t)\}}{s} e^{-\frac{sx}{c}}=0.$$ Edit: As was pointed out by The Pointer in the comments (pun not intended), it suffices to note that $\lim_{s\to \infty} \mathcal{L}\{f(t)\}=0$ (asymptotic property of Laplace transforms) and that the constants are positive. Since $\lim_{s\to\infty} \mathcal{L}\{u(x,t)\} = 0$ (asymptotic property of Laplace transforms), also $\lim_{s\to\infty} A(s)=0$, and thus $\lim_{s\to \infty} A(s) s e^{-\frac{sx}{c}}=0$. We conclude that $$\lim_{s\to \infty} s \mathcal{L}\{u(x,t)\} = \lim_{s\to\infty }s A(s) e^{\frac{sx}{c}}.$$ I hope this answer is helpful to you.
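The initial value theorem step ($\lim_{s\to\infty}s\mathcal{L}\{f(t)\}=f(0^+)$) can be illustrated in sympy on a concrete sample input with $f(0)=0$; the $f(t)=t\,e^{-t}$ below is chosen arbitrarily for illustration, since the actual $f$ in the problem is not specified.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# a sample input with f(0) = 0, standing in for the unspecified f(t)
f = t * sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)  # transforms to 1/(s + 1)**2

# initial value theorem: lim_{s->oo} s F(s) = f(0+) = 0
print(sp.limit(s * F, s, sp.oo))  # 0
```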
Ways to determine how fast a sequence diverge/converge
You might want to Google rate of convergence and asymptotic growth. Many books on numerical analysis also cover this, to decide how useful certain algorithms are. A couple of basic definitions: Linear convergence $\iff \displaystyle \lim_{n \rightarrow \infty} \frac{|x_{n+1} - L |}{|x_n - L|} \in (0,1)$; super-linear convergence $\iff \displaystyle \lim_{n \rightarrow \infty} \frac{|x_{n+1} - L |}{|x_n - L|} = 0$; and $k$-th order convergence $\iff \displaystyle 0 < \lim_{n \rightarrow \infty} \frac{|x_{n+1} - L |}{|x_n - L|^k} < \infty$. You are interested in the growth rate of the harmonic series, which does not converge (a very elementary proof here), so there is no $L$ against which to measure a speed of convergence, as $\infty$ is not allowed here. Assuming by "speed of divergence" you mean the rate of growth (how fast it blows up), there is the famous result that: $$\sum_{k=1}^{n} \frac{1}{k} \approx \log n + \gamma, \hspace{2mm} \gamma \approx 0.5772 $$ And so the series grows as fast as the natural logarithm, since we don't really care about constants. This can be proved several ways, one of which is a direct application of Euler's summation formula and is on page 480 here in Knuth's excellent book, but it is not easy. A more basic way, with less accuracy but good enough for just comparing growth, uses calculus, answered here; the following picture is useful. Any function's growth rate can be expressed in "O notation". The simplest is big O: $f(x) \in \mathcal{O}(g(x)) \iff \exists C \in \mathbb{R}_+$ and $x_0$ s.t. $|f(x)| \leq C|g(x)|$ $\forall x > x_0$; in particular this holds whenever $\displaystyle \lim_{x \rightarrow \infty} \frac{|f(x)|}{|g(x)|} < \infty$. For the harmonic series example, if we pretend our approximation is exact, then $|\log n + \gamma | < 2|\log n |$ for all $n > e^{\gamma}$, and $\displaystyle \lim_{n \rightarrow \infty} \frac{\log n + \gamma}{\log n} = 1 < \infty$, so $\sum_{k=1}^{n} \frac{1}{k} \in \mathcal{O}(\log n)$, which quantifies your "rate of divergence" as less than or equal to the logarithm (in fact they are equal; the symbol $\Theta$ is used when $f(x) \in \mathcal{O}(g(x))$ and $g(x) \in \mathcal{O}(f(x))$; try to show the other direction for the example above).
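The approximation $\sum_{k=1}^{n}\frac1k \approx \log n + \gamma$ is easy to confirm numerically; the error is about $\frac{1}{2n}$, far smaller than either term:

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

n = 10 ** 6
H = sum(1.0 / k for k in range(1, n + 1))  # harmonic partial sum H_n
approx = math.log(n) + gamma

print(H, approx)  # agree to roughly 1/(2n)
print(H / math.log(n))  # the ratio is close to 1, as the O(log n) claim predicts
```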