A property of minimal homeomorphism
For $x\in X$, and $n\in\Bbb N$, let $$O_n(x)=\{\,f^k(x)\mid 1\le k\le n\,\} $$ and $$U_n(x)=\{\,y\in X\mid d(y,O_n(x))<\tfrac12\epsilon\,\}. $$ By minimality of $f$, we have $\bigcup_n U_n(x)=X$ and then by compactness $X=U_n(x)$ for some $n=n(x)$. Then $d_H(X,O_{n(x)}(x))<\frac12\epsilon$, and as $d_H(X,O_{n(x)}(x'))$ is continuous in $x'$, we see that $n(x')\le n(x)$ for all $x'$ in an open neighbourhood $V_x$ of $x$. The $V_x$ cover $X$, so by compactness there is a finite subcover. If $N$ is the maximum over the finitely many $n(x)$ involved in this subcover then we find that $$d_H(X,O_N(x))<\frac12\epsilon $$ for all $x\in X$. Let $\delta_0=\frac12\epsilon$. By continuity of $f$ and compactness once again, we can recursively choose $\delta_{n+1}>0$ such that $d(x,x')<\delta_{n+1}$ implies $d(f(x),f(x'))<\frac12\delta_n$. Let $\delta=\frac12\min\{\delta_0,\ldots,\delta_N\}$. Now if $d(f,g)<\delta$, then for any $z$, we can show by induction that $d(f^k(z),g^k(z))<\delta_{N-k}$ for $k=1,\ldots, N$. In particular, $d(f^N(z),g^N(z))<\delta_0=\frac12\epsilon$ and the claim follows.
Power series expansion of an Operator.
Let $L$ denote the matrix $$ L = \pmatrix{u \mathrm{I} + i S_3 & i S_- \\ i S_+ & u \mathrm{I} - i S_3} $$ My best guess is that whatever the author is getting at has something to do with the fact that $$ \operatorname{tr}(L^N) = 2I\,u^N + q_{2}u^{N-2} + \cdots + q_N $$ or, if $N$ is odd, $$ \operatorname{tr}(L^N) = 2I\,u^N + q_{2}u^{N-2} + \cdots + q_Nu $$ I could come up with a proof that this will generally happen (i.e. there are no terms $u^{N - k}$ for odd $k$) if that's something you're interested in (this is purely a consequence of how matrix multiplication works). In the meantime, note that $$ \operatorname{tr}(L^2) = 2u^2 - 2S_3^2 - S_+\,S_- - S_-\,S_+\\ \operatorname{tr}(L^3) = 2u^3 - (6S_3^2 + 3 S_+S_- + 3S_-S_+)u $$
Notation $E[t^X]$ where $X$ is a random variable
Hint: Suppose $X$ is a discrete random variable over $\mathbb{Z}_{\geq 0}$. Then (either taken as a definition or a theorem), $$E[g(X)] = \sum\limits_{x=0}^{\infty}g(x)\Pr\left(X = x\right)\text{.}$$
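For instance (an illustration of the notation, not part of the original question), if $X$ is Poisson with mean $\lambda$, the sum can be evaluated in closed form: $$E[t^X]=\sum_{x=0}^{\infty}t^x\,e^{-\lambda}\frac{\lambda^x}{x!}=e^{-\lambda}\sum_{x=0}^{\infty}\frac{(\lambda t)^x}{x!}=e^{\lambda(t-1)},$$ which is exactly the probability generating function of $X$.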
Show that $(A \cup C) \cap (B \cup C') \subset A \cup B$
If (the applied) equality $(P\cup C)\cap(P\cup C')=P$ has already been proved earlier then your proof is okay. Alternative: Let it be that $x\notin A\cup B$ or equivalently that $x\notin A$ and $x\notin B$. Evidently $x\notin C$ or $x\notin C'$. If $x\notin C$ then $x\notin A\cup C$, and if $x\notin C'$ then $x\notin B\cup C'$. So in both cases: $x\notin (A\cup C)\cap (B\cup C')$. Proved is now: $$x\notin A\cup B\implies x\notin (A\cup C)\cap (B\cup C')$$ or equivalently: $$x\in (A\cup C)\cap (B\cup C')\implies x\in A\cup B$$ This is true for every $x$ so we are allowed to conclude that:$$(A\cup C)\cap (B\cup C')\subseteq A\cup B$$
Probably an ambiguous word problem
Fish Burger must go with chicken nuggets, so there are only 3 combinations that fish burger can have (as there are 3 drinks and only 1 side). Chicken burger must not go with coffee so there are 3 sides and 2 drinks for the chicken burger, so there are 6 total combinations with the chicken burger. And finally, the double burger and hamburger can go with 3 sides and 3 drinks, so there are 18 total there (9 for double and 9 for hamburger). Therefore there are 3+6+18=27. Which isn't an available answer. But it works if you say that chicken nuggets can only go with fish burger and fish burger is free to take all 3 sides, as you then get 9+4+6+6=25. It's just a poorly worded question.
Robinson's theorem- a proof.
Wrong. Let $\Delta = \{ P \}$ and $\Delta' = \{ \neg P \}$. Then $\Delta_0 = \{ P,\neg P \}$ would be a finite subset of $\Delta \cup \Delta'$, but obviously $\Delta_0$ is not satisfiable. OK, so we know something went wrong in your proof. ... but where? You say: It means that there exists such a $\phi$ that $\Delta \vDash \phi$ and $\Delta' \vDash \neg \phi$. What is a $\phi$? $\phi$ is just a conjunction of sentences from $\Delta$ ($\Delta$ is finite). So $\Delta_0$ was satisfiable. Now, your elaboration of your argument for why $\Delta \vDash \phi$ and $\Delta' \vDash \neg \phi$ certainly looks correct. ... But why does it follow from the fact that $\Delta_0$ is satisfiable? Again, see my above counterexample for this: With $\Delta = \{ P \}$ and $\Delta' = \{ \neg P \}$ and $\Delta_0 = \{ P,\neg P \}$, we have $\phi = P$, and we have $\{ P \} \vDash P$ (so indeed $\Delta \vDash \phi$) and $\{ \neg P \} \vDash \neg P$ (so indeed $\Delta' \vDash \neg \phi$), but $\Delta_0$ is still not satisfiable.
probability of bingo
Define event $W_i$ = "Line $i$ wins" for $i = 1,2,3,4$. Then the probability of none of the $4$ lines winning is $1 - P(W_1 \cup W_2 \cup W_3 \cup W_4)$. Because more than one of the $4$ lines could win on any given draw, $P(W_1 \cup W_2 \cup W_3 \cup W_4) \neq P(W_1) + P(W_2) + P(W_3) + P(W_4)$. This is why, while $P_1 = C(50,4)/C(75,4)$ is correct, multiplying that by $4$ is not. Instead, we use the inclusion-exclusion principle: \begin{eqnarray*} P(\mbox{no winning lines}) &=& 1 - P(W_1 \cup W_2 \cup W_3 \cup W_4) \\ && \\ &=& 1 - \sum_{i=1}^{4}{P(W_i)} + \sum_{i \lt j}{P(W_i \cap W_j)} - \sum_{i \lt j \lt k}{P(W_i \cap W_j \cap W_k)} \\ && \qquad + P(W_1 \cap W_2 \cap W_3 \cap W_4) \\ && \\ &=& 1 - \binom{4}{1} \binom{50}{4} \bigg/ \binom{75}{4} + \binom{4}{2} \binom{50}{8} \bigg/ \binom{75}{8} \\ && \qquad - \binom{4}{3} \binom{50}{12} \bigg/ \binom{75}{12} + \binom{4}{4} \binom{50}{16} \bigg/ \binom{75}{16} \\ && \\ &\approx& 1 - 0.7579086 + 0.1909348 - 0.0185883 + 0.0005759 \\ && \\ &=& 0.4150138 \end{eqnarray*}
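The binomial-coefficient expression above is easy to re-evaluate numerically; here is a quick check in R (it simply recomputes the inclusion-exclusion sum):

```
# Inclusion-exclusion terms: C(4,j) * C(50,4j) / C(75,4j) for j = 1..4
terms <- sapply(1:4, function(j) choose(4, j) * choose(50, 4 * j) / choose(75, 4 * j))
1 - terms[1] + terms[2] - terms[3] + terms[4]   # approximately 0.4150138
```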
Prove that $\lim_{n\to\infty} \left( \frac{ g_n^{\gamma}}{\gamma^{g_n}} \right)^{2n} = \frac{e}{\gamma}$
Since $g_n=\gamma+\frac1{2n}+O(\frac1{n^2})$ (see this), \begin{align} \left(\frac{g_n^\gamma}{\gamma^{g_n}}\right)^{2n}&=\left(\frac{(\gamma+\frac1{2n}+O(\frac1{n^2}))^\gamma}{\gamma^\gamma\gamma^{\frac1{2n}+O(\frac1{n^2})}}\right)^{2n} =\left(\frac{(1+\frac1{2\gamma n}+O(\frac1{n^2}))^\gamma}{\gamma^{\frac1{2n}+O(\frac1{n^2})}}\right)^{2n}\\ &=\frac{(1+\frac1{2\gamma n}+O(\frac1{n^2}))^{2\gamma n}}{\gamma^{1+O(\frac1{n})}} =\frac e\gamma+O\left(\frac1n\right). \end{align}
Removing redundant linear constraints using Gaussian elimination
See my answer to this MO question.
Galois group of $(x^3-2)(x^2-2)$ over $\mathbb{Q}$
Well, you know what the generating automorphisms of the Galois group are, namely $\sigma, \tau, \rho$ which act as follows on the generators of the splitting field: $$\sigma(\sqrt[3]{2}) = \omega\sqrt[3]{2}; \sigma(\omega) = \omega; \sigma(\sqrt{2}) = \sqrt{2}$$ $$\tau(\sqrt[3]{2}) = \sqrt[3]{2}; \tau(\omega) = \omega^{2}; \tau(\sqrt{2}) = \sqrt{2}$$ $$\rho(\sqrt[3]{2}) = \sqrt[3]{2}; \rho(\omega) = \omega; \rho(\sqrt{2}) = -\sqrt{2}$$ Now, we compute $$(\sigma\rho)(\sqrt[3]{2}) = \omega\sqrt[3]{2}; (\sigma\rho)(\omega) = \omega; (\sigma\rho)(\sqrt{2}) = -\sqrt{2}$$ It is easy to see that $\sigma\rho$ has order $6$ (why?). Further, $(\sigma\rho)^{-1} = \rho^{-1}\sigma^{-1} = \rho\sigma^{2}$. Finally, note that $\sigma$ and $\rho$ commute, as well as $\tau$ and $\rho$, and $\sigma, \tau$ obey the relations $\sigma\tau = \tau\sigma^{2}$. Now, your group $G$ cannot be abelian, and must contain $S_{3}$. A good candidate for $G$ is thus $D_{6}$, which is nonabelian of order $12$ and contains an isomorphic copy of $D_{3} \cong S_{3}$. Let $\beta = \sigma\rho$. Then $$\beta\tau = \sigma\rho\tau = \sigma\tau\rho = \tau\sigma^{2}\rho = \tau\rho\sigma^{2} = \tau\beta^{-1}$$ Furthermore, $\beta^{4} = \sigma$, $\beta^{3} = \rho$, so $G$ has the presentation $\langle \beta, \tau \mid \beta^{6} = \tau^{2} = e, \beta\tau = \tau\beta^{-1}\rangle \cong D_{6}$.
Why is $f^{(n)}(z)$ real when z is real for all $n$, if this is true for $n=0$?
If $z_0 \in \mathbb{R}$, then $f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}$. If $f'$ exists, this limit must be the same whether $h$ approaches $0$ along the real or the imaginary axis. In particular, $f'(z_0) = \lim_{h\to 0,\ h \in \mathbb{R}} \frac{f(z_0+h)-f(z_0)}{h} \in \mathbb{R}$, since the difference quotient is real for real $h$ (because $f$ is real on the real axis). The same reasoning applies inductively to every $f^{(n)}$.
Is there an interval notation for complex numbers?
Perhaps $\{z \in \mathbb{C}: \operatorname{Re}(z) \in [a,b], \; \operatorname{Im}(z) \in [c,d]\}$. The complex numbers have no inherent order, so unless you invent something like $[[a+ci, b+di]]$ I know no more compact way to write this.
The integral values which makes the expression a perfect square.
Let $a=m^2-n^2,\,b=2mn$. Then the expression which we want to be a perfect square is $16a^2b^2+(a^2+b^2)^2$, or $a^4+18a^2b^2+b^4$. Since 18 is in OEIS A253200, there are no solutions with positive $a$ and $b$, so the only solutions are those with one of $a,b$ equal to $0$, which in terms of $(m,n)$ translates to the two cases already disclosed in the comments.
Random subgroup of a group
Choose a random element as cyclic subgroup generator, generate the subgroup from that element, and you already have a random cyclic subgroup.
Does this theorem assume the commutativity?
Being a free left $R$-module, $M\simeq R^{(I)}$ for some index set $I$; thus there is a natural ($R$,$R$)-bimodule structure on $M$ (choosing a different isomorphism $M\stackrel{\sim}{\longrightarrow} R^{(I)}$, we may get a different right $R$-module structure if $R$ is not commutative, but they are all ($R$,$R$)-isomorphic). Thus $M\otimes_R N$ can be equipped with a left $R$-module structure (not very natural, though) which makes it a free left $R$-module.
Green's formula on a positively oriented simple closed contour
We have $$\frac{1}{2i} \int_C \bar{z}\, dz = \frac{1}{2i} \int_C (x - iy)(dx + i\, dy) = \frac{1}{2i} \int_C (x\, dx + y\, dy) + i(x\, dy - y\, dx).$$ Since $x\, dx + y\, dy$ is exact, $$\int_C (x\, dx + y\, dy) = 0.$$ Therefore $$\frac{1}{2i} \int_C \bar{z}\, dz = \frac{1}{2}\int_C (x\, dy - y\, dx).$$ Now if $R$ is the region enclosed by $C$, by Green's theorem, the last expression equals $$\frac{1}{2}\iint_R \left(\frac{\partial}{\partial x} x - \frac{\partial}{\partial y}(-y)\right)\, dx\, dy = \frac{1}{2} \iint_R (1 + 1)\, dx\, dy = \text{Area}(R).$$
Laurent series, am I correct in this reasoning?
It said "centered at 1"; it didn't say "near 1". There is a Laurent series, centered at 1, valid in $|z-1|<3$. But there is a separate Laurent series, centered at 1, valid in $|z-1|>3$. The first of these, is, indeed, a Taylor series. But the second of these is the one in the book's solution.
Recurrence Relation - Merge Sort
That doesn't seem quite right! Assume we're at the stage where we have $k$ sorted arrays, each of size $m$, and we'd like to know how many comparisons are needed to form the final sorted array. Take the first two arrays, each of size $m$; we need $2m-1$ comparisons (in the worst case) to make a sorted array, then take the third and fourth arrays, again with $2m-1$ comparisons, and continue to get to the final array. So far, we have done $k/2(2m-1)$ comparisons in total, as there are $k/2$ pairs, like $(1,2), (3,4), \cdots.$ Now we have $k/2$ arrays, each of size $2m.$ With the same approach, take the first and second one (the first of which is the result of merging the first and second arrays from the previous step); we need $4m-1$ comparisons. Again, continue pairing them up until you get the last array. In this step, we've done $(k/4)(4m-1)$ comparisons. Counting the comparisons of the previous step we have done $k/2(2m-1) + k/4(4m-1)$ comparisons in total, so far. Keep going until you get to the final step, where there is only one sorted array. Now, we have to sum up all comparisons. For simplicity, let $k=2^t,$ so $t=\log k$. Then the total number of comparisons is $$k/2(2m-1) + k/4(4m-1)+\cdots + k/2^t(2^tm-1)$$ which is $$\sum_{i=1}^t2^{t-i}(2^im-1)=t2^tm-(2^t-1) = n \log k - k +1$$ where we used $2^0+2^1+\cdots+2^{t-1} = 2^t-1$ and $mk=n.$ Therefore, $T(n)=kT(n/k) + n \log k - k + 1.$
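A small sanity check of that closed form (with assumed example values $k=8$ sorted runs of size $m=4$, so $n=32$): the term-by-term sum and $n\log k - k + 1$ should agree.

```
k <- 8; m <- 4; t <- log2(k); n <- k * m
direct <- sum(sapply(1:t, function(i) 2^(t - i) * (2^i * m - 1)))
closed <- n * log2(k) - k + 1
c(direct = direct, closed = closed)   # both equal 89
```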
Show that $S^n \times R$ is parallelizable for all n
Consider $f:S^n\times \mathbb{R}\rightarrow \mathbb{R}^{n+1}-\{0\}$ defined by $f(x,t)=e^tx$, it is a diffeomorphism where you identify $S^n $ with the unit sphere of $\mathbb{R}^{n+1}-\{0\}$; $\mathbb{R}^{n+1}-\{0\}$ is parallelizable since it is an open subset of $\mathbb{R}^{n+1}$.
Find the magnitude of the total force exerted on the hinge by the wall?
Let: $L$ = width of platform $T$ = combined tension in both chains $V$ = upward force on platform at wall $H$ = outward force on platform at wall $mg$ = weight of platform (acting at centre of mass) = 200N $Mg$ = weight of man = 850N Then: Balance of vertical forces: $V + T/\sqrt2 = mg + Mg$ Balance of horizontal forces: $H = T/\sqrt2$ Balance of moments about free edge of platform: $VL = mg \frac{L}{2}$ Therefore: $$ V = mg/2 = 100N \\T = \sqrt2 (mg - mg/2 + Mg) = 1343.50N $$ Tension in each chain $$ T/2 = 671.75N $$ Total force exerted by hinge on platform $$ \sqrt{V^2 + H^2} = \sqrt{(mg/2)^2 + T^2/2} = 955.25N $$
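Plugging in the given values $mg = 200\,\mathrm{N}$ and $Mg = 850\,\mathrm{N}$, a quick check of the arithmetic above:

```
mg <- 200; Mg <- 850                        # weights in newtons
V <- mg / 2                                 # upward force at the wall
Tension <- sqrt(2) * (mg - mg / 2 + Mg)     # combined tension in both chains
H <- Tension / sqrt(2)                      # outward force at the wall
c(V = V, Tension = Tension, per_chain = Tension / 2,
  hinge = sqrt(V^2 + H^2))                  # 100, 1343.50, 671.75, 955.25
```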
Given continuous $f_n(x)\ge 0$ are there continuous $g_n(x)\ge 0$ so that $f_n(x)g_n(x)\to +\infty$ whenever all $f_n(x)\neq 0$?
Let's try for $f_ng_n\simeq n$. At the zero set of $f_n$ this of course can not be satisfied, so one has to modify this equality slightly. Multiply with $f_n$ to get a positive term $$ f_n^2g_n\simeq nf_n $$ and then desingularize $$ (n^{-2}+f_n^2)g_n=nf_n $$ So taking $$ g_n=\frac{n^3f_n}{1+n^2f_n^2} $$ gives a function sequence that has a chance to come close to your requirements. But as said in the other answer, without further assumption on $f_n$ it will be hard to prove pointwise divergence close to the zero set.
Addition inside root problem
Remember these rules: $\sqrt A\times \sqrt B=\sqrt {AB}, ~ A,B\ge 0$ $(A-B)(A+B)=A^2-B^2$ Then you can start: $$\sqrt {A+\sqrt B}\times \sqrt {A-\sqrt B}=\sqrt {(A+\sqrt B)\times(A-\sqrt B)}=...$$
Prove f is analytic and periodic
For every $n$, the function $e^{-nz}f_n(z)$ is bounded by $1$ in $\mathbb C$, hence constant by Liouville's theorem. In other words, $f_n(z)=c_n e^{nz}$ where $|c_n|\le 1$. Any compact subset $K$ of the left halfplane is contained in some halfplane $x\le x_0$ with $x_0<0$. On $K$ we have $|f_n(z)|\le (e^{x_0})^n$ for all $n$. Since $e^{x_0}<1$, the Weierstrass test for uniform convergence applies. And since all $f_n$ are $2\pi i$-periodic, so is $f$.
Why do you need to substract initial investment in NPV formula?
NPV measures the net present value of the investment, that is, the value of the investment minus the initial outlay. Therefore $0$ is meaningful, as an investment that breaks even under the discounting assumptions.
By using integral, how can I find the area of the region $B - A$ when $A$ and $B$ are circles?
To my mind, the easiest way to set up the integral is to rotate coordinates so that the center of circle $A$ is at $(d,0)$ while the center of circle $B$ remains at $(0,0)$. Then move the origin of coordinates right by $d/2$ so now circle $B$ is centered at $(-d/2,0)$. Its equation is now $$\left(x+\frac d2\right)^2+y^2=x^2+y^2+dx+\frac{d^2}4=r^2+dr\cos\theta+\frac{d^2}4=L^2$$ Then along the circle $$r=-\frac d2\cos\theta\pm\sqrt{\frac{d^2}4\cos^2\theta-\frac{d^2}4+L^2}=-\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}$$ where we have taken the $(+)$ sign because $r>0$ even when $\theta=\pi/2$. Circle $A$ is centered at $(d/2,0)$ so its equation is $$\left(x-\frac d2\right)^2+y^2=x^2+y^2-dx+\frac{d^2}4=r^2-dr\cos\theta+\frac{d^2}4=L^2$$ So along this circle, $$r=\frac d2\cos\theta\pm\sqrt{\frac{d^2}4\cos^2\theta-\frac{d^2}4+L^2}=\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}$$ These cross when $$-\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}=\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}$$ So $\cos\theta=0$ and $\theta\in\{-\pi/2,\pi/2\}$. Then the area is $$\begin{align}\Delta&=\int_{-\pi/2}^{\pi/2}\int_{-\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}}^{\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}}r\,dr\,d\theta\\ &=\int_{-\pi/2}^{\pi/2}\frac12\left[\left(\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}\right)^2-\left(-\frac d2\cos\theta+\sqrt{L^2-\frac{d^2}4\sin^2\theta}\right)^2\right]d\theta\\ &=\int_{-\pi/2}^{\pi/2}2\left(\frac d2\cos\theta\right)\sqrt{L^2-\frac{d^2}4\sin^2\theta}\,d\theta\end{align}$$ If we let $\frac d2\sin\theta=L\sin\phi$ then $$\begin{align}\int2\left(\frac d2\cos\theta\right)\sqrt{L^2-\frac{d^2}4\sin^2\theta}\,d\theta&=\int2\left(L\cos\phi\right)^2d\phi=L^2\int\left(1+\cos2\phi\right)d\phi\\ &=L^2\left(\phi+\frac12\sin2\phi\right)+C=L^2\left(\phi+\sin\phi\cos\phi\right)+C\\ &=L^2\left(\sin^{-1}\left(\frac d{2L}\sin\theta\right)+\left(\frac d{2L}\sin\theta\right)\sqrt{1-\frac{d^2}{4L^2}\sin^2\theta}\right)+C\end{align}$$ So the area $$\begin{align}\Delta&=\left[L^2\left(\sin^{-1}\left(\frac d{2L}\sin\theta\right)+\left(\frac d{2L}\sin\theta\right)\sqrt{1-\frac{d^2}{4L^2}\sin^2\theta}\right)\right]_{-\pi/2}^{\pi/2}\\ &=2L^2\left(\sin^{-1}\left(\frac d{2L}\right)+\left(\frac d{2L}\right)\sqrt{1-\frac{d^2}{4L^2}}\right)\end{align}$$
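A rough Monte Carlo check of that closed form (a sketch with assumed values $L=1$, $d=1$; the sampled region is "inside $A$, outside $B$" in the shifted coordinates):

```
set.seed(1)
L <- 1; d <- 1; N <- 2e5
x <- runif(N, -2, 2); y <- runif(N, -2, 2)
inA <- (x - d/2)^2 + y^2 <= L^2           # circle A centred at (d/2, 0)
inB <- (x + d/2)^2 + y^2 <= L^2           # circle B centred at (-d/2, 0)
mc <- mean(inA & !inB) * 16               # 16 = area of the sampling box
exact <- 2 * L^2 * (asin(d / (2*L)) + (d / (2*L)) * sqrt(1 - d^2 / (4*L^2)))
c(monte_carlo = mc, formula = exact)      # both near 1.913
```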
Disproving the existence of a Hamilton Circuit
All of these graphs are bipartite with a different number of vertices in the two parts. Such graphs obviously cannot be Hamiltonian.
Finding all primes $p$ such that $\frac{(11^{p-1}-1)}{p}$ is a perfect square
Here is another idea. I'm going to argue that there is no such prime. I'll make use of a technique called lifting the exponent. This will serve primarily to shorten several steps of my argument (i.e., it is not completely essential). In brief, the method requires the following result, which is not terribly difficult to verify. If $p > 2$ is prime and $a$ and $b$ are integers with $v_p(a) = v_p(b) = 0$ and $v_p(a - b) > 0$, then for any positive integer $n$, \begin{align*} v_p(a^n - b^n) = v_p(a - b) + v_p(n). \end{align*} The assertion is also valid if $p = 2$, provided that $v_2(a-b) > 1$, i.e., provided that $4 \mid a-b$. Note that a nonzero integer $m$ is a perfect square if and only if $v_p(m) \equiv 0 \pmod2$ for all primes $p$. I will also use the fact that the set of quadratic residues modulo $11$ is $\{1,3,4,5,9\}$, and therefore that the set of nonresidues is $\{2,6,7,8,10\}$. In particular, $-1,2$, and $7$ are quadratic nonresidues modulo $11$. Note that, if $p$ is a prime such that $(11^{p-1} - 1)/p$ is a perfect square, then the residue class $\overline{p}$ of $p$ modulo $11$ is a quadratic nonresidue. Finally, one can check directly that $(11^{p-1} - 1)/p$ is not a perfect square when $p = 2,3,5$, or $7$. Here is the argument. Assume for a contradiction that $(11^{p-1} - 1)/p$ is a perfect square for some prime $p$. First of all $2 \mid p-1$, so $2^3\cdot 3\cdot 5 = 11^2 - 1 \mid 11^{p-1} - 1$. For $q = 2,3,5$, one has $$ v_q(11^{p-1} - 1) = v_q(11^2 - 1) + v_q\left(\frac{p-1}{2}\right) \equiv 1 + v_q\left(\frac{p-1}{2}\right) \pmod 2. $$ The left-hand side is even by assumption (since $p > 7$), so $v_q\left(\frac{p-1}{2}\right)$ is odd. This proves that $3$ and $4$ divide $p-1$. One can check that $v_7(11^3 - 1) = 1$. It follows that $$ v_7(11^{p-1} - 1) = 1 + v_7\left(\frac{p-1}{3}\right). $$ The left-hand side is even (again, since $p > 7$), so $r = v_7\left(\frac{p-1}{3}\right)$ is odd. Now write $p - 1 = 12 \cdot 7^r m$, so that $7\nmid m$. We claim that $11^{12m} - 1 = 7a^2$ for some integer $a$, with $p\nmid a$. The crucial part is that $p\nmid 11^{12m} - 1$. Since $7 \nmid 4m$ and $v_7(11^3 - 1)>0$, we can lift the exponent to obtain $$ v_7(11^{12m} - 1) = v_7(11^3 - 1) + v_7(4m) = v_7(11^3 - 1) = 1. $$ Assume now that $q\not = 7$ is any prime divisor of $11^{12m} - 1$. Then we can lift the exponent again to get $$ v_q(11^{12m} - 1) = v_q(11^{p-1} - 1) - v_q(7^r) = v_q(11^{p-1} - 1). $$ (When $q = 2$ we use the fact that $4 \mid 11^{12m} - 1$.) The right-hand side is even unless possibly $q = p$. So we have $11^{12m} - 1 = 7p^ea^2$ where $a$ is an integer not divisible by $p$ and $e$ is either $0$ or equal to $v_p(11^{p-1} - 1)$, in which case it is odd. The residue $\overline{7p^ea^2} = -1$ is a quadratic nonresidue modulo $11$. Since $\overline{p}$ and $7$ are both quadratic nonresidues and $\overline{a^2}$ is a quadratic residue, it must be the case that $e = 0$. (A product of quadratic nonresidues is a quadratic residue, likewise a product of quadratic residues is a quadratic residue.) Thus $11^{12m} - 1 = 7a^2$, with $p \nmid a$, as claimed. Ok, just one more step. Basically, we use an argument similar to the one above to show that $11^{6m} - 1 = 14b^2$ for some integer $b$. Since $\overline{14} = 3$ and $\overline{b^2}$ are quadratic residues modulo $11$, the product $\overline{14b^2} = -1$ will be as well, and this then gives the desired contradiction. First, since $11^{6m} - 1 \mid 11^{12m} - 1$ and $p \nmid 11^{12m} - 1$, we know that $p \nmid 11^{6m} - 1$. 
Thus if $q$ is a prime that divides $11^{6m} - 1$, then $q \not = p$, and by lifting the exponent we get $$ v_q(11^{6m} - 1) = v_q(11^{p-1} - 1) - v_q(2\cdot 7^r) \equiv v_q(2\cdot 7^r) \pmod 2. \tag{1} $$ If $q \not = 2$ or $7$, then $v_q(2) = v_q(7^r) = 0$, so $v_q(11^{6m} - 1)$ is even. Since $4 \mid 11^{6m} - 1$ and $7\mid 11^{6m} - 1$, the equation $(1)$ shows that $$v_2(11^{6m} - 1)\equiv 1\pmod 2\quad \text{and} \quad v_7(11^{6m} - 1) \equiv r \equiv 1\pmod 2,$$ i.e., both numbers are odd. (Recall that $r = v_7\left( \frac{p-1}{3}\right)$ is odd, as was shown earlier.) We conclude that $11^{6m} - 1 = 14 b^2$ for some integer $b$, as claimed, and we derive a contradiction as indicated at the beginning of this step.
Differential equation : $u' = \sin(t)\exp(u)$
The reason for usually having an absolute value inside the logarithm is that the logarithm isn't defined for negative values in real calculus. Assuming this is real calculus, the logarithm is only defined for all $t$ if $C>1$.
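For reference, the solution being discussed comes from separating variables (a sketch; $C$ here denotes the integration constant mentioned above): $$e^{-u}\,du=\sin(t)\,dt\;\Longrightarrow\;e^{-u}=\cos(t)+C\;\Longrightarrow\;u(t)=-\ln\bigl(\cos(t)+C\bigr),$$ and $\cos(t)+C>0$ holds for every $t$ exactly when $C>1$, which is where that condition comes from.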
Verify trigonometry equation $\frac{\sin A+\tan A}{\cot A+\csc A}=\sin A \tan A$
For what it's worth (at this late date)... Since cotangent and cosecant are linked by the Pythagorean Identity, the "conjugate factor" method is helpful: $$ \frac{\sin A + \tan A}{\cot A + \csc A} \cdot \frac{\cot A - \csc A}{\cot A - \csc A} = \frac{(\sin A + \tan A)(\cot A - \csc A)}{\cot^2 A - \csc^2 A}$$ $$= \frac{\sin A \cot A - \sin A \csc A + \tan A \cot A - \tan A \csc A}{-1} = - (\cos A - 1 + 1 - \frac{1}{\cos A} )$$ $$= \frac{1 - \cos^2 A}{\cos A} = \frac{\sin^2 A}{\cos A} = \sin A \tan A .$$
What's the relationship between hyperbola, hyperbolic functions and the exponential function?
Consider $x=\cosh t$ and $y=\sinh t$. For $t\in\mathbb R$, The coordinates $(x,y)$ trace the curve $x^2-y^2=1$, which is a hyperbola. We sometimes call trig functions circular functions for a very similar reason. If $x=\cos t$ and $y=\sin t$, the coordinates $(x,y)$ trace a circle.
Number of roots of $f(x,y)\equiv0\bmod p$?
Sure for $d\geq 2$. Just pick $f(x,y)=x^2-q$ where $q$ is a quadratic nonresidue mod $p$, for example, or if $p=2$ pick $f(x,y)=x^2+x+1$. Note that an even more trivial example, $f(x,y)=1+pg(x,y)$ with $g(x,y)\in\mathbb{Z}[x,y]$ any degree $d$ polynomial, also gives no zeros when reduced mod $p$. This is essentially the only example for $d=1$.
a reduction to Artinian case
This post on MathOverflow seems to provide the survey you're looking for.
Proving that a set infers a norm given certain conditions
Your notation is a little idiosyncratic. More precisely, $\|x\| = \inf \{ r \ge 0 | x \in r B \}$. Suppose $x = 0$, then $x \in rB$ for all $r >0$, hence $\|x\| = 0$. If $\|x\| = 0$, then there are $r_k \ge 0$ with $r_k \to 0$ such that $x \in r_k B$. Since $\cap_k r_k B = \{0\}$ (from 5.), we see $x = 0$. Suppose $\lambda = 0$, then $0 = \| \lambda x\| = |\lambda| \|x\|$. Suppose $\lambda \neq 0$, then $x \in \lambda \bar{B}$ iff ${1 \over \lambda} x \in \bar{B}$. Also, from 3., we have $x \in \lambda \bar{B}$ iff $x \in |\lambda| \bar{B}$. Then $\|\lambda x\| = \inf \{ r \ge 0 | \lambda x \in r B \} = \inf \{ r \ge 0 | x \in {r \over \lambda} B \}= \inf \{ r \ge 0 | x \in {r \over |\lambda|} B \} = \inf \{ |\lambda| s \ge 0 | x \in s B \} = |\lambda| \|x\| $.
2 slightly different situations in which 2 coins are tossed. Does the knowledge of an observer effect the probabilities of the outcomes?
No. There are four possibilities with two coins. For the time being identify one coin as X and one coin as Y. You have either

1. X = 1, Y = 1
2. X = 0, Y = 1
3. X = 1, Y = 0
4. X = 0, Y = 0

(where 1 is heads and 0 is tails). A truthful observer who only looks at coin X and ignores coin Y will say: "there is at least one head" in cases 1 and 3. A truthful observer who only looks at coin Y and ignores coin X will say: "there is at least one head" in cases 1 and 2. In either of those situations, only one of the two cases has the other coin as a head. So the probability of two heads is 1/2. On the other hand, a truthful observer who looks at both coins will say: "there is at least one head" in cases 1, 2, and 3. Out of those only 1 case has two heads. So the probability of that is 1/3. By choosing only one of the coins to look at, you are looking at the probability $$ \mathbb{P}\{ X = 1 | \mbox{ Observer chooses }Y\mbox{ and }Y = 1\} $$ plus the same with $X$ and $Y$ swapped. Since $X$ and $Y$ are independent, it is equal to $$ \mathbb{P}\{X = 1\} \cdot \mathbb{P}\{\mbox{ Observer chooses } Y\} + \mathbb{P}\{Y = 1\} \cdot \mathbb{P}\{\mbox{ Observer chooses } X\} = 1/2$$ In the case where the observer sees both coins, you are looking at the conditional probability $$ \mathbb{P}\{ Y + X = 2 | Y + X \geq 1 \} $$ that the sum of the variables $Y$ and $X$ is 2 when we know their sum is at least 1. Clearly the conditioning is not independent of what you are testing! You can establish that the above is 1/3 by counting the total number of ways $X+Y$ can be at least 1 as was done above.
Find all inverse images of intervals [a,b] by Z(ω)=ω(1-ω).
Hint: Find the maximum of the function. Draw the graph and use that to deduce the inverse image of any given interval (hint2: it is either empty, an interval or two disjoint intervals, depending on where the maximum is relative to the interval $[a,b]$)
discrete sum of Gaussian functions
The third Jacobi theta function is $$\theta_3(x,q) = \sum_{n=-\infty}^\infty q^{n^2} e^{2nix}$$ so (expanding out the square) your function is $$e^{-z^2/(2\sigma^2)} \sum_{n=-\infty}^\infty e^{-2 \pi^2 n^2/\sigma^2} e^{-2\pi n z/\sigma^2} = e^{-z^2/(2\sigma^2)} \theta_3(i\pi z/\sigma^2, e^{-2\pi^2/\sigma^2})$$
Some doubts regarding half range cosine series.
"I already know that the function is even or odd." Not so fast. Recall that a function is determined not just by a formula (or some other assignment of $y$ to $x$), but also by its domain. That is, $$f(x)=x\quad \text{for }\ x\in\mathbb R \tag1$$ and $$f(x)=x\quad \text{for }\ x\in [0,\pi] \tag2$$ are two different functions. The function (1) is odd, because it satisfies $f(-x)=-f(x)$ for every $x$ in its domain. The function (2) is neither odd nor even, because $f(-x)$ is not necessarily defined for $x\in[0,\pi]$. If we extend function (2) to an interval $[-\pi,\pi]$, it may become even or odd (or neither), depending on how we do it: $$f(x)=|x|\quad \text{for }\ x\in [-\pi,\pi] \tag3$$ $$f(x)=x\quad \text{for }\ x\in [-\pi,\pi] \tag4$$ $$f(x)=2x-|x| \quad \text{for }\ x\in [-\pi,\pi] \tag5$$ Here, (3) is even, (4) is odd, (5) is neither even nor odd. All are different functions, from each other and from (2). The Fourier series of (3) is of the form $\sum a_n \cos nx$. The Fourier series of (4) is of the form $\sum b_n\sin nx$. "Does this not defeat the purpose of half range series?" I think the series of either kind serve their purpose of representing our function, even if we are only interested in the interval $[0,\pi]$. You may think of the sine series as being more natural (because it's odd) and therefore preferable for representing the function on $[0,\pi]$. Then you're in for a surprise. I compared the performance of the cosine partial sum $$\frac{\pi}{2}-\frac{4}{\pi} \cos x-\frac{4}{9\pi}\cos 3x$$ and of the sine partial sum $$2\sin x-\sin 2x+\frac23 \sin 3x$$ in approximating $f(x)=x$ on $[0,\pi]$ (plots omitted here). The second approximation, using the same number of terms, is just horrible. The underlying reason is that besides the odd/even extension, we also have $2\pi$-periodic extension going on. The periodic extension of (3) is continuous, while the periodic extension of (4) is not. The discontinuity of periodic extension at $\pi$ is responsible for the behavior of the series there.
Are nonsquares actually squares in extensions of even degree?
If $F$ is finite, yes. The multiplicative group of a finite field of order $q$ is cyclic of order $q-1$, so if $[E:F]=r$, $F^\times$ is embedded in $E^\times$ as the subgroup of $(q^r-1)/(q-1)$th powers. If $q$ is odd and $r$ is even, $(q^r-1)/(q-1)$ is even, so every element of $F^\times$ is a square in $E^\times$. If $q$ is even, $q^r-1$ is odd, so every element of $E^\times$ is a square.
Confusing moments about the Law of exponents for natural powers
Yes, this last step is fine. If $x=y$ then $a^x=a^y$ and in this case you know that $(n+m)+1=n+m+1$.
continuous and bounded function without maximum or minimum
Your example of $\sin \frac{1}{x}$ doesn't work because it has a maximum value of $1$ and a minimum value of $-1$. But $\sin \frac{1}{x}$ oscillates near $0$, which is good; in order to get the behaviour you want you should multiply $\sin \frac{1}{x}$ by a function defined on $[0,1]$ which attains its maximum value only at $0$, such as $e^{-x}$ or $1-x$. In other words, any function like $e^{-x}\sin \frac{1}{x}$ or $(1-x)\sin \frac{1}{x}$ will do, or more generally any $g(x)\sin \frac{1}{x}$ for continuous $g : [0,1] \to \mathbb{R}$ where $|g|$ attains its maximum value only at $0$.
How to calculate inverse Laplace transform of $\frac{7s^2 + 3s +5}{(s^2-4s+29)(s^2+25)}$?
We need to go a little further with the partial fractions. Note that$$\frac{As+B}{s^2+5^2}=\frac{A/2-iB/10}{s-5i}+\frac{A/2+iB/10}{s+5i}$$has inverse Laplace transform$$(A/2-iB/10)e^{5it}+(A/2+iB/10)e^{-5it}=A\cos5t+\frac{B}{5}\sin5t.$$Since $s^2-4s+29=(s-2)^2+5^2$, the other partial fraction admits a similar analysis.
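Spelling out that similar analysis (a sketch; $C$ and $D$ stand for whatever constants the partial fraction decomposition produces for that term): completing the square and shifting by $2$ gives $$\frac{Cs+D}{(s-2)^2+5^2}=\frac{C(s-2)+(D+2C)}{(s-2)^2+5^2},$$ whose inverse Laplace transform is $e^{2t}\left(C\cos 5t+\frac{D+2C}{5}\sin 5t\right)$ by the frequency-shift property.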
What is $\int\delta(x-y)\delta(y-z)f(y)\:{\rm d}y$?
Talking non-rigorously, $\delta(x-y) \delta(y-z) f(y)$ will be non-zero only when $x-y=0$ and $y-z=0$, i.e. when $x=y=z.$ Therefore the integral over $y$ would be non-zero only when $x=z.$ We can thus expect the integral to be a multiple of $\delta(x-z).$ So, let $\phi$ be a nice function and study the formal integral $$ \int \left( \int \delta(x-y) \delta(y-z) f(y) \, dy \right) \phi(z) \, dz. $$ Swapping the order of integration gives $$ \int \delta(x-y) \left( \int \delta(y-z) \phi(z) \, dz \right) f(y) \, dy = \int \delta(x-y) \phi(y) f(y) \, dy = \phi(x) f(x). $$ Thus, $$ \int \delta(x-y) \delta(y-z) f(y) \, dy = f(x) \delta(z-x). $$
Parametrization of a curve in 3d.
Only a small step next. $$ (x,y,z)= 5( \cos (t), \sin (t) ,\, \frac12 - t / \pi) $$
If $R$ is a domain which is not a field and $M$ is an $R$-module for which $\bigcap_{r\in R\setminus \{0\}} rM \neq 0$, show $M$ is not projective.
I'll show the contrapositive: if $R$ is a domain which is not a field, and if $M$ is a projective $R$-module then $\displaystyle \bigcap_{r \in R \setminus \{0 \}} rM =0$. Observe first that it suffices to prove the claim for free modules. Indeed, if $M$ is projective then there is an embedding $i: M \to F$ where $F$ is a free module. Then \begin{equation*} i \Big( \displaystyle \bigcap_{r \in R \setminus \{0 \}} r M \Big) \subset \displaystyle \bigcap_{r \in R \setminus \{0 \}} r F. \end{equation*} So if the result is true for $F$, then it is also true for $M$. Now let $F$ be a free module. Let $\{ e_{\lambda} : \lambda \in \Lambda \}$ be a basis of $F$. Let $x \in \displaystyle \bigcap_{r \in R \setminus \{0 \}} rF.$ We want to show that $x =0$. We can write $x$ in the chosen basis of $F$ as a sum (of finite support) \begin{equation*} x = \sum_{\lambda \in \Lambda} a_{\lambda} \, e_{\lambda} \end{equation*} for some coefficients $a_{\lambda} \in R$. We will show that $a_{\lambda}=0$ for all $\lambda$. Indeed, suppose on the contrary that there was a $\lambda_0$ such that $a_{\lambda_0} \neq 0$. Then also $(a_{\lambda_0})^2 \neq 0$ since $R$ is a domain. Then since by assumption $ x \in \bigcap_{r \in R \setminus \{0 \}} rF$ there is some $y \in F$ such that $x = (a_{\lambda_0})^2 \, y$. This implies that \begin{equation*} a_{\lambda_0} = (a_{\lambda_0})^2 \, b \end{equation*} where $b \in R$ is the coefficient of $e_{\lambda_0}$ in $y$. Since $a_{\lambda_0} \neq 0$ and $R$ is a domain, this implies that $1 = a_{\lambda_0} \, b$, hence $a_{\lambda_0}$ is a unit. Now take any $r \in R \setminus \{0 \}$. Then we have $ x = r z$ for some $z \in F$. Comparing the coefficients of $e_{\lambda_0}$ as before, we must have \begin{equation*} a_{\lambda_0} = r \, c \end{equation*} for some $c \in R$. Since $a_{\lambda_0}$ is a unit, this implies that $r$ is also a unit. Since $r$ was arbitrary, we showed that $R$ is a field, which is a contradiction! Hence we must have $a_{\lambda} =0$ for all $\lambda$, which means that $x=0$. Hence we conclude that $\displaystyle \bigcap_{r \in R \setminus \{0 \}} rF =0$.
Doubts on a game that two players take turns to take an element and xor it to their sums.
With an even number of $1^{st}$ significant bits, the winner cannot be decided on the $1^{st}$ significant bit no matter how Alice and Bob play. In order for that bit to affect a game's outcome, one side must have a move history containing an odd number of $1^{st}$ significant bits. But the total number is even, so the other player will also have made an odd number of such moves. Both players' scores then have a $1$ in that leading bit, and the winner is decided by the remaining bits.
Affine chains from PMA Rudin. Confusing examples
When you have a binary operation $\newcommand{\bop}{\mathop{\scriptstyle\top}}\bop \colon S\times S \to S$, that induces a corresponding operation on the space of functions $D \to S$ by applying the operation pointwise, $$(f \bop g)(x) := f(x) \bop g(x).$$ The pointwise sum and product of real-valued functions are very familiar examples. The same construction applied to affine $k$-simplices gives the concept of affine $k$-chains. However, one can view affine $k$-simplices as functions in different ways, and it's the non-direct way that gives rise to $k$-chains. To avoid any ambiguity, let me use different notations for the two ways to view $k$-simplices as functions that Rudin mentions. First, by definition an affine $k$-simplex is a function (sufficiently regular) $\sigma \colon Q^k \to \mathbb{R}^n$. We have an addition on $\mathbb{R}^n$, and that induces an addition on the space of functions $Q^k \to \mathbb{R}^n$. Let's denote this addition by $\oplus$. Then $(\sigma_1 \oplus \sigma_2) \colon u \mapsto \sigma_1(u) + \sigma_2(u)$ is an affine $k$-simplex if $\sigma_1$ and $\sigma_2$ are affine $k$-simplices. In the context of integration of differential forms, this operation is however uninteresting and rarely - if ever - considered. The interesting concept arises when one views an affine $k$-simplex (or more generally a $k$-surface) in $E$ as a map $\Omega^k(E) \to \mathbb{R}$ via integration, where $\Omega^k(E)$ denotes the space of (continuous) $k$-forms in $E \subset \mathbb{R}^n$. Formally, since that is not exactly the same thing as the $k$-surface $\Phi$, it should be denoted differently, but doing that would be cumbersome, so it is customary to abuse notation and denote this map also by $\Phi$. For this discussion, I will however use the notation $I_\Phi$ for the map $\omega \mapsto \int_{\Phi} \omega$ to disambiguate. Thus every $k$-surface $\Phi$ in $E$ defines a map $I_{\Phi} \colon \Omega^k(E) \to \mathbb{R}$, and we have the induced addition $$(I_{\Phi} + I_{\Psi}) \colon \omega \mapsto I_{\Phi}(\omega) + I_{\Psi}(\omega) = \int_{\Phi} \omega + \int_{\Psi} \omega.$$ It is this induced addition that pertains to affine $k$-chains. An affine $k$-chain in $E$ "is" a map $\Omega^k(E) \to \mathbb{R}$ which we can write as the sum of finitely many $I_{\sigma_i}$, where each $\sigma_i$ is an affine $k$-simplex in $E$. But out of convenience, one drops the $I$s and writes $k$-chains as $\sigma_1 + \dotsc + \sigma_r$ rather than $I_{\sigma_1} + \dotsc + I_{\sigma_r}$. In the penultimate paragraph of the section, Rudin explains that one could be tempted to interpret the notation $\sigma_1 + \sigma_2$ as $\sigma_1 \oplus \sigma_2$, but that is not how $(83)$ is to be interpreted. In the last paragraph, he gives an example illustrating that these two additions are very different. If $\sigma_1$ and $\sigma_2$ are affine $k$-simplices with $\sigma_2 = -\sigma_1$ in the sense of $(80)$, then by theorem 10.27 we have $I_{\sigma_1} + I_{\sigma_2} = 0$, and that means $\sigma_1 + \sigma_2 = 0$ in the sense of $k$-chains, but generally we don't have $\sigma_1 \oplus \sigma_2 = 0$, where the last $0$ is the constant map $0 \colon u \mapsto 0 \in \mathbb{R}^n$.
How to prove that there exists a composite almost prime number?
As pointed out by some comments, this is an easy problem. The following (very inefficient) R code provides you with hundreds of examples:

```
# Trial-division primality test (slow, but fine for this range)
is.prime <- function(num) {
  if (num == 2) {
    return(TRUE)
  } else if (any(num %% 2:(num - 1) == 0)) {
    return(FALSE)
  } else {
    return(TRUE)
  }
}

maxnum <- 10000
dif <- numeric(maxnum)
for (i in 1:maxnum) {
  if (is.prime(i)) {
    dif[i] <- i - log(i^2, base = 2)^2  # We don't even bother with non-primes
  }
}
which(dif > 0)
```
How to calculate set of subgradients
By a theorem of Danskin and Bertsekas (I call it "the Bertsekas-Danskin Theorem for subgradients", see link below), the subgradient of $f$ is the convex hull of all such $a_k$, and corresponds to a face of the polyhedron $\mathcal P_A := \text{conv}\{a_k | 1 \le k \le m\}$. I've proven a more general result here. Precisely, you deduce that $$ \begin{split} \partial f(x) &= \partial \max_{1 \le i \le m}(a_i^Tx + b_i) = \partial \max_{y \in \Delta_m}y^T(A^Tx + b)\\ &= \text{conv}\{\nabla_x (\hat{y}^T(A^Tx + b))| \hat{y} \in \Delta_m, \hat{y}^T(A^Tx + b) =f(x)\}\\ &= \text{conv}\{A\hat{y}| \hat{y} \in \Delta_m, \hat{y}^T(A^Tx + b) = f(x)\} \\ &= \text{conv}\{a_k | 1 \le k \le m, a_k^Tx + b_k = f(x) \}, \end{split} $$ as claimed.
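As a quick sanity check of that formula (my own toy example, not from the question): take $m=2$, $a_1=1$, $a_2=-1$, $b=0$ in one dimension, so $f(x)=\max(x,-x)=|x|$. At $x=0$ both pieces are active, and the formula gives $$\partial f(0)=\operatorname{conv}\{1,-1\}=[-1,1],$$ the familiar subdifferential of $|x|$ at the origin.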
Suppose $g : U \subseteq \mathbb{R}^m \to M \subseteq \mathbb{R}^k$, and $u \in U$ why does the derivative $dg_u$ extend to all of $\mathbb{R}^m$?
It seems that you're confusing the domain of $dg$ with that of $dg_u$. Remember that $dg_u$ is the derivative of $g : U \to M$ at the single point $u \in U$; i.e. the best linear approximation to $g(\cdot) - g(u)$ near $u$. You might be more familiar with this as the Jacobian matrix of the mapping $g$. Linear maps/matrices always map between vector spaces - in this case since $U \subset \mathbb R^m, M \subset \mathbb R^k$, the derivative is a $k \times m$ matrix, or equivalently a linear map $\mathbb R^m \to \mathbb R^k$. If you want to talk about the derivative $dg$ as a function of position $u$, then you really have a map $dg : U \times \mathbb R^m \to \mathbb R^k$ defined by $dg(u,v) = dg_u(v)$.
How to make soft maximum numeric stable and avoid overflow?
Let $M=\max(x_i)$ and divide both the numerator and denominator by $e^{\alpha M}$. This results in $$\mathcal{S}_{\alpha}\left(\left\{x_i\right\}_{i=1}^{n}\right) = \frac{\sum_{i=1}^{n}x_i e^{\alpha (x_i-M)}}{\sum_{i=1}^{n}e^{\alpha (x_i-M)}}$$ where all values of the exponential function are bounded by $1$. Your mistake was in subtracting $M$ everywhere. It only needs to be subtracted in the exponents.
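A minimal sketch in R of the shifted formula above (the function name `soft_max` is my own):

```
# Numerically stable soft maximum: shift every exponent by M = max(x)
soft_max <- function(x, alpha) {
  M <- max(x)
  w <- exp(alpha * (x - M))   # all weights lie in (0, 1], so no overflow
  sum(x * w) / sum(w)
}
soft_max(c(1000, 1001, 1002), alpha = 1)   # about 1001.6, no overflow
```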
Find multiple integrals $I_{\max}(k,n)$ and $I_{\min}(k,n)$ in various ways
1) Evaluation of $I_{\max}(k,n)$ It is clear that $\max x = x$ so $I_{\max}(1,n)=\int\limits_0^1 x_1^n\,dx_1=\frac{1}{n+1}$. Also it is obvious that $\max\limits_{1\le i\le k}x_i=\max\left(x_k,\max\limits_{1\le i\le k-1}x_i\right)$. Then $$\begin{align}I_{\max}(k,n)&=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_k\left(\max\left(x_k,\max\limits_{1\le i\le k-1}x_i\right)\right)^n\,dx_1dx_2\dots dx_k=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\int\limits_0^{\large\max\limits_{1\le i\le k-1}x_i}\left(\max\limits_{1\le i\le k-1}x_i\right)^n \,dx_k+\int\limits_{\large\max\limits_{1\le i\le k-1}x_i}^1 x_k^n \,dx_k\right)\,dx_1dx_2\dots dx_{k-1}=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\left.\left(\max\limits_{1\le i\le k-1}x_i\right)^n x_k\right|_0^{\large\max\limits_{1\le i\le k-1}x_i}+ \left.\frac{x_k^{n+1}}{n+1}\right|_{\large\max\limits_{1\le i\le k-1}x_i}^1\right)\,dx_1dx_2\dots dx_{k-1}=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\left(\max\limits_{1\le i\le k-1}x_i\right)^{n+1} +\frac{1-\left(\max\limits_{1\le i\le k-1}x_i\right)^{n+1}}{n+1}\right) \,dx_1dx_2\dots dx_{k-1}=\\ &=\frac{1}{n+1}\cdot\left(n\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\max\limits_{1\le i\le k-1}x_i\right)^{n+1}\, dx_1dx_2\dots dx_{k-1}+1\right)=\\ &=\frac{1}{n+1}\cdot\left(nI_{\max}(k-1,n+1)+1\right)\end{align}$$ By induction it can be shown that $I_{\max}(k,n)=\frac{k}{k+n}$: $\left.I_{\max}(k,n)\right|_{k=1}=\frac{1}{1+n}=\frac{1}{n+1}$ $I_{\max}(k+1,n)=\frac{1}{n+1}\cdot\left(nI_{\max}(k,n+1)+1\right)=\frac{1}{n+1}\cdot\left(n\cdot\frac{k}{k+(n+1)}+1\right)=\frac{nk+k+n+1}{(n+1)(k+n+1)}=\\=\frac{(n+1)(k+1)}{(n+1)(k+n+1)}=\frac{k+1}{(k+1)+n}$ 2) Evaluation of $I_{\min}(k,n)$ It is clear that $I_{\min}(1,n)=\int\limits_0^1 x_1^n\,dx_1=\frac{1}{n+1}$. We will find $I_{\min}(k,n)$ in the same manner as we did it for $I_{\max}(k,n)$: $$\begin{align}I_{\min}(k,n)&=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_k\left(\min\left(x_k,\min\limits_{1\le i\le k-1}x_i\right)\right)^n\,dx_1dx_2\dots dx_k=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\int\limits_0^{\large\min\limits_{1\le i\le k-1}x_i} x_k^n \,dx_k+\int\limits_{\large\min\limits_{1\le i\le k-1}x_i}^1 \left(\min\limits_{1\le i\le k-1}x_i\right)^n \,dx_k\right)\,dx_1dx_2\dots dx_{k-1}=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\left.\frac{x_k^{n+1}}{n+1}\right|_0^{\large\min\limits_{1\le i\le k-1}x_i}+ \left.\left(\min\limits_{1\le i\le k-1}x_i\right)^n x_k\right|_{\large\min\limits_{1\le i\le k-1}x_i}^1\right)\,dx_1dx_2\dots dx_{k-1}=\\ &=\underbrace{\int\limits_0^1\int\limits_0^1\dots\int\limits_0^1}_{k-1}\left(\left(\min\limits_{1\le i\le k-1}x_i\right)^n-\frac{n}{n+1}\cdot\left(\min\limits_{1\le i\le k-1}x_i\right)^{n+1}\right) \,dx_1dx_2\dots dx_{k-1}=\\ &=I_{\min}(k-1,n)-\frac{n}{n+1}\cdot I_{\min}(k-1,n+1)\end{align}$$ By induction it can be shown that $I_{\min}(k,n)=\frac{k!n!}{(k+n)!}=\binom{k+n}{k}^{-1}$: $\left.I_{\min}(k,n)\right|_{k=1}=\frac{1!n!}{(1+n)!}=\frac{1}{n+1}$ $I_{\min}(k+1,n)=I_{\min}(k,n)-\frac{n}{n+1}\cdot I_{\min}(k,n+1)=\frac{k!n!}{(k+n)!}-\frac{n}{n+1}\cdot\frac{k!(n+1)!}{(k+(n+1))!}=\\=\frac{k!n!}{(k+n+1)!}\cdot((k+n+1)-n)=\frac{(k+1)!n!}{((k+1)+n)!}$
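Both closed forms can be spot-checked by Monte Carlo (a sketch with assumed values $k=3$, $n=2$, for which the exact answers are $3/5$ and $1/\binom{5}{3}=1/10$):

```
set.seed(42)
k <- 3; n <- 2; N <- 1e5
x <- matrix(runif(N * k), ncol = k)    # each row is one sample (x_1, ..., x_k)
c(mc_max = mean(apply(x, 1, max)^n), exact_max = k / (k + n),
  mc_min = mean(apply(x, 1, min)^n), exact_min = 1 / choose(k + n, k))
```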
How do I integrate the indefinite integral, $\frac {\sin t}{t+1}$ w.r.t to t.
I don't think there is an elementary integral, but we can use special functions. Use the substitution $s=t+1$ giving $$ \begin{align} \int\frac{\sin(t)}{t+1}\,\mathrm{d}t &=\int\frac{\sin(s-1)}{s}\,\mathrm{d}s\\ &=\int\frac{\sin(s)\cos(1)-\cos(s)\sin(1)}{s}\,\mathrm{d}s\\ &=\cos(1)\mathrm{Si}(s)-\sin(1)\mathrm{Ci}(s)+C\\[6pt] &=\cos(1)\mathrm{Si}(t+1)-\sin(1)\mathrm{Ci}(t+1)+C \end{align} $$ Where $\mathrm{Si}(x)$ is the Sine Integral and $\mathrm{Ci}(x)$ is the Cosine Integral.
Perturb a given smooth function to a Morse function relative to fixed level sets, which are already fine.
Here's a partial answer: Since the union of the regular level sets $f^{-1}(c_i)$ and the set of all critical points are disjoint closed sets, they are contained in open neighborhoods $U$ and $V$, respectively, whose closures are disjoint. Using the (smooth) Urysohn lemma, we can find a smooth function $\rho: M \to \mathbb{R}$ which satisfies $\rho|_{\bar V} =1$ but vanishes on $\bar U$. Then $\tilde f=f+\rho \langle \cdot, a\rangle$ restricts to $f$ on $\bar U$ (hence on the level sets $f^{-1}(c_i)$) and $f+\langle \cdot,a\rangle$ on $\bar V$ (hence near all critical points of $f$), as desired. This leaves two final conditions to verify: (1) show that $\tilde f$ has no degenerate critical points in $M \setminus (U \cup V)$; in particular, if we can make sure that there are no critical points at all in $M \setminus (U \cup V)$, then $\tilde f$ will be Morse. (2) Show that no new points wound up in the level sets $f^{-1}(c_i)$ --- that is, show that there are no points $x$ outside of $U$ such that $\tilde f(x)=c_i$ for some $i$. I have an argument below that should work whenever $M \setminus (U \cup V)$ is compact, but this isn't always possible if $M$ isn't compact. (Consider a cylinder along the $x$-axis in $\mathbb{R}^3$, where $f$ is the $z$-coordinate projection and $c=0$.) I think the second step should be relatively straightforward, but I haven't thought through it fully. Claim. If we can choose $U$ and $V$ such that $M \setminus (U \cup V)$ is compact, then we can choose $a \in \mathbb{R}^n$ such that $\tilde f$ has no critical points in $M \setminus (U \cup V)$. (In particular, this is possible if $M$ is compact or, more generally, if the set of critical points of $f$ or each of the level sets $f^{-1}(c_i)$ is compact.) Proof. For convenience, embed $M$ into $\mathbb{R}^n$ such that the final coordinate is given by $f$. Let $X_f$ be the gradient vector field on $M$ defined via the inner product inherited from $\mathbb{R}^n$. Then $X_f(x)$ is just the projection of $(0,\ldots,0,1) \in \mathbb{R}^n$ onto $T_x M \subset T_x \mathbb{R}^n \cong \mathbb{R}^n$. Consider the function $df(X_f): M \to \mathbb{R}$ given by $$x \mapsto df_x(X_f(x))=\langle X_f(x),X_f(x)\rangle=\| X_f(x)\|^2.$$ This function vanishes precisely when $X_f$ vanishes, i.e. on the set of critical points of $f$. Therefore $df (X_f)$ is strictly positive on $M \setminus (U \cup V)$. If we can choose $U,V$ such that $M \setminus (U \cup V)$ is compact, then $df(X_f)$ is bounded below by some $\epsilon>0$ on $M \setminus (U \cup V)$. It follows that \begin{equation*} d\tilde f_x(X_f(x))> \epsilon + \rho(x) \langle X_f(x),a\rangle +\langle x,a\rangle d\rho_x(X_f(x)). \end{equation*} Using compactness again, we can find some $\delta >0$ such that $$\big|\rho(x) \langle X_f(x),a\rangle + \langle x,a\rangle d \rho_x(X_f(x))\big|<\delta$$ for $x \in M\setminus (U \cup V)$. Since the above function scales linearly when we replace $a$ with a (positive) scalar multiple of itself, we can make the bound $\delta$ arbitrarily small. In particular, choose $a$ small enough to ensure $\delta<\epsilon$. Then we have \begin{align*} d\tilde f_x(X_f(x))&> \epsilon + \rho(x) \langle X_f(x),a\rangle +\langle x,a\rangle d\rho_x(X_f(x))\\ & \geq \epsilon-\big|\rho(x) \langle X_f(x),a\rangle +\langle x,a\rangle d\rho_x(X_f(x))\big| \\ &> \epsilon - \delta \\ &>0 . \end{align*} (After making $a$ sufficiently small, we can perturb it slightly again to make sure that $\tilde f$ is still Morse on $V$.)
It follows that $\tilde f$ has no critical points in $M \setminus (U \cup V)$.
Prove that $2019^{2018}+2020$ is divisible by at least three primes.
Quick scan with modular arithmetic - no other prime factors less than $100$. I suspect the next one after $11$ is large enough that it would be impractical even if I went to the trouble of writing a program for it. Time for another approach. Edit: as the next prime factor is $397$, found below - it wouldn't have been impractical with the program. Implementing smart modular exponentiation would have taken some time, but looping through a hundred primes is nothing once I've got that part. And no, there's nothing wrong with trying something, getting less than the full problem, and returning with a different approach. There's nothing theoretically wrong with the modular arithmetic, and we did find a factor that way. What's the new approach? Polynomial factorization. Split that $2020$ as $2019+1$. That gives us $n^{2018}+n+1$, evaluated at $n=2019$. Now, add and subtract $n^2$; we get $(n^{2018}-n^2)+(n^2+n+1)$. Both of those terms are divisible by $n^2+n+1$, since $2018\equiv 2\mod 3$. That means that $2019^2+2019+1=4078381$ is a factor of the big number we're looking at. The other factor is $n^{2016}-n^{2015}+n^{2013}-n^{2012}+\cdots+n^3-n^2+1$, which is equivalent to $673-672n^2\equiv 672n+1345\mod n^2+n+1$. Evaluated at $n=2019$ - that's $1358113$, which is relatively prime to $4078381$ (Euclidean algorithm - very easy to run). We have split $2019^{2018}+2019+1$ as a product of two factors that are relatively prime. It's equivalent to $88$ mod $121$, so there's exactly one factor of $11$ in there; as both terms from the polynomial factorization are much larger than $11$, one of them is a product of $11$ and at least one other prime. The other has at least one prime factor, not on the list of what we've already found, and we have at least three distinct prime factors. Done. Incidentally, I happened to have a sieve lying around, so I tested that factor $4078381$ against primes up to $30000$. It splits as $397\cdot 10273$, and both of those are prime. So that's three not-too-huge prime factors we've found, plus whatever else is in the big factor $2019^{2016}-2019^{2015}+2019^{2013}-2019^{2012}+\cdots+2019^3-2019^2+1$.
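These divisibility claims are easy to spot-check numerically; here is a small sketch in R (the `modpow` helper is my own, and plain doubles suffice because every intermediate product stays below $2^{53}$ for these moduli):

```
# Modular exponentiation by repeated squaring (exact in doubles for small moduli)
modpow <- function(b, e, m) {
  r <- 1; b <- b %% m
  while (e > 0) {
    if (e %% 2 == 1) r <- (r * b) %% m
    b <- (b * b) %% m
    e <- e %/% 2
  }
  r
}
(modpow(2019, 2018, 11) + 2020) %% 11              # 0: divisible by 11
(modpow(2019, 2018, 4078381) + 2020) %% 4078381    # 0: divisible by 2019^2 + 2019 + 1
c(397 * 10273, 4078381 %% 397)                     # 4078381 and 0
```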
Proper use of implication and equivalence
Your doubts can be explained by the following theorem: $$\left(A\Leftrightarrow B\right) \Rightarrow \left(A\Rightarrow B\right)$$ The same holds for the other direction, of course. Practically speaking, this means that if two statements are equivalent, it is not a false statement to use an implication there. The reason why most people do this is that in many proofs, you just have to show that some implication is true, which is why you would structure your proof along the pattern $$A\Rightarrow B\Rightarrow \dots\Rightarrow X$$ But in such a case, it is quite natural to just use the implication part of any equivalence, because 1) it would be correct (see above) and 2) you don't have to concern yourself with the question of whether a certain statement is an equivalence or not; it just comes in handy. But the main thing to recognize here is that it just doesn't matter -- as long as you are not trying to prove the "$\Leftarrow$" direction. By the way, the proof (although quite trivial): $$\left(A\Leftrightarrow B\right) \Leftrightarrow \left( \left(A\Rightarrow B\right) \land \left(A\Leftarrow B\right) \right) \Rightarrow \left(A\Rightarrow B\right)$$
How to prove this relation between Ramsey Numbers: $R(s, t) ≤ R(s, t-1) + R(s-1, t)$ for $s,t>2$
The usual approach is very nice: Suppose you are given the complete graph on $n=R(s,t-1)+R(s-1,t)$ vertices, and a coloring of its edges with colors red and blue. Pick one of the vertices, call it $v$. Divide the remaining $n-1$ into two sets $A$ and $B$, according to whether they are joined to $v$ by a red or a blue edge, respectively. Let $a=|A|$ and $b=|B|$. Then $a+b=n-1$, so either $a\ge R(s,t-1)$ or $b\ge R(s-1,t)$. This is because otherwise, $a+b\le n-2$. It should be easy to see how to continue from here. Just for fun, let's compute some upper bounds using this inequality, knowing that $R(2,t)=R(t,2)=t$. We have $R(3,3)\le R(2,3)+R(3,2)=3+3=6$. In fact, we have equality in this case. Moreover, the usual "party argument" one sees sometimes showing $R(3,3)\le 6$ is precisely the argument above. $R(4,3)\le R(3,3)+R(4,2)=6+4=10$. In fact, one can extend the argument above a bit to show that if both $R(s,t-1)$ and $R(s-1,t)$ are even, then we have strict inequality. This means that $R(4,3)\le 9$, and again this is sharp. $R(4,4)\le R(3,4)+R(4,3)=18$. Again, equality holds. $R(5,3)\le R(5,2)+R(4,3)=5+9=14$ and, again, equality holds. $R(5,4)\le R(5,3)+R(4,4)=14+18=32$. (So $R(5,4)\le31$ as both $14$ and $18$ are even.) In fact, $R(5,4)=24$, and the bound ceases being optimal around here. Improving this bound turns out to be remarkably difficult and the subject of much work. For a nice up to date list of the known values and bounds for Ramsey numbers, together with references, see the dynamic survey on "Small Ramsey numbers" by Stanisław Radziszowski, last updated March 3, 2017, in the Electronic Journal of Combinatorics. (I see I had suggested the same paper as an answer to this other question.)
Finding First Variation
Let $$\begin{align}I[r]&=\int_{\theta_0}^{\theta_1}L(\theta,r,r')\mathrm{d}\theta\\ L(\theta,r,r')&=\sqrt{(r')^2+r^2}\text{.}\end{align}$$ Write $$L(h)=L(\theta,r+h\delta r,r'+h\delta r')$$ where $h$ is a scalar. If $L$ is differentiable at $h=0$ for constant $\theta$, $r$, $r'$ $\delta r$, and $\delta r'$, then $$L(h)=L(0)+h L'(0)+o(h)$$ where $$L'(0)=L_{,r}(\theta,r,r')\delta r+L_{,r'}(\theta,r,r')\delta r'\text{.}$$ Then $$\begin{split}I[r+h\delta r]&=\int_{\theta_0}^{\theta_1}L(h)\,\mathrm{d}\theta\\ &=\int_{\theta_0}^{\theta_1}\left(L(0)+hL'(0)+o(h)\right)\,\mathrm{d}\theta\\ &=I[r]+h\int_{\theta_0}^{\theta_1}L'(0)\mathrm{d}\theta +o(h)\text{.} \end{split}$$ and $$\begin{split}\int_{\theta_0}^{\theta_1}L'(0)\mathrm{d}\theta&= \int_{\theta_0}^{\theta_1}\left(L_{,r}(\theta,r,r')\delta r+L_{,r'}(\theta,r,r')\delta r'\right)\mathrm{d}\theta \\ &=\int_{\theta_0}^{\theta_1}\left(L_{,r}(\theta,r,r')-\frac{\mathrm{d}}{\mathrm{d}\theta}L_{,r'}(\theta,r,r')\right)\delta r\mathrm{d}\theta+\left.L_{,r'}(\theta,r,r')\delta r\right\rvert_{\theta_0}^{\theta_1}\text{.} \end{split}$$ For each function $r$, the right side of this last equality is a linear functional in the function $\delta r$: call it $\nabla I[r]$: $$\langle \nabla I[r],\delta r\rangle\stackrel{\text{def}}{=}\int_{\theta_0}^{\theta_1}\left(L_{,r}-\frac{\mathrm{d}}{\mathrm{d}\theta}L_{,r'}\right)\delta r\mathrm{d}\theta+\left.L_{,r'}\delta r\right\rvert_{\theta_0}^{\theta_1}\text{.}$$ Then $$I[r+h\delta r]=I[r]+h\langle\nabla I[r],\delta r\rangle + o(h)\text{;}$$ $\nabla I[r]$ is precisely the Gâteaux derivative (or first variation) of $I$ with respect to $r$. Note that $\nabla I[r]$ is composed of two terms: a "bulk" term $\int_{\theta_0}^{\theta_1}\left(L_{,r}-\frac{\mathrm{d}}{\mathrm{d}\theta}L_{,r'}\right)\delta r\mathrm{d}\theta$—this vanishes for all $\delta r$ if and only if the Euler—Lagrange equations hold; and a "boundary" term $\left.L_{,r'}\delta r\right\rvert_{\theta_0}^{\theta_1}$—setting this to zero gives us boundary conditions for the Euler—Lagrange equation if the endpoints of the variation are not fixed. Frequently, one encounters minimization problems in mechanics in which boundary conditions are not fixed—using all of the first variation instead of just the Euler—Lagrange equations tells us what those boundary conditions should be.
Find the sum of first 99 terms of the sequence defined by $T_{n}=\frac{1}{5^{2n-100}+1}$
Looking at this carefully, the first thing one notices is $T_{50-n}=5^{2n}T_{50+n}$, from which some properties emerge. However, much better is $T_n+T_{100-n}=1$ together with $T_{50} = \frac12$, from which the solution is immediate. Indeed, writing $a=5^{2n-100}$ we have $$T_n+T_{100-n}=\frac{1}{a+1}+\frac{1}{a^{-1}+1}=\frac{1}{a+1}+\frac{a}{a+1}=1.$$ The sum is $T_1+T_2+T_3+\cdots+T_{49}+ \frac12 +T_{51}+T_{52}+\cdots+T_{98}+T_{99}$, and rearranging terms (pairing $T_n$ with $T_{100-n}$) we get $\sum_{n=1}^{49}(T_n+T_{100-n}) =49$. Thus the answer is $49+\frac12=\frac{99}{2}$.
$S \times T \subseteq T \times W$. Prove $S \subseteq W$.
$$ S\times T \subseteq T\times W \implies S\subseteq T \text { and } T\subseteq W $$ Transitivity of inclusion implies $$S\subseteq W$$
Derivative of $x\arctan x$?
You don't need to go through the process of "$y = \arctan x$ therefore $\tan y = x$ and..." if you already know the following: $$ \frac d{dx} \arctan x = \frac1{x^2+1}$$ So, you can just use the product rule: $$ \frac d{dx} \left[f(x)g(x)\right] = f'(x) g(x) + f(x)g'(x)$$ with $f(x) = x$ and $g(x) = \arctan x$. If you don't know that $\dfrac d{dx} \arctan x = \dfrac1{x^2+1}$, then you are on the right track. $y = \arctan x$, so $x = \tan y$, and then you can differentiate both sides with respect to $x$, making sure to use the chain rule on the right-hand side: \begin{align*} x &= \tan y\\ \frac d{dx} x &= \frac d{dx} \tan y\\ 1 &= (\sec^2 y) \cdot \frac{dy}{dx}\\ \frac1{\sec^2 y} &= \frac{dy}{dx} \end{align*} Now you just need to express $\sec^2 y$ in terms of $x$. I'll leave that part to you, but here's how to get started: Draw a right triangle and label one of the acute angles and the corresponding sides accordingly so that the picture shows $x = \tan y$. It may be easier to think of it as $\tan y = \dfrac x1 = \dfrac{\text{opposite}}{\text{adjacent}}$.
Definition of quasiprojective variety by Shafarevich
He means "open" relative to the "closed projective set"; the latter gains a subspace topology as a subset of $\mathbb P^n$. It would be good to review the subspace topology. This is just a topological condition and has any number of equivalent guises: it's the intersection of a closed subset of $\mathbb P^n$ and an open subset of $\mathbb P^n$; it's a (relatively) open subset of its closure in $\mathbb P^n$; it's a relatively closed subset of some open subset of $\mathbb P^n$. In particular, it's the same as being locally closed. He probably uses the given definition because it's the most natural one in the classical language: the central objects are the closed subsets of projective space and everything should descend from those. In the language of schemes that he introduces in Volume II this becomes more subtle.
Compute the right-hand limit; calculus
First note that $$ \frac{d}{dx}\int_a^x f(t)\ dt = \frac{d}{dx}\left[F(x)-F(a)\right] $$ $$= \frac{d}{dx}F(x) - \frac{d}{dx}F(a) = f(x) - 0 = f(x) $$ And $$ \frac{d}{dx}\int_a^b f(t)\ dt = \frac{d}{dx}\left[F(b)-F(a)\right] $$ $$= \frac{d}{dx}F(b) - \frac{d}{dx}F(a) = 0 - 0 = 0$$ Therefore $$ \lim\limits_{x\to\frac12^+} \left[\frac{F(x)-F\left(\frac12\right)}{x -\frac12}\right]$$ $$= \lim\limits_{x\to\frac12^+} \left[\frac{\frac{d}{dx}\left[F(x)-F\left(\frac12\right)\right]}{\frac{d}{dx}\left[x -\frac12\right]}\right]$$ $$= \lim\limits_{x\to\frac12^+} \left[\frac{\frac{d}{dx}F(x)-\frac{d}{dx}F\left(\frac12\right)}{\frac{d}{dx}x -\frac{d}{dx}\frac12}\right]$$ $$=\lim\limits_{x\to\frac12^+} \left[\frac{f(x)-0}{1 -0}\right]=\lim\limits_{x\to\frac12^+} f(x)$$ Note that $F\left(\frac12\right)$ produces a constant and the derivative of a constant is zero.
Basic probability limit problem
Outline: We define $f(x)$ to be $0$ except near non-zero integers. We will describe $f(x)$ when $x$ is near the positive integer $n$. For the negative integers, reflect across the $y$-axis. Let $a_n=n-\frac{1}{100\cdot 2^{n^2}}$ and let $b_n=n+\frac{1}{100\cdot 2^{n^2}}$. Over the interval $a_n\le x\le n$, define $f(x)$ by using the line that joins $(a_n,0)$ to $(n,\frac{1}{n})$. Over the interval $n\le x\le b_n$, define $f(x)$ analogously, using the line that falls from $(n,\frac{1}{n})$ to $(b_n,0)$. The areas under the spikes decrease very rapidly, and the sum of these areas is small. Let $g(x)=ke^{-x^2}+f(x)$, where $k$ is chosen so that $\int_{-\infty}^\infty g(x)\,dx=1$. This also takes care of the positivity requirement on $g(x)$. Because far out the spikes have very small area, the mean and variance of a random variable with density function $g(x)$ exist. Since the mean exists, we can conclude by symmetry that it is $0$. Note that $xf(x)=1$, and hence $xg(x)>1$, whenever $x$ is a positive integer. The function $g(x)$ is continuous but not smooth: the derivative does not exist at $a_n$, $n$, and $b_n$. However, the spikes can be smoothed out. A small modification will get us differentiability. Indeed we can make $g(x)$ infinitely differentiable everywhere by replacing $f(x)$ by an infinite sum of bump functions that vanish outside $(a_n,b_n)$ and reach height $\frac{1}{n}$ at $n$.
Product of non-zero eigenvalues (pseudo-determinant) of a singular matrix
This doesn't hold. E.g. when $A=B=1$ and $C=0$, we have $$ P=\pmatrix{1&1\\ 1&1} $$ and hence $$ Det(P)=2\ne 1=\det(A)Det(C).\tag{2} $$ This counterexample can be easily extended to the case where $C$ is nonzero or even entrywise nonzero. First, by appending $1$s to $A,B$ and $C$, i.e. when $A=B=I_2$ and $C=\operatorname{diag}(0,1)$, the new $P$ is (by a simultaneous permutation of rows and columns) similar to $\pmatrix{1&1\\ 1&1}\oplus\pmatrix{1&1\\ 1&2}$. Therefore $Det(P)=2$ and the inequation $(2)$ still holds. Next, to make $C$ entrywise nonzero, we can simply replace $C$ by $QCQ^T$ for some appropriate real orthogonal matrix $Q$. In particular, the inequation $(2)$ is still true when $A=B=I_2$ and $$ C=\left[\frac{1}{\sqrt{2}}\pmatrix{1&1\\ 1&-1}\right]\pmatrix{0&0\\ 0&1}\left[\frac{1}{\sqrt{2}}\pmatrix{1&1\\ 1&-1}\right]=\frac12\pmatrix{1&-1\\ -1&1}. $$
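The $2\times 2$ counterexample is easy to confirm numerically (R shown, but any linear-algebra tool works):

```r
P  <- matrix(c(1, 1, 1, 1), 2, 2)  # the matrix P from the counterexample (A = B = 1, C = 0)
ev <- eigen(P)$values              # eigenvalues 2 and 0
prod(ev[abs(ev) > 1e-12])          # pseudo-determinant = 2, while det(A) * Det(C) = 1
```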
Proof that $n^3-n$ is a multiple of $3$.
$n^3-n = (n+1)n(n-1).$ The right hand side is a product of three consecutive integers... For a proof by induction, note that $(n+1)^3-(n+1) - n^3 + n = 3n^2 + 3 n,$ which is obviously divisible by $3.$ So, if you prove the statement for $n=0,$ you are good to go.
How to show that $\{ f_n \}_{n=1}^{\infty} $ is locally uniform convergent, when $f_n (z) = \sum_{k = -n}^{k=n} \frac{1}{z+k} $?
Your idea was good, but it's better to write $g_n(z)=\dfrac{2z}{z^2-n^2}$. This denominator can be a problem when $z\approx n$. But when studying convergence, we only need to consider large $n$, and we get to decide what "large" means. For example, we can choose to deal only with $n\ge 2|z|$. In this case $$|z^2-n^2| \ge n^2 -|z|^2 \ge \frac{3}{4}n^2$$ So, the maximum of $|g_n|$ on the disk $|z|\le R$ does not exceed $\dfrac{2R}{\frac34 n^2}$ provided that $n\ge 2R$. The rest should be clear.
Given $G,H_1,H_2$ finite abelian groups and $G\times H_1 \cong G\times H_2$ then $H_1 \cong H_2$
Let $G,H_1,H_2$ be finite abelian groups. Let $\Phi$ be an isomorphism as follows $$\Phi:G\times H_1\to G\times H_2$$ We write $\Phi$ as its components: $\Phi=(\Phi_1,\Phi_2)$ where $$\Phi_1:G\times H_1\to G,\quad\Phi_2:G\times H_1\to H_2$$ are surjective group homomorphisms. Then they induce isomorphisms $$G\cong\frac{G\times H_1}{\ker\Phi_1},\quad H_2\cong\frac{G\times H_1}{\ker\Phi_2}$$ Lemma 1. The following are isomorphisms $$\phi_1=\Phi_1|_{\ker\Phi_2}:\ker\Phi_2\to G$$ $$\phi_2=\Phi_2|_{\ker\Phi_1}:\ker\Phi_1\to H_2$$ Proof. Apparently $\phi_1,\phi_2$ are homomorphisms. We first prove that $\phi_1$ is injective. Let $(a,b),(c,d)\in\ker\Phi_2$ be such that $$\Phi_1(a,b)=\Phi_1(c,d)$$ Then $\Phi_1(ac^{-1},bd^{-1})=1_G$. On the other hand, $\Phi_2(ac^{-1},bd^{-1})=1_{H_2}$, since $(a,b),(c,d)\in\ker\Phi_2$. This means $\Phi(ac^{-1},bd^{-1})=(1_G,1_{H_2})\implies(ac^{-1},bd^{-1})\in\ker\Phi\implies a=c,b=d$ as $\Phi$ is an isomorphism. Now note that $$G\times H_1\cong G\times H_2\implies|H_1|=|H_2|$$ $$H_2\cong\frac{G\times H_1}{\ker\Phi_2}\implies|H_2|=\frac{|G||H_1|}{|\ker\Phi_2|}$$ We get $$|G|=|\ker\Phi_2|$$ Hence $\phi_1$ is an injective homomorphism between two groups with equally many elements, and therefore an isomorphism. By the same argument $\phi_2$ is also an isomorphism. $\quad$ Lemma 2. Every $(a,b)\in G\times H_1$ can be written uniquely as $$(a,b)=xy,\ x\in\ker\Phi_1,\ y\in\ker\Phi_2$$ Proof. Let $$x=\phi_2^{-1}\Phi_2(a,b)\in\ker\Phi_1,\quad y=\phi_1^{-1}\Phi_1(a,b)\in\ker\Phi_2$$ This means $$\Phi(x)=(\Phi_1(x),\Phi_2(x))=(1_G,\Phi_2(a,b))$$ $$\Phi(y)=(\Phi_1(y),\Phi_2(y))=(\Phi_1(a,b),1_{H_2})$$ $$\implies\Phi(xy)=\Phi(x)\Phi(y)=(1_G,\Phi_2(a,b))(\Phi_1(a,b),1_{H_2})=(\Phi_1(a,b),\Phi_2(a,b))=\Phi(a,b)$$ $$\implies(a,b)=xy$$ Suppose $(a,b)=xy=x'y'$ with $x',y'$ also as above. Then $\Phi_1(y)=\Phi_1(xy)=\Phi_1(x'y')=\Phi_1(y')$. But $\Phi_1$ restricted to $\ker\Phi_2$ is injective, hence $y=y'$. similarly $x=x'$. Now we write elements in $G\times H_1$ as in Lemma 2 and define this mapping $$G\times H_1\to G\times H_1$$ $$xy\mapsto(\phi_1(y),\phi_2(x))$$ This is easily verified to be an automorphism that carries $\ker\Phi_2$ to $G$. By my comments above $$\frac{G\times H_1}{\ker\Phi_2}\cong\frac{G\times H_1}{G}$$ Combined with (obtained above Lemma 1) $$H_2\cong\frac{G\times H_1}{\ker\Phi_2}$$ we have $$H_2\cong\frac{G\times H_1}{\ker\Phi_2}\cong\frac{G\times H_1}{G}\cong H_1$$
Proving the the x-axis is closed and not open
Your approach is correct. You may consider two different cases for your point $(a,b)$: one case where $b>0$ and the other where $b<0$. Then it is more straightforward to show that $z\ne 0$.
Topologies coinciding at a point or a set.
I would guess that it means that every open neighborhood of $x$ in $\tau_1$ contains an open neighborhood of $x$ in $\tau_2$, and vice versa. This essentially means that "continuity at $x$" is the same in each topology. It's possible, though, that the stronger statement is intended: the neighborhoods of $x$ are the same in each topology. There's a lovely way to define the "local topology" at a point. Let $(X,\tau)$ be a topological space and $x\in X$. Define a new topology $\tau_x$ which has as a basis the singletons $\{y\}$ with $y\neq x$, together with the sets $U\in\tau$ such that $x\in U$. So the open sets are any set not containing $x$, or any set containing a $\tau$-neighborhood of $x$. It is not hard to see that this is a topology. In this case, my first definition would be equivalent to saying, for topologies $\tau$ and $\rho$, they coincide at $x$ if and only if $\tau_x=\rho_x$. Now, in the complete lattice of topologies on $X$, $$\bigcap_{x\in X}\tau_x = \tau$$ So we can recover $\tau$ if we know each $\tau_x$. Then, for a topological space, $Y$, we can define a function $f:X\to Y$ to be $\tau$-continuous at $x$ if it is continuous on $(X,\tau_x)$, and it turns out, due to the lattice meet property above, the function $f$ is continuous on $(X,\tau)$ if and only if it is continuous on each $(X,\tau_x)$. There's also a simple "local range topology", $\tau^x$ defined as just the $\tau$-open sets containing $x$, and the empty set. Then you can say that $f:Y\to X$ is continuous "to" $x$ if it is continuous when using the topology $\tau^x$. Again, we can recover $\tau$ if we know each $\tau^x$. Indeed, $\tau = \bigcup \tau^x$. The stronger definition above would happen exactly when $\tau^x=\rho^x$. It's true, though, that $\tau^x=\rho^x\implies\tau_x=\rho_x$, so maybe this stronger definition is reasonable.
analogy between iid and sampling with replacement
Yes, in sampling with replacement with all outcomes equally likely, the samples are indeed independent and identically distributed. Suppose you have a bin with $n$ balls, numbered $\{1, ..., n\}$, and you sample 3 times with replacement. So all outcomes have the form $(x_1, x_2, x_3)$, where $x_i$ represents the ball number of sample $i \in \{1, 2, 3\}$, and $x_1, x_2, x_3 \in \{1, ..., n\}$. So there are a total of $n^3$ possible outcomes. Suppose all $n^3$ outcomes are equally likely. Then each outcome has probability $1/n^3$. Since an event is just a set of outcomes, the probability of any event $B$ is equal to: $$ P[B] = \frac{|B|}{n^3} $$ where $|B|$ denotes the number of outcomes in the event $B$. Define the random variable $X_i$ to be the ball number of sample $i$. For given integers $k_1, k_2, k_3 \in \{1, ..., n\}$, define the events: \begin{align} A_1 &= \{X_1 \leq k_1\}\\ A_2 &=\{X_2 \leq k_2 \}\\ A_3 &= \{X_3 \leq k_3\} \end{align} Then $A_1 \cap A_2 \cap A_3$ is the event that all three events happen. There are $k_1k_2k_3$ ways for this to occur, that is, $|A_1 \cap A_2 \cap A_3|=k_1k_2k_3$. So: $$P[A_1 \cap A_2 \cap A_3] = \frac{k_1k_2k_3}{n^3}=\frac{k_1}{n}\frac{k_2}{n}\frac{k_3}{n}$$ On the other hand, you can show that $|A_1|=k_1n^2$, $|A_2|=k_2n^2$, $|A_3|=k_3n^2$, and so $$P[A_1]=\frac{k_1}{n}, P[A_2]=\frac{k_2}{n}, P[A_3]=\frac{k_3}{n} $$ Thus, $$P[A_1 \cap A_2 \cap A_3] \overset{(a)}{=} P[A_1]P[A_2]P[A_3]$$ Similarly, you can show \begin{align} P[A_1\cap A_2] &\overset{(b)}{=}P[A_1]P[A_2]\\ P[A_1\cap A_3] &\overset{(c)}{=}P[A_1]P[A_3]\\ P[A_2\cap A_3]&\overset{(d)}{=}P[A_2]P[A_3] \end{align} We deduced all this directly from the fact that all samples are equally likely, without using the concept of independence. However, this motivates a good definition for independence. Indeed, three events $A_1, A_2, A_3$ are defined to be mutually independent if equalities (a)-(d) hold. In terms of the random variables $X_i$, the equality (a) implies: $$P[X_1\leq k_1, X_2\leq k_2, X_3 \leq k_3]\overset{(e)}{=}P[X_1\leq k_1]P[X_2\leq k_2]P[X_3\leq k_3]$$ This holds for all integers $k_1, k_2, k_3 \in \{1, ..., n\}$. With some thought, you can convince yourself that it also holds for all real numbers $k_1, k_2, k_3 \in \mathbb{R}$. In general, we define three random variables $X_1, X_2, X_3$ to be mutually independent if equality (e) holds for all real numbers $k_1, k_2, k_3 \in \mathbb{R}$. Hence, the three random variables $X_1, X_2,X_3$ of interest here are mutually independent. In fact, it is easy to see that they are also identically distributed.
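A small simulation makes the same point empirically (the bin size, events and trial count below are arbitrary choices):

```r
set.seed(1)
n <- 6; trials <- 1e5
X1 <- sample(n, trials, replace = TRUE)   # sampling with replacement
X2 <- sample(n, trials, replace = TRUE)
mean(X1 <= 2 & X2 <= 3)                   # empirical joint probability
mean(X1 <= 2) * mean(X2 <= 3)             # product of empirical marginals; both are near (2/6)*(3/6) = 1/6
```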
How to prove these linear congruence equations?
For 2), $a\equiv a' \mod p$ means $a^2\equiv 1 \pmod p$. Now the congruence classes $\mathbf Z/p\mathbf Z$ form a field, which implies the quadratic equation $a^2-1=0$ has at most two roots. And indeed it has these two: $1$ and $-1$. For $3)$ (which is known as Wilson's theorem), note that each congruence class $a$, except the classes of $1$ and $p-1\equiv -1$, which are their own inverses, is paired with another class $a'\ne a$ such that $aa'\equiv 1$. So $$(p-1)!= 1\cdot2\cdots (p-2)(p-1)\equiv 1\cdot(-1)\cdot\!\prod_{2\le a\le p-2}\!a\equiv 1\cdot(-1) \cdot1^{\tfrac{p-3}2}=-1. $$
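A tiny numerical illustration of $3)$ (small primes only, so the factorials stay exact in double precision):

```r
sapply(c(3, 5, 7, 11, 13), function(p) factorial(p - 1) %% p)
# 2 4 6 10 12  -- i.e. (p-1)! is congruent to p-1, that is to -1, mod p
```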
How to find the probability of one die roll being higher than a second die roll?
Picture the points $\{(x,y): x\in X, y\in Y\}$ as dots in the plane. What is the ratio of the number of points under the line $y = x$ (marked black) to the total number of points? This picture is for $x\ge y$. How does the picture look when $y\ge x$? Alternatively, by the law of total probability: if we assume $x\ge y$, then $$P(X>Y) = \sum_{j = 1}^y P(X>j)P(Y = j) = \sum_{j = 1}^y\frac{x-j}{x}\cdot \frac{1}{y} =\frac{1}{xy}(xy-\frac{y(y+1)}{2}) $$ I intentionally didn't simplify the last expression, because $(xy-\frac{y(y+1)}{2})$ is the total number of black points, and we divide by the total number of points $xy$. Now you can see the connection between the probability method and the geometric meaning. Now for the case $y\ge x$, we have $$P(X>Y) = \sum_{j = 1}^x P(X>j)P(Y = j)$$ I encourage you to work this out and draw a picture. [EDIT] As discussed in the comments, $$P(X>Y) = \sum_{j = 1}^{\max(x,y)} P(X>j)P(Y = j) = \sum_{j = 1}^{\min(x,y)} P(X>j)P(Y = j) $$ because when $j$ goes beyond $\min(x,y)$ then either $P(X>j) = 0$ or $P(Y = j) = 0$. This can be verified from the picture as well.
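For a concrete check of the $x\ge y$ formula by brute-force counting (the values of $x$ and $y$ are just an example):

```r
x <- 6; y <- 4
mean(outer(1:x, 1:y, ">"))     # fraction of grid points with X > Y
(x*y - y*(y+1)/2) / (x*y)      # formula above; both equal 7/12
```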
Non-factorizable graphs in which every edge can be extended to a maximum matching
The Edmonds-Gallai decomposition could be what you're looking for. It decomposes a graph into sets $A$, $B$ and $C$, where $A$ is all the vertices missed by at least one maximum matching, $B = N(A)$, and $C$ is all other vertices. Basically, every maximum matching matches vertices in $B$ to odd components of $G - B$ in $A$. In $G - B$, $C$ consists of even components with perfect matchings. If you follow what this means for a permissive graph, $B$ must be an independent set, since such edges must not be used in a maximum matching. Furthermore, there are no edges from $C$ to the rest of the graph (so if we assume $G$ is connected, $C$ is empty). That's quite a bit of structure right there. I believe the components of $A$ are automatically permissive, and that any of the edges from $A$ to $B$ can automatically be used in maximum matchings, but I'm not sure. Edit: I noticed the Lovasz article you cite basically makes the same points I was making above, so you may have been aware of this information. But regardless, quite a bit is known about such graphs.
$f_n(z)={z^n\over n}$, $z\in D$ open unit disk then
In what follows $\|\cdot\|_{C^0(D)}$ denotes the supremum norm $\|f\|_{C^0(D)} = \sup_{z\in D}|f(z)|$. Your reasoning for (1) is right, perhaps you should mention that $\| \sum_{n=1}^N f_n\|_{C^0(D)} \ge \sum_{n=1}^N \frac 1n$, so as the partial sums are unbounded, we cannot have uniform convergence. We have $\|f_n\|_{C^0(D)} = \frac 1n\to 0$, so $f_n \to 0$ uniformly. For $f_n'(z) = z^{n-1}$, note that $f_n'(z) \to 0$ pointwise, but $\|f_n'\|_{C^0(D)} = 1$, if $(f_n')$ were uniformly convergent, necessarily $f_n' \to 0$, contradicting $\|f_n'\|_{C^0(D)} \to 1$. Yes, as you said, on $D$, we have $\sum_{n=1}^\infty z^{n-1} = \frac 1{1-z}$ pointwise (as fgp mentioned in his comment, not uniformly, but this isn't asked). For (4) we have, as you say, $f_n''(z) = (n-1)z^{n-2} \to 0$ for $z\in D$.
Graph with three bridges not in single path whith perfect matching
Please forgive the MS paint. This graph is clearly cubic and has a perfect matching. Notice also that there is no path that can cross all three bridges at once.
Let $\mathbb F = \mathbb Z_{3}[x]/\langle x^{2}+1 \rangle$ Show that $\phi : G \to \mathbb F\backslash\{0\}$ defined by ...
Hints: $$\Bbb F=\{a+bw\;;\;a,b\in\Bbb F_3\;,\;\;w^2=-1\}\;\;\text{and operations modulo}\;\;3$$ This field is a vector space of dimension two over $\;\Bbb F_3\;$, with basis $\;\{1,w\}\;$, and from here follows the solution to your first question (injectivity + surjectivity). As for the product: use that $\;w^2=-1\;$ (in your post this element $w$ is called $x$, which can be a little confusing because $x$ is also the variable of the polynomials).
Determinant of exchange matrix nxn
Let $J_n$ denote the exchange matrix of size $n$. It suffices to show the following: $\det(J_1) = 1$ $\det(J_2) = -1$ For $n \geq 3$, $\det(J_n) = -\det(J_{n-2})$. The first two statements are easy to prove directly. For the third statement, proceed as follows: $$ \begin{align} \det(J_n) &= \det \pmatrix{0&J_{n-2}\\J_2 & 0} \\ & = \det\pmatrix{J_2 & 0\\0&J_{n-2}} \\ & = \det(J_2)\det(J_{n-2}) = -\det(J_{n-2}). \end{align} $$
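The pattern $1,-1,-1,1,1,-1,\dots$ predicted by the recursion is easy to confirm numerically, e.g. in R:

```r
J <- function(n) { m <- matrix(0, n, n); m[cbind(1:n, n:1)] <- 1; m }  # exchange matrix of size n
round(sapply(1:8, function(n) det(J(n))))
# 1 -1 -1  1  1 -1 -1  1
```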
If $B\subset A$ and there exists an injection $A\rightarrow B$, do $A$ and $B$ have the same cardinality?
If $B \subset A$ then there exists an injection $g : B \to A$ defined in the natural way (the inclusion map $g(b)=b$). Given $g$ and the injection $f : A \to B,$ invoke the Cantor–Bernstein–Schroeder theorem: there exists a bijection $h : A \to B.$ Hence $A$ and $B$ have the same cardinality.
Find the following limit $\lim_{x\to 0}\frac{\sqrt[3]{1+x}-1}{x}$ and $\lim_{x\to 0}\frac{\cos 3x-\cos x}{x^2}$
Revised to avoid l’Hospital’s rule: Your second one can be finished off like this: $$\begin{align*} \lim_{x\to 0}\frac{-2\sin 2x\sin x}{x^2}&=-2\left(\lim_{x\to 0}\frac{\sin 2x}x\right)\left(\lim_{x\to 0}\frac{\sin x}x\right)\\ &=-4\left(\lim_{x\to 0}\frac{\sin 2x}{2x}\right)\cdot1\\ &=-4\;. \end{align*}$$ Try multiplying the fraction in your first limit by $$\frac{(1+x)^{2/3}+(1+x)^{1/3}+1}{(1+x)^{2/3}+(1+x)^{1/3}+1}$$ and making use of the identity $(a^3-b^3)=(a-b)(a^2+ab+b^2)$.
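Carrying that first suggestion through, just to record where it lands: $$\frac{\sqrt[3]{1+x}-1}{x}\cdot\frac{(1+x)^{2/3}+(1+x)^{1/3}+1}{(1+x)^{2/3}+(1+x)^{1/3}+1}=\frac{(1+x)-1}{x\left((1+x)^{2/3}+(1+x)^{1/3}+1\right)}=\frac{1}{(1+x)^{2/3}+(1+x)^{1/3}+1}\;\xrightarrow[x\to0]{}\;\frac13.$$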
Is it true that $\text{co}(A) = \{\lambda a + (1-\lambda)b: a,b\in A\}?$
If $A$ is convex then $\operatorname{co}(A)=A$ and also $\{\lambda a+(1-\lambda)b:0\leq \lambda \leq 1,\ a \in A,\ b \in A \} =A$, so in that case the two sets coincide.
Project a point onto a plane, given some constraints
Suppose $P_2$ has coordinates $x_2$ and $y_2.$ Then you are asking for a point on the line $P_aP_b$ that has either the same $x$ coordinate or the same $y$ coordinate. Take the equation of the line and substitute $x=x_2$ so $y$ is the only unknown, and solve for $y$. If there is a solution, that gives you the point projected vertically. Next, in the original equation, substitute $y=y_2$ and solve for $x.$ If there is a solution, it's your horizontal projection. Finally, choose the closer point. If you find the slope of the line (using the equation) you can just compute one point: the vertical and horizontal offsets from $P_2$ to the line differ by a factor equal to the slope, so if the slope is strictly between $-1$ and $1$ then the vertical projection will be the closer one; otherwise the horizontal projection is the one you want.
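If it helps to see the whole procedure in one place, here is a small sketch in R (the function name and test values are mine; it assumes the line is neither vertical nor horizontal, which would need separate handling):

```r
project_axis_aligned <- function(Pa, Pb, P2) {
  m <- (Pb[2] - Pa[2]) / (Pb[1] - Pa[1])  # slope of the line P_a P_b
  b <- Pa[2] - m * Pa[1]                  # intercept
  vert <- c(P2[1], m * P2[1] + b)         # same x as P2, solve for y
  horz <- c((P2[2] - b) / m, P2[2])       # same y as P2, solve for x
  if (sum((vert - P2)^2) <= sum((horz - P2)^2)) vert else horz
}
project_axis_aligned(c(0, 0), c(4, 1), c(2, 3))  # slope 1/4 < 1, so the vertical projection (2, 0.5) is returned
```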
Find a Fourier series to represent the function exp(x) for x belongs to (-pi,pi) and hence derive pi/sinh(pi).
Hint: Find $$a_0=\dfrac{1}{\pi}\int_{-\pi}^{\pi} e^x dx$$ $$a_n=\dfrac{1}{\pi}\int_{-\pi}^{\pi} e^x\cos(nx) dx$$ $$b_n=\dfrac{1}{\pi}\int_{-\pi}^{\pi} e^x\sin(nx) dx$$ then $$S(f)(x)=\frac{1}{2}a_0+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx)$$
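Carrying the hint through (sketching the arithmetic; it is worth re-deriving these coefficients yourself): the integrals give $$a_0=\frac{2\sinh\pi}{\pi},\qquad a_n=\frac{2(-1)^n\sinh\pi}{\pi(1+n^2)},\qquad b_n=\frac{2(-1)^{n+1}n\,\sinh\pi}{\pi(1+n^2)}.$$ Evaluating the series at $x=0$, where it converges to $e^0=1$, then yields $$1=\frac{\sinh\pi}{\pi}\left(1+2\sum_{n=1}^{\infty}\frac{(-1)^n}{1+n^2}\right)\quad\Longrightarrow\quad\frac{\pi}{\sinh\pi}=1+2\sum_{n=1}^{\infty}\frac{(-1)^n}{1+n^2},$$ which is presumably the identity for $\pi/\sinh(\pi)$ the problem is after.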
Real part of an analytic function
I will show an example with the power series for $e^z,\,z\in\mathbb{C}$. The power series is $$\sum_{n=0}^{\infty}\frac{z^n}{n!}=\sum_{n=0}^{\infty}\frac{(re^{i\varphi})^n}{n!}.$$ Here $r$ is the magnitude of $z$ and $\varphi$ the phase angle. Note that $\operatorname{Re}(z_1+z_2)=\operatorname{Re}(z_1)+\operatorname{Re}(z_2),$ so $$\operatorname{Re}\sum_{n=0}^{\infty}\frac{r^ne^{i\varphi n}}{n!}=\sum_{n=0}^{\infty}\operatorname{Re}\frac{r^ne^{i\varphi n}}{n!}.$$ Euler's formula tells us that $e^{ix}=\cos x+i\sin x,\,x\in\mathbb{R}.$ From now on you should be able to calculate the last steps. Hint: $r^n/n!$ is real, so you can first take the real part of $e^{i\varphi n}$ and then multiply by $r^n/n!$.
Sequence $x_n$ such that $x_{n+1}=3x_n(1-x_n)$ converges to $2/3$
A proof sketch: Suppose that for some $n \in \mathbb{N}$ we have $x_n=\frac{2}{3}+\epsilon$ with $-\frac{2}{3}<\epsilon<\frac{1}{3}$, i.e. $0<x_n<1$ (note that we can always find such $n$). Then we have $x_{n+1}=3(\frac{2}{3}+\epsilon)(\frac{1}{3}-\epsilon)=\frac{2}{3}-\epsilon-3\epsilon^2$, and $x_{n+2}=3\left(\frac{2}{3}-(\epsilon+3\epsilon^2)\right)\left(\frac{1}{3}+(\epsilon+3\epsilon^2)\right)=\frac{2}{3}+\epsilon-18\epsilon^3-27\epsilon^4$. Hence, we have $|x_{n}-\frac{2}{3}| = |\epsilon|$ and $|x_{n+2}-\frac{2}{3}|= |\epsilon-18\epsilon^3-27\epsilon^4|$. Now, all we have to show is that $|\epsilon| > |\epsilon-18\epsilon^3-27\epsilon^4|$, so that $|x_{n+2}-\frac{2}{3}|<|x_{n}-\frac{2}{3}|$; by induction this extends to $|x_n-\frac{2}{3}|>|x_{n+2}-\frac{2}{3}|>|x_{n+4}-\frac{2}{3}|>\ldots$, and this non-negative monotonically decreasing sequence, bounded from below by $0$, converges; granting the strict inequality for all admissible $\epsilon\neq 0$, its limit can only be $0$, i.e., $\{x_n\}$ converges to $\frac{2}{3}$.

For example, let's start with $x_0=\frac{1}{6}$. The following R code simulates the sequence and plots the iterates $x_n$:

```r
x <- 1/6
xs <- c(x)
for (i in 1:10000) {
  x <- 3*x*(1-x)
  xs <- c(xs, x)
}
plot(1:length(xs), xs, type='l', xlab='n', ylab='x_n')
```

The successive distances to $\frac{2}{3}$ look like this:

```r
abs(xs[1:100] - 2/3)
#   [1] 0.50000000 0.25000000 0.06250000 0.07421875 0.05769348 0.06767909 0.05393772 0.06266555 0.05088463 0.05865237 0.04833207 0.05534004 0.04615248
#  [14] 0.05254263 0.04426045 0.05013741 0.04259613 0.04803942 0.04111606 0.04618765 0.03978776 0.04453695 0.03858633 0.04305305 0.03749235 0.04170938
#  [27] 0.03649036 0.04048500 0.03556790 0.03936312 0.03471476 0.03833010 0.03392251 0.03737472 0.03318411 0.03648767 0.03249362 0.03566112 0.03184598
#  [40] 0.03488847 0.03123686 0.03416408 0.03066253 0.03348310 0.03011975 0.03284134 0.02960568 0.03223517 0.02911785 0.03166140 0.02865407 0.03111723
#  [53] 0.02821239 0.03060020 0.02779109 0.03010812 0.02738862 0.02963903 0.02700362 0.02919120 0.02663482 0.02876306 0.02628112 0.02835322 0.02594150
#  [66] 0.02796039 0.02561504 0.02758343 0.02530089 0.02722130 0.02499830 0.02687304 0.02470656 0.02653780 0.02442504 0.02621479 0.02415314 0.02590326
#  [79] 0.02389033 0.02560257 0.02363610 0.02531209 0.02338998 0.02503126 0.02315157 0.02475955 0.02292045 0.02449649 0.02269625 0.02424161 0.02247865
#  [92] 0.02399451 0.02226730 0.02375480 0.02206193 0.02352212 0.02186225 0.02329612 0.02166799 0.02307650
plot(1:100, abs(xs[1:100] - 2/3), type='l', xlab='n', ylab='|x_n - 2/3|')
```

Note from above that any subsequence formed with every alternate term of the sequence $|x_n-\frac{2}{3}|$ is monotonically decreasing and tends to $0$, matching the convergence of $\{x_n\}$ to $\frac{2}{3}$.
Show that $X_t=e^{B(t)}-1-\frac{1}{2}\int_0^te^{B(s)}ds$ is a martingale
It is not true that $B(k)$ is independent of $\mathcal F_s$ for $k >s$. Hence $E\left(\int_s^te^{B(k)}dk|{\scr F}_s\right)$ is not $\int_s^tEe^{B(k)}dk$. It is $\int_s^tE(e^{B(k)}|\mathcal F_s)dk$, and $E(e^{B(k)}|\mathcal F_s)=E(e^{B(k)-B(s)}|\mathcal F_s)e^{B(s)}=E(e^{B(k)-B(s)})e^{B(s)}$.
Integration with functions as boundaries
I think it is a mistake to write the statement of the Fundamental Theorem in the usual way, using the symbol $F$. My concern is that $F$ already has a (possibly different) meaning defined in the problem statement. If two functions are given the same name, it is all too easy to mistakenly try to equate them, which is an error I think you fell into. Since the letters we use to label the functions in the Fundamental Theorem are arbitrary, let's just choose different names, as follows: $$ \int_{a}^{b} g(y) dy = G(b) - G(a), $$ where $G'(y) = g(y)$. Now it is clear enough that $f(y)$ in the formula $$F(x,t) = \int_{x-ct}^{x+ct} f(y) dy$$ corresponds to $g(y)$ in our (relabeled) version of the FTC. So we need to match the $G$ in our FTC with some function such that $G'(y) = f(y)$ in our particular problem. That is, $$ G(y) = \int f(y) dy, $$ setting $G(y)$ equal to some antiderivative of $f$. (The constant of integration is not required here, because any single antiderivative of $f$ is sufficient for our needs; we do not need the ability to produce every possible antiderivative, as we would for the general solution of an indefinite integral.) The FTC then tells us that $$\int_{x-ct}^{x+ct} f(y) dy = G(x+ct) - G(x-ct).$$ Plugging this back into the definition of $F(x,t)$, we see that $$F(x,t) = G(x+ct) - G(x-ct).$$ That's it: the definition of the desired function as a function of two variables, $x$ and $t$. Unless there is some additional constraint not mentioned in the question, you can hold $x$ constant and vary $t$, and this may cause the value of $F(x,t)$ to vary, so we cannot say just yet that $F(x,t)$ can be expressed as a function of $x$ alone.
Sign of an integral of a product function
$\int_{-1}^1(x+1/2)dx=1$ and $\int_{-1}^1(x-1/2)dx=-1$ while $\int_{-1}^1(x+1/2)(x-1/2)dx=2/3-1/2\gt 0$
Evaluating the limit of series
We have $$\frac{1}{\sqrt{n^4+n}}\sum_{k=1}^n k \le \sum_{k=1}^n \frac{k}{\sqrt{k+n^4}}<\frac{1}{n^2} \sum_{k=1}^n k$$so the limit is $\frac12$ by the squeeze theorem, since $\sum_{k=1}^n k =\frac{n(n+1)}{2}$
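A numerical check (with an arbitrarily chosen large $n$) agrees with the value $\frac12$:

```r
n <- 2000
k <- 1:n
sum(k / sqrt(k + n^4))   # approximately 0.50025, tending to 1/2 as n grows
```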
How to solve this system of equation with two unknown?
There is no real root. If there were, $\alpha$ would have to be positive. And it is clear that for positive $\alpha$ and real $\beta$, the left-hand side of the second equation is greater than the left-hand side of the first.
Absolute value inequality $3 > |x + 4| \geq 1$
$|x-y|$ is the distance between $x$ and $y$ on a number line.
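For instance, reading the given inequality that way (just to show how the hint is used): $|x+4|$ is the distance from $x$ to $-4$, so $1 \le |x+4| < 3$ says that distance is at least $1$ but less than $3$. The "less than $3$" part gives $-7<x<-1$, the "at least $1$" part gives $x\le-5$ or $x\ge-3$, so together $x\in(-7,-5]\cup[-3,-1)$.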
Does anyone have any advice as to what measure-theoretic Probability Theory books there are with lots of worked examples?
The closest books I know with lots of worked exercises on measure-theoretic probability theory are: "Problems and Solutions in Mathematical Finance: Volume 1 - Stochastic Calculus" by Chin, Nel and Olafsson. As you can see from the title, it contains more than just probability theory, but I think chapter I covers exactly the basics you need, and chapter 2 covers some part of martingale theory. The solutions are very detailed. Another relevant book (which starts at a little lower level) is "Probability Through Problems" by Capinski and Zastawniak, but it looks like it contains all the basics covered in a first graduate probability class. I've also found the book "Problems in Probability" by Shiryaev and Lyasoff, but it doesn't seem to contain step-by-step solutions, just hints. Also, the book "One Thousand Exercises in Probability" by Grimmett and Stirzaker seems to be too low level, as it is not using measure theory.
Does there exist an explicit expression for the series ( or function) $f(x)=\sum \limits_{n=1}^\infty e^{-xn^2}$?
There is some more information about your sum that can be obtained using Mellin transforms. We have $$\mathfrak{M}(f(x); s) = f^*(s) = \Gamma(s) \zeta(2s).$$ Now invert by calculating the sum of the residues of $f^*(s) x^{-s}$. They are $$ \operatorname{Res}(f^*(s) x^{-s}; s= 1/2) = \frac{\sqrt{\pi}}{2} \frac{1}{\sqrt{x}} \quad \text{and} \quad \operatorname{Res}(f^*(s) x^{-s}; s= 0) = - \frac{1}{2}.$$ The remaining poles of the gamma function are canceled by the zeros of the zeta function, giving $$f(x) \sim \frac{\sqrt{\pi}}{2} \frac{1}{\sqrt{x}} - \frac{1}{2}.$$ This asymptotic expansion holds in a neighborhood of zero.
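A quick numerical comparison near $x=0$ (the truncation point and test value are chosen arbitrarily) shows how accurate the expansion already is:

```r
f <- function(x, N = 5000) sum(exp(-x * (1:N)^2))  # truncated version of the sum
x <- 0.01
f(x)                             # about 8.3623
sqrt(pi) / (2 * sqrt(x)) - 1/2   # about 8.3623 as well
```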
Uniformly non-square Banach spaces are reflexive
Let $\epsilon =\frac {(1-r)K_1}{6}.$ For all $n$ we have $$0<1-\frac {K_n-\epsilon}{K_n+2\epsilon}=\frac {3\epsilon}{K_n+2\epsilon}<\frac {3\epsilon}{K_n}\leq \frac {3\epsilon}{K_1}=\frac {1-r}{2}<1-r<\delta .$$
How to evaluate $\lim_{x\to 0}\frac{x^{α+1}-(x^{α+1})^q}{(x-x^q)x^{αc}}$
It does not hold. Take $\alpha=-3$, $q=1/2$ and $c=1/2$. Your limit becomes \begin{equation} \lim_{x\to 0}\frac{x^{-2}-(x^{-2})^{1/2}}{(x-\sqrt{x})x^{-3/2}}=\lim_{x\to 0}\frac{1/x^{2}-1/|x|}{(x-\sqrt{x})x^{-3/2}}=-\infty\ . \end{equation}
Find polynomials $f(x), g(x)$, and $h(x)$
As $f, g, h$ are polynomials, they cannot have corners. However, at $x=-1$ and $x=0$ there are corners, so at least one among $f, g$ must change sign at each of these points. As they are linear, they cannot change sign at two points, so exactly one of them must change its sign at each of these points. It does not matter which one you assume changes sign at which point; the final solution will remain the same, except for a sign variation (note that if $(f, g, h)$ is a solution, so is $(\pm f, \pm g, h)$).
Image of an arbitrary map falling on a algebraic set - criterion?
Consider just the case $\mathbb{R}^1 \to \mathbb{R}^n$, i.e. smooth curves in $\mathbb{R}^n$. It's not hard to come up with a small arc whose Zariski closure is all of $\mathbb{R}^n$. By replacing an arbitrarily small arc of any given curve with this bad arc, we get a curve which cannot be contained in any proper algebraic subset of $\mathbb{R}^n$. So at a minimum we see that the curves which cannot fit in a proper algebraic subset are dense among all curves. Moreover, if you have a large family of curves, each of which fits in some real algebraic hypersurface, you can do this to all of them simultaneously and get an equally large family of curves, none of which fit in any real algebraic hypersurface. In fact, you can do it in lots of different ways. I'm fairly sure that rules out the former class having a positive measure in any meaningful sense, but I'm not a functional analyst.
Trigonometric equation obtained from Ceva
From your work we obtain: $$\cos\left(\measuredangle BAP'-\measuredangle BAC+\measuredangle CAP\right)-\cos\left(\measuredangle BAP'+\measuredangle BAC-\measuredangle CAP\right)=$$ $$=\cos\left(\measuredangle BAC-\measuredangle BAP'-\measuredangle CAP\right)-\cos\left(\measuredangle BAC-\measuredangle BAP'+\measuredangle CAP\right)$$ or $$\cos\left(\measuredangle BAP'+\measuredangle BAC-\measuredangle CAP\right)=\cos\left(\measuredangle BAC-\measuredangle BAP'+\measuredangle CAP\right)$$ or $$\sin\measuredangle BAC\sin(\measuredangle CAP-\measuredangle BAP')=0$$ and we are done!
$U(R) \equiv U(B) \equiv U(N)$, trying to find unique values that result in indifference.
The system of equations that you have is $U(R)-U(B)=0$, $U(B)-U(N)=0$, and $U(R)-U(N)=0$. This is a linear system in $[p\ q]$. Solve the linear system. :)
Expected value, problems understanding answer to question
You're right to consider all of the possibilities for logging in. (There's an infinite number of cases!) The expected value will be $$E(x) = \sum_{y=1}^{\infty}yP(y),$$ where $P(y)$ is the probability of getting in on the $y$th time. If the probability of getting in on any attempt is $p$ then the probability of not getting in is $1-p$. So $P(y) = p(1-p)^{y-1}.$ (In words, you fail to log in $y-1$ times, and log in the last time.) So we have $$E(x) = \sum_{y=1}^{\infty}yp(1-p)^{y-1} = \frac{1}{p}.$$ That's where the formula comes from. For your specific problem, $p=0.7$.
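In case the value of that last sum is not familiar, one standard way to get it (with $q=1-p$): $$\sum_{y=1}^{\infty}y\,q^{y-1}=\frac{d}{dq}\sum_{y=0}^{\infty}q^{y}=\frac{d}{dq}\,\frac{1}{1-q}=\frac{1}{(1-q)^2}=\frac{1}{p^2},$$ so $E(x)=p\cdot\frac{1}{p^2}=\frac1p$; with $p=0.7$ that is about $1.43$ attempts on average.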
Probability of two dice against one
Let's try asking: "If I were to roll a number $k$ before rolling a $6$-sided die, what would be the chance that the die shows a value no higher than $k$?" Let us try $k=1$. We can roll the numbers $2,3,4,5,6$ to top that $1$; since there are $6$ possibilities and only rolling a $1$ fails to top it, the probability of getting equal or less is $\frac16$. For $k=2$ this probability is $\frac26$, for $k=3$ it is $\frac36$, etc. Now what if we were to use $2$ dice? The second die has the same probability as the first since they are identical, so we square the chance: for $k=1$ the probability that both dice are at most $1$ is $\left(\frac16\right)^2=\frac1{36}$. But we're not after the probability of the dice being at most $k$; thankfully, the probability that the pair gets over $k$ (i.e. at least one of the two dice exceeds $k$) is the complement, $1-\frac1{36}=\frac{6^2-1^2}{6^2}$. Finally, let's factor in the various values of $k$. Remember, the denominator will not be $6^2$ but $6^3$, because that is the number of possibilities for $3$ dice (the first and second dice, and the value of $k$), so the answer will be $$\frac{6^2-1^2+6^2-2^2+6^2-3^2+6^2-4^2+6^2-5^2+6^2-6^2}{6^3},$$ which is $\frac{125}{216}$. For the second answer, increase the exponents by one to get $$\frac{6^3-1^3+6^3-2^3+6^3-3^3+6^3-4^3+6^3-5^3+6^3-6^3}{6^4}=\frac{855}{1296}=\frac{95}{144}.$$
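A quick simulation check of the first value (the seed and number of trials are arbitrary):

```r
set.seed(1)
mean(replicate(1e5, max(sample(6, 2, replace = TRUE)) > sample(6, 1)))  # about 0.58
125/216                                                                  # 0.5787037
```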