Splitting field of $x^{4 }-3$ over $\mathbb{Q}$
Remember that $\;\Bbb Q(\sqrt[4]3\,,\,i)\;$ is a vector space over $\;\Bbb Q\;$ of dimension $\;4\cdot 2=8\;$ , and it has a very nice basis (putting $\;w:=\sqrt[4]3\;$ for simplicity, we get): $\;\{1\,,\,w\,,\,w^2\,,\,w^3\,,\,i\,,\,wi\,,\,w^2i\,,\,w^3i\}\;$ , so you can conveniently write any element in the above field in the form $$a+bw+cw^2+dw^3+ei+fwi+gw^2i+hw^3i\;,\;\;a,b,c,d,e,f,g,h\in\Bbb Q$$
First & Second Derivative of $y=x(2x+3)^4$?
What you have written is simply the product rule: $$(f(x)\cdot g(x))' = f'(x)\cdot g(x) + f(x) \cdot g'(x).$$ To obtain the rule for this situation, we need to combine the chain rule, $$\big(f(g(x))\big)' = f'(g(x))\cdot g'(x)$$ with the product rule to obtain: \begin{align*} \big(f(g(x))\cdot h(x)\big)' &= \big(f(g(x))\big)'\cdot h(x) + f(g(x)) \cdot h'(x)\\ &= f'(g(x))\cdot g'(x) \cdot h(x) + f(g(x)) \cdot h'(x) \end{align*} So what are $f, g$ and $h$ here? Identifying them correctly and using the rule we just developed should allow you to arrive at the desired result. (You'll need to do a little algebra to simplify the resulting expression.) Repeating this process will allow you to find the correct second derivative as well. Good luck!
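If you want to double-check your hand computation afterwards, a quick symbolic check is possible (a small sketch, assuming the sympy library is available):

```python
import sympy as sp

x = sp.symbols('x')
y = x * (2*x + 3)**4

# first and second derivatives, factored so the structure is visible
print(sp.factor(sp.diff(y, x)))     # should give (2*x + 3)**3*(10*x + 3)
print(sp.factor(sp.diff(y, x, 2)))  # should give 16*(2*x + 3)**2*(5*x + 3)
```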
Show that if $(x_n)\rightarrow 2$ then $(1/x_n)\rightarrow 1/2$
Choose $N_{1}$ so that $|x_{n}-2|<1$ for all $n\geq N_{1}$. Then, $|x_{n}|>1$ for all $n\geq N_{1}$, so $1/|x_{n}|<1$. Next, choose $N_{2}$ so that $|x_{n}-2|<2\epsilon$ for all $n\geq N_{2}$. Then, take $N=\max\{N_{1},N_{2}\}$ and note that for $n\geq N$, $$ \left|\frac{1}{x_{n}}-\frac{1}{2}\right|=\frac{|x_{n}-2|}{2|x_{n}|}<\frac{|x_{n}-2|}{2}<\frac{2\epsilon}{2}=\epsilon. $$
Equivalent Definition of Measurable set
Yes, they are equivalent. Consider $E \subseteq \mathbb{R}$. 1. (Definition due to Carathéodory) $E$ is measurable if for each set $A \subset \mathbb{R}$, $m^{*}(A)=m^{*}(A\cap E)+m^{*}(A\cap E^{c})$. The following are equivalent: i. $E$ is measurable (according to definition 1); ii. For each $\epsilon >0$, there is a closed set $F$ and an open set $O$ for which $F\subseteq E \subseteq O$ and $m^{*}(O-F)<\epsilon$. Let us also have the definition of the Lebesgue outer measure $m^{*}: \mathscr{P}(\mathbb{R}) \rightarrow \bar{\mathbb{R}}^{+}$ given by $ m^{*}(E) = \inf \{ \sum l(I_{n}) : (I_n)_{n \in \mathbb{N}}$ is a collection of open intervals such that $E \subset \cup I_{n} \}$. When $m^{*}$ is defined on $\mathscr{P}(\mathbb{R})$ it is only countably sub-additive. But if we restrict $m^{*}$ to $\mathscr{M}\subset \mathscr{P}(\mathbb{R})$, where $\mathscr{M}$ is the collection of all sets that satisfy 1, then it has been proved that $m^{*}|_{\mathscr{M}}$ is countably additive and that $\mathscr{M}$ is a $\sigma$-algebra (containing the intervals). Proof: i. $\implies$ ii. (Case $m^{*}(E) < +\infty$.) Given an $\epsilon > 0$ there is a collection $\{I_{n} \}$ of open intervals covering $E$ such that $m^{*}(E) \le \sum_n l(I_{n}) < m^{*}(E) + \epsilon$; this is so because $m^{*}(E)$ is an infimum. Define $O = \bigcup_{n =1}^{\infty} I_{n}$. Note that $O$ is an open set, $O \in \mathscr{M}$, $E \subset O$, and $m^{*}(O) \le \sum_n l(I_{n})$. Now we have $ m^{*}(O) < m^{*}(E) + \epsilon \implies m^{*}(O\backslash E) = m^{*}(O) - m^{*}(E) < \epsilon $ (using that $E$ is measurable and of finite outer measure). Let us now find that closed $F$. Notice that $ E \in \mathscr{M} \implies E^{c} \in \mathscr{M}$. By the first part of this proof there is an open set $U \in \mathscr{M}$ such that $E^{c} \subset U$ and $ m^{*}(U\backslash E^{c}) < \epsilon$. Take $ F = U^{c}$, which is closed. Then $ F \subset E$ and $m^{*}(E \backslash F) = m^{*}(E \cap U) = m^{*}(U \backslash E^{c}) < \epsilon$. Finally we observe that $O \backslash F = (O \backslash E) \cup (E \backslash F)$ (a disjoint union), hence $m^{*}(O \backslash F) \le m^{*}(O \backslash E) + m^{*}(E \backslash F) < \epsilon + \epsilon = 2 \epsilon$. To prove ii. $\implies$ i., first prove that ii. implies that there exists a $G \in \mathscr{G}_{\delta}$ with $E \subseteq G$ and $m^{*}(G \backslash E) = 0$. Then use it to prove i. Once this has been done, we still have to handle the case $m^{*}(E) = \infty$. I hope this could help a bit. I recommend reading Chapter 3 of Royden's book; the overall structure of the proof I tried to sketch here is suggested there.
Why can't I use FTLI when $\vec{F}=\nabla f$?
This is a completely vague question. Of course, the FTLI applies precisely to vector fields that are gradients: $$\int_C \nabla f\cdot d\vec r = f(B)-f(A)\,,$$ where $C$ is a path from $A$ to $B$.
Algebra tricks to simplify expression
The expression for $P_{k}$ involves the unknown $P_{1}$. This unknown value $P_{1}$ can be obtained by using the fact that $P_{N}=1$. By letting $k=N$ in the expression for $P_{k}$, we obtain the value of $P_{1}$. Now substituting the value of $P_{1}$ into the expression for $P_{k}$, we get an expression for $P_{k}$ that involves only known quantities. $P_{0}=0$ and $P_{N}=1$ are the boundary conditions of the problem. EDIT: You have started your derivation for $P_{k}$ with the equation $P_{k}=pP_{k+1}+qP_{k-1}$ and, after some simplification, you have arrived at the expression $$P_k= \begin{cases} P_1 \dfrac{1-(p/q)^k}{1-(p/q)}, & \text{if } p \neq q \\ P_1 k, & \text{if }p=q=1/2 \end{cases}\tag{1}$$ which involves an unknown, $P_{1}$. We can calculate $P_{k}$ if we know $P_{1}$. The value of $P_{1}$ is obtained by invoking the initial/boundary conditions. Here, the relevant condition is $P_{N}=1$. This is what you are doing in the very first equation. From this, you have obtained the expression for $P_{1}$. This is your second equation. Substituting $P_{1}$ into equation (1) above, you have obtained your third equation.
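Written out, the substitution described above (setting $k=N$ in equation $(1)$) gives, for $p\neq q$, $$1=P_N=P_1\,\frac{1-(p/q)^N}{1-(p/q)}\ \Longrightarrow\ P_1=\frac{1-(p/q)}{1-(p/q)^N},\qquad P_k=\frac{1-(p/q)^k}{1-(p/q)^N},$$ while for $p=q=1/2$ the condition $1=P_1N$ gives $P_1=1/N$ and hence $P_k=k/N$.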
What is the remainder when $x^7-12x^5+23x-132$ is divided by $2x-1$? (Hint: Long division need not be used.)
Since $\deg(2x-1)=1$ then the remainder is a constant. Write $$P(x)=(2x-1)Q(x)+r$$ so what's $P\left(\frac12\right)$?
Power series of $f(z)^8$
Your assumption of continuity of $f$ along with the analyticity of $f^8$ is enough to conclude $f$ is analytic. For example, see the following: "If $z\mapsto f(z)^n$ is analytic then $f$ is analytic" and "If $f^2$ is an analytic function then so is $f$". The former gives the result immediately but is a tad tougher to prove, while the latter gives the result after noting $f$ being continuous implies $f^2$ and $f^4$ are continuous, at which point we apply the latter result three times. We thus know $f$ is analytic, so $$f(z) = \sum_{k=0}^\infty c_kz^k$$ Applying your edit that $f(0)=0,$ we must have $c_0 = 0.$ Let us first assume $c_1 \neq 0,$ so $$f(z) = \sum_{k=1}^\infty c_kz^k = c_1 z + \mathcal{O}(z^2)$$ Thus, by the Binomial Theorem, we have that $$f(z)^8 = (c_1 z + \mathcal{O}(z^2))^8 = (c_1 z)^8 + \mathcal{O}(z^9)$$ which proves the claim whenever $c_1 \neq 0.$ We now use this idea to prove the general case. Suppose $c_k = 0$ for $0 \le k \le n-1,$ but $c_n \neq 0.$ Then: $$f(z)^8 = \left(\sum_{k=n}^\infty c_kz^k\right)^8 = (c_n z^n + \mathcal{O}(z^{n+1}))^8 = (c_n z^n)^8 + \mathcal{O}(z^{8n+1})$$ which proves that the leading nonzero term will have a power divisible by $8.$ This of course extends readily to other powers of $f.$
What does it mean when it is said that a solution to a differential equation only exists on an interval?
The initial value problem $$y' = \sqrt{y}, \,\,\, y(0) = 0,$$ has multiple solutions. For example $y(x) = 0$ and $y(x) = x^2/4$. In general, the initial value problem $$y' = f(x,y), \,\,\, y(x_0) = y_0,$$ is guaranteed to have a unique solution on some open interval containing $x_0$ when the function $f$ satisfies certain conditions. Look up the Picard-Lindelöf Theorem. The notion of uniqueness is relevant only in the context of an accompanying initial condition. The differential equation may have many solutions, in general. However, we say the solution is unique for an initial value problem when given any two functions $y_1$ and $y_2$ that satisfy both the differential equation and initial condition, then $y_1(x) = y_2(x)$ for all $x$ where solutions exist. The interval on which the solution exists need not be extendable to $\mathbb{R}$, as in your example where $f(x,y) = y^2$.
Determining if a function is real analytic at the point $a$?
The most convenient method I know is to show that $f$ extends to a holomorphic function in a neighborhood of $a$ in the complex plane. The property of being holomorphic can be verified just by considering the first order derivatives (the Cauchy-Riemann equations). It implies, by a theorem of complex analysis (see any textbook), that the function is represented by a power series. Example Consider the function $f(x)=1/(x^2+1)$ near the point $x=1$. Its natural extension to the complex plane is $$F(x+iy) = \frac{1}{(x+iy)^2+1}$$ which is holomorphic as long as $x+iy\notin \{-i,i\}$. In particular, $F$ is holomorphic in the open disk of radius $\sqrt{2}$ centered at $1$. It follows that $F$ is represented by a power series centered at $1$ with radius of convergence $\sqrt{2}$. Restricting attention to the real line again, conclude that $f$ is represented by a power series, $$ \frac{1}{x^2+1} = \sum_{n=0}^\infty c_n(x-1)^n,\qquad |x-1|<\sqrt{2} $$
Sitting arrangement problem.
"But then I notice that in this problem if $4$ boys opposite to each other and then $4$ girls opposite to each other then $1$ boy and $1$ girl pending. So at least $1$ is opposite. So my above attempt is wrong. How to do this properly?"

Just do that. You have noticed that all arrangements must have at least one girl seated opposite a boy. That is what you needed to notice. Hence the count you seek is the count of all possible arrangements. Thus there is nothing to exclude; and so nothing to subtract. The count is $~10!~$, that is all.
Equation of a line after being reflected in another line using the transformation matrix
There is a much faster way to make the computations: remember that a reflection is its own inverse, so if $\mathbf r'$ has coordinates $(x',y')$, we have $$\mathbf r=T\mathbf r'\iff \begin{pmatrix}x\\y\end{pmatrix}=T\begin{pmatrix}x'\\y'\end{pmatrix}=\tfrac 15 \begin{pmatrix}-3x'+4y'\\4x'+3y'\end{pmatrix}$$ and plugging this relation in the equation of the line $y=3x+1$, we obtain \begin{align} \frac 15(4x'+3y')&=\frac 35(-3x'+4y')+1\iff 4x'+3y'=-9x'+12 y'+5\\&\iff13x'-5=9y'. \end{align}
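As a quick numerical sanity check of that computation (a small sketch using numpy), one can map a few points of $y=3x+1$ through $T$ and verify that the images lie on $13x'-5=9y'$:

```python
import numpy as np

T = np.array([[-3.0, 4.0], [4.0, 3.0]]) / 5.0   # the reflection matrix (its own inverse)
xs = np.linspace(-2.0, 2.0, 5)
pts = np.stack([xs, 3.0 * xs + 1.0])            # points on the original line y = 3x + 1
xp, yp = T @ pts                                # their images under the reflection
print(np.allclose(13.0 * xp - 5.0, 9.0 * yp))   # True: the images satisfy 13x' - 5 = 9y'
```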
Evaluating $f(z)=\sqrt{z^2-1}$, given the branch I am on.
If $\phi(z)$ is the branch of $\sqrt{z^2-1}$ with a branch cut at $(-\infty,1]$ which is positive on $(1,\infty)$, then $\phi(2i)=\sqrt{5}i$ and $\phi(-2i)=-\sqrt{5}i$. So (writing $\text{Log}$ for the principal value of the logarithm as Gamelin does) $$\begin{align*} \text{Log}\left(z+\phi(z)\right)\bigg|_{-2i}^{2i} &=\text{Log}\left(2i+\phi(2i)\right)-\text{Log}\left(-2i+\phi(-2i)\right)\\ &=\text{Log}((2+\sqrt{5})i)-\text{Log}((2+\sqrt{5})(-i))\\ &=\pi i, \end{align*}$$ in agreement with the numerical result. To see that $\phi(2i)=\sqrt{5}i$, let $z$ move along a path from $2$ to $2i$ in the closed first quadrant, for example, $\theta\mapsto z(\theta):= 2e^{i\theta}$, $0\le \theta\le \pi/2$. Then $z(\theta)^2-1=4 e^{2i\theta}-1$ will travel from $3$ to $-5$, always remaining in the closed upper half-plane. Since $\phi(z)$ was defined to be positive on $(1,\infty)$, $\phi(2)=\sqrt{3}>0$, so $\phi(z)$ must also travel along a path in the closed first quadrant. Therefore $\phi(2i)$ is $\sqrt{5}i$ rather than $-\sqrt{5}i$. Proving that $\phi(-2i)=-\sqrt{5}i$ can be done in the same way, by moving $z$ on a path from $2$ to $-2i$ in the closed fourth quadrant and observing that, since $z^2-1$ moves along a path in the closed lower half-plane, $\phi(z)$ must travel along a path in the closed fourth quadrant.
Eigenvalues of matrix product of two hermitian matrices
We have $$\begin{split}\overline{\det(AB-\lambda I)} &= \det\left[(AB-\lambda I)^*\right] = \det(B^*A^*-\overline\lambda I) \\&= \det(BA-\overline\lambda I) = \det(AB-\overline\lambda I),\end{split}$$ where the last equality follows from Sylvester's determinant theorem.
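In other words, the characteristic polynomial of $AB$ has real coefficients, so non-real eigenvalues occur in conjugate pairs. A small numerical illustration, sketched with numpy and randomly generated Hermitian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return M + M.conj().T   # M + M* is Hermitian

A, B = random_hermitian(4), random_hermitian(4)
coeffs = np.poly(A @ B)               # coefficients of det(AB - lambda*I), up to sign
print(np.max(np.abs(coeffs.imag)))    # ~1e-15: the coefficients are (numerically) real
```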
Interesting question about finding a quadratic polynomial such that $h(\alpha)=\beta, \ h(\beta)=\gamma, \ h(\gamma)=\alpha$
The traditional thing is this: take three distinct real numbers $u,v,w$ so the pairwise differences are nonzero. To get a quadratic $q_u(x)$ that gives $q_u(u) =1$ while $q_u(v)=q_u(w) = 0,$ take $$ q_u(x) = \frac{(x-v)(x-w)}{(u-v)(u-w)} $$ Back to your $\alpha, \beta, \gamma$: do the same and make $$ \beta q_\alpha + \gamma q_\beta + \alpha q_\gamma $$ I'm not sure what these are called, but one can always arrange such "indicator" functions: given distinct numbers $x_1, x_2, ..., x_n$ we can make a polynomial function $f_1$ with $f_1(x_1) = 1,$ $f_1(x_2) = 0,$ $f_1(x_3) = 0,$ and so on, where the degree of each $f_i$ is $n-1.$ Alright, Matthew points out that these are called Lagrange Polynomials, https://en.wikipedia.org/wiki/Lagrange_polynomial
characteristic polynomial and eigenvalues of $T(A)={ A }^{ t }$
You can solve it all in one go. First notice that $T^2 = \mathrm{id}$, therefore the only possible eigenvalues are $\pm 1$. Now you simply need to solve the equations $A^t = A$ and $A^t = -A$. This gives you as eigenvectors for the eigenvalue $1$: $$\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}$$ And for the eigenvalue $-1$: $$\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$$ These four matrices are linearly independent, and $M_2(\mathbb R)$ has dimension $4$, so they span the whole space. $M_2(\mathbb R)$ has a basis of eigenvectors for $T$, so $T$ is diagonalisable, and its characteristic polynomial is $(X-1)^3(X+1)$. This approach has a straightforward generalization to $M_n(\mathbb R)$ for any $n$.
Help me with basis and dimension
Consider $\Bbb{R}$ as an $\Bbb{R}$-vector space. Two bases are given by $C:=\{1\}, D:=\{2\}$. Can you answer your first question from here? For the second question notice that $C\cup D$ is a generating system for $A+B$.
Noob doubt about signs in equations
Friction of a falling object always points upwards. If the constant $k$ is positive then, in an upward-pointing reference frame, the contribution of friction to acceleration is $+kv$ (or $+kv^2$). Remember that your terminal velocity is negative in that frame!
Prove Logic Using Proof of Contradiction
Well, under the assumption that $e$, we know that $$(e\vee f)\to(a\vee b) = (\lnot e\wedge \lnot f)\vee a\vee b$$Since $e$, $$(e\vee f)\to(a\vee b) =a\vee b = c\vee d$$But, $$c\vee d\to \lnot e$$ A contradiction. So, $\lnot e$.
Minimal polynomial and diagonalization of a block diagonal matrix.
Notice that if $P$ is a polynomial then $$P(C)=\begin{pmatrix} P(A) & 0 \\ 0 & P(B)\\ \end{pmatrix}$$ so we see that $P$ annihilates $C$ if and only if it annihilates $A$ and $B$. If we denote by $\pi_A$ and $\pi_B$ the minimal polynomials of $A$ and $B$ respectively, then the polynomial $P=\pi_A\lor \pi_B$ (their least common multiple) annihilates $C$, so $\pi_C$ divides $P$; conversely, since $\pi_C$ annihilates $C$, it annihilates $A$ and $B$, so $\pi_A$ and $\pi_B$ divide $\pi_C$ and then $P$ divides $\pi_C$. We conclude that $$\pi_C=\pi_A\lor \pi_B$$ $A$ and $B$ are diagonalizable if and only if $\pi_A$ and $\pi_B$ have simple roots, if and only if $\pi_A\lor \pi_B=\pi_C$ has simple roots, if and only if $C$ is diagonalizable.
Show that the integral equation has a solution on a suitable subset of $C[0,1]$
The reason that the subsets seem like they are chosen arbitrarily is because they are chosen slightly arbitrarily. Let's define the operator $T: C[0,1] \to C[0,1]$ by $$(Tx)(t) = \log(1+t) + \frac 1 5 \int^1_0 e^{-t} \cos(ts)^2 x(s)^2 ds, \,\,\,\,\,\,\,\,\, t \in [0,1], \,\,\,\,\, x \in C[0,1].$$ (I leave it to you to show that this is indeed a map from $C[0,1]$ to $C[0,1]$.) We need to choose a complete subspace of $C[0,1]$ and show that $T$ is a contraction on this subspace. Since any closed subset of a complete space is complete, we simply need to choose a closed subset and the most natural closed subsets of $C[0,1]$ are the balls: $$B_r = \{ x \in C[0,1] : \,\,\, \| x \|_\infty \le r\}, \,\,\,\,\, r \ge 0.$$ We show that for suitable $r \ge 0$, we have $T : B_r \to B_r$ and $T$ is a contraction. Take $x \in B_r$ and $t \in [0,1]$. We see \begin{align*}\lvert (Tx)(t) \rvert &= \left \lvert \log(1+t) + \frac 1 5\int^1_0 e^{-t} \cos(ts)^2 x(s)^2 ds \right \rvert \\ &\le \log(2) +\frac 1 5\int^1_0 \lvert e^{-t} \rvert \lvert \cos(ts) \rvert^2 \lvert x(s) \rvert^2 ds \\ &\le \log(2) +\frac 1 5 \int^1_0 \|x \|^2_\infty ds = \log(2) + \frac{\| x \|^2_\infty}{5} \le \log(2) + \frac{r^2}{5}. \end{align*} Passing to the supremum, we have $$\| Tx \|_\infty \le \log(2) + \frac{r^2}{5}.$$ We want $\| Tx \|_\infty \le r$ so we need to choose $r \ge 0$ such that $\log(2) + \frac{r^2}{5} \le r$. Looking at the graph of $p(r) = \frac{r^2}{5} - r + \log(2)$, we notice that this is negative for $0.85 \lesssim r \lesssim 4.2$ so for any $r$ in that range we have $T: B_r \to B_r$. Next note that for any $x,y \in B_r$ and $t \in [0,1]$ \begin{align*} \lvert (Tx)(t) - (Ty)(t) \rvert &= \left \lvert \frac 1 5 \int^1_0 e^{-t} \cos(ts)^2 x(s)^2 ds- \frac 1 5 \int^1_0 e^{-t} \cos(ts)^2 y(s)^2 ds \right \rvert \\ &\le \frac 1 5 \int^1_0 \lvert e^{-t}\rvert \lvert \cos(ts) \rvert^2 \lvert x(s)^2 - y(s)^2 \rvert ds \\ &\le \frac 1 5 \int^1_0 \lvert x(s) + y(s) \rvert \lvert x(s) - y(s) \rvert ds \\ & \le \frac{1}{5} \| x+y \|_{\infty} \| x-y \|_{\infty}\\ &\le \frac{\| x \|_{\infty} + \| y\|_{\infty}}{5} \| x-y \|_{\infty} \le \frac{2r}{5} \|x-y\|_{\infty}. \end{align*} Passing to the supremum gives $$\| Tx - Ty\|_{\infty} \le \frac{2r}{5} \|x-y\|_\infty$$ whence $T$ is a contraction on $B_r$ so long as $2r/5 <1$ or $r < 5/2$. Choosing any $r$ which satisfies both bounds, the Banach fixed point theorem gives existence of a unique fixed point in $B_r$. Thus for example, there is a unique fixed point of $T$ in $B_2$ and this fixed point will satisfy $$x(t) = \log(1+t) + \frac 1 5 \int^1_0 e^{-t} \cos(ts)^2 x(s)^2 ds, \,\,\,\,\, t \in [0,1].$$
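To see the contraction at work numerically, here is a rough sketch (using numpy, with the integral approximated on a uniform grid) of the fixed-point iteration $x_{k+1}=Tx_k$ starting from the zero function; the sup-norm change between iterates shrinks roughly geometrically, as the Banach fixed point theorem predicts:

```python
import numpy as np

s = np.linspace(0.0, 1.0, 201)   # grid on [0, 1]

def T(x_vals):
    # apply the integral operator pointwise on the grid;
    # the mean of the integrand over the uniform grid approximates the integral over [0, 1]
    out = np.empty_like(s)
    for i, t in enumerate(s):
        integrand = np.exp(-t) * np.cos(t * s) ** 2 * x_vals ** 2
        out[i] = np.log(1.0 + t) + integrand.mean() / 5.0
    return out

x = np.zeros_like(s)   # the zero function lies in B_2
for k in range(15):
    x_new = T(x)
    print(k, np.max(np.abs(x_new - x)))   # sup-norm change between successive iterates
    x = x_new
```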
Find the remainder of the polynomial division $p(x)/(x^2-1)$ for some $p$
Plug in $1$ and $-1$ to get two values of $r(x)$, which is linear. From there you can get what $a,b$ are in $ax+b.$ Since $$f(x)=g(x)(x+1)(x-1)+r(x)$$ we have $$ f(1)=g(1)(1+1)(1-1)+r(1)=r(1)=-10$$ $$ f(-1)=g(-1)(-1+1)(-1-1)+r(-1)=r(-1)=16$$ We know the remainder is of degree at most $1$, so $r(x)=ax+b$ and now we know $$r(1)=a(1)+b=a+b=-10$$ $$r(-1)=a(-1)+b=-a+b=16$$ so, solve $$a+b=-10$$ $$-a+b=16$$ which yields $a=-13$, $b=3$, so $$r(x)=-13x+3$$
Localizations of $ \mathbb{Z}_{p^k}$
Yes. The ring $\mathbb{Z}_{p^k}$ is local with unique maximal ideal $(p)$. If $S\cap (p) = \emptyset$, you are inverting units and hence $$S^{-1} \mathbb{Z}_{p^k}\cong \mathbb{Z}_{p^k}=\mathbb{Z}/p^k\mathbb{Z}.$$ If $S\cap (p) \neq \emptyset$, then $ap\in S$ for some $a \in \mathbb{Z}_{p^k}$. But then $(ap)^{k-1}\in S$ and you get $$\frac{1}{1}=\frac{(ap)^{k-1}}{(ap)^{k-1}}=\frac{0}{1},$$ since $ap\cdot\left((ap)^{k-1}\cdot 1-0\cdot (ap)^{k-1}\right)=(ap)^{k}=0$. So you get $$S^{-1} \mathbb{Z}_{p^k}\cong 0=\mathbb{Z}/\mathbb{Z}.$$
Solving $\log_{6}(2x+3)=3$. Can I start by dividing by $\log_6$?
Okay, you misunderstood the question. You thought it was $(\log 6)\times (2x + 3) = 3$ where $\log 6 = \log_{10} 6$ is the number $k$ where $10^k = 6$. That is not at all what the problem actually was. The problem was $\log_6(2x+3) = 3$ where $\log_6 M$ is the number $k$ where $6^k = M$. So $\log_6(2x+3) = 3$ means $6^3 = 2x + 3$ and ... the rest solves itself. .... The thing to note is that the $_6$ is in a subscript and that indicates the base. So $\log_b m = k \iff b^k = m$. If you have $\log m$ without a subscript that means that the base is assumed to be $10$. So $\log m = k \iff 10^k = m$. (TMI: Although more advanced courses often assume $\log$ without a subscript means the base is $e = 2.718\ldots$, so $\log m =k \iff e^k = m$. But that's only more advanced classes that do that. To play it safe you should always write the base.) So anyway $\log_6 m = k \iff 6^k =m$. So $\log_6 (2x + 3) = 3 \iff 6^3 = 2x + 3$.
Finding the Tolerance of Adaptive Quadrature Estimations
The general method behind this is called Richardson extrapolation. The error is of size $O(h^4)$, where $h$ is the length of each of the $n=1/h$ sub-intervals. Extracting the dominating term, this means that the error has the form $$I(h)-I(0)=C·h^4+O(h^5).$$ Now we compare $I(h)$ and $$I(h/2)=I(0)+C·h^4/16+O(h^5)$$ Eliminating the leading error term leads to $$ 16·I(h/2)-I(h)=16·I(0)-I(0)+O(h^5) $$ and thus $$ I(0)=\frac{16·I(h/2)-I(h)}{15}+O(h^5) $$ so that $$ I(h/2)-I(0)=\frac{I(h)-I(h/2)}{15}+O(h^5) $$ In your source, $I_1=I(h)$, $I_2=I(h/2)$ and $I(f)=I(0)$.
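For concreteness, here is a small sketch (in Python, with $\sin$ on $[0,\pi]$ as a made-up test integrand) of this error estimate for the composite Simpson rule, whose error is $O(h^4)$:

```python
import numpy as np

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) sub-intervals
    xs = np.linspace(a, b, n + 1)
    ys = f(xs)
    h = (b - a) / n
    return h / 3.0 * (ys[0] + ys[-1] + 4.0 * ys[1:-1:2].sum() + 2.0 * ys[2:-1:2].sum())

f, a, b = np.sin, 0.0, np.pi      # exact value of the integral is 2
I1 = simpson(f, a, b, 8)          # I(h)
I2 = simpson(f, a, b, 16)         # I(h/2)
estimated_error = (I1 - I2) / 15  # estimate of I(h/2) - I(0)
actual_error = I2 - 2.0
print(estimated_error, actual_error)   # the two should be of comparable magnitude
```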
Proof by strong Induction for a recurrence
We show that $P_n : a_n < 2^n$ is true. For $n=1$, it's OK. For $n >1$, suppose $P_1,\dots,P_{n-1}$ hold. We see that $$ a_n = a_{n-3}+a_{n-2}+a_{n-1} < 2^{n-3}+2^{n-2} + 2^{n-1}=2^n\left(\frac{1}{2^3}+\frac{1}{2^2}+\frac{1}{2}\right)<2^n $$ So $P_n$ is true. Here are some comments about your proof: the way you stated your inductive hypothesis is odd (to me). Also, here we know that $a_n$ is defined by recursion, so using strong induction is a good idea. It seems to me that you got kind of lost when applying the hypothesis. You started your proof from the result, not the hypothesis...
How does $f$ change with the increasing of $\sigma$
If the integral were taken from $-\infty$ to $\infty$, then the integral would equal $\mathbf{E}\left(X^\lambda\right)$ where $X\sim\mathcal{N}(m, \sigma^2)$. So obviously when $\lambda=1$, it is nothing but the expected value of $X$, hence it stays the same; the value is the mean, $m$. If $\lambda=2$, then it's $\mathbf{E}\, X^2 = \mathbf{Var}(X) + \left(\mathbf{E}\, X\right)^2 = \mathbf{Var}(X) + m^2$, hence it's increasing as $\sigma$ increases, since $\mathbf{Var}(X) = \sigma^2$.
Ultrafilter on a boolean algebra containing $u$ but not $v$
This isn't quite correct: $u \neq v$ does not imply that $u \cdot (-v) \neq 0$. (E.g. if you start with a poset $(\mathbb P; \preceq)$ and $u,v \in \mathbb P$ are such that $u \prec v$, then in the Boolean completion of $\mathbb P$ we will have $u \cdot (-v) = 0$.) Other than that, your proof is fine. But you don't need to consider ideals at all; see "Do we have Tarski's ultrafilter theorem's equivalent for ultrafilters in Boolean algebras?". A direct corollary of the prime ideal theorem is that every filter is contained in an ultrafilter (by the proof you outlined implicitly: Given a filter $F$, consider its associated ideal $I$ obtained by taking pointwise complements. Extend $I$ to a prime ideal $P$ and consider its associated filter $U$ -- again by taking pointwise complements. This will be an ultrafilter such that $F \subseteq U$).
Direct sum of subspaces and isomorphisms
Your injectivity proof is fine, and your linearity proof is almost fine. You should have $$\sum (w_i+(w_i)^\prime)=t(w_1+(w_1)^\prime,\dots, w_n+(w_n)^\prime)$$ instead of $$\sum (w_i+(w_i)^\prime)=(w_1+(w_1)^\prime,\dots, w_n+(w_n)^\prime).$$ Now, for surjectivity, you've really not quite justified it. Take any $v\in V$. Since $V=\bigoplus_{i=1}^n W_i,$ then $v=\sum_{i=1}^nw_i$ where $w_i\in W_i$ for each $1\le i\le n$. Then $(w_1,...,w_n)\in W_1\times\cdots\times W_n,$ and $t(w_1,...,w_n)=v,$ proving surjectivity.
A polynomial $p(x)$ with real coefficients is of degree five. Find $p(x)$ in factorised form with real coefficients.
Let $f(x)=g(x)(x-2)(x-a)$, where $g(x)=B(x+1)(x-2+i)(x-2-i)$. The $x$ axis has slope $0$. Since $f$ is tangent to the $x$ axis at $x=2$, we have $$f'(2)=0$$ But $$f'(x)=g'(x)(x-2)(x-a)+g(x)(x-a)+g(x)(x-2)$$ so that $$f'(2)=g(2)(2-a)=0$$ Since $g(2)\ne 0$, we must have $a=2$
Simplifying expression for expected number of coin tosses to get $h$ heads in a row
Hints: The coefficient of $E(x)$ is $(1-p)\sum\limits_{i=0}^{h-1} p^{i}$ and this is same as $1-p^{h}$ by the formula for a geometric sum. For the second term you will have to calculate $\sum\limits_{i=0}^{h-1} ip^{i}$. For this let $f(t)=\sum\limits_{i=0}^{h-1} p^{i}t^{i}$. Then $\sum\limits_{i=0}^{h-1} ip^{i}=f'(1)$. $f(t)$ is defined by a geometric sum with common ratio $pt$. Write down the value of this sum and differentiate to get that value of $\sum\limits_{i=0}^{h-1} ip^{i}$. The rest of the calculation is straightforward.
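For reference, carrying out the differentiation the hint suggests (for $p\neq 1$): $$f(t)=\sum_{i=0}^{h-1}(pt)^i=\frac{1-(pt)^h}{1-pt},\qquad \sum_{i=0}^{h-1} ip^{i}=f'(1)=\frac{p\left(1-p^{h}\right)-h\,p^{h}(1-p)}{(1-p)^{2}}.$$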
find a basis for $\Bbb{R}^3$ that contains the vector $[1,0,1]$
Given any vector $v\in\mathbb{R}^3\setminus\{(0,0,0)\}$, at least one of these sets will be a basis of $\mathbb R^3$: $\{v,(1,0,0),(0,1,0)\}$; $\{v,(1,0,0),(0,0,1)\}$; $\{v,(0,1,0),(0,0,1)\}$. In order to check for each of them whether or not it is a basis, just compute the determinant of the matrix whose rows are the entries of the three vectors; the three vectors will form a basis if and only if that determinant is different from $0$.
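For the vector $v=(1,0,1)$ from the question, for instance, already the first candidate set works: $$\det\begin{pmatrix}1&0&1\\1&0&0\\0&1&0\end{pmatrix}=1\neq 0,$$ so $\{(1,0,1),(1,0,0),(0,1,0)\}$ is a basis of $\mathbb R^3$.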
How to find $x$-intercept on TI-83/TI-84 calculator without having to set the bounds for each intercept?
You can also draw the help function $y=0$. Then you can do 2nd $\rightarrow$ Trace $\rightarrow$ intersect. You will have to select the two curves and do a guess, but I think it's still more convenient than giving left/right bounds.
Is the trivial ring regular?
The empty scheme is regular according to EGA IV.5.8.2. Accordingly, the zero ring is regular according to Bourbaki's AC.VIII.5 Exercice 6. (Of course, the point is that regularity is defined as some property for every point or for every prime ideal, resp.)
If $(2146!)_{10}=(x)_{26}$, what will be the number of consecutive zeroes at the end of $x$?
If we were asked to find the number of zeroes at the end of, say, $36!$ in decimal notation, that is the same as asking for the highest value of $k$ such that $10^k$ divides $36!$ Similarly for your question we need to find the highest value of $k$ such that $26^k$ divides $2146!$. Since $26=2\cdot 13$, we can find the answer for $2$ and $13$ separately and then take the lower value. Clearly there will a far higher power of $2$ that divides $2146!$ than the maximum power of $13$, so we can focus on finding the multiplicity of $13$ in $2146!$. As an illustration we can look at the number of zeroes at the end of $456!$ in base-$26$. $\lfloor 456/13\rfloor = 35$ of the contributing numbers are divisible by $13$, and $\lfloor 456/13^2\rfloor = 2$ of those are divisible by $13^2=169$. None are divisible by $13^3=2197$. So $13^{37} \mid 456!$ and $13^{38} \nmid 456!$ and there are $37$ base-$26$ zeroes at the end of $456!$
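The divisibility counts used above can also be automated with Legendre's formula; here is a small sketch in Python, reproducing the $456!$ example and applicable verbatim to $2146!$:

```python
def multiplicity_in_factorial(n, p):
    """Exponent of the prime p in n!, by Legendre's formula."""
    count, power = 0, p
    while power <= n:
        count += n // power
        power *= p
    return count

print(multiplicity_in_factorial(456, 13))   # 35 + 2 = 37, as computed above
# multiplicity_in_factorial(2146, 13) answers the original question
```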
The normal distribution - how to calculate the integral
Here is the table it references. Note that the table shows the area between $0$ and some number $a > 0$. (The format of the table is unconventional, in my opinion. Usually left-tailed or right-tailed - not from $0$ - areas are given. It is important that you know what kind of $Z$-table you are looking at when using one.) Note that $$P(-1 < Z < 1.5) = P(Z < 1.5) - P(Z < -1) = 1 - P(Z \geq 1.5) - [1-P(Z \geq -1)] = P(Z \geq -1)-P(Z \geq 1.5)$$ Draw a picture of the normal curve. Intuitively, the area in $[-1, \infty)$ should be equal to that of $(-\infty, 1]$ by symmetry. The area under the curve in $[0, 1]$ is $0.3413$ as shown in the table. The area in $(-\infty, 0)$ is $0.5$. So the area in $(-\infty, 1]$ is $0.5+0.3413=0.8413$. The area in $[0, 1.5]$ is $0.4332$ (see table). The area in $[0, \infty)$ is $0.5$. So the area in $(1.5, \infty)$ is $0.5-0.4332 = 0.0668$. Hence, the answer is given by $0.8413-0.0668=0.7745$. For more traditional tables, the identity$$P(Z < -a) = 1 - P(Z<a)$$ is useful.
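If you want to double-check the table lookups, a one-line numerical check is possible (assuming scipy is available):

```python
from scipy.stats import norm

# P(-1 < Z < 1.5) for a standard normal Z
print(norm.cdf(1.5) - norm.cdf(-1.0))   # approximately 0.7745
```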
Good Reference for Spanier-Whitehead duality?
Well, one classic source is some exercises in Spanier's book on algebraic topology (alas, I don't have my copy at hand so I can't give a more precise reference, but it is towards the end). There is also a chapter on it in MR0273608 (42 #8486) Cohen, Joel M. Stable homotopy. Lecture Notes in Mathematics, Vol. 165 Springer-Verlag, Berlin-New York 1970 v+194 pp. However, I have to admit that I find Adams's book very clear and beautiful. Is there a reason you don't like it?
Relation between sum of matrices and norm.
Hint. Suppose $(M+N)x=0$ for some unit vector $x$. Then $\|N\|_2\ge\|Nx\|_2=\|Mx\|_2\ge\sigma_n(M)$.
What is the probability of a monkey favoring red to blue and yellow?
The monkey picks red over blue, so it either belongs to RYB, RBY or YRB. If all events are equally likely, the probability that the monkey will prefer red over yellow equals: $$\frac{P[RYB] + P[RBY]}{P[RYB] + P[RBY] + P[YRB]} = \frac{\frac{1}{6} + \frac{1}{6}}{\frac{1}{6} + \frac{1}{6} + \frac{1}{6}} = \frac{2}{3}$$
Difference between the sum of local assortativity and the total assortativity of a graph
The issue is related to the estimation of the mean and the variance of the excess degree distribution that is part of the equation. As is shown in the output of the implementation above, those metrics are being calculated based on the vector of excess degrees, but in the paper it is clear that the excess degree distribution really follows this formula: $$q(k) = \frac{(k+1)p(k+1)}{\overline k}$$ where $p(k)$ is the probability of a node having degree $k$ and $\overline k$ is the average degree. Making the respective changes, the result of the estimation by both methods is the same:

Sum of local assortativities = -1.0
igraph assortativity = -1.0
Diff calculation = 0.0
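As an illustration of that formula, here is a small sketch (with numpy; the degree sequence is a made-up example) of how $q(k)$ can be computed from a degree sequence rather than from the raw vector of excess degrees:

```python
import numpy as np

def excess_degree_distribution(degrees):
    """q(k) = (k+1) p(k+1) / k_bar, computed from a degree sequence."""
    degrees = np.asarray(degrees)
    kmax = degrees.max()
    p = np.bincount(degrees, minlength=kmax + 1) / len(degrees)  # empirical p(k)
    kbar = degrees.mean()
    k = np.arange(kmax)                                          # excess degrees 0..kmax-1
    return (k + 1) * p[k + 1] / kbar

# hypothetical degree sequence of a small graph
print(excess_degree_distribution([1, 1, 2, 2, 3, 3]))   # sums to 1
```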
explanation of a step in an intuitive proof of Leibniz rule
From what I understand from the video, it's only a heuristic technique for computing the derivative of the integral $I(t) := \int_{x_1}^{x_2} f(x, t) dx.$ Thus, it is meant to be taken with a grain (or rather a full tablespoon) of salt. It is not rigorous. In his video, $dt$ signifies a finite quantity, however small, thus a constant; it is not to be taken as the $1$-form $dx$ (or measure, what have you, choose your preferred way of looking at integrals), and thus it can be safely moved outside the integral.
$\frac{A}{B}$ for $A=\frac1{1\cdot2}+\frac1{3\cdot4}+\dots+\frac1{21\cdot22}$ and $B=\frac1{12\cdot22}+\frac1{13\cdot21}+\dots+\frac1{22\cdot12}$
Using the partial fraction decompositions \begin{align} \frac{1}{2n(2n-1)}&=\frac{1}{2n-1}-\frac{1}{2n}\\ \frac{1}{(11+n)(23-n)}&=\frac{1}{34}\left(\frac{1}{11+n}+\frac{1}{23-n}\right) \end{align} we have \begin{align} A&=\left(\frac{1}{1}-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{4}\right)+\dots+\left(\frac{1}{21}-\frac{1}{22}\right)\\ &=\left(\frac{1}{1}+\frac{1}{3}+\dots+\frac{1}{21}\right)-\left(\frac{1}{2}+\frac{1}{4}+\dots+\frac{1}{22}\right)\\ &=\left(\frac{1}{1}+\frac{1}{2}+\dots+\frac{1}{22}\right)-2\left(\frac{1}{2}+\frac{1}{4}+\dots+\frac{1}{22}\right)\\ &=\left(\frac{1}{1}+\frac{1}{2}+\dots+\frac{1}{22}\right)-\left(\frac{1}{1}+\frac{1}{2}+\dots+\frac{1}{11}\right)\\ &=\left(\frac{1}{12}+\frac{1}{13}+\dots+\frac{1}{22}\right) \end{align} and \begin{align} 34\cdot B&=\left(\frac{1}{12}+\frac{1}{22}\right)+\left(\frac{1}{13}+\frac{1}{21}\right)+\dots+\left(\frac{1}{22}+\frac{1}{12}\right)\\ &=2\cdot\left(\frac{1}{12}+\frac{1}{13}+\dots+\frac{1}{22}\right)\\ &=2\cdot A \end{align} so $A/B=17$.
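A quick exact check of the final ratio with Python's fractions module, if you want to confirm the arithmetic:

```python
from fractions import Fraction

A = sum(Fraction(1, (2*n - 1) * (2*n)) for n in range(1, 12))    # 1/(1*2) + ... + 1/(21*22)
B = sum(Fraction(1, (11 + n) * (23 - n)) for n in range(1, 12))  # 1/(12*22) + ... + 1/(22*12)
print(A / B)   # 17
```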
Why is this sum of Kronecker products singular?
Add Row1 to Row4 and Row2 to Row5. These are elementary row operations that don't change the determinant, but now Row4 and Row5 are equal, so the determinant is zero.
Looking for a differentiable function which behaves somewhat like $\min(x,1)$
First, let $$f(x)=\begin{cases}e^{\frac{-1}{x}}&x>0\\0&x\le0\end{cases}$$ It can be shown that $f$ is smooth. Then one such desired function is $$g(x)=\frac{f(x)}{f(x)+f(1-x)}$$ Since $f$ is smooth, and $f(x)+f(1-x)$ is never $0$, $g$ is also smooth. $g$ takes values $0$ for $x<0$, and values $1$ for $x>1$. It's called a smooth transition function.
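A small numerical sketch of these two functions (plain Python; the names are just illustrative), showing the transition from $0$ to $1$ across $[0,1]$:

```python
import math

def f(x):
    # smooth on all of R, identically 0 for x <= 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    # smooth transition function: 0 for x <= 0, 1 for x >= 1
    return f(x) / (f(x) + f(1.0 - x))

for x in [-0.5, 0.0, 0.25, 0.5, 0.75, 1.0, 1.5]:
    print(x, round(g(x), 4))
```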
Find $a_1$ so that $ a_{n+1}=\frac{1}{4-3a_n}\ ,n\ge1 $ is convergent
The Moebius transformation $$T:\quad\bar{\mathbb C}\to\bar{\mathbb C},\qquad x\mapsto T(x):={1\over 4-3x}$$ has the two fixed points $1$ and ${1\over3}$. We therefore introduce a new complex projective coordinate $z$ via $$z:={x-{1\over3}\over x-1},\qquad{\rm resp.},\qquad x={z-{1\over3}\over z-1}\ .$$ In terms of this coordinate $T$ appears as ${\displaystyle \hat T(z)={z\over3}}$ (with fixed points $0$ and $\infty$), so that $$\bigl(\hat T\bigr)^{\circ n}(z)={z\over 3^n}\ .$$ It follows that for all initial points $z\ne\infty$ we have $$\lim_{n\to\infty}\bigl(\hat T\bigr)^{\circ n}(z)=0\ .$$ In terms of the original variable $x$ this means that for all initial points $x\ne1$ we have $$\lim_{n\to\infty}T^{\circ n}(x)={1\over3}\ .$$ There is, however, the following caveat: The above argument refers to the domain $\bar{\mathbb C}$; but maybe you want to exclude $x=\infty$ as a generic point. In terms of the coordinate $z$ this is the point $z_*=1$. For all initial values $z_k=3^k$ $(k\geq1)$ we have $\bigl(\hat T\bigr)^{\circ k}z_k=z_*$. This implies that in the original formulation of the problem you have $T^{\circ k}(x_k)=\infty$ (i.e., you "accidentally" hit $\infty$ after finitely many steps) for all initial points $x_k=\bigl(3^k-{1\over3}\bigr)/(3^k-1)$ $(k\geq1)$.
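Numerically the picture is easy to reproduce: iterating $x\mapsto 1/(4-3x)$ from a generic starting value (away from the fixed point $1$ and from the exceptional initial points described above) converges to $1/3$. A tiny sketch:

```python
x = 0.7   # an arbitrary generic starting value
for n in range(12):
    x = 1.0 / (4.0 - 3.0 * x)
    print(n, x)   # the iterates approach 1/3
```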
Maximum Value of Trig Expression w/o Calculus
You can look to write $\sin (3x)+2\cos(3x)=a\sin (3x+\phi)$. This will be possible when the frequency of the two waves is the same, as here. Now use the angle-sum formula to get $$\sin (3x)+2\cos(3x)=a \sin(3x) \cos (\phi) + a \cos (3x) \sin(\phi)$$ Then $a \cos(\phi)=1,\\ a \sin(\phi)=2$ so $\tan (\phi)=2, \\ \sin(\phi)=\frac 2{\sqrt 5},\\ \cos(\phi)=\frac 1{\sqrt 5},\\ a=\sqrt 5$ Now we can see that the maximum is $\sqrt 5$ and the minimum is $-\sqrt 5$
$8$ cards are drawn from a deck of cards without replacement
For each of the possible values for the sum, $S$, of the three highest cards -- integers on $[6,30]$ -- figure out the number of combinations of eight-card hands that would give the three highest cards that sum. For $S=30$, we can have anywhere from three to eight $10$s, so we consider those separately: $$P(S=30, 10, 10, 10) = \frac{{16 \choose 3}{36 \choose 5} + {16 \choose 4}{36 \choose 4} + {16 \choose 5}{36 \choose 3} + {16 \choose 6}{36 \choose 2} + {16 \choose 7}{36 \choose 1} + {16 \choose 8}{36 \choose 0}}{{52 \choose 8}}.$$ For something a little more complicated, you'll need to add up more combinations. For $S=20$, for example, there are twelve possible triples for the three highest cards: $$(7, 7, 6), \\ (8, 8, 4), (8, 7, 5), (8, 6, 6), \\ (9, 9, 2), (9, 8, 3), (9, 7, 4), (9, 6, 5), \\ (10, 8, 2), (10, 7, 3), (10, 6, 4), (10, 5, 5).$$ Let's calculate $P(S=20, 9, 8, 3).$ We draw one $9$: ${4 \choose 1}$. We draw one $8$: also ${4 \choose 1}.$ Then there needs to be at least one $3$ (but due to the constraints on the number of $2$s and $3$s, there can't be less than two!): $$P(S=20, 9, 8, 3) = \frac{{4 \choose 1}{4 \choose 1}\left[{4 \choose 4}{4 \choose 2} + {4 \choose 3}{4 \choose 3} + {4 \choose 2}{4 \choose 4}\right]}{{52 \choose 8}}.$$ (Note that there are little subtleties like the fact that $(10, 9, 1)$ isn't possible. If you draw a $10$ and a $9$, at least two of the remaining six cards must be something higher than an ace.) Once you have the probability of getting each sum individually, then getting the expected value is easy. Counting up the number of combinations of cards for each sum is the laborious part.
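If you only need a numerical value, or a check on the hand counts, a Monte Carlo estimate of $E[S]$ is straightforward; the sketch below assumes aces count as $1$ and that the sixteen ten-valued cards are $10$, J, Q, K, consistent with the counts above:

```python
import random

# 4 cards each of values 1..9, plus 16 cards worth 10
deck = [v for v in range(1, 10) for _ in range(4)] + [10] * 16

def top_three_sum():
    hand = random.sample(deck, 8)
    return sum(sorted(hand)[-3:])

trials = 100_000
print(sum(top_three_sum() for _ in range(trials)) / trials)   # Monte Carlo estimate of E[S]
```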
Given $E:V\rightarrow V$ s.t. $E^2=E$, prove $V=V_0 \ \oplus V_1$ and $E$ is diagonalizable
Quick proof: Since $E^2-E=0$, the minimal polynomial of $E$ divides $\lambda^2-\lambda=\lambda(\lambda-1)$, a product of distinct linear factors, therefore $E$ is diagonalizable and $V$ is the direct sum of its eigenspaces, i.e., $V=V_0\oplus V_1$. Direct proof: You’ve already shown that $V_0$ and $V_1$ have a trivial intersection. By definition, $V_0=\ker E$. Per астон’s hint, for any $v\in V$, $E(v-Ev)=Ev-E^2v=Ev-Ev=0$, hence $v-Ev\in\ker E=V_0$. Furthermore, $E(Ev)=Ev$ for all $v\in V$, so $Ev\in V_1$. Therefore, $v=(v-Ev)+Ev$ is the sum of an element of $V_0$ and an element of $V_1$. Since $V$ is the direct sum of the eigenspaces of $E$, there is a basis of $V$ that consists of eigenvectors of $E$, hence $E$ is diagonalizable.
$\eta(s)+\eta(1-s)=F(s)-G(s)$ and roots of $F(s),G(s)$ are on the critical line
Re Question 2: I came up with an idea recently and uploaded "A Sequence of Cauchy Sequences Which Is Conjectured to Converge to the Imaginary Parts of the Zeros of the Riemann Zeta Function", which is a reformulation; I believe that if this statement could be proven then the RH would be proven. It is similarly related to interlacing zeros, formulated in terms of attractive/repulsive fixed points of a dynamical system formed by a scaled function of the Hardy Z function (normalized by Omega so that the Lipschitz constant can stay bounded, see refs in the linked paper). The criterion reads as follows. Let \begin{equation} Y_{n, m} (t) = \left\{ \begin{array}{ll} t & m = 0\\ t + h_{n, m} \cos (\pi n) \tanh \left( \frac{Z (Y_{n, m - 1} (t))}{| \Omega (t) | \prod_{k = 1}^{n - 1} \tanh (Y_{n, m - 1} (t) - y_k)} \right) & m \geqslant 1 \end{array} \right. \end{equation} denote the $m$-th iterate of the $n$-th iteration function corresponding to the $n$-th zero of the Hardy $Z$ function, where \begin{equation} \Omega (t) = \left\{ \begin{array}{ll} 1 & t = e\\ e^{\frac{3}{4} \sqrt{\frac{\log (t)}{\log (\log (t))}}} & t \neq e \end{array} \right. \end{equation} is a lower bound for the running maximum of $| Z (s) |$ \begin{equation} \max_{0 \leqslant s \leqslant t} | Z (s) | > \Omega (t) \quad \forall t \geqslant 45.590 \ldots \end{equation} ensuring that \begin{equation} \frac{| Z (t) |}{\Omega (t)} > 0 \quad \forall t \geqslant 45.590 \ldots \end{equation} which normalizes the range of $Z (t)$, which is known to grow in both maximum and average value as $t \rightarrow \infty$, and $h_{n, m}$ is a factor which influences the rate of convergence \begin{equation} h_{n, m} = \left\{ \begin{array}{ll} 1 & m \leqslant 2\\ h_{n, m - 1} & (\Delta Y^{}_{n, m - 2} (t)) = (\Delta Y^{}_{n, m - 1} (t))\\ \frac{h_{n, m - 1}}{2} & (\Delta Y^{}_{n, m - 2} (t)) \neq (\Delta Y^{}_{n, m - 1} (t)) \end{array} \right. \end{equation} where \begin{equation} \Delta Y_{n, m} (t) = Y_{n, m} (t) - Y_{n, m - 1} (t) \end{equation} is the $1$-st difference of the $m$-th iterate for the $n$-th zero. Let \begin{equation} c_n (\varepsilon) = \frac{Z (\max_{t \in [0, y_n]} \{ Y_{n + 1, 1} (t) \geqslant t \} + \varepsilon) - Z (\min_{t \in [y_n, \infty]} \{ Y_{n + 1, 1} (t) \leqslant t \} - \varepsilon)}{2 \varepsilon + \max_{t \in [0, y_n]} \{ Y_{n + 1, 1} (t) \geqslant t \} - \min_{t \in [y_n, \infty]} \{ Y_{n + 1, 1} (t) \leqslant t \}} \end{equation} denote the Lipschitz constant. Then, if it is always possible to choose a small enough positive $\varepsilon$ such that $0 < c_n (\varepsilon) < 1$, the Riemann Hypothesis is true.
Approximation of $e^{-x^2}$
For $x$ close to $0$ we have: $$e^x\approx 1+x\Rightarrow e^{-x^2}\approx 1-x^2$$
How can I prove $\text{Im } f = (0, 1]$ for $f(x) = 1/(1 + x^2)$?
For the first part of your proof, I think you meant to write $1 \le 1 + a^2 < \infty$. Given $y \in S$, consider $f(x)$ where $x = \sqrt{\frac{1}{y} - 1}$. (This last expression is $f^{-1}(y)$.)
Confused about computation of integral basis
The inclusion $i : \mathbf{Z} \to \mathcal{O}$ induces an isomorphism $\bar i \colon \mathbf{Z} / \ell \mathbf{Z} \to \mathcal O / \lambda \mathcal O$. In particular, $\bar i$ is surjective. Thus for any $x \in \mathcal O$ we have that $x + \lambda \mathcal O$ lies in the image of $\bar i$. It can thus be written as $i(a) + \lambda \mathcal O$ for some $a \in \mathbf{Z}$. Now $x \in i(a) + \lambda \mathcal O = a + \lambda \mathcal O$. This proves $\mathcal O \subseteq \mathbf{Z} + \lambda \mathcal O$. The other inclusion is clear.
Computation Operation in one Recurrence Relation
How many terms are in the sum defining $T(n)$ as a function of $n$? You need one multiplication for each, then one fewer addition to do the sum. The numbers $T(n)$ grow much faster; are you expected to account for that?
Proof of exponential theorem
Note that $$ a^x=e^{x\ln a}=\sum_{k=0}^\infty\frac{x^k(\ln a)^k}{k!} $$ where the second equality comes from knowledge of the Maclaurin series for $e^x$
Inversive distance between concentric circles
For two circles $\alpha$ and $\beta$ with radical axis $l$, the pencil $\alpha\beta$ is the set of circles $\gamma$ which share this radical axis, i.e. the radical axis of $\gamma$ and $\alpha$ is $l$ as well (and thus automatically also the radical axis of $\gamma$ and $\beta$). In the picture below, the pencil is drawn in solid lines (black), and the common axis is the vertical (grey) line. If and only if the circles do not intersect, there exists a circle (dashed red in the picture), which is centered at the intersection of $l$ and the common axis of symmetry $g$ of the pencil, and orthogonal to all circles of the pencil. This circle intersects the axis of symmetry in the two limiting points of the pencil. These can be thought to be the two circles of zero radius belonging to the pencil. Inverting in a circle (dotted blue in the picture) centered at one of those points will leave the axis of symmetry invariant. The dashed circle, however, becomes a line orthogonal to $g$, since it passes through the center of inversion. A circle belonging to the pencil will be transformed to another circle orthogonal to both lines, which is thus necessarily centered at their intersection. Therefore, the pencil is transformed into a pencil of concentric circles, all centered at the intersection of $g$ and the image of the dashed circle. Since the radius of the circle of inversion is arbitrary, and choosing another radius corresponds to scaling the inverted picture, any well-defined quantity derived from the inverted picture has to be invariant under scaling. Therefore, it has to be defined as an expression of the ratios of the concentric circles - which is exactly what Coxeter proceeds to do. He defines $(\alpha, \beta)$ - the "distance" of the two circles - to be the logarithm of the ratios of the radii of the inverted circles.
Is this a valid proof for showing $\int_{1}^{\infty}\frac{3e^{yt}}{y^4}\text{ d}y$ diverges?
Yes it is valid to take $K>1$. We assume that $t>0$. One may observe that, for $K>1$, $$ \frac{3e^{yt}}{y^4}\ge \frac{3e^{yt}}{K^4}, \qquad y \in [1,K], $$ giving $$ \int_1^K\frac{3e^{yt}}{y^4}\:dy\ge \frac{3}{K^4}\int_1^K e^{ty}\:dy=\frac{3}{K^4}\cdot \frac{e^{K t}-e^t}{t} $$ the latter expression tends to $\infty$ as $K \to \infty$.
Prove that : $X_n \xrightarrow{\mathrm{a.s.}}0\iff \sum_n P(X_n>0) <\infty$
I completely overthought this problem, thank you Did for pointing me in the right direction. Actually, $\begin{align}X_n\xrightarrow{\mathrm{a.s.}} 0 &\iff P(\{w : X_n(w)\to 0\}) = 1 \\ &\iff P(\{w : \exists n, \forall k\geq n, X_k(w)=0\})=1 \text{ because } X_n(w) \text{ is integer-valued} \\ &\iff P(\liminf\,\{X_n=0\}) =1 \\ &\iff P(X_n >0 \text{ i.o.})=0 \end{align}$ If $\sum P(X_n >0 ) < \infty$, then $P(X_n >0 \text{ i.o.})=0$ by the Borel–Cantelli lemma, hence $X_n\xrightarrow{\mathrm{a.s.}} 0$. If $\sum P(X_n >0 ) = \infty$, then $P(X_n >0 \text{ i.o.})=1$ (by the second Borel–Cantelli lemma) and there is no convergence.
If $0<x<1$ then prove that $x^a \leq x < 1$ for all $a\in \mathbb{R}, a\geq 1$.
I have a proof of this inequality for all $a \geq 1, a\in \mathbb{R}$. Fix $0<x<1$ and define a function $f(a) = x^a - x$. Then $f^{\prime}(a) = x^a \ln x<0$. Note that $\ln x <0$ for $0<x<1$. So the function is decreasing for $a \geq 1$. That is, $f(a) \leq f(1)=0$, as desired. This requires no induction and is more powerful than induction.
Orientation of circles in $\Bbb C_\infty$
The set in (c) consists of those $S^{-1} z$ with $\text{Im}(z, Sz_1, Sz_2, Sz_3) > 0$. Replacing the dummy variable $z$ with $Sz$ (with $S$ being a bijection) gives the result. For the second part, use the fact that the cross ratio is invariant under Mobius transformations (and $\Gamma$ is arbitrary) to reduce to the case $S(z) = z$ and $(z_1, z_2, z_3) = (1, 0, \infty)$.
Integrating With Trig Subsitution
In all the following, I intentionally forget the constant to add to every primitive, in order to simplify the notations. All equalities with primitives are to be understood "up to a constant". Using trigonometric functions, and substitution $x=\tan u$ (then $\mathrm{d}x=(1+\tan^2u)\,\mathrm{d}u = \frac{\mathrm{d}u}{\cos^2u}$): $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x=\int \frac{\sqrt{1+\tan^2 u}}{\tan u} \frac{1}{\cos^2u}\mathrm{d}u=\int \frac{1}{|\cos u| \tan u} \frac{1}{\cos^2u} \mathrm{d}u$$ If $\cos u > 0$, $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x=\int \frac{1}{\sin u\cos^2u} \mathrm{d}u$$ Then $$\frac{1}{\sin u\cos^2u}=\frac{1}{\cos^2u}\left(\frac{1}{\sin u} - \sin u + \sin u\right) = \frac{1}{\cos^2u}\left( \frac{1-\sin^2 u}{\sin u}+\sin u\right)$$ $$=\frac 1 {\sin u} + \frac{\sin u}{\cos^2 u}$$ And (see here) $$\int \frac{1}{\sin u} \mathrm{d}u=\log \left|\tan \frac u 2\right|$$ $$\int \frac{\sin u}{\cos^2 u} \mathrm{d}u = \frac 1 {\cos u}$$ So $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x=\frac 1 {\cos u} +\log \left|\tan \frac u 2\right| = \frac 1 {\cos (\arctan x)} +\log \left|\tan \frac {\arctan x} 2\right| $$ Now $\frac{1}{\cos^2 u}=1+\tan^2 u$ thus $\frac 1{\cos (\arctan x)} = \sqrt{1+x^2}$, $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x = \sqrt{1+x^2} +\log \left|\tan \frac {\arctan x} 2\right| $$ Also, notice we have always $\cos u = \cos (\arctan x) > 0$ since $\arctan x \in ]-\pi/2, +\pi/2[$. Our assumption above was not too restrictive :-) There is further simplification, with $\tan \frac{\arctan x}{2}$. The idea is to write $\tan \frac{\theta}2$ as a function of $\tan \theta$, that is, reversing the usual formula $\tan \theta = \frac{2t}{1-t^2}$ where $t=\tan \frac{ \theta }2$. That is, we want to solve $$(1-t^2) \tan \theta -2t = 0$$ $$(\tan \theta) t^2 + 2t - \tan \theta = 0$$ It's a quadratic equation with $\Delta = 4(1+\tan^2\theta)$, so the solutions are $$t = \frac{-1 \pm \sqrt{1+\tan^2 \theta}}{\tan \theta}$$ Notice that $t = \tan \frac{\theta}{2}$ has the same sign as $\tan \theta$ for $\theta \in ]-\pi/2, \pi/2[$, so the sign in $\pm$ is actually a $+$.
$$\tan \frac{\theta}{2}=\frac{\sqrt{1+\tan^2 \theta}-1}{\tan \theta}$$ And $$\tan \frac{\arctan x}{2}=\frac{\sqrt{1+x^2}-1}{x}$$ Therefore, we have $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x = \sqrt{1+x^2} +\log \left|\frac{\sqrt{1+x^2}-1}{x}\right|=\sqrt{1+x^2} +\log (\sqrt{1+x^2}-1) - \log |x| $$ The answer can be written slightly differently: $$\sqrt{1+x^2} -\log (\sqrt{1+x^2}+1) + \log |x|$$ It's enough to check that $$\frac{(\sqrt{1+x^2}+1)(\sqrt{1+x^2}-1)}{|x|^2}=\frac{1+x^2-1}{x^2}=1$$ Thus, taking logarithms, $$\log (\sqrt{1+x^2}+1) + \log (\sqrt{1+x^2}-1) - 2\log |x| = 0$$ $$\log (\sqrt{1+x^2}-1) - \log |x| = -\log (\sqrt{1+x^2}+1) + \log |x|$$ There is still place for a bit of simplification: $$\log (\sqrt{1+x^2}+1) - \log |x|=\log \frac{\sqrt{1+x^2}+1}{|x|}=\log \left(\sqrt{1+\frac{1}{|x|^2}}+\frac{1}{|x|}\right)$$ You may recognize $\arg\sinh t = \log (t + \sqrt{1+t^2})$, so $$\log (\sqrt{1+x^2}+1) - \log |x|=\arg \sinh \frac{1}{|x|}$$ Thus, as a final step, we can write $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x = \sqrt{1+x^2}-\arg \sinh \frac{1}{|x|}$$ Another solution, using hyperbolic functions, with substitution $x=\sinh u$ (and $\mathrm{d}x=\cosh u \,\mathrm{d}u$), using the fact that $\cosh^2 u - \sinh^2 u = 1$: $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x = \int \frac{\cosh u}{\sinh u} \cosh u \,\mathrm{d}u= \int \frac{1+\sinh^2u}{\sinh u} \mathrm{d}u$$ $$=\int \left(\frac{1}{\sinh u}+\sinh u\right) \mathrm{d}u$$ And, with the usual $\int \frac 1 {\sinh u} \,\mathrm{d}u = \log\left|\tanh \frac u 2\right|$, $$\int \frac{\sqrt{1+x^2}}{x} \mathrm{d}x=\log\left|\tanh \frac u 2\right| + \cosh u = \log\left|\tanh \frac {\arg \sinh x} 2\right| + \cosh (\arg \sinh x)$$ $$=\sqrt{1+x^2} + \log\left|\tanh \frac {\arg \sinh x} 2\right|$$ Like with the first solution, it would be possible to simplify further.
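As a final sanity check of the antiderivative obtained by both methods, one can differentiate it numerically (plain Python, at a sample point $x>0$):

```python
import math

def F(t):
    # candidate antiderivative found above (for t > 0)
    return math.sqrt(1.0 + t*t) - math.asinh(1.0 / t)

x, h = 2.0, 1e-6
numeric_derivative = (F(x + h) - F(x - h)) / (2.0 * h)
print(numeric_derivative, math.sqrt(1.0 + x*x) / x)   # the two values should agree
```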
Relation: Modular Forms and hyperbolic geometry, or, why do they map from $\mathbb{H}$?
I would say that modular forms come from looking at the automorphism group of $\mathbb{H}$, which is $PSL_2(\mathbb{R})$. In particular, we look at nice discrete subgroups of finite covolume (i.e. subgroups $\Gamma$ for which the quotient $\mathbb{H}/\Gamma$ has finite hyperbolic area), such as $PSL_2(\mathbb{Z})$. In this sense, modular forms are a specific example of automorphic forms. For $\mathbb{R}^2$, Euclidean space, the automorphism group is (I think?) $\mathbb{R}^2 \rtimes O(2)$. I'm not sure exactly what the discrete co-compact subgroups of this are, but I would suspect that they are a lot less interesting than those that arise from looking at the hyperbolic plane. Most likely, all that you get is the study of elliptic functions (i.e. functions that are defined on an elliptic curve, which is the quotient of $\mathbb{C} = \mathbb{R}^2$ modulo a lattice). Now, you can combine these together to look at Jacobi forms...
Checking whether the function $u(x)=e^x$ solves the integral equation $u(x) + \lambda \int_0^1\sin(xt) u(t) dt=1$
In order to see whether or not $u(x)=e^x$ is a solution of the integral equation $$u(x) + \lambda \int_0^1\sin(xt) u(t) dt=1$$ it is sufficient to put $u=e^x$ into the equation and check the result: $$e^x + \lambda \int_0^1\sin(xt) e^t dt=e^x + \lambda \frac{x+(\sin x-x\cos x)e}{1+x^2}$$ Obviously, this expression is not identically equal to the right-hand side $1$. Hence, $e^x$ is NOT a solution of the equation. Another way, without any need to compute the integral, consists in computing the second derivative: $$u''(x) = \lambda \int_0^1\sin(xt) t^2 u(t) dt$$ $$u''_{(x=0)}=0$$ Comparing to the second derivative of $e^x$ at $x=0$, which is equal to $1$, the two second derivatives are not equal. So the two functions are different. The conclusion is the same: $e^x$ is not a solution of the equation.
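If in doubt about the closed form of the integral, a quick numerical comparison at a sample point confirms it (assuming scipy is available):

```python
import math
from scipy.integrate import quad

x = 1.3   # an arbitrary test point
numeric = quad(lambda t: math.sin(x * t) * math.exp(t), 0.0, 1.0)[0]
closed_form = (x + (math.sin(x) - x * math.cos(x)) * math.e) / (1.0 + x * x)
print(numeric, closed_form)   # the two values should agree
```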
Proof check for "every neighbourhood is an open set"
I think your proof is incorrect. There is a problem with your statement: "Since $N_q$ is not in $N$, then $p$ is not in $N$". It is true that $N_q$ is not a subset of $N$, but it is certainly possible that some elements (points) of $N_q$ are also elements of $N$.
Self multiplication of a CDF degenerates into a Dirac Delta?
As $F$ is a CDF, we have $0 \le F(x) \le 1$ for all $x \in \mathbb R$. As $q^n \to 0$ exactly for $\lvert q \rvert < 1$, we have $$ F^n(x) \to \begin{cases} 1 & F(x) = 1\\ 0 & \text{otherwise} \end{cases} $$ So, $F^n$ tends pointwise to $\chi_{\{F=1\}}$, the indicator function of the set where $F$ equals $1$, and this is never equal to a Dirac distribution. In both your examples, $F$ is never $1$, so $F^n \to 0$ for the Gaussian and the exponential CDFs.
counting number of speed bumps
There’s a speed bump at one end of the road. If it’s at the beginning, then the speed bumps are at positions $20k$ feet for $k=0,1,\dots,50$, with the last speed bump at $1000$ feet. There are $51$ integers in the set $\{0,1,2,\dots,50\}$ so there are $51$ speed bumps. If it’s at the end of the road, there’s a speed bump at position $1015-20k$ feet for each $k$ from $0$ through $50$, and there are again $51$ such values of $k$. It appears that you forgot to count the bump at the end of the road.
$GL(n, \mathbb{C})$ is a properly embedded Lie subgroup of $GL(2n, \mathbb{R})$
If you consider all (not necessarily invertible) matrices, then the obvious extension of $\beta$ to a map $M_n(\mathbb{C})\to M_{2n}(\mathbb{R})$ is a linear injection, and hence a topological embedding. Since $GL(n,\mathbb{C})$ and $GL(2n,\mathbb{R})$ are just open subspaces of $M_n(\mathbb{C})$ and $M_{2n}(\mathbb{R})$, it follows that $\beta$ is also a topological embedding. (To prove that any linear injection $i:\mathbb{R}^m\to \mathbb{R}^n$ is an embedding, note that there is an invertible linear map $T:\mathbb{R}^n\to\mathbb{R}^n$ such that $Ti:\mathbb{R}^m\to\mathbb{R}^n$ is just the standard inclusion of $\mathbb{R}^m$ into $\mathbb{R}^n$. Thus $Ti$ is an embedding, and hence so is $i$ since $T$ is a homeomorphism.)
Units in Polynomial Rings
Hint: Note that $(2x+1)(2x+1)=1$ in $\mathbb{Z}_4$. For another example, use $\mathbb{Z}_9$. Note that $(3x+1)(6x+1)=1$, so $3x+1$ has a multiplicative inverse.
Check series for convergence or divergence
Your way is fine, since both series converge; indeed $$ \sum_{n=1}^{\infty} \frac{2}{4^n} + \sum_{n=1}^{\infty} \frac{(-3)^n}{4^n}= 2\sum_{n=1}^{\infty} \left(\frac{1}{4}\right)^n + \sum_{n=1}^{\infty} \left(-\frac 3 4\right)^n$$ which are two convergent geometric series.
How to show normality of the subgroup by conjugating it with generators of the group?
Define a map $c\colon G\to \operatorname{Aut} G$ by $c(g)(x) = gxg^{-1}$. If $g,h\in G$, then $$c(gh)(x) = (gh)x(gh)^{-1} = g(hxh^{-1})g^{-1} = (c(g)\circ c(h))(x),\ \forall x\in G,$$ so $c$ is a group homomorphism. That means that if $\{g_1,\ldots,g_n\}$ are generators of $G$, and $c(g_i)H = H,\ i = 1,\ldots, n,$ then also $c(g_i^{-1})H = c(g_i)^{-1}H = H$ for each $i$, so for any $g\in G$, write $g = g_{i_1}g_{i_2}\ldots g_{i_k}$ as a product of the generators and their inverses, and you have $$c(g)(h) = c(g_{i_1}g_{i_2}\ldots g_{i_k})(h) = (c(g_{i_1})c(g_{i_2})\ldots c(g_{i_k}))(h)\in H.$$
Odd subfield of cyclotomic field with 2-ramification
$m=2^k \prod_j p_j^{d_j}$. $\Bbb{Q}(\zeta_m)$ is totally ramified at the primes of $\Bbb{Q}(\zeta_{m/{p_j}^{d_j}})$ above $p_j$, so $L\subset \Bbb{Q}(\zeta_m)$ being unramified at $p_j$ implies that $L\subset \Bbb{Q}(\zeta_{m/p_j^{d_j}})$. Whence $$L\subset \bigcap_j \Bbb{Q}(\zeta_{m/p_j^{d_j}})=\Bbb{Q}(\zeta_{2^k})$$ Since $[\Bbb{Q}(\zeta_{2^k}):\Bbb{Q}]= 2^{k-1}$, the fact that $[L:\Bbb{Q}]$ is odd means that $L=\Bbb{Q}$.
Translation is continuous in $L^1$ for finite Borel measures
It's not true. Let $$\phi(x)=x^{-1/2}\chi_{(0,1)}(x)$$ and $$\phi_n(x)=\phi(x-1/n).$$ Let $$f(x)=|x|^{-1/2}$$ and $$I_n=\int f(x)\phi_n(x)\,dx.$$ Since $I_n<\infty$ (for $n=1,2,\dots$) there exist $a_n>0$ with $$\sum a_n<\infty$$ and $$\sum_1^\infty a_nI_n<\infty.$$ Define $$\psi=\sum_1^\infty a_n\phi_n$$ and $$d\mu=\psi\, dx$$ (that is, $\mu(E)=\int_E\psi(x)\,dx$). Then $\mu$ is a finite Borel measure (since $\psi\in L^1(\mathbb R)$), and dominated convergence shows that $\mu$ satisfies the continuity hypothesis. And $\sum a_nI_n<\infty$ shows that $f\in L^1(\mu)$, although $$\int f(x-1/n)\,d\mu(x)=\infty$$ for every $n$ (so that $\int|f(x)-f(x-1/n)|\,d\mu(x)=\infty$, since $f\in L^1(\mu)$).
Is $v_3=(2,0,3)$ in the subspace spanned by $v_1=(1,1,4)$ and $v_2=(-1,1,1)$?
It would be more correct to say that $v_1, v_2, v_3$ are linearly dependent, i.e. there exist nonzero scalars (numbers) $\alpha_1, \alpha_2, \alpha_3$ such that $$\alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 = 0$$ = the zero vector. In other words, $v_3 = -\frac{\alpha_1}{\alpha_3} v_1 - \frac{\alpha_2}{\alpha_3} v_2$. So $v_3$ is in the subspace spanned by $v_1$ and $v_2$, because that subspace is the totality of vectors of the form $\beta_1 v_1 + \beta_2 v_2$, where $\beta_i$ are scalars. Edit: Adren is right (some of the scalars may be zero in the definition of linear dependence). All that matters in our particular case is that $\alpha_3 \neq 0$.
Proof by induction verification
HINT: Try a partial fraction decomposition on $$ \sum_{k=1}^n\frac{1}{k^2+k} $$ and you will see it is a telescoping sum. Hope this helps :)
What is the origin of the (V)BODMAS rule?
6÷2(a+b) = 6÷2a+2b (if parentheses are 'ignored'). So, if I now give you the values of a as 1 and b as 2, then the answer would be 7. However, by retaining the parentheses it would be 6÷(2a+2b) and give 1. For me, the parentheses and correct use thereof become rather important in describing the intent, especially in more complex equations like I=(LR)/(1-(1+r)^-p) to calculate the installment on a bond.
(Soft question) Should you struggle with every question in the textbook or look at the solution manual for some?
This is definitely a question many students face, and has been discussed a bit on this site as well (see this excellent answer). In general, I think it's unwise to try and study and completely understand/come up with a solution to each and every exercise in a book. You should definitely read the solution after struggling for a while (~$1$h) and (re)reading the text, thinking about the exercises. But there might be literally thousands of exercises, and it would take significant effort to come up with all their solutions yourself, hence I think you should not attempt every question, and even in those you do, give up after a while if you cannot solve them. Instead, in textbooks which explicitly mark difficulty level, I would recommend trying $5$ or so from the "basic" questions per section/chapter to make sure you get the basics, $2$ to $3$ from the middle-level question, and $1$ question from the most difficult level. You should see the linked answer for more in depth discussion.
Help understanding a $\frak {sl}(2)$ Lie algebra representation
Consider these three $5\times5$ matrices:$$\mathcal H=\begin{bmatrix}4&0&0&0&0\\0&2&0&0&0\\0&0&0&0&0\\0&0&0&-2&0\\0&0&0&0&-4\end{bmatrix},\ \mathcal E=\begin{bmatrix}0&4&0&0&0\\0&0&6&0&0\\0&0&0&6&0\\0&0&0&0&4\\0&0&0&0&0\end{bmatrix}\text{, and }\mathcal F=\begin{bmatrix}0&0&0&0&0\\1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\end{bmatrix}.$$If $aE+bF+cH\in\mathfrak{sl}_2$, consider the matrix $a\mathcal E+b\mathcal F+c\mathcal H$. It's a $5\times5$ matrix and this defines a $5$-dimensional representation of $\mathfrak{sl}_2$ since, as you can check, $[\mathcal H,\mathcal E]=2\mathcal E$, $[\mathcal H,\mathcal F]=-2\mathcal F$, and $[\mathcal E,\mathcal F]=\mathcal H$.
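A quick numerical sanity check of these bracket relations (a minimal sketch in Python with numpy; the matrices are exactly the ones above):

```python
import numpy as np

H = np.diag([4, 2, 0, -2, -4])    # the diagonal matrix H above
E = np.diag([4, 6, 6, 4], k=1)    # super-diagonal entries 4, 6, 6, 4
F = np.diag([1, 1, 1, 1], k=-1)   # sub-diagonal of ones

def bracket(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

assert np.array_equal(bracket(H, E), 2 * E)
assert np.array_equal(bracket(H, F), -2 * F)
assert np.array_equal(bracket(E, F), H)
print("sl(2) commutation relations verified")
```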
Existence in real-analysis
You are most likely not supposed to use the Mean Value Theorem/Rolle's Theorem/Intermediate Value Theorem for this. It is highly likely that trying to prove Rolle's Theorem without assuming the property of the real numbers in question would be impossible. All you need to do is consider $A = \{x \in \mathbb{R} : x^{n} \leq a\}$. This set is nonempty and bounded from above; therefore the supremum of $A$ exists, call it $c$. Now you should argue that: 1. $c$ is positive; 2. $c^{n} = a$; 3. $c$ is unique with this property.
What is the integral of $\csc (\pi \sqrt y )$?
I re-edited the answer, with a slightly different substitution, to better reconstruct W.A.'s solution. First re-write your integral as \begin{eqnarray} \mathcal I = \int \frac{x}{\sin x} dx = \int \frac{2ix}{e^{ix}\left(1-e^{-2ix}\right)}dx. \end{eqnarray} Then use the substitution $e^{ix} = u$, which yields $$\mathcal I = 2i \int \frac{\log u}{1-u^2}du = i\int\frac{\log u}{1-u}du+i\int\frac{\log u}{1+u}du.$$ Recalling the definition of the dilogarithm for the first term, and using integration by parts for the second term, gives you $$\mathcal I = i\operatorname{Li}_2(1-u) + i\log u \log(1+u) - i\int\frac{\log(1+u)}{u}du.$$ Using again the dilogarithm to integrate the last term brings us to $$\mathcal I = i\operatorname{Li}_2(1-u) + i\log u \log(1+u)+i\operatorname{Li}_2(-u)+C,$$ and substituting back, $$\mathcal I = i\operatorname{Li}_2\left(1-e^{ix}\right) -x\log\left(1+e^{ix}\right) +i\operatorname{Li}_2\left(-e^{ix}\right) + C.\tag{1}\label{eq1}$$ Using the reflection formula $$\operatorname{Li}_2(1-z)+ \operatorname{Li}_2(z) = \frac{\pi^2}6 -\log z \log (1-z)$$ to replace the first term in \eqref{eq1}, you obtain the W.A. expression, as given in J.G.'s comment, that is $$\mathcal I = x\log\left(1-e^{ix}\right)-x\log\left(1+e^{ix}\right)+i\left[\operatorname{Li}_2\left(-e^{ix}\right)-\operatorname{Li}_2\left(e^{ix}\right)\right]+C_1.$$
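For reference, and assuming the reduction used for the original integral is the obvious one (this is an assumption, since that substitution is not shown here): with $y=(x/\pi)^2$ one has $dy=\frac{2x}{\pi^2}\,dx$, so $$\int\csc\left(\pi\sqrt y\right)dy=\frac{2}{\pi^2}\int\frac{x}{\sin x}\,dx=\frac{2}{\pi^2}\,\mathcal I.$$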
Pagerank existence of a graph given a vector
Hint You are really asking this. Given a vector $\vec{x} \in \mathbb{R}^n$ with $x_i \ge 0$ and $\sum x_i = 1$, can we construct a matrix $A \in \mathbb{R}^{n \times n}$ such that $$A \vec{x} = \vec{x} \Leftrightarrow (A-I_n)\vec{x} = \vec{0}$$
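For instance, one concrete candidate (assuming a column-stochastic matrix is what is wanted here) is the rank-one matrix $A=\vec x\,\mathbf 1^{\top}$, whose every column equals $\vec x$: each column is nonnegative and sums to $1$, and $$A\vec x=\vec x\,(\mathbf 1^{\top}\vec x)=\vec x\cdot 1=\vec x.$$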
Numerical analysis, using Taylor's Theorem
By Taylor's theorem with Lagrange remainder, $\cos x = 1-\frac{x^{2}}{2}+\sin\left(\xi(x)\right)\frac{x^{3}}{6}$ for some $\xi(x)$ between $0$ and $x$. Case 1: $x \in [0, \pi]$. Then $\xi(x) \in [0,\pi]$, hence $\sin\left(\xi (x)\right)\frac{{x}^{3}}{6} \ge 0$, and thus $1-\frac{{x}^{2}}{2}\le \cos\left(x\right)$. Case 2: $x \in [-\pi,0]$. Then $-x \in [0,\pi]$, and Case 1 now shows $1-\frac{{x}^{2}}{2}= 1-\frac{{(-x)}^{2}}{2} \le \cos(-x) = \cos(x)$.
A name for layered directed graph as in a fully-connected neural network
No. At least not up to my knowledge. If you only have an input layer and an output layer, the graph $G = (V, E)$ is bipartite. The "bi" means you can split the nodes into two sets with $V_1 \cap V_2 = \emptyset$ and $V_1 \cup V_2 = V$ such that every edge $(v_1, v_2) \in E$ has $v_1 \in V_1$ and $v_2 \in V_2$. The more general term is $k$-partite (see Wikipedia). A neural network with $k$ layers (including input and output) is $k$-partite (one also says it is "multipartite"). edit: I've just realized that this is not the exact class you want. The set of all $k$-partite graphs contains all neural networks with $k$ layers, but not only simple multilayer perceptron architectures; it also contains architectures like DenseNets and more. But I guess "$k$-partite directed graph" is as close as you get without saying "multilayer perceptron graph structure" or something similar. edit: Thanks to Chiel ten Brinke, I've realized that $k=2$ for any layered network (without residual connections): just put all even-numbered layer nodes in one set and all odd-numbered layer nodes in the other.
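Here is a minimal sketch of that last remark (plain Python, with made-up layer names): a fully connected layered digraph together with the even/odd-layer $2$-colouring, checking that every edge runs between the two parts:

```python
# Hypothetical 3-layer fully connected feed-forward graph (sizes 2-3-1).
layers = [["i1", "i2"], ["h1", "h2", "h3"], ["o1"]]
edges = [(u, v) for a, b in zip(layers, layers[1:]) for u in a for v in b]

# Put even-numbered layers in one part and odd-numbered layers in the other.
part = {node: k % 2 for k, layer in enumerate(layers) for node in layer}

# Every edge goes between the two parts, so the layered digraph is bipartite.
assert all(part[u] != part[v] for u, v in edges)
print(len(edges), "edges, all crossing the two parts")
```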
Doubt on Tail events and Kolmogorov Zero-One Law
Credit to Did for this: $$\left\{ \limsup_{n \to \infty}\frac{S_n}{\sqrt{n}} &gt; M \right\}= \left\{ \limsup_{n \to \infty}\frac{S_n-S_m}{\sqrt{n}} &gt; M \right\}$$ for every $m \geq 1$. Now $$\left\{\frac{S_n-S_m}{\sqrt{n}} &gt; M\right\} \in \sigma\left(\bigcup_{k=m}^n \mathcal{F_k}\right)$$ and $$\left\{\limsup_{n \to \infty}\frac{S_n-S_m}{\sqrt{n}} &gt; M\right\} \in \sigma\left(\bigcup_{k=m}^\infty \mathcal{F_k}\right)$$ Since this is true for every $m$, we have $$\left\{ \limsup_{n \to \infty}\frac{S_n}{\sqrt{n}} &gt; M \right\} \in \bigcap_{m=1}^\infty\sigma\left(\bigcup_{k=m}^\infty \mathcal{F_k}\right)$$ Thus it is a tail event. Update: I'd like to point out that the source of my confusion was also from the Hewitt Savage Zero One Law article on Wikipedia which said that one could not apply Kolmogorov Zero One Law to $S_n$ as the events were not independent. However they were talking about a specific subsequence and not the $\limsup$ subsequence.
Solve the functional equation (medium-hard)
When $y=1$ you get $x^2f(f(x))=f(x)f(f(x))$. So $x^2=f(x)$ or $f(f(x))=0$. Show that if $f(f(x))=0$ for some non-zero $x$ then $f(x)=0$ for all $x$. This gives two functions, $f(x)=0$ and $f(x)=x^2$. So, if $f(f(x_0))=0$ for some $x_0\neq0$, we first show that $f(x_0)=0$. This is because if $f(x_0)\neq 0$ then we can pick $y=x_0/f(x_0),x=x_0$ and get: $$x_0^2f(x_0)=y^2f(x_0)f(f(x_0))=0$$ So $f(x_0)=0$. Now, if $f(x_1)\neq 0$ for any $x_1$, we let $y=x_0/f(x_1)$ and $x=x_1$ and we get: $$0=x_1^2 f(x_0)=y^2f(x_1)f(f(x_1))$$ So either $f(x_1)=0$ or $f(f(x_1))=0$. But we've already shown that the latter implies the former. So we've shown that the function $f$ must be one of $x^2$ or $0$.
Proving $p\implies q$
You've misunderstood your teacher. One way to prove $p\implies q$ is to assume $\lnot q$ and from that assumption prove $\lnot p$. This is called proof by contraposition. (The contrapositive of $p\implies q$ is $\lnot q\implies\lnot p$.) Of course, that is not the only way to prove $p\implies q$. You could also give a direct proof: assume $p$ and from it deduce $q$.
Find Location of a Charged Particle
This problem has nothing to do with divergence. Just solve it as a bunch of equations: $$\frac{dx}{dt}=0~~~~\rightarrow ~~~~x=0$$ $$\frac{dy}{dt}=y^2~~~~\rightarrow ~~~~\int\frac{1}{y^2}dy=\int dt ~~~~\rightarrow~~~~ y=\frac{-1}{t+c_1}$$ $$\frac{dz}{dt}=z~~~~\rightarrow ~~~~\int\frac{1}{z}dz=\int dt ~~~~\rightarrow ~~~~z=e^{(t+c_2)}$$ Then find the constants $c_1, c_2$ from the initial conditions: $$y(t=0)=0.25=\frac{-1}{c_1} ~~~~\rightarrow ~~~~c_1=-4 ~~~~\rightarrow ~~~~y(t)=\frac{1}{4-t}$$ $$z(t=0)=e^{c_2}=1 ~~~~\rightarrow ~~~~c_2=0~~~~\rightarrow ~~~~z(t)=e^t$$ That gets you the position equation: $$r(t)=\frac{1}{4-t}\hat{y} + e^t \hat{z}$$ Then you can plug your times into this equation.
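If you want to double-check that closed form numerically, here is a minimal sketch using scipy's `solve_ivp` (the end time $t=3$ is an arbitrary choice, safely before the blow-up of $y$ at $t=4$):

```python
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, r):
    # dx/dt = 0, dy/dt = y^2, dz/dt = z
    x, y, z = r
    return [0.0, y**2, z]

t_end = 3.0
sol = solve_ivp(velocity, (0.0, t_end), [0.0, 0.25, 1.0],
                rtol=1e-9, atol=1e-12, dense_output=True)

numeric = sol.sol(t_end)
closed_form = np.array([0.0, 1.0 / (4.0 - t_end), np.exp(t_end)])
print(numeric)       # should agree with the closed form below
print(closed_form)   # [0, 1, e^3] at t = 3
```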
testing the convergence of complex series and my argument.
The argument as stated is wrong because $a_{n}$ and $b_{n}$, the real and imaginary parts of the terms of the series, are not $\frac{\log{n}}{n}$. In fact, consider the 2nd term: $c_{2} = \frac{\log{2}}{2} + i^{2}\frac{\log{2}}{2} = 0$. Depending on the value of $n\bmod{4}$, $c_{n}$ equals $\frac{\log{n}}{n} + i\frac{\log{n}}{n}$, $0$, $\frac{\log{n}}{n} - i\frac{\log{n}}{n}$, or $\frac{2\log{n}}{n}$ (for $n\equiv1,2,3,0 \pmod 4$ respectively). Here is another idea: it is sufficient to show that the series of real parts does not converge in order to show that the entire series does not. From the above, I suspect that you can show that $a_{n} \geq \frac{\log{n}}{n}$ for enough $n$, meaning that $\sum a_{n}$ does not converge and so $\sum c_{n}$ does not. This detail is left to you (though it could be false, I'm just eyeballing).
Show that geodesic equation is given by $\ddot x^k +\Gamma_{ij}^k \dot x^i\dot x^j=0$
First of all, remember that the arguments of $\nabla$ must be tangent fields on the manifold, which $\dot \gamma$ is not, therefore one has to first give a rigorous meaning to the notation $\nabla _{\dot \gamma} \dot \gamma$. Once we've clarified this, the rest will come naturally, without any effort. Let $\gamma : [a,b] \to M$ and let $t_0 \in [a,b]$. Let $p = \gamma (t_0)$. Let $U$ be some small neighbourhood of $p$ such that the portion of $\gamma$ that stays inside $U$ has no self-intersection, and let $X \in \mathcal X (U)$ be a local field that extends $\dot \gamma$, i.e. $X _{\gamma (t)} = \dot \gamma (t)$. Then $\left( \nabla _{\dot \gamma} \dot \gamma \right) (t_0)$ means $(\nabla _X X) _p$. With this, things become easy because the condition $\nabla _{\dot \gamma} \dot \gamma = 0$ expands as: $$0 = \nabla _X X = \nabla _{X^i \partial _i} (X^j \partial _j) = X^i \nabla _{\partial _i} (X^j \partial _j) = X^i \Big( (\nabla _{\partial _i} X^j) \partial _j + X^j (\nabla _{\partial _i} \partial _j) \Big) = \\ X^i \Big( (\partial _i X^j) \partial _j + X^j \Gamma _{ij} ^k \partial _k \Big) = (X^i \partial _i) (X^j) \partial _j + \Gamma _{ij} ^k X^i X^j \partial _k = \Big( X (X^k) + \Gamma _{ij} ^k X^i X^j \Big) \partial _k ,$$ which implies that $$\tag {#} X (X^k) + \Gamma _{ij} ^k X^i X^j = 0 \; \forall k .$$ Notice now that for any smooth $f$, $$X(f) (p) = \textrm d f (X) (p) = \textrm d _p f (X_p) = \textrm d _{\gamma (t_0)} f (\dot \gamma (t_0)) = \frac {\textrm d} {\textrm d t} \Bigg| _{t = t_0} f \circ \gamma ,$$ so $$X(X^k) (p) = \frac {\textrm d} {\textrm d t} \Bigg| _{t = t_0} X^k \circ \gamma = \frac {\textrm d} {\textrm d t} \Bigg| _{t = t_0} \dot \gamma ^k = \ddot \gamma ^k (t_0) ,$$ hence, upon evaluation in $p$, $(\#)$ becomes $$\ddot \gamma ^k (t_0) + \Gamma _{ij} ^k (\gamma (t_0)) \ \dot \gamma ^i (t_0) \dot \gamma ^j (t_0) = 0 ,$$ which upon removal of the arguments becomes the desired $$\ddot \gamma ^k + (\Gamma _{ij} ^k \circ \gamma) \ \dot \gamma ^i \dot \gamma ^j = 0 .$$ All this pain that we have gone through was required by the fact that $\nabla _{\dot \gamma} {\dot \gamma}$ does not obey the usual algebraic manipulation rules as $\nabla _X Y$ does. (For instance, what would you make of $\nabla _{\partial _i} \dot \gamma ^j$, given that $\dfrac {\partial \dot \gamma ^j} {\partial x_i}$ has no meaning - how would you derive a thing that depends on $t$ with respect to the variables $x_1, \dots, x_n$?)
Intriguing Convergent Series
If we define the harmonic numbers $$H_n=\sum_{k=1}^n\frac1k$$ your series can be represented by $$H_{2n}-H_n=\frac1{n+1}+\cdots+\frac1{2n}.$$ And since $\lim (H_n-\log(n))=\gamma$, we know $\lim (H_n-\log(n)-\gamma)=0$, so $$\lim_{n\to\infty} (H_{2n}-H_n)=\lim_{n\to\infty} \Big[\big(H_{2n}-\log(2n)-\gamma\big)-\big(H_n-\log(n)-\gamma\big)+\log(2)\Big]=0-0+\log(2).$$ So your series converges to $\log(2)$.
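A quick numerical sanity check of this limit (minimal Python sketch; the chosen values of $n$ are arbitrary):

```python
from math import log

def H(n):
    # n-th harmonic number
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, H(2 * n) - H(n))
print("log(2) =", log(2))   # the differences approach log(2) ~ 0.6931
```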
prove that $\mathbb{Q}^n$ is a dense subset of $\mathbb{R}^n$
Let $(x_1,\ldots,x_n)\in\mathbb{R}^n$ and let $r>0$. For each $k\in\{1,2,\ldots,n\}$, let $q_k\in\mathbb Q$ be such that$$x_k<q_k<x_k+\frac r{\sqrt n}.$$Then\begin{align}\bigl\|(x_1,\ldots,x_n)-(q_1,\ldots,q_n)\bigr\|&=\sqrt{\sum_{k=1}^n(x_k-q_k)^2}\\&<\sqrt{\sum_{k=1}^n\left(\frac r{\sqrt n}\right)^2}\\&=r.\end{align}
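As a concrete illustration of the argument (a minimal Python sketch; the point and the radius $r$ are arbitrary choices), one can produce such componentwise rational approximations with `fractions.Fraction`:

```python
from fractions import Fraction
from math import sqrt

x = (3.14159, -0.5772, 1.41421)   # an arbitrary point of R^3
r = 1e-3                          # an arbitrary radius

# Rational approximation of each coordinate, well within r / sqrt(n).
q = tuple(Fraction(xi).limit_denominator(10**6) for xi in x)

dist = sqrt(sum((xi - float(qi)) ** 2 for xi, qi in zip(x, q)))
print(q)
print(dist, dist < r)   # the rational point lies within distance r
```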
If $u,v \in V$ with $\|u\|=\|v\|=1$ and $\langle u,v\rangle=1$, prove that $u=\alpha v$ for some $\alpha \in F$
Your insight is good: $$\left\|u-v\right\|^2=\langle u-v,u-v\rangle=\left\|u\right\|^2-\langle u,v\rangle-\langle v,u\rangle+\left\|v\right\|^2=0\implies u-v=0$$ Remember that $\;1=\langle u,v\rangle=\left\|u\right\|\left\|v\right\|\cos\theta=\cos\theta\;$ , with $\;\theta=\;$ angle between $\;u,\,v\;$ , so the above means $\;\theta=0\;$ ( or any other integer multiple of $\;2\pi\;$), so $\;u,\,v\;$ are vectors of the same length and in the same direction $\;\implies\;$ they're equal.
Additive property of Riemann-Stieltjes integral proof (Rudin)
In the case of the $L(P,f_i,\alpha)$ you take the infimum of each $f_i$ independently of the other one, whereas in the other case you do it over the sum. In general, you have $$\inf\limits_{x \in [a,b]} \{f(x) + g(x)\} \geqslant \inf\limits_{x \in [a,b]} f(x) + \inf\limits_{x \in [a,b]}g(x),$$ which exactly represents the argument Rudin uses. You can prove that by assuming $$\inf\limits_{x \in [a,b]} \{f(x) + g(x)\} < \inf\limits_{x \in [a,b]} f(x) + \inf\limits_{x \in [a,b]}g(x),$$ which will result in a contradiction.
Computing the Hausdorff distance between two subsets of $\mathbb{R}$
It remains to show that $d_H (X,Y) \ge 1$. Using the definition you mentioned, to show that this infimum is at least $1$ we have to show that either $X \not\subset Y_\varepsilon$ or $Y \not\subset X_\varepsilon$ for all $0<\varepsilon<1$. For the sets from your question we have explicitly:$$X_\varepsilon = [-\varepsilon,1+\varepsilon].$$We therefore see that for any $0 < \varepsilon <1$ we have $Y \not\subset X_\varepsilon$. Hence $d_H (X,Y) \ge1$.
Erwin Kreyszig's Introductory Functional Analysis With Applications, Section 2.8, Problem 4: How to show boundedness?
$$ \left|\max_{t\in[a,b]}x(t)\right| \leq \max_{t\in[a,b]}\left|x(t)\right| = \|x\|_{C[a,b]}\\ \left|\min_{t\in[a,b]}x(t)\right| = \left|-\max_{t\in[a,b]}\bigl(-x(t)\bigr)\right| \leq \max_{t\in[a,b]}\left|x(t)\right| = \|x\|_{C[a,b]} $$ So $\|f_1\| \leq 1$ and $\|f_2\| \leq 1$. Now take the constant function $x_0 \equiv 1$, so that $\|x_0\| = 1$. Then $f_1(x_0) = 1 = f_2(x_0)$. Hence $\|f_1\| = \|f_2\| = 1$.
Showing that a certain subset is Hausdorff
I would take a more pedestrian approach. Pick $x \in \overline{A}$. Let $V_i$ be a neighbourhood of $[(x,i)]$ in $Y$ (where $[(x,i)]$ denotes the equivalence class of $(x,i)$ modulo $R$). By the continuity of the projection, there are open neighbourhoods $U_i$ of $(x,i)$ in $X\times \{0,1\}$ that are mapped into $V_i$. Since $X\times\{i\}$ is open in $X\times\{0,1\}$, we may assume that $U_i$ is of the form $W_i \times\{i\}$ for an open neighbourhood $W_i$ of $x$. Let $W = W_0 \cap W_1$. Since $x \in \overline{A}$, there is an $a \in A\cap W$. But then $[(a,0)] = [(a,1)] \in \pi(W\times\{0\}) \cap \pi(W\times\{1\}) \subset V_0\cap V_1$, so $[(x,0)]$ and $[(x,1)]$ have no disjoint neighbourhoods; since $Y$ is Hausdorff, that means $[(x,0)] = [(x,1)]$, i.e. $x\in A$, hence $A$ is closed. Now we show that the projection embeds $X\times \{0\}$ into $Y$, which then, since that space is homeomorphic to $X$, means $X$ is Hausdorff. Since the projection is continuous by definition of the quotient topology, and evidently injective on $X\times\{0\}$, it suffices to see that it is closed. Let $C \subset X$ be closed. Then $\pi^{-1}(\pi(C\times\{0\})) = C\times\{0\} \cup (C\cap A)\times\{1\}$. But $C\cap A$ is closed in $X$, so $\pi^{-1}(\pi(C\times\{0\}))$ is closed in $X\times\{0,1\}$, whence $\pi(C\times\{0\})$ is closed, and so $\pi\lvert_{X\times\{0\}}$ is closed, hence an embedding.
Cardinality of sets and union
Yes, assuming the axiom of choice it is true. Without the axiom of choice there can be counterexamples. In particular, if $A$ is an amorphous set, let $A_0=A\times\{0\}$ and $A_1=A\times\{1\}$. Clearly there is a bijection between $A_0$ and $A_1$, but if there were a bijection between $A_0\cup A_1$ and $A$, $A$ would be the disjoint union of two infinite sets and therefore not amorphous.
A good polynomial expansion of Bessel $I$ of $\log(x)$
Probably not an answer. For easier notation, let $x=1+t$. You probably observed that $$I_n(\log (t+1))=\frac{t^n}{(2 n)\text{!!}}\left(1-\frac n 2 t+\sum_{i=2}^\infty a_i^{(n)} t^i \right)$$ I have not been able to identify the patterns for the coefficients $a_i^{(n)}$ which appear in $$P_n=\sum_{i=2}^\infty a_i^{(n)} t^i $$ $$\left( \begin{array}{cc} n & P_n \\ 0 & \frac{t^2}{4}-\frac{t^3}{4}+\cdots \\ 1 & \frac{11 t^2}{24}-\frac{7 t^3}{16}+\cdots \\ 2 & t^2-t^3+\cdots \\ 3 & \frac{29 t^2}{16}-\frac{65 t^3}{32}+\cdots \\ 4 & \frac{173 t^2}{60}-\frac{73 t^3}{20}+\cdots \\ 5 & \frac{101 t^2}{24}-\frac{287 t^3}{48}+\cdots \\ 6 & \frac{81 t^2}{14}-\frac{64 t^3}{7}+\cdots \\ 7 & \frac{731 t^2}{96}-\frac{849 t^3}{64}+\cdots \\ 8 & \frac{349 t^2}{36}-\frac{665 t^3}{36}+\cdots \\ 9 & \frac{481 t^2}{40}-\frac{1991 t^3}{80}+\cdots \\ 10 & \frac{482 t^2}{33}-\frac{359 t^3}{11}+\cdots \end{array} \right)$$ I suppose that some polynomial fit could be interesting to perform since these coefficients vary in a very smooth manner.
radius of convergence $\displaystyle \sum_{n=1}^{\infty}n! x^{n!}$
The radius of convergence is $1$, since the sum converges for $|x|<1$ and diverges for $|x|\ge 1$; sometimes both approaches (ratio and root) will work, so it is fine that there are two methods. Your uncertainty with $\frac{0}{0}$ is okay, because if we had some series $\sum_{n=1}^\infty x_n$ and we knew that certain terms were zero, say $x_n=0$ for $n$ in a set $S\subset\Bbb N$, then we would just rewrite the series as $\sum_{n=1}^\infty x_n=\sum_{n\in \Bbb N\setminus S}x_n$ and assess the remaining terms. Example: $x_n=1/n^2$ for $n$ odd, and $x_n=0$ for $n$ even; then $\sum_{n=1}^\infty x_n=\sum_{n\in \Bbb N\setminus 2\Bbb N}x_n$.
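In case a justification for the convergence claim helps (a quick sketch): for $0\le r<1$ one eventually has $n!\le (1/r)^{n!/2}$ (since $\log n!$ grows far more slowly than $n!$), hence $n!\,r^{n!}\le r^{n!/2}\le \left(r^{1/2}\right)^{n}$ for large $n$, and the series converges by comparison with a geometric series; for $|x|\ge 1$ the terms $n!\,|x|^{n!}\ge n!$ do not tend to $0$, so the series diverges. This pins the radius at exactly $1$.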
Utilitarian introduction to commutative algebra
Commutative algebra with a view towards algebraic geometry by Eisenbud is the best for motivation, but it's also enormous, although you don't have to read the whole thing.
Is my claim true for extrema of the given two-variable function?
Generally, you cannot claim this. First, the solutions of the equations are not necessarily a local maximum and a local minimum of the function; they can be either of those or a saddle point. Second, you should check the boundary of the function's domain: the global maximum and/or global minimum can be attained there.
Find $\lim_{(x,y) \to (\infty,1)} (1 + \frac{1}{x})^{\frac{x^2y}{x+y}}$
It is much easier: $$\lim_{(x,y) \to (\infty , 1)} \frac{xy}{x+y} = \lim_{(x,y) \to (\infty , 1)} \frac{y}{1+y/x} = \frac{1}{1+0} = 1,$$ and therefore $$\left(1 + \frac{1}{x}\right)^{\frac{x^2y}{x+y}} = \left[\left(1 + \frac{1}{x}\right)^{x}\right]^{\frac{xy}{x+y}} \longrightarrow e^{1} = e.$$