Finding nilpotent and diagonalizable operators for a linear transformation $T$
This can be accomplished as an extra step after doing the work for the Jordan Normal Form, while keeping careful account of the change of basis matrix and its inverse. For $R^{-1} A R = J$ we get, $$ \left( \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & -1 \end{array} \right) \left( \begin{array}{rrr} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{array} \right) \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 2 & 2 & -1 \end{array} \right) = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{array} \right) $$ with the important reverse direction $RJR^{-1} = A$ $$ \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 2 & 2 & -1 \end{array} \right) \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{array} \right) \left( \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & -1 \end{array} \right) = \left( \begin{array}{rrr} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{array} \right) $$ Next we take $$ D_0 = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{array} \right) $$ and $$ N_0 = \left( \begin{array}{rrr} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right) $$ so that $D_0 N_0 = N_0 D_0$ and $D_0 + N_0 = J$. Then $$ D = R D_0 R^{-1} = \left( \begin{array}{rrr} 1 & 1 & 0 \\ 0 & 2 & 0 \\ -2 & 2 & 2 \end{array} \right) $$ while $$ N = R N_0 R^{-1} = \left( \begin{array}{rrr} 2 & 0 & -1 \\ 2 & 0 & -1 \\ 4 & 0 & -2 \end{array} \right) $$ You can check easily enough that $D+N = A$, $DN=ND=2N$, and $N^2 = 0 \; .$
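As a sanity check, here is a short NumPy sketch (not part of the original answer) that verifies all of the identities above numerically:

```python
import numpy as np

# Verify R J R^{-1} = A, D + N = A, DN = ND = 2N, and N^2 = 0
# for the matrices given above.
R = np.array([[1, 1, 0], [0, 1, 0], [2, 2, -1]])
Rinv = np.array([[1, -1, 0], [0, 1, 0], [2, 0, -1]])
A = np.array([[3, 1, -1], [2, 2, -1], [2, 2, 0]])
J = np.array([[1, 0, 0], [0, 2, 1], [0, 0, 2]])
D0 = np.diag([1, 2, 2])
N0 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])

assert np.array_equal(R @ Rinv, np.eye(3, dtype=int))      # R^{-1} is the inverse
assert np.array_equal(R @ J @ Rinv, A)                     # R J R^{-1} = A
D, N = R @ D0 @ Rinv, R @ N0 @ Rinv
assert np.array_equal(D + N, A)                            # D + N = A
assert np.array_equal(D @ N, N @ D)                        # D and N commute
assert np.array_equal(D @ N, 2 * N)                        # DN = 2N
assert np.array_equal(N @ N, np.zeros((3, 3), dtype=int))  # N is nilpotent
```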
How do I show $xy\leq \frac{1}{p}x^p+\frac{1}{q}y^q$ without using concavity of the log function?
This proof is from Steele's Cauchy-Schwarz Master Class. The weighted AM-GM inequality gives us \begin{align*} u^\alpha v^\beta \le \frac{\alpha}{\alpha + \beta}u^{\alpha + \beta} + \frac{\beta}{\alpha + \beta}v^{\alpha + \beta} \end{align*} for $u, v \ge 0$ and $\alpha, \beta > 0$. Substituting $x = u^\alpha$, $y= v^\beta$, $p = (\alpha+\beta)/\alpha$, and $q = (\alpha + \beta)/\beta$ gives us the desired inequality.
Is there a formal analogy between the binomial theorem and logical implication?
Let $A$, $B$, $C$ be finite sets. Let $A^B$ be the set of all functions $f:B\to A$, and let $\#(A)$ be the cardinality of $A$. Let $$S_i=\left\{f\in (A\cup B)^C \mid \#\{c\in C: f(c)\in B\}=i\right\}.$$ Then since the number of $f(c)$ that land in $B$ has to be something, $$(A\cup B)^C=\bigcup_{i=0}^{|C|}S_i = S_0\cup S_1\cup \cdots \cup S_{|C|}$$ where $S_0=A^C$ and $S_{|C|}=B^C$.
Canonical transformation $T_pV \cong V$?
The inverse isomorphism is given by $$X \mapsto \exp_p(X) - p$$
What is the largest complete subspace of $(\mathbb{Q}, |\cdot|)$
You cannot apply Zorn's lemma here, because Zorn's lemma says something about chains, and a chain of complete subspaces need not have a complete union. For example, list $\Bbb Q=\{q_n:n\in\Bbb N\}$, and let $F_n=\{q_k:k\le n\}$. Then each $F_n$ is complete (being finite) and these $F_n$ form an increasing chain, yet the union of this chain is $\Bbb Q$, which is not complete.
Estimating the equation of a line from its rounded values
Given pairs $(x_i,y_i)$, you can try to find $a,b$ that minimize the $L^\infty$ distance from $y_i$ to $ax_i+b$, namely $\max_i |y_i-ax_i-b|$, using linear programming; if $y_i$ indeed depends linearly on $x_i$ up to rounding, then there is a choice of $a,b$ with $L^\infty$ distance less than $1/2$. There may be a simpler way to solve this particular optimization problem, but an LP would work, especially with your small dataset. But if your model is wrong, you might get garbage because of the unforgiving $L^\infty$ metric. Minimizing the $L^1$ metric can also be formulated as an LP, and minimizing the $L^2$ metric is the well-known least-squares problem, for which there is an explicit solution.
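For concreteness, here is a minimal sketch of that $L^\infty$ LP using SciPy; the data is made up for illustration, not taken from the question:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize t subject to |y_i - a*x_i - b| <= t, with variables (a, b, t).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.round(0.7 * x + 1.3)          # hypothetical rounded linear data

c = [0.0, 0.0, 1.0]                  # objective: minimize t
#  a*x_i + b - t <= y_i   and   -a*x_i - b - t <= -y_i
A_ub = np.vstack([np.column_stack([x, np.ones_like(x), -np.ones_like(x)]),
                  np.column_stack([-x, -np.ones_like(x), -np.ones_like(x)])])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a, b, t = res.x
print(f"a = {a:.3f}, b = {b:.3f}, max residual = {t:.3f}")  # expect t < 1/2
```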
properties of outer measure
Use this sequence: $A_1 = A$, $A_2 = B$, and $A_n = \varnothing$ for all other $n$.
Confusion over semantics behind ∃-elimination
The term $u$ is arbitrary, yes, but what isn't arbitrary is the fact that we have a derivation $\Sigma, A(u) \vdash B$. What does it mean to have $\Sigma, A(u) \vdash B$? It means that, assuming $\Sigma$ and $A(u)$ (for some particular $u$), we can prove $B$. Moreover, since $u$ doesn't appear anywhere else, it can't be a "proper" part of the proof: all that is necessary to prove $B$ is that $A(x)$ is true for some $x$. The $\exists-$ rule says precisely this: that we don't care about the specific $u$ that satisfies $A(x)$, only that there is something which satisfies it. On the other hand, suppose we can prove $A(u)$ from $\Sigma$ without assuming anything about $u$. This means that the proof of $A(u)$ can't really "depend" on $u$, so we should be able to prove $A(x)$ for any $x$. And this is what the $\forall +$ rule says.
Can One Integrate $\frac{1}{i} \int \frac{e^{ix}-e^{-ix}}{e^{ax}+e^{-ax}+e^{ix}+e^{-ix}}dx$
Consider the integral \begin{align} I(a) = \int \frac{\sin(x) }{ \cos(x) + \cos(ax) } \ dx. \end{align} This integral can be readily evaluated and is given by \begin{align} I(a) = \frac{1}{a^{2}-1} \ \left[ (a+1) \ln\left( \cos\left(\frac{(1-a)x}{2}\right)\right) - (a-1) \ln\left( \cos\left(\frac{(1+a)x}{2}\right)\right) \right]. \end{align} Now letting $a \rightarrow i a$ leads to \begin{align} I(ia) &= \frac{-1}{a^{2}+1} \ \left[ (1+ia) \ln\left( \cos\left(\frac{(1-ia)x}{2}\right)\right) + (1-ia) \ln\left( \cos\left(\frac{(1+ia)x}{2}\right)\right) \right] \\ &= \frac{-1}{a^{2}+1} \ \left[ \ln\left(\cos\left(\frac{(1-ia)x}{2}\right) \cos\left(\frac{(1+ia)x}{2}\right) \right) + ia \ \ln\left(\frac{\cos\left(\frac{(1-ia)x}{2}\right)}{\cos\left(\frac{(1+ia)x}{2}\right)}\right) \ \right] \\ I(ia) &= \frac{-1}{a^{2}+1} \ \left[ \ln\left(\frac{\cos(x) + \cosh(ax)}{2}\right) + ia \ \ln\left( \frac{1+i \tan(x/2) \tanh(ax/2)}{1-i \tan(x/2) \tanh(ax/2)}\right) \right]. \end{align} The second term can be reduced as follows. It is known that \begin{align} x+iy = \sqrt{x^{2}+y^{2}} \ e^{i \tan^{-1}(y/x)} \end{align} which helps lead to \begin{align} \ln(x+iy) = \frac{1}{2} \ \ln(x^{2}+y^{2}) + i \tan^{-1}(y/x). \end{align} From this result it is seen that \begin{align} \ln\left( \frac{1+i \tan(x/2) \tanh(ax/2)}{1-i \tan(x/2) \tanh(ax/2)}\right) = 2 i \ \tan^{-1}\left(\tan\left(\frac{x}{2}\right) \ \tanh\left( \frac{ax}{2} \right) \right) \end{align} and now $I(ia)$ becomes \begin{align} I(ia) &= \frac{2a}{a^{2}+1} \ \tan^{-1}\left(\tan\left(\frac{x}{2}\right) \ \tanh\left( \frac{ax}{2} \right) \right) - \frac{1}{a^{2}+1} \ \ln\left(\frac{\cos(x) + \cosh(ax)}{2}\right). \end{align} Hence the desired integral value is \begin{align} \int \frac{\sin(x) }{ \cos(x) + \cosh(ax) } \ dx = \frac{2a}{a^{2}+1} \ \tan^{-1}\left(\tan\left(\frac{x}{2}\right) \ \tanh\left( \frac{ax}{2} \right) \right) - \frac{1}{a^{2}+1} \ \ln\left(\frac{\cos(x) + \cosh(ax)}{2}\right). \end{align}
How does $e^x\cdot e^X$ equal $e^{x+X}$?
Perhaps you're confused about the arrangement. We should get $e^xe^x=e^{2x}$, which can be written $$e^{2x} = 1 +(2x) + \tfrac{(2x)^2}{2!} + \tfrac{(2x)^3}{3!}+\cdots$$ $$=1 + 2x + \tfrac{4x^2}{2!} + \tfrac{8x^3}{3!} + \cdots$$ So let's multiply series: $$e^x\cdot e^x = \left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)$$ $$= 1\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right) + x\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right) + \tfrac{x^2}{2!}{\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)}+\cdots$$ $$= (1) + (1\cdot x + x\cdot 1) + (1\cdot \tfrac{x^2}{2!} + x\cdot x + \tfrac{x^2}{2!}\cdot 1) + \cdots $$ $$= 1 + (2)x + (\tfrac1{2!} + 1 + \tfrac1{2!})x^2 + \cdots$$ $$=1 + 2x + \tfrac4{2!}x^2 + \cdots$$ You collect the products of a fixed degree $n$ as $$1\cdot \tfrac{x^n}{n!} + x\cdot\tfrac{x^{n-1}}{(n-1)!} + \tfrac{x^2}{2!}\cdot\tfrac{x^{n-2}}{(n-2)!} + \cdots + \tfrac{x^{n-2}}{(n-2)!}\cdot \tfrac{x^2}{2!} +\tfrac{x^{n-1}}{(n-1)!}\cdot x + \tfrac{x^{n}}{n!}\cdot 1$$ $$= \left(\tfrac1{n!} + \tfrac{n}{n!} + \tfrac{n(n-1)}{2!\,n!} + \tfrac{n(n-1)(n-2)}{3!\,n!} + \cdots\right)x^n$$ Your job is to show that this is $\tfrac{2^n}{n!}x^n$ (expand $(1+1)^n$ using the binomial formula). Retry: I think you are doing it right, you just didn't collect all terms correctly. You are really just using the distributive law, like $$(a + b + c+\cdots)(\textrm{terms}) = a\cdot(\textrm{terms}) + b\cdot(\textrm{terms}) + c\cdot(\textrm{terms})+\cdots$$ For each of the products on the RHS, you need to look for results of the same degree (we're really looking at the exponential series here, of course). Constant terms only occur as $1\cdot 1$, in the first product on the RHS. Degree 1 terms occur as $1\cdot x$ or $x\cdot 1$ (in the first and second products on the RHS). Degree 2 terms occur as $1\cdot x^2$, $x\cdot x$, or $x^2\cdot 1$ (in the first, second, and third products on the RHS). Degree 3 terms occur as $1\cdot x^3$, $x\cdot x^2$, $x^2\cdot x$, or $x^3\cdot 1$ (in the 1st, 2nd, 3rd, and 4th products on the RHS). And so on.
Split a rod of unit length into two at a random position to get two pieces length x and y. Then split both pieces into two again at random positions.
The length $x$ of the shorter piece of the original rod is uniformly distributed in $\bigl[0,{1\over2}\bigr]$. The length $u$ of the larger part of this piece is then uniformly distributed in $\bigl[{x\over2},x\bigr]$, and the length $v$ of the smaller part of the larger piece is uniformly distributed in the interval $\bigl[0,{1-x\over2}\bigr]$. It follows that the point $(u,v)$ is uniformly distributed in the rectangle $R_x:=\bigl[{x\over2},x\bigr]\times\bigl[0,{1-x\over2}\bigr]$ of area ${x(1-x)\over4}$. We now have to consider the part $T_x\subset R_x$ where $v\geq u$. If $0\leq x\leq{1\over3}$ the set $T_x$ is a trapezoid of area $${\rm area}(T_x)={x\over2}\left({1-x\over2}-{3x\over4}\right)={x(2-5x)\over8}\qquad\bigl(0\leq x\leq{1\over3}\bigr)\ .$$ If ${1\over3}\leq x\leq{1\over2}$ then $T_x$ is a triangle of area $${\rm area}(T_x)={1\over2}\left({1-x\over2}-{x\over2}\right)^2={(1-2x)^2\over8}\qquad\bigl({1\over3}\leq x\leq{1\over2}\bigr)\ .$$ The probability we are after is then given by $$p=\int_0^{1/2}{{\rm area}(T_x)\over{\rm area}(R_x)}\>2dx=\int_0^{1/3}{2-5x\over1-x}\>dx+\int_{1/3}^{1/2}{(1-2x)^2\over x(1-x)}\>dx=1-\log{27\over16}\ .$$
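A quick Monte Carlo sketch (not part of the original answer) that checks the final value numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
s = rng.uniform(size=n)                      # first split of the unit rod
x = np.minimum(s, 1 - s)                     # shorter piece
u = x * rng.uniform(0.5, 1.0, size=n)        # larger part of the shorter piece
v = (1 - x) * rng.uniform(0.0, 0.5, size=n)  # smaller part of the larger piece
print((v >= u).mean(), 1 - np.log(27 / 16))  # both come out near 0.4768
```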
Proving inequality (induction?)
Applying the AM-GM inequality $$ (x_1\cdot x_2\cdots x_n)^{\frac{1}{n}}\leq \frac{x_1+x_2+\ldots+x_n}{n} $$ to $\Big(\underbrace{a\cdots a}_{\text{$p$ times}} \cdot \underbrace{1\cdots 1}_{\text{$(n-p)$ times}} \Big)$ gives $$ a^{\frac{p}{n}}\leq \frac{pa+n-p}{n}=\frac{p}{n}a+\frac{n-p}{n}=1+\frac{p}{n}(a-1). $$
How to find the sum of a geometric sequence with an upper bound of n
Note that $$\sum_{i=0}^{n-1}ar^i = a\frac{1-r^n}{1-r}$$ You can read more about geometric series on the Wikipedia page. While the formula I provided ends with $n-1$, it shouldn't be too difficult to modify it to a formula that ends with $n$. Remark: $\sum_{i=0}^n a_i$ means $a_0+ a_1 + \ldots +a_n$. It doesn't mean $a_0+a_1 + \ldots + n$.
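If you want to check the modified formula numerically, here is a tiny Python sketch; the shift is just $n \to n+1$ in the closed form:

```python
def geom_sum(a, r, n):
    """Sum of a*r^i for i = 0..n inclusive, assuming r != 1."""
    return a * (1 - r ** (n + 1)) / (1 - r)

a, r, n = 3.0, 0.5, 10
assert abs(geom_sum(a, r, n) - sum(a * r**i for i in range(n + 1))) < 1e-12
```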
Must a closed path pass through an open subset disjoint from its base point only finitely many times?
No. $\alpha$ can go infinitely many times through the intersection $B_2 \cap B_3$. Take a path from $x_0$ to a boundary point $x_1 \in \text{bd} B_3 \cap B_2$. Then you can run, in finite time, through infinitely many loops $l_n$ in $\overline B_3 \cap B_2$ of length $2^{-n}$ such that $l_n \cap \text{bd} B_3 = \{x_1\}$. Finally take any path from $x_1$ to $x_0$. Drawing a picture is helpful.
Collatz Conjecture: Properties of odd integers that make up a cycle
You are making an induction on the length of a cycle, and then in the induction step you even assume that the same numbers $x_i$ of a $k$-cycle are also in a $(k+1)$-cycle. Thus you essentially assume that the cycle has length $1$. Edit after revision of the question: The claim $$ 2^{L_k}x_{i+k} = 3^kx_i + 3^{k-1} + \sum_{s=1}^{k-1}3^{k-1-s}2^{\sum_{t=0}^s\nu_2(3x_{i+t}+1)}$$ is verifiably wrong: e.g., for $k=2$, $x_1=27$, $x_2=\frac{3\cdot 27+1}2=41$, the right hand side evaluates to $$9\cdot 27+3+2^{\nu_2(3\cdot 27+1)+\nu_2(3\cdot 41+1)} =254,$$ which is not even a multiple of $x_3=\frac{3\cdot 41+1}{4}=31$.
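For anyone who wants to reproduce the check, here is a small Python sketch of the computation above:

```python
def nu2(m):
    """2-adic valuation of m."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

x1, x2 = 27, 41
rhs = 3**2 * x1 + 3 + 2 ** (nu2(3 * x1 + 1) + nu2(3 * x2 + 1))
x3 = (3 * x2 + 1) // 4
print(rhs, x3, rhs % x3)  # 254 31 6, so the RHS is not a multiple of x3
```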
Can the boundary of image be equal to the image of the boundary for an open bounded set?
Hint: Let $B$ be the upper half of the open unit disc. Define $f(z) = z^3.$ What are $f(\partial B), \partial (f(B))?$
Basis from kernel and image of homomorphism
There are two ways I see to attack the problem. First, there's the naive method, involving directly applying the definitions of kernel and image. Second, there's the matrix method, involving turning the linear operator into a matrix, then analysing the matrix using row reduction. Both methods are viable. You are using the naive method in the first part, and referring to the matrix method in the second part, but seem unable to figure out how to do it. To continue the naive method for part 1, you have a system of two linear equations \begin{align*} -x_{11} - x_{21} + x_{12}+x_{22} &= 0 \\ x_{11} + x_{21} - x_{12} - x_{22} &= 0. \end{align*} Note that both equations are just negatives of each other. We can therefore let one of the variables depend on the other three, which can be free. So, in particular, if we let $t = x_{21}$, $s = x_{12}$, and $r = x_{22}$, then $$x_{11} = s + r - t,$$ and hence \begin{align*} \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix} &= \begin{pmatrix} s + r - t & s \\ t & r \end{pmatrix} \\ &= s\begin{pmatrix} 1 & 1 \\ 0 & 0\end{pmatrix} + t\begin{pmatrix} -1 & 0 \\ 1 & 0\end{pmatrix} + r\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}. \end{align*} This gives you three vectors whose span contains the kernel of $\varphi$. If you verify that all three vectors are also in the kernel of $\varphi$, and that they are linearly independent, then they form a basis for the kernel. The naive method can also be used for the second part. By your own calculation, an arbitrary transformed matrix takes the form \begin{align*} \varphi(X) &= \begin{pmatrix} 0 & -x_{11} - x_{21} + x_{12}+x_{22} \\ x_{11} + x_{21} - x_{12} - x_{22} & 0 \end{pmatrix} \\ &= x_{11}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} + x_{21}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} + x_{12} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} + x_{22}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \end{align*} We now have a list of $4$ vectors that span the image of $\varphi$, but you might notice that they're very dependent. Reduce it down to a linearly independent list, and you'll have a basis. If you want to do the matrix route properly, you'll need to find the matrix for $\varphi$ with respect to a basis (or two, but why complicate things?). It doesn't matter which basis you choose, so we'll pick an easy one: $$\mathcal{B} = \left(\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right).$$ We now compute the matrix of $\varphi$ with respect to this basis. Namely, we must transform each basis vector individually in turn, and write the resulting matrices as coordinate column vectors with respect to the given basis. These column vectors will form the columns of our matrix for $\varphi$. Let $(e_{11}, e_{21}, e_{12}, e_{22}) = \mathcal{B}$. Then \begin{align*} \varphi(e_{11}) &= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = 0e_{11} + (-1)e_{21} + 1e_{12} + 0e_{22} \\ \varphi(e_{21}) &= \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = 0e_{11} + (-1)e_{21} + 1e_{12} + 0e_{22} \\ \varphi(e_{12}) &= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = 0e_{11} + 1e_{21} + (-1)e_{12} + 0e_{22} \\ \varphi(e_{22}) &= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = 0e_{11} + 1e_{21} + (-1)e_{12} + 0e_{22}. \end{align*} We take the coordinates and write them as column vectors, and use them to form our matrix.
Our matrix for $\varphi$ becomes $$A = \begin{pmatrix} 0 & 0 & 0 & 0 \\ -1 & -1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$ The nullspace of this matrix will be the set of coordinate column vectors, with respect to $\mathcal{B}$, corresponding to vectors in the kernel of $\varphi$. For example, the coordinate column vector $$\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}$$ belongs to the nullspace of $A$, because the matrix $$0e_{11} + 1e_{21} + 1e_{12} + 0e_{22} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$ belongs to the kernel of $\varphi$. The columnspace of $A$ corresponds to the image of $\varphi$ in much the same way; it consists of coordinate column vectors with respect to $\mathcal{B}$, corresponding to vectors in the image of $\varphi$. If you reduce the columns down to a basis (there's a method involving row reduction), then they will correspond to a basis for the image of $\varphi$.
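If you want to check the matrix computations, here is a short sympy sketch (not part of the original answer):

```python
from sympy import Matrix

A = Matrix([[0, 0, 0, 0],
            [-1, -1, 1, 1],
            [1, 1, -1, -1],
            [0, 0, 0, 0]])

print(len(A.nullspace()))    # 3 -> the kernel of phi has dimension 3
print(len(A.columnspace()))  # 1 -> the image of phi has dimension 1
# The coordinate vector (0, 1, 1, 0) from the example is in the nullspace:
assert A * Matrix([0, 1, 1, 0]) == Matrix([0, 0, 0, 0])
```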
A little detail in a question about finding all zeros of $2z^5 +8z -1$ in the annulus $\{ 1 < |z| < 2 \}$
Note that when $|z|=1$, we have $|f(z)| = |2z^5+8z-1| = |8z - (1-2z^5)| \ge |8z| - |1-2z^5| \ge 8 - 3 = 5$.
Suppose the joint density is given by $f_{X,Y}(x,y)=Ce^{-(x+y)}$ for $0<x<y<\infty$ and 0 otherwise.
Your computation of $c$ is correct. But $f_X$ cannot depend on $y$, so your computation of $f_X$ is not correct. $f_X(x)=2\int_x^{\infty} e^{-x-y} dy=2e^{-x} \int_x^{\infty} e^{-y} dy=2e^{-2x}$ for $0<x<\infty$.
Probability that sum of independent uniform variables is less than 1
Prove by induction the more general result: If $0\le t\le 1$, then $$ P(S_n\le t)=\frac{t^n}{n!}, $$ where $S_n$ denotes the sum $X_1+\cdots+X_n$. The base case $n=1$ is clear. If it holds for $n$, then calculate for $0\le t\le 1$: $$ P(S_{n+1}\le t)=\int_0^1P(S_n+x\le t)f(x)dx\stackrel{(1)}=\int_0^t\frac{(t-x)^n}{n!}\,dx=\frac{t^{n+1}}{(n+1)!} $$ Note that in (1) the quantity $P(S_n\le t-x)$ is zero when $x>t$.
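A quick simulation (illustrative only) that matches the formula for small $n$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
for n in range(1, 6):
    s = rng.uniform(size=(10**6, n)).sum(axis=1)
    print(n, (s <= 1).mean(), 1 / factorial(n))  # empirical vs exact 1/n!
```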
Prove that the dual basis is linearly independent
The notation is somewhat unusual, but the $e^i$ are the coordinate functions for the basis $e_1,\ldots,e_n$ of $V$. That is, from any linear combination $x_1e_1+\cdots+x_ne_n$, where the $x_1,\ldots,x_n$ are arbitrary scalars, the function $e^j$ extracts the coordinate $x_j$: $$ e^j(x_1e_1+\cdots+x_ne_n)=x_1e^j(e_1)+\cdots+x_ne^j(e_n)\\=x_1\cdot0+x_2\cdot0+\cdots+x_j\cdot1+\cdots+x_n\cdot0=x_j. $$ Now to the question itself: here the linear combination is over the coordinate functions, so the result is an arbitrary linear function on $V$ with scalar values. To show that this is the null function only if all the coefficients are zero, you need to extract an individual coefficient $a_i$. The trick is quite similar to the above, only now it is not applying $e^j$, but applying to $e_i$ what is needed: $$ (a_1e^1+\cdots+a_ne^n)(e_i) =a_1e^1(e_i)+\cdots+a_ne^n(e_i)\\ =a_1\cdot0+\cdots+a_i\cdot1+\cdots+a_n\cdot0 =a_i. $$ So if the linear form in the leftmost parentheses is zero, then certainly $a_i=0$, and this holds for every $i$.
Integral $\int\sqrt{1-\cos2x}~dx=$?
Yes, you can simplify as you did at the end, but no need for integration by parts! Recall, $\sqrt 2$ is merely a constant! $$\int \sqrt 2 \sin x \,dx = \sqrt 2 \int \sin x \,dx = -\sqrt 2 \cos x + C$$ You did the "hardest part" by recognizing the trigonometric identity here. The rest is simply knowing that $\int \sin x \,dx = -\cos x + C$.
Describing $\frac{\partial}{\partial x} \oint_{\partial \Omega(x)} f(x, n) \; \mathrm{d}n$ as a contour integral.
Your computation is correct (although at the very beginning I would write $d/dx$, since your contour integral is a function of $x$ only). You need to think of $\gamma_x$ as a variational vector field along the curve $\Gamma_x = \partial\Omega(x)$ and then the second integral is a contour integral over $\Gamma_x$ as well. EDIT: In particular, we have the contour integral of the function $(f_n\gamma_x)(n,x)$ along the curve. As I suggested, this appears to depend on the parametrization of $\Gamma_x$, but you can think of watching a point on the curve move as a function of $x$ and take the velocity vector of this trajectory (thinking of $x$ as time). This is in fact not independent of the parametrization because you need to watch the point $\gamma(\theta,x)$ move to nearby points with the same $\theta$ value. The third term seems more interesting. You want to think of $\gamma_{\theta x}$ instead as $(\gamma_x)_\theta$, and then integrate by parts. I believe this gives you another copy of the second term. EDIT: Here is a more conceptual (and more sophisticated) approach. We want to integrate the $1$-form $\omega = f(n,x)\,dn$ over a curve $\Gamma$ in $\Bbb C$. Choose a variational vector field $X$ along $\Gamma$ (in the calculus of variations one often chooses it to be normal to the curve, but that isn't necessary). You can think of this vector field as giving $\partial\Gamma/\partial x$. We ask how the integral varies with $x$. Let's reinterpret this by mapping a rectangle $R_\epsilon = [0,2\pi]\times [x,x+\epsilon]$ to $\Bbb C$. This is your map $\gamma$, and for fixed $x$, the image is the curve $\Gamma_x$. My variation vector field is $X=\gamma_x=\dfrac d{d\epsilon}\Big|_{\epsilon=0}\gamma(n,x+\epsilon)$. We are trying to compute $$\dfrac d{d\epsilon}\Big|_{\epsilon=0} \int_{\Gamma_{x+\epsilon}} \omega.$$ Now we recognize this derivative as the integral of $\mathscr L_X\omega$ and apply the famous Cartan formula $$\mathscr L_X\omega = \iota_X(d\omega) + d(\iota_X\omega).$$ Integrating these over $\Gamma_x$ should give you intrinsic formulations of what we were doing. (Without the Cartan formula, you can use Stokes's Theorem to rewrite that integral over $\partial R_\epsilon$ as a double integral and then do the derivative limit with that.)
Absolute convergence implies convergence in complete spaces
This is the proof of the fact that, if $V$ is a complete normed space, then every absolutely convergent series is convergent. You will see that we need the completeness of $V$ in the course of the proof. Let $\sum_{n=1}^\infty \lVert a_n\rVert$ be a convergent series, which means that $\sum_{n=1}^\infty a_n$ is absolutely convergent. We want to prove that this implies that $\sum_{n=1}^\infty a_n$ is a convergent series. But by definition of a convergent series, it will be convergent if and only if the sequence of partial sums $\{S_k\}_{k=1}^\infty=\{\sum_{n=1}^k a_n\}_{k=1}^\infty$ is convergent. Since $V$ is complete, this sequence will be convergent if and only if it is a Cauchy sequence. (What we can always prove is that it is a Cauchy sequence, but if the space $V$ were not complete, we could not conclude from there that it is convergent.) And indeed, it is a Cauchy sequence because, for $k>p$, $$\lVert S_k-S_p\rVert=\biggl\lVert\sum_{n=p+1}^k a_n\biggr\rVert\leq\sum_{n=p+1}^k \lVert a_n\rVert$$ and this expression tends to zero as $k,p\to\infty$, since it is a tail of a convergent series. This way we have proven that the series $\sum_{n=1}^\infty a_n$ is convergent, as we needed to show.
how to differentiate a function with square root
Here's a hint: $$\frac{d}{dx}\sqrt{x} = \frac{d}{dx} x^{\frac{1}{2}}.$$ By power rule, we know that $\frac{d}{dx} x^{\alpha} = \alpha x^{\alpha-1}$. Applying this for the square root, we have $$\frac{d}{dx}\sqrt{x} = \frac{1}{2}x^{\frac{1}{2}-1} = \frac{1}{2}x^{-\frac{1}{2}}.$$ Then apply the chain rule and you will get the answer.
Lipschitz maps on $[0,1]$
No. For $x\in[0,1]$ you get $f(x)=\lvert f(x)-f(0)\rvert\le\lvert x-0\rvert=x$ and $1-f(x)=\lvert f(1)-f(x)\rvert\le\lvert 1-x\rvert=1-x$, so $f(x)\ge x$. Thus $f(x)=x$.
Counting orbits under $\operatorname{Sym}(3)$
It is $x(n)=\lfloor \frac{n^2+4n+1}{12} \rfloor + \lfloor \frac {n+3}{6} \rfloor$. No proof; I just used your data, took two differences, and played.
Is there a name for symmetric matrices with exactly one non-zero eigenvalue which also equals the trace?
They are just called rank one symmetric. There is a nonzero column vector $v$ so that your matrix is $$ v v^T $$ (if the nonzero eigenvalue is positive; otherwise it is $-vv^T$). For any column vector $w$ orthogonal to $v$, we get $v v^T w = v(v^Tw) = 0$, while the nonzero eigenvalue comes from $v v^T v = v |v|^2$; that nonzero eigenvalue is $|v|^2 = v \cdot v$.
Strange equality involving a geometric series and gamma and zeta function
The last statement can be written like this: $$\frac{u^{s-1}}{e^u-1}=\frac{e^{-u}u^{s-1}}{1-e^{-u}}=e^{-u}u^{s-1}\sum_{k=0}^{\infty}e^{-ku}=\sum_{k=1}^{\infty}e^{-ku}u^{s-1}$$
Definition of a relation generated by R - Can someone explain this definition and give a simple example?
It is stated that $R^\prime$ derived from $R$ via the given rules is an equivalence relation. So we can check whether reflexivity, symmetry and transitivity are provided by these rules. Reflexivity: With $(x,y)\in R$ we obtain $(x,x)\in R'$ and $(y,y)\in R'$, since we can apply $x_i=x_{i+1}$ with $i=0$. This way it is assured that all elements of the diagonal are elements of $R'$: $$\{(x,x)|x\in X\}\subseteq R'$$ Symmetry: Whenever $(x,y)\in R'$ we also need $(y,x)\in R'$. This is assured by the rules $(x_i,x_{i+1})\in R$ or $(x_{i+1},x_i)\in R$. Transitivity: This is given by the two rules used to obtain symmetry and reflexivity above, together with the chaining of the elements with index range $0\leq i\leq n-1$.
Trying to get it right regarding "inverse" applied to relations, functions and operations...
True True True True, but... say $f:X \to Y$ is a function. If you want an inverse function $g:Y \to X$, then yes, $f$ must be injective and surjective. However, if $f$ is not surjective (so $f(X)\neq Y$), then you can still define an inverse function $g:f(X) \to X$. For instance, $\arctan(x):\mathbb{R} \to \mathbb{R}$ is injective but not surjective. However, if we consider it as a function $\mathbb{R}\to (-\pi/2,\pi/2)$, then it does have an inverse function, namely $\tan(x):(-\pi/2,\pi/2) \to \mathbb{R}$. We need to restrict the domain of $\tan(x)$ for them to truly be inverse functions though, because for instance $\tan(2\pi)= 0$ while $\arctan(0) \neq 2\pi$. True. If you are considering a function $f:A \times A \to A$ as $\{((a,b),c) \mid f(a,b)=c\} \subseteq (A \times A) \times A$, then the inverse relation will be a subset of $A \times (A \times A)$, so not a function on $A\times A$. True. And sure, a function $A \to A^2$ could be interesting mathematically. A function $\mathbb{N} \to \mathbb{N}^2$ shows that $\mathbb{N}^2$ is countable, for instance. For the rest of these, you need to say explicitly what you mean by "inverse operation". We can't say what it is "informally" or what properties it has if we don't know what it is formally.
Finding generators of an automorphism group?
The automorphism group of $C_{17}$ is isomorphic to $C_{16}$. If you want to embed $C_2\cong\langle y \rangle$ into it, then letting $C_{16}\cong \langle x \rangle$, you can just send $y\mapsto x^8$. Of course, this isn't very satisfying, and you probably want to actually construct the automorphism itself; this requires some explanation. If you've taken number theory, you've seen $(\mathbb{Z}/17\mathbb{Z})^\times$, the group of units modulo $17$. If you've run into finite fields, you know that the additive group of $\mathbb{F}_{17}$ is isomorphic to $\mathbb{Z}/17\mathbb{Z}$ and the multiplicative group of $\mathbb{F}_{17}$ is $(\mathbb{Z}/17\mathbb{Z})^\times$. If you haven't seen either of those yet, it turns out that if you multiply the integers in a cyclic group of prime order rather than adding them, you get a group as long as you remove $0$ (since it has no multiplicative inverse). You can read about this in any elementary text on modular arithmetic (including Wikipedia). What I am getting at is that there is a rather elegant description of the automorphism group of a cyclic group by multiplications of its elements. If we take an $a\not= 0\in \mathbb{Z}/17\mathbb{Z}$, we can define the function $\theta_a:\mathbb{Z}/17\mathbb{Z}\rightarrow \mathbb{Z}/17\mathbb{Z}$ by $\theta_a(x)=ax$. Noting that $$\theta_a(x+y)=a(x+y)=ax+ay=\theta_a(x)+\theta_a(y)$$ and that $\theta_a(x)=ax=0$ implies that $x=0$, we see that this is an automorphism of $\mathbb{Z}/17\mathbb{Z}$. We can show that such automorphisms form a group under composition. $$(\theta_a\theta_b)(x)=\theta_a(\theta_b(x))=a(bx)=(ab)x=(\theta_{ab})(x)$$ so $\theta_a\theta_b=\theta_{ab}$. (Here, you can see the isomorphism to $(\mathbb{Z}/17\mathbb{Z})^\times$ popping up in the subscript: the map $\theta_a\mapsto a$.) The rest of the group axioms are verified easily. It turns out that this group of $\theta_a$'s constitutes the entire automorphism group of $\mathbb{Z}/17\mathbb{Z}$. So, finding a nontrivial embedding of $C_2$ into $\operatorname{Aut}(C_{17})$ reduces to finding an element $a\in (\mathbb{Z}/17\mathbb{Z})^\times$ with order $2$. Noting that $16\equiv-1\mod{17}$, you see that the map is $\phi:C_2\rightarrow \operatorname{Aut}(C_{17})$ given by $\phi:y\mapsto \theta_{16}$. Of course, this method will work to find the automorphism group of $C_p$ for any prime $p$, and the automorphism group of any cyclic group $C_n$ can be described similarly as long as the $a$'s in $\theta_a$ correspond to units modulo $n$. (For a good time, try forming the semidirect product of $C_n$ by $\operatorname{Aut}(C_n)$: you now know how to make a holomorph.)
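As a small numerical check (not part of the original answer), one can verify that $16$ is the unique unit of order $2$ modulo $17$:

```python
def order(a, n):
    """Multiplicative order of a modulo n, assuming gcd(a, n) = 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print([a for a in range(1, 17) if order(a, 17) == 2])  # [16], i.e. -1 mod 17
```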
Well-defined, surjective map between two quotient spaces (Linear Algebra - beginner level)
$\Psi$ is well-defined: Let $[w]_U, [w']_U \in W/U$ such that $[w]_U= [w']_U$. We have $$[w]_U = [w']_U \implies w - w' \in U \subseteq V \implies w- w' \in V \implies [w]_V = [w']_V$$ $\Psi$ is surjective: Let $[w]_V \in W/V$. Then $[w]_U \in W/U$ and we have $$\Psi([w]_U) = [w]_V$$ by definition of $\Psi$.
Fréchet mean between points in $\mathbb{R}^3$
In the case $d=d_E$ you have $f_m=\sum w_i x_i$ (assuming $\sum w_i=1$, which can always be achieved by scaling the weights). Proof: for any $x\in\mathbb R^3$ $$\sum w_i\|x-x_i\|^2 = \sum w_i\|x-f_m+f_m-x_i\|^2 \\= \|x-f_m\|^2 + \sum w_i\|f_m-x_i\|^2 + 2\sum w_i\langle x-f_m, f_m-x_i\rangle$$ In the last term we move the summation into the bracket to get $$\sum w_i\langle x-f_m, f_m-x_i\rangle=\langle x-f_m, f_m-\sum w_ix_i\rangle=0$$ Thus, $$\sum w_i\|x-x_i\|^2 =\|x-f_m\|^2+\sum w_i\|f_m-x_i\|^2\ge \sum w_i\|f_m-x_i\|^2 $$ QED Also, if your metric is of the form $d(x,y)=\|\Phi(x)-\Phi(y)\|$ where $\Phi\colon\mathbb R^3\to\mathbb R^3$ is some invertible map, then $f_m=\Phi^{-1}(\sum w_i \Phi(x_i))$.
Is argmin attained equivalent to local convexity?
Not 100% sure that I understand everything correctly, but let me give you an example that might answer your question. Let us assume for $\theta = (\theta_1,\theta_2)\in [0,1]^2$ $$ g(x, \theta) = | \theta_1 \max\{x - \theta_2,0\}|^2. $$ So $g$ is the loss of a ReLU neural network trying to approximate the zero function. Choosing $(X^m)_{m=1}^M$ to be i.i.d. in $[0,1]$, we have that $\theta^*: = (1, 1)$ is a local minimum of $\hat{f}$ (it is even global) since $\hat{f}(\theta^*)= 0$. This is independent of $M$. In addition, $\theta = (0, 0)$ is a global minimum of $f$ since $f(\theta) = 0$. Since $\theta^*\neq \theta$, this shows that the answer to your question is no in general. The issue that makes this example work is that the maps $f$ and $\hat{f}$ are not necessarily injective and have multiple argminima.
Isomorphism of fundamental groups
Yes, this is true provided that "conjugated" means "conjugate in $GL(2, {\mathbb Z})$". I will leave it to you to construct examples where "conjugate in $SL(2, {\mathbb Z})$" is not enough. Here is a proof, which is a heavy edit of the original argument that was missing several special cases. I will use the notation $K={\mathbb Z}^2$ for the normal subgroup in the semidirect product $G_A={\mathbb Z}^2\rtimes _A {\mathbb Z}$ and the notation $Q$ for the quotient $G_A/K$. Note that $G_A$ is abelian if and only if $A=I$. $G_A$ is nonabelian but contains an abelian subgroup of index $2$ iff $A=-I$. (Recall that $A\in SL(2, {\mathbb Z})$.) Therefore, from now on, I will assume that $A\ne \pm I, B\ne \pm I$. Under the assumption $A\ne \pm I$, the following dichotomy holds: (i) Either $A$ maps some primitive integer vector $v\in {\mathbb Z}^2$ to $\pm v$; equivalently, $G_A$ contains a unique maximal normal infinite cyclic subgroup, generated by $v$, or (ii) $A$ has no eigenvalues $\pm 1$; equivalently, $G_A$ contains a unique maximal normal rank 2 abelian subgroup, namely, the subgroup $K$. Equivalence in (i) is clear; to prove equivalence in (ii), note that any abelian subgroup $H$ strictly containing $K$ will project to an infinite cyclic subgroup of the quotient group $Q:=G_A/{\mathbb Z}^2\cong {\mathbb Z}$, which will imply that $H$ is abelian of rank 3. To show uniqueness, suppose that $H< G_A$ is a normal abelian subgroup of rank 2. Project $H$ to the quotient group $Q$. If the projection is trivial then $H$ is contained in our subgroup $K$. If the projection is nontrivial, then it is infinite cyclic. Since $H$ has rank 2, $H\cap K$ is also infinite cyclic. Since $H$ is normal in $G_A$, $H\cap K$ is generated by a primitive integer vector, an eigenvector of $A$. Consider Case (i). Then either $Av=v$ or $Av=-v$. Which of these two cases occurs depends only on the isomorphism class of the group $G_A$, since $Av=v$ is equivalent to the assumption that $G_A$ has nontrivial center. (ia) If $Av= v$, then $A$ is conjugate in $GL(2, {\mathbb Z})$ to the matrix $$ \left[\begin{array}{cc} 1 & n\\ 0 & 1\end{array}\right] $$ In this case, the center $Z(G_A)$ of $G_A$ is infinite cyclic generated by $v\in {\mathbb Z}^2$ and $G_A/Z(G_A)\cong {\mathbb Z}^2$. For any choice of a basis $\{v, w\}$ of ${\mathbb Z}^2$, a generator $q$ of $Q$ and its lift $u$ to $G_A$, we have $$ [u, w]= v^{\pm n}. $$ Therefore, $|n|$ is uniquely determined by the group $G_A$, and so is the $GL(2, {\mathbb Z})$-conjugacy class of the matrix $A$. (ib) If $Av=-v$ then $A$ is conjugate in $GL(2, {\mathbb Z})$ to the matrix $$ \left[\begin{array}{cc} -1 & n\\ 0 & -1\end{array}\right] $$ Consider then the subgroup $G_{A^2} < G_A$: It is a characteristic subgroup of $G_A$ since it equals the kernel of the action of $G_A$ on $\langle v\rangle$ (given by conjugation). The subgroup $G_{A^2}$ has nontrivial center and we recover $|2n|$ from $G_{A^2}$ as in (ia). Thus, again, the group $G_A$ determines the number $|n|$ and, hence, the conjugacy class of $A$ in $GL(2, {\mathbb Z})$. Case (ii). Thus, $A$ (and $B$) does not have primitive integer eigenvectors in ${\mathbb Z}^2$. Then the subgroup $K$ is the unique maximal normal abelian subgroup of rank $2$ in $G_A$. In order to prove maximality, note that any strictly larger abelian subgroup $H$ will project to an infinite cyclic subgroup of the quotient group $Q:=G_A/{\mathbb Z}^2\cong {\mathbb Z}$, which will imply that $H$ is abelian of rank 3.
To show uniqueness, suppose that $H< G_A$ is a normal abelian subgroup of rank 2. Project $H$ to the quotient group $Q$. If the projection is trivial then $H$ is contained in our subgroup $K$. If the projection is nontrivial, then it is infinite cyclic. Since $H$ has rank 2, $H\cap K$ is also infinite cyclic. Since $H$ is normal in $G_A$, $H\cap K$ is generated by a primitive integer vector, an eigenvector of $A$, a contradiction. Thus, $K$ is a characteristic subgroup of $G_A$. Hence, any isomorphism $\phi: G_A\to G_B$ will preserve the semidirect product decomposition. The isomorphism $\phi$ also projects to an automorphism of the quotient cyclic groups $\bar{\phi}: {\mathbb Z}\to {\mathbb Z}$. There will be two cases to consider: (i) $\overline\phi: 1\mapsto 1$ and (ii) $\overline\phi: 1\mapsto -1$. I will identify matrices $A$ and $B$ in your question with the corresponding automorphisms of $K$. Each isomorphism $\phi: G_A\to G_B$ will restrict to an equivariant (with respect to the action of $Q$ on $K$) isomorphism $\psi: {\mathbb Z}^2\to {\mathbb Z}^2$ such that (due to equivariance) either (i) $\psi \circ A= B\circ \psi$, or (ii) $\psi \circ A= B^{-1} \circ \psi$ (depending on whether we are in Case (i) or Case (ii) above). Now, use the fact that $Aut({\mathbb Z}^2)= GL(2, {\mathbb Z})$ which allows us to identify the automorphism $\psi$ with a matrix $C\in GL(2, {\mathbb Z})$. Then the above equations translate into: (i) $CA C^{-1}= B$ or (ii) $C A C^{-1}= B^{-1}$. qed
Evaluating the double integral in $\mathbb{R}^2$.
Draw a picture. The answer is $\displaystyle \int_1^{2} \int_{1/x} ^{x} \frac {x^{2}} {y^{2}} dydx=\frac 9 4$. [Note that $xy=1$ and $y-x=0$ meet at $(1,1)$].
Expected loss in regards to a question containing a continuous random variable with uniform distribution
Not complicated, but doesn't fit nicely into a Comment:
(a) OK.
(b) $E(L) = \$160 \, [P(\text{Ticket})] + \$0 \, [P(\text{No Ticket})] = \$30$.
(c) Pay for 3 hours $= \$6 < \$30$, so paying is better.
do radial limits imply non-tangential limits
Consider $f_k(z) = \exp(-1/(1-z)^k)$ for positive integers $k$. If $z = 1-r e^{i\theta}$, $f_k(z) = \exp(- e^{-ik\theta}/r^k)$ so $|f_k(z)| = \exp(-\cos(k\theta)/r^k)$. Thus $|f_k(z)| \to 0$ as $r \to 0+$ iff $\cos(k\theta) > 0$, while $|f_k(z)| \to \infty$ if $\cos(k\theta) < 0$. For any given angle $0 < \theta < \pi/2$, we can choose $k$ so $\cos(k\theta) < 0$ and $|f_k(z)| \to \infty$ as you approach $1$ at angle $\theta$.
Number of terms in multivariate polynomial
Yes, this was also answered here: $$\sum_{k=0}^n h_k(1,...,1)=\sum_{k=0}^n\binom{r+k-1}{k}=\binom{r+n}{n}.$$
Real Analysis: Riemann integrals with a piecewise function
Following the hint, let us partition $[0,1]$ into $2^n$ intervals of length $2^{-n}$, by splitting the interval at the midpoint, then continuing in that fashion. So our partition is $P_n=\{[x_{i-1},x_{i}]=[\frac{i-1}{2^{n}},\frac{i}{2^n}] : i=1,\dots,2^n\}$. Now our lower sum is $L(P,f)=\sum_{k=1}^{2^n}\frac{1}{2^n}\inf_{x\in[x_{k-1},x_k]}f(x)=\sum_{k=1}^{2^n}\frac{1}{2^n}=1$, and our upper sum is $U(P,f)=\sum_{k=1}^{2^n}\frac{1}{2^n}\sup_{x\in[x_{k-1},x_k]}f(x)=\sum_{k=1}^{2^n-2}\frac{1}{2^n}+\frac{4}{2^n}=1-\frac{1}{2^{n-1}}+\frac{1}{2^{n-2}}$. Now $U(P,f)-L(P,f)= 1-\frac{1}{2^{n-1}}+\frac{1}{2^{n-2}}-1=\frac {1}{2^{n-2}}-\frac {1}{2^{n-1}}=\frac {1}{2^{n-1}}\le \epsilon$ for $n$ sufficiently large. Hope that helps.
remainder of the division of $7^{1203}$ by $143$
Everything was done correctly. The following is an "improvement" that is not needed in this case. Imagine working separately modulo $11$ and $13$. We have $7^{10}\equiv 1\pmod{11}$ and $7^{12}\equiv 1\pmod{13}$. Note that $60$ is the lcm of $10$ and $12$. It follows that $7^{60}\equiv 1$ modulo $11$ and $13$, and hence modulo $143$. If your exponent had been $1263$ instead of $1203$, this observation would have saved a significant amount of time. Note that the more distinct prime divisors the modulus $n$ has, the more we tend to save by using this type of trick.
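For anyone who wants to double-check the arithmetic (this is not in the original answer), Python's built-in modular exponentiation does it instantly:

```python
print(pow(7, 1203, 143))             # 57
# The CRT view: 1203 = 3 (mod 10) and 1203 = 3 (mod 12), so
print(pow(7, 3, 11), pow(7, 3, 13))  # 2 5, and 57 = 2 (mod 11), 5 (mod 13)
```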
How do I solve the equation $e^{\ln(2x+1)} = 5x$?
Yes, the exponential and the logarithm are inverses: $$e^{\ln(\color{red}{\rm stuff})} = \color{red}{\rm stuff}, \quad \ln(e^{\color{blue}{\rm stuff}}) = \color{blue}{\rm stuff},$$ so that $e^{\ln(2x+1)} = 2x+1$.
Weibel 1.5.8: Well-definedness of long exact sequence containing homology of $f$
The answer to Q1 is yes. I'll expand on an answer to Q2 below. In 1.5.8, Weibel draws a diagram with exact rows in the category of complexes. Let's call this diagram (1). The three rows in (1) have been seen before in 1.5.2 and on p22. The vertical morphisms are $\alpha$, $\beta$ and $\varphi$. The morphism $\varphi$ is new, but $\alpha$ was seen in Lemma 1.5.6 (where, confusingly, the Lemma uses $C$ while (1) uses $B$) and $\beta$ was seen just afterwards in Exercise 1.5.4. We know both $\alpha$ and $\beta$ are quasi-isomorphisms. We can deduce that $\varphi$ is a quasi-isomorphism too, by the argument Weibel gives after (1) on p23. Applying homology to each of the three rows in diagram (1) gives the rows of a second diagram, call it (2), all of which are long exact sequences. The vertical maps in diagram (2) are either identities (which are isomorphisms) or the morphisms $H_n(\alpha),H_n(\beta),H_n(\varphi)$ (which are isomorphisms). All the vertical morphisms in (2) are therefore isomorphisms. This proves that all three long exact sequences are isomorphic.
Dividing one polynomial by another
First step: By which monomial should you multiply $x^2-2x+1$ in order to get some polynomial with the same leading term as $x^3-12x^2-42$? That's easy: $x$. So, now you do$$x^3-12x^2-42-x\times(x^2-2x+1)=-10x^2-x-42.$$Now, the same question: By which monomial should you multiply $x^2-2x+1$ in order to get some polynomial with the same leading term as $-10x^2-x-42$? Now, the answer is $-10$. So, now you do$$-10x^2-x-42-(-10)\times(x^2-2x+1)=-21x-32.$$Now the degree is less than the degree of $x^2-2x+1$ and therefore there's nothing else to do: the quotient is $x-10$ and the remainder is $-21x-32$.
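The same division can be checked with sympy (an illustrative aside, not part of the answer):

```python
from sympy import symbols, div

x = symbols('x')
q, r = div(x**3 - 12*x**2 - 42, x**2 - 2*x + 1, x)
print(q, r)  # x - 10 and -21*x - 32, matching the computation above
```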
Prove the following inequality using the AM-GM Mean
Required to prove: $\displaystyle x_{0}+\frac{1}{x_{0}-x_{1}}+\frac{1}{x_{1}-x_{2}}+\ldots+\frac{1}{x_{n-1}-x_{n}}\geq x_{n}+2n$ i.e., To prove that (TPT) $$\sum a_k + \sum \frac{1}{a_k} \geq 2n$$ where $a_k=x_{k-1}-x_k$ for $k = 1, \ldots, n$ i.e., TPT $$nA + \frac{n}{H} \geq 2n$$ where $A = \frac{\sum a_k}{n}$ is the arithmetic mean and $H = \frac{n}{\sum \frac{1}{a_k}}$ is the harmonic mean i.e., TPT $$A + \frac{1}{H} \geq 2$$ However, $$A + \frac{1}{H} \geq A + \frac{1}{A} \geq 2$$ Hence the proof.
Set of points where multiplication is locally injective
You can't do that. Suppose that $f|_{B((x,y),\varepsilon)}$ is injective. Then $B((x,y),\varepsilon)$ would be homeomorphic to its image. But its image would be an open interval of $\mathbb R$, and therefore they cannot possibly be homeomorphic.
Rewrite a linear operator over $\mathbb{C}$ as $T=a(S_1+S_2)$, where $S_1,S_2$ are isometries, $a\in\mathbb{C}$.
Yes, start with the polar decomposition $T = G P$ where $G$ is unitary and $P = \sqrt{T^*T}$ is positive semidefinite. Let $P = p R$ where $p>0$ is large enough that $I-R^2$ is positive semidefinite. Then $R = (R + i \sqrt{I-R^2})/2 + (R - i \sqrt{I-R^2})/2$, where $R \pm i \sqrt{I-R^2}$ are unitary.
Prove that $(\Bbb{Z}/8)^∗\cong \Bbb{Z}/2×\Bbb{Z}/2$ and $(\Bbb{Z}/9)^∗\cong \Bbb{Z}/6$
Otherwise: $(\mathbf Z/8\mathbf Z)^\times$ has order $4$ and is not cyclic, since all its elements except $1$ have order $2$. $(\mathbf Z/9\mathbf Z)^\times$ has order $6$ and is cyclic: $2$ has order $6$ (and so does $2^5=5$).
How to calculate the transition matrix in Markov sampling (example)?
There are many answers on this site dealing with Markov chain Monte Carlo, and a rigorous introduction can be found in many textbooks, but I guess that you are looking for some background context. There is, in general, considerable choice of transition matrices to generate a desired distribution. So there is no unique prescription, but the matrix $M$ must satisfy some conditions.

1. It must be a stochastic matrix, which means each of its rows must add up to $1$: $\sum_j M_{ij}=1, \; \forall i$. Assuming the initial distribution $P_0\{i\}$ is normalised, $\sum_i P_0\{i\}=1$, this ensures that the normalization is preserved by the transition matrix: $\sum_j P_1\{j\}=1$ where $P_1\{j\}=\sum_i P_0\{i\} M_{ij}$ (and so on, for all subsequent iterations). You can see that your example matrix $M$ satisfies this condition.
2. It must satisfy $$\sum_j P_Z\{j\} M_{ji}=\sum_j P_Z\{i\} M_{ij}=P_Z\{i\}, \; \forall i.$$ This guarantees that $P_Z\{i\}$ is a steady state solution of the Markov chain. Physically, the first equality represents a balance of probability flows into, and out of, each state $i$, once the desired distribution $P_Z$ has been achieved. The second equality follows from condition 1. It establishes $P_Z$ as a left eigenvector of the matrix $M$, with eigenvalue $1$. Again, if you do the calculations, you can see that your example matrix satisfies this condition.
3. It is not necessary, but is sufficient, and is quite frequently applied, that the matrix should satisfy the detailed balance condition \begin{align*} P_Z\{j\} M_{ji}&=P_Z\{i\} M_{ij}, \; \forall i,j. \\ \Rightarrow\quad \frac{M_{ji}}{M_{ij}} &= \frac{P_Z\{i\}}{P_Z\{j\} } , \; \forall i,j. \end{align*} This guarantees that each term in the sum in condition 2 satisfies the equality: at steady state the flow from $i\rightarrow j$ is exactly balanced by the flow from $j\rightarrow i$, for every $i,j$ pair.

Interestingly, your example matrix $M$ does not satisfy the detailed balance condition. Nonetheless, it generates the desired distribution. A way of constructing the matrix to satisfy detailed balance is described in the answer to this question: Designing a Markov chain given its steady state probabilities. If we apply the method to your distribution we get $$ M' = \begin{bmatrix} 0.6 & 0.4 & 0 \\ 0.2 & 0.4 & 0.4 \\ 0 & 0.4 & 0.6 \end{bmatrix} $$ However, this is not the only way of doing it: there is still plenty of flexibility in choosing the elements of $M$ to satisfy detailed balance. All the above relates to the steady state distribution. It is almost always desirable to ensure that an arbitrary initial distribution actually tends towards this steady state, and that imposes further conditions on $M$: it must be irreducible and aperiodic. For an irreducible transition matrix, every state is accessible from every other state, given sufficient iterations of $M$. In other words, it should not be possible to subdivide the states into two or more disconnected sets. The aperiodic condition avoids the possibility that the limiting state is an oscillation between two or more different distributions. There is some discussion of this here: What values makes this Markov chain aperiodic?. A periodic transition matrix will have more than one eigenvalue whose modulus is $1$, representing the non-stationary behaviour: this is to be avoided.
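As an illustrative numerical check (using the $M'$ above; the verification code itself is just one way to do it), all three conditions can be tested directly:

```python
import numpy as np

M = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.4, 0.4],
              [0.0, 0.4, 0.6]])
assert np.allclose(M.sum(axis=1), 1.0)  # condition 1: stochastic rows

# Stationary distribution = left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(M.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
assert np.allclose(pi @ M, pi)          # condition 2: steady state

F = pi[:, None] * M                     # probability flows pi_i * M_ij
assert np.allclose(F, F.T)              # condition 3: detailed balance
print(pi)
```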
I should mention the Metropolis-Hastings algorithm for constructing $M$, which is very useful when the state space is very large, and when the distribution probabilities are only known up to some common normalizing factor (which may be impractical to calculate). This is a common situation in physics simulations. Then the off-diagonal elements $M_{ij}$ are expressed as a product of two terms: a proposal probability $\alpha_{ij}$, and an acceptance probability $A_{ij}$. The elements of $\alpha$ are usually chosen to be nonzero for pairs of states which are "nearby" (in a sense connected with the physics); they are independent of $P_Z$, and typically $\alpha_{ij}=\alpha_{ji}$. The acceptance probability $A_{ij}$ is given by a formula involving the ratio $P_Z\{i\}/P_Z\{j\}$, and so that $A_{ij}/A_{ji}$ satisfies detailed balance. The diagonal elements of $M$ are fixed by condition 1 above. More details are easy to come by, elsewhere. Lastly, there is some scope for discussing whether one transition matrix is "better" than another, for example in the sense of converging more quickly towards the stationary distribution. Again, I don't think it's appropriate here to go into details, but this will be related to the subsidiary eigenvalues of $M$, particularly the second largest (the largest being $1$).
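Here is a minimal Metropolis-style sketch along those lines, assuming a symmetric proposal ($\alpha_{ij}=\alpha_{ji}$) and target weights known only up to normalization; all names and values in it are illustrative:

```python
import numpy as np

weights = np.array([1.0, 2.0, 1.0])   # proportional to the target P_Z
rng = np.random.default_rng(2)

def mh_step(i):
    j = rng.integers(len(weights))    # symmetric uniform proposal
    if rng.uniform() < min(1.0, weights[j] / weights[i]):
        return j                      # accept the move
    return i                          # reject: stay in the current state

state, counts = 0, np.zeros(len(weights))
for _ in range(200_000):
    state = mh_step(state)
    counts[state] += 1
print(counts / counts.sum())          # approaches weights / weights.sum()
```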
Determining the structure of the abelian group, integral matrix
You know that invertible (unimodular) row and column operations do not change the cokernel up to isomorphism, so $$A_R = \mathrm{cok}(R \colon \mathbb Z^4 \to \mathbb Z^4)$$ remains the same. Subtract row one once from rows two and four and twice from row three. You end up with the matrix $$ \begin{pmatrix} 2 & 0 & 0 & 0 \\ 4 & 0 & 0 & 1 \\ 6 & 6 & 0 & 2 \\ 4 & 6 & 0 & 2\end{pmatrix} $$ Now you make the obvious column operations to end up with $$ \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$ The result is clearly $$ A_R = \mathbb Z \oplus \mathbb Z/2 \mathbb Z \oplus \mathbb Z/6\mathbb Z $$
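The invariant factors can be double-checked with sympy's Smith normal form (a sketch, starting from the intermediate matrix displayed above):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 0, 0, 0],
            [4, 0, 0, 1],
            [6, 6, 0, 2],
            [4, 6, 0, 2]])
# Invariant factors 1, 2, 6 plus a zero, giving Z/2 + Z/6 + Z as claimed.
print(smith_normal_form(M, domain=ZZ))
```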
Completeness and the Completeness Axiom
Wikipedia discusses several forms of completeness for the real numbers, ending with: For an ordered field, Cauchy completeness is weaker than the other forms of completeness on this page. But Cauchy completeness and the Archimedean property taken together are equivalent to the others.
How to show that $\vert \vert T \vert \vert = \sqrt{c}$ where $c:=\sum\limits_{j=1}^{\infty}\sum\limits_{k=1}^{\infty}\vert t_{jk}\vert^{2}$
Let $t$ be diagonal: $t_{ii}=1$ for $i=1,\dots,n$ and all other entries equal to $0$. Then $c=n$. But $\|T\|\le 1< \sqrt c$ for $n>1$.
How to convert $x^3+y^3=6xy$ to parametric equations?
Taking $x=r \cos^{\frac{2}{3}} \phi$ and $y = r \sin^{\frac{2}{3}} \phi$ we obtain $r=6 \cos^{\frac{2}{3}} \phi \sin^{\frac{2}{3}} \phi$, and now it can be used for the parametric representation: $$x=6\cos^{\frac{4}{3}} \phi \sin^{\frac{2}{3}} \phi$$ $$y=6\cos^{\frac{2}{3}} \phi \sin^{\frac{4}{3}} \phi$$
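A quick numerical check (illustrative) that this parametrization satisfies $x^3+y^3=6xy$:

```python
import numpy as np

phi = np.linspace(0.1, np.pi / 2 - 0.1, 50)  # stay where cos, sin > 0
x = 6 * np.cos(phi) ** (4 / 3) * np.sin(phi) ** (2 / 3)
y = 6 * np.cos(phi) ** (2 / 3) * np.sin(phi) ** (4 / 3)
assert np.allclose(x**3 + y**3, 6 * x * y)
```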
GCD and the cycle decomposition of a permutation
I have not seen any name for such permutations, but I can give you some results for the case where every cycle order $\vert c_i \vert$ is either $1$ or a prime not equal to any other $\vert c_j \vert$. In this case, a permutation $\sigma = c_1 \cdots c_r$ having $r$ disjoint cycles has order $\prod_{i=1}^{r}\vert c_i \vert$, since the cycle orders are guaranteed to not share common factors. For exponentiation, we have $\sigma^x = c_1^x \cdots c_r^x = c_1^{x \bmod \vert c_1\vert} \cdots c_r^{x \bmod \vert c_r \vert}$. For example, in cryptography, the Pohlig-Hellman algorithm, viewed in terms of permutations, exploits the fact that the exponent $x$ is reduced modulo every cycle order. If we keep extending the definition, we find all the values $x \bmod \vert c_i \vert$, in the worst case after $\mathrm{Deg}(\sigma)$ iterations. The last step of the algorithm recovers $x$ by solving an instance of the CRT (Chinese Remainder Theorem), which applies since $\gcd(\vert c_i \vert , \ldots, \vert c_r \vert)=1$. Some time ago I wrote about these examples (pp. 12-13). Another fact about exponentiation: if we wish to build a permutation $\sigma$ with big order, the only way to get such an order is to have distinct primes in the decomposition of the cycle orders. This way, even with big order, retrieving the exponent $x$ is possible using the exponentiation trick shown above. This comes from finding the best tradeoff between $\mathrm{Deg}(\sigma)=\sum_{i=1}^r \vert c_i \vert$ and $\mathrm{Ord}(\sigma)=\prod_{i=1}^r \vert c_i \vert$: a small degree (number of symbols) but a big order, e.g. $\mathrm{Deg}(\sigma)=2+3+5+\ldots+29=129$ and $\mathrm{Ord}(\sigma)=2\cdot 3 \cdot 5 \cdots 29=6469693230$, and we recover $1 < x < 6469693230$, in the worst case, after $129$ iterations plus the CRT solving. Moreover, the case where every cycle order $\vert c_i \vert$ is a distinct prime (or $1$) satisfies the $\gcd=1$ assumption. Those permutations in $S_n$ can be constructed by selecting an $r$-term sum $\sum_{i=1}^{r} \lambda_i=n$ where every $\lambda_i$ is prime or $\lambda_i=1$. Once you find a valid sequence, you can count all the permutations in the corresponding conjugacy class, recalling that integer partitions of $n$ and the conjugacy classes of $S_n$ are in bijection. To count the permutations, first list the $k$ primes below $n$ together with $1$: this is $S=(1,p_1,\ldots,p_k)$. Then the sum of the cardinalities of the conjugacy classes associated to the distinct sequences over $S$ that sum to $n$ gives you the total number of permutations whose cycle orders are prime or equal to $1$.
The convergence of the $L^2$ norm of the gradient of a mollified sequence of functions
This is not true as stated. For example, in one dimension let $\Omega=(-1,1)$ and $$u_n (x) = \begin{cases} 1/n,\quad &x>0\\ 0,\quad &x\le 0 \end{cases} $$ This converges to $0$ in $L^2$. The convolution $(u_n)_{\epsilon_n}$ goes from $0$ to $1/n$ on the interval $[-\epsilon_n,\epsilon_n]$. Therefore (using the FTC and Cauchy-Schwarz), $$ \int |\nabla (u_n)_{\epsilon_n}|^2 \ge \frac{1}{2n^2 \epsilon_n} $$ and this need not go to zero in general.
How many pillars do we need to surround a triangular area?
Small example first, to see what is going on. If we had to cover a perimeter of $15$ feet, we would need $3+1$ pillars if it were a straight line. Notice $15/5=3$, which corresponds to the number of gaps between the poles. It would look like: $$ \underbrace{\Big| \text{5 feet}\Big| \text{5 feet}\Big|\text{5 feet}\Big|}_{15 \text{ feet }} $$ We see that there is one more pillar than there are gaps. However, we need only $3$ pillars if the endpoints are connected (which is the case for our triangle), since when we get back to the starting point we see that we have already placed a pillar there. Indeed $25+40+55=120$ and $120/5=24$. If we were to place them along a line we would have to place $25$ pillars, but for our triangle the endpoints overlap, so we only need $24$ pillars.
Is it a straight line indeed?
Note that ${1\over \tan x} = \cot x = \tan\left(\frac\pi2 - x\right)$. Therefore $\tan^{-1}\left(\frac{1}{\tan x}\right) = \frac\pi2 - x$. (P.S.: $0 < x < \pi$.)
Diophantine equation of 2nd order
Hint: $$x^2-4y^2=(x-2y)(x+2y)=3$$ Can you continue from here?
A construction with ruler and rusty compass
Apologies, but I didn't like MathManiac's answer. To say the least, it's confusingly written, fails to carry a construction through to the end, and makes assumptions about what is provided. Some vital details seem to be brushed over. I attempted to follow his construction and failed; I don't know why. I couldn't see what direction he was going, and the instructions themselves don't make a whole lot of sense to me when you actually try to carry them out. Perhaps it's my failure, but I thought I'd give a different perspective, filling some gaps. I of course don't mean any disrespect to the community, but I didn't feel as though this 3 year old question was ever truly answered. I only came across this question in an Ask Jeeves search because I particularly love geometry.

Here is my own construction. Hopefully you can follow along with the animated GIF image. This isn't a proof; it's just a method of construction.

We have an arbitrary line m, and an arbitrary line segment AB defining the center, A, and the radius AB of a circle. No circle is drawn there because circle A(B) is of any size and you only have a rusty compass, which is defined for you in the upper-right corner. We wish to find the points of intersection on line m of where circle A(B) intersects the line.

1. Segment AB can be extended to line m, intersecting it at C. Naturally, for this construction to work line AB cannot be parallel with line m. See the notes toward the bottom for dealing with this case.
2. Draw a circle A(r) centered at point A. This circle will intersect the extended line AB at some point D.
3. Draw a completely arbitrary line n, passing through point A but not coincident with AB, intersecting the constant circle A(r) at point E.
4. Points D and E define a new line, DE.
5. Construct two new lines, parallel to DE, passing through B and C, intersecting line n at points F and G, respectively.
6. Points D and F define a new line, DF.
7. Construct a new line parallel to DF, but passing through G. This line intersects the extended line AB at point H.
8. Draw a line parallel to line m but passing through H. This line intersects the constant circle A(r) at two points X1 and X2.
9. Lines AX1 and AX2 intersect the line m at two points I1 and I2. These are your desired intersections.

If X1 and X2 don't exist then neither do the intersections I1 and I2, for obvious reasons.

I have no citation for this construction. It is based on the notion of shrinking down the scale, so that the circle A(B) becomes A(r), and line m becomes a new line that can be directly intersected with A(r) (producing the X intersections). The two I points, the intersections of interest, are projections of the two X points, away from the point A and onto the line m again at appropriate scale. Circle A(r) is to circle A(B) what the horizontal line passing through H is to the line m. The bulk of the construction is scaling line m down in proportion.

Creating a Parallel Line

To draw a parallel line using a straightedge and a single circle, Wikipedia has an article entitled Poncelet-Steiner Theorem which contains an animated GIF depicting the construction. It requires 5 intermediary lines to be drawn for each parallel line to be made, and 1 circle (of any size) for each line from which a parallel is to be made.

The Case of Parallel AB

In the case that line segment AB is parallel to line m, a separate construction could be implemented to rotate AB into a new non-parallel position. You then restart this line-circle intersection construction using a new line segment AB'.
That said, the construction for this case is actually easier than just explained. You don't need to rotate AB at all. I won't show you the modification, but you can swap the roles of lines m and n in the original construction and adapt it appropriately. This can be done fairly readily and with no additional cost. I absolutely love geometric constructions and in particular restricted constructions. Send me more. The aforementioned Poncelet-Steiner theorem, and of course the Mohr-Mascheroni theorem, are fascinating restricted constructions in their own right.
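As a sanity check on the scaling idea (not on the straightedge steps themselves), here is a short numeric sketch; the choices of A, B, the rusty radius r, and the line m in it are illustrative assumptions, not part of the original problem. It shrinks the whole picture about A by the factor s = r/|AB|, intersects the shrunk line with the rusty circle A(r) (the X points), and projects the hits back out by 1/s (the I points), comparing against a direct circle-line intersection:

```python
import numpy as np

A = np.array([0.0, 0.0])          # center of the target circle A(B)
B = np.array([3.0, 1.0])          # point defining the radius |AB|
r = 1.0                           # fixed radius of the rusty compass
P, d = np.array([1.0, -2.0]), np.array([1.0, 0.5])  # line m: P + t*d

def circle_line(center, radius, p, v):
    """Intersect the circle |x - center| = radius with the line p + t*v."""
    q = p - center
    a, b, c = v @ v, 2 * q @ v, q @ q - radius**2
    disc = b * b - 4 * a * c      # assumed non-negative for these choices
    return [p + (-b + s * np.sqrt(disc)) / (2 * a) * v for s in (1, -1)]

# Direct computation: circle A(B) meets line m.
direct = circle_line(A, np.linalg.norm(B - A), P, d)

# Scaled computation: shrink m about A by s, intersect with A(r),
# then scale the hits back out by 1/s (the roles of H, X1/X2, I1/I2).
s = r / np.linalg.norm(B - A)
scaled_hits = circle_line(A, r, A + s * (P - A), d)
via_scaling = [A + (X - A) / s for X in scaled_hits]

print(direct)
print(via_scaling)   # agrees with `direct` up to rounding
```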
System of vector equations (in Minkowski space)
Since these are all scalar products, they're invariant under Lorentz transformations of the $P_\alpha$. You can use this to choose a coordinate system in which $P_\alpha$ only has components up to $\alpha$, e.g. $P_0$ would be $(\sqrt{m_0},0,0,\ldots)$, $P_1$ would be $(m_1/\sqrt{m_0},\sqrt{m_1^2/m_0+2m_1+m_0-p_1^2},0,0,\ldots)$, and so on. Of course whether the radicands are non-negative depends on the parameters; if they aren't, there's no solution (since any solution could be transformed to this form). Since you're looking for $n+1$ vectors, not $n$, there won't be a solution in general, unless the parameters are such that $P_n$ has the right length without requiring an $(n+1)$-th component.
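To make the coordinate choice concrete, here is a hedged numeric sketch (my own illustration, not part of the answer): assuming the scalar products are collected in a symmetric matrix $G$ and the metric is $\operatorname{diag}(1,-1,-1,\ldots)$, each new vector is obtained by a triangular solve against the earlier ones, with its last nonzero component fixed by a radicand that must be non-negative, exactly as noted above.

```python
import numpy as np

def mink(u, v):
    # Minkowski scalar product, assumed signature (+, -, -, ...)
    return u[0] * v[0] - u[1:] @ v[1:]

def build_vectors(G):
    """Try to realize a symmetric matrix G of Minkowski scalar products
    by vectors P[a] whose components vanish above index a; return None
    as soon as a radicand goes negative (then no solution exists)."""
    G = np.asarray(G, dtype=float)
    n = len(G)
    P = np.zeros((n, n))
    if G[0, 0] < 0:
        return None
    P[0, 0] = np.sqrt(G[0, 0])
    for a in range(1, n):
        P[a, 0] = G[a, 0] / P[0, 0]
        for b in range(1, a):
            # G[a,b] = P[a,0]P[b,0] - sum_{i=1}^{b} P[a,i]P[b,i]
            P[a, b] = (P[a, 0] * P[b, 0]
                       - P[a, 1:b] @ P[b, 1:b] - G[a, b]) / P[b, b]
        rad = P[a, 0] ** 2 - P[a, 1:a] @ P[a, 1:a] - G[a, a]
        if rad < 0:
            return None
        P[a, a] = np.sqrt(rad)
    return P

# self-check: a Gram matrix generated from actual such vectors is realized
V = np.tril(np.random.rand(4, 4)) + 5 * np.eye(4)
G = [[mink(u, w) for w in V] for u in V]
P = build_vectors(G)
assert np.allclose([[mink(u, w) for w in P] for u in P], G)
```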
For what Value of K, the following system of equations will have No Solution?
The second equation can be rewritten as $-2Kx-8y=-20$. When its left-hand side equals the left-hand side of the first equation, $2x-8y=3$, the system has no solutions, since the same left-hand side cannot take two different values. So $$-2K=2 \implies K=-1.$$ If you are new to this, you can refer to Consistent and Inconsistent solutions for systems of equations.
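For readers who want to check this mechanically, here is a tiny sketch (mine) with sympy, using the two left-hand sides derived above:

```python
from sympy import symbols, linsolve

x, y = symbols('x y')
K = -1
# 2x - 8y = 3 together with -2Kx - 8y = -20: same left-hand side,
# two different right-hand sides, hence no solution.
print(linsolve([2*x - 8*y - 3, -2*K*x - 8*y + 20], x, y))  # EmptySet
```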
What is this quadratic form as an invariant?
This quadratic form, the "trace form", and the corresponding bilinear form $(x, y) \mapsto \operatorname{Tr}^E_F(xy)$, is an important tool in algebraic number theory (e.g. it's vital in understanding ramification, since it's used in the definition of the different ideal). I'm not sure why you want to find some other invariant that's equivalent to this one: what's wrong with the one that you've got?
Prove or disprove that if one root of a quadratic equation is rational, then the other root must be rational as well.
If you want it simple, my suggestion is to proceed in the reverse direction. That is, start with the two roots r,s of your polynomial and derive the coefficients from there. Alternatively, a formalization of your claim could be: $$\forall a, b, c \in \mathbb{Z} \left ( \exists x \in \mathbb{Q} \ \ ax^2 + bx + c = 0 \Rightarrow \left ( \forall y \in \mathbb{R} \ \ ay^2 + by + c = 0 \Rightarrow y \in \mathbb{Q} \right ) \right )$$ Your starting point seems reasonable, but very incomplete.
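To spell out the suggested reverse direction (a short completion, consistent with the formalization above): by Vieta's formulas the roots $r,s$ of $ax^2+bx+c$ with $a,b,c\in\mathbb{Z}$, $a\neq0$, satisfy $r+s=-b/a$, so $$s=-\frac ba-r;$$ if $r$ is rational, then $s$ is a difference of rationals and hence rational, which proves the claim.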
Using L'Hopital's rule for a $1^\infty$ case
Note that $$ \frac{1}{x}\ln(1-2x)= \frac{\ln(1-2x)}{x},$$ which is the indeterminate $\frac{0}{0}$ in the limit as $x \rightarrow 0.$ This is what you should apply L'Hopital's rule to. Why? Because differentiation will make the expression simpler -- it will get rid of the logarithm. You never want to apply L'Hopital's rule if you have to differentiate $1/\ln(f(x))$, as differentiation will not get rid of the logarithm in such a case -- it won't make the expression simpler. Any time you end up with a more complicated expression after L'Hopital's rule, you've made a mistake.
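To finish the computation (assuming, as the $1^\infty$ in the title suggests, that the original limit is $\lim_{x\to0}(1-2x)^{1/x}$): one application of L'Hopital's rule gives $$\lim_{x\to 0}\frac{\ln(1-2x)}{x}=\lim_{x\to 0}\frac{-2/(1-2x)}{1}=-2,$$ so the original limit is $e^{-2}$.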
Total number of paths between two nodes in a complete graph
I assume $P$ is the number of nodes. The actual number of paths between the two nodes which have $k$ extra vertices is $\frac{(P-2)!}{(P-2-k)!}$, for $0\leq k\leq P-2$. This is because you can choose $k$ other nodes out of the remaining $P-2$ in $\frac{(P-2)!}{(P-2-k)!k!}$ ways, and then you can put those $k$ nodes in any order in the path. So the total number of paths is given by adding together these values for all possible $k$, i.e. $$\sum_{k=0}^{P-2}\frac{(P-2)!}{(P-2-k)!}=(P-2)!\sum_{j=0}^{P-2}\frac{1}{j!},$$ where $j=P-2-k$. Now $\sum_{j=0}^{\infty}\frac{1}{j!}=e$, so $$(P-2)!\sum_{j=0}^{P-2}\frac{1}{j!}\approx(P-2)!e.$$ This will always be an overestimate, but it will be an overestimate by $(P-2)!\sum_{j=P-1}^{\infty}\frac1{j!}\approx\frac1{P-1}$, and in fact the error will always be less than $1$. So just rounding down to the next integer will give the right answer.
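A small numeric check (my own sketch) of the exact sum and of the claim that rounding $(P-2)!\,e$ down gives the right answer:

```python
from math import factorial, e, floor

# Sum over k = 0..P-2 intermediate vertices, compared with floor((P-2)! e).
for P in range(3, 12):
    exact = sum(factorial(P - 2) // factorial(P - 2 - k)
                for k in range(P - 1))
    assert exact == floor(factorial(P - 2) * e)
    print(P, exact)
```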
True? $\sum_{n=1}^\infty {n+m-1 \choose m}^{-1} = 1 + \frac {1}{m-1} $
Using the formula ${1/ {n\choose r}}=(n+1)\int_0^1 u^r(1-u)^{n-r}\,du$ gives: $${1\over {n+m-1 \choose m}}=\int_0^1 (n+m)\,u^m\,(1-u)^{n-1}\,du. $$ Thus, adding over $n$ gives \begin{eqnarray*} \sum_{n=1}^\infty {1\over {n+m-1 \choose m} } &=&\int_0^1 \sum_{n=1}^\infty (n+m)\,u^m\,(1-u)^{n-1}\,du\\[8pt] &=&\int_0^1 (1+mu)\,u^{m-2} \,du\\[8pt] &=&{1\over m-1}+{m\over m}\\[8pt] &=&{1\over m-1}+1. \end{eqnarray*}
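A quick numeric check of the closed form (my own sketch; note the $m=2$ tail decays only like $2/N$, so the truncated sum is still slightly low):

```python
from math import comb

# Truncated sum of 1/C(n+m-1, m) against the closed form 1 + 1/(m-1).
for m in range(2, 7):
    s = sum(1 / comb(n + m - 1, m) for n in range(1, 200000))
    print(m, s, 1 + 1 / (m - 1))
```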
Are there conjectures in deep learning theory?
I don't know of any conjecture that captures all the theoretical problems raised by deep learning. Also, there is not much consensus on how to properly formalize the questions. For example, Naftali Tishby's group uses an information-theoretic approach, while others, like Sanjeev Arora, look more at the (non-convex) optimization problems and at different simplified or specific architectures (random or linear networks). A good theory of deep learning should explain two things: 1. Why deep networks generalize so well, although standard learning theory indicates that they should overfit. 2. Why the optimization is practically so easy, although current theory says it is hard.
How to prove $\sqrt{2 + 2\cos(\frac{\pi}{2^{k+1}})} = 2\cos(\frac{\pi}{2^{k+2}})$?
Hint: Use that $$\cos(2x)=2\cos^2(x)-1$$ And squaring your equation we get $$\cos\left(\frac{\pi}{2^{k+1}}\right)=2\cos^2\left(\frac{\pi}{2^{k+2}}\right)-1$$
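Completing the hint: with $x=\frac{\pi}{2^{k+2}}$ this reads $$2+2\cos\left(\frac{\pi}{2^{k+1}}\right)=4\cos^2\left(\frac{\pi}{2^{k+2}}\right),$$ and since $0<\frac{\pi}{2^{k+2}}<\frac{\pi}{2}$, the cosine on the right is positive, so taking the positive square root of both sides gives exactly the identity to be proved.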
Expected number of cycles in random function
Let $a_n$ denote the expected number of cycles in a random endofunction on a set of $n$ elements. Marko Riedel has already posted the derivation of the asymptotic result $a_n\sim\frac12\log n$ by P. Flajolet and A. Odlyzko in a comment. Here I'll also derive the constant term, and I'll use Prüfer sequences instead of generating functions, so you can choose the proof that's more accessible for you. Joyal's proof of Cayley's formula for the number of labeled trees with a given number of vertices makes use of a correspondence between endofunctions and vertebrates. A vertebrate is a tree with an ordered pair of (possibly identical) vertices distinguished. The name is based on the idea that the unique path in the tree connecting the two distinguished vertices forms a sort of spine of the tree. An endofunction can be viewed as a family of rooted trees, where the roots are the elements contained in cycles. A vertebrate can also be viewed as a family of rooted trees, where the roots are the elements along the spine. There are $k!$ possible orders of the $k$ elements along the spine, and $k!$ possible permutations specifying the endofunction on the $k$ elements contained in cycles, so there are as many endofunctions with $k$ cyclic elements as there are vertebrates with spine length $k$. To count the vertebrates with spine length $k$, label the vertices $1$ through $n$ and fix the spine to be $n,n-1,\ldots,n-k+2,m$, in that order, with $m<n-k+2$ arbitrary. Then the last $k-2$ elements of the Prüfer code for the vertebrate are $n-k+2,\ldots,n-1$, and the element before that is one of the $k$ elements on the spine; the remaining $n-k-1$ elements of the Prüfer code can be chosen arbitrarily. We also need to multiply by $\frac{n!}{(n-k)!}$ because we arbitrarily fixed the $k$ elements on the spine, so the number of vertebrates with spine length $k$ (and thus the number of endofunctions with $k$ cyclic elements) is $$ kn^{n-k-1}\frac{n!}{(n-k)!}=(n-1)!\frac{kn^{n-k}}{(n-k)!}\;. $$ The endofunction permutes the $k$ cyclic elements with some permutation on $k$ elements. The expected number of cycles in a uniformly random permutation on $k$ elements is $H_k=\sum_{j=1}^k\frac1j$, the $k$-th harmonic number. (This is proved e.g. here.) Thus, the expected number of cycles in a uniformly random endofunction is $$ a_n=\frac{\sum_{k=1}^n(n-1)!\frac{kn^{n-k}}{(n-k)!}\sum_{j=1}^k\frac1j}{\sum_{k=1}^n(n-1)!\frac{kn^{n-k}}{(n-k)!}}\;. $$ The denominator is of course simply the total number $n^n$ of endofunctions. Exchanging the summations yields: \begin{eqnarray*} a_n&=&n^{-n}\sum_{k=1}^n(n-1)!\frac{kn^{n-k}}{(n-k)!}\sum_{j=1}^k\frac1j \\&=& (n-1)!\sum_{j=1}^n\frac1j\sum_{k=j}^n\frac{kn^{-k}}{(n-k)!} \\ &=& n!\sum_{j=1}^n\frac1j\frac{n^{-j}}{(n-j)!}\;. \end{eqnarray*} The last equality is provided by Wolfram|Alpha; I haven't tried to prove it, but if you're interested I suspect there's a nice combinatorial argument for it. I doubt this can be simplified further, but we can derive an upper bound and an asymptotic result from it. For the upper bound, note that $\frac{n!n^{-j}}{(n-j)!}\le1$, so $a_n\le\sum_{j=1}^n\frac1j=H_n=\log n+\gamma+o(1)$, where $\gamma$ is the Euler–Mascheroni constant. For the asymptotics, I won't rigorously derive the order of the approximations, but the result should be correct up to terms of order $o(1)$.
We have \begin{eqnarray*} a_n &=& n!\sum_{j=1}^n\frac1j\frac{n^{-j}}{(n-j)!} \\ &=& \sum_{j=1}^n\frac1j\prod_{k=0}^{j-1}\frac{n-k}n \\ &=& \sum_{j=1}^n\frac1j\prod_{k=0}^{j-1}\left(1-\frac kn\right) \\ &=& \sum_{j=1}^n\frac1j\exp\left(\sum_{k=0}^{j-1}\log\left(1-\frac kn\right)\right) \\ &\approx& \sum_{j=1}^n\frac1j\exp\left(-\sum_{k=0}^{j-1}\frac kn\right) \\ &=& \sum_{j=1}^n\frac1j\exp\left(-\frac{j(j-1)}{2n}\right) \\ &\approx& \sum_{j=1}^\infty\frac1j\exp\left(-\frac{j(j-1)}{2n}\right)\;. \end{eqnarray*} Thus we want to sum the harmonic series, damped by a Gaussian. This is hard to do directly because of the smooth transition from the discrete behaviour for small $j$ that we need to sum exactly to the approximately continuous behaviour for large $j$ that we could approximate with an integral. To deal with these two domains separately, we can introduce an intermediate term with a simple exponential that we can sum exactly: \begin{eqnarray*} a_n&\approx&\sum_{j=1}^\infty\frac1j\exp\left(-\frac{j(j-1)}{2n}\right) \\ &=& \sum_{j=1}^\infty\frac1j\exp\left(-\frac j{\sqrt{2n}}\right)+\sum_{j=1}^\infty\frac1j\left(\exp\left(-\frac{j(j-1)}{2n}\right)-\exp\left(-\frac j{\sqrt{2n}}\right)\right)\;. \end{eqnarray*} The first sum is the integral of a geometric series: $$ \sum_{j=1}^\infty\frac{q^j}j=\int_0^1\sum_{j=1}^\infty q^{j-1}\mathrm dq=\int_0^1\frac1{1-q}\,\mathrm dq=-\log(1-q)\;, $$ and $$ -\log\left(1-\exp\left(-\frac1{\sqrt{2n}}\right)\right)\approx\log\sqrt{2n}=\frac12(\log n+\log2)\;. $$ The summand in the second sum has no singularity at $j=0$ and varies smoothly, so we can approximate it by an integral over $x=\frac j{\sqrt{2n}}$: \begin{eqnarray*} a_n&\approx&\frac12(\log n+\log2)+\int_0^\infty\frac1x\left(\mathrm e^{-x^2}-\mathrm e^{-x}\right)\mathrm dx \\ &=&\frac12(\log n+\log2+\gamma)\;, \end{eqnarray*} where the last equality is again provided by Wolfram|Alpha. A plot of the exact values against $\log n$, compared to the linear asymptote, confirms the asymptotic result (plot omitted here). P.S.: We can use the same method to obtain an asymptotic result for the expected length of the cycle that a uniformly random element in a uniformly random endofunction leads to. The element is equally likely to lead to any one of the cyclic elements, so the expected length of the cycle is the expected length $\frac{k+1}2$ of the cycle containing a random cyclic element: $$ n^{-n}\sum_{k=1}^n(n-1)!\frac{kn^{n-k}}{(n-k)!}\cdot\frac{k+1}2=\frac12+\frac{n!}{2n}\sum_{k=1}^n\frac{k^2n^{-k}}{(n-k)!}\;. $$ Here we don't have any complication due to a singularity at the origin, and we can directly approximate the sum by an integral over $x=\frac k{\sqrt{2n}}$ as above, yielding $$ \frac{n!}{2n}\sum_{k=1}^n\frac{k^2n^{-k}}{(n-k)!}\approx\frac1{2n}(2n)^{\frac32}\int_0^\infty x^2\mathrm e^{-x^2}\mathrm dx=\sqrt{\frac{\pi n}8}\;. $$ I didn't include the constant term $\frac12$ because the approximations also contribute a constant term; the correct constant term seems to be $\frac13$. A log-log plot of the exact values compared to the asymptote $\sqrt{\frac{\pi n}8}$ and to $\sqrt{\frac{\pi n}8}+\frac13$ confirms this (plot omitted here). So the expected number of cycles is of order $\log\sqrt n$, compared to $\log n$ for permutations, and the expected length of the cycle containing a random element is of order $\sqrt n$, compared to $n$ for permutations.
The number $k$ of cyclic elements is directly related to the expected length $\frac{k+1}2$ of the cycle containing a random element, so its expected value is asymptotically $\sqrt{\frac{\pi n}2}$, as also derived in the paper by Flajolet and Odlyzko.
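The exact formula is easy to evaluate without big integers by accumulating the ratio $r_j=n!/\big((n-j)!\,n^j\big)$ multiplicatively; here is a small sketch (mine) comparing it with the asymptote:

```python
from math import log

EULER_GAMMA = 0.5772156649015329

# a_n = n! * sum_{j=1}^n (1/j) * n^{-j}/(n-j)!, computed via the ratio
# r_j = prod_{k=0}^{j-1} (n-k)/n, updated one factor at a time.
def a(n):
    total, r = 0.0, 1.0            # r starts at r_0 = 1
    for j in range(1, n + 1):
        r *= (n - j + 1) / n       # multiply in the next factor
        total += r / j
    return total

for n in [10, 100, 1000, 10000]:
    print(n, a(n), 0.5 * (log(n) + log(2) + EULER_GAMMA))
```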
integral of continuous function can be very small
The only way "$|\int_x^y f(s)\,ds| \le \epsilon$ for all $\epsilon &gt; 0$" can be true is if $\int_x^y f(s)\,ds = 0$. Hence, the statement is equivalent to "$\exists M, \forall y &gt; x \ge M, \int_x^y f(s)\,ds = 0$" This clearly fails if we take $f(x) = 1$. Hence, the statement "$\exists M, \forall y&gt; x\geq M$ such that $|\int_{x}^{y}{f(s)ds}|\leq \varepsilon$ for all $\varepsilon&gt;0$" is false.
Geodesic deviation on a unit sphere
The geodesic connecting the two north-moving particles does not lie on a constant-$\phi$ coordinate line.
Expected area of the intersection of two circles
Let $\vec{x}_1$ and $\vec{x}_2$ be the two points. Let $r = |\vec{x}_1 - \vec{x}_2|$ be the distance between them. By elementary geometry, if you draw two circles of radius $r$ using these two points as centers, the area of their intersection is given by $(\frac{2\pi}{3} - \frac{\sqrt{3}}{2})r^2$. Since the two points are picked independently, we have: $$E\left[ \vec{x}_1 \cdot \vec{x}_2 \right] = E\left[\vec{x}_1\right] \cdot E\left[\vec{x}_2\right] = \vec{0} \cdot \vec{0} = 0$$ This implies $$E\left[|\vec{x}_1 - \vec{x}_2|^2\right] = E\left[|\vec{x}_1|^2 + |\vec{x}_2|^2\right] = 2\frac{\int_0^R r^3 dr}{\int_0^R rdr} = R^2$$ As a result, the expected area of the intersection is $(\frac{2\pi}{3} - \frac{\sqrt{3}}{2})R^2$. Update for those who are curious: Let $\mathscr{C}$ be the set of events such that the intersection contains the origin; then: $$\begin{align} \operatorname{Prob}\left[\,\mathscr{C} \right] &= \frac{2\pi + 3\sqrt{3}}{6\pi}\\ E\left[\,|\vec{x}_1 - \vec{x}_2|^2 : \mathscr{C}\right] &= \frac{20\pi + 21\sqrt{3}}{6(2\pi + 3\sqrt{3})} \end{align}$$ and the expected area of intersection conditional on containing the center is given by: $$\frac{(4\pi - 3\sqrt{3})(20\pi + 21\sqrt{3})}{36(2\pi + 3\sqrt{3})}$$ To evaluate $E\left[ \varphi(\vec{x}_1,\vec{x}_2) : \mathscr{C} \right]$ for any function $\varphi( \vec{x}_1, \vec{x}_2 )$ which is symmetric and rotationally invariant with respect to its arguments, you need to compute an integral of the form: $$\int_{\frac{\pi}{3}}^{\pi} \frac{d\theta}{\pi} \left[2\int_{0}^{R} \frac{2udu}{R^2} \left( \int_{\alpha(\theta)u}^{u} \frac{2vdv}{R^2} \varphi( \vec{x}_1, \vec{x}_2 ) \right) \right] $$ where $u \ge v$ are $|\vec{x}_1|$ and $|\vec{x}_2|$ sorted in descending order, $\theta$ is the angle between $\vec{x}_1$ and $\vec{x}_2$, and the mysterious $\alpha(\theta)$ is $\max(2\cos(\theta),0)$ for $\theta \in [\frac{\pi}{3},\pi]$. The integral is a big mess and I needed a computer algebra system to crank it out. I won't provide more details on this part, as it is not relevant to the main answer.
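Both expectations in the main computation are easy to confirm by simulation; here is a short Monte Carlo sketch (my own, with $R=1$ chosen for convenience):

```python
import numpy as np

rng = np.random.default_rng(0)
R, N = 1.0, 10**6

def uniform_disk(n):
    r = R * np.sqrt(rng.random(n))      # sqrt gives uniform area density
    t = 2 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

d2 = np.sum((uniform_disk(N) - uniform_disk(N)) ** 2, axis=1)
print(d2.mean(), R**2)                              # ~ 1.0
print((2*np.pi/3 - np.sqrt(3)/2) * d2.mean())       # ~ 1.228, i.e. the
# expected intersection area (2*pi/3 - sqrt(3)/2) * R^2
```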
If $\sum_{n=0}^\infty|a_n|<\infty$, prove that $\sum_{n=0}^\infty a_nx^n$ converges uniformly for $x\in[0,1]$.
You do not need any heavy theorem here; it follows directly from the definition. For $|x| \le 1$ set $$f_k(x) = \sum_{n=0}^k a_n x^n, \qquad f(x) = \sum_{n=0}^\infty a_n x^n.$$ Since $\sum_{n=0}^\infty |a_n| < \infty$, the series for $f(x)$ converges absolutely, so $f$ is well-defined. Also, again for $|x| \le 1$: $$|f(x)-f_k(x)| =\Big| \sum_{n=k+1}^\infty a_n x^n\Big| \le \sum_{n=k+1}^\infty |a_n x^n| \le \sum_{n=k+1}^\infty |a_n |,$$ where $\lim_{k \to \infty}\sum_{n=k+1}^\infty |a_n | =0$ because $\sum_n|a_n|$ converges. The bound is independent of $x$, and this is precisely the definition of uniform convergence. Q.e.d.
Application of Law of Total Expectation
Note that we have, by the law of total expectation, \begin{align*} \def\P{\mathbf P}\def\E{\mathbf E} \E[S_\tau] &= \sum_{k=0}^\infty \E[S_\tau \mid \tau = k]\P(\tau = k)\\ &= \sum_{k=0}^\infty \E[S_k]\P(\tau = k)\\ &= \sum_{k=0}^\infty k\mu\P(\tau = k)\\ &= \mu \sum_{k=0}^\infty k\P(\tau = k)\\ &= \mu\E[\tau]. \end{align*} (Note that the second equality uses that the event $\{\tau = k\}$ is independent of $S_k$, as holds, for instance, when $\tau$ is independent of the whole sequence; without such an assumption, $\E[S_\tau \mid \tau = k]$ need not equal $\E[S_k]$.)
Step by step: how to graph trig functions from local maxima and minima
$y=\frac12\sec(\frac\pi3x)-2$ has a minimum at $(0,-1.5)$ and a maximum at $(3, -2.5)$. You only need to shift your graph to the left by $0.75$. Hint: To shift a graph to the left by $C$, change $f(x)$ to $f(x+C)$.
Orthogonality of Characters
Here $\bar{f}_r(a_i)$ simply denotes the complex conjugate of $f_r(a_i)$. So there is no separate function $\bar{f}$ in this equation; it's just an equation about the functions $f_r$.
Is this reasoning correct for average prime gap?
I don't think the reasoning is correct. It's pretty clear that $$ p_n-\Pi^{-1}(n)\sim\sqrt{\frac{n}{\log n}}\not\sim2\sqrt n\log n\sim\frac{2\sqrt n\log^2n}{\log n-2} $$ and I don't see any proof (or reason to expect proof) that the inverse of the partials gives the average gap size without further assumptions. In any case there's no justification for the last line giving $g_n$. Instead, note that $(p_2-p_1)+(p_3-p_2)+\cdots+(p_{n+1}-p_n)$ telescopes to $p_{n+1}-p_1$ and so the average of the first $n$ gaps is $$ \frac{p_{n+1}-p_1}{n} $$ which is $$ \log n+\log\log n-1+\frac{\log\log n-2}{\log n}-\frac{(\log\log n)^2-6\log\log n+11}{2\log^2n}+O\left(\left(\frac{\log\log n}{\log n}\right)^3\right) $$ by a result of Cipolla (1902).
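For a feel of how slowly the expansion kicks in, here is a small check (my own sketch, using sympy's prime) of the telescoped average against the first terms of Cipolla's expansion:

```python
from sympy import prime
from math import log

# average of the first n prime gaps, (p_{n+1} - p_1)/n, versus
# the leading terms log n + log log n - 1
for n in [10**3, 10**4, 10**5]:
    avg = (prime(n + 1) - prime(1)) / n
    print(n, avg, log(n) + log(log(n)) - 1)
```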
Geometric Sequence Question
Hint: An increase of $10\%$ is represented by multiplying the quantity by $1.1$.
Pointwise convergence on measurable functions is not metrizable
The set of all functions in $\mathbb{R}^D$ in the pointwise convergence topology is metrisable iff $D$ is countable (for uncountable $D$ indeed first countability fails). A variant of this proof works for the subspace of measurable functions: Suppose that $U_n$, $n \in \mathbb{N}$, is a local base at the function $f \equiv 0$ (for definiteness; the constant $0$ function is certainly in the space). Then each $U_n$ contains a basic open subset $$B(F_n, r_n) = \{f : D \rightarrow \mathbb{R} \mid f \text{ measurable}, \forall x \in F_n: |f(x)| < r_n \}$$ for some finite subset $F_n \subset D$ and some $r_n > 0$. (This is a standard base for the pointwise convergence topology, relativised to our subspace, corresponding to product open sets that depend on finitely many coordinates, i.e. the set $F_n$.) If the $U_n$ form a local base at $f$, so do the sets $B(F_n, r_n)$. As $D$ is uncountable, $D \setminus \bigcup_n F_n$ is also uncountable, so we have some $p \in D \setminus \bigcup_n F_n$. Then $f \in B(\{p\}, 1)$, which is open, so it should contain some $$B(F_n, r_n) \subset U_n \subset B(\{p\}, 1)$$ for some $n$. But if $g$ is a measurable function that maps $p$ to $2$ and $F_n$ to $0$ (surely these exist; we can even take all other values to be $0$ too, so $g = 2\chi_{\{p\}}$), then $g \notin B(\{p\},1)$, while $g \in B(F_n, r_n) \subset U_n$, contradicting the supposed inclusion. So we cannot have a countable local base for uncountable $D$ in the space of measurable functions in $\mathbb{R}^D$ in the product/pointwise topology. The set of measurable functions is so large that we can just mimic the proof for all functions from $D$ to $\mathbb{R}$.
Squeeze Theorem vs instantaneous rates of change
For the $\sin$ function one may use $|\sin(x)|\le |x|$ to find the limit at zero, and further $\cos(x)\le\frac{\sin(x)}{x}\le 1$ to find its derivative at zero. Both proofs use the squeeze theorem.
How to prove $f$ is $C^\infty$
The answers really are in the comments to the main question, and are due to Martin Blas Perez and Michael Hoppe. Hence this post is CW. The function $z=f(x, y)$ satisfies the equation $$ (x^2+y^4)z+z^3=1, $$ so it can never vanish. In particular, by the implicit function theorem, $f$ must be $C^1$ and $$ \begin{array}{cc} \frac{\partial z}{\partial x} = -\frac{2xz}{x^2+y^4+3z^2}, & \frac{\partial z}{\partial y} =-\frac{4y^3z}{x^2+y^4+3z^2}, \end{array} $$ and we remark that the denominators can never vanish. By iterating this process we see that $f$ is infinitely differentiable. Actually, Michael even computed an explicit expression for $z$. See his comment to the main question, which I hope he turns into an answer. From that expression it is manifest that $z$ is smooth.
Two questions about torsionless modules
This is a standard fact about torsionless modules, which you can find (together with a characterisation of reflexive modules) in the book by Auslander, Reiten and Smalø (look in the index for torsionless or reflexive modules). Every torsionless module is of the form $\Omega^{1}(N)$ for some module $N$ (a torsionless module embeds in a free module, hence is a first syzygy). Then $Ext^{1}(\Omega^{1}(N),\Lambda)=Ext^{2}(N, \Lambda)=0$ for all $N$, which is exactly the statement that the injective dimension of the algebra is at most 1.
Showing that similar matrices have the same minimal polynomial
If $f$ is a polynomial, then you have $$f(x)=a_nx^n+...+a_1x+a_0.$$ Then you have $$f(P^{-1}AP)=a_n(P^{-1}AP)^n+...+a_1(P^{-1}AP)+a_0I,$$ which is $$f(P^{-1}AP)=a_n(P^{-1}APP^{-1}AP...P^{-1}AP)+...+a_1(P^{-1}AP)+a_0P^{-1}IP$$ or $$f(P^{-1}AP)=P^{-1}a_nA^nP+...+P^{-1}a_1AP+a_0P^{-1}IP,$$ which finally gives $$f(P^{-1}AP)=P^{-1}(a_nA^n+...+a_1A+a_0I)P=P^{-1}f(A)P.$$ In particular, $f(P^{-1}AP)=0$ if and only if $f(A)=0$, so $A$ and $P^{-1}AP$ are annihilated by exactly the same polynomials and therefore have the same minimal polynomial.
Show that for two maximal matchings $M_1,M_2$ it holds that $\lvert M_1 \rvert \le 2\lvert M_2 \rvert$
This can be proved by contradiction. Suppose that, on the contrary, $\lvert M_1 \rvert > 2\lvert M_2 \rvert.$ At most $2\lvert M_2 \rvert$ edges of $ M_1$ can have a vertex in common with an edge of $ M_2$ and so there is at least one edge which doesn't. This edge can be added to $M_2$ which therefore cannot have been a maximal matching.
Factories with different amounts of products and chances of being broken delivered to a store
Your intuition is sound regarding the relative chances of a product coming from each factory, but you must first normalize the probabilities of the product coming from each factory. The chance that a product comes from F-1 (factory 1) is $1/7$. The chance that a product comes from F-2 is $2/7$. The chance that a product comes from F-3 is $4/7$. So, the correct computation is $$\left[\frac17 \times 0.03\right] + \left[\frac27 \times 0.02\right] + \left[\frac47 \times 0.01\right] = \frac{0.11}{7} \approx 0.0157.$$
Recurrence and Fibonacci: $a_{n+1}=\frac {1+a_n}{2+a_n}$
Hint: By letting $2+a_n=\frac{b_{n+1}}{b_n}$, we get $$\frac{b_{n+2}}{b_{n+1}}-2 = 1 - \frac{b_{n}}{b_{n+1}} \implies b_{n+2} -3b_{n+1} + b_{n} = 0.$$
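A remark connecting to the title (my own note, using the standard Fibonacci identity $F_{m+4}=3F_{m+2}-F_m$): the linear recurrence $b_{n+2}=3b_{n+1}-b_n$ is exactly the one satisfied by every other Fibonacci number, so for suitable initial data the substitution gives $b_n=F_{2n}$, and the nonlinear recursion for $a_n$ is then solved in terms of ratios of Fibonacci numbers.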
Asymptotical stability for $x'=-x^3$ and $x'=x^3$
You can use the Lyapunov function $V(x)=x^2$. For the system $$\tag{1} \dot x= -x^3 $$ the derivative along the trajectories is $$ \dot V\Big|_{(1)}= 2x\dot x= -2x^4. $$ It is negative definite, so the equilibrium point $x=0$ is asymptotically stable. For the system $$\tag{2} \dot x= x^3 $$ the derivative along the trajectories is $$ \dot V\Big|_{(2)}= 2x\dot x= 2x^4. $$ It is positive definite, so the equilibrium point $x=0$ is unstable. Alternatively, you can notice the fact that for the system (1) $x(t)$ decreases (because $\dot x<0$) when $x(t)>0$ and increases (because $\dot x>0$) when $x(t)<0$. Hence, the solutions always move toward zero, that is, the norm $\|x(t)-x_0\|=|x(t)|$ decreases. This implies stability of the origin. In order to prove asymptotic stability, consider the general solution to (1) $$ x(t)=\frac1{\sqrt{C+2t}}. $$ It tends to zero when $t\to+\infty$; thus, the origin is asymptotically stable. As for the system (2), $x(t)$ increases when $x(t)>0$ and decreases when $x(t)<0$. Hence, the direction is away from zero and the norm increases; hence, the origin is unstable.
Asymptotics of $\operatorname{agm}(1,x)$ when $x\to\infty$
For every $x>1$, $$\mathrm{agm}(1,x)=x\cdot\mathrm{agm}(1,x^{-1})=\frac{\pi x}{2K(u(x))}$$ where $$u(x)^2=1-x^{-2}$$ and $K$ denotes the complete elliptic integral of the first kind. When $x\to\infty$, $u(x)\to1$. The asymptotic expansion of $K(k)$ when $k\to1$ reads $$K(k)=-\frac12\log|1-k|+O(1)$$ hence, when $x\to\infty$, $$\mathrm{agm}(1,x)=\frac{\pi x}{-\log|1-u(x)|+O(1)}=\frac{\pi x}{2\log x+O(1)}$$ in particular, $$\lim_{x\to\infty}\frac{\log x}x\cdot\mathrm{agm}(1,x)=\frac\pi2$$
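A quick numeric sketch (mine) of both the AGM iteration and the asymptote; note that the convergence of $(\log x/x)\,\mathrm{agm}(1,x)$ to $\pi/2$ is only logarithmic, since the relative error is $O(1)/\log x$:

```python
from math import log, pi, sqrt

def agm(a, b, tol=1e-15):
    # standard arithmetic-geometric mean iteration
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

for x in [1e2, 1e4, 1e8, 1e16]:
    print(x, log(x) / x * agm(1.0, x), pi / 2)
```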
Finding a basis for $sp(4,\mathbb{C})$ and related basis.
The elements of $\mathfrak{sp}_4(\mathbb{C})$ can be written in block form $$ X=\begin{pmatrix} A & B \\ C & -A^t \end{pmatrix} $$ with $2\times 2$ blocks, where $B^t=B$ and $C^t=C$. From here we count that the dimension of the Lie algebra equals $10$, because $A$ has $4$ parameters, and $B$ and $C$ have three parameters each (so that $4+3+3=10$). Now just set one of these parameters to $1$ and the rest to $0$ to obtain a basis with $10$ elements. Usually we write $H_1=diag(1,0,-1,0)$, $H_2=diag(0,1,0,-1)$, $X_{12}=E_{12}-E_{43}$, etc. There are many references, e.g., here.
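As a machine check (my own sketch) that the block recipe really lands in $\mathfrak{sp}_4$ and produces $10$ basis matrices, one can verify the defining condition $X^tJ+JX=0$, with the symplectic form chosen (in one common convention) as $J=\begin{pmatrix}0&I\\-I&0\end{pmatrix}$:

```python
from sympy import Matrix, zeros, eye

I2 = eye(2)
J = Matrix([[zeros(2), I2], [-I2, zeros(2)]])   # block-built 4x4 form

def embed(A, B, C):
    # X = [[A, B], [C, -A^T]] with B, C symmetric
    return Matrix([[A, B], [C, -A.T]])

basis = []
for i in range(2):                              # 4 choices for A
    for j in range(2):
        A = zeros(2); A[i, j] = 1
        basis.append(embed(A, zeros(2), zeros(2)))
for (i, j) in [(0, 0), (1, 1), (0, 1)]:         # 3 symmetric B, 3 symmetric C
    S = zeros(2); S[i, j] = S[j, i] = 1
    basis.append(embed(zeros(2), S, zeros(2)))
    basis.append(embed(zeros(2), zeros(2), S))

assert len(basis) == 10
assert all(X.T * J + J * X == zeros(4) for X in basis)
```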
How do I prove that a function decreases/increases on an interval?
you have to solve the inequality $$f'(x)=4(x-3)^3\geq 0$$ or $$f'(x)=4(x-3)^3\le 0$$ thus your function is increasing if $$3\le x<+\infty$$ or decreasing if $$-\infty<x<3$$
A question about proof of Bolzano-weierstrass theorem.
I'm not sure the proof you've given works, but the statement is true: every sequence has a monotonic subsequence. In fact, this then implies Bolzano-Weierstrass since a bounded monotonic sequence must converge. A fun proof of this uses graph theory and something called the infinite Ramsey theorem. This says that if you colour the edges of a complete infinite graph in finitely many colours, there is guaranteed to be a monochromatic infinite complete subgraph. To apply this, for a sequence $(x_n)$ construct a complete infinite graph on the positive integers. Then colour an edge $ij$ blue if $x_i \leq x_j$ ($i < j$) or red otherwise. By infinite Ramsey you have a monochromatic complete infinite subgraph, which corresponds exactly to a monotonic subsequence. A good exercise in analysis is to prove that every sequence has a monotonic subsequence without using graph theory. One of the reasons I doubt you can just prove this by picking $A_{n_k}$ as small as possible is that the subsequence may be monotonic increasing or decreasing.
Evaluating $\int_0^\infty \frac{1}{x+1-u}\cdot \frac{\mathrm{d}x}{\log^2 x+\pi^2}$ using real methods.
By setting $e^\eta=v=1-u$ and exploiting the inverse Laplace transform we have: $$\int_{1}^{+\infty}\frac{dx}{(x+v)\left(\pi^2+\log^2 x\right)}=\frac{1}{\pi}\int_{1}^{+\infty}\frac{dx}{x+v}\int_{0}^{+\infty}\sin(\pi z)\,x^{-z}\,dz.\tag{1}$$ Moreover, if $0<z<1$, by exploiting the Euler beta function and the reflection formulas for the $\Gamma$ function we have: $$ \int_{0}^{+\infty}\frac{x^{-z}}{x+v}\,dx = \frac{\pi}{v^z\sin(\pi z)}=\int_{0}^{+\infty}\frac{x^z}{x+vx^2}\,dx\tag{2}$$ and rearranging carefully we get: $$ \int_{0}^{+\infty}\frac{dx}{(x+v)\left(\pi^2+\log^2 x\right)}=\int_{0}^{1}t^{-v}\,dt - \int_{0}^{+\infty}v^{-z}\,dz = \frac{1}{1-v}-\frac{1}{\log v}\tag{3}$$ as wanted. Anyhow, this is not the sketch of a really alternative proof, since the inverse Laplace transform is just the residue theorem in disguise.
Evaluate $\int_0^{\frac{\pi}{2}}\frac{x^2}{1+\cos^2 x}dx$
This integral is deviously difficult. It may look like it has the solution I provided for this integral, but, as you will see, there is an additional wrinkle. As in the linked solution, I will express the integral in a form in which I may attack via Cauchy's theorem. I will then deduce the contour over which we will need to apply Cauchy's theorem - this is the additional wrinkle. So, to start, let's rewrite the integral, which I will denote $K$: $$K=\int_0^{\pi/2} dx \frac{x^2}{1+\cos^2{x}} = 2 \int_0^{\pi/2} dx \frac{x^2}{3+\cos{2 x}} = \frac18 \int_{-\pi}^{\pi} dy \frac{y^2}{3+\cos{y}}$$ Along these lines, define $$J(a) = \int_{-\pi}^{\pi} dy \frac{e^{i a y}}{3+\cos{y}}$$ so that $$K = -\frac18 J''(0)$$ This then allows us to define the following contour integral $$I(a) = \oint_C dz \frac{z^a}{z^2+6 z+1}$$ where, as I will explain below, $C$ is the following contour which is a modified keyhole contour about the negative real axis within the unit circle. The modification is a pair of semicircular bumps of radius $\epsilon$ about the point $z=-3+2 \sqrt{2}$. These bumps are necessary because the integrand has a pole within the unit circle on the chosen branch cut of the integrand (i.e., the negative real axis). The strategy is to express $I(a)$ in terms of $J(a)$ and other terms, which will then yield an expression for $J(a)$ by Cauchy's theorem. To do this, we write out $I(a)$ explicitly in terms of integrals along the eight pieces of the contour $C$. This is an exercise in parametrization which is better left to the reader. The result is $$I(a) = \frac{i}{2} J(a) +e^{i \pi} \int_1^{3-2 \sqrt{2}+\epsilon} dx \frac{e^{i \pi a} x^a}{x^2-6 x+1} \\+ i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \frac{ [e^{i \pi}(3-2 \sqrt{2})+\epsilon e^{i \phi}]^a}{(-3+2 \sqrt{2}+\epsilon e^{i \phi})^2+6 (-3+2 \sqrt{2}+\epsilon e^{i \phi})+1}\\ +e^{i \pi} \int_{3-2 \sqrt{2}-\epsilon}^{\epsilon} dx \frac{e^{i \pi a} x^a}{x^2-6 x+1} + i \epsilon \int_{\pi}^{-\pi} d\phi \, e^{i \phi} \frac{\epsilon^a e^{i a \phi}}{\epsilon^2 e^{i 2 \phi}+6 \epsilon e^{i \phi}+1}\\ +e^{-i \pi} \int_{\epsilon}^{3-2 \sqrt{2}-\epsilon} dx \frac{e^{-i \pi a} x^a}{x^2-6 x+1}\\ + i \epsilon \int_0^{-\pi} d\phi \, e^{i \phi} \frac{ [e^{-i \pi}(3-2 \sqrt{2})+\epsilon e^{i \phi}]^a}{(-3+2 \sqrt{2}+\epsilon e^{i \phi})^2+6 (-3+2 \sqrt{2}+\epsilon e^{i \phi})+1}\\+e^{-i \pi} \int_{3-2 \sqrt{2}+\epsilon}^1 dx \frac{e^{-i \pi a} x^a}{x^2-6 x+1} $$ Actually, one remark is worth making at this point. The parametrization of the semicircular bumps about the point $z=-3+2 \sqrt{2}$ requires us to invoke the correct representation of the minus sign. Thus, above the branch cut, $-3+2 \sqrt{2}=e^{i \pi} (3-2 \sqrt{2})$, while below the branch cut, $-3+2 \sqrt{2}=e^{-i \pi} (3-2 \sqrt{2})$. This is crucial to the analysis. We consider the limit as $\epsilon \to 0$; in this limit, the above simplifies considerably: $$I(a) = \frac{i}{2} J(a) + i 2 \sin{\pi a} \: PV \int_0^1 dx \frac{x^a}{x^2-6 x+1} - i \frac{\pi}{4 \sqrt{2}} 2 \cos{\pi a} \: (3-2 \sqrt{2})^a$$ where $PV$ denotes the Cauchy principal value. 
By Cauchy's theorem, $I(a)=0$; this produces an expression for $J(a)$: $$J(a) = \frac{\pi}{\sqrt{2}} \cos{\pi a} \: (3-2 \sqrt{2})^a - 4 \sin{\pi a}\: PV \int_0^1 dx \frac{x^a}{x^2-6 x+1}$$ All we need to do now is use the above expression for $K=-\frac18 J''(0)$; the result is $$K = \pi \, PV \int_0^1 dx \frac{\log{x}}{x^2-6 x+1} + \frac{\pi^3}{8 \sqrt{2}} - \frac{\pi}{8 \sqrt{2}} \log^2{(3-2 \sqrt{2})}$$ To complete this analysis, we evaluate the above integral. Note that it is a principal value, as the integration interval includes a pole of the integrand. To this effect, we note that $$\frac1{x^2-6 x+1} = \frac1{4 \sqrt{2}} \left (\frac1{x-(3+2 \sqrt{2})} - \frac1{x-(3-2 \sqrt{2})} \right )$$ We use the following formulae, valid for $b > 1$: $$\int_0^1 dx \frac{\log{x}}{x-b} = \text{Li}_2{\left ( \frac1{b}\right )}$$ and $$PV\int_0^1 dx \frac{\log{x}}{x-1/b} =\frac{\pi^2}{3} - \frac12 \log^2{\left ( \frac1{b}\right )} - \text{Li}_2{\left ( \frac1{b}\right )}$$ Thus, using the value $b=3+2 \sqrt{2}$, we find that $$PV \int_0^1 dx \frac{\log{x}}{x^2-6 x+1} = \frac1{4 \sqrt{2}} \left [2 \text{Li}_2{(3-2 \sqrt{2})} - \frac{\pi^2}{3} + \frac12 \log^2{(3-2 \sqrt{2})} \right ] $$ Putting this all together, we find that the $\log^2$ terms cancel and we have $$K = \int_0^{\pi/2} dx \frac{x^2}{1+\cos^2{x}} = \frac{\pi}{2 \sqrt{2}} \text{Li}_2{(3-2 \sqrt{2})} + \frac{\pi^3}{24 \sqrt{2}}$$ To summarize, we replaced the $x^2$ in the original integral by a factor of $e^{i a y}$, which then became a factor of $z^a$ in a contour integral. The integrand of the contour integral not only has a branch point at $z=0$, but also a pole on the branch cut. The contour of the contour integral then needed to have a detour around this pole on the branch cut. Using Cauchy's theorem, we got an expression for the integral over $e^{i a y}$, and differentiating twice, we got an expression for the original integral in terms of the principal value of another integral. The problem then reduced to evaluating the principal value of that integral.
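Given how involved the contour analysis is, a numeric confirmation is reassuring. This sketch (my own, assuming the mpmath library is available) compares a direct quadrature with the closed form:

```python
from mpmath import mp, quad, cos, pi, polylog, sqrt

mp.dps = 30
# left side: direct numerical integration of the original integral
lhs = quad(lambda x: x**2 / (1 + cos(x)**2), [0, pi/2])
# right side: the closed form derived above
rhs = pi/(2*sqrt(2)) * polylog(2, 3 - 2*sqrt(2)) + pi**3/(24*sqrt(2))
print(lhs)   # approximately 1.11296
print(rhs)   # matches lhs to working precision
```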
Number partition - prove recursive formula
HINT: $p_k(n)$ is the number of partitions of $n$ into $k$ parts. There are two kinds of partitions of $n$ into $k$ parts: those having at least one part of size $1$, and those in which every part has size at least $2$. If every part has size at least $2$, you can subtract one from each part to get a partition of $n-k$ into $k$ parts. And if there’s a part of size $1$, you can ... ?
Show that an $R$-module homomorphism $\alpha:A \to B$ is injective.
You should embed the kernel $K$ into some injective module $Q$ and then apply $\operatorname{Hom}(-,Q)$ to the composition $$K \to A \to B.$$ You get a map which is both surjective and zero, hence $\operatorname{Hom}(K,Q)=0$. By our choice we have an embedding $K \subset Q$, so we deduce $K=0$.
Guidance for this?? I have no idea even to get started.
Hint: Take a line $R$ parallel to $L$ and passing through the center of the sphere. Take the intersections of this line with the sphere (two points). The planes tangent to the sphere at these two points are orthogonal to the line $R$ and also to the line $L$. Added: The line $R$ has equation: $(x,y,z)^T=(3,0,4)^T+t(1,-1,1)^T \qquad (1)$ The sphere has equation: $(x-3)^2+y^2+(z-4)^2=12 \qquad (2)$ To find the points of intersection, substitute $(1)$ into $(2)$; this gives: $$ [(3+t)-3]^2+(-t)^2+[(4+t)-4]^2=12 \iff t^2 = 4 \iff t=\pm 2 $$ The points of intersection of $R$ with the sphere are $A=(5,-2,6)^T$ and $B=(1,2,2)^T$. Since the direction vector of $R$ is $\vec r=(1,-1,1)^T$, the plane passing through $A$ and orthogonal to $R$ has equation: $$ (\vec x-\vec A) \cdot \vec r=0 \iff (x-5)-(y+2)+(z-6)=0 $$ so the equation of this plane is $x-y+z=13$. And in the same manner you can find the other plane, corresponding to $t=-2$.
Prove that there is no other sub-limits?
It can be shown that $\lim\limits_{n \to \infty} a_n = x$ iff we have $\lim\limits_{n \to \infty} b_n = \lim\limits_{n \to \infty} c_n = x$. Theorem: Suppose that $\lim\limits_{n \to \infty} a_n = a$. Now suppose that $b_n$ is a subsequence of $a_n$; that is, suppose that $b_n = a_{k_n}$ where $k_n$ is strictly increasing. Then $\lim\limits_{n \to \infty} b_n = a$. Proof: take $\epsilon > 0$. Take some $N$ s.t. for all $n \geq N$, $|a_n - a| < \epsilon$. Then we note that for all $n$, we have $k_n \geq n$ (because $k_0 \geq 0$ and $k$ is strictly increasing). Then for all $n \geq N$, we have $k_n \geq n \geq N$ and thus $|b_n - a| = |a_{k_n} - a| < \epsilon$. Thus, by the $N$-$\epsilon$ definition of limits, we have $\lim\limits_{n \to \infty} b_n = a$. In your case, we have two subsequences $c_n = a_{2n}$ and $d_n = a_{2n + 1}$ of $a$. Then if $\lim\limits_{n \to \infty} a_n = a$, we would have $\lim\limits_{n \to \infty} c_n = \lim\limits_{n \to \infty} d_n = a$ by the above. On the other hand, suppose we have $a_n$, $c_n$, and $d_n$ as defined above and that $c_n$ and $d_n$ have the same limit $x$. It can be shown that $a_n$ also has limit $x$ as follows: Suppose we have $\epsilon > 0$. Take $N, M$ s.t. for all $n \geq N$, $|c_n - x| < \epsilon$ and for all $m \geq M$, $|d_m - x| < \epsilon$. Define $K = \max(2N, 2M + 1)$. Now suppose we have $k \geq K$. Case 1: $k$ is even. Write $k = 2n$. Then $n \geq N$. Then $|a_k - x| = |a_{2n} - x| = |c_n - x| < \epsilon$. Case 2: $k$ is odd. Write $k = 2m + 1$. Then $m \geq M$. Then $|a_k - x| = |a_{2m + 1} - x| = |d_m - x| < \epsilon$. Then by the definition of limit, we have $\lim\limits_{n \to \infty} a_n = x$.
Triangular distribution question
The solution is incorrect as it stands - the two $[-2,2]$ distributions should add to a $[-4,4]$ triangle. However if $U1$ and $U2$ were uniform on $[-1,1]$, the density equation would be correct, because the two sides of the triangular distribution fold together with Theo taking the bigger piece, so the slope is $-\frac 12,$ not $-\frac 14$. The shortcut is to remember that the area under the density graph must be $1$, so the right triangle sitting on a base of $2$ units will have maximum density value $1$ at measure $10$.
Can someone explain the steps of manipulation of this equation for the value of x?
4: What do you get if you do $7+7+...+7+7$ $n$ times? $7n$. (From $0$ to $n-1$ there are $n$ steps.) For step 5 you have to write: $$x = \frac{5n^2-5n}{2} + 7n - 7,$$ since $$\sum_{j=0}^{n-1} j=\sum_{j=1}^{n-1} j=\sum_{j=1}^{n} j-n=\frac{(n+1)n}{2}-n=\frac{n^2+n-2n}{2}=\frac{n^2-n}{2}.$$