How to use powers on matrices
Since the first part is answered by Doug M, for the second bit we can approach by the method of induction. We consider this matrix, $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{n}$. Let's check for $n=2$: $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\cdot\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$. Let's check for $n=3$: $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}\cdot\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}$. I think we got a pattern! So our hypothesis is $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{n} = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}$. To prove our hypothesis we use the first principle of mathematical induction. Let us assume that this form is true for $n = k$, that is, multiplying $k$ times gives us $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{k} =\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$. Now if we prove it for $n=k+1$ then it's true for all $n \geq 1$. So consider $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{k+1} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{k}\cdot\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. Now $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{k} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$ from our assumption, so $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{k+1} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}\cdot\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & (k+1) \\ 0 & 1 \end{pmatrix}$ by direct multiplication. Hence this holds for any $n \geq 1$, so as a particular case of yours, for $n=99$, we get $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{99} = \begin{pmatrix} 1 & 99 \\ 0 & 1 \end{pmatrix}$. Hope this helps!
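As a quick numerical sanity check (a plain-Python sketch, not part of the proof), one can carry out the 99 multiplications directly:

```python
A = [[1, 1], [0, 1]]
P = [[1, 0], [0, 1]]  # 2x2 identity
for _ in range(99):
    # multiply P by A on the right
    P = [[sum(P[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(P)  # [[1, 99], [0, 1]]
```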
Can I construct a line segment with the length $e$ or $\pi$?
Not with compass and straightedge, starting from a given interval of length $1$. The line segments that are so constructible have lengths that are a (small) subset of the set of real algebraic numbers, and $e$ and $\pi$ are known to be transcendental (non-algebraic).
Determine $s_{10}$ for $\sum_{n=1}^{\infty}\frac{1}{n^2}$
By creative telescoping $$\begin{eqnarray*} \sum_{n\geq m}\frac{1}{n^2}&=&\sum_{n\geq m}\left(\frac{1}{n}-\frac{1}{(n+1)}\right)+\frac{1}{2}\sum_{n\geq m}\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right)\\&+&\frac{1}{6}\sum_{n\geq m}\left(\frac{1}{n^3}-\frac{1}{(n+1)^3}\right)-\frac{1}{6}\sum_{n\geq m}\frac{1}{n^3(n+1)^3}\tag{1}\end{eqnarray*} $$ hence by recalling that $\zeta(2)=\frac{\pi^2}{6}$ and plugging in $m=11$ in $(1)$ we get: $$ H_{10}^{(2)} = \frac{\pi^2}{6}-\frac{1}{11}-\frac{1}{2\cdot 11^2}-\frac{1}{6\cdot 11^3}+\frac{1}{6}\sum_{n\geq 11}\frac{1}{n^3(n+1)^3}\tag{2} $$ hence $\frac{\pi^2}{6}-\frac{1}{11}-\frac{1}{2\cdot 11^2}-\frac{1}{6\cdot 11^3}$ is an approximation of $H_{10}^{(2)}$ with an error $\leq 10^{-6}$. By Wolstenholme's theorem we know that $11$ is a divisor of the numerator of $H_{10}^{(2)}$ and the denominator of $H_{10}^{(2)}$ is clearly a divisor of $2^6\cdot 3^4\cdot 5^2\cdot 7^2$. These facts allow us to turn the previous approximation into an exact evaluation: $$ H_{10}^{(2)}=\frac{1968329}{1270080}\tag{3}$$ but I wonder why a reasonable person should follow this approach, instead of just adding ten terms of the form $\frac{1}{n^2}$.
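If one does want to just add ten terms, exact rational arithmetic confirms $(3)$; a small Python sketch of that shortcut:

```python
from fractions import Fraction

# Exact check of (3) by adding the ten terms directly.
H = sum(Fraction(1, n * n) for n in range(1, 11))
print(H)  # 1968329/1270080
```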
Prove that there do not exist 2 integers $a$ and $b$ such that $a^2-b^2=n$
You can do it that way, but let me offer an argument that's easier to keep tidy. Since $(2n)^2=4n^2,\,(2n+1)^2=4n^2+4n+1$, a square is a multiple of $4$ or $1$ more than one. So the difference of two squares can't be $2$ more than a multiple of $4$, but it can be $4n=(n+1)^2-(n-1)^2$.
Expected value of the maximum and minimum of $n$ independent random variables
Let $Z=\max(X_1,\dots,X_n)$ and $Y=\min(X_1,\dots,X_n)$, where the $X_i$ are independent uniform$(0,1)$ variables. Then $f_Z(x)=nx^{n-1}$, so $$EZ=\int_0^1 x\, nx^{n-1}\,dx=n\int_0^1x^n\,dx=n\left(\frac{x^{n+1}}{n+1}\Big|_0^1\right)=\frac{n}{n+1}.$$ Similarly $f_Y(x)=n(1-x)^{n-1}$, so $$EY=\int_0^1 x\, n(1-x)^{n-1}\,dx=n\int_0^1x(1-x)^{n-1}\,dx =-\int_0^1x\,d(1-x)^{n}=-x(1-x)^n\Big|_0^1+\int_0^1(1-x)^n\,dx$$ $$=-\int_0^1(1-x)^n\,d(1-x) =-\frac{(1-x)^{n+1}}{n+1}\Big|_0^1=\frac{1}{n+1}.$$
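A short Monte Carlo sketch in Python (assuming, as the densities above do, that the $X_i$ are independent uniform$(0,1)$; $n=5$ is an arbitrary choice) confirms both expectations:

```python
import random

n, trials = 5, 100_000
maxima = minima = 0.0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    maxima += max(sample)
    minima += min(sample)
print(maxima / trials, n / (n + 1))  # ~0.833 vs 0.8333...
print(minima / trials, 1 / (n + 1))  # ~0.167 vs 0.1666...
```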
How to calculate the integral of $x^x$ between $0$ and $1$ using series?
Just write $$x^x=e^{x\ln x}=\sum_{n=0}^{\infty}\frac{(x\ln x)^n}{n!}$$ and use that $$\int_0^1(x\ln x)^n dx=\frac{(-1)^n n!}{(n+1)^{n+1}}.$$ To show the last formula, make the change of variables $x=e^{-y}$ so that $$\int_0^1(x\ln x)^n dx=(-1)^{n}\int_0^{\infty}y^ne^{-(n+1)y}dy,$$ which is clearly expressible in terms of the Gamma function.
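A small Python sketch of the resulting series (the cutoff of 15 terms is an arbitrary choice) shows how quickly it converges to the numerical value of the integral:

```python
from fractions import Fraction

# Partial sums of sum_{n>=0} (-1)^n / (n+1)^(n+1), the "sophomore's dream".
s = sum(Fraction((-1) ** n, (n + 1) ** (n + 1)) for n in range(15))
print(float(s))  # ~0.7834305107..., the value of the integral of x^x on [0,1]
```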
Understanding the basis term
The set $B'$ spans a space strictly larger than the row space of your matrix. For example $(0,1,3) \in \text{span}\{(0,1,0),(0,0,1)\}$ but $(0,1,3) \notin \text{span}\{(0,1,2)\}$
Interesting Gradient problem? Don't know how to write it?
I found this to be an engaging problem since it admits a somewhat more geometrical approach in the event that $\nabla f$ is assumed continuous, though as Michael Hardy has shown, an analytic technique suffices even when such an assumption is not made. And in response to his closing question, I would like to point out that the chain rule indeed applies and shows that $df(p)/d\varepsilon \mid_{\varepsilon = 0} > 0$ (this is verified in my proof below); for more on the chain rule in the present context, see this wikipedia page. In the event that $\nabla f$ is continuous near $q$, we can argue as follows: Let $\gamma(s)$ be any differentiable path joining $q = \gamma(0)$ and $p = \gamma(\varepsilon)$; by the chain rule $\dfrac{df(\gamma(s))}{ds} = \nabla f(\gamma(s)) \cdot \gamma'(s), \tag{1}$ and then since $f(q) = 0$, $f(p) = f(p) - f(q) = \int_0^\varepsilon \dfrac{df(\gamma(s))}{ds} ds = \int_0^\varepsilon \nabla f(\gamma(s)) \cdot \gamma'(s) ds. \tag{2}$ We take the path $\gamma(s) = q + sv$ in (1); then $\gamma'(s) = v$ for all $s$ so that (2) becomes $f(p) = \int_0^\varepsilon \nabla f(q + sv) \cdot v \; ds. \tag{3}$ Since $v = \dfrac{\nabla f(q)}{\Vert \nabla f(q) \Vert}, \tag{4}$ the integrand in (3) becomes $\theta(s) = \dfrac{\nabla f(q + sv) \cdot \nabla f(q)}{\Vert \nabla f(q) \Vert}; \tag{5}$ furthermore we have $\theta(0) = \dfrac{\nabla f(q) \cdot \nabla f(q)}{\Vert \nabla f(q) \Vert} = \dfrac{\Vert \nabla f(q) \Vert^2}{\Vert \nabla f(q) \Vert} = \Vert \nabla f(q) \Vert > 0. \tag{6}$ $\theta(s)$ is a continuous function of $s$ since $\nabla f(q + sv)$ is continuous in $s$; it follows that for $m$ with $0 < m < \Vert \nabla f(q) \Vert$ there exists an $\varepsilon_1 > 0$ such that $\theta(s) > m$ if $s < \varepsilon_1$. Then if $0 < \varepsilon < \varepsilon_1$, $f(p) = \int_0^\varepsilon \theta(s) ds > \int _0^\varepsilon m ds = m \varepsilon > 0. \tag{7}$ This shows that $f(p) = f(q + \varepsilon v) > 0$ for $0 < \varepsilon < \varepsilon_1$, as per request. QED. Hope this helps. Cheers, and as ever, Fiat Lux!!!
If $B$ is an infinite set and $A\subset B$ is finite then $|B|=|B \setminus A|$
Every infinite set has an infinite countable subset (assuming AC). Since $A$ is finite and $B$ is infinite, $B\setminus A$ is infinite, so $B\setminus A=C\uplus D$ where $C$ is countable and infinite and the union is disjoint. Then $|B\setminus A|=|C|+|D|=\aleph_0+|D|$. Also $B=A\uplus C\uplus D$, so $|B|=|A|+\aleph_0+|D|$. As $A$ is finite, $|A|+\aleph_0=\aleph_0$, hence $|B|=\aleph_0+|D|=|B\setminus A|$.
If $2^x=9$ and $4^y=27$ then what will $\frac {x+2y}{2y-4x}$ be?
Since $$2^{x+2y}=2^x\cdot 4^y=9\cdot 27=3^5$$ and $$2^{2y-4x}=4^y/(2^x)^4=27/9^4=3^{-5},$$ we get $$2^{x+2y}\cdot 2^{2y-4x}=2^{(x+2y)+(2y-4x)}=3^5\cdot 3^{-5}=1=2^{0},$$ so $$x+2y+2y-4x=0 \Longrightarrow x+2y=-(2y-4x) \Longrightarrow \dfrac{x+2y}{2y-4x}=-1.$$
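A quick numerical cross-check in Python, using $x=\log_2 9$ and $y=\tfrac12\log_2 27$ (which is what the hypotheses give):

```python
from math import log2

x, y = log2(9), log2(27) / 2  # 2**x = 9 and 4**y = 27
print((x + 2 * y) / (2 * y - 4 * x))  # -1.0 (up to floating-point rounding)
```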
Mapping of equivalence classes of integers modulo $n$
You have $[n] = [0]$, but $n \not=0$. This is why it is not 1-1; it does not preserve distinctness. Remember your domain is the integers.
About the complement of a subobject in a topos
The object $M$ must be connected and the subobject $S \to X$ must be complemented. Then $X$ would be the coproduct of $S$ and $\neg S$, and the functor $\hom{(M, -)}$ would then preserve coproducts by the definition of connected object in an extensive category. It thus follows that every arrow from $M$ to $X$ must either factor through $S$ or through $\neg S$, but not both.
How to solve these stochastic integrals?
Besides applying the Itô formula, there is also the possibility to calculate a stochastic integral using approximation by step functions. It works fine for the integral $\int_0^T B_t \, dB_t$: Let $$f_n(t,\omega) := \sum_{j=1}^n 1_{[t_{j-1},t_{j})}(t) \cdot B_{t_{j-1}}(\omega)$$ where $\Pi_n$ is a partition of $[0,T]$ such that $$\max_{t_j \in \Pi_n} |t_j-t_{j+1}| \to 0 \qquad (n \to \infty)$$ Since the Brownian motion has continuous paths, it's not difficult to show that $(f_n)_{n \in \mathbb{N}}$ is an approximating sequence as required in the definition of the stochastic integral, therefore $$\int_0^T f_n(t) \, dB_t \stackrel{L^2}{\to} \int_0^T B_t \, dB_t \qquad (n \to \infty)$$ By definition, it's easy to calculate the stochastic integral of step functions: $$\int_0^T f_n(t) \, dB_t = \sum_{j=1}^n B_{t_{j-1}} \cdot (B_{t_{j}}-B_{t_{j-1}}) \tag{1} $$ On the other hand, we have $$\begin{align} B_T^2 &= \left( \sum_{j=1}^n B_{t_j}-B_{t_{j-1}} \right) \cdot \left( \sum_{k=1}^n B_{t_k}-B_{t_{k-1}} \right) =\underbrace{\sum_{j=1}^n (B_{t_j}-B_{t_{j-1}})^2}_{\stackrel{L^2}{\to} T} + 2 \underbrace{\sum_{j=1}^n B_{t_{j-1}} \cdot (B_{t_j}-B_{t_{j-1}})}_{\stackrel{(1)}{\to} \int_0^T B_t \, dB_t}. \end{align}$$ Thus, $$B_T^2 = T + 2 \int_0^T B_t \, dB_t.$$
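Here is a rough Monte Carlo sketch in Python of that identity; the path discretization and the number of paths are arbitrary choices:

```python
import random

# The left-endpoint Riemann-Ito sums in (1) approximate int_0^T B dB,
# so B_T^2 - (T + 2 * integral) should be small for each simulated path.
random.seed(1)
T, steps, paths = 1.0, 1_000, 1_000
sigma = (T / steps) ** 0.5  # std. dev. of each Brownian increment
avg_error = 0.0
for _ in range(paths):
    B, integral = 0.0, 0.0
    for _ in range(steps):
        dB = random.gauss(0.0, sigma)
        integral += B * dB  # B_{t_{j-1}} * (B_{t_j} - B_{t_{j-1}}), as in (1)
        B += dB
    avg_error += abs(B * B - (T + 2 * integral))
print(avg_error / paths)  # small; shrinks as steps grows
```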
Example of a group in which the equation $x^2=e$ has more than two solutions
The permutation group $S_n$, for $n\ge 3$, has a lot of solutions of this equation. For example, the identity and any transposition will work, and there are $\binom{n}{2}\ge 3$ transpositions.
Decomposing an upper triangular Toeplitz matrix into a product of matrices
It's an upper triangular Toeplitz matrix. Probably "Toeplitz" is the first word to use in your search for further information. Here, for instance, is a reference about inverting such matrices, and here is an article about factorizing them.
Big List of Erdős' elementary proofs
I think this is worth posting here, mostly because I really enjoy the simplicity of this proof but also because I have no idea how well it is known. The result is not deep or important, so the main interest is in the simplicity of the argument. Erdős proved a lower bound on the number of primes before an integer $n$. Wacław Sierpiński, in his Elementary Theory of Numbers, attributes to Erdős the following elementary proof of the inequality $$\pi(n)\geq\frac{\log{n}}{2\log{2}}\quad\text{for }n=1,2,\ldots.$$ Please note that I have adapted the argument from the text of the book to make things, in my opinion, a bit clearer. Note also that the only tools used in the below proof are some basic combinatorial facts and some results about square-free numbers, which can, for example, be proved with the Fundamental Theorem of Arithmetic. Let $n\in\mathbb{N}=\{1,2,3,\ldots\}.$ Consider the set $$S(n) = \{(k,l)\in\mathbb{N}^{2}:l\text{ is square-free and }k^{2}l\leq n\}.$$ It is a standard fact that every natural number has a unique representation in the form $k^{2}l,$ where $k$ and $l$ are natural numbers and $l$ is square-free. This gives $\lvert S(n)\rvert = n.$ Now if we have a pair $(k,l)$ with $k^{2}l\leq n,$ then we must have $k^{2}\leq n$ and $l\leq n$, since $k$ and $l$ are positive. Note that this gives $k\leq\sqrt{n}.$ Since $l$ is square-free, $l$ can be expressed as a product of distinct primes, each of which must be not-greater-than $n$ since $l\leq n$. That is, $l$ can be expressed as a product of the primes $p_{1},p_{2},\ldots,p_{\pi(n)}.$ There are $2^{\pi(n)}$ such products. Therefore, if we know $(k,l)\in S(n)$ then there are at most $\sqrt{n}$ possibilities for what $k$ might be and at most $2^{\pi(n)}$ possibilities for what $l$ might be (independent of $k$, of course). It follows that $\lvert S(n)\rvert \leq 2^{\pi(n)}\sqrt{n},$ so $n\leq2^{\pi(n)}\sqrt{n}.$ Taking $\log$s and rearranging gives the result.
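A small Python sketch (with a basic sieve; the cutoff $10^4$ is an arbitrary choice) checking the inequality numerically:

```python
from math import log

N = 10_000
is_prime = [True] * (N + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, int(N ** 0.5) + 1):       # sieve of Eratosthenes
    if is_prime[p]:
        for q in range(p * p, N + 1, p):
            is_prime[q] = False
pi = 0
for n in range(1, N + 1):
    pi += is_prime[n]                        # pi now equals pi(n)
    assert pi >= log(n) / (2 * log(2))
print("inequality verified for n <=", N)
```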
How to solve this problem? Need some hints: $ \lim\limits_{x\to 0}\dfrac{\tan\cos\frac{1}{x}+\lg(2+x)}{\lg(4+x)} $
Well, the limit of a quotient is the quotient of the limits (provided both exist and the denominator's limit is nonzero), so the bottom is easy, and the only problem on top is the limit $$ \lim_{x \to 0} \tan \cos \frac{1}{x} $$ and this limit does not exist: substituting $u = \frac{1}{x}$ turns it into $$ \lim_{u \to \infty} \tan (\cos u) $$ and clearly $$\lim_{u \to \infty} \cos u $$ isn't well defined, as mentioned in the comment above. To see this we could look at some monotonically increasing and unbounded sequences $\{u_n\}$; if we find one along which the function doesn't converge, then the function clearly can't converge. One such sequence would be $u_n = n \pi$, which makes $\cos u_n$ oscillate between $1$ and $-1$.
Show $\frac{\pi}{\alpha \sinh (\pi \alpha)} = \sum_{n= -\infty}^\infty \frac{(-1)^n}{\alpha^2 + n^2} $
You're almost there. Note that from odd symmetry that $$\color{blue}{\sum_{n=-\infty}^\infty \frac{(-1)^n\,n}{n^2+\alpha^2}=\sum_{n=-\infty}^{-1} \frac{(-1)^n\,n}{n^2+\alpha^2}+\sum_{n=1}^\infty \frac{(-1)^n\,n}{n^2+\alpha^2}=0}$$ Now, setting $t=0$ in the equation $$e^{\alpha t} = \frac{\sinh (\pi \alpha)}{\pi} \sum_{n= -\infty}^\infty \frac{(-1)^{n}}{\alpha^2 + n^2}(\alpha + i n) e^{i n t}$$ reveals $$\begin{align}1 &= \frac{\sinh (\pi \alpha)}{\pi} \sum_{n= -\infty}^\infty \frac{(-1)^{n}}{\alpha^2 + n^2}(\alpha + i n) \\\\ &=\frac{\sinh (\pi \alpha)}{\pi} \sum_{n= -\infty}^\infty \frac{(-1)^{n}\,\alpha}{\alpha^2 + n^2}+i\frac{\sinh (\pi \alpha)}{\pi}\color{blue}{\overbrace{ \sum_{n= -\infty}^\infty \frac{(-1)^{n}\,n}{\alpha^2 + n^2}}^{=0}} \\\\ &=\frac{\sinh (\pi \alpha)}{\pi} \sum_{n= -\infty}^\infty \frac{(-1)^{n}\,\alpha}{\alpha^2 + n^2} \end{align}$$ whereupon solving for the series yields $$\sum_{n= -\infty}^\infty \frac{(-1)^{n}}{\alpha^2 + n^2}=\frac{\pi}{\alpha \sinh(\pi \alpha)}$$ as was to be shown!
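A quick numerical check in Python for one arbitrary value of $\alpha$:

```python
from math import pi, sinh

alpha = 1.3  # arbitrary test value
s = sum((-1) ** n / (alpha ** 2 + n ** 2) for n in range(-5000, 5001))
print(s)                                 # truncated series
print(pi / (alpha * sinh(pi * alpha)))   # closed form, ~0.0814; they agree
```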
Will this series converge? If so, what is its limit?
We have $$2a_n-a_{n-1}-a_{n-2}=0.$$ Solving the characteristic equation of this linear recurrence gives $$a_n=A+B\left(-\frac12\right)^n.$$ Since $\left(-\frac12\right)^n\to0$, the sequence converges and its limit is $A$, which is determined by the initial terms.
Show that there is $A \subseteq \mathbb{R} $ for which $A' \subset (A')'$, in the topology of left rays on $\mathbb{R}$.
I use the following definition for $A'$: We say that $x\in A'$ if $x\in\mathbb{R}$ and for any open neighborhood $U$ such that $x\in U$ there exists $a\in A$ such that $a\not = x$ and $a\in U$ (The assumption that $a\not = x$ is going to be extremely important). With respect to this definition we can take: $A=\{0\}$. Then $A'=(0,\infty)$ as every neighborhood of $x>0$ will contain $0$. However $A'' = [0,\infty)$. Without the assumption that $a\not = x$ I believe you can prove that $A''\subseteq A'$.
Connection between focal points and singularities of the normal exponential map
The claim is false, but for a sort of stupid reason. Let's take $M$ to be the plane, and $S$ to consist of two circles: the unit circle $C$ at the origin, and the circle $Q$ of radius $1/4$ centered at $(-1/4, 0)$; note that $Q$ contains the origin. Now at the point $((0,0), 0)$ of $NS$, the exponential map is just lovely. On the other hand, the point $(0,0) = \exp((0,0), 0)$ is a focal point of the circle $C$, and hence of $S$. You might argue that my example isn't connected, but you can join the two circles by a pair of parallel vertical lines from the top of $Q$ to a point in the NW side of $C$, smoothed out at the joints, so the disconnectedness is not essential to this construction. By the way, I expect a local version is true, something like "if $X$ is a focal point of $S$ at $(p, v)$, then $\exp$ is singular at $(p, v)$." But that version may be completely trivial -- a simple contortion of the definitions. By "a focal point of $S$ at $(p, v)$," I mean something like that it's a focal point of any submanifold consisting of any open subset $U$ of $S$ that contains $p$.
Formula obtained by using trigonometric approximation for a triangle with a very small side
EDIT: Hmm, the question changed while I was answering; see the 2nd revision. When $A$ is close to $B$, we get a small $\theta_2$. So a truncated Taylor series would look like $$\cos{\theta_2}-1=-\frac{1}{2}\theta_2^2\; .$$ Now $\theta_2\approx \tan \theta_2=\frac{r_1\cos \theta_1}{s}$, where $r_1\cos \theta_1$ is the projection of $r_1$ on the opposite side in a right triangle $AB'C$, with $B'$ lying on $\overline{BC}$ such that $AB' \perp BC$.
Proving a statement with inequality and min operator
First note that $m \geq \min(m,p)$. Now suppose, $m_1+m_2 < p_1+p_2$, then $\min(m_1+m_2,p_1+p_2) =m_1+m_2 \geq \min(m_1,p_1) +\min(m_2,p_2)$. On the other hand, if $m_1+m_2 \geq p_1+p_2$, $\min(m_1+m_2,p_1+p_2) =p_1+p_2 \geq \min(m_1,p_1) +\min(m_2,p_2)$.
game theory question
Plot the payoff profiles (2,1), (0,0) and (1,2) in two-dimensional payoff space. The convex hull of these points is the set of feasible payoffs under (correlated) mixed strategies.
Example of semiprime ring
The example you gave is indeed semiprime, but it is a complicated way to look at the ring $\mathbb{F}_2^2$. Given $(a,b)$ nonzero, one of $(a,b)(1,0)(a,b)$ or $(a,b)(0,1)(a,b)$ is nonzero. Any semisimple, von Neumann regular, or prime ring is going to be semiprime. I recommend that you try to show that the ring of linear transformations of any vector space is semiprime. Rings that don't have nilpotent elements are also semiprime. Another easy source is to take any semiprime ideal $J$ in a ring $R$ and use $R/J$. Semiprime ideals are easy to find: they're just the intersections of prime ideals.
What is the Turing degree of truth in the second-order theory of real numbers?
Since $\mathbb{N}$ is second-order-definable in $\mathbb{R}$, the true second-order theory of $\mathbb{R}$ is basically the same as (= computably isomorphic to) true third-order arithmetic. And the gap between true third-order arithmetic and true second-order arithmetic is gigantic. For example, true second-order arithmetic (construed as a set of natural numbers via Godelization) is definable over $\mathbb{N}$ by a single third-order formula. Incidentally, there's a methodological point here: what would a "satisfying description" of the relevant Turing degree consist of? For sets of this complexity we're way beyond the point of well-behaved degree constructions, and I think the best we can hope for is descriptions of the form "The Turing degree of the $\mathfrak{L}$-theory of $\mathcal{M}$" for some natural logic $\mathfrak{L}$ and natural structure $\mathcal{M}$. It's often convenient to look for descriptions of this form with $\mathcal{M}$ a nice transitive set, so in this case "The third-order theory of $L_\omega$" would be my suggestion of an answer. (This point about "naming" Turing degrees is similar to the point about "naming" large ordinals I made here, and I believe in a couple other related answers as well. In each case, unless one has a specific form of description in mind, I think the initial description is actually optimal-ish.)
$3D$ rotation matrix uniqueness
Let's assume the matrix $R$ for a given rotation is not unique and there exists another matrix $Q\neq R$ which performs the exact same rotation. This means $\forall \mathbf{v}\in\mathbb{R}^n:\quad Q\mathbf{v}=R\mathbf{v}$ and thus $\forall \mathbf{v}\in\mathbb{R}^n:\quad (Q-R)\mathbf{v}=\mathbf{0}$ This means that the null-space of $M=(Q-R)$ (the sub-space of all vectors whose image is the zero-vector) has to consist of the whole $\mathbb{R}^n$ (and therefore has a dimension of $n$, of course). The rank-nullity theorem now states that the rank of $M$ is the difference of its dimension and the dimension of its null-space, in this case $0$. But then again, only the zero matrix has a rank of $0$. This means the above equation only holds for $(Q-R)=O$ (with $O$ being the $n\times n$ zero matrix), which in turn implies $Q=R$. This contradicts the assumption. So each rotation (in fact any linear transformation) in $\mathbb{R}^n$ corresponds to a unique $n\times n$ matrix (for a given basis $B$, of course). Moreover each orthogonal matrix $R\in\mathbb{R}^{n\times n}$ with $\det R=1$ represents a unique rotation in $\mathbb{R}^n$ (again for a given basis $B$ of $\mathbb{R}^n$). In fact the matrix representation is even more unique than the axis-angle or quaternion representation (not to speak of the ambiguities of Euler angles), since because of the periodicity of the $\sin$ and $\cos$ used in the matrix representation you don't have the angle ambiguity, because $R(\theta)=R(\theta+2\pi k)$. And you don't even have the ambiguity of negated axis and angle (or negated quaternion) being the same rotation (which you especially ruled out in your question), because their matrix representations are in fact the same.
What does this syntax mean: "$f^{-1} : N_{10} \Rightarrow N_b $ is the inverse of $f: N N_{b} \Rightarrow N_{10}$?"
When you define a function $f$ in higher mathematics, you first write $$f:A\rightarrow B$$ Here $A$ will denote the set the function is from (the domain) and $B$ will denote the set the function is going to (the codomain). After this, you (generally) give a formula expressing how the function is defined. For example we could write "$f:\mathbb{N}\rightarrow\mathbb{N}$ defined by $f(n)=n+1$." In the image you've attached, you've got a function called $f^{-1}$ which goes from whatever $N_b$ is (probably the natural numbers in base $b$) to $N_{10}$ (probably the natural numbers in base $10$). I would imagine that they have given you a definition for $f$ in a previous problem, or in the preceding chapter, or something. It seems like there is a mistake in the problem - an extra $N$ before $N_b$ in the definition of $f$. The problem should be very easy. If $f$ is one-to-one, then there is a unique element of $N_{10}$ associated with each element of $N_{b}$ in the image of $f$. (Which is probably all of $N_b$; if not, $f^{-1}$ is not well defined.)
Show that $\operatorname{Var}S^2=\frac{1}{n}\left(E(X_i-EX_i)^4 - \frac{n-3}{n-1}\sigma^4\right)$
Here is an approach. As you remarked $$ \text{Var}(S^2)=\frac{2\sigma^4}{n-1}. $$ Now note that $$ \frac{1}{n}\left(E(X_i-EX_i)^4 - \frac{n-3}{n-1}\sigma^4\right)=\frac{\sigma^4}{n} \left(E\left(\frac{X_i-EX_i}{\sigma}\right)^4 - \frac{n-3}{n-1}\right).\tag{1} $$ But we know that $(X_i-EX_i)/\sigma=Z\sim N(0,1)$ and it is well-known that $EZ^4=3$ (for example by using moment generating/characteristic functions). In any case from (1) we get that $$ \frac{1}{n}\left(E(X_i-EX_i)^4 - \frac{n-3}{n-1}\sigma^4\right)=\frac{\sigma^4}{n}\left(3-\frac{n-3}{n-1}\right)=\frac{2\sigma^4}{n-1}=\text{Var}(S^2), $$ as desired.
Plane parallel to two lines that goes through a point?
Yes, your arithmetic is correct. As you discovered, the lines have direction vectors $(2,1,1)$ and $(1,3,4)$. The plane normal must be perpendicular to both of these, therefore parallel to their cross product, which is $(1,-7,5)$. The plane through the origin with normal vector $(1,-7,5)$ can be described by the equation $x-7y+5z=0$. You imply that this felt wrong because it's very similar to the computations you did to find a plane that's perpendicular to two given ones. But it's ok. The calculations should be similar: if a plane is perpendicular to two given planes, then it's parallel to both of their normal vectors. So, in both problems, the basic calculation is to find a plane that's parallel to two vectors. In the first problem, the two vectors are the direction vectors of the two lines; in the second problem, the two vectors are the normals of the two given planes. But, once you have the two vectors, the rest of the calculations are the same.
Hard problem: Relationship between CDH and Discrete Log
A paper (note: SpringerLink, may require subscription) called "Separating Decision Diffie–Hellman from Computational Diffie–Hellman in Cryptographic Groups", published in '03 in the Journal of Cryptology, has the following quote: "In 1994 Maurer used a variation of the elliptic curve factoring method to give strong evidence that CDH and DL are probably equivalent (see [9]). This approach was formalized by Maurer and Wolf in [10] and finally appeared as a journal version in [11]." (where CDH = "Computational Diffie-Hellman" and DL = "Discrete Log") The relevant citations are included below. The paper goes on to say "Our goal in this paper is to merge the two approaches and to construct a family of groups where CDH and DL are equivalent and presumably hard". Assuming "equivalent" here means CDH reduces to DL and DL reduces to CDH, this seems relevant. So while this isn't a complete answer, the answer might be that at least occasionally we can use a CDH solution to solve a DLog. (Note: I haven't read this paper (yet) so I don't know that they actually met their goal, so this too might be wrong! If someone familiar can confirm whether or not they did, that would be great--I don't have the time or battery life right now to do so.) I suspect [11] might shed more light on the issue, if you can find it (I haven't looked), and there may be more recent results. "[9] U. Maurer. Towards the equivalence of breaking the Diffie–Hellman protocol and computing discrete logarithms. In Advances in Cryptology - CRYPTO '94, volume 839 of Lecture Notes in Computer Science, pages 271–281. Springer-Verlag, Berlin, 1994. [10] U. Maurer and S. Wolf. Diffie–Hellman oracles. In N. Koblitz, editor, Advances in Cryptology - Crypto '96, Volume 1109 of Lecture Notes in Computer Science, pages 268–282. Springer-Verlag, Berlin, 1996. [11] U. Maurer and S. Wolf. The relationship between breaking the Diffie–Hellman protocol and computing discrete logarithms. SIAM Journal on Computing, 28(5):1689–1721, 1999."
Gelfand-Naimark for $C^*$-categories
This is Theorem 6.12 of Paul Mitchener's article $C^\ast$-categories.
Why is $\int_0^{2\pi} e^{i\,k\rho[\sin\alpha\cos\beta-\sin\theta\cos(\phi-\beta)]}\mathrm{d}\beta = 2\pi J_0(k\rho\xi)$?
After a good night's sleep and with the help of Leucippus' comment, I was able to figure it out: The integration is over a disk in the $(x,y)$-plane with the area element $\rho\,\mathrm{d}\rho\,\mathrm{d}\beta$. The integral in question is the angular integration. The phase is the scalar product $\vec x\cdot\vec k_0-\vec x\cdot\vec k=(\vec k_0-\vec k)\cdot\vec x$ with the vectors $$ \vec x = \rho\left( \begin{array}{c} \cos\beta\\ \sin\beta\\ 0 \end{array}\right),\, \vec k_0 = k\left( \begin{array}{c} \sin\alpha\\ 0\\ \cos\alpha \end{array}\right),\, \vec k = k\left( \begin{array}{c} \sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta \end{array}\right). $$ For the purpose here, the scalar product is most conveniently evaluated as $$ (\vec k_0-\vec k)\cdot\vec x = |\vec k_0-\vec k|\;|\vec x|\,\cos\measuredangle(\vec k_0-\vec k,\vec x)\,. $$ Since integration is only in the $(x,y)$-plane, only those two components of $\vec k$ and $\vec k_0$ contribute. The length of the (projected) $\vec k_0-\vec k$ is then given by $k\,\xi$. This can be seen by either calculating the norm or from the figure and the law of cosines. Thus, the phase reads indeed $$ (\vec k_0-\vec k)\cdot\vec x = k\,\rho\,\xi \cos\measuredangle(\vec k_0-\vec k,\vec x)\,. $$ Next, notice that we integrate over the whole azimuthal angle $\beta=\measuredangle((1,0)^T,\vec x)$. It is, therefore, irrelevant whether we start at 0 angle with the $x$-axis and end at $2\pi$ after one revolution, or we start at 0 angle with the axis defined by the constant vector $\vec k-\vec k_0$ (dashed arrow in the figure) and end at $2\pi$ after one revolution. The new integration angle is thus defined by $\beta'\equiv\measuredangle(\vec k_0-\vec k,\vec x)$ and runs from 0 to $2\pi$. Putting all together gives indeed $$ \int_0^{2\pi} e^{i\,k\rho[\sin\alpha\cos\beta-\sin\theta\cos(\phi-\beta)]}\mathrm{d}\beta = \int_0^{2\pi}e^{i\,k\rho \xi \cos\beta'}\mathrm{d}\beta'\,, $$ which then evaluates to the Bessel function.
Linear transformation with an integral.
Here's a sketch of the solution. I'll leave some details for you to verify. From what was discussed in the comments, it should be clear that $T$ is a linear map. Any element in $\Bbb{R}_2[x]$ looks like $p(x):=ax^2+bx+c$ for some $a,b,c \in \Bbb{R}$. Thus as a vector space $\Bbb{R}_2[x]$ is $\Bbb{R}^3$ in disguise. Using that $\alpha=1$ and $\beta=0$, you should quickly get that $p(1)=a+b+c$, $p(0)=c$, $p'(1)=2a+b$ and $$ \int_{-1}^1p(x)\ dx=\frac{2}{3}a+2c. $$ Therefore, the map $T$ (regarded as a map from $\Bbb{R}^3$ to $M_2(\Bbb{R})$) looks like $$ T(a,b,c) := \begin{pmatrix} a+b+c & 2a+b\\ \frac{2}{3}a+2c & c \end{pmatrix} $$ From here it is easy to see that $\mathrm{ker}(T)=\{(0,0,0)\}$, which in turn means that the only element in $\ker(T)$ is the $0$ polynomial (whence $T$ is injective). To find $\mathrm{im}(T)$ observe that $$ \begin{pmatrix} a+b+c & 2a+b\\ \frac{2}{3}a+2c & c \end{pmatrix} = a\begin{pmatrix} 1 & 2\\ \frac{2}{3} & 0 \end{pmatrix} + b \begin{pmatrix} 1 & 1\\ 0 & 0 \end{pmatrix}+ c \begin{pmatrix} 1 & 0\\ 2 & 1 \end{pmatrix}. $$ From here, you should be able to deduce that $\mathrm{im}(T)$ is the span of the three (linearly independent) matrices shown on the right hand side of the last equation above.
A topological function with only removable discontinuities
Note that property $(1)$ is just equivalent to regularity, so we already know many examples of spaces for which the proposition holds.
About independent random variables
If you are not familiar with conditional expectation we can proceed as follows. First we find the cdf of $X_1$. For $x> 1$ $$ P(X_1\leq x)=\int_1^x4t^{-5}\, dt=1-\frac{1}{x^4}. $$ Let $Y=\max(X_1, X_2)$. Then $Y$ has cdf given by $$ \begin{align} P(Y\leq y)&=P(\max(X_1, X_2)\leq y)\\ &=P(X_1\leq y, X_2\leq y)\\ &=P(X_1\leq y)^2\tag{0}\\ &=\left(1-y^{-4}\right)^2\tag{1} \end{align} $$ for $y>1$ where we used the independent and identically distributed assumption in (0). At this point you can find the density of $Y$, say $f$, by differentiating $(1)$ and compute $$ EY=\int_1^\infty yf(y)\, dy $$ which I leave to you.
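A short Monte Carlo sketch in Python of the cdf in $(1)$, using inverse-cdf sampling for the density $4x^{-5}$ on $(1,\infty)$ given above (the test point $y=1.5$ is an arbitrary choice):

```python
import random

def draw():
    # inverse-cdf sampling: solve 1 - x**(-4) = u for x
    u = random.random()
    return (1.0 - u) ** -0.25

y, trials = 1.5, 200_000
hits = sum(max(draw(), draw()) <= y for _ in range(trials))
print(hits / trials)        # empirical P(Y <= y), ~0.644
print((1 - y ** -4) ** 2)   # formula (1): 0.6439...
```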
How to find $y(x,z)$ from the given set of data?
HINT. Formally, the model described is $$z(x,y) = e^{a+bx+cy},$$ or $$\ln z = a+bx+cy,\tag1$$ where $b<0, c>0,$ $$x_i=12+2.25i,\quad y_j= 100 + 24j.$$ The table data model is $$z_{i,j} = e^{\large \frac xy+1}\tag2$$ Writing the discrepancy in the form $$d(a,b,c) = \sum w_{i,j}(\ln z_{i,j} - a - bx_i - cy_j)^2,\tag3$$ where $w_{ij}$ is an arbitrary matrix of weights, one can get the point $(A,B,C)$ which provides $\min d(a,b,c)$ in accordance with the described model. This point is a stationary point of $d(a,b,c),$ so $\operatorname{grad} d(A,B,C) = 0,$ or \begin{cases} \sum w_{i,j}\ (\ln z_{i,j} - A - Bx_i - Cy_j)=0\\ \sum w_{i,j}\ x_i\ (\ln z_{i,j} - A - Bx_i - Cy_j)=0\\ \sum w_{i,j}\ y_j\ (\ln z_{i,j} - A - Bx_i - Cy_j) = 0.\tag4 \end{cases} This leads to the linear system \begin{cases} S_{00}A + S_{10} B + S_{01}C = R_{00}\\ S_{10}A + S_{20} B + S_{11}C = R_{10}\\ S_{01}A + S_{11} B + S_{02}C = R_{01},\tag5 \end{cases} where $$S_{kl} = \sum w_{ij}x_i^k y_j^l,\quad R_{kl} = \sum w_{ij} x_i^k y_j^l \ln z_{ij}.\tag6$$ Using the weight array $w=1$ gives a fit that looks unusable. This situation happens because the data table does not correspond to the model. However, applying the weight array in the form $$w_{ij}=e^{-\left(\Large\frac{5(x_i-150)}{24}\right)^2-\left(\Large\frac{2(y_j-23.25)}{2.25}\right)^2}$$ localizes the model and gives a usable fit. So the estimation is $$Y = \dfrac{\ln z -A-Bx}C \approx \dfrac{\ln z - 1.13517\ 52307 + 0.00091\ 33448x}{0.00675\ 67568}, \tag7$$ and the result looks suitable. Note that the constants "2" and "5" in the $w$ formula were obtained empirically, with the goal of a good approximation of the table data near the expected point. If the table data correspond to the given model better, then these constants can be decreased, or $w=1$ can be used.
Estimate of lower bound of $|s(s+1)|$ in contour integration
$$\inf_{\Re(s) \le 0,\,|s|=T}|s(s+1)| = T\inf_{\Re(s) \le 0,\,|s|=T} |s+1|= T|T-1|,$$ since $|s+1|\ge ||s|-1|=|T-1|$ with equality at $s=-T$. For $T$ very small and $T$ very large, $T|T-1| \ge T^2/2$. $T^2 - T=T^2/2$ gives $T = 0$ or $T = 2$, thus $T|T-1| \ge T^2/2$ fails for $T \in [1,2)$ and succeeds for $T \ge 2$. And $T - T^2 = T^2/2$ gives $T =0$ or $T = 2/3$, thus $T|T-1| \ge T^2/2$ fails for $T \in (2/3,1]$ and succeeds for $T \in [0,2/3]$.
Find if sphere is inside parallelepiped
The question is actually asking for the distance of a point (the center) to several (6) planes, which is intuitively defined by the length of the line segment from the point to the plane that is perpendicular to the plane. After rephrasing, the question becomes very easy if you know vectors. Since you know the coordinates of the vertices, to find the distance of the center of the sphere to the faces of the parallelepiped, pick vertices that lie in the same face and form 2 vectors on this face (which will be edges or diagonals of the face). Compute their cross product and you get a normal vector to this face. Find the line through the center of the sphere parallel to the normal and find the point where this line intersects the face. Find the distance between this intersection and the center of the sphere and compare it to the radius. Check all 6 faces and you are done.
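Here is a minimal Python sketch of this test for a single face, with hypothetical vertex coordinates; instead of constructing the intersection point explicitly, it uses the equivalent shortcut of projecting onto the unit normal:

```python
import numpy as np

def distance_center_to_face(center, p0, p1, p2):
    normal = np.cross(p1 - p0, p2 - p0)    # normal via the cross product
    normal = normal / np.linalg.norm(normal)
    # length of the perpendicular segment = |(center - p0) . unit normal|
    return abs(np.dot(center - p0, normal))

# hypothetical data: one face of a unit cube and a sphere of radius 0.4
center = np.array([0.5, 0.5, 0.5])
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
r = 0.4
d = distance_center_to_face(center, p0, p1, p2)
print(d, d >= r)  # 0.5 True -> sphere does not cross this face
```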
Proving that all sets of size n have $2^n$ subsets
Let $h_n$ be an $n$-bit string, i.e. a string consisting of 0's and 1's. Now let $S$ be your ground set with $n$ elements and $H \subseteq S$. If you label the elements of $S$ from 1 to $n$, given a fixed $H$, you can construct an $n$-bit string, say $h_n$, as follows: if $H$ has the 1st element, it has 1 in the first position and 0 otherwise. If $H$ has the second element, it has 1 in the 2nd position and 0 otherwise, and so on... For example if $S$ = {1,2,3,4,5,6} and $H$ = {2,4,6}, then $h_n$ is '010101'. It is clear that given any subset, you can construct such an $n$-bit string. Conversely, given an $n$-bit string, you can construct a corresponding subset. In other words, there is a bijection from the set of $n$-bit strings to the set of all subsets of $S$. So the number of subsets of $S$ is equal to the number of $n$-bit strings, which is $2^n$. Alternate proof by induction: Let the statement hold for all $n$ element sets, i.e. if $\vert S \vert = n$ then $S$ has $2^n$ subsets. Now suppose you add one more element $x_{n+1}$ to $S$ and call that set $S'$. Then all the subsets of $S'$ will be subsets of $S$ either with or without the new element $x_{n+1}$. In other words, for every $H \subseteq S$, there will be two subsets $H'$ and $H''$ of $S'$ given by $H' = H$ and $H'' = H \cup \{x_{n+1}\}$. So if $S$ has $2^n$ subsets, $S'$ will have $2 \times 2^n = 2^{n+1}$ subsets.
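The bijection can be made concrete with a few lines of Python, using the answer's own example:

```python
S = [1, 2, 3, 4, 5, 6]
H = {2, 4, 6}
bits = ''.join('1' if x in H else '0' for x in S)
print(bits)  # 010101
recovered = {x for x, b in zip(S, bits) if b == '1'}
print(recovered == H)  # True: the map is invertible
print(2 ** len(S))     # 64 subsets of a 6-element set
```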
Inspecting Direction field
Looking at your curves one is tempted to say the following: If the value $a:=y(0)>0$ then the solution $x\mapsto y(x)$ is increasing for all $x>0$. If $a<0$ then the solution is first decreasing, then reaches a minimum at a certain point $x_a$, and for $x>x_a$ increases to infinity. But this is not the whole truth. In order to get a full view one has to determine the general solution of the given ODE. Using standard methods one obtains $$y(x)=C e^{x/2}-3 e^{x/3}\ ,\qquad C\ \ {\rm arbitrary}\ ,$$ and introducing the initial condition gives $$y(x)=e^{x/2}\bigl(a+3-3e^{-x/6}\bigr)\ .$$ Now you can see that the really crucial value is $a_0=-3$. I leave the details of the further discussion to you.
Convergence of $\sum_{n=1}^{\infty} \frac{x^n}{n+1}$
$$\sum_{n=1}^{\infty} \frac{x^n}{n+1} = \frac{1}{x}\sum_{n=2}^{\infty} \frac{x^n}{n} = \frac{1}{x}\left(\sum_{n=1}^{\infty} \frac{x^n}{n} - x\right) = -1 - \frac{\log(1-x)}{x}$$ for $0<|x| < 1$ (the limit as $x\to 0$ exists, and equals $0$)
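A quick numerical check in Python at the arbitrary point $x = 0.5$:

```python
from math import log

x = 0.5
s = sum(x ** n / (n + 1) for n in range(1, 200))
print(s, -1 - log(1 - x) / x)  # both ~0.3863
```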
Contour integration of $\frac{\log( x)}{x^2+a^2}$
A standard approach to evaluating the integral $\int_0^\infty \frac{\log(x)}{x^2+a^2}\,dx$ using contour integration is to analyze the integral $\oint_C \frac{\log^2(z)}{z^2+a^2}\,dz$, where $C$ is the classical "keyhole" contour with the keyhole taken along the positive real axis. To that end, we choose to cut the complex plane along the positive real axis such that $$\log(z)=\log(|z|)+i\arg (z)$$ where $0\le \arg(z)<2\pi$. With this choice of branch cut, $\log(z)=\log(x)$ as we approach the real axis from the first quadrant and $\log(z)=\log(x)+i2\pi$ as we approach from the fourth quadrant. From the residue theorem we have $$\begin{align} \oint_C \frac{\log^2(z)}{z^2+a^2}\,dz&=2\pi i \text{Res}\left( \frac{\log^2(z)}{z^2+a^2}, z=\pm ia\right)\\\\ &=2\pi i \left(\frac{\log^2(|a|e^{i\pi/2})}{i2|a|}+\frac{\log^2(|a|e^{i3\pi/2})}{-i2|a|} \right)\\\\ &=\frac{\pi}{|a|}\left(2\pi^2 -i2\pi \log(a)\right)\tag 1 \end{align}$$ Integration over $C$ can be decomposed as $$\begin{align} \oint_C \frac{\log^2(z)}{z^2+a^2}\,dz&=\int_\epsilon^R \frac{\log^2(x)}{x^2+a^2}\,dx-\int_\epsilon^R \frac{(\log(x)+i2\pi)^2}{x^2+a^2}\,dx\\\\ &+\int_0^{2\pi}\frac{\log^2(Re^{i\phi})}{(Re^{i\phi})^2+a^2}\,iRe^{i\phi}\,d\phi-\int_0^{2\pi}\frac{\log^2(\epsilon e^{i\phi})}{(\epsilon e^{i\phi})^2+a^2}\,i\epsilon e^{i\phi}\,d\phi\\\\ &=-i4\pi \int_\epsilon^R \frac{\log(x)}{x^2+a^2}\,dx+4\pi^2 \int_\epsilon^R \frac{1}{x^2+a^2}\,dx\\\\ &+\underbrace{\int_0^{2\pi}\frac{\log^2(Re^{i\phi})}{(Re^{i\phi})^2+a^2}\,iRe^{i\phi}\,d\phi}_{\to 0\,\text{as}\,R\to \infty}-\underbrace{\int_0^{2\pi}\frac{\log^2(\epsilon e^{i\phi})}{(\epsilon e^{i\phi})^2+a^2}\,i\epsilon e^{i\phi}\,d\phi}_{\to 0\,\text{as}\,\epsilon \to 0}\tag2 \end{align}$$ Letting $R\to \infty$ and $\epsilon \to 0$ in $(2)$ and equating the real and imaginary parts of the result to $(1)$ yields $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{\log(x)}{x^2+a^2}\,dx=\frac{\pi\log(|a|)}{2|a|}}$$ and as a bonus $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{1}{x^2+a^2}\,dx=\frac{\pi}{2|a|}}$$
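A numerical cross-check of both boxed results in Python (using scipy quadrature; $a=2$ is an arbitrary choice):

```python
from math import log, pi
from scipy.integrate import quad

a = 2.0
# quad handles the integrable log singularity at 0
val1, _ = quad(lambda x: log(x) / (x * x + a * a), 0, float('inf'))
print(val1, pi * log(a) / (2 * a))  # both ~0.5444
val2, _ = quad(lambda x: 1 / (x * x + a * a), 0, float('inf'))
print(val2, pi / (2 * a))           # both ~0.7854
```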
How to divide the rectangle into $5$ parts?
Try to optimize this shape; it should be slightly shorter than 80m. A second option is to use circular arcs that meet at angles of $120^\circ$ and cut the edge at right angles. It gives a total length around 78.25m.
Proof of Von Neumann Ergodic Theorem
Since $U$ is an isometry, $\|U\|=1$ and thus $$\|U_nf\|=\frac 1n\left\|g-U^ng\right\|\leq\frac2n\|g\|\to0$$ as $n\to\infty$.
Are functions of this sort bijections from a subset of the reals to the reals?
Yes, a function with those properties will be a bijection. It's $1-1$ if we assume that it is strictly monotone, and the intermediate value theorem shows that it's onto as follows: Choose an $\alpha \in \mathbb{R}$. Since $\lim_{x \to b^-} f(x) = \infty$, there exists an $\epsilon > 0$ such that $$b - \epsilon < y < b \implies f(y) > \alpha$$ Likewise, there exists an $\epsilon'$ for the left side. Choosing any $y$ and $z$ in the two respective intervals, we see that some $c$ between $z$ and $y$ must satisfy $f(c) = \alpha$.
Number of 3-subsets that an element of a set $S$ appears in, if every pair of elements in $S$ appears together in exactly two of its 3-subsets
Since there are $\,\binom{n}{2}\,$ pairs of people, each pair has to be on exactly two committees, and there are exactly three possible pairs of people per committee, that means the number of committees has to be $$\frac{2\binom{n}{2}}{3}= \frac{n(n-1)}{3}$$ That means that the total number of committee members is $\,n(n-1),\,$ and since there are only $\,n\,$ people, each person must be on $\,n-1\,$ committees.
spectrum of compact forward weighted shift
Hints: as mentioned in the comments, $t_n\to 0$. The weights I assume are positive by definition. Now, since $\|Tx\|^2=\sum|t_nx_n|^2$ it is not hard to show that $\|T\|=\max \{t_n\}.$ In fact, inductively, show that $\tag1 \|T^n\|=\max_k\{t_kt_{k+1}\cdots t_{k+n-1}\}$ Now, choose $N$ large enough so that $t_n<\epsilon$ whenever $n\ge N$, so $\tag2 t_{k+N}\cdots t_{k+n-1}\le \epsilon^{n-N-1}$ Finally, combine $(1)$ and $(2)$ with the spectral radius formula to show that $\sigma (T)=\{0\}.$
How to approach a change of basis using matrices instead of vectors?
Hint: A matrix $$ A=\begin{bmatrix}a&b\\ c&d \end{bmatrix} $$ in terms of the basis $\beta$ is expressed as: $$ A=ae_1+be_2+ce_3+de_4 $$ and we can write this as a ''vector'' of the components: $$A=\begin{bmatrix}a\\b\\ c\\d \end{bmatrix}$$ With this notation we have: $$ T(A)=[T]_{\beta \beta}\begin{bmatrix}a\\b\\ c\\d \end{bmatrix}=AM=\begin{bmatrix}a&b\\ c&d \end{bmatrix}\begin{bmatrix}1&2\\ 3&4 \end{bmatrix}= \begin{bmatrix}a+3b&2a+4b\\ c+3d&2c+4d \end{bmatrix}= \begin{bmatrix}a+3b\\2a+4b\\ c+3d\\2c+4d \end{bmatrix} $$ So you can use the usual ''machinery of vectors'' that you know to find the matrix $[T]_{\beta \beta}$. The matrix that represents the transformation $T$ in the basis $\beta$ is: $$ [T]_{\beta \beta}= \begin{bmatrix} 1&3&0&0\\ 2&4&0&0\\ 0&0&1&3\\ 0&0&2&4 \end{bmatrix} $$ as you can easily verify.
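A quick numpy check (with an arbitrary test matrix $A$) that the $4\times4$ matrix reproduces $AM$ componentwise:

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])
T = np.array([[1, 3, 0, 0],
              [2, 4, 0, 0],
              [0, 0, 1, 3],
              [0, 0, 2, 4]])
A = np.array([[5, 6], [7, 8]])  # arbitrary test matrix
print(T @ A.reshape(4))         # [23 34 31 46]
print((A @ M).reshape(4))       # [23 34 31 46] -- same components
```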
Volterra equation for a Bessel type IVP that appears in inverse scattering
Using Maple I am obtaining $$z \left( \xi \right) =C\sqrt {\xi}\,{\rm J_{L+1/2}}\left(\xi k\right) +\frac12\,\sqrt {\xi}\,\pi \left( {\rm J_{L+1/2}}\left(\xi k\right) \int _{0}^{\xi}\sqrt {\eta}\,{\rm Y_{L+1/2}}\left(\eta k\right)g \left( \eta \right) z \left( \eta \right) d\eta- {\rm Y_{L+1/2}}\left(\xi k\right)\int _{0}^{\xi}\sqrt {\eta}\, {\rm J_{L+1/2}}\left(\eta k\right)g \left( \eta \right) z \left( \eta \right) d\eta \right) $$ Do you agree?
solve system of two trigonometric equations
Since you have $a$ in both equations, you can eliminate it and the equation becomes $$ \frac{x}{\sinh(bx)}=\frac{2}{b\cosh(bx)} $$ that is, $$ bx=2\tanh(bx) $$ Consider the function $$ f(t)=2\tanh t-t $$ which we can study for $t\ge0$. The derivative is $$ f'(t)=\frac{2}{\cosh^2t}-1=\frac{2-\cosh^2t}{\cosh^2t} $$ which vanishes at $\cosh t=\sqrt{2}$ and is positive in the interval $[0,\operatorname{arcosh}\sqrt{2})$. Moreover, $\lim_{t\to\infty}f(t)=-\infty$. Thus the equation $f(t)=0$ has a single positive solution $t_0$, which can be determined numerically; an approximate value is $1.91501$. Once you have found it, you get $$ b=\frac{t_0}{x} $$ and you can compute the value of $a$.
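A minimal Python sketch of the numerical step (bisection is one arbitrary choice of method):

```python
from math import tanh

# Bisection for the positive root t0 of f(t) = 2*tanh(t) - t.
lo, hi = 1.0, 3.0  # f(1) > 0 and f(3) < 0 bracket the root
for _ in range(60):
    mid = (lo + hi) / 2
    if 2 * tanh(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
print(lo)  # ~1.91501
```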
Convergence of $\left( \frac{1}{1} \right)^2+\left( \frac{1}{2}+\frac{1}{3} \right)^2+\cdots$
Notice that $$\left(\frac{1}{1}\right)^2+\left(\frac{1}{2} + \frac{1}{3}\right)^2 + \dots\le\left(\frac{1}{1}\right)^2+\left(\frac{2}{2}\right)^2+\left(\frac{3}{4}\right)^2+\left(\frac{4}{7}\right)^2+\dots=2+\sum_{n=3}^{\infty}\left(\frac{n}{\frac{n(n-1)}{2}+1}\right)^2<2+\sum_{n=3}^{\infty}\left(\frac{2}{n-1}\right)^2$$ since the $n$th group has $n$ terms, each at most $\frac{1}{\frac{n(n-1)}{2}+1}$. Since it's bounded by a converging series, it's convergent itself.
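A short Python sketch of the partial sums of the grouped series, confirming that they stay bounded:

```python
total, start = 0.0, 1
for n in range(1, 3001):
    # group n consists of the n reciprocals starting at index n(n-1)/2 + 1
    group = sum(1.0 / j for j in range(start, start + n))
    total += group ** 2
    start += n
print(total)  # stays bounded (around 3.2), so the series converges
```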
Expected number of couples having same number
The probability that the ball with, e.g., number $1$ in box A will reach box B (to get the color green) is $1-p$. On condition that it indeed reaches box B, it has a chance of $1-p$ to reach box C (as a green ball). Then the unconditional probability that a green ball with number $1$ will be present in box C is $(1-p)^2$. The probability that the ball with number $1$ in box D will reach box C (as a red ball) is $1-p$. By independence we conclude that the probability that box C will eventually contain a red and a green ball with number $1$ is $(1-p)^3$. Then we end up with expectation $n_1(1-p)^3$ for the number of pairs of balls in box C that have an equal number.
Prove $(m!)^3(n!)^4|(3m)!(4n)!$ for all positive integers $m,n$
The number of ways to put $3m$ objects into $3$ distinct bins with $m$ objects in each bin is given by the multinomial coefficient $$\binom{3m}{m,m,m}=\frac{(3m)!}{(m!)^3}$$ Hence the value must be an integer. Similarly we have $$\binom{4n}{n,n,n,n}=\frac{(4n)!}{(n!)^4}$$ ways to place $4n$ objects into $4$ distinct bins with $n$ objects in each bin.
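A small Python spot-check of the divisibility claim for small $m,n$:

```python
from math import factorial

for m in range(1, 8):
    for n in range(1, 8):
        value = factorial(3 * m) * factorial(4 * n)
        divisor = factorial(m) ** 3 * factorial(n) ** 4
        assert value % divisor == 0
print("(m!)^3 (n!)^4 divides (3m)! (4n)! for 1 <= m, n <= 7")
```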
Multivariable calculus: finding the volume of a solid enclosed by three surfaces
Here is one way of doing it: ask yourself, what is the projection of the solid in the $yz$ plane? It is the region bounded by $z=3y$, $z=2+y$, and $y=0$: $$ D= \{(y,z)\;|\; 0 \le y \le 1, 3y \le z \le 2+y \} $$ Now, how does the third variable ($x$) evolve on this region? The answer lies in the equation $y=x^2\; \Leftrightarrow\; x=\pm \sqrt{y}$. Therefore, you can describe the solid as $$ E = \{(x,y,z)\; |\; (y,z) \in D, -\sqrt{y} \le x \le \sqrt{y} \} $$ Can you take it from there? You should find $\frac{16}{15}$ in the end. Note that you can also solve this problem by projecting the solid in the $xy$ plane. In this case, you get $$ E =\{(x,y,z)\;|\; (x,y)\in D, 3y \le z \le 2+y\} $$ where this time $$ D = \{(x,y)\;|\; -1 \le x \le 1, x^2 \le y \le 1 \} $$ It is an excellent exercise to do the problem both ways and make sure the results match.
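For reference, a short sympy sketch evaluating the volume in the first ($yz$-projection) description of $E$:

```python
from sympy import symbols, integrate, sqrt

x, y, z = symbols('x y z')
# innermost integral first: x, then z, then y
V = integrate(1, (x, -sqrt(y), sqrt(y)), (z, 3*y, 2 + y), (y, 0, 1))
print(V)  # 16/15
```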
If $f \in {\mathcal{C}}_c({\mathbb{R}}^n)$, then $f$ is uniformly continuous
Hint: Fix $\epsilon$. Then, for each point $x$ on the support let $\delta_x$ be the maximal $\delta$ such that the $\epsilon$-$\delta_x$ criterion is satisfied on a $\delta_x$ neighbourhood of $x$. The union of these neighbourhoods is an open cover of your support, which is compact. By compactness there exists a finite subcover... Conclude.
Show that for any $r>0$ and $z \in \mathbb{C}$ we have that $B_r(z)\subset S_r(z)$
We have $|\operatorname{re} w| \le |w|$ and $|\operatorname{im} w| \le |w|$. If $w \in B_r(z)$ then $|w-z| < r$. Then $|\operatorname{re} (w-z)| \le |w-z|< r$ and similarly for $\operatorname{im}$. Hence $w \in S_r(z)$.
Total work needed to move an object along a path under the the force field $\vec F = (4y^3-3xy^2, 3x+12y^2x-3x^2y)$
Note that $\bf F$ is the sum of two vector fields: ${\bf F}_1=(0,3x)$ (which is not conservative) and ${\bf F}_2=(4y^3-3xy^2,12y^2x-3x^2y)$ (which is conservative with potential function $U(x,y)=4xy^3-\frac{3x^2y^2}{2}$). The work for ${\bf F}_2$ does not depend on $a$ and it is equal to $U(1,0)-U(-1,0)=0$. On the other hand, the work for ${\bf F}_1$ does depend on $a$ and it can be evaluated by computing the (easy) line integral along the straight lines from $(-1,0)$ to $(0,a)$ and from $(0,a)$ to $(1,0)$.
$X^X$ as a monoid in a closed monoidal category
Yeah, this is definitely a monoid. The main thing to prove is associativity, i.e. that $m\circ(X^X\otimes m)=m\circ(m\otimes X^X)$. The way to proceed is to show that their adjuncts are equal. The adjunct of $m\circ(X^X\otimes m)$ is $$\begin{CD}X^X \otimes X^X \otimes X^X \otimes X @>{X^X \otimes\,m\,\otimes X^X}>>X^X \otimes X^X \otimes X @>{X^X \otimes \: \text{ev}}>> X^X \otimes X @>{\text{ev}}>> X \end{CD}$$ which we can rewrite as $$\begin{CD}X^X \otimes X^X \otimes X^X \otimes X @>{X^X \otimes X^X\otimes \mathrm{ev}}>>X^X \otimes X^X \otimes X @>{\text{ev}\circ(m\otimes X)}>> X \end{CD}$$ and the adjunct of $m\circ(m\otimes X^X)$ is $$\begin{CD}X^X \otimes X^X \otimes X^X \otimes X @>{m\,\otimes X^X \otimes X^X}>>X^X \otimes X^X \otimes X @>{X^X \otimes \: \text{ev}}>> X^X \otimes X @>{\text{ev}}>> X \end{CD}$$ which we can rewrite as $$\begin{CD}X^X \otimes X^X \otimes X^X \otimes X @>{X^X \otimes(\text{ev}\circ(m\otimes X))}>>X^X \otimes X @>{\text{ev}}>> X \end{CD}.$$ So our trick will be to show that $\text{ev}\circ(m\otimes X)=\text{ev}\circ(X^X\otimes \text{ev})$ since then the adjuncts of both $m\circ(X^X\otimes m)$ and $m\circ(m\otimes X^X)$ will be equal to $$\begin{CD}X^X \otimes X^X \otimes X^X \otimes X @>{X^X \otimes X^X \otimes \: \text{ev}}>>X^X \otimes X^X \otimes X @>{X^X \otimes \: \text{ev}}>> X^X \otimes X @>{\text{ev}}>> X \end{CD}.$$ But it's immediate that $\text{ev}\circ(m\otimes X)=\text{ev}\circ(X^X\otimes \text{ev})$ since the adjunct (going the other way now) of $\text{ev}\circ(m\otimes X)$ is $m$ (by definition of $\text{ev}$) and the adjunct of $\text{ev}\circ(X^X\otimes \text{ev})$ is also $m$ (by definition of $m$). A similar argument works to show the identity law, where the identity is defined as the adjunct of $\lambda_X:I\otimes X\to X$.
Given $x_{n} \to x_{0}$ as $n \to \infty$, and $e^{x}=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}$, prove that $\lim_{n \to \infty}e^{x_{n}} = e^{x_{0}}$
I will offer a solution using the series. Since $x_n\to x_0$, the sequence is bounded, so there exists $M>0$ such that $|x_n|\leq M$ for all $n$. Therefore, $\left|\sum_{k=0}^\infty \frac{x_n^k}{k!}\right|\leq \sum_{k=0}^\infty \frac{M^k}{k!}$. Hence, by the dominated convergence theorem (with counting measure on the naturals), you can bring the limit inside the sum of $\sum_{k=0}^\infty \frac{x_n^k}{k!}$, so $\lim_{n\to \infty}e^{x_n}=e^{\lim_{n\to\infty} x_n}=e^{x_0}$.
Algebraic Proof that a Disk is Convex
It's no extra work to prove it for an ellipsoid. Given a positive (or non-negative) definite bilinear form on a vector space, denoted $(x,y)$, the convexity of $f(x)=(x,x)$ is, after expanding all terms and cancelling the non-negative scalar factor of $\lambda(1-\lambda)$, the statement that $$(a,a) + (b,b) \geq (a,b) + (b,a) $$ which is the expansion of $(a-b,a-b) \geq 0$. I had thought it would be equivalent to Cauchy-Schwarz but it looks a little bit weaker.
You roll five fair, six-sided dice. What is the probability that the sum of the five dice is 20?
It can be done without a computer. As explained by others, and in the linked question, the generating function of your problem is given by $$\eqalign{ g(x)&=(x+x^2+x^3+x^4+x^5+x^6)^5 \cr &=x^5\left({1-x^6\over 1-x}\right)^5=x^5(1-x^6)^5(1-x)^{-5}\cr &=x^5\ (1-5x^6+10x^{12}+{\rm higher\ terms})\ \sum_{j=0}^\infty{5+j-1\choose j}x^j\ .\cr}$$ Now you have to find the coefficient of $x^{20}$ on the RHS. This coefficient is $${19\choose15}-5{13\choose9}+10{7\choose3}=651\ .$$ This is the number of lucky outcomes among the $6^5$ possible, and equally probable, histories.
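Of course, a brute-force cross-check in Python over all $6^5$ histories is also quick:

```python
from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=5) if sum(roll) == 20)
print(count, count / 6 ** 5)  # 651, probability ~0.0837
```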
Convergence of series by adding parenthesis
Let $s_n=\sum_{k=1}^na_k$ and let $S_n$ be the $n$th partial sum of the series that you get after adding the parentheses. Suppose, say, that the first four $a_k$'s are positive, that the next five are negative, and then there are some more positive terms. Then $S_1=s_4$. And, after that, $S_2=s_9$. Besides, $s_1\leqslant s_2\leqslant s_3\leqslant s_4=S_1$. And then $s_4\geqslant s_5\geqslant s_6\geqslant s_7\geqslant s_8\geqslant s_9=S_2$. And so on. Each $s_k$ is between some $S_l$ and $S_{l+1}$. So, since $(S_n)_{n\in\Bbb N}$ converges, so does $(s_n)_{n\in\Bbb N}$.
Show by induction that $F_n \geq 2^{0.5 \cdot n}$, for $n \geq 6$
Since $F_n>F_{n-1}$, the induction hypothesis $F_{n-1}\geq 2^{(n-1)/2}$ gives $$F_{n+1}=F_n+F_{n-1}>2F_{n-1}\geq 2\cdot2^{(n-1)/2}=2^{(n+1)/2}$$
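A quick Python spot-check of the inequality (with the convention $F_1=F_2=1$, as the induction above uses):

```python
a, b = 1, 1
F = {1: 1, 2: 1}
for n in range(3, 101):
    a, b = b, a + b
    F[n] = b
assert all(F[n] >= 2 ** (n / 2) for n in range(6, 101))
print("F_n >= 2^(n/2) holds for 6 <= n <= 100")
```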
A positive integer $x$ is a solution of the congruence $m^x\equiv 1 \pmod n$ if and only if $\operatorname{ord}_n m \mid x$
Let the order of $m$ be $d$. Now, let $q$ and $r$ be the quotient and remainder when $x$ is divided by $d$; that is, $x= dq+r$ with $0\leq r\leq d-1$. Now $$1=m^x=m^{dq+r}=(m^d)^q\cdot m^r=m^r.$$ But $d$ is the smallest positive integer such that $m^d=1$, so it must follow that $r=0$. So $d$ divides $x$. Conversely, suppose $d$ divides $x$. So, there exists an integer $t$ such that $td=x$. Then $$m^x=m^{td}=(m^d)^t=1.$$ So, $m^x=1$.
Why integration by long division gives different answer than using u substitution?
Note that $$\frac{(x+2)^2}2-4(x+2)+6\ln|x+2|+C=\frac{x^2}2-2x-6+6\ln|x+2|+C=$$ $$=\frac{x^2}2-2x+6\ln|x+2|+(C-6)=\frac{x^2}2-2x+6\ln|x+2|+C_1$$ therefore the two results are the same up to a constant which is not essential, indeed in both cases $$\frac{d}{dx}\left(\frac{x^2}2-2x+6\ln|x+2|+C\right)=\frac{x^2+2}{x+2}$$
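A one-line symbolic check of the derivative with sympy:

```python
from sympy import symbols, diff, log, ratsimp

x = symbols('x')
F = x**2 / 2 - 2*x + 6*log(x + 2)
print(ratsimp(diff(F, x)))  # (x**2 + 2)/(x + 2)
```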
PCA using SVD in Matlab, a few questions.
If the SVD of $X$ is $X=USV^\top$, then the SVD of $X^\top$ is just the transpose of the prior factorization, $X^\top=VSU^\top$, or $U_1=V$, $S_1=S$ and $V_1=U$. The principal components of this approach are the singular vectors with the largest singular values. In the implementations, the diagonal matrix $S$ contains the singular values sorted from largest to smallest, so that you only have to consider the first two components. If $X$ has format $25\times 2000$, then the columns of the $25\times 25$ matrix $U$ contain the singular vectors you are interested in. Update: PCA was originally invented in mechanics to study the kinematics of rigid bodies, for instance the rotation and nutation and oscillations of planets. The idea there is that these kinematics are the same as those of an ellipsoid that is aligned and shaped according to the principal components of the mass distribution. Any movement of a rigid body can be described as the movement of its center of mass and a rotation around that center of mass. If the data is not shifted so that the center of mass is the origin, for instance if in 2D all points are clustered around $(1,1)$, then the principal component of the data set will be close to this point $(1,1)$. But to get that point, one could just as well only have computed the center of mass or mean value of all data points. To get the information about the shape of the cluster out of the SVD, you have to subtract the center of mass. If that is what you mean by 'subtracting the baseline' then all is well in that regard. But still, the application of SVD makes the most sense if you can say that if you flip the sign of an input vector, then this could reasonably have come as well from a measurement in the experiment. The result of the SVD can be written as $$ X=\sum_{k=1}^r u_k\sigma_k v_k^\top. $$ If one pair of $(u_k,v_k)$ is replaced by $(-u_k,-v_k)$ then nothing changes in the sum; the sign change cancels between both factors. To get the data set of person $j$ out of the matrix $X$ one has to select row $j$ of $X$ as $e_j^\top X$. Now if $X$ gets compressed by using only the terms for the first or first two singular values in the SVD, the approximation of data set $j$ will be $$ e_j^\top X=\sum_{k=1}^2 (e_j^\top u_k)(\sigma_kv_k)^\top =\sum_{k=1}^2 U_{jk}(\sigma_kv_k)^\top. $$ Again, any sign changes in $v_k$ in the computation of the SVD are balanced by sign changes in the coefficients $e_j^\top u_k=U_{jk}$. One heuristic to make the sign definitive could be to make sure that the entry with largest absolute value in every vector $u_k$ is positive.
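Here is a minimal Python sketch of the centering-then-SVD pipeline described above; the data are random placeholders with the $25\times 2000$ format mentioned earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 2000))  # 25 observations x 2000 variables
Xc = X - X.mean(axis=0)          # "subtract the center of mass" columnwise
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]        # each row's coordinates on the
print(scores.shape)              # first two principal axes: (25, 2)
```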
Kernel of matrices product
I assume that $A,B$ are supposed to be square matrices. I'm not sure what you mean by $A\backslash B$. In general, we really can't tell anything about the nullspace of a sum of matrices just from knowing the matrices' null spaces. For example, there exist matrices $A$ and $B$ such that $\ker(A)$ and $\ker(B)$ are both trivial, but $\ker(A+B)$ is the whole space (take $B=-A$ with $A$ invertible). On the other hand, there exist matrices $A,B$ such that all of $\ker(A),\ker(B),\ker(A+B)$ are trivial. Now, we can do a little better for the kernel of $AB$, by noting that $$x\in\ker(AB)\Longleftrightarrow ABx=0\Longleftrightarrow Bx\in\ker(A).$$ Still, without knowing something more about $B$, we can't say much else.
Understanding the behaviour of order statistics of samples of uniform distribution
In short, the two weight sequences you define are asymptotically equivalent in probability. I give a sketch of a proof below, but I'd recommend having a closer look at Chapter 7 of [David H.A., Nagaraja H.N., Order Statistics (2003)]; I believe the topics discussed there are pretty close to the current one. Proposition. $\forall\rho>0\;\;\mathbb{P}(|w_i-\tilde{w}_i|>\rho)\to 0, \;n \to \infty$. Proof. a) A basic fact of order statistics is that $X_{(i)}$ converges in probability to $\frac{(i-1)/n\;+\; i/n}{2}$ as $n\to\infty$. b) If we denote $\delta_{i+1}=X_{(i+1)}-X_{(i)}$, then it's possible to get $\delta_{i+1}=\textit{o}(1/n)$ and $$ F_{P}(X_{(i+1)}) = F_P(X_{(i)}) + p(X_{(i)})\cdot\delta_{i+1} + \textit{o}(1/n^2) $$ using the Taylor approximation; the notation here is $p(\bullet)\equiv F^{\prime}_P(\bullet)$. c) Finally, $\mathbb{P}(|w_i-\tilde{w}_i|>\rho) = \mathbb{P}(|p(X_{(i)}) - \left(F_{P}(X_{(i+1)}) - F_P(X_{(i)})\right) |>\rho) = \mathbb{P}(|p(X_{(i)})\cdot\left(1-\delta_{i+1} \right) + \textit{o}(1/n^2)|>\rho). $ (We just used b) and then a).) As we see, the left-hand side of the inequality is of order $\textit{o}(1/n)$ (in probability, see a)). Thus, we can choose an appropriate (big enough) $n^\ast$ for any given positive $\rho$, which means that the probability of the event $\left\{\ldots >\rho\right\}$ tends to zero. This concludes the proof.
For $r_n=0.2+0.3r_{n-1}$ defined recursively, how to show the limit of $r_n$ exists?
A method is the following: your potential limit $A$ (equal to $2/7$) satisfies (1) $A=0.2+0.3A$. Put $v_n=r_n-A$ and subtract (1) from the recurrence relation; you get $v_n=0.3v_{n-1}$. An easy induction shows that $v_n=(0.3)^n v_0$, and you are done.
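A tiny numerical illustration in Python (the starting value $5.0$ is arbitrary):

```python
r = 5.0  # arbitrary starting value
for _ in range(100):
    r = 0.2 + 0.3 * r
print(r, 2 / 7)  # both 0.2857142857...
```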
$f(z)=\int_1^\infty e^{-x}x^z\,dx$ is complex analytic
I'd use Morera's theorem. Morera's theorem says that if $f$ is continuous and $$\int_\gamma f(z)\,dz=0\tag{1}$$ for every simple closed curve $\gamma$, then $f$ is holomorphic. Since the function is defined by an integral, the integral in $(1)$ becomes an iterated integral. You can then show that the hypotheses of Fubini's theorem are satisfied, so you can change the order of integration. Then conclude that what has now become the inside integral evaluates to $0$ because then you're integrating a function that you already know is holomorphic, along a closed curve.
The relation between retraction and coproduct in R-Mod
Given a linear category with kernels, let $i : X \to Y$ be a split monomorphism. Choose $p : Y \to X$ with $pi=id_X$. Let $K \to Y$ be the kernel of $p$. Then $K \hookrightarrow Y \hookleftarrow X$ is a coproduct diagram. In fact, it extends to a biproduct diagram using $K \xleftarrow{q} Y \xrightarrow{p} X$, where $q$ is defined by $\mathrm{id}_Y-ip : Y \to Y$, which factors through $K$ since $p(\mathrm{id}_Y-ip)=p-pip=p-p=0$. It is easy to check the defining 5 equations (spelled out below). So the statement holds in every linear category with kernels. But it is also true in the category of sets. I think the most general case is a category in which every idempotent splits and has a complement.
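For reference, writing $j : K \to Y$ for the kernel inclusion (a notation not used above), the five biproduct equations with quick verifications are: $pi=\mathrm{id}_X$ (given); $pj=0$ (as $K=\ker p$); $ip+jq=\mathrm{id}_Y$ (since $jq=\mathrm{id}_Y-ip$ by the definition of $q$); and from $j(qi)=(\mathrm{id}_Y-ip)i=i-i=0$ and $j(qj)=(\mathrm{id}_Y-ip)j=j$ we get $qi=0$ and $qj=\mathrm{id}_K$, because $j$ is a monomorphism.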
If $x_{n+1} = (1+\frac{x_n}{a})^a - 1$, find $\lim_{n\to\infty}nx_n$
So $l=\lim\limits_{n\to \infty} x_n \in [0,x_1)$ and you get $$l=\left(1+\frac{l}{a}\right)^a-1.$$ By Bernoulli's inequality, for $0<a<1$ we have $\left(1+\frac la\right)^a\le 1+l$ with equality only at $l=0$, and from here it's easy to deduce that $l = 0$. To evaluate your limit, we can use Cesàro-Stolz and l'Hôpital: $$ \begin{aligned} \lim_{n\to\infty} nx_n &= \lim_{n\to \infty} \frac{n}{\frac{1}{x_n}}\\ &= \lim_{n\to\infty}\, \frac {(n+1)-n}{\frac 1{x_{n+1}}-\frac 1{x_n}}\\ &= \lim_{n\to\infty}\, \frac {x_nx_{n+1}}{x_n-x_{n+1}} \\ &= \lim_{n\to\infty}\, \frac {x_n\left[\left(1+\frac {x_n}a\right)^a-1\right]}{x_n-\left(1+\frac {x_n}a\right)^a+1}\\ &= \lim_{n\to\infty}\, \frac {x_n^2}{1+x_n-\left(1+\frac {x_n}a\right)^a} \qquad\text{since }\left(1+\tfrac {x_n}a\right)^a-1\sim x_n \text{ as } x_n\to 0\\ &= \lim_{x\to 0}\, \frac {x^2}{1+x-\left(1+\frac xa\right)^a}\\ &= \lim_{x\to 0}\, \frac {2x}{1-a\cdot\left(1+\frac xa\right)^{a-1}\cdot\frac 1a}\\ &= 2\lim_{x\to 0}\, \frac x{1-\left(1+\frac xa\right)^{a-1}}\\ &= 2\lim_{x\to 0}\, \frac 1{\left(1-a\right)\cdot\left(1+\frac xa\right)^{a-2}\cdot\frac 1a} \\ &=\boxed{\frac{2a}{1-a}} \end{aligned} $$ Note: when using Cesàro-Stolz, for the fluency of the proof, I (almost always) write $$\lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n\to \infty} \frac{a_{n+1}-a_{n}}{b_{n+1}-b_n}.$$ However, this is an abuse of notation. Strictly, one should first prove that the limit $$\lim_{n\to \infty} \frac{a_{n+1}-a_{n}}{b_{n+1}-b_n}$$ exists and is finite (say it equals $l$), and only then conclude that $$\lim_{n \to \infty} \frac{a_n}{b_n} =l.$$
Find the equation of the line containing the origin that is perpendicular to another line
The line $$l:(3, 1, 0) + t(2, -1, 5)$$ passes through $(3, 1, 0)$ and is parallel to $n = (2, -1, 5)$; therefore a plane through the origin and orthogonal to this line is $$2x - y + 5z = 0.$$ The line $l$ cuts this plane at the $t$ value given by $$2 \times 3 - 1 + t(2 \times 2 + 1+ 5 \times 5)= 0 \implies t = -\frac 16,$$ which gives the foot point $\left(\frac 83, \frac 76, -\frac 56\right)$. The line you are looking for is therefore $$6s\left(\tfrac 83,\tfrac 76,-\tfrac 56\right)=s(16, 7,-5),\text{ where $s$ is a real number.}$$
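As a quick check: the direction $(16,7,-5)$ is indeed orthogonal to $(2,-1,5)$, since $16\cdot 2+7\cdot(-1)+(-5)\cdot 5=32-7-25=0$, and the foot point satisfies $2\cdot\frac 83-\frac 76+5\cdot\left(-\frac 56\right)=\frac{32-7-25}{6}=0$, so it lies on the plane.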
Show that $\int_0^\infty\frac1{a+(k+(1-x))^2}\:{\rm d}k\le\frac1{\sqrt a}\int_0^{\frac{\sqrt a}{1-x}}\frac1{1+y^2}\:{\rm d}y$
Let $$k+1-x=\sqrt{a}\,t \Rightarrow dk=\sqrt{a}\,dt,$$ so that $$\int_0^\infty\frac{dk}{a+(k+1-x)^2}=\int_\frac{1-x}{\sqrt a}^\infty \frac{\sqrt a\, dt}{a(1+t^2)}=\frac{1}{\sqrt a}\int_\frac{1-x}{\sqrt a}^\infty \frac{dt}{1+t^2}.$$ Now apply the reciprocal substitution $y=\frac{1}{t}$ and you are done. Caveat: $1-x$ may be negative; split the integral in that case.
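Spelled out (for the case $1-x>0$): with $y=\frac 1t$ we have $dt=-\frac{dy}{y^2}$ and $\frac{1}{1+t^2}=\frac{y^2}{1+y^2}$, so $$\frac{1}{\sqrt a}\int_{\frac{1-x}{\sqrt a}}^\infty \frac{dt}{1+t^2}=\frac{1}{\sqrt a}\int_0^{\frac{\sqrt a}{1-x}}\frac{dy}{1+y^2},$$ which is the right-hand side of the claimed inequality (here in fact an equality).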
Trinomial Theorem Solution Verification
Note that $e^{\omega_0x}+e^{\omega_1x}+e^{\omega_2x}=1+\mathrm{e}^{2\pi i x/3}+\mathrm{e}^{2\cdot2\pi i x/3}$. Clearly $$ 1+\mathrm{e}^{2\pi i x/3}+\mathrm{e}^{2\cdot2\pi i x/3}=\frac{\mathrm{e}^{2\pi i x}-1}{\mathrm{e}^{2\pi i x/3}-1}=\mathrm{e}^{2\pi i x/3}\cdot\frac{\mathrm{e}^{\pi i x}-\mathrm{e}^{-\pi i x}}{\mathrm{e}^{\pi i x/3}-\mathrm{e}^{-\pi i x/3}}=\mathrm{e}^{2\pi i x/3}\cdot\frac{\sin (\pi x)}{\sin(\pi x/3)} \\=\mathrm{e}^{2\pi i x/3}\cdot\frac{3\sin (\pi x/3)-4\sin^3(\pi x/3)}{\sin(\pi x/3)}=\mathrm{e}^{2\pi i x/3}\big(3-4\sin^2(\pi x/3)\big). $$ Thus $$ (1+\mathrm{e}^{2\pi i x/3}+\mathrm{e}^{2\cdot2\pi i x/3})^n=\mathrm{e}^{2n\pi i x/3}\big(3-4\sin^2(\pi x/3)\big)^n=\mathrm{e}^{2n\pi i x/3}\sum_{k=0}^n (-1)^{n-k}\,3^k4^{n-k}\binom{n}{k}\sin^{2n-2k}(\pi x/3). $$
Perron-Frobenius not working on the following matrix
The elements of your numerical eigenvector all have the same sign, so there is a representative of the eigenspace which has all nonnegative entries (if the entries are all negative, multiply your numerical eigenvector by $-1$). Of course the representative of probabilistic interest is the one with all nonnegative entries whose sum is $1$.
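As a minimal sketch (the $2\times 2$ chain here is made up, not the matrix from the question), one can pick the eigenvector of the largest eigenvalue and rescale it to a probability vector; dividing by the sum fixes the sign automatically:

```python
import numpy as np

# Hypothetical 2-state column-stochastic transition matrix
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])
w, V = np.linalg.eig(P)
v = V[:, np.argmax(w.real)].real   # may come out with all-negative entries
pi = v / v.sum()                   # now nonnegative, sums to 1
print(pi)                          # stationary distribution, here [2/3, 1/3]
```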
A symbol of commuting ranges in tensor product
$[x,y]=xy-yx$ is the commutator of $x$ and $y$.
Find $f(x)$ such that $f^\prime(x) = f(f(x))$
No such function exists. Assume the contrary. Since $f'(x)=f(f(x))>0$ for every $x\in\Bbb R$, the function $f$ is increasing. The Mean Value Theorem implies that there exists some $c\in(-1,0)$ such that $$f(f(c))=f'(c)=f(0)-f(-1)<f(0),$$ where the last inequality uses $f(-1)>0$. Since $f$ is increasing, this forces $f(c)<0$, a contradiction.
limit of integral of a continuous function
For each $n\in \mathbb{N}$ define $f_n\colon [0,1]\to \mathbb{R}$, $f_n(x)=f(\frac{x^n}{1+x^2})$. Then, by the continuity of $f$, the sequence $(f_n)$ converges pointwise to the function $g:[0,1]\to \mathbb{R}$ with $g(x)=f(0)$ for $x\in [0,1)$ and $g(1)=f(1/2)$. Now you need some theorem that allows you to interchange the limit and the integral: $\lim_{n\to \infty} \int_{0}^1 f_n(x)dx\overset{?}=\int_{0}^1 \lim_{n\to \infty}f_n(x)dx=\int_{0}^1 g(x)dx=f(0)$. The step with the question mark needs justification. Do you know a theorem that justifies this step?
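One theorem that works here (spoiler): since $0\leq\frac{x^n}{1+x^2}\leq\frac 12$ for all $x\in[0,1]$ (because $x^n\leq x$ and $1+x^2\geq 2x$), the continuous function $f$ gives the uniform bound $|f_n(x)|\leq\max_{t\in[0,1/2]}|f(t)|$, so the Dominated (or even Bounded) Convergence Theorem applies.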
Some basic questions on Markov chains (Durrett)
A discrete time, $S$-valued stochastic process is a collection of $S$-valued random variables, $X=\{X_n:n\in\mathbb{N}\}$. For each fixed $\omega$, we then get a sequence in $S$, namely, the sequence $\{X_n(\omega)\}_{n=1}^\infty$. In this way, we can think of the process $X$ as a function that sends $\omega$ to a sequence in $S$. The set of all sequences in $S$ is denoted by $S^\mathbb{N}$. So we can think of $X$ as a map, $X:\Omega\to S^{\mathbb{N}}$. More specifically, it is the map defined by \begin{equation} (X(\omega))_n = X_n(\omega). \end{equation}

Now, one way to build a variety of stochastic processes is to start with a probability space, $(\Omega,\mathcal{F},P)$, and then define various functions $X:\Omega\to S^\mathbb{N}$, $Y:\Omega\to S^\mathbb{N}$, $Z:\Omega\to S^\mathbb{N}$, and so on. An alternative way to build processes, however, is to start with just a measurable space, $(\Omega,\mathcal{F})$, and only one function, $X:\Omega\to S^\mathbb{N}$. We can then give this one function, $X$, a variety of different properties by defining many different probability measures on $(\Omega,\mathcal{F})$, such as $P_1$, $P_2$, $P_x$, $P_\mu$, and so on. It is this second approach which is quite common in the study of Markov processes, and it is the one which Durrett is following.

More specifically, he starts with the measurable space $(\Omega,\mathcal{F})$, where $\Omega=S^\mathbb{N}$ and $\mathcal{F}=\mathcal{S}^\mathbb{N}$, and with the one function $X:\Omega\to S^\mathbb{N}$, where $X$ is the identity. Since $X$ is the identity, we have $X(\omega)=\omega$. Recalling that both sides of this equality are sequences, we may write this equality component-wise, giving $(X(\omega))_n =\omega_n$. This, in turn, may be rewritten as $X_n(\omega)=\omega_n$. In other words, $X_n$ is the function from $S^\mathbb{N}$ to $S$ which maps a sequence to its $n$-th term.

The shift-operator, $\theta_n$, is a map from $S^\mathbb{N}$ to $S^\mathbb{N}$. It takes a sequence and chops off the first $n$-terms. More specifically, if $\omega\in S^{\mathbb{N}}$, then $\theta_n(\omega)$ is the sequence defined by $(\theta_n(\omega))_m = \omega_{n+m}$. This is why we have $$ (X_j\circ\theta_n)(\omega) = X_j(\theta_n(\omega)) = (\theta_n(\omega))_j = \omega_{n+j}. $$
Zerodivisors in polynomial rings
Since $a_ng=0$ you get in particular $a_nb_{m-1}=0$. Now look at the coefficient of $x^{m+n-1}$ in the product $fg$: it is $a_nb_{m-1}+a_{n-1}b_m=0$, so $a_{n-1}b_m=0$. Now repeat the argument for $a_{n-1}g$, and so on.
Is this space complete?
As pointed out in the comments, $d$ becomes a metric if we instead consider equivalence classes of functions. It has been shown on this website that this metric is equivalent to convergence in measure, i.e. $d(f_n,f)\to 0$ if and only if $f_n\to f$ in measure. Here we can rediscover the fact that if a sequence converges in measure, a subsequence converges almost everywhere. Take a Cauchy sequence $(f_n)_n$ for $d$; we can construct inductively positive integers $n_k$ such that $n_k\gt n_{k-1}$ and $d(f_{n_k},f_{n_{k-1}})\leqslant 2^{-k}$. Using the Borel-Cantelli lemma, we can show that for almost every $x$, $(f_{n_k}(x))_{k\geqslant 1}$ is a Cauchy sequence, which converges to some $f(x)$. By Fatou's lemma, $d(f_{n_k},f)\to 0$ as $k\to\infty$. To conclude, we need to show that the whole sequence converges to $f$: we combine the latter fact with the assumption that $(f_n)_n$ is a Cauchy sequence.
Calculating partial derivatives of composition with max
In regions where $|f(x)| < 1$, your function is simply $f$, so the partial derivative is $\frac{\partial f}{\partial x_i}$. In regions where $|f(x)| > 1$, your function is equal to the sign of $f(x)$, and its derivative is $0$. At points where $|f(x)|=1$, you have no guarantee that the partial derivatives even exist. For example, if $f(x_1,x_2,x_3)=x_1$, then the composed function is $\min(\max(x_1,-1),1)$, and its partial derivative with respect to $x_1$ does not exist on the plane $x_1=1$: the one-sided derivatives there are $1$ and $0$.
Show that $2$ sets have "the same number of elements"
In general, the way you can prove two sets $X,Y$ are equipotent is by finding a bijection (i.e., a function that is injective and surjective) $h:X\to Y$ between them. In this case, notice that an element of the set $(B\times C)^A$ must be a function $f:A\to B\times C$, and element by element it looks like $f(a)=(b,c)$. If we call $f_1(a)=b$ and $f_2(a)=c$ whenever $f(a)=(b,c)$, then there is a natural way to obtain an element of the set $B^A\times C^A$ associated to $f$, namely the ordered pair $(f_1,f_2)$. If you can show that the assignment \begin{align*} h:&(B\times C)^A\longrightarrow B^A\times C^A\\ &\ \ \ \ \ \ f\ \ \ \ \ \ \ \longmapsto h(f)=(f_1,f_2) \end{align*} is a bijection, you will be done.
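Hint for the bijectivity: one can write down an explicit inverse, sending a pair $(g_1,g_2)\in B^A\times C^A$ to the function $g:A\to B\times C$ defined by $g(a)=(g_1(a),g_2(a))$; checking that this assignment and $h$ undo each other proves that $h$ is a bijection.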
Complex line integral $\sinh(z)$ over piecewise smooth line
On the line from $0$ to $\pi$, we can write $z= x$ so that the integral becomes $\int_0^\pi \frac{e^x- e^{-x}}{2}dx$, which is easy to integrate. On the line from $\pi$ to $\pi+ i\pi$ we can write $z= \pi+ ix$, $dz=i\,dx$, so that the integral becomes $$\int_0^\pi \frac{e^{\pi+ ix}- e^{-\pi-ix}}{2}\, i\, dx= e^\pi\frac{i}{2}\int_0^{\pi} e^{ix}dx- \frac{i}{2}e^{-\pi}\int_0^\pi e^{-ix}dx.$$
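As a sanity check (assuming the path runs from $0$ to $\pi$ to $\pi+i\pi$, as above): since $\sinh$ is entire with antiderivative $\cosh$, the whole integral equals $$\cosh(\pi+i\pi)-\cosh(0)=\cosh\pi\cos\pi+i\sinh\pi\sin\pi-1=-\cosh\pi-1,$$ which the two pieces above should sum to.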
Why is my translation of $\exists{x}\,(C(x) \rightarrow F(x))$ into an English sentence wrong?
Consider a world with no people. Then there is no comedian, so your translation is correct. However, the statement is false because there doesn't exist anyone, in particular no one who satisfies $C(x) \rightarrow F(x)$. To correct your answer, you could say that in case 1, there is someone who is not a comedian.
Counting elementary events and discrete triangular law
For the first question, this is better demonstrated by writing down the matrix of $(f, s)$:$$\begin{matrix} (6,1)&(6,2)&(6,3)&(6,4)&(6,5)&(6,6)\\ (5,1)&(5,2)&(5,3)&(5,4)&(5,5)&(5,6)\\ (4,1)&(4,2)&(4,3)&(4,4)&(4,5)&(4,6)\\ (3,1)&(3,2)&(3,3)&(3,4)&(3,5)&(3,6)\\ (2,1)&(2,2)&(2,3)&(2,4)&(2,5)&(2,6)\\ (1,1)&(1,2)&(1,3)&(1,4)&(1,5)&(1,6) \end{matrix}$$ Along any diagonal from the upper-left to lower-right the sum $f + s$ is seen to be a constant. For the second question, take $k = 2$ for example. The corresponding diagonal crosses through $(2, 1)$ and $(1, 2)$, so the number of events for $k = 2$ is $2$.
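In general, the number of outcomes on the diagonal with sum $f+s=s_0$ is $6-|s_0-7|$ for $s_0\in\{2,\dots,12\}$, which gives exactly the discrete triangular shape of counts $1,2,3,4,5,6,5,4,3,2,1$.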
How to solve infinite possible coin problem?
This question is old but unanswered, so this is for those who may be looking for an answer. The way the event is defined in the question may lead to wrong interpretations.

• Your interpretation of the event is: you toss the $n$ coins infinitely many times until all coins show heads.
• The actual event is: you toss the $n$ coins once; the coins that show tails are thrown again, and then the heads are counted.

It is not suggested that you repeat the process infinitely many times; you do this just once. For a single coin this results in three possible outcomes, $\Omega=\{TT,\,TH,\,H\}$, as one sees from the corresponding event tree. The subset of successful outcomes is $S=\{TH,\,H\}$, since the success event is defined by a head. The probability of success therefore is $P(S)=p+(1-p)p$, and the probability of failure is $1-P(S)$. Clearly, the number of final heads follows the binomial distribution $\operatorname{Bin}(n,\,p+(1-p)p)$.
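A quick sanity check on the success probability: $p+(1-p)p=1-(1-p)^2$, i.e. it is exactly the probability of not getting tails on both throws of a given coin.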
Is there such an expression?
I have seen this way of expressing complex numbers. The numerator on the left is magnitude $A$ at angle $\phi_1$, so in other notation would be $Ae^{i\phi_1}$ The equality would then be $\frac {Ae^{i\phi_1}}{Be^{i\phi_2}}=\frac ABe^{i(\phi_1-\phi_2)}$, which is true.
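A small worked instance of the rule, in the same angle notation: $\dfrac{4\angle 60^\circ}{2\angle 15^\circ}=2\angle 45^\circ$, i.e. $\dfrac{4e^{i\pi/3}}{2e^{i\pi/12}}=2e^{i\pi/4}$.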
Every chart is smoothly compatible, what is wrong with my argument?
If you choose the smooth structure $A_1$, then $\phi^{-1}(U\cap V) \to U\cap V$ is differentiable with respect to $A_1$. But then $ \psi : U\cap V \to \psi(U\cap V)$ is not differentiable with respect to $A_1$ and the composition is not differentiable. You may want to use $A_2$ in the second map, but then you have to consider $$\psi \circ I \circ \phi^{-1},$$ where $I$ is the identity map $I: (U\cap V, A_1) \to (U\cap V, A_2)$. But this map is not differentiable.
Why is the sum of residues of $\frac{1}{1+z^n}$ in the upper half plane $1/[in\sin(\pi/n)]$?
The poles of $(1+z^n)^{-1}$ occur at the $n^{\text{th}}$ roots of $-1$, namely $$z_k = e^{i\pi(2k+1)/n}.$$ For even $n$ (so that no pole lies on the real axis), the poles in the upper half-plane are those with $0 \leq k \leq \frac{n}{2}-1$. As you noticed, the residue at $z_k$ is $$-\frac{1}{n}e^{i\pi(2k+1)/n},$$ so that the sum of the residues in the upper half-plane is $$\begin{align} \sum_{k=0}^{n/2-1} - \frac{1}{n} e^{i\pi(2k+1)/n} &= - \frac{1}{n} e^{i\pi/n} \sum_{k=0}^{n/2-1} \left(e^{i 2 \pi/n}\right)^k \\ &= -\frac{1}{n} e^{i \pi/n} \frac{\left(e^{i 2 \pi/n}\right)^{n/2}-1}{e^{i 2 \pi/n}-1} \\ &= \frac{1}{n} e^{i \pi/n} \frac{2}{e^{i 2 \pi/n}-1} \\ &= \frac{1}{n} \cdot \frac{2}{e^{i\pi/n} - e^{-i\pi/n}} \\ &= \frac{1}{i n} \cdot \frac{2 i}{e^{i\pi/n} - e^{-i\pi/n}} \\ &= \frac{1}{i n \sin(\pi/n)}. \end{align}$$
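As a sanity check, take $n=2$: the only pole in the upper half-plane is $z_0=i$, with residue $-\frac 12 e^{i\pi/2}=\frac{1}{2i}$, and indeed $\frac{1}{in\sin(\pi/n)}=\frac{1}{2i\sin(\pi/2)}=\frac{1}{2i}$; this recovers the classical $\int_{-\infty}^\infty\frac{dx}{1+x^2}=2\pi i\cdot\frac{1}{2i}=\pi$.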
Uniform bound for $\cos(nz)$.
Using $\Bbb{C}=\bigcup_n nB$, where $B$ is the open unit disk, it is easy to see that the sequence of functions $(z\mapsto f(n z))_n$ on the unit disk is uniformly bounded if and only if $f$ is bounded. Since your $f$ is holomorphic, this is the case (by Liouville) if and only if it is constant, but the cosine is not constant. So, no, you are not missing anything.
Bézout's identity question
Suppose $g=\gcd(x,y)$. Bezout guarantees $a$ and $b$ so that $$ ax+by=g\tag1 $$ For any $c$, we have $$ \color{#C00}{(ca+y/g)}x+\color{#090}{(cb-x/g)}y=cg\tag2 $$ where $$ b\color{#C00}{(ca+y/g)}-a\color{#090}{(cb-x/g)}=1\tag3 $$ Thus, $(2)$ is the equation sought.
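A concrete instance: $x=6$, $y=4$, $g=2$ with $a=1$, $b=-1$ (since $6-4=2$). Taking $c=3$ in $(2)$ gives $(3+2)\cdot 6+(-3-3)\cdot 4=30-24=6=3g$, and $(3)$ reads $(-1)\cdot 5-1\cdot(-6)=1$, so the new coefficients are again coprime.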
Left topological zero-divisors in Banach algebras.
In what follows, $ \mathcal{A} $ shall denote a unital Banach algebra and $ \mathbb{S}(\mathcal{A}) $ the unit sphere of $ \mathcal{A} $. (a) Let $ a $ be an invertible element of $ \mathcal{A} $. Assume, for the sake of contradiction, that there exists a sequence $ (b_{n})_{n \in \mathbb{N}} $ in $ \mathbb{S}(\mathcal{A}) $ such that $ \displaystyle \lim_{n \to \infty} a b_{n} = \mathbf{0}_{\mathcal{A}} $. As left-multiplication by any element of $ \mathcal{A} $ is a continuous operation on $ \mathcal{A} $, we obtain \begin{align} \lim_{n \to \infty} b_{n} &= \lim_{n \to \infty} a^{-1} (a b_{n}) \\ &= a^{-1} \left( \lim_{n \to \infty} a b_{n} \right) \\ &= a^{-1} \cdot \mathbf{0}_{\mathcal{A}} \\ &= \mathbf{0}_{\mathcal{A}}. \end{align} This clearly contradicts the requirement that $ \forall n \in \mathbb{N}: ~ b_{n} \in \mathbb{S}(\mathcal{A}) $, so it must be the case that $$ \{ ab \in \mathcal{A} ~|~ b \in \mathbb{S}(\mathcal{A}) \} $$ is bounded away from $ \mathbf{0}_{\mathcal{A}} $, which yields $ \zeta(a) > 0 $. By taking the contrapositive of this conclusion, Problem (a) is thereby solved. (b) Fix $ a \in \mathcal{A} $, and let $ (a_{n})_{n \in \mathbb{N}} $ be a sequence in $ \mathcal{A} $ that converges to $ a $. Claim 1: $ \displaystyle \limsup_{n \to \infty} \zeta(a_{n}) \leq \zeta(a) $. Proof of Claim 1: Let $ \epsilon > 0 $, and find a $ b \in \mathbb{S}(\mathcal{A}) $ such that $$ \zeta(a) \leq \| ab \| < \zeta(a) + \epsilon. $$ Next, observe that $ \forall n \in \mathbb{N}: ~ \zeta(a_{n}) \leq \| a_{n} b \| $ and $ \displaystyle \lim_{n \to \infty} \| a_{n} b \| = \| ab \| $. Hence, $ \zeta(a_{n}) < \zeta(a) + \epsilon $ for all $ n \in \mathbb{N} $ sufficiently large, which yields $ \displaystyle \limsup_{n \to \infty} \zeta(a_{n}) \leq \zeta(a) + \epsilon $. As $ \epsilon $ is arbitrary, we obtain $ \displaystyle \limsup_{n \to \infty} \zeta(a_{n}) \leq \zeta(a) $. $ \quad \spadesuit $ Claim 2: $ \displaystyle \zeta(a) \leq \liminf_{n \to \infty} \zeta(a_{n}) $. Proof of Claim 2: Let $ \epsilon > 0 $, and pick a sequence $ (b_{n})_{n \in \mathbb{N}} $ in $ \mathbb{S}(\mathcal{A}) $ such that $$ \forall n \in \mathbb{N}: \quad \zeta(a_{n}) \leq \| a_{n} b_{n} \| < \zeta(a_{n}) + \epsilon. $$ Next, observe that \begin{align} \forall n \in \mathbb{N}: \quad |\| a_{n} b_{n} \| - \| a b_{n} \|| &\leq \| a_{n} b_{n} - a b_{n} \| \\ &= \| (a_{n} - a) b_{n} \| \\ &\leq \| a_{n} - a \| \| b_{n} \| \\ &= \| a_{n} - a \|. \quad (\text{As $ \| b_{n} \| = 1 $ for all $ n \in \mathbb{N} $.}) \end{align} Hence, $ \displaystyle \lim_{n \to \infty} (\| a_{n} b_{n} \| - \| a b_{n} \|) = 0 $, from which it follows that $$ \zeta(a) \leq \| a b_{n} \| < \zeta(a_{n}) + 2 \epsilon $$ for all $ n \in \mathbb{N} $ sufficiently large. This yields $ \displaystyle \zeta(a) - 2 \epsilon \leq \liminf_{n \to \infty} \zeta(a_{n}) $, and as $ \epsilon $ is arbitrary, we obtain $ \displaystyle \zeta(a) \leq \liminf_{n \to \infty} \zeta(a_{n}) $. $ \quad \spadesuit $ By the two claims, $ \displaystyle \lim_{n \to \infty} \zeta(a_{n}) = \zeta(a) $. Therefore, as $ a $ is arbitrary, we conclude that $ \zeta $ is a continuous function. (c) Let $ (a_{n})_{n \in \mathbb{N}} $ be a sequence in $ \mathcal{G}(\mathcal{A}) $ that converges to some $ a \in \mathcal{A} $. We claim that if $ (\| a_{n}^{-1} \|)_{n \in \mathbb{N}} $ is a bounded sequence in $ \mathbb{R}_{+} $, then $ a \in \mathcal{G}(\mathcal{A}) $. Indeed, suppose that $ (\| a_{n}^{-1} \|)_{n \in \mathbb{N}} $ is bounded above by $ M > 0 $. 
Then \begin{align} \forall m,n \in \mathbb{N}: \quad \| a_{m}^{-1} - a_{n}^{-1} \| &= \| a_{m}^{-1} a_{n}^{-1} (a_{n} - a_{m}) \| \\ &\leq \| a_{m}^{-1} \| \| a_{n}^{-1} \| \| a_{n} - a_{m} \| \\ &\leq M^{2} \| a_{n} - a_{m} \|. \end{align} As $ (a_{n})_{n \in \mathbb{N}} $ is a Cauchy sequence in $ \mathcal{A} $, it follows that $ (a_{n}^{-1})_{n \in \mathbb{N}} $ is also a Cauchy sequence in $ \mathcal{A} $. By the completeness of $ \mathcal{A} $, we see that $ \displaystyle \lim_{n \to \infty} a_{n}^{-1} = b $ for some $ b \in \mathcal{A} $. Then as multiplication in $ \mathcal{A} $ is a jointly continuous binary operation on $ \mathcal{A} $, we obtain \begin{align} ba = \lim_{n \to \infty} a_{n}^{-1} a_{n} = \mathbf{1}_{\mathcal{A}}, \\ ab = \lim_{n \to \infty} a_{n} a_{n}^{-1} = \mathbf{1}_{\mathcal{A}}. \end{align} Therefore, $ a $ is invertible and $ \displaystyle \lim_{n \to \infty} a_{n}^{-1} = a^{-1} $. Now, as $ \mathcal{G}(\mathcal{A}) $ is known to be an open subset of $ \mathcal{A} $, we have $ \partial(\mathcal{G}(\mathcal{A})) = \text{cl}(\mathcal{G}(\mathcal{A})) \setminus \mathcal{G}(\mathcal{A}) $. Let $ a \in \partial(\mathcal{G}(\mathcal{A})) $. Then $ a \notin \mathcal{G}(\mathcal{A}) $ and there exists a sequence $ (a_{n})_{n \in \mathbb{N}} $ in $ \mathcal{G}(\mathcal{A}) $ that converges to $ a $. By the previous paragraph, $ (\| a_{n}^{-1} \|)_{n \in \mathbb{N}} $ is necessarily an unbounded sequence in $ \mathbb{R}_{+} $. Question raised by the OP in his comment below: Is every $ a \in \partial(\mathcal{G}(\mathcal{A})) $ a left topological zero-divisor? The answer is ‘yes’. Fix $ a \in \partial(\mathcal{G}(\mathcal{A})) $, and let $ (a_{n})_{n \in \mathbb{N}} $ be a sequence in $ \mathcal{G}(\mathcal{A}) $ converging to $ a $ such that $ \displaystyle \lim_{n \to \infty} \| a_{n}^{-1} \| = \infty $. Then for all $ n \in \mathbb{N} $, we have \begin{align} \left\| a \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \right\| &\leq \left\| a \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} - a_{n} \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \right\| + \left\| a_{n} \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \right\| \quad (\text{By the Triangle Inequality.}) \\ &= \left\| (a - a_{n}) \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \right\| + \left\| \frac{\mathbf{1}_{\mathcal{A}}}{\| a_{n}^{-1} \|} \right\| \\ &\leq \| a - a_{n} \| \cdot \left\| \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \right\| + \left\| \frac{\mathbf{1}_{\mathcal{A}}}{\| a_{n}^{-1} \|} \right\| \\ &= \| a - a_{n} \| + \frac{1}{\| a_{n}^{-1} \|}. \end{align} As the last line converges to $ 0 $ as $ n \to \infty $, we obtain $ \displaystyle \lim_{n \to \infty} a \cdot \frac{a_{n}^{-1}}{\| a_{n}^{-1} \|} = \mathbf{0}_{\mathcal{A}} $. Finally, as $ \dfrac{a_{n}^{-1}}{\| a_{n}^{-1} \|} \in \mathbb{S}(\mathcal{A}) $ for each $ n \in \mathbb{N} $, we conclude that $ a $ is a left topological zero-divisor.
Is there an inference rule with premises $\neg X$ and $\neg X \vee Y$ and conclusion $Y$?
No, that would not be valid: $\neg X,\; \neg X\vee Y\;\nvdash\; Y$. The premises only say: "$X$ is definitely false, and $Y$ may or may not be true." So they do not let us infer that $Y$ is true. Concretely, the valuation making both $X$ and $Y$ false satisfies both premises but not the conclusion. However, $\neg X,\;\neg X\wedge Y \;\vdash\; Y$ is valid: under the assumption that $\neg X\wedge Y$ is true, $Y$ is in particular true. Which did you actually mean?
Estimating $\beta_o$ and $\beta_1$ with Weighted Least Squares with Logit link
If $Y = X\beta+\epsilon$ with $var(Y_i)=\sigma^2/w_i$ and $cov(Y_i, Y_j)=0$, then $cov(Y)=\sigma^2 \times diag(1/w_1,...,1/w_n)$; as such the "correction" matrix should be of the form $V^{-1/2}=diag(\sqrt{w_1},...,\sqrt{w_n})$, because then $$ cov(V^{-1/2}Y)=V^{-1/2}cov(Y)V^{-1/2}=\sigma^2 I. $$ Therefore, the WLS estimator is $$ \beta_{WLS} = (X'V^{-1}X)^{-1}X'V^{-1}y, $$ where $V^{-1} =diag(w_1,...,w_n)$ and $X$ is the usual design matrix of the form $X=[\mathbb{\vec{1}}, \mathbb{\vec{x}}]$. After some algebra you should get $$ \beta_{WLS} = \begin{pmatrix} \sum w_i & \sum x_iw_i\\ \sum x_iw_i & \sum x_i^2w_i \end{pmatrix}^{-1} \begin{pmatrix} \sum w_iy_i\\ \sum x_iw_iy_i \end{pmatrix}. $$ I guess you can compare it to your final result to double-check the calculations.
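As a numerical sanity check, here is a minimal sketch with simulated data (all names and the data-generating choices are hypothetical) comparing the matrix form with the summation form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)           # weights w_i, var(Y_i) = sigma^2 / w_i
y = 1.0 + 2.0 * x + rng.normal(size=n) / np.sqrt(w)

X = np.column_stack([np.ones(n), x])        # design matrix [1, x]
Vinv = np.diag(w)                           # V^{-1} = diag(w_1, ..., w_n)

# Matrix form: beta = (X' V^{-1} X)^{-1} X' V^{-1} y
beta_matrix = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# Summation form from the display above
A = np.array([[w.sum(),       (x * w).sum()],
              [(x * w).sum(), (x**2 * w).sum()]])
b = np.array([(w * y).sum(), (x * w * y).sum()])
beta_sums = np.linalg.solve(A, b)

assert np.allclose(beta_matrix, beta_sums)  # the two forms agree
print(beta_matrix)
```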
Is this model a linear regression model?
$$Y_i = \log_{10}(\beta_1) + \log_{10}(X_{i1}) + \beta_2X_{i2} + \epsilon_i$$ is not linear in the parameters, since $$\frac{\partial Y_i}{\partial \beta_1} = \frac{1}{\beta_1\ln(10)}$$ still depends on $\beta_1$. But $$Y_i = \beta_3+ \log_{10}(X_{i1}) + \beta_2X_{i2} + \epsilon_i$$ is linear. When done, set $\beta_1=10^{\beta_3}$.
Legendre polynomials: show by induction that $P_n(1) = 1$
Simply: take any $n\in\Bbb N$, $n\ge 2$, and assume $P_{n}(1)=P_{n-1}(1)=1$ (the base cases hold since $P_0(x)=1$ and $P_1(x)=x$). Then for $x=1$ the recurrence gives $(n+1)P_{n+1}(1)-(2n+1)+n=0$, hence trivially $P_{n+1}(1)=1$, completing the induction.
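For reference, the recurrence being used is Bonnet's recursion $$(n+1)P_{n+1}(x)=(2n+1)\,x\,P_n(x)-n\,P_{n-1}(x),$$ which at $x=1$, under the induction hypothesis, reads $(n+1)P_{n+1}(1)=(2n+1)-n=n+1$.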
Ext preserves the tensor
As said in the comments, we need $C$ of rank 1. Otherwise $C=\mathcal{O}_X^{\oplus 2}$ is a counter-example whenever $\operatorname{Ext}^1(A,B)\neq 0$. But if $C$ is of rank 1, then it is invertible. In particular, the functors $B\mapsto\operatorname{Hom}(A,B)$ and $B\mapsto\operatorname{Hom}(A\otimes C,B\otimes C)$ are naturally isomorphic. Thus, so are their derived functors.
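Concretely, for invertible $C$ the natural isomorphism can be written as $$\operatorname{Hom}(A\otimes C,\,B\otimes C)\;\cong\;\operatorname{Hom}(A,\,B\otimes C\otimes C^{-1})\;\cong\;\operatorname{Hom}(A,B),$$ using the tensor-hom adjunction and $C\otimes C^{-1}\cong\mathcal{O}_X$.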