Number of homomorphisms from a number field to $\mathbb{R}$
There's no simple answer. In general, if $\alpha$ is a primitive element of $K$ with minimal polynomial $f$ over $\mathbb{Q}$, then there is a homomorphism $K\to \mathbb{R}$ for each root of $f$ in $\mathbb{R}$. There are at most $\deg f$ roots in $\mathbb{R}$, but there could be fewer, since $\mathbb{R}$ is not algebraically closed. The number of homomorphisms $K\to\mathbb{R}$ is commonly written $r_1$, and $[K:\mathbb{Q}]-r_1$ is commonly written $2r_2$ (this difference is always even, since nonreal roots of $f$ in $\mathbb{C}$ come in complex conjugate pairs).
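For a concrete illustration (a standard example, not taken from the question): for $K=\mathbb{Q}(\sqrt[3]{2})$ the minimal polynomial $f(x)=x^3-2$ has exactly one real root, so $$r_1=1,\qquad 2r_2=[K:\mathbb{Q}]-r_1=3-1=2.$$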
How to find the left inverse of a piece wise defined function
So we are given: $$ f(x) = \cases { x-1 & x $\text{ even}$ \\ 2x & x $\text { odd}$ \\ } $$ Now, you can see this function is injective. Moreover, we claim it is also surjective, and this is the reason why: Let $l$ be an integer. Suppose $l$ is odd; then $l+1$ is even, right? And from the definition of $f$, we see that $f(l+1) = (l+1)-1 = l$. So, for $l$ odd, we would like the inverse to be $l+1$. Let $l$ be even; then $\frac{l}{2}$ is well-defined, and $f(\frac l2) = 2 \times \frac l2 = l$. So for $l$ even, we would like the inverse to be $\frac l2$. Hence, we see that $f$ is surjective. The inverse mapping is read off from the surjectivity argument: $$ g(l) = \cases{ l+1 & $l \text{ odd}$ \\ \frac l2 & $l \text{ even}$ \\ } $$ Why has the flipping happened? Simple. Both operations (adding $1$, multiplying by $2$) change the odd/even parity of the number (multiplying by $2$ happens only for odd numbers in the definition of $f$, so all numbers change parity). Any inverse will have to restore this parity (because you get back the same number, so the parity has to be the same, right?). Hence, any odd number will have to go to an even number, and every even number will have to go to an odd number (upon left-composition). We will verify the inversion. Suppose $l$ is odd; then we get $f(l) = 2l$, which is even, so that $g(f(l))= l$. Suppose $l$ is even; then we get $f(l) = l -1$, which is odd, so that $g(f(l))= l$. Hence, $g$ is the correct left inverse of $f$.
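If it helps, here is a minimal sketch in Python that checks the left-inverse property on a range of integers (the names `f` and `g` just mirror the answer above):

```python
def f(x):
    # the given function: even inputs lose 1, odd inputs are doubled
    return x - 1 if x % 2 == 0 else 2 * x

def g(l):
    # the proposed left inverse: odd inputs gain 1, even inputs are halved
    return l + 1 if l % 2 == 1 else l // 2

# g(f(x)) == x should hold for every integer x
assert all(g(f(x)) == x for x in range(-100, 100))
print("g is a left inverse of f on the tested range")
```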
Double integration problem, how to integrate $e^{x^2}$?
Of course, there are several methods to solve the double integral. Among them, using polar coordinates is one of the easiest: $$\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^ye^{-\frac{1}{2}\left(x^2+y^2\right)}dxdy= \frac{1}{2\pi}\int_{\frac{\pi}{4}}^{\frac{5\pi}{4}}\int_{0}^{\infty}e^{-\frac{r^2}{2}}r dr d\theta$$ $\int_{0}^{\infty}e^{-\frac{r^2}{2}}r dr =-\left[e^{-\frac{r^2}{2}} \right]_0^\infty=1\:\:$ and $\:\:\int_{\frac{\pi}{4}}^{\frac{5\pi}{4}}d\theta=\pi$ $$\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^ye^{-\frac{1}{2}\left(x^2+y^2\right)}dxdy=\frac{1}{2\pi}\pi=\frac{1}{2}$$ The area of integration is made obvious below: (in Cartesian coordinates, $\:x\:$ varies from $\:-\infty\:$ to $\:x=y\:$, so the area of integration is all the left side of the line $\:y=x\:$ ).
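As a quick numerical sanity check (a reformulation, not used in the answer: the integral is $P(X<Y)$ for independent standard normal $X,Y$), a Monte Carlo estimate should come out near $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
y = rng.standard_normal(10**6)
print((x < y).mean())  # should be close to 0.5
```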
If $A$ is a simple finite dimensional $\mathbb{C}$-algebra then $A\cong M_n(\mathbb{C})$
The field $\mathbb{C}$ is not important. The statement is true for any algebraically closed field $F$. Take a simple right $A$-module $V$. This is a finite dimensional vector space over $F$, because it is a quotient of $A$ modulo some maximal right ideal. Therefore its endomorphism ring is a finite dimensional division algebra over $F$. Since $F$ is algebraically closed the dimension must be $1$, so the endomorphism ring is $F$. Also, the annihilator of $V$ in $A$ must be $\{0\}$, because $A$ is simple, so $V$ is faithful. Thus Wedderburn-Artin allows us to conclude that $A$ is isomorphic to $M_n(F)$, where $n=\dim_FV$ and the isomorphism is easily checked to be an $F$-algebra isomorphism.
Find integral $\int_0^1 \frac{\ln x}{1 - x^2} \mathrm{d}x$
$$\int_0^1 \frac{\ln x}{1 - x^2} \mathrm{d}x$$ Note that for $|x|<1$: $$\sum_{n=0}^{\infty} x^{n}=\frac{1}{1-x}$$ Hence for $|x^2|<1$: $$\sum_{n=0}^{\infty} x^{2n}=\frac{1}{1-(x^2)}$$ So we get: $$=\int_{0}^{1} \sum_{n=0}^{\infty} x^{2n}\log x dx$$ $$=\sum_{n=0}^{\infty} \int_{0}^{1} x^{2n} \log x dx$$ Integrating by parts, $$=\sum_{n=0}^{\infty} -\frac{1}{(2n+1)^2}$$ $$=-\sum_{n=1}^{\infty} \frac{1}{(2n-1)^2}$$ Now think about evens and odds to see that this is equivalently: $$=-\left( \sum_{n=1}^{\infty} \frac{1}{n^2}-\sum_{n=1}^{\infty} \frac{1}{(2n)^2}\right)$$ $$=-\frac{3}{4} \sum_{n=1}^{\infty} \frac{1}{n^2}=-\frac{3}{4}\cdot\frac{\pi^2}{6}=-\frac{\pi^2}{8}$$
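A quick numerical check of the final value (an illustration, not part of the derivation):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.log(x) / (1 - x**2), 0, 1)
print(val, -np.pi**2 / 8)  # both approximately -1.2337
```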
About $Im T^k \cap Ker T^k$ for a linear operator
Suppose $M_k \cap N_k = \{0\}$ and there exists $a \in N_{k + 1}$ with $a \notin N_k$. Then $T^k(a) \ne 0$ and $T^k(a) \in M_k$; moreover $T^{k+1}(a)=0$, so $T^k(a) \in \ker T \subseteq N_k$. Thus $T^k(a)$ is a nonzero element of $M_k \cap N_k$. Contradiction.
Determine whether a sequence is bounded and whether it is monotonic.
With just a hint of division, $\frac{2n^2+1}{n^2+1}$ becomes $2-\frac{1}{n^2+1}$ from which everything is much clearer.
The value of $\frac{(-1+i\sqrt 3)^{15}}{(1-i)^{20}}+\frac{(-1-i\sqrt 3)^{15}}{(1+i)^{20}}$ is
It is $$(-1+i\sqrt{3})^{15}=32768$$ and $$(1-i)^{20}=-1024.$$ Taking complex conjugates, $(-1-i\sqrt 3)^{15}=32768$ and $(1+i)^{20}=-1024$ as well, so the expression equals $\frac{32768}{-1024}+\frac{32768}{-1024}=-64$.
Efficient computation of conjugacy classes of a small group.
From the defining relations, try to get a description of all the elements. Any element is a "word" involving $a$ and $b$. The last condition $b^{-1}ab = a^2$ can be rewritten as $ab= ba^2$. This modified version says that whenever $a$ comes ahead of $b$, it can be pushed behind $b$, but as $a^2$. So all words in $a$ and $b$ can be reformulated to have $b$'s in front and $a$'s at the end. As $a^9=b^6=1$, any element can be written as $b^m a^n$ with $m=0,1,2,\ldots, 5, \ n=0,1,2,\ldots, 8$, giving 54 possibilities. Now try to describe conjugates of elements of type $a^k$, then $b^k$ and then $a^k b^l$ by any $g$ ($g$ has to be of the form $b^m a^n$ as above). It is doable.
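For what it's worth, here is a brute-force sketch in Python of the computation the answer describes, with elements encoded as pairs $(m,n)$ standing for $b^m a^n$ (the encoding and multiplication rule below are my own bookkeeping derived from $ab=ba^2$, not part of the original answer):

```python
# Multiplication in <a, b | a^9 = b^6 = 1, b^-1 a b = a^2>, using
# a^n b^p = b^p a^(n * 2^p mod 9) to push a's past b's.
def mul(g, h):
    (m, n), (p, q) = g, h
    return ((m + p) % 6, (n * pow(2, p, 9) + q) % 9)

elements = [(m, n) for m in range(6) for n in range(9)]
identity = (0, 0)

def inv(g):
    # brute-force inverse search in a group of order 54
    return next(h for h in elements if mul(g, h) == identity)

classes, seen = [], set()
for g in elements:
    if g not in seen:
        cls = frozenset(mul(mul(inv(x), g), x) for x in elements)
        classes.append(cls)
        seen |= cls

print(len(classes), sorted(len(c) for c in classes))
```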
show that a matrix a that is similar to an invertible matrix B is itself invertible. More generally, show that similar matrices have the same rank.
Product of invertible matrices is invertible. So if $PBP^{-1}$ is invertible then so is $P^{-1}PBP^{-1}$, and so is $P^{-1}PBP^{-1}P$. The latter is equal to $B$. For the rank statement, note that multiplying by an invertible matrix does not change the rank, so $\operatorname{rank}(PBP^{-1})=\operatorname{rank}(B)$.
Limit related to $ f(x) = \prod_{i = 1}^x \left( \sin\left( i \frac{\pi}{n}\right) + \frac{5}{4}\right) $?
Using $$ \begin{align} \sin\left(\frac{k\pi}n\right)+\frac54 &=\frac{e^{ik\pi/n}-e^{-ik\pi/n}}{2i}+\frac54\tag1\\ &=\frac{e^{-ik\pi/n}}{2i}\left(e^{ik\pi/n}+2i\right)\left(e^{ik\pi/n}+i/2\right)\tag2 \end{align} $$ we get $$ \begin{align} \prod_{k=1}^{2n-1}\left(\sin\left(\frac{k\pi}n\right)+\frac54\right) &=\frac45\prod_{k=0}^{2n-1}\color{#00F}{\frac{e^{-ik\pi/n}}{2i}}\color{#C00}{\left(e^{ik\pi/n}+2i\right)}\color{#090}{\left(e^{ik\pi/n}+i/2\right)}\tag3\\ &=\frac45\color{#00F}{\frac{(-1)^{n-1}}{2^{2n}}}\color{#C00}{\left[z^{2n}-1\right]_{z=2i}}\color{#090}{\left[z^{2n}-1\right]_{z=i/2}}\tag4\\[3pt] &=\frac45\frac{(-1)^{n-1}}{2^{2n}}\left(2-(-1)^n\left(2^{2n}+2^{-2n}\right)\right)\tag5\\[6pt] &=\frac45+\color{#90F}{\frac45\left((-1)^{n-1}2^{1-2n}+2^{-4n}\right)}\tag6\\[9pt] &=\frac45+\color{#90F}{C(n)}\tag7 \end{align} $$ Explanation: $(3)$: $\frac45$ of the $k=0$ term is $1$; apply $(2)$ $(4)$: $\prod\limits_{k=0}^{2n-1}\!\left(z+e^{ik\pi/n}\right)=z^{2n}-1$ $(5)$: evaluate at $z=2i$ and $z=i/2$ $(6)$: simplify $(7)$: the product is $\frac45+C(n)$ Thus, $$ \begin{align} C(n) &=\frac45\left((-1)^{n-1}2^{1-2n}+2^{-4n}\right)\tag8\\[6pt] &=O\!\left(4^{-n}\right)\tag9 \end{align} $$
Circle Packing: Unsolved Problem in Geometry?
Slightly old, but the only proved (circle in circle) optimal packings are for $n \leq 20$ http://hydra.nat.uni-magdeburg.de/packing/cci/
Can $\mathbb{Z}_8$ be the commutator subgroup of a group?
If $D_n$ is the dihedral group of symmetries of a regular $n$-gon, $$D_n=\langle r,s\mid r^n=s^2=1, srs^{-1}=r^{-1}\rangle,$$ then it is easy to show that $[D_n,D_n]=\langle r^2\rangle.$ Can you get the cyclic group of order $8$ out of this?
How to lift maps going into the base of a fiber bundle?
Define the pullback of the fiber bundle as $f^*E = \{(x,e) \in B' \times E \mid f(x)=p(e)\}$. This is a fiber bundle over $B'$, and a section of this fiber bundle is the same thing as a lift of the map $f$. The question of whether a fiber bundle admits a section does not have as clean an answer as the lifting criterion for covering spaces. The answer to this question goes by the name of "obstruction theory", and is complicated enough that I'm not going to describe it here. One source I like for this is Davis & Kirk's algebraic topology book, but if I remember correctly, Hatcher talks briefly about it.
Let $G$ be a group with identity element $e$, let $a \in G$ have order $n$. Let $m$ be an integer then prove that $a^m =e$ iff $n$ divides $m$
Hint $\ H := \{m\in\Bbb Z\ :\ a^m = 1\}\,$ is closed under subtraction, so is a subgroup $\,\{0\}\subsetneq H\subseteq \Bbb Z.\,$ Therefore $\,H = n\Bbb Z\,$ where $\,n = $ least element $>0\,$ of $\,H.\,$ So $\,a^m = 1\!\iff\! m\in n\Bbb Z\!\iff\! n\mid m$ Remark $ $ Though it is easy to verify the set forms a subgroup by said subgroup test, it is more conceptual to do so by viewing $H$ as the kernel of the group hom: $\, m\mapsto a^m\,$ from $\,\Bbb Z\to G,\,$ presuming that viewpoint is already known.
Simple equation solving
$\textbf{Too long for a comment}$ Presumably you are trying to get $$ \mathbf{M} =\begin{pmatrix} 3x+y & 2x \\ x & -4x-y \\ \end{pmatrix}? $$ (assuming that you transform the "4-tuple" back to a 2$\times$2 matrix as you did for the other one). Now, by linear do you mean $$ \mathbf{M} = \mathbf{A}x + \mathbf{B}y $$ where $$ \mathbf{A} = \begin{pmatrix} 3 & 2 \\ 1 & -4 \\ \end{pmatrix},\\ \mathbf{B} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix} $$ and $x$ and $y$ are just scalars? This achieves the desired result. Or are you working with $\mathbf{x} = \begin{pmatrix} x \\ y \\ \end{pmatrix}$ and performing a linear operation? My comment above contains a lot of assumptions.
Show that this sequence of events is independent (b-adic expansions)
I add a bit of notation to write $$ A_{i, \varepsilon} = \cup_{k=0}^{b^{i-1}-1} \left[ \frac{kb+\varepsilon}{b^i},\frac{kb+\varepsilon+1}{b^i} \right) =: \cup_{k=0}^{b^{i-1}-1} A_{i,\varepsilon}^{(k)}. $$ Evidently $$ P(A_{i,\varepsilon} ) = \sum_{k=0}^{b^{i-1}-1} \frac{1}{b^i} = \frac{1}{b}, $$ for each $i$. We also see that for each $i$, $$ \cup_{\varepsilon=0}^{b-1} A_{i,\varepsilon}= [0,1). $$ Now fix $\varepsilon_0 \in \{0,...,b-1\}$. We wish to show that for all natural numbers $i_1< \cdots < i_n$ $$ P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_n,\varepsilon_0}) = \frac{1}{b}P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0}). $$ According to the above $$ P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0}) = \sum_{\varepsilon=0}^{b-1} P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon} ) $$ Now the aim is to show that $P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon} ) = P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon'} ) $ for all $\varepsilon,\varepsilon'$. The reason this holds is that for $j > i$, either $$\cup_{\varepsilon=0}^{b-1} A_{j,\varepsilon}^{(r)} \subset A_{i,\varepsilon'}^{(k)}$$ or $$ \left[ \cup_{\varepsilon=0}^{b-1} A_{j,\varepsilon}^{(r)}\right] \cap A_{i,\varepsilon'}^{(k)} = \emptyset. $$ This is a straightforward calculation, and also describes the essence of why this result is true: the intervals $A_{j,\varepsilon}^{(r)}$ are nested and maintain their relative size within each other for increasing $j$. Notice that this implies that for any $\varepsilon$, $P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon} )$ reduces down to a sum of the probabilities of intervals of the form $A_{i_n,\varepsilon}^{(k)}$, namely the sum over those intervals that intersect $ A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0}$. Since those by the above must coincide in a 1-1 correspondence with the intervals $A_{i_n,\varepsilon'}^{(k)}$ for all other $\varepsilon' \in \{0,...,b-1\}$ that intersect $A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0}$, we must have $$P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon} ) = P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon'} ) $$ as needed. Hence $$ P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0}) = b P(A_{i_1,\varepsilon_0}\cap \cdots \cap A_{i_{n-1},\varepsilon_0} \cap A_{i_{n},\varepsilon_0} ). $$
If $f$ is analytic in some punctured disk and if $|Re f(z)|$ is bounded in a sub disk then it has a removable singularity
Yes. If $\operatorname{Re} f(z) < M$ in the punctured disc then you can apply Riemann's theorem to $g = T \circ f$ where $T$ is a Möbius transformation mapping the half plane $ \{ \operatorname{Re} w < M \} $ onto the unit disk.
The matrix of an isometry has orthonormal columns
Suppose that $T$ is an isometry. Then by (4) there is an orthonormal basis $e_1,\dots,e_n$ of $V$ such that $Te_1,\dots,Te_n$ is also an orthonormal basis. Theorem 6.30 gives us that $$(\forall j \in \{1,\dots,n\}) \quad Te_j = \langle Te_j,e_1 \rangle e_1 + \cdots + \langle Te_j,e_n \rangle e_n$$ so, the $j$-th column of the associated matrix is $$a_j := \begin{pmatrix} \langle Te_j,e_1 \rangle \\ \vdots \\ \langle Te_j,e_n \rangle \end{pmatrix} \in F^{n \times 1}.$$ Then, as you say, if we consider the Euclidean inner product on $F^{n \times 1} \cong F^n$ we have \begin{align} \langle a_j, a_k \rangle &= \sum_{i=1}^n \langle Te_j,e_i \rangle \overline{\langle Te_k,e_i \rangle} \\ &= \sum_{i=1}^n \langle Te_j, \langle Te_k,e_i \rangle e_i \rangle \qquad (\langle -,\lambda v \rangle = \overline \lambda \langle -,v \rangle) \\ &= \left\langle Te_j, \sum_{i=1}^n \langle Te_k,e_i \rangle e_i \right\rangle = \langle Te_j, Te_k \rangle \end{align} so $a_1,\dots,a_n$ is indeed an orthonormal list. Conversely, if there is an orthonormal basis $e_1,\dots,e_n$ of $V$ such that the associated matrix $A$ of $T$ has orthonormal columns, then $A^*A = I$ (if $B$ and $C$ are square matrices, the $(i,j)$-th entry of $B^*C$ is precisely the Euclidean inner product of the $j$-th column of $C$ with the $i$-th column of $B$) and theorem 7.10 now gives us $T^*T=I$, so $T$ is an isometry by (5).
Showing a function goes to zero exponentially fast
Substituting $x = \mu (\rho-1)a$, you're looking for $C_1$ and $C_2$ such that \begin{equation} \frac{x e^{-x}}{\rho - e^{-x}} \le C_1 e^{-C_2 x} \end{equation} Taking the natural log of both sides and rearranging gives \begin{equation} \ln x - \ln\left[C_1(\rho - e^{-x})\right] \le (1 - C_2) x \end{equation} So as long as $C_2 < 1$, your function will be bounded above by $C_1 e^{-C_2 x}$ for sufficiently large $x$.
Question on Transitive Set.
First note that $T'=\varnothing$ is transitive and it is a subset of every other set. The more likely situation is that you are asked to find a transitive $T'$ such that $T\subseteq T'$. Let $T_0=T$ and $T_{n+1}=\{x\mid\exists y\in T_n: x\in y\}$. Now take $T'=\bigcup_{n=0}^\infty T_n$, and show it is transitive. This $T'$ is called the transitive closure of $T$. Generally speaking if $(X,R)$ is an ordered set we can talk about the transitive closure of $R$, and in the context of sets we simply take the transitive closure of $(T,\in)$.
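Here is a minimal sketch of the construction in Python, with hereditarily finite sets encoded as nested frozensets (a hypothetical encoding chosen just for illustration):

```python
def transitive_closure(T):
    # T' = T_0 ∪ T_1 ∪ ..., where T_{n+1} collects elements of elements of T_n
    closure, frontier = set(T), set(T)
    while frontier:
        frontier = {x for y in frontier for x in y} - closure
        closure |= frontier
    return closure

zero = frozenset()                   # 0 = {}
one = frozenset([zero])              # 1 = {0}
two = frozenset([zero, one])         # 2 = {0, 1}
print(transitive_closure({two}))     # contains 2, 1 and 0
```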
Why we can get the general solution by calculating the eigenvectors of Jacobian Matrix
Such a method is known as linearization. Solutions of the linearized equations are not solutions of the true ODE unless the ODE is linear. However, the solution of the linearized equations about a critical point is topologically equivalent to the true solution in a neighborhood of the critical point by the Hartman-Grobman theorem, provided the critical point is hyperbolic (no eigenvalue of the Jacobian has zero real part).
If the radius of convergence of the power series $a$ is positive then it "reciprocal" power series have positive radius of convergence
I found a solution that doesn't use continuity theorems, just basic properties of convergent sequences, so it fits well in the context where it is asked. (a) If $ab=1$ it must be the case that $a_0=b_0=1$ and $$ c_n:=\sum_{k=0}^n a_k b_{n-k}=0,\quad\forall n\in\Bbb N_{\ge 1}\tag{1} $$ From (1) we have that $$ 0=\sum_{k=1}^{n-1}a_kb_{n-k}+b_na_0+a_nb_0\implies b_n=-a_n-\sum_{k=1}^{n-1}a_kb_{n-k}\tag{2} $$ From (2) it is clear that the coefficients $b_n$ are well defined, hence $b$ exists for all $a$ such that $a_0=1$. (b) Let $a:=\sum a_k X^k$ be a formal power series with radius of convergence $\rho_a>0$; then $\underline a(x):=\sum_{k=0}^\infty a_k x^k$ is well-defined for any $x\in\rho_a\Bbb B$ and we have $$ \left|\sum_{k=0}^\infty a_kx^k\right|\le \sum_{k=0}^\infty |a_k|r^k=M<\infty,\quad\forall x\in r\Bbb B,\text{ with }0<r<\rho_a\tag3 $$ Then suppose that there is an $\epsilon\in(0,r)$ such that $$ \sum_{k=1}^\infty |a_k|\epsilon^k\le1\tag4 $$ From $(2)$ we can write the bound $$ |b_n|\le \sum_{k=1}^n|a_kb_{n-k}|\tag5 $$ Now from (4) and (5) we want to prove by induction that $|b_n|\le \epsilon^{-n}$. Observe that $b_0=1$, so the base case holds. Now assume that $|b_n|\le \epsilon^{-n}$; then from $(5)$ we find that $$ |b_{n+1}|\le \sum_{k=1}^{n+1}|a_k|\epsilon^{-n-1+k}\le \epsilon^{-n-1}\tag6 $$ where the second inequality is a consequence of $(4)$. Thus by the Hadamard formula we find that $$ \rho_b=\frac1{\limsup\sqrt[n]{|b_n|}}\ge \epsilon>0\tag7 $$ So we only need to prove that such an $\epsilon>0$ exists. Because $\sum_{k=0}^\infty |a_k| r^k$ converges for some $r\in(0,\rho_a\wedge 1)$, there is some $N$ such that $\sum_{k=N+1}^\infty |a_k|r^k< 1/2$. Set $K:=\max_{k\le N}|a_k|$; then we have the bound $$ \begin{align}\sum_{k=1}^\infty|a_k| x^k&\le \sum_{k=1}^N |a_k| x^k+\frac12\\ &\le K\sum_{k=1}^N x^k+\frac12\\ &\le K\frac{x}{1-x}+\frac12,\quad\text{for }x\in[0,r)\end{align}\tag8 $$ Then for any $\epsilon\in(0,r\wedge 1/(2K+1))$ we find that $$ \sum_{k=1}^\infty|a_k|\epsilon^k\le K\frac{\epsilon}{1-\epsilon}+\frac12\le1\tag9 $$ as desired.
change of basis conflicted definitions
Maybe this helps: As written in the wiki page, let $E$ be a $K$-vector space of dimension $d$. Let us for simplicity assume that $K$ is either $\mathbb{R}$ or $\mathbb{C}$. Now, the first thing to understand is that $E$ need not be $\mathbb{R}^d$ nor $\mathbb{C}^d$; that is, the elements of $E$ could be anything as long as the axioms of a vector space are fulfilled. So, for instance, $E$ could be the space of polynomials of degree strictly less than $d$ over the real numbers. However, one awesome result of linear algebra is that this vector space is still isomorphic to $\mathbb{R}^d$. In other words, you can uniquely identify every polynomial in that vector space with a $d$-tuple of real numbers. However, this identification depends on your choice of bases! And it is this identification that makes it possible to identify every linear map on a finite dimensional vector space with a matrix. Notice that this is not trivial. Now, to the concrete problem. I think the confusion comes from the fact that you wrote down an equality that isn't defined. But you want the following: $$ \begin{pmatrix}x_1' & x_2' & x_3' \end{pmatrix} = P_{B'\to B} \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}. $$ Notice that like this the dimensions fit. Suppose that $E=\mathbb{R}^3$; then on the left there is written a $3\times 3 $ matrix. And, if and only if $P_{B'\to B}$ is a $3\times 3$ matrix, then so is the right side a $3 \times 3$ matrix. In your case, that didn't work, because $\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}^T$ is a $9$-tuple. Now, the matrix $Mat_{B,B'}(Id_E)$ is constructed by taking its columns to be the basis elements of $B'$ written in terms of the basis $B$. So, in your case $$ Mat_{B,B'}(Id_E) = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 4 & 2 & 1 \end{pmatrix}. $$ You see that then indeed $$ P_{B' \to B} = Mat_{B,B'}(Id_E) \quad \text{or equivalently} \quad P_{B\to B'} = Mat_{B',B}(Id_E). $$
How to evaluate this integral? $\int^1_0\frac{(\frac{1}{2}-x)\ln (1-x)}{x^2-x+1}\mathrm{d}x$
First, substitute $y=1-x$: $$ \Rightarrow \int _0^1 \frac{(y-1/2)\log(y)}{y^2-y+1}\,dy $$Then use integration by parts with $u=\log(y)$: $$ =\left. \frac{1}{2}\log(y^2-y+1)\log(y)\right|_{0}^{1} -\frac{1}{2}\int_0^1\frac{\log(y^2-y+1)}{y}\,dy $$The boundary terms vanish (check this yourself). Note that $y^3+1=(y^2-y+1)(y+1)$, so we have $$ -\frac{1}{2}\int_0^1\frac{\log(y^2-y+1)}{y}\,dy = -\frac{1}{2}\int_0^1\frac{\log((y^3+1)/(y+1))}{y}\,dy $$ $$ = \frac{1}{2}\int_0^1\frac{\log(y+1)}{y}\,dy- \frac{1}{2}\int_0^1\frac{\log(y^3+1)}{y}\,dy $$Now invoke the Maclaurin series for $\log(1+z)$: $$ = \frac{1}{2} \int _0^1 \sum_{k=0}^{\infty} \frac{(-1)^k y^k}{k+1}\,dy-\frac{1}{2} \int _0^1 \sum_{k=0}^{\infty} \frac{(-1)^k y^{3k+2}}{k+1}\,dy $$Switch the integral and the sum and use the power rule: $$ = \frac{1}{2} \sum_{k=0}^{\infty} \int _0^1 \frac{(-1)^k y^k}{k+1}\,dy-\frac{1}{2} \sum_{k=0}^{\infty}\int _0^1 \frac{(-1)^k y^{3k+2}}{k+1}\,dy $$ $$ = \frac{1}{2} \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^2}-\frac{1}{2\cdot 3} \sum_{k=0}^{\infty} \frac{(-1)^k }{(k+1)^2} $$ $$ = \frac{1}{3} \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^2} $$If you buy that $\sum_{k=0}^{\infty} \frac{1}{(k+1)^2}=\frac{\pi^2}{6}$, you can convince yourself the alternating version sums to $\frac{\pi^2}{12}$. This gives the total of $\frac{\pi^2}{36}$.
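If you want to double-check the value numerically (an illustration only):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (0.5 - x) * np.log(1 - x) / (x**2 - x + 1)
val, err = quad(f, 0, 1)
print(val, np.pi**2 / 36)  # both approximately 0.27416
```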
Why does $ab=ba=1$ imply ${a_1}^2 + {a_2}^2 + {a_3}^2 + {a_4}^2 = 1$?
As Travis said in the comments, $$\| a b\| = \| a\| \| b\|$$, in this case we have $ab=1$ so we can write, $$ 1 = \|a\| \|b\|$$, and if we square both sides we can write, $$ 1 = (a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2).$$ All the $a_n$ and $b_n$ are integers. We then have that the product of two integers is equal to $1$. Under what circumstances is this possible?
What is the difference of regular functions and holomorphic functions
One reason is that you might want to work with varieties over fields other than the complex numbers. These varieties are not complex manifolds, so we can't talk of holomorphic functions, but regular functions still make sense. We can't use convergent power series either, because to do this we use the absolute value of $\mathbb C$. What does it mean for $f(z) = \sum_{i=0}^\infty a_i z^i$, where $a_i\in \mathbb F_p$ to be convergent at $z=c$? If we restrict ourselves to varieties over $\mathbb C$, there is some truth to the idea that it doesn't matter if we look at regular or holomorphic functions. Serre's GAGA theorem implies that for a projective variety, global meromorphic functions are the same as global rational functions. However, this very much breaks down if we try to look at local functions (or at functions on a variety which is not projective). For example, $\mathbb A^1_{\mathbb C}$ has plenty of holomorphic functions which are not regular in the sense of Hartshorne, like $e^x$. What GAGA tells us in this case is that the regular functions are exactly the holomorphic functions which are meromorphic at $\infty$ (unlike $e^x$ which has an essential singularity).
Proving minimum number of chairs is $567$
Let $r$ be the number of rows and $c$ be the number of columns. There are $14r$ boys and $10c$ girls along with $3$ empty seats. There are $rc$ seats in all. Thus, $14r+10c+3=rc$. Subtracting $14r+3$ from both sides shows us that $10c=(c-14)r-3$ or $10c \equiv -3 \pmod r$. Subtracting $10c+3$ from both sides shows us that $14r=(r-10)c-3$ or $14r \equiv -3 \pmod c$. Not sure if this will lead you to the solution, but hopefully, these equations help!
Functional analysis orthogonal sequence.
Your argument is correct: since$$\sum_{n=1}^\infty\langle x,e_n\rangle^2\leqslant\lVert x\rVert^2,$$$\lim_{n\to\infty}\langle x,e_n\rangle^2=0$ and therefore $\lim_{n\to\infty}\langle x,e_n\rangle=0$. The limit could not possibly be $1$ for each $x\neq0$. Take, for instance, $x=e_1$. Then the sequence $\bigl(\langle x,e_n\rangle\bigr)_{n\in\mathbb N}$ is just $1,0,0,0,\ldots$
Show that $\sum_{n \leq x} \frac{d(n)}{\log n} = x + 2E \frac{x}{\log x} + O\left( \frac{x}{\log ^2 x} \right).$
If we write $D(x)=\sum_{n\leq x}d(n)$ then by the Dirichlet hyperbola method we have: $$D(x)=x\log(x)+x(2\gamma -1)+\mathcal{O}(x^{1/2})$$ Moreover applying Abel's summation formula to your partial sum gives us: $$\sum_{2\leq n\leq x}\frac{d(n)}{\log(n)}=\frac{D(x)}{\log(x)}+\int_{2}^x\frac{D(t)}{t\log(t)^2}dt+\mathcal{O}\left(\frac{1}{\log(x)}\right)$$ Now note that: $$\frac{D(x)}{\log(x)}=x+2\gamma\frac{x}{\log(x)}-\frac{x}{\log(x)}+\mathcal{O}\left(\frac{x^{1/2}}{\log(x)}\right)$$ $$\frac{D(t)}{t\log(t)^2}=\frac{1}{\log(t)}+\frac{(2\gamma-1)}{\log(t)^2}+\mathcal{O}\left(\frac{t^{-1/2}}{\log(t)^2}\right)\implies \int_{2}^x\frac{D(t)}{t\log(t)^2}dt=\frac{x}{\log(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)$$ Therefore adding these together gives us: $$\frac{D(x)}{\log(x)}+\int_{2}^x\frac{D(t)}{t\log(t)^2}dt=x+2\gamma\frac{x}{\log(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)$$ Which by our first equality proves: $$\sum_{2\leq n\leq x}\frac{d(n)}{\log(n)}=x+2\gamma\frac{x}{\log(x)}+\mathcal{O}\left(\frac{x}{\log(x)^2}\right)$$
Theory of Equations and Undergraduate Mathematics
Complex analysis without excess concern for analytical niceties like interchange of integrals and sums and such, notoriously has a high ratio of interesting results to set-up costs. Applications to evaluation of integrals by residues, summing series by residues, as well as more serious things (zeta function, elliptic functions, modular forms) whose introductory aspects require little preparation. A version of "theory of equations" in which Lagrange resolvents are emphasized and cyclotomic polynomials used as significant example does not need much preparation after an awareness of complex numbers. As @JoeJohnson126 notes, homology with coefficients in $\mathbb Z/2$ avoids the issue of motivating or explaining the signs in boundary maps and orientations and such. Alexandroff wrote a very slim book on algebraic topology from this viewpoint. The non-Euclidean geometries of spheres and hyperbolic spaces can be studied with only a little linear algebra and ideas of calculus, rather than imbedding the topics in general Riemannian geometry and Lie theory. (Yes, it does seem to be the case that the undergrad math curriculum all-too-often acquires a dreary, tedious air, since the good stuff has been pushed later, on grounds of "logical ordering"... the problem being that this tends to push all the motivating examples years later, which is ridiculous.)
Proof that $a^5 b - b^5 a$ is divisible by $30$ for any integers $a$ and $b$
$a^5b-ab^5=ab(a^2+b^2)(a+b)(a-b)$. Now $a=3k$, $3k+1$ or $3k+2$ for some integer $k$, and $b=3l$, $3l+1$ or $3l+2$ for some integer $l$. Simply consider these cases to get divisibility by $3$; divisibility by $2$ and by $5$ follows in the same way (for $5$, it is quicker to note that $a^5\equiv a\pmod 5$ by Fermat's little theorem).
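Since divisibility by $30$ only depends on $a$ and $b$ modulo $30$, the claim can also be confirmed by an exhaustive check (a sanity check, not a replacement for the case analysis):

```python
# Check a^5*b - a*b^5 ≡ 0 (mod 30) over all residue pairs
assert all((a**5 * b - a * b**5) % 30 == 0
           for a in range(30) for b in range(30))
print("divisible by 30 for all residues")
```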
Limits of $\frac{e^{\frac{1}{x}}}{x(1-e^{\frac{1}{x}})}$ for $x\to0, \infty$
Notice that $1-e^{\frac1x} \sim -\frac1x$ as $x\to \infty$. Then $$\lim_{x\to\infty}\frac{e^{\frac1x}}{x(1-e^{\frac1x})} = \lim_{x\to\infty}\frac{e^{\frac1x}}{x\left(-\frac1x\right)} = \lim_{x\to\infty}-e^{\frac1x} = -1$$
Linear independence and the Wronskian
Suppose $c_1f_1+c_2f_2+c_3f_3=0$ has only the zero solution. Differentiate twice and write the three equations as a single matrix equation. Since the matrix admits only the zero solution the determinant of the coefficient matrix must be nonzero. This coefficient matrix is precisely the Wronskian's matrix. ( I usually call the determinant of the matrix you write the Wronskian )
Show $\lim_{a \rightarrow \infty} \int_0^1 {f(x)x\sin(ax^2)}=0$.
Suggestion: Split $[0, 1]$ into the intervals where $ax^2 = 2\pi n$, $ax^2 = \pi(2 n+1)$, $ax^2 = \pi(2 n+2)$. These are $I_{2n} =[\sqrt{\frac{2\pi n}{a}}, \sqrt{\frac{\pi(2 n+1)}{a}}) $ and $I_{2n+1} =[\sqrt{\frac{\pi(2 n+1)}{a}}, \sqrt{\frac{\pi(2 n+2)}{a}}) $. Since $\sin(ax^2) > 0$ in $I_{2n}$ and $\sin(ax^2) < 0$ in $I_{2n+1}$, show that the integral over $I_{2n} \cup I_{2n+1}$ goes to zero.
Fourier transform, why this gives incorrect answer?
The reason is that your derivative $x f(x) = - \frac{d}{da} f(ax)|_{a=1}$ is not quite valid because the function you are differentiating is discontinuous. If you interpret it in the distribution sense, you get an additional term of the form $\frac1e \delta_1(x)$, where $\delta_1$ is the Dirac distribution with mass at $x=1$. The Fourier transform of this extra term is exactly the difference between your solution and the correct one.
Regular Conditional Bayesian Experiment
This question is a duplicate of this one from stats.stackexchange, and has been answered there.
Dividing third derivative by second derivative
In Leibniz notation, the "d" is not a variable, but a function, and $d^2y$ does NOT refer to $d\cdot d\cdot y$ but rather to $d(d(y))$. Likewise, while the second derivative might be $\frac{d^2y}{dx^2}$, it is more formally $\frac{d^2y}{dx^2} - \frac{dy}{dx}\frac{d^2x}{dx^2}$, with $d^2x$ reducing to $0$ if $x$ is the independent variable. Thus, if $x$ is the independent variable, and you divide the third derivative by the second derivative, you would get: $$\frac{d^3y}{dx^3} \cdot \frac{dx^2}{d^2y} = \frac{d^3y}{dx\,d^2y}$$ This is because the real way you should think about it is: $$\frac{d(d(d(y)))}{(d(x))^3} \cdot \frac{(d(x))^2}{d(d(y))} = \frac{d(d(d(y)))}{d(x)\,d(d(y))}$$ If $x$ is not the independent variable, you would get a huge mess (the full expansions of higher derivatives get messier with each order). You can't reduce the nested $d()$ function any more than you could reduce a nested $\sin()$ function.
Show that if f is entire and $|f(z)|=1$ for all real numbers, then $f$ has no zeroes.
Well, let $f^*(z)=\overline{f(\overline{z})}$, which is entire and satisfies the same property as $f$. Now $g(z)=f(z)f^*(z)$ also satisfies $|g(x)|=1$ for $x \in \mathbb R$, but in addition $g(x)=|f(x)|^2 \ge 0$ for $x$ real; so $g=1$ (first for $x$ real, and then everywhere by the identity theorem), hence $f(z)f^*(z)=1$ for all $z$, so $f$ has no zeroes!
Finding closure of a set
This is the closure of a translate of the rational numbers. Since the rational numbers are dense in the reals, this closure is simply the reals.
Uniform convergence theorem fails when |E| =$\infty$
Choose $f_n= \frac1n \chi_{(0,n)}$. Then, $f_n \rightarrow f$ uniformly, where $f \equiv 0$. We have that for all $n$, $\int_{\Bbb R} f_n d \lambda = \frac1n \lambda((0,n)) = 1$, so $(f_n) \subset L^1(\Bbb R)$. But $\int_{\Bbb R} f d \lambda = 0$ and $\lim \int_{\Bbb R} f_n d \lambda = 1$.
Scale a Point onto Plane
You won't need the point $a$ for this, since you have the plane equation. To find the scale factor, look at the line through the origin defined by the vector $(x_2,y_2,z_2)$: $L=s[x_2,y_2,z_2]=[sx_2,sy_2,sz_2]$. You know the equation of the plane, so substitute the coordinates of this line into the plane equation and see which value of $s$ you need to satisfy it: $Asx_2+Bsy_2+Csz_2+D=0$. You can solve this for $s$, as it is the only variable: $s=\frac{-D}{Ax_2+By_2+Cz_2}$, provided the denominator is nonzero.
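A small numerical sketch (with a hypothetical sample plane and point, purely for illustration):

```python
# Plane A*x + B*y + C*z + D = 0 and a point defining the direction
A, B, C, D = 1.0, 2.0, -1.0, -4.0
x2, y2, z2 = 1.0, 1.0, 1.0

# From A*s*x2 + B*s*y2 + C*s*z2 + D = 0:
s = -D / (A * x2 + B * y2 + C * z2)
p = (s * x2, s * y2, s * z2)
print(s, A * p[0] + B * p[1] + C * p[2] + D)  # residual should be ~0
```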
Projections in Linear Algebra
No, for a subspace $U$, there can be more than one complementary subspace (and because of this, more than one projection). Think in $\mathbb R^2$. Let $U$ be the subspace $\{y=0\}=<(1,0)>$. Now, $W_1=\{x=0\}=<(0,1)>$ and $W_2=\{x-y=0\}=<(1,1)>$ are both complementary subspaces of $U$. In fact, every vector linearly independent with $(1,0)$ generates a complementary subspace of $U$. Now, if you fix a complementary subspace $W$ for $U$, every vector of $V$ can be written $v=u+w$, with $u\in U$ and $w\in W$. In this case, the projection would be $T(v)=u$. Let's do it with the previous examples: If we let $U=<(1,0)>$ and $W=<(0,1)>$, then every vector $(x,y)\in \mathbb R^2$ is written $$(x,y)=(x,0)+(0,y), \ (x,0)\in U, \ (0,y)\in W$$ so the projection is $T((x,y))=(x,0)$. If we pick $W=<(1,1)>$, then $$(x,y)=(x-y,0)+(y,y)$$ so $T(x,y)=(x-y,0)$ in this case.
Integrating Factors for nonexact differential equations
You can approach it differently. Precisely, you can make use of the integrating factor method: \begin{align*} & y^{\prime} = x + \frac{y}{x} + \frac{2}{x} \Longleftrightarrow y^{\prime} - \frac{y}{x} = x + \frac{2}{x} \Longleftrightarrow \frac{y^{\prime}}{x} - \frac{y}{x^{2}} = 1 + \frac{2}{x^{2}} \Longleftrightarrow\\\\ &\left(\frac{y}{x}\right)^{\prime} = 1 + \frac{2}{x^{2}} \Longleftrightarrow \frac{y}{x} = x - \frac{2}{x} + k\Longleftrightarrow y = x^{2} - 2 + kx \end{align*}
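A quick symbolic check of the general solution (illustration only):

```python
import sympy as sp

x, k = sp.symbols('x k')
y = x**2 - 2 + k*x
# y' - (x + y/x + 2/x) should simplify to 0
assert sp.simplify(sp.diff(y, x) - (x + y/x + 2/x)) == 0
print("y = x**2 - 2 + k*x satisfies the ODE")
```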
Newton's Interpolation Formula: Prove that $a_2$ is the second divided difference?
You are very close to the answer, only a step or two away. Simplifying your expression for $a_2$ gives $$ a_2 = \frac{(x_1-x_0)f_2 + f_1x_0 - f_1x_2 + f_0(x_2-x_1)}{(x_2-x_0)(x_2-x_1)(x_1-x_0)} $$ Now add $f_1x_1 - f_1x_1$ to the numerator of $a_2$ and you will get the desired result.
How do I solve this in an understandable and direct way?
No. Suppose $f=f_k$ for some $k\in\mathbb{N}$, where $E = \{n \in \Bbb N : f_n(n) = 0\}$ and $E_2 = \{n \in\Bbb N : f(n) = 1\}$; we show that $E \neq E_2$. If $k\in E$, then $f_k(k) = 0$. Assume $E=E_2$. Then $k\in E_2$ as well, so $f_k(k)=1$, which contradicts $f_k(k)=0$. It follows that $E\neq E_2$. If $k\notin E$, then $f_k(k)=1$. Assume $E=E_2$. Then $k\notin E_2$ as well, so $f_k(k)=0$, which contradicts $f_k(k)=1$. It follows again that $E\neq E_2$. So in either case $E\neq E_2$.
Finite trees and embedding in infinite regular trees.
This is not possible. For some pair $u,v$ the 'mapped distance' $d(f(u),f(v))$ must be minimal. This implies that you cannot find any $t$, adjacent or not, with $d(f(t),f(v))<d(f(u),f(v))$.
Find the modular residue of this product..
We want to replace these big numbers by much smaller ones that have the same remainder on division by $31$. Take your first big number $12345234$. Divide by $31$ with the calculator. My display reads $398233.3548$. Subtract $398233$. My calculator shows $0.354838$. Multiply by $31$. The calculator gives $10.999978$. If it were perfect, it would say $11$. Do the same with the other big number. My calculator again says $10.999978$, and if perfect would say $11$. Multiply $11$ by $11$, and find the remainder when $121$ is divided by $31$. Again, we could use the calculator. But it can be done in one's head. The remainder when we divide by $31$ is $28$. Remark: It can be fairly important not to write down an intermediate calculator result, and rekey. The reason is that the calculator internally keeps guard digits, that is, it is calculating to greater precision than it displays. If you rekey, typing in only digits that you see, you will lose some of the built-in accuracy that you paid for. For similar reasons, it is useful to learn to make use of the memory features of your calculator. Let's see why the procedure we used works. When we divide $12345234$ by $31$, the stuff before the decimal point is the quotient. The remainder is unfortunately not given directly, but what's "left over" is (approximately) $0.354838$. This is a decimal representation of $\frac{r}{31}$, where $r$ is the (integer) remainder. To recover $r$, we multiply $0.354838$ by $31$. Because of internal roundoff, we usually don't get an integer, but most of the time, if the divisor (here $31$) is not too large, we get an answer that is very close to an integer, so we can quickly decide what $r$ must be.
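For comparison, exact integer arithmetic gives the same remainders without any roundoff worries (the second big number from the question is not quoted in the answer, so the sketch below reuses the first one; it illustrates the method, not the original data):

```python
a = 12345234
r = a % 31            # exact remainder: 11
print(r)
print((r * r) % 31)   # 121 mod 31 = 28
```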
Why Does the existence of $\frac{\partial f}{\partial x}$ not imply that $\frac{\partial f}{\partial x}$ is continuous?
The counterexample that shows existence of $f'$ does not imply continuity of $f'$ is $f(x) = \begin{cases} x^2 \sin(1/x) & x \neq 0 \\ 0 & x=0 \end{cases}$ Here $f'(0)=0$, while for $x\neq0$ we have $f'(x)=2x\sin(1/x)-\cos(1/x)$, which has no limit as $x\to0$. This is different from the point of the text; it addresses a conclusion you jumped to. The point of the text is that even if the partials exist everywhere, the function could still fail to be continuous, due to the behavior in directions other than horizontal/vertical. Something to consider is $xy/(x^2+y^2)$ (well-behaved in the $x$ and $y$ directions at the origin, but not in other directions).
The number of prime ideals lying over a given prime ideal
You don't need $B$ to be the ring of polynomials over an algebraically closed field. It suffices that $A \to B$ is a finite extension: if $B$ can be generated by a set of $n$ elements as an $A$-module, then for every prime ideal $P$ of $A$ the number of primes of $B$ lying over $P$ is $\leq n$. For a proof see my answer Is the number of prime ideals of a zero-dimensional ring stable under base change?
solving natural log inequality
Start with a basic integral involving $\frac{1}{x}$ on $[n, n+1]$: $$\int_{n}^{n+1}\left(\frac{1}{n} - \frac{1}{x}\right)dx$$ Since $\frac1x \le \frac1n$ throughout the interval of integration, the integrand is always greater than or equal to $0$. Now sum this up over the intervals $[i, i+1]$ for $i = 1$ to $n-1$: $$\sum_{i=1}^{n-1}\int_{i}^{i+1}\left(\frac{1}{i} - \frac{1}{x}\right)dx = \left(\sum_{i=1}^{n-1}\frac{1}{i}\right) - \int_1^n \frac{dx}{x} = \left(\sum_{i=1}^{n-1}\frac{1}{i}\right) - \log n$$ A sum of non-negative quantities is non-negative, and hence the above quantity is $\geq 0$, proving the first part of the inequality. For the second part, sketch the box with base from $n$ to $n+1$ on the x-axis and height from $\frac{1}{n+1}$ to $\frac1n$ on the y-axis, together with the graph of $\frac1x$. If you shade the part of this box corresponding to the integrand above, you'll notice that it is always at most the area of the box: $$\int_{n}^{n+1}\left(\frac{1}{n} - \frac{1}{x}\right)dx \leq A_{box}$$ $$A_{box} = \left(\frac{1}{n} - \frac1{n+1}\right)(n+1 - n) = \frac{1}{n} - \frac1{n+1}$$ As before, sum the inequality on both sides from $1$ to $n-1$ to get $$\left(\sum_{i=1}^{n-1}\frac{1}{i}\right) - \log n \leq \left(1 -\frac12\right) + \left(\frac12 - \frac13\right) + \left(\frac13 - \frac14\right) + \cdots + \left(\frac1{n-1} -\frac1n\right) = 1 - \frac1n$$ Thus proving the second part of the inequality.
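A quick numerical check of both bounds (an illustration):

```python
from math import log

for n in [2, 10, 100, 1000]:
    H = sum(1 / i for i in range(1, n))  # sum_{i=1}^{n-1} 1/i
    print(n, 0 <= H - log(n) <= 1 - 1 / n)  # True for each n
```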
Jordan curve; is the inside mapped to the inside by a homeomorphism?
$U\setminus |\gamma|$ may have several components $C$ (it's not given that $U$ is connected), but only one with the property that $C\cup|\gamma|$ is compact, namely $C=D$. The image of $D$ under $f$ must have the same property with respect to $V$, so $f(D)$ is the interior of $f(\gamma)$.
Solve simultaneously : $6(x + y) = 5xy, 21(y + z) = 10yz, 14(z + x) = 9zx$
Dividing each equation by the product of the two unknowns in it (assuming $x,y,z\neq0$), we get $$\begin{align} \frac1x + \frac1y &= \frac56 = \frac{35}{42}\\ \frac1y+\frac1z &= \frac{10}{21} = \frac{20}{42}\\ \frac1x+\frac1z &= \frac{9}{14} = \frac{27}{42} \end{align}$$ Subtracting the second equation from the first yields $$\frac1x - \frac1z = \frac{15}{42}$$ and adding that to the last $$\frac{2}{x} = \frac{42}{42} = 1 \Rightarrow x = 2.$$ Subtracting from the last yields $$\frac{2}{z} = \frac{12}{42} \Rightarrow z = 7.$$ Inserting $x = 2$ for example into the first equation yields $y = 3$.
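Plugging the solution back into the original system confirms it (a direct check):

```python
x, y, z = 2, 3, 7
print(6 * (x + y) == 5 * x * y)    # True
print(21 * (y + z) == 10 * y * z)  # True
print(14 * (z + x) == 9 * z * x)   # True
```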
For a Poisson process, what is the distribution of the ratio of one arrival time to another?
What you are thinking of is that: $f_{S_1,S_6}(s,t)~=~ f_{S_1}(s)~f_{S_6-S_1}(t-s)$, since $S_1$ and $S_6-S_1$ are independent interarrival times.   (Also recall, $S_6-S_1$ has identical distribution to $S_5$, and that $S_k\sim\mathcal {Erlang}(k,\lambda)$.) Now, for $0\lt w\leq 1$ : $$\begin{align}\mathsf P(W\leq w)&=\mathsf P(S_1\leq wS_6) \\[1ex]&= \int_0^\infty \int_0^{wt} f_{S_1}(s)~f_{S_6-S_1}(t-s)~\mathsf d s~\mathsf d t\\[1ex]&= \int_0^\infty f_{S_1}(s)\int_{s/w}^\infty f_{S_6-S_1}(t-s)~\mathsf d t~\mathsf d s\\[1ex]&= \int_0^\infty f_{S_1}(s)\int_{s(1/w-1)}^\infty f_{S_6-S_1}(u)~\mathsf d u~\mathsf d s\\[1ex]&~~\vdots\end{align}$$
How many ways can $p+q$ people sit around $2$ circular tables - of sizes $p,q$?
Choose the $p$ people around the first table: $p+q \choose p$. For each set of $p$ people, there are $(p-1)!$ permutations once the first is placed (I assume a circular permutation has no effect on order, but "reversing" does). Likewise, there are $(q-1)!$ permutations for the second table. All in all, there are ${p+q\choose p}(p-1)!(q-1)!$ possibilities.
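As a consistency check (my own observation, not part of the answer): the count simplifies to $\frac{(p+q)!}{pq}$, since each of the $(p+q)!$ linear seatings is counted $p\cdot q$ times under independent rotations of the two tables.

```python
from math import comb, factorial

def formula(p, q):
    return comb(p + q, p) * factorial(p - 1) * factorial(q - 1)

for p, q in [(2, 3), (3, 3), (4, 5)]:
    assert formula(p, q) == factorial(p + q) // (p * q)
print("formula consistent with (p+q)!/(p*q)")
```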
Divergent or convergent?
Hint: In order for $\sum A_n$ to be convergent, it is required that $A_n \to 0$ as $n \to \infty$.
Find limit "1 ^ infinity"
$$\lim\limits_{x \to \frac{\pi}{4}} \left(\frac{1 + \sin\left(\frac{\pi}{4} - x\right)\sin(2x)}{1 + \frac{1}{2}\cos(2x)}\right)^{\cot^3(x - \frac{\pi}{4})}=$$ $$\lim\limits_{x \to \frac{\pi}{4}} \left(\frac{2+\sqrt{2}(\cos(x)-\sin(x))\sin(2x)}{2+\cos(2x)}\right)^{-\tan^3\left(\frac{\pi}{4}+x\right)}=$$ $$\lim_{x \to \frac{\pi}{4}} \exp\left(\ln\left(\left(\frac{2+\sqrt{2}(\cos(x)-\sin(x))\sin(2x)}{2+\cos(2x)}\right)^{-\tan^3\left(\frac{\pi}{4}+x\right)}\right)\right)=$$ $$\lim_{x \to \frac{\pi}{4}} \exp\left(-\ln\left(\frac{2+\sqrt{2}(\cos(x)-\sin(x))\sin(2x)}{2+\cos(2x)}\right)\tan^3\left(\frac{\pi}{4}+x\right)\right)=$$ $$\exp\left(\lim_{x \to \frac{\pi}{4}} -\ln\left(\frac{2+\sqrt{2}(\cos(x)-\sin(x))\sin(2x)}{2+\cos(2x)}\right)\tan^3\left(\frac{\pi}{4}+x\right)\right)=$$ $$\exp\left( -\left(\lim_{x \to \frac{\pi}{4}}\ln\left(\frac{2+\sqrt{2}(\cos(x)-\sin(x))\sin(2x)}{2+\cos(2x)}\right)\tan^3\left(\frac{\pi}{4}+x\right)\right)\right)=$$ $$\exp\left( -\left(\lim_{x \to \frac{\pi}{4}}\sin^3\left(x+\frac{\pi}{4}\right)\csc^3\left(\frac{\pi}{4}-x\right)\ln\left(\frac{\sqrt{2}\sin(2x)(\cos(x)-\sin(x))+2}{2+\cos(2x)}\right)\right)\right)=$$ $$\exp\left( -\left(\lim_{x \to \frac{\pi}{4}}\csc^3\left(\frac{\pi}{4}-x\right)\ln\left(\frac{\sqrt{2}\sin(2x)(\cos(x)-\sin(x))+2}{2+\cos(2x)}\right)\right)\right)=$$ $$\exp\left(-\left(-\frac{3}{2}\right)\right)=e^{\frac{3}{2}}$$
Symmetric and homogeneous three variable inequality with radicals.
By C-S $\left(\sum\limits_{cyc}\sqrt{\frac{y^2+z^2}{2(x^2+y^2)(x^2+z^2)}}\right)^2\leq\sum\limits_{cyc}(y^2+z^2)\sum\limits_{cyc}\frac{1}{2(x^2+y^2)(x^2+z^2)}=\frac{2(x^2+y^2+z^2)^2}{\prod\limits_{cyc}(x^2+y^2)}$. Thus, it remains to prove that $\left(\sum\limits_{cyc}(x^5+x^3y^2+x^3z^2+x^2y^2z)\right)^2\geq2(x^2+y^2+z^2)^2(x^2+y^2)(x^2+z^2)(y^2+z^2)$. Let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$. Hence, we need to prove that $f(w^3)\geq0$, where $f(w^3)=(81u^5-135u^3v^2+54uv^4+9u^2w^3-5v^2w^3)^2-$ $-2(3u^2-2v^2)^2(81u^2v^4-54v^6-54u^3w^3+36uv^2w^3-w^6)$. We see that $f'(w^3)=2(81u^5-135u^3v^2+54uv^4+9u^2w^3-5v^2w^3)(9u^2-5v^2)+$ $+2(3u^2-2v^2)^2(54u^3-36uv^2+2w^3)\geq0$. Id est, $f$ is an increasing function, which says that $f$ attains its minimal value at an extremal value of $w^3$, which happens only in the following two cases. 1. $y=1$ and $z\rightarrow0^+$, which gives $(x^2+1)^2(x^4+2x^3+x^2+2x+1)(x-1)^2\geq0$; 2. $y=z=1$, which gives $(x^2+1)^2(x^3+2x^2+x+8)x(x-1)^2\geq0$. Done!
Irreducibility over $\mathbb{C}$ of symmetric polynomials
Let $e_k$ be the symmetric polynomial of degree $k$ in the variables $x_1, \dots, x_n$. Write $e_k$ as a product $cf_1 \dots f_r$, with each $f_i$ irreducible. Since $e_k$ has degree one with respect to each variable, each variable must appear in precisely one of the factors $f_1, \dots, f_r$, resulting in a corresponding partition $E_1$, $E_2$, ..., $E_r$ of the set of variables. By the uniqueness of the decomposition, permuting the variables must result, essentially, in the same decomposition (with an irreducible factor at worst being multiplied by a nonzero constant), hence in the same partition of the variables. This can be achieved in only two ways. Either $r = 1$, or else $r = n$ and each $E_i$ has cardinality one. If $r = 1$, then $e_k$ is irreducible. In the other case, the degree of $e_k$ must be at least $n$, so we must have $k = n$.
Let $(a_n)_{n \geq 0}$ be a strictly decreasing sequence of positive real numbers , and let $z \in \mathbb C$ , $|z| < 1$.
Simply note that $(1-z)\sum_{k=0}^\infty a_kz^k=a_0+\sum_{k=1}^\infty (a_k-a_{k-1})z^k$. If $\sum_{k=0}^\infty a_kz^k=0$ for some $|z|<1$, $$a_0=\sum_{k=1}^\infty (a_{k-1}-a_k)z^k\implies$$ $$ \begin{align} a_0\leq \left| \sum_{k=1}^\infty (a_{k-1}-a_k)z^k\right| &\leq \sum_{k=1}^\infty (a_{k-1}-a_k)|z|^k \\ &< \sum_{k=1}^\infty (a_{k-1}-a_k)\\ &= a_0 - \lim_n a_n \end{align}$$ Hence $\lim a_n<0$ which is a contradiction. This has some interesting consequences, like $\frac{1}{1-z}+e^z$ has no zeroes if $|z|<1$...
Linear Algebra: Diagonalization question
Your matrices $P$ and $C$ are all correct. You have probably made a computational mistake. See the Octave output here (https://octave-online.net/); $P^{-1}$ should be:

octave:4> inv(P)
ans =

   1   0   1
   0   0   1
  -1   1  -1
Is there any associative algebra that has all the algebraic properties of cross product?
A Lie algebra $\frak g$ is a vector space equipped with a bilinear operation $[-,-]$ satisfying $[x,x]=0$ for all $x\in\frak g$ and the Jacobi identity $[x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0$. The space $(\Bbb R^3,\times)$ is a Lie algebra. One should look up the classification of three-dimensional real Lie algebras (which is fairly easy and textbook, but I don't have it memorized or anything) to see if there are any associative ones, besides the trivial abelian Lie algebra of course (where the operation is $[v,w]=0$ for all $v,w\in\Bbb R^3$). What you seem to want is a Lie algebra whose bracket operation is associative. Usually it is not associative. In fact, $\frak g$ is associative iff its derived subalgebra is contained in its center. Here's an example of such an algebra: quotient the full exterior algebra on a vector space by the relations encoding the Jacobi identity (we can restrict ourselves to only a basis of the space). This should be a "universal" construction in a sense. Indeed, if $\frak g$ is an associative Lie algebra of dimension $n$, then it should be a homomorphic image of the Lie algebra $$\frac{\Bbb C\langle x_1,\cdots,x_n\rangle}{(x_ix_j+x_jx_i,\, x_ix_jx_k+x_kx_ix_j+x_jx_kx_i)_{1\le i,j,k\le n}},$$ where $R\langle X\rangle$ stands for the nonabelian polynomial ring over $R$ generated by a set $X$, and the bracket operation is just multiplication.
Trouble with Inclusion-Exclusion (Multiplication Theorem)
I call that thing the product rule, but I think it is more commonly known as the chain rule. As for the example, that's not quite the approach you want to take; it is easier to compute the complementary probability. Let $\bar A$ be the event that no dog sleeps in his spot. Notice that the complement is $A =\{\text{at least one dog sleeps in his spot}\}$. Let $A_i$ be the event that dog $i$ sleeps in his spot $i$. Then $$P(\bar A) = 1-P(A) = 1-P(A_1\cup A_2\cup\dotsb\cup A_7\cup A_8)$$ This last step is where you want to use inclusion-exclusion.
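To make the last step concrete, here is a small sketch computing the answer by inclusion-exclusion for the $n=8$ dogs of this problem (an illustration; the simplification $P(A_1\cup\cdots\cup A_n)=\sum_k(-1)^{k+1}/k!$ follows from $\binom{n}{k}(n-k)!/n! = 1/k!$):

```python
from math import factorial

n = 8
# P(at least one dog sleeps in its own spot)
p_union = sum((-1) ** (k + 1) / factorial(k) for k in range(1, n + 1))
print(1 - p_union)  # P(no dog in its own spot), close to 1/e ~ 0.3679
```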
Is it a Markov chain?
Assume that $X$ is a homogeneous Markov chain on the state space $\{0,1\}$. Then, each $X_n$ is $\{0,1\}$-valued, and $D_n=1$ when $X_n=1$ and $X_{n+1}=0$, thus, $D_n=1$ implies $D_{n+1}=0$. Furthermore, if $D_n=1$, the first time $n+N$ after $n$ such that $D_{n+N}=1$ is distributed as $H_1+H_0$ where $H_1$ is the first hitting time of $1$ by the Markov chain $X$ starting from $X_0=0$, and $H_0$ is the first hitting time of $0$ by the Markov chain $X$ starting from $X_0=1$. Thus, $H_1$ and $H_0$ are independent geometric random variables, and the only case where $H_1+H_0-1$ is geometrically distributed, as it should be for $D$ to be a Markov chain (being the first hitting time of $1$ by $D$ starting from $0$), is when $P(H_1=1)=1$ or $P(H_0=1)=1$. To sum up, $D$ is not a Markov chain, except if one of the transitions $0\to0$ or $1\to1$ has zero probability for $X$.
exponential and Lie bracket
Sure. One way I like to think about commutators (Lie brackets) is "if you have $YX$, and you'd rather have $XY$, you have to add $[Y,X]$", i.e. $$ YX = XY + [Y,X]. $$ So, when you expand a binomial of the form $(X + Y)^n$, you're going to get a lot of terms that aren't 'in the right order', and you have to add commutators if you want to swap things around. That is, $$ (X + Y)^3 = X^3 + XXY + YXX + XYX + \dots $$ and so if you want an expression of the form $$ X^3 + 3XXY + \dots $$ you're going to have to add $[Y,X]X$ and $X[Y,X]$ to 'fix' the first term, and another $X[Y,X]$ to fix the second. So, in general, when you expand $(X+Y)^n$, you're going to get all the terms in the usual binomial expansion, just with the $X$'s and $Y$'s in a different order, which you can straighten out by introducing enough brackets. I can't tell you off the top of my head what the closed form would be, but there's probably a straightforward combinatorial approach.
How can I prove or disprove this function is differentiable at $(0,0)$?
Hint: Since $f=0$ on the axes, the partial derivatives of $f$ at $(0,0)$ both equal $0.$ If $f$ were differentiable at $(0,0),$ then we would have $$f(x,y) = f(0,0) +0\cdot x + 0\cdot y +o[(x^2+y^2)^{1/2}]\,\,\text {as } (x,y)\to (0,0).$$ Is that true?
Square root of irrationals
As you note, the author wants to find integers $x$ and $y$ such that $$x+y\sqrt{2}=\sqrt{34-24\sqrt{2}},$$ so you are looking for integer solutions only. Do note that for $x=\pm3\sqrt{2}$ you get $y=\mp2\sqrt{2}$ and so $$x+y\sqrt{2}=\pm(3\sqrt{2}-2\sqrt{2}^2)=\mp(4-3\sqrt{2}),$$ which is again the same solution as for $x=\pm4$ and $y=\mp3$.
Proof that Laplace expansion is row/column invariant, without induction
Your definition of "det" is itself inductive. If in proving your assertion about $n \times n$ matrices, you're not going to use any properties of "det" on $(n-1) \times (n-1)$ matrices, your definition could equally well be $$ \det(a_{ij})_{n\times n} = \begin{cases} \hfil a_{11} \hfil & \text{if $n = 1$}\\[10pt] \displaystyle\sum_{k=1}^n (-1)^{k+1}a_{1k}\color{red}{\det'}(A_{1k}) & \text{otherwise}, \end{cases} $$ where $\color{red}{\det'}$ is some completely different function (e.g., the zero function). I think that with an inductive definition like this, you pretty much have to do inductive proofs. (Indeed, your sharpest students might well ask "How do you even know that this defines a function?") There might be some "way to illustrate this fact which makes it clear", but that really amounts to proof-by-vigorous-assertion. I guess I'd be inclined to write out the det for a 1x1, 2x2, and 3x3 matrix (with indexed entries rather than numerical ones), and say "Look, all possible n-tuples with one index from each row and one index from each column appear!", and then spend a little while noticing which ones have positive or negative coefficients, and say "there's an alternative definition of det that uses this 'sum of permutations' approach instead...and writing it out is really messy. But let's see, in practice, what it produces if we swap two rows of the matrix..."
Is there any way to represent an imaginary number?
The complex numbers $a+bi$ is often represented by the point $(a,b)$ of the ordinary plane. You might want to look at the following brief description, and then at the lengthy Wikipedia article. So the "imaginary" number $i$ is represented as the point $(0,1)$, Not at all imaginary! The addition of two complex numbers then has a simple geometric representation. Multiplication of a complex number by another one also has an important geometric representation, essentially a rotation possibly followed by scaling. Multiplication by $i$ turns out to be counterclockwise rotation about the origin through $90$ degrees. So doing it twice is the same as turning through $180$ degrees, which turns $(a,b)$ into $(-a,-b)$, the same as multiplication by $-1$. This is "why" $i^2=-1$. As you can see, this was a very good question.
Mathematical modelling of a function
I considered @saulspatz's comment and I have come to the realization that I only have to find the equation of the line containing two points. It should be noted that the problem asks for the mathematical model where the price is a function of the number of clocks sold. Thus, we have the ordered pairs $(2400, 3.2)$ and $(4800, 2.4)$. Using the Desmos app, we have $y=-\frac{x}{3000} + 4$.
Show that $L^1$ is strictly contained in $(L^\infty)^*$
If the domain of our functions is $\mathbb{R}$, you can show in fact that there are elements of $(L^\infty)^\ast$ which are not in $L^1$. Given any $g\in L^1$ we can associate with it a linear functional $\psi_g:L^\infty\to \mathbb{R}$ by $\psi_g(f)=\int fgdx$, so we have $L^1\subset (L^\infty)^\ast$, and the containment is strict. To see that it is strict, first note that the space $C_b(\mathbb{R})$ of bounded continuous functions is a subspace of $L^\infty(\mathbb{R})$, and we can define a continuous linear functional $\phi:C_b(\mathbb{R})\to \mathbb{R}$ on it by $\phi(f)=f(0)$. The Hahn-Banach theorem lets us then extend $\phi$ to a linear functional $\tilde{\phi}:L^\infty(\mathbb{R})\to\mathbb{R}$, which when restricted to $C_b(\mathbb{R})$ is still $\phi$. I don't know any good way to write this extension down as an actual formula, as its existence is guaranteed by Hahn-Banach but not constructively. So, we have found a rather strange element of $(L^\infty)^\ast$, and it turns out that this element does not arise from an element of $L^1$. To see this, note that if $\tilde{\phi}$ did arise in this way there would be some $g\in L^1$ such that $\tilde{\phi}(f)=\int_\mathbb{R}fg dx$, and in particular if we restrict so that $f\in C_b(\mathbb{R})$ we know that $\tilde{\phi}\mid_{C_b}=\phi$, and so we have $$\tilde{\phi}(f)=\phi(f)=\int_\mathbb{R}f(x)g(x)dx=f(0)$$ for all $f$ in $C_b(\mathbb{R})$. But, there is no $g\in L^1(\mathbb{R})$ which satisfies this, so $\tilde{\phi}$ is in $(L^\infty)^\ast$ but not in $L^1$.
A property of homogeneous of degree p functions:
Let $y=tx$. Then on one hand, you have $$\left.\frac{\mathrm d}{\mathrm dt} f(y)\right|_{t=1} = \left.\frac{\mathrm d}{\mathrm dt} f(tx)\right|_{t=1} = \left.\frac{\mathrm d}{\mathrm dt} t^p f(x)\right|_{t=1} = \left.\vphantom{\frac{\mathrm d}{\mathrm dt}} p t^{p-1} f(x)\right|_{t=1} = p f(x)$$ On the other hand, you have $$\left.\frac{\mathrm d}{\mathrm dt} f(y)\right|_{t=1} = \left.\left(\frac{\mathrm dy_1}{\mathrm dt}\frac{\partial}{\partial y_1} + \ldots + \frac{\mathrm dy_n}{\mathrm dt}\frac{\partial}{\partial y_n}\right)f(y_1,\ldots,y_n)\right|_{t=1} = \left(x_1 \frac{\partial}{\partial x_1} + \ldots + x_n \frac{\partial}{\partial x_n}\right)f(x)$$ It should be obvious how to extend that for higher derivatives.
Possible cardinality of power sets
König's theorem says that $$\kappa < \operatorname{cf}(2^{\kappa})$$ Thus if $\operatorname{cf}(\lambda)=\omega$, as for $\lambda=\aleph_{\omega}$, then $\lambda$ is not the cardinality of a power set.
On the maximal of polynomial at a point
Suppose $p$ is any real cubic polynomial satisfying $p(5) + p(25) = 1906$. For any real $n$, let $q(x) = p(x) - n(x-5)(x-25)$. Now $q$ is also a real cubic polynomial satisfying $q(5)+q(25) = p(5)+p(25) = 1906$, but $q(15) = p(15) + 100n$. Hence the value of $p(15)$ is not bounded, but can be any real number if you choose the coefficients properly.
Proving gcd(a, a+2) is 1 or 2 for every integer a?
HINT: If an integer $d$ divides both $a$ and $a+2$, then $d$ will divide $1\cdot(a+2)-1\cdot a=2$. The idea is to find a constant by eliminating $a$.
Summation of $\binom{N}{K}$
Recall that $${n \choose k}={n \choose n-k}$$ Hence (assuming $n$ is even, so that $n/2$ is an integer) $$\sum_{k=0}^{n/2} {n \choose k}=\sum_{k=0}^{n/2} {n \choose n-k}=\sum_{k=n/2}^n {n \choose k}$$ Therefore $$2^n=\sum_{k=0}^n {n \choose k}=\sum_{k=0}^{n/2} {n \choose k}+\sum_{k=n/2 +1}^n {n \choose k}=\sum_{k=0}^{n/2} {n \choose k}+\sum_{k=n/2}^{n} {n \choose k}-{n \choose n/2}=2\sum_{k=0}^{n/2} {n \choose k}-{n \choose n/2}$$ Which gives $$\sum_{k=0}^{n/2} {n \choose k}={2^{n} +{n \choose n/2} \over 2}$$
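A quick check of the identity for small even $n$ (illustration only):

```python
from math import comb

for n in range(2, 20, 2):
    lhs = sum(comb(n, k) for k in range(n // 2 + 1))
    assert lhs == (2**n + comb(n, n // 2)) // 2
print("identity holds for even n up to 18")
```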
Question on a function of triangles
First note that if a triangle is subjected to a homothety by factor $r>1$ then the area multiplies by $r^2$ and the perimeter by $r$, so that area/perimeter gets multiplied by $r$. This means for the triangle $ABC$ with longest side say $AB$, that we may expand and move the triangle until vertices $A,B$ are on the boundary of $D$, while increasing the ratio area/perimeter. If at this point the vertex $C$ happens to lie in the smaller part of $D$ cut by $AB$, reset $C$ to its reflection through $AB$, so that $C$ now lies in the larger part of $D$ cut by $AB$. Now suppose the vertex $C$ is moved so that the perimeter remains constant. This means $C$ moves on an ellipse with foci at $A,B$; this ellipse will not entirely lie in $D$, however it is clear that $C$ may be moved until triangle $ABC$ becomes isosceles, and that during this movement the area of $ABC$ increases, since the altitude from $C$ increases. Thus the ratio area/perimeter increases at this step also. Now move $C$ in the direction perpendicular to $AB$ and away from that line, until $C$ lies on the boundary of $D$. This will increase area more than perimeter: as a map it is an expansion in the direction perpendicular to $AB$ and thus multiplies area by some $k>1$, while since the sides $AC$ and $BC$ are on a slant to the perpendicular, they will each expand by a factor less than $k$. So again the ratio area/perimeter has increased. We now have what is required, since we have the triangle $ABC$ with its vertices on the boundary of $D$, and during the process its ratio of area/perimeter has only increased. With a little more work one can show that in fact the actual max ratio occurs when the triangle $ABC$ is equilateral, with vertices on the boundary of $D$.
Surface integral over a cylinder problem
The small problem is that $\vec n$ needs to be normalized. But your bigger problem is that you are calculating the integral on the wrong surface. When you integrate $r$ from $0$ to $a$, and $\theta$ from $0$ to $2\pi$ (not $4\pi$), you are calculating the integral on the bottom cap of the cylinder, not on the side. Resolving the first issue, $$\vec n=\frac{1}{2\sqrt{x^2+y^2}}(2x,2y,0)$$ so the integrand becomes $1$. For the second issue, integrate along the circumference, with $l$ running from $0$ to $2\pi a$, and $z$ from $0$ to $h$.
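Putting the pieces together (a sketch, assuming the normalized integrand is indeed $1$ as described): $$\int_0^h\!\!\int_0^{2\pi a} 1\,\mathrm{d}l\,\mathrm{d}z = 2\pi a h,$$ which is just the lateral area of the cylinder.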
Symmetric function and product of two functions
This is not true at all! For example, the function $f(x,y) \equiv 2$ is symmetric, obviously, and can also be written as $f(x,y) = g(x)h(y)$ where $g(x) \equiv 2$ and $h(x) \equiv 1$. It is true that $g(x)h(y) = g(y)h(x)$ for all $x,y$ by symmetry. If we furthermore suppose there is some $X$ such that $g(X) = h(X) \neq 0$, then for all $Y$, $g(Y) = h(Y)$ by cancellation. However, this need not happen, as the example above shows.
Change of order of double limit of function sequence
The answer to the specific question is Yes. We assume throughout that all the limits in question exist. Since $f_j$ converges (uniformly and hence) pointwise to $f$, $$ \mathop {\lim }\limits_{x \to a} \mathop {\lim }\limits_{j \to \infty } f_j (x) = \mathop {\lim }\limits_{x \to a} f(x). $$ For any $\varepsilon > 0$, since $f_j$ converges uniformly to $f$, there exists $N = N(\varepsilon)$ such that $$ \sup _x |f_j (x) - f(x)| < \varepsilon $$ for any $j > N$. Hence, for any $j > N$, $$ |\lim _{x \to a} f_j (x) - \lim _{x \to a} f(x)| = |\lim _{x \to a} (f_j (x) - f(x))| \le \varepsilon . $$ Define $p_j = \lim _{x \to a} f_j (x)$ and $p = \lim _{x \to a} f(x)$. Then, $$ |\lim _{j \to \infty } p_j - p| = \lim _{j \to \infty } |p_j - p| \le \varepsilon , $$ since $|p_j - p| \leq \varepsilon$ for any $j > N$. Since $\varepsilon$ is arbitrary, $ \lim _{j \to \infty } p_j = p$, hence $$ \mathop {\lim }\limits_{j \to \infty } \mathop {\lim }\limits_{x \to a} f_j (x) = \mathop {\lim }\limits_{x \to a} f(x) = \mathop {\lim }\limits_{x \to a} \mathop {\lim }\limits_{j \to \infty } f_j (x). $$
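To see that uniformity is really needed (a standard cautionary example, not part of the proof above): take $f_j(x)=x^j$ on $[0,1)$ with $a=1$. Then $\lim_{x\to1^-}\lim_{j\to\infty}x^j=0$ while $\lim_{j\to\infty}\lim_{x\to1^-}x^j=1$, and indeed the convergence $x^j\to0$ is not uniform on $[0,1)$.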
How applicable is Goldbach's conjecture to real world scenarios?
In Mallory's immortal words: Because it's there. After the proof of Fermat's last theorem, Goldbach's conjecture is probably the leading example of a conjecture which is extremely simple to explain -- even a grade schooler will be able to understand what the question is -- but where no answer has been produced even after centuries of concerted effort by a host of very smart people. For some people, that very property makes it impossible not to think about the problem and try to figure out ways to attack it. There is something infuriating about such a simple statement resisting all the human ingenuity we can throw at it. A different question is why universities would pay people to think about this particular problem. Here one can point either to the fact that (as Patrick notes) the mathematics that's created while attacking hard problems such as this often turns out to have broader applications -- or to the fact that giving academics the freedom (at least collectively) to work on whatever problem catches their fancy has historically been very successful in getting useful and interesting breakthroughs that nobody could have anticipated. Even if Goldbach's conjecture in particular could reliably be known to lead nowhere useful, we don't have any workable general way to force basic research in a useful direction that works better than "follow your curiosity". Trying to stamp out individual lines of inquiry because they're deemed fruitless would generally create too many secondary problems (general discontent, researchers who don't really care about Goldbach wondering whether they will be next, and so forth) to be worth it.
Triangles area related problem
First, let $\frac {AX}{XC}=\frac {CY}{YB}=\frac {XZ}{ZY} = k$ (this is just to make things easier). I also write $A(\Delta BYZ) = S$. Then, except for $\Delta ABZ$, we can find all the other areas without a problem, as you suggested. Now, notice that what we need to find is not $A(\Delta ABZ)$ but $A(\Delta ABC)$. Therefore, draw $AY$. Then $A(\Delta AYZ) = k^2S$. Then, since $A(\Delta AYC) = k(k+1)^2S$, we have $A(\Delta ABY) = (k+1)^2S$. The rest is simple algebra.
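Given the stated areas, that final algebra is short: since $Y$ lies on $BC$, the cevian $AY$ splits $\Delta ABC$ into $\Delta ABY$ and $\Delta AYC$, so $$A(\Delta ABC)=A(\Delta ABY)+A(\Delta AYC)=(k+1)^2S+k(k+1)^2S=(k+1)^3S.$$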
An example to be a local martingale but not a martingale
Take some random variable $Y$ which is not integrable (e.g. Cauchy distributed) and which is independent of $(W_t)_{t \geq 0}$. Define $\varphi_s(\omega):=Y(\omega)$, then $$\int_0^t \varphi_s \, dW_s = Y W_t$$ is a local martingale. However, it is not a true martingale since $$\mathbb{E}(|Y \cdot W_t|) = \mathbb{E}(|Y|) \mathbb{E}(|W_t|)=\infty.$$
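One way to exhibit a localizing sequence (a sketch, assuming $Y$ is $\mathcal{F}_0$-measurable, e.g. after enlarging the filtration): set $\sigma_n:=\infty\cdot\mathbf{1}_{\{|Y|\leq n\}}$, that is, $\sigma_n=\infty$ if $|Y|\leq n$ and $\sigma_n=0$ otherwise. Each $\sigma_n$ is a stopping time since $\{\sigma_n\leq t\}=\{|Y|>n\}\in\mathcal{F}_0$, and $\sigma_n\uparrow\infty$ almost surely. The stopped process equals $Y\mathbf{1}_{\{|Y|\leq n\}}W_t$, which is a true martingale because $Y\mathbf{1}_{\{|Y|\leq n\}}$ is bounded, $\mathcal{F}_0$-measurable, and independent of the increments of $W$.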
Sketch the graph of $F_4^n(x)$ on the unit interval, where $F_4(x) = 4x(1-x)$. Conclude that $F_4$ has at least $2^n$ periodic points of period $n$.
Given the following functions: $\varphi(x)=-4x+2$, $\varphi^{-1}(x)=\frac{x-2}{-4}$ and $g(x)=x^2-2$ we have: $$F_4(x)=(\varphi^{-1} \circ g\circ \varphi)(x)$$ or $$F_4^{\circ n}(x)=(\varphi^{-1} \circ g^{\circ n} \circ \varphi)(x) \tag{1}$$ Further $g(x)$ can be written as $$g(x)=(\gamma^{-1} \circ T_2\circ \gamma)(x)$$ where $T_2(x)=2x^2-1$ (a Chebyshev polynomial), $\gamma(x)=\frac{x}{2}$ and $\gamma^{-1}(x)=2x$. Then $$F_4(x)=(\varphi^{-1} \circ \gamma^{-1} \circ T_2\circ \gamma \circ \varphi)(x)$$ or $$F_4^{\circ n}(x)=(\varphi^{-1} \circ \gamma^{-1} \circ T_2^{\circ n}\circ \gamma \circ \varphi)(x) \tag{2}$$ Chebyshev polynomials satisfy $$T_n(T_m(x))=T_{nm}(x) \Rightarrow T_2^{\circ n} (x)=T_{2^n}(x)$$ or $$F_4^{\circ n}(x)=(\varphi^{-1} \circ \gamma^{-1} \circ T_{2^n}\circ \gamma \circ \varphi)(x) \tag{3}$$ Periodic points of period $n$ are fixed points of the $n$-fold composition. Fixed points of $T_{2^n}(x)$ correspond to fixed points of $F_4^{\circ n}(x)$ "through" $\gamma(x)$ and $\varphi(x)$, both linear functions, i.e. Proposition 1. $T_{2^n}(x)=x \iff F_4^{\circ n}(y)=y$ where $y=(\varphi^{-1} \circ \gamma^{-1} )(x)$ We should also observe that if $y \in [0,1]$ then $$x=(\gamma \circ \varphi) (y) = -2y+1 \in [-1,1] \tag{4}$$ For Chebyshev polynomials we have $T_m(\cos{\alpha})=\cos{(m\alpha)}$. From $(4)$ we can assume that $\exists \alpha$ s.t. $x=(\gamma \circ \varphi) (y)=\cos{\alpha}$. As a result, for Proposition 1 we will be looking at $$T_{2^n}(\cos{\alpha})=\cos{(2^n \alpha)}=\cos{\alpha}$$ which means for $k\in\mathbb{Z}$ $$2^n \alpha = \alpha +2k\pi \Rightarrow \alpha =\frac{2k\pi}{2^n-1}$$ or $$2^n \alpha = -\alpha +2k\pi \Rightarrow \alpha =\frac{2k\pi}{2^n+1}$$ and we find two sequences $$x_{1,k} = T_{2^n}(x_{1,k})=T_2^{\circ n} (x_{1,k}) \text{, where } x_{1,k}=\cos{\left(\frac{2k\pi}{2^n-1}\right)}$$ $$x_{2,k} = T_{2^n}(x_{2,k})=T_2^{\circ n} (x_{2,k}) \text{, where } x_{2,k}=\cos{\left(\frac{2k\pi}{2^n+1}\right)}$$ Applying Proposition 1, $$y=(\varphi^{-1} \circ \gamma^{-1} )(x)=\frac{1-x}{2}$$ and considering that $\frac{1-\cos{\alpha}}{2}=\sin^2{\frac{\alpha}{2}}$, we have $$y_{1,k} =\sin^2{\left(\frac{k\pi}{2^n-1}\right)}$$ $$y_{2,k} =\sin^2{\left(\frac{k\pi}{2^n+1}\right)}$$ Some of these values repeat, for example $$y_{1,2^n-k}=y_{1,k-1} \text{, so we can limit } k=\overline{1,2^{n-1}}$$ $$y_{2,2^n-k}=y_{2,k+1} \text{, so we can limit } k=\overline{0,2^{n-1}-1}$$ which gives $2^n$ periodic points altogether.
For which $n$ is $(3+i\sqrt{3})^{n}$ a real number
Write $3+i\sqrt{3}$ in the $re^{i\theta}$ form. $$3+i\sqrt{3} = \sqrt{12}\, e^{i\tfrac{\pi}{6}}$$ $$(3+i\sqrt{3})^{n} = 12^{n/2}\, e^{i\tfrac{n\pi}{6}}$$ This is real exactly when $n=6k$ for some integer $k$.
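A quick check with $n=6$, the smallest positive case: $(3+i\sqrt{3})^2=6+6i\sqrt{3}$, so $$(3+i\sqrt{3})^6=(6+6i\sqrt{3})^3=6^3(1+i\sqrt{3})^3=216\cdot\left(2e^{i\pi/3}\right)^3=216\cdot 8e^{i\pi}=-1728=12^3 e^{i\pi},$$ which is indeed real.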
Solve for $x, y, z$, and $t$
Adding, $7(x^2+y^2)=z^2+t^2$, whence $7 \mid z^2+t^2$, which forces $7 \mid z$ and $7 \mid t$. Thus $49 \mid z^2+t^2$, so $7 \mid x^2+y^2$, which in turn forces $7 \mid x$ and $7 \mid y$. Then $(\frac{x}{7}, \frac{y}{7}, \frac{z}{7}, \frac{t}{7})$ is a solution as well. Continuing this infinite descent gives $7^n \mid x, y, z, t$ for all positive integers $n$, so $x=y=z=t=0$.
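The step $7\mid z^2+t^2\Rightarrow 7\mid z,t$ deserves a word: the squares modulo $7$ are $\{0,1,2,4\}$, and no two of $1,2,4$ sum to $0$ modulo $7$ ($1+1=2$, $1+2=3$, $1+4=5$, $2+2=4$, $2+4=6$, $4+4\equiv1$), so $z^2+t^2\equiv0\pmod 7$ forces $z\equiv t\equiv0\pmod 7$.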
Solving poisson's equation with boundary conditions
This can in principle be solved easily using my answer to Poisson solver using Mathematica, especially since that code was intended to be used with visual representations of the boundary conditions and inhomogeneities. So I'll illustrate that here. I am using the terminology from electrostatics, but I hope the analogies are clear. The conductor shape is the boundary on which Dirichlet boundary conditions are enforced. The inhomogeneous term in your equation is a constant which I incorporate in the variable chargePlus. There is no need to worry about the absolute quantitative size of the terms because your equation can be rescaled to whatever length scales are needed, and similarly the function $\phi$ can be rescaled arbitrarily by choosing appropriate units for the inhomogeneous term. First execute the code in the linked answer above. conductors = Graphics[{{GrayLevel[.5], Rectangle[1.1 {-1, -1}, 1.1 {1, 1}]}, {Black, Disk[{0, 0}, 1]}, {GrayLevel[.5], Disk[{0, 1.1}, .9, {Pi, 2 Pi}], Disk[{0, -1.1}, .9, {0, Pi}]}}, PlotRangePadding -> 0, ImagePadding -> None] chargePlus = Graphics[{{GrayLevel[0], Rectangle[1.1 {-1, -1}, 1.1 {1, 1}]}, {GrayLevel[.5], Disk[{0, 0}, 1]}, {GrayLevel[0], Disk[{0, 1.1}, .9, {Pi, 2 Pi}], Disk[{0, -1.1}, .9, {0, Pi}]}}, PlotRangePadding -> 0, ImagePadding -> None] chargeMinus = Graphics[{}, PlotRange -> (PlotRange /. FullOptions[conductors])]; susceptibility = Graphics[{}, PlotRange -> (PlotRange /. FullOptions[conductors])]; Timing[potential = poissonSolver[conductors, chargePlus, chargeMinus, susceptibility];] ListPlot3D[potential, PlotRange -> All, PlotStyle -> {Orange, Specularity[White, 10]}] The variables chargeMinus and susceptibility contain empty graphics so I don't show them. You can essentially draw any shape you like instead of the "batman" outline. I just eyeballed a rough approximation of the shape you sketched in the question.
Given f(x) and two correlated random variables x & y, how do I estimate the correlation of f(x) & f(y)
Yes, an estimate of the correlation of f(x) and f(y) is required. The function f(x) is non-linear. The application is ultimately for estimating the expected value E of another function of two variables G(f(x),f(y)). I have the estimate of E[G(a,b)] when a and b are random variables, expressed in terms of mean, variance, and correlation of a and b. But now I need the estimate of E[G(f(x),f(y))]. I have the estimate for mean and variance of a=f(x) in terms of the mean and variance of x, and likewise the mean and variance of b=f(y) in terms of the mean and variance of y, but I still need an estimate of the correlation of a=f(x) and b=f(y) in terms of mean, variance, and correlation of x & y. Thanks
Evaluation of $\lim_{n \to \infty} \int_{0}^\infty \frac{1}{1+x^n} dx$
You can also show, with a beautiful contour integral, that for $n \geq 2$ $$ \int_{0}^{+\infty}\frac{\text{d}x}{1+x^n}=\frac{\pi}{n\sin\left(\displaystyle \frac{\pi}{n}\right)}. $$ Using that $$ \sin\left(\frac{\pi}{n}\right) \sim \frac{\pi}{n} \quad \text{as } n \to +\infty, $$ you find that $$ \int_{0}^{+\infty}\frac{\text{d}x}{1+x^n} \underset{n \rightarrow +\infty}{\longrightarrow}1. $$
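As a sanity check at $n=2$: $\int_0^{+\infty}\frac{\mathrm{d}x}{1+x^2}=\left[\arctan x\right]_0^{+\infty}=\frac{\pi}{2}$, which matches $\frac{\pi}{2\sin(\pi/2)}=\frac{\pi}{2}$.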
how can i prove this linear Algebra problem?
Part 1. We have that $v_1$ and $v_2$ are linearly dependent. This means there are $c_1,c_2$, not both zero, so that $$c_1 v_1 + c_2 v_2 = 0.$$ Since $T$ is linear, $$0=T(c_1 v_1 + c_2 v_2) = c_1 T(v_1) + c_2 T(v_2).$$ So $T(v_1)$ and $T(v_2)$ are linearly dependent. Part 2. False. Any counterexample will do. Let $v_1=(1,0)^T$, $v_2=(0,1)^T$ and let $T$ be represented by the matrix $A=\pmatrix{1 & 0\\1 & 0}.$ Then $Av_1=(1,1)^T$ and $Av_2=(0,0)^T$, linearly dependent. There are many examples.
If $p$ divides $a+b$ is it true that $p$ divides $a$ and $b$?
$2$ divides $1+1$. I think that's the simplest counterexample possible.
Help with distance question points A and B
The basic idea is to eliminate the river from the picture. Euclid's axiom tells you that the shortest distance between any two points (here, $A,B$) lies along the straight line through them. Here, the line segment $AB$ would be the hypotenuse of the right-angled triangle with sides $12$ and $2+3=5$.
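Carrying the computation through, the Pythagorean theorem gives $|AB|=\sqrt{12^2+5^2}=\sqrt{169}=13$.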
Generating function of Meixner polynomial
Simplifying we obtain $$m_n(x; b, c) = n! \sum_{k=0}^n {x+b+n-k-1\choose n-k} (-1)^k {x\choose k} c^{-k} \\ = n! \sum_{k=0}^n [t^{n-k}] \frac{1}{(1-t)^{x+b}} [t^k] \left(1-\frac{t}{c}\right)^x \\ = n! [t^n] \frac{1}{(1-t)^{x+b}} \left(1-\frac{t}{c}\right)^x.$$ This is the claim.
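A quick computational sanity check of this closed form (my own verification sketch, not part of the original derivation; the test values for $x$, $b$, $c$ are arbitrary):

from sympy import Rational, binomial, factorial, symbols

t = symbols('t')
x, b, c = Rational(3, 2), Rational(1, 3), Rational(5, 7)  # arbitrary sample values

for n in range(5):
    # the double-sum form of m_n(x; b, c)
    lhs = factorial(n) * sum(
        binomial(x + b + n - k - 1, n - k) * (-1)**k * binomial(x, k) * c**(-k)
        for k in range(n + 1)
    )
    # n! [t^n] (1-t)^(-(x+b)) (1 - t/c)^x, extracted from the series expansion
    gen = (1 - t)**(-(x + b)) * (1 - t/c)**x
    rhs = factorial(n) * gen.series(t, 0, n + 1).removeO().coeff(t, n)
    assert (lhs - rhs).simplify() == 0

print("closed form agrees for n = 0..4")

Since sympy works with exact rationals here, the comparison is symbolic rather than floating-point.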
Proving a commutative ring is isomorphic to Cartesian product of sets
Starting with your observation: if $R$ were isomorphic to $Q\times S$, then under that isomorphism we would have (i) $a:=(1,0)$ and (ii) $b:=(0,1)$, and the isomorphism would be the map sending $(r_1,r_2)$ to $r_1a+r_2b$. From this particular case you can construct a map using $a$ and $b$ in general form, by reversing (*) and (**). So let $\phi:Q\times S\longrightarrow R$ be defined by $(ar_1,br_2)\mapsto ar_1+br_2$. You can verify that $\phi$ is a ring homomorphism and that it is both surjective and injective; the verification relies on the properties $a^2=a$, $b^2=b$, $a+b=1$. For example, note that $(a,b)$ is the unit of $Q\times S$, playing the role of $(1,1)$; the point is that $a$ and $b$ need not be invertible. Take $\frac{\mathbb{Z}}{(12)}$ with $a=\bar{9}$ and $b=\bar{4}$. If you know what an $R$-module is, the answer is even easier, because the map $\phi$ is forced to be a map of $R$-modules, which then turns out to be a ring homomorphism.
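Verifying that example (the arithmetic spelled out): in $\mathbb{Z}/(12)$ with $a=\bar 9$ and $b=\bar 4$ we have $a^2=\overline{81}=\bar 9=a$, $b^2=\overline{16}=\bar 4=b$, $a+b=\overline{13}=\bar 1$, and $ab=\overline{36}=\bar 0$, yet neither $\bar 9$ nor $\bar 4$ is a unit modulo $12$.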
Intuition behind the relationship between defining a homomorphism of commutative rings and finding two elements in the range that commute
HINT: Suppose $f$ is a homomorphism from $\mathbb{Z}[x, y]$ to $R$. How many specific values of $f$ do I need to know, before I know all of $f$? (For instance, you don't need to tell me "$f(1)=1$," since that's guaranteed by the assumption that $f$ is a homomorphism; similarly, if I know $f(x-2)$, then I know $f(2x+3)$, etc.)
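Spelled out, one way to read the hint: such a homomorphism $f$ is completely determined by the pair $(f(x),f(y))$, and since $xy=yx$ in $\mathbb{Z}[x,y]$, the images must satisfy $f(x)f(y)=f(y)f(x)$; conversely, any pair of commuting elements of $R$ arises this way.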
Triviality of complexified tangent bundle of a closed surface
If $E \to X$ is a complex vector bundle of rank $k$, and $X$ is a CW complex of dimension $n < 2k$, then $E \cong F\oplus\varepsilon_{\mathbb{C}}^1$ where $F$ is a complex vector bundle of rank $k - 1$. In particular, as $TS\otimes_{\mathbb{R}}\mathbb{C}$ has rank $2$, and $S$ has dimension $2 < 4$, we see that $TS\otimes\mathbb{C} \cong L\oplus\varepsilon_{\mathbb{C}}^1$ where $L$ is a complex line bundle. As $L$ is determined up to isomorphism by its first Chern class, $TS\otimes_{\mathbb{R}}\mathbb{C}$ is trivial if and only if $c_1(L) = c_1(TS\otimes_{\mathbb{R}}\mathbb{C}) \in H^2(S; \mathbb{Z})$ is zero. If $S$ is not orientable, then $H^2(S; \mathbb{Z}) \cong \mathbb{Z}_2$ and the reduction modulo $2$ map $H^2(S; \mathbb{Z}) \to H^2(S; \mathbb{Z}_2)$ is an isomorphism. Under this map, $c_1(TS\otimes_{\mathbb{R}}\mathbb{C}) \mapsto w_2(TS\otimes_{\mathbb{R}}\mathbb{C})$. Now, as real bundles, $TS\otimes_{\mathbb{R}}\mathbb{C} \cong TS\oplus TS$, so $$w_2(TS\otimes_{\mathbb{R}}\mathbb{C}) = w_2(TS\oplus TS) = w_2(TS) + w_1(TS)w_1(TS) + w_2(TS) = w_1(TS)^2.$$ On a closed surface $S$, we have $w_2(TS) = w_1(TS)^2$; this is really the statement that the second Wu class $\nu_2 = w_1^2 + w_2$ vanishes, see this note for more details. Therefore, we see that $TS\otimes_{\mathbb{R}}\mathbb{C}$ is trivial if and only if $w_2(TS) = 0$. Now note that $\langle w_2(TS), [S]\rangle \equiv \chi(S) \bmod 2$, so $TS\otimes_{\mathbb{R}}\mathbb{C}$ is trivial if and only if $\chi(S)$ is even.
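Concretely: $\chi(\mathbb{RP}^2)=1$ is odd, so $T\mathbb{RP}^2\otimes_{\mathbb{R}}\mathbb{C}$ is nontrivial, while $\chi(K)=0$ for the Klein bottle $K$, so $TK\otimes_{\mathbb{R}}\mathbb{C}$ is trivial.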
Categoricity of second order theories - precisely what does it mean?
The point is that you already assume there is some set theory in play when you talk about second-order logic. In other words, you use first-order set theory to talk about second-order "everyday theories" like the natural numbers, etc. Different models of $\sf ZFC$ can have wildly different sets which they regard as "the true $\Bbb N$". If $M$ is a model of $\sf ZFC$, then there are $M_0$ and $M_1$ which are elementarily equivalent to $M$, but $M_0$ is countable, and therefore has only countably many "natural number objects", whereas $M_1$ is such that it has uncountably many "natural number objects". Certainly these are not isomorphic, even though the models themselves pretty much agree on "what things should look like". So yeah, what we actually have is that first-order $\sf ZFC$ proves that the second-order theories of $\Bbb N$ etc. are categorical. Or, if you will, inside a fixed universe of set theory, $\Bbb N$ has a categorical second-order theory. But moving to a different universe might produce different theories and different models.
A quotient of tensor algebra of $V \otimes k$
Hint: if $T(V)$ is the tensor algebra on $V$, define $f: k \oplus V \to T(V)$ by $f(x, v) = x + v$ (where I am identifying $k$ and $V$ with their images in $T(V)$ expressed as the sum $k \oplus V \oplus V^{{\otimes}2} \oplus \ldots$). Then $f$ induces an algebra homomorphism $T(k \oplus V) \to T(V)$ by the universal property of the tensor algebra.
How many ways are there to arrange a m×n matrix with m×n different elements?
Every location in the matrix is distinct, and there are $m\times n$ possible elements that can be placed in the first position, after which there are $m \times n -1$ possible elements for the next position, and so on. So the count is simply $(m\times n)!$
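For instance, a $2\times 3$ matrix filled with $6$ distinct elements can be arranged in $6!=720$ ways.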
Proving Mean Value Theorem with Rolle's Theorem?
For $f$ continuous on $[a,b]$ and differentiable on $(a,b)$, the standard proofs I've seen use the function that gives the difference of $f$ and the function whose graph is the line segment joining the points $\bigl(a,f(a)\bigr)$ and $\bigl(b,f(b)\bigr)$: $$ \phi(x)=f(x)-f(a)-{f(b)-f(a)\over b-a}(x-a). $$ From the continuity and differentiability of $f$ (and standard theorems such as "the difference of continuous functions is continuous") it follows that $\phi$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Since $\phi(a)=\phi(b)=0$, Rolle's theorem applies to $\phi$ on $[a,b]$. Writing down the result obtained from Rolle's theorem gives all that is desired.
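Carrying out that last step: Rolle's theorem produces $c\in(a,b)$ with $\phi'(c)=0$, that is, $$f'(c)-\frac{f(b)-f(a)}{b-a}=0, \qquad\text{i.e.}\qquad f'(c)=\frac{f(b)-f(a)}{b-a},$$ which is exactly the Mean Value Theorem.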