Proving Properties of Poisson in limit of Binomial
Theorem: If $X$ and $Y$ are two rv's with cdf's $F_X(x)$, $F_Y(y)$ and with existing MGFs $M_X(t)$ and $M_Y(t)$, then $F_X(\cdot)\equiv F_Y(\cdot) \iff M_X(\cdot) \equiv M_Y(\cdot)$. The MGF of the binomial is, setting as usual $p=\frac{\lambda}{n}$, $$\left[1-\frac{\lambda}{n}+\frac{\lambda}{n}e^t\right]^n.$$ Let's calculate the limit $$\lim_{n \to \infty}\left[1-\frac{\lambda}{n}+\frac{\lambda}{n}e^t\right]^n=\lim_{n \to \infty}\left[1+\frac{\lambda(e^t-1)}{n}\right]^n=e^{\lambda(e^t-1)}.$$ Now we immediately recognize the MGF of a Poisson$(\lambda)$ random variable: $$M_Y(t)=e^{\lambda(e^t-1)}.$$
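As a quick numerical sanity check (not part of the proof; the values of $\lambda$ and $t$ below are arbitrary choices), one can watch the binomial MGF approach the Poisson MGF as $n$ grows:

```python
import math

lam, t = 3.0, 0.5          # arbitrary test values
poisson_mgf = math.exp(lam * (math.exp(t) - 1))

# Binomial MGF [1 - p + p e^t]^n with p = lam/n approaches the Poisson MGF.
for n in (10, 100, 10_000):
    p = lam / n
    binom_mgf = (1 - p + p * math.exp(t)) ** n
    print(n, binom_mgf, poisson_mgf)
```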
Construct a function which has a minimum, two saddle points and no more critical points.
Since $f_y=2y$ and $f_x=p^{\prime}(x)$, you need to find $p(x)$ so that $p^{\prime}(x)=0$ for 3 values of $x$, and since $D=f_{xx}f_{yy}-(f_{xy})^2=2p^{\prime\prime}(x)$, you also need $p^{\prime\prime}(x)<0$ at two of the critical points and $p^{\prime\prime}(x)>0$ at the other critical point.
2D coordinates of rotating a "bent line"?
$$(b_1,b_2)=(a_1,a_2)-\vec{n_1}\cdot|AB|\cdot\cos(\angle OAB)+\vec{n_2}\cdot|AB|\cdot\sin(\angle OAB),$$ where $\vec{n_1}={(a_1,a_2)\over|OA|}$ and $\vec{n_2}={(-a_2,a_1)\over|OA|}$. Your first method should have worked as well, though.
Conclusions of The Fundamental Theorem of Algebra over $\Bbb{C}$.
Take a real polynomial $P$. As conjugation preserves products and sums, we know that $\forall x\in\mathbb C,~P(x) = 0 \Leftrightarrow P(\overline x)=0$. Note that $$(X-x)(X-\overline x) = X^2 - 2\operatorname{Re}(x)X + |x|^2 \quad (\star),$$ a real polynomial. Starting from the factorisation $$P = \lambda \prod_{i=1}^p (X - x_i),$$ (given by the fundamental theorem) where $x_i\in\mathbb C$, you can reorder the terms so that each $x_i$ is either real or satisfies $x_i = \overline x_{i+1}$. The terms with $x_i$ real give the degree $1$ factors of the factorisation, and with $(\star)$, the conjugate pairs give your factors of degree $2$.
Integral of $\sin^5(x)\cos(x)$
Your answer is correct. Note that by using a different integration method you can get an answer which looks different but is not. For example $$\int\sin^5(x)\cos(x)\,dx=\int\sin(x) (1-\cos^2(x))^2 \cos(x)\, dx$$ can be calculated using the substitution $v=\cos(x)$. If you do this, the answer looks different, but that's just an illusion. In the same way, you can use $$\sin^5(x)\cos(x)=\left(\frac{1-\cos(2x)}{2}\right)^2\frac{\sin(2x)}{2}$$ and then the substitution $u=\cos(2x)$.
Difference between F-space and Frechet space in W. Rudin's "Functional Analysis"
A standard example of an F-space which is not locally convex is the $L^p$ space with $0 < p < 1$. Let $(X,\mu)$ be your favorite measure space and let $L^p(\mu)$ be the space of all (equivalence classes of) measurable functions with $\int |f|^p\,d\mu\ < \infty$. Equip it with the metric $d(f,g) = \int |f-g|^p\,d\mu$. (Note we do not have a $1/p$ power on the outside, so this is not a norm as it fails to be homogeneous; but if we did include the $1/p$ the triangle inequality would fail.) Unless $(X,\mu)$ is something silly like a finite set, the resulting topological vector space is not locally convex. You can find some details in these lecture notes by Keith Conrad, especially example 2.19. The only part he doesn't discuss is the completeness, but this goes in the same way as the completeness of $L^p$ for $p \ge 1$.
Is it wrong to be irked when an author presents this linear algebra proposition without stating the subset is non-empty?
You're right that it needs to say nonempty, but it seems like a waste of energy to let it bother you. If you are surprised by people making mistakes, then I'd say you're in for a lifetime of surprises :)
How to prove that $(p-1)^2$ $\mid$ $(p-1)!$ when $p$ is a prime number and $p>5$?
Since $(p-2)>\frac{p-1}{2}>2$ and $\frac{p-1}{2}\cdot 2=p-1$, $(p-1)$ divides: $$ 1\cdot\color{red}{2}\cdot 3\cdot\ldots\cdot\color{red}{\frac{p-1}{2}}\cdot\frac{p+1}{2}\cdot\ldots\cdot(p-2)=(p-2)!.$$ Hence $(p-1)^2$ divides $(p-1)\cdot(p-2)!=(p-1)!$.
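The claim is easy to verify mechanically; here is a small sketch checking $(p-1)^2 \mid (p-1)!$ for every prime $5 < p < 100$ (the trial-division primality test is just for illustration):

```python
import math

def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# (p-1)^2 divides (p-1)! for every prime p > 5.
for p in range(7, 100):
    if is_prime(p):
        assert math.factorial(p - 1) % (p - 1) ** 2 == 0
print("verified for all primes 5 < p < 100")
```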
set of all trace zero matrices with bounded entries in $M_2(\mathbb{R})$
The map $X\mapsto \operatorname{Tr}(X)$ is continuous (as a linear map on a finite dimensional vector space), so $X$ is closed and bounded, hence compact. The map $A\mapsto \det A$ is continuous, so $Y$ is compact. Now, what we have to see is whether we can have negative or positive determinant. Taking $A:=\pm C\pmatrix{-1&0\\0&1}$, we can have negative or positive values (we choose $0\leq C<M$, where $M$ is such that $|a_{ij}|\leq M$ for $i,j\in\{1,2\}$ and all $A\in X$).
Finding the probability of a region inside a pyramid
Really, you've done just about all of the work. The only missing part is the easy one. You know the volume of the pyramid is one, so that the probability is the fraction of the volume between $z \in [1/3,2/3]$, or $$3 \int_{1/3}^{2/3} dz \, (1-z)^2 = \left (\frac23 \right )^3 - \left (\frac13 \right )^3 = \frac{7}{27}$$ Actually, you don't even need integration. You know that the area of the base at $z=2/3$ is $3 (1/3)^2 = 1/3$ and at $z=1/3$ is $3 (2/3)^2 = 4/3$. The volume is then $$\frac13 \cdot \frac23 \cdot \frac43 - \frac13 \cdot \frac 13 \cdot \frac 13 = \frac{7}{27}$$
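A small numerical cross-check of the slab integral (a sketch using a midpoint Riemann sum; not needed for the argument):

```python
# Midpoint Riemann sum for 3 * integral_{1/3}^{2/3} (1 - z)^2 dz, expected 7/27.
N = 100_000
a, b = 1 / 3, 2 / 3
h = (b - a) / N
total = 3 * h * sum((1 - (a + (i + 0.5) * h)) ** 2 for i in range(N))
print(total, 7 / 27)
```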
Find the function satisfying conditions. Is my method valid?
Your second approach is not valid, for the reason you state: there are many functions that cannot be expressed as power series.
Linear algebra done right 7.C.11
If the matrix representation of two operators with respect to some basis is the same, then the operators are equal, and this doesn't have to be the case, even in your situation. In your case, call the distinct eigenvalues $\lambda_1,\lambda_2,\lambda_3$. Since $T_1$ has distinct eigenvalues, it is diagonalizable. Since $T_1$ is normal, you can choose an orthonormal basis $(v_1,v_2,v_3)$ of $\mathbb{F}^3$ consisting of eigenvectors of $T_1$ (with $T_1v_i = \lambda_i v_i$). The same argument applies to $T_2$, resulting in an orthonormal basis $(w_1,w_2,w_3)$ of eigenvectors of $T_2$ (with $T_2 w_i = \lambda_i w_i$). Define an operator $S \colon \mathbb{F}^3 \rightarrow \mathbb{F}^3$ by $S(v_i) = w_i$. Since $S$ sends an orthonormal basis to an orthonormal basis, the map $S$ is an isometry and we have $$ (S^{*}T_2S)(v_i) = S^{*}T_2(w_i) = S^{*}(\lambda_i w_i) = \lambda_i S^{-1}(w_i) = \lambda_i v_i = T_1(v_i) $$ and so $T_1 = S^{*}T_2S$.
A Question About Probability Inspired By A Textbook
In your case the probability of occurrence is already given in the question: it is $40/100=0.4$. There are three events in total, $X$, $Y$, and $Z$, and $X$ has a $40\%$ chance of occurring, so the probability of $X$ is $P(X)=40/(40+45+15)=0.4$.
Help with set builder notation
Your set is read as: the set of integers $a$ whose square $x$ is less than $100$, which equals $\{-9,-8,\ldots,8,9\}$; this isn't what you wanted to encode. Minor modifications can make your definition work; you just need to think about what you need to say. Firstly, you need to say that $a$ itself is less than $100$, whereas currently you say that something else is less than $100$. Secondly, you need to say that $a$ is the square of something, not that something is the square of $a$.
Over a commutative ring, $AB = I_n$ and $BA = I_m$ implies that $m = n$.
Let $\mathfrak{m}$ be a maximal ideal of $R$, and let $k=R/\mathfrak{m}$. If $C=(c_{ij})\in M_{p\times q}(R)$, define $\bar{C}=(\overline{c_{ij}})_{ij}\in M_{p\times q}(k)$. It is not difficult to see that $\overline{C_1C_2}=\overline{C}_1 \overline{C}_2$, with the obvious notation. Now if $AB=I_n$ and $BA=I_m$, then $\bar{A}\bar{B}=I_n\in M_n(k)$ and $\bar{B}\bar{A}=I_m\in M_m(k)$. Since $k$ is a field, a classical result of linear algebra tells you that $n=m$.
Proof that the DE corresponds to a mapping of circle.
As the comment suggests: $$u(t) = Ce^{it}.$$ By Euler's formula we have $$Ce^{it}=C(\cos(t)+i\sin(t)),$$ which is the polar form of the equation of a circle of radius $|C|$.
What is the ratio between the area of square $WXYZ$ and the area of square $ABCD$?
$XW=YZ=\frac{1}{3}BD$, $XY=WZ=\frac{1}{3}AC$ and $XYZW$ is a square. Thus, the ratio is $\frac{2}{9}$ because $S_{ABCD}=\frac{1}{2}AC\cdot BD$ and $$\frac{S_{XYZW}}{S_{ABCD}}=\frac{\frac{1}{3}AC\cdot\frac{1}{3}BD}{\frac{1}{2}AC\cdot BD}=\frac{2}{9}.$$ For example, let $M$ be the midpoint of $BH$. Then $$\frac{XY}{AC}=\frac{MY}{MC}=\frac{1}{3}$$
Converting $\pi$ to senary base
WolframAlpha gives $3.050330051415124105234414053125321102301214442004115252553\\31420333131135535131233455334100151543444012343544520300450024223431..._6$ or $3.03232214303343241124122404140231421114302031002200344413221101\\04033213440043244401441042334133011323123421042011132102114201..._5$
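These digits can be reproduced with a short greedy digit-extraction sketch; note that `math.pi` is a double, so only the first dozen or so fractional digits are trustworthy:

```python
import math

def frac_digits(x, base, n):
    """First n fractional digits of x in the given base (greedy extraction)."""
    f = x - int(x)
    digits = []
    for _ in range(n):
        f *= base
        digits.append(int(f))
        f -= int(f)
    return digits

print(int(math.pi), frac_digits(math.pi, 6, 12))  # 3 [0, 5, 0, 3, 3, 0, 0, 5, 1, 4, 1, 5]
```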
Explanation for this simplification
Hint: The factorial is defined recursively as: $0!=1 \qquad (n+1)!=n!\cdot (n+1)$ so $$ (k+2)!=(k+1)!(k+2) $$ by definition.
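The recursive definition translates directly into code; a minimal sketch:

```python
def factorial(n):
    """Factorial from the recursive definition: 0! = 1, (n+1)! = n! * (n+1)."""
    if n == 0:
        return 1
    return factorial(n - 1) * n

# The identity in the hint: (k+2)! = (k+1)! * (k+2).
k = 5
assert factorial(k + 2) == factorial(k + 1) * (k + 2)
print(factorial(k + 2))  # 5040
```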
Why is it impossible to draw a circle with a Bezier curve?
Hint: Consider that the coordinates of a Bezier curve are polynomials in $t$, and that the coordinates of a circle all lie on $x^2+y^2=1$ (without a loss of generality). What can we say about polynomials $x(t),y(t)$ such that $x(t)^2+y(t)^2=1$?
A cubic graph has a cutpoint if and only if it has a bridge
HINT: Let $G$ be a cubic graph, and suppose that the vertex $u$ is a cut vertex. Let $v_1,v_2$, and $v_3$ be the neighbors of $u$ in $G$. Show that every vertex of $G-u$ is in the same component of $G-u$ as at least one of the vertices $v_1,v_2$, and $v_3$. Conclude that $v_1,v_2$, and $v_3$ cannot all be in the same component of $G-u$. Without loss of generality assume that $v_1$ and $v_2$ are in different components of $G-u$. Show that if $\{u,v_1\}$ is not a bridge in $G$, then $v_3$ is the same component as $v_1$ in $G-u$. Similarly, if $\{u,v_2\}$ is not a bridge in $G$, then $v_3$ is the same component as $v_2$ in $G-u$. Conclude that at least one of the edges $\{u,v_1\}$ and $\{u,v_2\}$ must be a bridge in $G$.
Is the formula below for $S(n,k)$ correct?
By simplifying the formula \begin{eqnarray*} S(n,k) =\frac{1}{k!} \sum_{i=0}^{k} (-1)^{k-i} \binom{k}{i} i^n \end{eqnarray*} you have probably bumped $n$ down by one. The value $S(6,3)=90$ is correct, try looking at the partitions of $[6]$ into $3$ blocks.
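A quick way to test a candidate formula is to evaluate it directly; here is a sketch of the inclusion-exclusion formula above, which indeed gives $S(6,3)=90$:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind via the inclusion-exclusion formula."""
    return sum((-1) ** (k - i) * comb(k, i) * i ** n for i in range(k + 1)) // factorial(k)

print(stirling2(6, 3))  # 90
```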
What are the asymptotics in $k$ of $\sum_{n=1}^\infty\frac{1}{n(n+k)}$?
For $0\not=k\in\mathbb{N}$ the series telescopes: $$S_k={1\over k}\sum_{n=1}^\infty\left({1\over n}-{1\over n+k}\right)={1\over k}\left(1+{1\over2}+\cdots+{1\over k}\right)\approx{\log k\over k}$$ so the sum $\sum S_k^2$ converges.
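A numerical sanity check of the closed form $S_k = \frac{1}{k}H_k$ (partial sums only, so agreement is approximate):

```python
def S(k, N=10 ** 5):
    """Partial sum of sum_{n>=1} 1/(n(n+k))."""
    return sum(1 / (n * (n + k)) for n in range(1, N))

def H(k):
    """k-th harmonic number."""
    return sum(1 / j for j in range(1, k + 1))

k = 50
print(S(k), H(k) / k)  # both close to H_50 / 50, roughly 0.09
```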
Complex Analysis integral of $\int_0^\infty \frac{\sin(x)}{x(1+x^2)^2}dx$
The integrand is even, so we write $$ \int_0^\infty \frac{\sin(x)}{x(1+x^2)^2} \, {\rm d}x = \frac{1}{2} \int_{-\infty}^\infty \frac{\Im\left(e^{ix}\right)}{x(1+x^2)^2} \, {\rm d}x \, .$$ The imaginary part can be pulled in front of the integral only while the integration variable is real (which would no longer be the case on a complex contour). On the other hand, using complex analysis requires a contour that avoids the pole at $x=0$. Therefore the integral is written as $$\Im \left(\int_{-\infty}^\infty \frac{e^{ix}}{2x(1+x^2)^2} \, {\rm d}x + \int_{|x|=\epsilon} \frac{e^{ix}}{2x(1+x^2)^2} \, {\rm d}x \right)$$ where the first integral is now understood as a principal value whose contour detours around the singularity at $0$ along a clockwise semicircle of radius $\epsilon$, while the second integral runs counter-clockwise over that $\epsilon$-semicircle to compensate for the detour. Eventually $\epsilon$ goes to $0$. The contour of the first integral can now be closed by an arc in the upper half-plane, $x=R e^{it}$ with $0<t<\pi$, whose contribution is easily seen to vanish as $R\rightarrow \infty$. As a result, the residue theorem can be applied and hence $$=\Im \left(\frac{1}{2} \, \left\{ 2\pi i \, {\rm Res}_{x=i} + i\pi \, {\rm Res}_{x=0} \right\} \frac{e^{ix}}{x(1+x^2)^2} \right) \\ = \Im \left( i\pi \frac{{\rm d}}{{\rm d}x} \frac{e^{ix}}{x(x+i)^2} \Bigg|_{x=i} + \frac{i\pi}{2} \right) = \Im \left( \frac{-3\pi i}{4e} + \frac{i\pi}{2} \right) = -\frac{3\pi }{4e} + \frac{\pi}{2} \, .$$
Solving differential equations given a companion matrix?
So far so good. Now you have to show that $$x'=Cx.$$ That is, you have to take the derivative of $$ \begin{pmatrix}C_1e^t+\frac{C_2e^{2t}}{4}+C_3e^{-t}\\ C_1e^t+\frac{C_2e^{2t}}{2}-C_3e^{-t}\\C_1e^t+C_2e^{2t}+C_3e^{-t}\end{pmatrix}$$ and show that it is the same as $$\begin{pmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -2 & 1 & 2\end{pmatrix} \begin{pmatrix}C_1e^t+\frac{C_2e^{2t}}{4}+C_3e^{-t}\\ C_1e^t+\frac{C_2e^{2t}}{2}-C_3e^{-t}\\C_1e^t+C_2e^{2t}+C_3e^{-t}\end{pmatrix}$$
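Rather than differentiating by hand, one can sanity-check the claim numerically (a sketch; the choices $C_1=C_2=C_3=1$ and $t=0.3$ are arbitrary):

```python
import math

C = [[0, 1, 0],
     [0, 0, 1],
     [-2, 1, 2]]

def x(t, c1=1.0, c2=1.0, c3=1.0):
    """The proposed general solution vector."""
    return [c1 * math.exp(t) + c2 * math.exp(2 * t) / 4 + c3 * math.exp(-t),
            c1 * math.exp(t) + c2 * math.exp(2 * t) / 2 - c3 * math.exp(-t),
            c1 * math.exp(t) + c2 * math.exp(2 * t) + c3 * math.exp(-t)]

t, h = 0.3, 1e-6
xprime = [(p - m) / (2 * h) for p, m in zip(x(t + h), x(t - h))]   # numerical x'(t)
Cx = [sum(C[i][j] * xj for j, xj in enumerate(x(t))) for i in range(3)]
print(xprime)
print(Cx)  # should agree with xprime to high accuracy
```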
Does polar form mean $re^{i \theta}$ or $r(\cos \theta + i \sin \theta)$?
Euler's famous formula says $$e^{i\theta}=\cos(\theta)+i\sin(\theta).$$ Then $$re^{i\theta}=r(\cos(\theta)+i\sin(\theta)).$$ You will sometimes find the notations $$r\text{ cis}(\theta)$$ or $$r\angle\theta.$$ You can indeed call these polar forms, recalling the identities $$\begin{cases}x=r\cos(\theta),\\y=r\sin(\theta).\end{cases}$$
Does there exist such an $f$?
As others have said there is no such $f$ when $X\subset{\mathbb R}$. But in an arbitrary metric space $X$ you may have such $f$'s. Here is an example: Let $X:={\mathbb R}$ and provide $X$ with the metric $$d(x,y):={|x-y|\over 1+|x-y|}\ .$$ Then all distances in $X$ are $<1$, so $X$ is "bounded". But for $|x-y|\ll 1$ the new distance is pretty much the usual one. It follows that the unbounded function $$f(x):=\log\bigl(1+|x|\bigr)$$ is locally Lipschitz continuous with Lipschitz constant $1$, whence uniformly continuous.
An elegant way to find the range of functions?
You can find the extreme values by using derivatives but since you already found the following bounds, you just need to show that these belong to the range; continuity does the rest. From $(x-1)^2 \ge 0 \implies x^2+1 \ge 2x$, you have: $$\frac{x}{x^2+1} \le \frac{1}{2}$$ and from $(x+1)^2 \ge 0 \implies x^2+1 \ge -2x$, you have: $$-\frac{1}{2} \le \frac{x}{x^2+1} $$ Now note that: $f(1) = \tfrac{1}{2}$ and $f(-1) = -\tfrac{1}{2}$; $f$ is continuous, so it takes all values between $-\tfrac{1}{2}$ and $\tfrac{1}{2}$.
Conjugation action on a semi-direct product $N\rtimes H$
Under conjugation, one orbit is just the identity, so all other elements of $N$ are conjugate in $G$ and therefore have the same order. Suppose this common order is divisible by the prime $p$; then the $p$-th power of an element of $N$ is still in $N$ but has smaller order, i.e., it has to be the identity, so the common order is exactly $p$ and $N$ is a $p$-group. Since $N$ is a $p$-group, $Z(N)$ is non-trivial and is a characteristic subgroup of $N$. Therefore $Z(N)$ is fixed by $H$, and since all non-identity elements of $N$ are conjugate, all elements of $N$ are in $Z(N)$.
Prove $\left|\frac{2a}{b} + \frac{2b}{a} \right| \ge 4$ for all nonzero $a,b$
Note that this is just $$\left|\dfrac{2}{x} + 2x\right|\ge 4$$ Now squaring both sides like you suggest gives $$\dfrac{4}{x^2}+8+4x^2\ge 16$$ $$\iff \dfrac{4}{x^2}-8+4x^2\ge 0$$ $$\iff \left(\dfrac{2}{x} - 2x\right)^2\ge 0$$
Principle of Transfinite Induction
The ordinals are what you get when you iterate the operations of 'successor' and 'supremum' indefinitely, much like the natural numbers are what you get when you iterate the sole operator 'successor' indefinitely. Start with $0$. Iterating successors we get the natural numbers, which are the finite ordinals: $$0, 1, 2, 3, \dots $$ Now take the supremum. We call this ordinal $\omega$. Iterating successors we get a longer sequence: $$0, 1, 2, \dots, \omega, \omega+1, \omega+2, \dots$$ The supremum of this sequence is the ordinal $\omega + \omega$. We can then take even more successors: $$0, 1, \dots, \omega, \omega+1, \dots, \omega + \omega, \omega + \omega + 1, \dots$$ The supremum of this sequence is the ordinal $\omega + \omega + \omega$. Continuing in this way gives rise to the following ordinals: $$\omega,\ \omega+\omega,\ \omega+\omega+\omega,\ \omega + \omega + \omega + \omega, \dots$$ So we can take another supremum to obtain the ordinal $\omega \cdot \omega$. Likewise we obtain $\omega \cdot \omega \cdot \omega$, and so on, the supremum of all of which is $\omega^{\omega}$. Then we obtain $\omega^{\omega^{\omega}}$ and so on, the supremum of all of which is called $\varepsilon_0$... and so on. Continuing even further gives rise to the countable ordinals. But that itself is a set of ordinals, so it has a supremum, called $\omega_1$. Then we can take its successors $\omega_1+1$ and so on. The ordinals are precisely the things which can be obtained by iterating the successor operation and taking suprema of sets of ordinals. More formally, the (von Neumann) ordinals are the elements of the class $\mathrm{Ord}$, which is the closure of $\varnothing$ under the successor operation $x \mapsto x \cup \{ x \}$ and under taking arbitrary unions.
The principle of transfinite induction essentially says that, for a given formula $P(x)$, if $P(0)$ is true, and the truth of $P(\alpha)$ is preserved by taking successors and suprema, then $P(\alpha)$ must be true of all ordinals $\alpha$. (We can omit the $P(0)$ case because $0 = \sup(\varnothing)$.)
The motivation of weak topology in the definition of CW complex
This is an answer to the motivation for the definition of CW-complex. The main one is given by the following from Section $5$ of Whitehead's paper Combinatorial Homotopy I. Let $K$ be a CW-complex. (A) A map $f: X \to Y$ of a closed (open) subset $X \subset K$ into any space $Y$ is continuous provided $f| X \cap \bar{e}$ is continuous for each cell $e \in K$. Nowadays we might do it a bit differently and say that the topology on $K$ is defined so that this property holds. This can be done by defining the $n$-skeleton $K^n$ to be obtained from the $(n-1)$-skeleton $K^{n-1}$ by attaching $n$-cells $e^n_\lambda$ by maps $a_\lambda: S^{n-1} \to K^{n-1}$. Then define the topology on $K$ so that a map $f: K \to Y$ is continuous if and only if its restriction to $K^n$ is continuous for all $n \geqslant 0$. This accords with the modern view that topology is "about" the category of spaces and continuous maps, and to construct these maps we often use the universal properties of pushouts and colimits (and for other examples, pullbacks and limits). A gentle introduction to adjunction spaces and finite cell complexes is given in the book Topology and Groupoids. Ioan James once said at Oxford that Whitehead took a year to prove the product property (H), which is now subsumed under the notion of convenient category of spaces. His motivation can also be seen in his highly original 1941 papers in the Annals of Math, referred to in CHI. The latter paper, and CHII and Simple Homotopy Types, constitute a rewrite and extension of these essentially prewar papers.
Check integral for convergence
Note that on the interval $[0,1]$ we have $e^x\leq x+5$. This implies that $\frac{1}{x+5}\leq \frac{1}{e^x}$. Then using this for the integral, we get that: \begin{equation} \begin{split} &\int_0^1\frac{1}{xe^x}dx\geq \int_0^1\frac{1}{x(x+5)}dx\\ &=\frac{1}{5}\int_0^1\frac{1}{x}dx-\frac{1}{5}\int_0^1\frac{1}{x+5}dx\quad \text{via partial fraction decomposition}\\ &=\frac{1}{5}\Big(\ln(1)-\lim_{a\to 0^+}\ln(a)\Big)-\frac{1}{5}(\ln(6)-\ln(5))=\infty \end{split} \end{equation} Therefore the integral diverges.
Prove that the field of quotients of an integral domain $D$ is the smallest field containing $D$. . My Attempt Shown
"Since $F'$ is a field, $F'$ is a subfield of $E$." This line is pretty much assuming what you are currently trying to prove. You will have to actually make reference to how $D$ lies in $E$ to prove the statement. Why not just try to make a map from $F$ into $E$? Since you already have $D\subseteq E$, it's natural to just say $\phi(a)= a\in E$ for all $a\in D$. What about other elements of $F$? For each $b\neq 0$ in $D$, there is an element $b^{-1}\in E$ which is $b$'s inverse, and an element $b^{-1}\in F$ which plays the same role in $F$. Naturally you'd want $\phi(b^{-1})=b^{-1}\in E$, where the first $b^{-1}$ is in $F$ and the latter is in $E$. To map the remaining elements of $F$, we would require $\phi(ab^{-1})=\phi(a)\phi(b^{-1})=ab^{-1}$ (again the first $b^{-1}$ is the one in $F$ and the last one is in $E$). Verify this gives a well-defined injective ring homomorphism from $F$ into $E$. The image of $\phi$ is the copy of $F$ that you seek.
Let $f:K\to K$ with $\|f(x)-f(y)\|\geq ||x-y||$ for all $x,y$. Show that equality holds and that $f$ is surjective.
We'll prove the statement for $K$ a compact metric space. Let $\epsilon >0$. The compact space $K$ can be covered by finitely many open balls $B(y_i, \epsilon/2)$, $i=1, \ldots m_{\epsilon}$. Therefore, there exist at most $m_{\epsilon}$ points in $K$, any two at distance at least $\epsilon$. Let $n_{\epsilon}$ be the largest number $n$ so that there exist $n$ points $x_1$, $\ldots$, $x_n$ in $K$ with $d(x_i, x_j) \ge \epsilon$ for all $i\ne j$. Let $\mathcal{K}_{\epsilon} \subset K^{n_{\epsilon}}$ be the set of all such $n_{\epsilon}$-tuples ${\bf x}\colon =(x_1, \ldots, x_{n_{\epsilon}})$. It is clear that $\mathcal{K}_{\epsilon}$ is compact. Define the function $F \colon \mathcal{K}_{\epsilon}\to [0, \infty)$, $$F (x_1, \ldots, x_{n_{\epsilon}})= \sum_{i< j} d(x_i, x_j)$$ Clearly, $F$ is continuous. Let ${\bf x}$ be a point of maximum. Since $d(f(x_i), f(x_j) ) \ge d(x_i, x_j)$, it follows that $(f(x_1), \ldots, f(x_{n_\epsilon}))$ is also in $\mathcal{K}_{\epsilon}$ and moreover, $d(f(x_i), f(x_j) ) = d(x_i, x_j)$ for all $i$, $j$. We are almost done. Consider now $y$, $z$ in $K$. There exist $i$, $j$ so that $d(f(x_i), f(y))< \epsilon$ and $d(f(x_j), f(z))< \epsilon$. Therefore, $d(f(y), f(z)) < d(f(x_i), f(x_j)) + 2 \epsilon$. Now, we also have $d(x_i, y) \le d(f(x_i), f(y))$ and $d(x_j, z)\le d(f(x_j), f(z))$. Therefore, $d(x_i, y)$, $d(x_j, z) < \epsilon$. Therefore, $d(x_i, x_j) < d(y,z) + 2 \epsilon$. Summing up \begin{eqnarray} d(f(y), f(z)) &< &d(f(x_i), f(x_j)) + 2 \epsilon\\ d(x_i, x_j) &< &d(y,z) + 2 \epsilon\\ d(f(x_i), f(x_j)) &= &d(x_i, x_j) \end{eqnarray} Therefore, $d(f(y), f(z)) < d(y,z) + 4 \epsilon$. Since $\epsilon$ was arbitrary, we get $d(f(x), f(y))\le d(x,y)$. Obs: The proof of @Harald Hanche-Olsen: here gave me the idea to consider $\epsilon$-nets; thanks to @David Mitra: for the hint. $\bf{Added:}$ It is not hard to find examples of isometries of totally bounded but non-compact metric spaces $K$ that are not surjective.
Take for instance $K = (e^{i n \theta})_{n \in \mathbb{N}}$, where $\frac{\theta}{\pi} \in \mathbb{R} \backslash \mathbb{Q}$, like $((.6+ .8 i)^n)_{n \in \mathbb{N}}$. However, I could not find examples of totally bounded metric spaces with expansive maps $f$ that are not isometries. It turns out that there are none. So we'll show that $f$ is an isometry if we only assume $K$ totally bounded. Again we consider $\epsilon$-nets. The gauge function for an $\epsilon$-net is now $$G(x_1, \ldots,x_n) \colon = \prod_{i< j} d(x_i, x_j)$$ bounded above by $\text{diam}(K)^{n_{\epsilon}}$. Let $g$ be the supremum of $G$ over all possible elements in $\mathcal{K}_{\epsilon}$. The function $G$ may not achieve its supremum $g$ since $K$ is a priori not assumed compact. However, let ${\bf x}= (x_1, \ldots, x_{n_{\epsilon}})$ in $\mathcal{K}_{\epsilon}$ be so that $G({\bf x}) \ge \frac{1}{1+\epsilon} g$. It is easy to see that this implies $$d(x_i, x_j) \le d(f(x_i),f(x_j)) \le (1+\epsilon) d(x_i, x_j)$$ for all $i$, $j$. Again, let $y$, $z$ in $K$ and $i$, $j$ so that $d(f(x_i), y) < \epsilon$ and $d(f(x_j), z) < \epsilon$. Like before, we show that $$d(f(y), f(z)) \le (1+\epsilon) ( d(y,z) + 2 \epsilon) + 2 \epsilon$$ and we are done.
Solution of $19 x \equiv 1 \pmod{35}$
Your question can be stated like this: What is the inverse of $19$ modulo $35$? Since $\gcd(19,35) = 1$, $19$ is invertible mod $35$. Hence there is a unique solution $x\in\{0,\ldots,34\}$ of your equation. How to find it? Method 1: You could just check all these possibilities to find it. Method 2: A more structured (and in general, more efficient) way is to use the extended Euclidean algorithm on the pair $(35,19)$. It gives you numbers $a,b\in\mathbb{Z}$ with $19a + 35b = 1$. Now reading this equation modulo $35$, $$19a\equiv 1\mod 35.$$ Method 3: Note that Fermat's little theorem does not apply directly, since $35$ is not prime; instead, Euler's theorem gives $19^{\varphi(35)}=19^{24}\equiv 1\mod 35$, so $19\cdot 19^{23} \equiv 1\mod 35$. Now you can evaluate $19^{23}$ by the square-and-multiply algorithm. However note that in general, this is slower than the extended Euclidean algorithm. Concerning your question on the smallest $x$: The solution set has the structure $x_0 + 35\mathbb{Z}$. So if you have any solution $x_0$, the smallest non-negative solution is the remainder of $x_0$ modulo $35$.
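Method 2 can be sketched in a few lines; the recursive extended Euclidean algorithm below finds the inverse $x=24$ (indeed $19\cdot 24 = 456 = 13\cdot 35 + 1$):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = extended_gcd(19, 35)
assert g == 1 and 19 * s + 35 * t == 1
x = s % 35      # reduce the coefficient of 19 modulo 35
print(x)        # 24
assert 19 * x % 35 == 1
```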
Please calculate $\lim_{x\rightarrow 0}\frac{\ln(\cos x)}{x^2}$
Hint #1: You can use these series (you can find them here) $$\cos(x)\approx1-\frac{x^2}{2}+o(x^2)$$ $$\ln(1-x)\approx -x+o(x)$$ Here $o$ denotes little-$o$ notation. After consistently applying these series, you will get the correct answer $-\frac{1}{2}$. Edit: Hint #2 $$\ln(\cos(x))\approx\ln\left(1-\frac{x^2}{2}+o(x^2)\right)\approx-\frac{x^2}{2}+o\left(-\frac{x^2}{2}+o(x^2)\right)\approx-\frac{x^2}{2}+o(x^2)$$ All properties of $o(f)$ can be found here (in the section "Little-o notation").
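A quick numerical check of the limit (not a proof):

```python
import math

for x in (0.1, 0.01, 0.001):
    print(x, math.log(math.cos(x)) / x ** 2)  # tends to -1/2
```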
Length of polynomial ring modulo a homogeneous regular sequence
Using the following exact sequence $$0 \rightarrow R/(f_1,\dots,f_m)(-s_{m+1}) \stackrel{f_{m+1}}{\rightarrow}R/(f_1,\dots,f_m) \rightarrow R/(f_1,\dots,f_{m+1}) \rightarrow 0$$ you get $H_{R/(f_1,\dots,f_{m+1})}(t)=(1-t^{s_{m+1}})H_{R/(f_1,\dots,f_m)}(t)$. Inductively $H_{R/(f_1,\dots,f_{m+1})}(t)=(1-t^{s_{m+1}})\cdots(1-t^{s_1})H_R(t)$.
Big O notation - estimation of run time
Okay, if $m^2$ were the only relevant changing value, then I could guess your ratio of $m$ values was roughly $14.7$ (roughly $\sqrt{216}$). The notation is mostly used as a way to show a term, or set of terms, that approaches the value of the overall function as its variable(s) get larger. For example: $$4z^2+ 20876549321768543219865740$$ has a relatively huge constant term. But by $$z=2284543133854.586647726934526$$ that constant only accounts for roughly half the value of the polynomial; above this, the term with a variable starts to take over half. By $$z=22845431338545.86647726934526$$ the constant term is under $0.1$ percent of the value of the function. Above this, a good approximation is simply $$4z^2$$ and, up to the constant factor $4$, this is estimated by $$z^2.$$ With $n$ constant, your function, if large enough to overwhelm other terms, will roughly scale with the square of the ratio of $m$ values.
Is $\prod_{\mathbb{R}}\mathbb{R} = \mathbb{R}^\mathbb{R}$?
Your argument about the product starts well, but then it goes off the rails. Why do you say that "it doesn't account for functions that are not surjective"? Or "functions with vertical asymptotes"? It certainly doesn't include functions that are not defined on all of $\mathbb{R}$, but then neither does $\mathbb{R}^{\mathbb{R}}$. As for surjectivity, nothing in the definition implies surjectivity: the "constant tuple" that is $1$ everywhere is an element of $\prod \mathbb{R}$; what makes you think this is surjective? Remember the very definition of a direct product: if $\{X_i\}_{i\in I}$ is a family of sets, then $$\prod_{i\in I}X_i = \Bigl\{ f\colon I\to \cup X_i \,\Bigm|\, f(i)\in X_i\text{ for each }i\in I\Bigr\}.$$ And if $X$ and $Y$ are sets, then $$X^Y = \{f\colon Y\to X\}$$ that is, all functions with domain all of $Y$ and with $f(y)\in X$ for all $y\in Y$. So $$\prod_{r\in\mathbb{R}}\mathbb{R} = \Bigl\{ f\colon \mathbb{R}\to\mathbb{R}\,\Bigm|\, f(i)\in\mathbb{R}\text{ for each }i\in\mathbb{R}\Bigr\} = \mathbb{R}^{\mathbb{R}}.$$
Real Analysis, Folland problem 5.5.63 Hilbert Spaces
a) It is actually simpler: $\sum_{n=1}^{\infty}|\langle x,u_n\rangle|^2 \leq \lVert x\rVert^2 < +\infty$ (Bessel's inequality). So $\lim_{n \to \infty}|\langle x,u_n\rangle|^2=0$, hence $\lim_{n \to \infty}\langle x,u_n\rangle = 0$. This holds for all $x$, so by the Riesz representation theorem, for all $f \in H^*$, $\lim_{n \to \infty}f(u_n)=0$. b) Let $x_0 \in B-S$. Let $V$ be a weak neighbourhood of $x_0$. There exist $\epsilon>0$ and $\phi_1,...,\phi_n \in H^*$ such that $x_0+V_{\epsilon,\phi_1,...,\phi_n} \subseteq V$. Since $H$ is infinite-dimensional, there exists $x\neq 0$ such that $x \in \bigcap_{k=1}^n \ker(\phi_k)$. If $f(t)=||x_0+tx||$, $t\in \mathbb{R}$, then $f$ is continuous on $\mathbb{R}$ and we have $f(0)=||x_0||<1$ and $\lim_{t \to \infty} f(t) =+\infty$. We apply the intermediate value theorem: there exists $t_0>0$ such that $f(t_0)=1$. Then $x_1=x_0+t_0x \in S$ and we have: $|\phi_k(x_1-x_0)|=t_0|\phi_k(x)|=0 \le \epsilon,\space 1\le k \le n$. So $x_1 \in x_0+V_{\epsilon,\phi_1,...,\phi_n}\subseteq V$ and $S \cap V \neq \emptyset$. Finally $S$ is weakly dense in $B$.
Find hypotenuse of a sphere given radius.
I gather that you are trying to understand Ross Millikan's answer to this question about the work done when oil is lifted out of a spherical tank. A reason for your confusion is that you are drawing the wrong diagram. Consider the diagram below: The circle with radius $r_c$ is a cross-section of a sphere with radius $r_s$ at a distance of $|z|$ from the center. By the Pythagorean Theorem, $$r_c^2 + |z|^2 = r_s^2$$ Since the square of a real number is the square of its absolute value, $$r_c^2 + z^2 = r_s^2$$ Solving for $r_c$ yields \begin{align*} r_c^2 & = r_s^2 - z^2\\ r_c & = \sqrt{r_s^2 - z^2} \end{align*} In that problem, the sphere has radius $6$, so the radius of the cross-section is $$r_c = \sqrt{36 - z^2}$$ Therefore, the area of the cross-section is $$A = \pi r_c^2 = \pi\left(\sqrt{36 - z^2}\right)^2 = \pi(36 - z^2)$$ Thus, the integral $$V = \int_{-6}^{0} \pi(36 - z^2)~\text{dz}$$ yields the volume of the oil in the bottom half of the sphere. Multiplying by the density $\rho$ of the oil yields the mass $$m = \int_{-6}^{0} \rho\pi(36 - z^2)dz$$ of the oil. The work done to lift the oil out of the sphere is $$W = \int_{-6}^{0} 9.8\rho(8 - z)\pi(36 - z^2)dz$$ where $9.8~\text{m}/\text{s}^2$ is the upward acceleration against gravity and $8 - z$ is the distance oil at a height $z$ is lifted to reach the top of the spout.
Conversion of the polar equation $ r=\sin(4\theta) + 2$ into Cartesian.
Notice $$ r = \sin ( 4 \theta ) + 2 = 2 \sin ( 2 \theta ) \cos ( 2 \theta ) + 2 = 4 \sin \theta \cos \theta ( \cos^2 \theta - \sin^2 \theta) + 2 $$ using $x = r \cos \theta$, $y = r \sin \theta $ and $x^2 + y^2 = r^2 $, we have $$ \boxed{\sqrt{ x^2 + y^2} = 4 \frac{xy(x^2-y^2)}{(x^2+y^2)^2}+ 2}$$
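A spot-check of the boxed identity at a few angles (a sketch using the standard convention $x=r\cos\theta$, $y=r\sin\theta$; any angles would do):

```python
import math

# Convention: x = r*cos(theta), y = r*sin(theta).
for theta in (0.3, 1.1, 2.5):
    r = math.sin(4 * theta) + 2
    x, y = r * math.cos(theta), r * math.sin(theta)
    lhs = math.sqrt(x * x + y * y)
    rhs = 4 * x * y * (x * x - y * y) / (x * x + y * y) ** 2 + 2
    print(lhs, rhs)  # the two columns agree
```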
An example of a semiring which is not a ring.
Let $a< c<d< b$. Then $(a,b)\setminus (c,d)=(a,c]\cup [d,b)$, which is clearly not an interval. Thus the set of intervals is not closed under set difference.
Classification of prime ideals in $\mathbb{Z}[i]$
For your last question, every prime ideal of $\mathbb{Z}$ is contained in a maximal ideal of $R$, which is prime. For the first question, notice that $F_q [x]/(x^2+1)\cong(\mathbb{Z}[i])/(q)$. So since the first splits into a ring with two prime ideals, the second does as well, which shows that $q$ is not prime in $\mathbb{Z}[i]$, but in fact is a product of two primes. The ideal $(-x-a)=(x+a)$, so reversing the sign of $x$ switches between the ideals, which means the primes are complex conjugates. Finding the exact primes using this method does not work well; for instance, this method would suggest that the prime factors of $5$ and $13$ are $2+i,2-i$ and $5+i,5-i$ respectively (since $2$ and $5$ are square roots of $-1$ modulo $5$ and $13$ respectively). The first case is true, and the second is not; the norm of $5-i$ is $26$, not $13$. The problem is that $5-i=(1+i)(2-3i)$, where $(2-3i)$ is an actual prime dividing $13$. So one method you could use to find the actual primes is to take $a+i$ (where $a$ is a square root of $-1$ in your finite field) and find its norm; if its norm is not prime, that means you can factor $a+i$. Keep doing so until you get a piece with the right norm.
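The recipe in the last paragraph can be sketched in code. The snippet below uses Python's `complex` numbers as stand-ins for Gaussian integers (fine for small inputs, since quotients are rounded back to integer coordinates) and runs the Euclidean algorithm in $\mathbb{Z}[i]$ to extract a prime of the right norm:

```python
def gauss_div(a, b):
    """Quotient a/b rounded to the nearest Gaussian integer."""
    q = a / b
    return complex(round(q.real), round(q.imag))

def gauss_gcd(a, b):
    """Euclidean algorithm in Z[i], using complex numbers with integer parts."""
    while b != 0:
        a, b = b, a - gauss_div(a, b) * b
    return a

# 5^2 = 25 = -1 (mod 13), so gcd(13, 5 + i) is a Gaussian prime of norm 13.
g = gauss_gcd(13 + 0j, 5 + 1j)
print(g, round(abs(g) ** 2))  # norm 13
```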
Why is $\int_{\mathbb{R}^3} |p\rangle \langle p| d\lambda(p)=id$?
A projection-valued measure on $\mathbb{R}$ is an assignment $\Pi$ that maps a Borel measurable subset $U$ of $\mathbb{R}$ (e.g., an interval) to an orthogonal projection $\Pi(U)$, such that:

- $\Pi(\emptyset) = 0$ and $\Pi(\mathbb{R}) = I$;
- if $\{U_k\}$ is a countable collection of Borel measurable subsets such that $U_i \cap U_j = \emptyset$ for $i \neq j$, then $\Pi_{\cup_k U_k} = \sum_k \Pi_{U_k}$, where the sum on the right converges strongly, i.e., $\Pi_{\cup_k U_k}\xi = \sum_k \Pi_{U_k}\xi$ for all vectors $\xi$ in your Hilbert space $H$;

or equivalently, such that for any $\xi$, $\eta \in H$, $\mu_{\xi,\eta} : U \mapsto \langle \xi \lvert \Pi(U) \rvert \eta \rangle$ defines a complex measure on $\mathbb{R}$. In particular, for any unit vector (i.e., pure state) $\xi \in H$, $\mu_{\xi,\xi} : U \mapsto \langle \xi \lvert \Pi(U) \rvert \xi \rangle$ defines a probability measure on $\mathbb{R}$. Hence, for any Borel measurable function $f : \mathbb{R} \to \mathbb{C}$, one can define an unbounded operator $\int_{-\infty}^\infty f(x) d\Pi(x)$ on $H$ with dense domain $D_f := \{\xi \in H \mid \int_{-\infty}^\infty \lvert f(x) \rvert^2 d\mu_{\xi,\xi}(x) < \infty \}$ by setting $$ \forall \xi,\; \eta \in D_f, \quad \left\langle \xi \left\lvert \int_{-\infty}^\infty f(x) d\Pi(x) \right\rvert \eta \right\rangle := \int_{-\infty}^\infty f(x) d\mu_{\xi,\eta}. $$ Now, what does all this have to do with observables? Let $A$ be a quantum observable, i.e., a possibly unbounded self-adjoint operator. The spectral theorem then tells you that there exists a unique projection-valued measure $\Pi$ on $\mathbb{R}$ such that $A = \int_{-\infty}^\infty a d\Pi(a)$; for example, in the case of momentum $\hat{p} = i\tfrac{d}{dx}$, $\Pi(U) = \int_U \lvert p \rangle\langle p \rvert dp$ in physics notation, so that $\lvert p \rangle\langle p \rvert dp$ can really be interpreted as physics notation for $d\Pi(p)$ wherever it appears.
In particular, for any Borel subset $U$ of $\mathbb{R}$, $$ \Pi(U) = \int_{-\infty}^\infty \chi_U(a)d\Pi(a) = ``\int_U d\Pi(a)" $$ is the orthogonal projection onto the subspace of pure states where $A$ is observed to take its value in $U$, and if $\xi \in H$ is a unit vector, and thus a pure state, then $\langle \xi \lvert \Pi(U) \rvert \xi \rangle = \int_U d\mu_{\xi,\xi}$ is the probability that the observed value of $A$ in the pure state $\xi$ lies in $U$. Still more is true: if $a$ denotes, by abuse of notation, the classical observable corresponding to $A = \hat{a}$, then for any measurable function $f : \mathbb{R} \to \mathbb{R}$, $f(A) = \int_{-\infty}^\infty f(a) d\Pi(a)$ is the quantum observable corresponding to the classical observable $f(a)$, and its expectation value in any pure state (unit vector) $\xi \in H$ is $\langle \xi \lvert f(A) \rvert \xi \rangle = \int_{-\infty}^\infty f(a) d\mu_{\xi,\xi}$. Finally, suppose that $A$ is a self-adjoint operator with discrete spectrum $\{a_k\}$, let $H_k := \ker(A - a_k I)$ be the eigenspace corresponding to the eigenvalue $a_k$, and let $P_k$ denote the orthogonal projection onto $H_k$. Then the corresponding projection-valued measure $\Pi$ is given by $ \Pi(U) := \sum_{a_k \in U} P_k, $ where the right hand side converges strongly, i.e., $\Pi(U)\xi = \sum_{a_k \in U} P_k\xi$ for all $\xi \in H$, so that for any measurable function $f : \mathbb{R} \to \mathbb{C}$, $$ f(A) = \int_{-\infty}^\infty f(a) d\Pi(a) = \sum_k f(a_k) P_k, $$ where the sum on the right hand side, again, converges strongly on the domain of $A$; in particular, the spectral theorem reads $$ A = \int_{-\infty}^\infty a d\Pi(a) = \sum_k a_k P_k, $$ which, up to convergence issues, is precisely the spectral theorem of finite-dimensional linear algebra. From a symbolic standpoint, one can write $$ d\Pi(a) = \sum_k P_k \delta(a-a_k)da, $$ where $\delta(a-a_k)da$ is the Dirac delta supported at $a_k$.
Is it false that the complement of an open set is closed?
This community wiki solution is intended to clear the question from the unanswered queue. The complement of an open set is closed. This is one definition of being closed. Alternatively, notice that $\mathbb C$ is Hausdorff and so singletons are closed, in particular $\{0\}$ is closed. One definition of a function being continuous is that inverse images of closed sets are closed, thus $Z(f) = f^{-1}(\{0\})$ is closed.
Cantor's diagonal argument without equality
If we consider ZFC as a theory in first-order logic without equality (that is, we remove the non-logical axioms mentioning equality) then the Axiom of Extensionality, which says that two sets are equal if and only if they have the same elements, can be considered as a definition of equality. Then we can rewrite every axiom and every theorem to avoid mentioning equality by replacing every occurrence of "$x = y$" in the logical and non-logical axioms with "$\forall z\,(z \in x \iff z \in y)$" (replacing $x$, $y$, and $z$ with appropriate variable names.) The resulting theory is essentially the same as ZFC. In particular, they are not complete, and seem so far to be consistent. I don't know if this meets your criterion of not indirectly involving equality, because there may be some theorems that are hard to understand without putting them back in terms of equality (which is now a defined notion.) However this may just be because we are used to seeing them stated in this manner, and in any case the judgement seems subjective.
If $P|x|^2 = \int |X|^2dP < \infty$ then there exists $M$ so that $P |x|^2 < (\frac{M}{2})^2 P[-\frac{1}{2}M, \frac{1}{2}M]$?
Note that $$\Big(\frac{M}{2}\Big)^2P\Big(\Big[-\frac{M}{2},\frac{M}{2}\Big]\Big)\to\infty$$ as $M\to\infty$, and so for any positive real number $a$ there is some $M$ such that this expression is $>a$. Now take $a=\int x^2\;dP$.
Particular limit calculation
Note that $$\frac{\sum_{i=2}^{j}\lceil \log(i) \rceil}{\sum_{i=2}^{j-1}\lceil \log(i) \rceil}=1+\frac{\lceil \log(j) \rceil}{\sum_{i=2}^{j-1}\lceil \log(i) \rceil}, $$ so you need to show that the final term on the right goes to zero. When $2^{n-1}<j\le 2^n$, the numerator equals $n$, while at least $2^{n-2}$ terms in the denominator are $n-2$ or $n-1$. More precisely, $$\sum_{i=2}^{j-1}\lceil \log(i) \rceil \ge \sum_{i=2^{n-2}+1}^{2^{n-1}}\lceil \log(i) \rceil = \sum_{i=2^{n-2}+1}^{2^{n-1}}(n-1)=2^{n-2}(n-1),$$ so that $$\frac{\lceil \log(j) \rceil}{\sum_{i=2}^{j-1}\lceil \log(i) \rceil}\le\frac{n}{2^{n-2}(n-1)},$$ and we're done.
Evaluating the $L_2[-1, 1]$ inner product on rescaled Legendre polynomials
I finally got this to work, so I'd like anyone looking for answers here to know I made a mistake in (1); it should say: $\left(\frac{d^n}{dt^n}(t^2-1)^n\right)$ $= \sum _{k=0}^n\left(\begin{matrix}n\\k\end{matrix}\right)\left(\prod _{j=0}^{n-1}(2n-2k-j)\right)t^{n-2k}(-1)^k$ $= \sum _{k=0}^n\left(\begin{matrix}n\\k\end{matrix}\right)\left(\frac{(2n-2k)!}{(n-2k)!}\right)t^{n-2k}(-1)^k$ (but with $0$ instead of $n-2k$ whenever $n-2k < 0$), and so $\left(\frac{d^{2n}}{dt^{2n}}(t^2-1)^n\right)$ $=(2n)!$. With that in mind, the apparently best way to come up with an inner product of $1$ is to convert the integral into one in terms of $t = \cos(\theta)$, $dt = -\sin(\theta)\,d\theta$, change the bounds using $-1 = \cos(\pi)$ and $1 = \cos(0)$, and then use an integration by parts trick to get a recursive definition of the integral we actually wanted a value for; ultimately it comes out to $\int _{-1}^1 (t^2 -1)^n\,dt = (-1)^n \frac{2^{2n+1} (n!)^2}{(2n+1)!}$, which neatly turns the whole thing into $1$ as desired (after the correction to the $2n$th-derivative bit in light of the repairs to (1)).
The integral $\int\ln(x)\cos(1+(\ln(x))^2)\,dx$
First, consider the following integral: $$I(t)=\int\sin(1+(t+\ln(x))^2)\ dx$$ By letting $x=e^u$ and $\sin(\theta)=\Im(e^{i\theta})$, we get $$I(t)=\int e^u\sin(1+(t+u)^2)\ du=\Im\int e^{u+(1+(t+u)^2)i}\ du$$ This may then be solved using the error function, $$\int e^{u+(1+(t+u)^2)i}\ du=\frac{-\sqrt{\pi i}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)}2$$ It then follows that $$I'(t)=\frac d{dt}\int\sin(1+(t+\ln(x))^2)\ dx=\int2(t+\ln(x))\cos(1+(t+\ln(x))^2)\ dx\\I'(t)=\Im\frac d{dt}\frac{-\sqrt{\pi i}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)}2$$ Thus, $$\frac12I'(0)=\int\ln(x)\cos(1+(\ln(x))^2)\ dx$$ Evaluating the derivative, one gets $$\begin{align}\frac12I'(t)&=\Im\frac{-\sqrt{\pi i}}4\frac d{dt}e^{i+\frac{(2it+1)^2}{4i}}\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)\\&=\Im\frac{-\sqrt{\pi i}}4e^{i+\frac{(2it+1)^2}{4i}}\left[(2it+1)\operatorname{erf}\left(\frac{2iu+2it+1}2\sqrt i\right)+\frac{4i}{\sqrt\pi}e^{-\left(\frac{2iu+2it+1}2\sqrt i\right)^2}\right]\end{align}$$ Finally, $$\int\ln(x)\cos(1+(\ln(x))^2)\ dx=\Im\frac{-\sqrt{\pi i}}4\left[e^{i+\frac1{4i}}\left[\operatorname{erf}\left(\frac{2iu+1}2\sqrt i\right)+\frac{4i}{\sqrt\pi}e^{-\left(\frac{2iu+1}2\sqrt i\right)^2}\right]\right]$$ where $u=\ln(x)$.
Find the norm of operator $S:C[0,1]\rightarrow C[0,1]$, $(Sx)(z)=z\int_{0}^{1}x(t)dt$.
You are right. Pick $x(t) = 1$ for all $t \in [0,1]$; then $\|x\|_\infty = 1$ and $(Sx)(z) = z$, so $\|Sx\|_\infty = 1$. Hence $\|S\| \ge 1$, and combined with the obvious bound $\|S\| \le 1$ you get the complete result $\|S\| = 1$.
Multivariate Chain rule, why addition?
The problem occurs where you "nudge" the Jacobian. Your $\delta h/ \delta f$ is still a function of both $f$ and $g$ (Example: $f(x,y)=x^2y$ then $\delta f/\delta x = 2xy$ depends on $x$ and $y$). So when you nudge the Jacobian the way $\delta h/ \delta f$ changes is more complicated (in fact it involves the chain rule!). Why addition? Consider $f(x,y)$. The gradient $\nabla f = [ f_x \ \ f_y ]$ is the derivative of $f$ in the sense that it encodes the first order changes in $f$. In fact, the linearization of $f$ at $(a,b)$ is $$F(x,y) = f(a,b) + [f_x(a,b) \ \ f_y(a,b) ] \begin{bmatrix} x-a \\ y-b \end{bmatrix} = f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b)$$ What happens if we substitute in new variables like $x=g(u,v)$ and $y=h(u,v)$? Suppose that $a=g(c,d)$ and $b=h(c,d)$. Linearize $g$ and $h$ at $(c,d)$ (call these $G$ and $H$). Then we get $G(u,v) = g(c,d) + g_u(c,d)(u-c)+ g_v(c,d)(v-d)$ and $H(u,v) = h(c,d)+h_u(c,d)(u-c)+h_v(c,d)(v-d)$. Now feed the linearizations $G$ and $H$ into the linearization $F$: $F(G(u,v),H(u,v))$ $$= f(G(c,d),H(c,d)) + f_x(G(c,d),H(c,d))(G(u,v)-G(c,d)) + f_y(G(c,d),H(c,d))(H(u,v)-H(c,d))$$ $$ = f(a,b) + f_x(a,b)(g(c,d) + g_u(c,d)(u-c)+ g_v(c,d)(v-d)-g(c,d)) + f_y(a,b)(h(c,d)+h_u(c,d)(u-c)+h_v(c,d)(v-d)-h(c,d))$$ $$ = f(a,b) + (f_x(a,b)g_u(c,d)+f_y(a,b)h_u(c,d))(u-c) + (f_x(a,b)g_v(c,d)+f_y(a,b)h_v(c,d))(v-d)$$ In other words, the linearization of the composition $f(g(u,v),h(u,v))$ has partials: $f_u = f_xx_u+f_yy_u$ and $f_v = f_xx_v+f_yy_v$. The chain rule!! Really what's going on is that the derivative of a function (= Jacobian matrix) exists to help encode linearizations. Linearizing a composition of functions results in composing the linearizations of the functions being composed. Since composition of linear things is encoded by matrix multiplication, we get that the Jacobian matrix of $m \circ n$: $J_{m \circ n}$ is the product of the Jacobian matrices of $m$ and $n$: $J_m$ and $J_n$. So the chain rule says: $J_{m \circ n} = J_m J_n$.
In other words, in the linearized world, function composition is matrix multiplication. :)
Blow-up an irreducible component
$\newcommand{\Spec}{\operatorname{Spec}}$ Let $R$ be a ring, $g\in R$ and set $I=\{f\in R: fg^k=0,\ \mbox{for some } k\geq 1\}$. I claim that $\Spec(R)\leftarrow\Spec(R/I)$ is the blow-up of $\Spec(R)$ along the principal ideal $(g)$. To show this I use the universal property of blow-ups. Firstly, it is clear that $g$ becomes regular at $R/I$. Secondly, if $\Spec(R)\leftarrow X$ is any morphism such that the image of $g\in \Gamma(X,\mathcal{O}_X)$ defines an invertible ideal, then in particular $g$ is regular in $\Gamma(X,\mathcal{O}_X)$ and therefore the image of $I$ in $\Gamma(X,\mathcal{O}_X)$ must be zero. This means that $R\to \Gamma(X,\mathcal{O}_X)$ factorizes uniquely through $R\to R/I$, whence $\Spec(R)\leftarrow X$ factorizes uniquely through $\Spec(R)\leftarrow \Spec(R/I)$.
Pass to the limit under the sign of integral
$$\int_\Omega a(w_k) (\nabla u_k - \nabla u) \nabla v \text{ d}x \to 0$$ since $\nabla u_k \to \nabla u$ weakly in $L^2(\Omega)$ and $a(w_k)\nabla v \in L^2(\Omega)$. This can be seen from $$\int_\Omega |a(w_k) \nabla v|^2 \text{ d}x \leq \Lambda^2 \|\nabla v\|_{L^2}^2.$$
Find support of a wavelet
You have to study the refinement equation. Assume finite support and compare the support intervals on both sides. For D4, the refinement equation reads as $$\phi(x)=a_0\phi(2x)+a_1\phi(2x-1)+a_2\phi(2x-2)+a_3\phi(2x-3)$$ Assume that for some finite interval $\operatorname{supp} \phi\subset [a,b]$. Then interesting things in $\phi(2x-k)$ only happen when $x\in[\frac12(a+k),\frac12(b+k)]$. For equality of the enclosing intervals on both sides, $a=\frac12 a$ and $b=\frac12(b+3)$ are necessary, from which one finds $[a,b]=[0,3]$.
Closed form expression for Integral of exponential function
Hint: $~\displaystyle\int_0^\infty\exp\Big(-c~x^\lambda\Big)~dx=\frac{\bigg(\dfrac1\lambda\bigg)!}{\sqrt[\lambda]c}~,~$ see $\Gamma$ function for more information. But since your lower integration limit is $n_0$ instead of $0$, you will have to use the incomplete $\Gamma$ function. One can also write it as $~\dfrac n\lambda\cdot\text{E}\bigg(1-\dfrac1\lambda~,~c~n^\lambda\bigg),~$ see exponential integral for more information.
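To corroborate the hint numerically (writing $(1/\lambda)!$ as $\Gamma(1+1/\lambda)$), here is a crude check with made-up values $\lambda = 3$, $c = 2$ and lower limit $0$:

```python
import math

lam, c = 3.0, 2.0

# Midpoint rule for the integral from 0 to infinity; the integrand
# decays extremely fast, so truncating at x = 10 loses nothing visible.
h, total = 1e-4, 0.0
x = h / 2
while x < 10:
    total += math.exp(-c * x**lam) * h
    x += h

closed_form = math.gamma(1 + 1 / lam) / c ** (1 / lam)
print(total, closed_form)  # both ~0.709
```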
Given a function and compute the definite integral of its inverse
Hint. By Laisant's formula, if $f(a)=c$ and $f(b)=d$, then $$ \int_{c}^df^{-1}(x)dx+\int_a^bf(x)dx=bd-ac $$ Notes. When $f$ is differentiable, one can prove the result by change of variables: $$ \int_c^d f^{-1}(y)dy=\int_a^b f^{-1}(f(x))f'(x)dx=\int_a^b xf'(x)dx=xf(x)|_{a}^b-\int_a^bf(x)dx $$
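A numerical sanity check of the formula with the made-up example $f(x) = x^3$ on $[a,b] = [1,2]$, so $c = 1$, $d = 8$ and the right-hand side is $bd - ac = 15$:

```python
def midpoint(g, lo, hi, n=100_000):
    # Midpoint-rule quadrature, accurate enough for these smooth integrands.
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

lhs = midpoint(lambda y: y ** (1 / 3), 1, 8) + midpoint(lambda x: x**3, 1, 2)
print(lhs)  # ~15.0, matching bd - ac = 2*8 - 1*1
```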
Eigenvalues and eigenvectors.
$\lambda$ is an eigenvalue when $T(A)=\lambda A=A^T$. Then $T(T(A))=(A^T)^T=A=\lambda^2A$, so $\lambda^2=1$, i.e. $\lambda=\pm 1$. The eigenmatrices are those with either $A^T=A$ or $A^T=-A$, i.e. all symmetric matrices (for $\lambda=1$) and all antisymmetric matrices (for $\lambda=-1$).
If $V$ is a subspace of $\mathbb{R^n}$, what is $(V^\perp)^\perp$?
$V^{\perp}$ is the set of all vectors that are perpendicular to $V$. For example, consider $\mathbb{R}^3$. Let $V$ be the $xy$-plane. What is $V^{\perp}$? The answer is all the vectors that point from the origin to any point on the $z$-axis. Ok, so what is $\left({V^{\perp}}\right)^{\perp}$? We know that $V^{\perp}$ is the set of vectors that lie along the $z$-axis. $\left({V^{\perp}}\right)^{\perp}$ is the set of all vectors that are perpendicular to the $z$-axis. And this, of course, is the $xy$-plane. In general, $\left({V^{\perp}}\right)^{\perp} = V$.
Intersection of a curve on an exceptional divisor with the exceptional divisor
(I'm sorry, this should be a comment, but I cannot comment yet). I am not sure your claim is true. Take a quartic surface $X \subset \mathbb P^3$ with a singularity $p$ of type $A_3$ and let $W$ be a minimal resolution. $W$ is a K3 surface and the fibre over $p$ is a divisor $E$ with three rational irreducible components $C_1$, $C_2$ and $C_3$, such that $C_i \cdot C_{i+1}=1$ for $i=1,2$ and $C_i^2 = -2$. If you take $C = C_2$ then $C \cdot E = 1 -2 +1 = 0$.
Theorem 5.5 in Baby Rudin: Do we need the continuity of $f$ on the entire interval?
I think it is sufficient to just assume the continuity of $f$ in $x$ and not on the whole interval. That is because the only place he actually uses the continuity of $f$ in the proof is for showing $\lim_{~t\rightarrow x}v(s) = 0$. There are other places in the proof where the differentiability in $x$ is used, but not the continuity itself. To show that $\lim_{~t\rightarrow x}v(s) = 0$, however, doesn't require the continuity of $f$ on the whole interval: From Rudin's proof it follows that $\lim_{~s\rightarrow y}v(s) = 0$; therefore, for every given $\epsilon$, there exists $\beta$ such that: $$0<|s-y|<\beta\Rightarrow |v(s)|<\epsilon$$ Using the continuity in $x$ it follows that for every given $\beta$, there exists $\delta$ such that: $$|t-x|<\delta\Rightarrow |s-y|<\beta$$ and therefore: $$|t-x|<\delta\Rightarrow |s-y|<\beta\Rightarrow |v(s)|<\epsilon$$ and: $\lim_{~t\rightarrow x}v(s) = 0$
Can I get a polynomial out of a polynomial division?
Suppose $\frac{f}{g}$ gives you a polynomial $h$ of degree $k$, and suppose the degrees of $f$ and $g$ are both $3$. Then we have $f=gh$, and the degree of $f$ equals the degree of $g$ plus the degree of $h$. Hence the degree of $h$ must be $0$, i.e. $h$ is a constant polynomial.
Firing Solution on a Moving Target
Based on your system, you only need a quartic equation, so your unknowns $t, V_x, V_y, V_z$ have an analytic solution. For simplicity, I will continue from what you have achieved so far, namely, $$t = \tfrac{F_y - V_y - \sqrt{(V_y - F_y)^2 - 2g(P_y - \beta_y)}}{g}\tag1$$ $$\|\vec V\|^2 = \frac{(\beta_x - P_x - F_xt)^2}{t^2} + \frac{(\beta_z - P_z - F_zt)^2}{t^2} + V_y^2\tag2$$ Express $(1)$ as a quadratic in $t$, $$\Big(t-\tfrac{F_y - V_y \color{red}- \sqrt{(V_y - F_y)^2 - 2g(P_y - \beta_y)}}{g}\Big) \Big(t-\tfrac{F_y - V_y \color{red}+ \sqrt{(V_y - F_y)^2 - 2g(P_y - \beta_y)}}{g}\Big) = 0$$ and I get the simpler, $$gt^2+2(V_y-F_y)t+2(P_y-\beta_y) = 0\tag3$$ Solve for $V_y$, $$V_y = \frac{2(\beta_y-P_y+F_yt)-gt^2}{2t}$$ Substitute into $(2)$ $$\|\vec V\|^2 = \frac{(\beta_x - P_x - F_xt)^2}{t^2} + \frac{(\beta_z - P_z - F_zt)^2}{t^2} + \frac{\big(2(\beta_y-P_y+F_yt)-gt^2\big)^2}{(2t)^2}\tag4$$ and one can see $(4)$ is just a quartic in $t$ that solves your system of equations. Once you have $t$, then you can recover $V_x, V_y, V_z$. P.S.1 Since you are using a particular sign, $\color{red}-\sqrt{x}$, in your system, not all of the four roots may be valid solutions. P.S.2 For a simple way to solve quartics, see this post.
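To illustrate, here is a minimal sketch with my own made-up numbers (stationary target, so $F = 0$) that finds a root of the quartic from $(4)$ by bisection rather than by the closed-form quartic formula, then recovers $V$. Note the post's sign convention makes the vertical motion $\beta_y = P_y + V_y t + \tfrac{g}{2}t^2$.

```python
P = (0.0, 0.0, 0.0)       # shooter position
B = (100.0, 5.0, 50.0)    # target position (stationary: F = 0)
g, S = 9.8, 80.0          # gravity constant and launch speed |V|

def q(t):
    # Equation (4) multiplied through by t^2, with F = 0.
    return ((B[0] - P[0]) ** 2 + (B[2] - P[2]) ** 2
            + (2 * (B[1] - P[1]) - g * t * t) ** 2 / 4 - S * S * t * t)

# Bisection on a bracket where q changes sign (q(0.5) > 0, q(2.0) < 0 here).
lo, hi = 0.5, 2.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if q(lo) * q(mid) > 0 else (lo, mid)
t = (lo + hi) / 2

Vx = (B[0] - P[0]) / t
Vz = (B[2] - P[2]) / t
Vy = (2 * (B[1] - P[1]) - g * t * t) / (2 * t)
speed = (Vx * Vx + Vy * Vy + Vz * Vz) ** 0.5
print(t, speed)  # speed ~80, and P_y + Vy*t + g*t^2/2 recovers B_y
```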
Uniform and exponential distribution
I suppose that the exponential distribution is independent of the end of the experiment. Let $X$ be the end of the experiment and $Y$ be the time at which the device turns of. You are looking at the probability of the event $X\leq Y$, that is \begin{align*} \mathbb P[X\leq Y] &amp;= \int_2^6 \int_x^\infty f_{X,Y}(x,y) dy dx\\ &amp;= \int_2^6 \int_x^\infty \frac{1}{4}\cdot \frac{1}{4}e^{-\frac{1}{4}y} dy dx\\ &amp;= \frac{1}{4}\int_2^6 e^{-\frac{1}{4} x} dx\\ &amp;= e^{-\frac{1}{2}} - e^{-\frac{3}{2}} \end{align*}
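The computation can be corroborated by simulation (a sketch, assuming $X \sim U(2,6)$ and $Y$ exponential with mean $4$, i.e. rate $1/4$):

```python
import math
import random

random.seed(1)
N = 200_000
hits = sum(random.uniform(2, 6) <= random.expovariate(1 / 4)
           for _ in range(N))
estimate = hits / N

exact = math.exp(-0.5) - math.exp(-1.5)
print(estimate, exact)  # both ~0.38
```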
The Tangent Disc Topology
No: a metrizable space is paracompact and therefore normal. Also, the tangent disk space is separable but not second countable, which is impossible for a metrizable space.
Find points on a triangle
Express points along the sides of the triangle in barycentric coordinates (note figure) relative to the vertices $\textbf{a}, \textbf{b}, \textbf{ep}$. Barycentric coordinates are nonnegative and constrained to add to 1: $\lambda_1 + \lambda_2 + \lambda_3 =1 $, $\lambda_i \geq 0$ and the vertices are arranged as columns in a matrix $\textbf{R}=[\textbf{a}|\textbf{b}|\textbf{ep}]$. Known points on your triangle are expressed as: $\textbf{mp} = (1/3,1/3,1/3)^t \textbf{R}$ and $\textbf{cp} = (1/2,1/2,0)^t \textbf{R}$. Then for example, $\textbf{c} = (\lambda_1,0,1-\lambda_1)^t \textbf{R}$, for any choice of $\lambda_1 > 1/2$. (Note that the 2nd barycentric coordinate is 0 because $\textbf{c}$ lies on the side containing $\textbf{a},\textbf{ep}$.) By symmetry you can see how to express the other points $\textbf{d},\textbf{e},\textbf{f}$. They will "maintain an average distance between or center between cp and ep" as long as the same value of the nonzero barycentric coordinate is used throughout (there will always be 2 nonzero coordinates, and one can be expressed in terms of the other as above). Note there is no assumption that the triangle be isosceles or that it be contained in $\mathbb{R^2}$.
Questions on the Weierstrass approximation theorem
1. and 2. are well-known properties of continuous functions on compact intervals, or more generally, on compact sets. If $f\colon[a,b]\to\mathbb{R}$ is continuous, then:

1. It is bounded and attains its maximum and minimum (Weierstrass's name is associated with this result).
2. It is uniformly continuous.

As for your last question, there is no difference between $<$ or $\le$. The proof can be written with either of them.
Descartes rule of sign multivariable
The equation $$7(1+x+x^{2})(1+y+y^{2})=8x^2y^2$$ does not have any integer solutions. This is because $$\text{LHS}=7\bigg(1+\underbrace{x(x+1)}_{\text{even}}\bigg)\bigg(1+\underbrace{y(y+1)}_{\text{even}}\bigg)=\text{odd}$$ while RHS is even.
How to demonstrate the subset definition?
You're on the right track! Your definition of $S \subset T$ as $\forall x (x \in S \rightarrow x \in T) \land \exists x (x \in T \land \neg x \in S)$ should work fine. For $S = T$ you can use $\forall x (x \in S \leftrightarrow x \in T)$ And for $S \subseteq T$ you can use $\forall x (x \in S \rightarrow x \in T)$ Using these definitions, you should be able to prove $S \subseteq T \leftrightarrow (S \subset T \lor S =T)$ just fine.
Trace topology with metric topology ident
Let $\tau_A$ be the trace topology induced by $\tau_d$, let $\rho$ be the restriction of $d$ to $A\times A$, and let $\tau_\rho$ be the topology on $A$ induced by $\rho$; you want to show that $\tau_A=\tau_\rho$. Doing so is just a matter of showing that each of $\tau_A$ and $\tau_\rho$ is a subset of the other. To show that $\tau_A\subseteq\tau_\rho$, you need to start with an arbitrary $U\in\tau_A$ and show that $U\in\tau_\rho$. The natural way to try to do this is to let $x\in U$ be arbitrary and show that there is an $r>0$ such that $B_\rho(x,r)\subseteq U$, where $B_\rho(x,r)$ is the open $\rho$-ball of radius $r$ centred at $x$. What do you know about $U$? Since it’s in $\tau_A$, there is a $V\in\tau_d$ such that $U=V\cap A$. Clearly $x\in V$, so there is an $r>0$ such that $B_d(x,r)\subseteq V$. Can you finish it from there? In the other direction it suffices to show that for any $x\in A$ and $r>0$, $B_\rho(x,r)\in\tau_A$. You know that $B_d(x,r)\in\tau_d$, so $B_d(x,r)\cap A\in\tau_A$. What’s the last step that you need here?
nomenclature question: nonconvex functions with only saddle points and global minima
Since a critical point is either a local minimum, a local maximum or a saddle point, these are just the functions with no local maximum and no non-global local minima. Any differentiable function falls into this class if you restrict the domain to exclude any local maxima and non-global local minima. So I doubt that you'll get very much in the way of properties.
How to differentiate these type of problems?
The fundamental theorem of calculus states that if $f(x)=\int_a^xg(t)dt$, then $f'(x)=g(x)$. You will have to apply the chain rule as well in this case: if $F(x)=\int_a^{u(x)}g(t)dt$, then $F'(x)=g(u(x))\,u'(x)$.
Proof the linear dependence of square singular matrix
One direction: Suppose that a column is a linear combination of the other columns. WLOG, assume that this column is $C_1$. Then, you know that $C_1 = a_2C_2 + \cdots + a_nC_n$ for some suitable scalars $a_2, \ldots, a_n$. On the other hand, we know that the elementary column operation $C_1 \mapsto C_1 - a_2C_2 - \cdots - a_nC_n$ preserves the determinant. However, after this transformation, the first column is $[0\;0\;\cdots 0]^T$ and thus, the determinant is $0$. Other direction: Suppose that no column can be written as a linear combination of the other columns. This tells us that the $n$ columns are linearly independent. Thus, the $n$ columns form a basis of $F^n$. In this view, it can be seen that $A$ is a bijective linear transform from $F^n$ to $F^n$. Now, consider the inverse of this linear transform. By properties of linear transformations, this too will be linear. Consider the matrix representation $B$ of this linear transform. It follows that $AB = BA = I$ and thus, the matrix $A$ is invertible and hence, non-singular.
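The first direction can be checked concretely with a small numerical example of my own: a $3\times 3$ matrix whose first column is a combination of the other two has determinant exactly $0$.

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# First column is 2*(second column) + 3*(third column).
rows = [(2 * 1 + 3 * 4, 1, 4),
        (2 * 2 + 3 * 5, 2, 5),
        (2 * 3 + 3 * 7, 3, 7)]
print(det3(rows))  # 0
```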
Let A and B be two sets of sequences. I want to say whether or not they are orthonormal basis for $l^2$
$(-2,1,0,0,0,\cdots)$ is orthogonal to everything in $A$. So $A$ is not an orthogonal basis. The second set is an orthogonal basis; all the standard basis elements can be written in terms of these elements, and they are mutually orthogonal.
dense subsets in metric spaces
Part a) is a famous theorem, the Baire Category Theorem. b) Suppose $\mathbb R -\mathbb Q =\bigcup_{n\in \mathbb N} F_n$, where $F_n$ is closed for every $n$. Then we can write $\mathbb Q$ as $\bigcap_{n\in \mathbb N}(\mathbb R -F_n)$. Note that every $\mathbb R -F_n$ is open and contains $\mathbb Q$, therefore is dense. Let $\mathbb Q = \{q_1,q_2,\ldots\}=\bigcup_{n\in \mathbb N} \{q_n\}$; then $\emptyset=\mathbb Q- \mathbb Q = \bigcap_{n\in \mathbb N}(\mathbb R -F_n-\{q_n\})$. Note that the set on the right side of the equality is an intersection of countably many open dense sets. By a) it must be dense. Contradiction.
Proof that $\frac{a_n}{3^n}$ is a Cauchy sequence that converges
If you want to show convergence by showing that the sequence is Cauchy, then you cannot assume convergence a priori (since this is what you must show). Let $m>n$. Then $$|s_m-s_n| \leq \sum_{k=n+1}^m\frac{|a_k|}{3^k} \leq \sum_{k=n+1}^m\frac{1}{3^{k}}=\frac{3}{2}\frac{1}{3^{n+1}}\left(1-\frac{1}{3^{m-n}}\right) \to 0$$ as $m,n \to +\infty$
How many natural solutions does the equation $x^2 - c y^2 = 1$ have?
The equation $X^2 - d Y^2 = 1$, for $d$ a positive non-square integer, is called the Pell equation (though Pell actually had little to do with it); with this name at hand it will be easy to find lots of information on it. It indeed has infinitely many integral solutions. One reason this equation is relevant is that its solutions are linked to invertible elements in the ring of algebraic integers of real quadratic fields.
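A brute-force illustration (my own sketch) that solutions exist, and that infinitely many can be generated from a fundamental one via the standard composition rule $(x, y) \mapsto (x_1 x + d\,y_1 y,\ x_1 y + y_1 x)$:

```python
from math import isqrt

def fundamental_solution(d):
    # Brute-force the smallest y >= 1 with 1 + d*y^2 a perfect square.
    y = 1
    while True:
        x2 = 1 + d * y * y
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

for d in (2, 3, 5):
    x1, y1 = fundamental_solution(d)
    x, y = x1, y1
    for _ in range(3):
        # Compose with the fundamental solution to get ever-larger ones.
        x, y = x1 * x + d * y1 * y, x1 * y + y1 * x
        assert x * x - d * y * y == 1
    print(d, (x1, y1))
# 2 (3, 2)
# 3 (2, 1)
# 5 (9, 4)
```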
Associated primes of a quotient module.
Suppose, to the contrary, that $q$ is not associated to $M/xM$. We localize at $q$ to assume that $(R,q)$ is local and $\sqrt{(p,x)} = q$. Since $q$ is not associated to $M/xM$, there exists $y \in q$ such that $x,y$ is an $M$-regular sequence. Observe that $y^l$ is $M/xM$-regular for $l \ge 1$. Therefore, we may assume that $y \in (p,x)$. Write $y = w + ax$ for some $w \in p$ and $a \in R$ and $I = (x,y) = (w,x)$. We show that $w,x$ is an $M$-sequence. Since $\operatorname{depth}(I,M) = 2$, by the statement below we conclude that $H_i(w,x;M) = 0$ for $i > 0$. This implies that $w \in p$ is $M$-regular. This contradicts the assumption that $p$ is associated to $M$. Let $R$ be a Noetherian ring and $I$ an $R$-ideal. Let $x_1,\dots,x_n$ be a generating set for $I$. For a finitely generated $R$-module $M$, $$ \operatorname{depth}_I(M) = n - \sup \{ i \mid H_i( x_1,\dots,x_n ; M) \neq 0 \}. $$ Here $H_i ( x_1, \dots, x_n; M)$ denotes the $i$-th Koszul homology of $M$ with respect to $x_1,\dots,x_n$.
Help finding the Marginal PDF of Y given a Density Function of Two Variables
The support for the joint distribution is $\{(x,y): 0<x<2,\ x<y<2x\}$. That is the triangle $\triangle(0,0)(2,2)(2,4)$. This is the same region as $\{(x,y): 0<y<4,\ y/2< x< \min\{2,y\}\}$, which is the union $$\{(x,y): 0<y<2,\ y/2<x<y\}\cup\{(x,y): 2\leqslant y< 4,\ y/2<x<2\},$$ that is, $\triangle(0,0)(1,2)(2,2)\cup\triangle(1,2)(2,2)(2,4)$. Plot the points and you see how the triangle is divided into two parts at the $y=2$ horizon. Draw horizontal lines through the upper and lower triangles and see why the bounds for the integral producing the $Y$-marginal pdf are different for each part. Hence why: $$f_Y(y) =\frac{5}{32} \begin{cases}\displaystyle \int_{y/2}^y x^2(4-y)\;\mathrm d\, x &:& 0<y<2 \\[1ex]\displaystyle \int_{y/2}^2 x^2(4-y)\;\mathrm d\, x &:& 2\leqslant y< 4 \\[1ex] 0 &:& \text{otherwise}\end{cases}$$
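Carrying out the two inner integrals gives $f_Y(y)=\frac{5}{32}(4-y)\cdot\frac{7y^3}{24}$ on $(0,2)$ and $f_Y(y)=\frac{5}{32}(4-y)\cdot\frac{8-y^3/8}{3}$ on $[2,4)$. A quick numerical check (a sketch, not part of the answer) that this marginal integrates to $1$:

```python
def f_Y(y):
    # Piecewise marginal obtained by integrating out x.
    if 0 < y < 2:
        inner = 7 * y**3 / 24          # integral of x^2 from y/2 to y
    elif 2 <= y < 4:
        inner = (8 - y**3 / 8) / 3     # integral of x^2 from y/2 to 2
    else:
        return 0.0
    return 5 / 32 * (4 - y) * inner

n = 100_000
h = 4 / n
total = sum(f_Y((k + 0.5) * h) * h for k in range(n))  # midpoint rule on [0, 4]
print(total)  # ~1.0
```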
Arccotx definite integral
Hint: $\cot^{-1}(x)$ is a decreasing function on the entire domain, which implies that for $x \in (-\infty;\infty)$ your resulting value will decrease from $\pi$ to $0$. Now, remember that $\cot^{-1}(\cot(x)) = x$ shifted by the required multiple of $\pi$ to land in $(0;\pi)$. Since $0, 1, 2$ and $3$ are inside $(0;\pi)$, no shifts are needed and the result is basically $\cot^{-1}(\cot(x)) = x$ for $x \in \{0, 1, 2, 3\}$, which lets you figure out the breakpoints for the shifts and thus the points of partition of your integral. And the last thing: $\cot(x)$ is a decreasing function, so $\cot(3) < \cot(2) < \cot(1)$
How to evaluate $(-2\sqrt2)^{2/3}$?
Let $x = (-2 \sqrt 2)^{2 \over 3}$. $x^3 = (-2\sqrt{2})^2 = 8$. Let $\omega \neq 1$ be a root of $x^3 = 1$. Then, the roots of $x^3 = 8$ are $2, 2\omega, 2\omega^2$. Let's compute $\omega$. $x^3 - 1 = (x - 1)(x^2 + x + 1)$. Hence $\omega = \frac{-1 + i\sqrt{3}}{2}$ or $\frac{-1 - i\sqrt{3}}{2}$. Hence the roots of $x^3 = 8$ are $2, -1 + i\sqrt{3}, -1 - i\sqrt{3}$.
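A quick check (not from the answer) that all three values cube to $8$, and that the principal complex value of the original expression is the non-real root $-1 + i\sqrt 3$ (Python's `**` returns the principal complex power for a negative base and fractional exponent):

```python
roots = [2, -1 + 1j * 3 ** 0.5, -1 - 1j * 3 ** 0.5]
print([abs(r ** 3 - 8) for r in roots])   # all ~0
print((-2 * 2 ** 0.5) ** (2 / 3))         # principal value: ~(-1+1.732j)
```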
getting the fundamental solution of Laplace's equation from the heat kernel
I would read this paper and see the references contained there: http://link.springer.com/article/10.1023%2FA%3A1015341614741 danny
Solution verification of $\sum_{k=1}^{\infty}\frac{x^n}{n+1}$ with $x\in\mathbb{R}$
So you need to see what happens to $\lim_{n\rightarrow\infty}\dfrac{x^{n}}{n+1}$ for $x<-1$. You may argue with a subsequence: For $n=2k$, we have $\lim_{k\rightarrow\infty}\dfrac{x^{2k}}{2k+1}=\lim_{k\rightarrow\infty}\dfrac{(|x|^{2})^{k}}{2k+1}=\infty$. So the terms do not go to zero, and the series does not converge.
Absolute error formula
There's a typo. The actual length is $3.96635989732264...$
Addition and Multiplication of Field Proof
Your attempted solution apparently amounts to substituting the definition of addition and multiplication, but these are given to you so there's no need to "find them". In fact, there are lots of different ways to define addition and multiplication. Most of them would not result in a field (think of $(a,b)+(c,d) = (0,0)$). On the other hand, there might be more than one which will result in a field. So the question really depends on the definition of addition and multiplication. In order to check whether this object is a field, you have to verify all the field axioms. For example, in a field $x + y = y + x$. In your case, we need to verify that $$(x_1,x_2) + (y_1,y_2) = (y_1,y_2) + (x_1,x_2).$$ All we need to do is substitute the definition of addition: $$(x_1,x_2) + (y_1,y_2) = (x_1+y_1,x_2+y_2)$$ whereas $$(y_1,y_2) + (x_1,x_2) = (y_1+x_1,y_2+x_2),$$ and both expressions are equal by the commutativity of real addition. Some of the field axioms require you to identify a zero element and a unit element, i.e. the $0$ and $1$ of the field. What should these be? Do they satisfy all the required axioms? For example, is every non-zero element invertible?
Matrix differentiation proof of quadratic product $x^TAx$
The explanation is the following: the numbers $$ \sum_{j=1}^{n} a_{kj}x_j $$ are the entries of $Ax$, which is a column. On the other hand, $$ \frac{d\alpha}{dx}=\left(\frac{\partial\alpha}{\partial x_1},\ldots,\frac{\partial\alpha}{\partial x_n}\right) $$ is a row. That is why you need to take the transpose $(Ax)^T=x^TA^T$. Added: alternatively, note that $$ \begin{split} (x+h)^TA(x+h) &amp;=x^TAx+x^TAh+h^TAx+h^TAh\\ &amp;=x^TAx+(x^TA+x^TA^T)h+h^TAh, \end{split} $$ and so $$ \frac{d}{dh}(x+h)^TA(x+h)|_{h=0}=x^TA+x^TA^T. $$
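The identity $\frac{d}{dx}(x^TAx) = x^TA + x^TA^T$ is easy to corroborate with central finite differences (a sketch with made-up values):

```python
def quad(A, x):
    # Computes x^T A x for a list-of-lists matrix A.
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1.0, 2.0], [3.0, 4.0]]
x = [0.7, -1.3]
h = 1e-6

# Central finite differences of x^T A x versus the formula x^T A + x^T A^T.
numeric = []
for k in range(2):
    xp, xm = x[:], x[:]
    xp[k] += h
    xm[k] -= h
    numeric.append((quad(A, xp) - quad(A, xm)) / (2 * h))

analytic = [sum(x[i] * (A[i][k] + A[k][i]) for i in range(2))
            for k in range(2)]
print(numeric, analytic)  # the two rows agree closely
```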
Combinatorics with at least
The intuition behind adding the counts of different outcomes is the following: if an experiment can be conducted in one way or, alternatively, in another way, then it can be conducted in a total of $1+1=2$ ways. Exactly the same logic is used here.
Regarding the derivative of an implicit line
I think you should use the chain rule, as the function $\sigma$ is just a composition $f\circ h$, where $h$ is the function from the real line to $\mathbb{R}^2$ sending $x$ to $(x,x^2+1)$.
How many significant figures in 0.0
Although there are explicit rules for counting significant figures, they are really a rough idea for the accuracy of a number. You have shown that the mass is less than $0.000\ 5$ gram, but it could be $0.000\ 000\ 005$ gram for all you know. In the sense that one significant figure indicates that you know the value to about $\pm 10\%$ (it ranges from $5\%$ if the figure is $9$ to $50\%$ if the figure is $1$), it makes sense to say you have no significant figures at all because you have no bound on the fractional error that might be made.
geometric series with probability
We look at the three-person game, since it is more interesting. We also assume that the deck is shuffled between draws, in order to simulate tossing a fair $k$-sided die. You will find it easy to adapt the idea to the simpler $2$-person game, and if you wish, to a $d$-person game.

Let our players be A, B, and C. We suppose A tosses first, then (if necessary) B, then, if necessary, C, then, if necessary, A, and so on. To make the notation simpler, let $p=\frac{1}{k}$.

Player A can win on the first draw, the fourth, the seventh, the tenth, and so on. The probability she wins on the first draw is $p$. In order for A to win on the fourth draw, A, B, and C must all fail to win on their first draws, and then A must win. The probability of this is $(1-p)^3p$. In order to win on the seventh draw, A, B, C must all fail twice, and then A must win. This has probability $(1-p)^6p$. Similarly, the probability A achieves her win on the tenth draw is $(1-p)^9p$, the probability she achieves her win on the thirteenth draw is $(1-p)^{12}p$, and so on. So the probability that A wins is $$p+(1-p)^3p+(1-p)^6p +(1-p)^9 p+ (1-p)^{12}p+\cdots.$$

The above is an infinite geometric series. Recall that the infinite geometric series $a+ar+ar^2+ar^3+\cdots$ has sum $\frac{a}{1-r}$ (if $|r|\lt 1$). In our case, $a=p$ and $r=(1-p)^3$, so the probability A wins is $$\frac{p}{1-(1-p)^3}.$$ This can be "simplified" to the less attractive expression $\dfrac{1}{3-3p+p^2}$.

We could go through a very similar calculation for the probability that B wins. However, there is a shortcut. Suppose that A fails to win on her first throw (probability $1-p$). Then effectively B is now first, so has probability of winning $\dfrac{p}{1-(1-p)^3}$. So the probability B wins is $$\frac{(1-p)p}{1-(1-p)^3}.$$ A similar argument shows that the probability C wins is $$\frac{(1-p)^2 p}{1-(1-p)^3}.$$

Another way: We can avoid summing an infinite series. Let $a$ be the probability that A ultimately wins.
As discussed earlier, the way for B to win is for A to fail on her first draw; then effectively B is the first player. So the probability B is the ultimate winner is $(1-p)a$. Similarly, the probability C is the ultimate winner is $(1-p)^2a$. But it is (almost) clear that someone must ultimately win: the probability that the game goes on forever is $0$. It follows that $$a+(1-p)a+(1-p)^2 a=1,$$ and therefore $$a=\frac{1}{1+(1-p)+(1-p)^2}.$$ A little calculation shows that this is the same answer as the one obtained earlier.
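The closed forms can be checked by simulation. The sketch below (with an arbitrary choice $k=6$, so $p=\frac16$) plays the game repeatedly and compares A's empirical win frequency with $\frac{p}{1-(1-p)^3}=\frac{1}{3-3p+p^2}$:

```python
import random

def winner(p, rng):
    """Simulate one game: players 0 (A), 1 (B), 2 (C) draw in turn until someone succeeds."""
    turn = 0
    while True:
        if rng.random() < p:
            return turn % 3
        turn += 1

k = 6                        # e.g. a fair 6-sided die; p = 1/k
p = 1 / k
rng = random.Random(42)
trials = 200_000
wins = [0, 0, 0]
for _ in range(trials):
    wins[winner(p, rng)] += 1

a = 1 / (3 - 3 * p + p * p)  # closed form for P(A wins)
print(wins[0] / trials, a)   # the two should agree to a couple of decimals
print(wins[1] / trials, (1 - p) * a)
```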
A positive, decreasing function $f$ such that $\lim_{n \rightarrow \infty} \ln f(n) / \ln n$ neither diverges nor converges to $-\infty$.
You can let $\frac{\ln f(n)}{\ln n}$ oscillate: If $f(n)=\frac1n$, then $\frac{\ln f(n)}{\ln n}=-1$, and if $f(n)=\frac1{n^2}$, then $\frac{\ln f(n)}{\ln n}=-2$. So have a look at a recursion such as $$ f(n)=\begin{cases}1&\text{if } n=1\\\tfrac1{n^2}&\text{if }f(n-1)>\tfrac1n\\ f(n-1)&\text{if } f(n-1)\le \tfrac1n\end{cases}$$ (Exercise: Write down an explicit formula for $f$ using logs, exponentials and the floor function)
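The recursion is easy to run. The sketch below (an illustrative implementation, not part of the original answer) computes $f$ up to $n=10000$ and watches $\frac{\ln f(n)}{\ln n}$ swing between $-2$ (just after each jump) and $-1$ (just before the next one):

```python
import math

def f_values(N):
    """Compute f(1..N) via the recursion in the answer (1-indexed list)."""
    vals = [None, 1.0]                    # f(1) = 1
    for n in range(2, N + 1):
        if vals[n - 1] > 1 / n:
            vals.append(1 / n**2)
        else:
            vals.append(vals[n - 1])
    return vals

N = 10_000
vals = f_values(N)
ratios = [math.log(vals[n]) / math.log(n) for n in range(2, N + 1)]
print(min(ratios), max(ratios))   # pinned between -2 and -1, never settling
```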
Proving that $8^n-2^n$ is a multiple of $6$ for all $n\geq 0$ by induction
For $n\geq 0$, let $S(n)$ denote the statement $$ S(n) : 6 \mid (8^n-2^n)\Longleftrightarrow 8^n-2^n=6m, m\in\mathbb{Z}. $$ Base case ($n=0$): $S(0)$ says that $6\mid (8^0-2^0)$, and this is true. Inductive step: Fix some $k\geq 0$ and assume that $S(k)$ is true where $$ S(k) : 6\mid (8^k-2^k)\Longleftrightarrow 8^k-2^k=6\ell, \ell\in\mathbb{Z}. $$ To be shown is that $S(k+1)$ follows where $$ S(k+1) : 6\mid (8^{k+1}-2^{k+1})\Longleftrightarrow 8^{k+1}-2^{k+1}=6\eta, \eta\in\mathbb{Z}. $$ Beginning with the left-hand side of $S(k+1)$, \begin{align} 8^{k+1}-2^{k+1} &= 8\cdot 8^k-2\cdot 2^k\tag{by definition}\\[0.5em] &= 8(8^k-2^k)+6\cdot 2^k\tag{rearrange}\\[0.5em] &= 8(6\ell)+6\cdot 2^k\tag{by $S(k)$, the ind. hyp.}\\[0.5em] &= 6(8\ell+2^k)\tag{factor out $6$}\\[0.5em] &= 6\eta,\tag{$\eta=8\ell+2^k; \eta\in\mathbb{Z}$} \end{align} we end up at the right-hand side of $S(k+1)$, completing the inductive step. Thus, by mathematical induction, the statement $S(n)$ is true for all $n\geq 0$. $\blacksquare$
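Both the divisibility claim and the key rearrangement in the inductive step are easy to check numerically; a minimal sketch (not part of the proof, of course):

```python
# 6 | (8^n - 2^n) for small n
for n in range(50):
    assert (8**n - 2**n) % 6 == 0

# The key algebraic step of the induction:
# 8^{k+1} - 2^{k+1} = 8(8^k - 2^k) + 6 * 2^k
for k in range(30):
    assert 8**(k + 1) - 2**(k + 1) == 8 * (8**k - 2**k) + 6 * 2**k

print("checked n = 0..49")
```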
$A$ is skew hermitian, prove $e^A$ is unitary
Look at this: matrix exponential. $$(e^A)^He^A=e^{A^H}e^A=e^{-A}e^A=e^0=I$$ A longer route is to expand the matrix exponential directly. For a real skew-symmetric matrix $A$, $e^A$ is not only an orthogonal matrix but also a rotation matrix, because $\det e^A=1$. In addition, every vector $a$ has an associated skew-symmetric matrix; denote it $A=[a]_{\times}$. Then $e^A$ can be expressed as $$e^A=I+\frac{[a]_{\times}}{\|a\|}\sin \|a\|+\left(\frac{[a]_{\times}}{\|a\|}\right)^2(1-\cos\|a\|)$$ which is in fact Rodrigues' rotation formula.
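As a quick numerical sanity check (a NumPy/SciPy sketch, not from the original answer): build a skew-Hermitian matrix from an arbitrary complex matrix via $A = B - B^H$ and verify that $e^A$ is unitary; similarly, a real skew-symmetric $S = C - C^T$ gives $\det e^S = 1$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Skew-Hermitian A: A^H = -A by construction
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B - B.conj().T

U = expm(A)
# (e^A)^H e^A should be the identity, up to rounding
print(np.max(np.abs(U.conj().T @ U - np.eye(4))))

# Real skew-symmetric case: e^S is a rotation, det = 1
C = rng.standard_normal((3, 3))
S = C - C.T
R = expm(S)
print(np.linalg.det(R))   # close to 1
```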
Projective Spaces which are not Vector Spaces
Maybe this answer isn't completely satisfactory, but we can define a projective space in terms of modules over a division ring. Let $A$ be a division ring and let $M$ be an $A$-module. The projective space over $M$ is $$ \Bbb P(M)=M\setminus\{0\}/\sim $$ where $x\sim y$ if and only if there exists a $\lambda\in A^*$ such that $x=\lambda\cdot y$.
intersection multiplicity at non-zero point
You can apply the automorphism $(x,y)\mapsto (x+1,y+1)$ and the intersection is the same as the intersection of $f(x+1,y+1)=x+y$ with $g(x+1,y+1)=x^2+2x+y^2+2y$ at $(0,0)$, which is $2$ as you said: replacing the parametrisation $(t,-t)$ of the first in the second you get $2t^2$ so the intersection number is $2$.
What is the conditional expectation $E(X^2\mid X+Y=1) $ if $X$ and $Y$ are i.i.d standard normal?
I'm afraid you'll find this "hacky" but... Consider the transformed variables $S=X+Y$, $R=X-Y$. It's easy to see that these variables (which correspond to a scaled $45^\circ$ rotation) are iid $N(0,2)$. Now $X=(S+R)/2$. Then we want $$\begin{align} E[X^2 \mid S] &= E\left[\left(\frac{S+R}{2}\right)^2 \mid S\right]\\ &=\frac{1}{4}\left(E[S^2\mid S] + 2 E[S R \mid S] + E[R^2\mid S] \right)\\ &=\frac{1}{4}\left(S^2 + 2 S E[R ] + E[R^2] \right)\\ &=\frac{1}{4}(S^2+2) \end{align} $$ (the third line uses that $R$ is independent of $S$, with $E[R]=0$ and $E[R^2]=2$). Or $$ E[X^2 \mid X+Y=1] = \frac{1}{4}(1^2+2)=\frac{3}{4}$$ Quick sanity check: recall that we must have $E[E[X^2 \mid S]]= E[X^2]=1$. And, indeed, $E[\frac{1}{4}(S^2+2)]=\frac{1}{4}(2+2)=1$.
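Another sanity check is Monte Carlo: the sketch below (conditioning on a thin window around $X+Y=1$ rather than on the exact measure-zero event) should land near $\frac34$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Keep only samples where X + Y falls in a thin window around 1
mask = np.abs(x + y - 1) < 0.01
est = np.mean(x[mask] ** 2)
print(est)   # close to 3/4
```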
Partial Orders: "Minimum property" iff "Maximum property"?
In the linearly ordered set $X = \{-\frac{1}{n}: n=1,2,\ldots\} \cup \{0\}$ (with the order inherited from the reals), $X$ satisfies the upper condition (check this) but not the lower, because the subset $\{-\frac{1}{n}: n=1,2,\ldots\}$ has an upper bound $0$ but no maximum. So the properties are not equivalent. Use $X = \{0\} \cup \{\frac{1}{n}: n=1,2,\ldots\}$ for the other implication, of course.
How to find the slope of a line when you only have a point and an angle?
Sketch this line out. Draw an xy-plane and draw a line through $(4, 7)$ that looks like it makes a $45^\circ$ angle with the y-axis. Now, call the y-intercept $(0, b)$. Next, make a right triangle with vertical and horizontal legs, using $(0, b)$ and $(4, 7)$ as the endpoints of the hypotenuse; $(0, 7)$ should be the vertex at the right angle. Since the line makes a $45^\circ$ angle with the y-axis, this triangle is a $45^\circ$-$45^\circ$-$90^\circ$ triangle, which means the legs are equal. One leg goes from $(0, b)$ to $(0, 7)$ and thus has length $7-b$. The other goes from $(0, 7)$ to $(4, 7)$, so it has length $4$. Solving the equation $7-b=4$, we get $b=3$. Now the slope can be found from $(0, 3)$ and $(4, 7)$, giving us a slope of $m=1$. Using the form $y=mx+b$, we get the equation $y=x+3$.
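The construction generalizes (a sketch, not part of the original answer): a line making angle $\theta$ with the y-axis makes angle $90^\circ-\theta$ with the x-axis, so its slope is $\tan(90^\circ-\theta)=\cot\theta$, and the intercept follows from $b = y_0 - m x_0$:

```python
import math

theta = math.radians(45)        # angle with the y-axis (assumes 0 < theta < 90 deg)
m = 1 / math.tan(theta)         # slope = cot(theta) = tan(90 deg - theta)

x0, y0 = 4, 7                   # the given point
b = y0 - m * x0                 # y-intercept from y = m x + b

print(m, b)                     # m close to 1, b close to 3: the line y = x + 3
```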