Find the general solution of a linear system of equations (homework assignment)
$$\lambda x_1+x_2+x_3=1$$ $$x_1+\lambda x_2+x_3=\lambda$$ $$x_1+x_2+\lambda x_3=\lambda^2$$ Adding all three equations together gives $$(\lambda+2)(x_1+x_2+x_3)=1+\lambda+\lambda^2$$ If $\lambda=-2$ we have $0=1+(-2)+(-2)^2=3$, which is absurd. Thus we can divide by $\lambda+2$: $$x_1+x_2+x_3=\frac{1+\lambda+\lambda^2}{\lambda+2}$$ Subtracting this from all three original equations gives $$(\lambda-1)x_1=1-\frac{1+\lambda+\lambda^2}{\lambda+2}$$ $$(\lambda-1)x_2=\lambda-\frac{1+\lambda+\lambda^2}{\lambda+2}$$ $$(\lambda-1)x_3=\lambda^2-\frac{1+\lambda+\lambda^2}{\lambda+2}$$ If $\lambda=1$, all three of the original equations reduce to $x_1+x_2+x_3=1$, whose solution is simply $$x_1=p,x_2=q,x_3=1-p-q\qquad p,q\in\mathbb R\tag1$$ Otherwise, dividing by $\lambda-1$ gives the unique solution of $$x_1=\frac{1-\frac{1+\lambda+\lambda^2}{\lambda+2}}{\lambda-1}=\frac1{\lambda+2}-1\\ x_2=\frac{\lambda-\frac{1+\lambda+\lambda^2}{\lambda+2}}{\lambda-1}=\frac1{\lambda+2}\\ x_3=\frac{\lambda^2-\frac{1+\lambda+\lambda^2}{\lambda+2}}{\lambda-1}=\frac{(\lambda+1)^2}{\lambda+2}\tag2$$ In conclusion, the solutions of the linear system are: as in $(1)$ if $\lambda=1$; none if $\lambda=-2$; as in $(2)$ otherwise.
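As a quick sanity check, here is a small SymPy sketch of the three cases (the symbol names are just illustrative):

```python
import sympy as sp

lam, x1, x2, x3 = sp.symbols('lam x1 x2 x3')
eqs = [sp.Eq(lam*x1 + x2 + x3, 1),
       sp.Eq(x1 + lam*x2 + x3, lam),
       sp.Eq(x1 + x2 + lam*x3, lam**2)]

# Generic case: a unique solution matching (2).
sol = sp.solve(eqs, [x1, x2, x3], dict=True)[0]
print(sp.simplify(sol[x1] - (1/(lam + 2) - 1)))       # 0
print(sp.simplify(sol[x2] - 1/(lam + 2)))             # 0
print(sp.simplify(sol[x3] - (lam + 1)**2/(lam + 2)))  # 0

# Degenerate cases.
print(sp.linsolve([e.subs(lam, -2) for e in eqs], x1, x2, x3))  # EmptySet: no solution
print(sp.linsolve([e.subs(lam, 1) for e in eqs], x1, x2, x3))   # a two-parameter family, as in (1)
```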
Domain of derivative on open interval is open
First question: no. For instance, let $$f(x)=\begin{cases}x^2&\text{if }x\in\mathbb{Q}\\0&\text{if }x\notin\mathbb{Q}\end{cases}$$ $f'(0)$ exists, but $f$ is continuous only at $0$. The same trick works if you take a continuous function $g(x)$ that is nowhere differentiable and multiply it by $x^2$: its derivative will exist only at $0$. Second question: I think what you want is something like this: take a continuous function $g:\mathbb R\to \mathbb R$ such that it is nowhere differentiable. Let $$f(x)=\begin{cases} x^2&\text{if } x>0\\x^2g(x)&\text{if }x\le 0\end{cases}$$ In this case, $E=[0,+\infty)$ and $x=0$ fails your test.
Direct inequality proof of $f(x) = \frac{x}{1+x}$
It is pretty straightforward: $f(x+y)=\frac{x+y}{1+x+y}=\frac{x}{1+x+y}+\frac{y}{1+x+y}\leq\frac{x}{1+x}+\frac{y}{1+y}=f(x)+f(y)$. Notice that the inequality is valid only because $x,y>0$: then $1+x+y\geq1+x$ and $1+x+y\geq1+y$, so dropping $y$ (resp. $x$) from each denominator can only increase the fraction.
Is there a more general concept than position and space?
It's possible to encompass your notion, at least in the sense I read it, in that of a generalized quasimetric space. This kind of space has a weak distance function $d$ which takes pairs of points to nonnegative real numbers or $\infty$. We don't require $d(x,y)=d(y,x)$, but we do require that if $x\neq y$ then $d(x,y)$ and $d(y,x)$ are greater than $0$ and that $d(x,y)+d(y,z)\geq d(x,z)$. So your "line" could be described by a quasimetric with $d(x,y)=5,d(x,z)=10,d(y,x)=\infty,d(y,z)=10,d(z,x)=5,d(z,y)=\infty$. I do think this will satisfy the triangle inequality just stated-for a space that doesn't, you could just ignore the triangle inequality, so that you're using a premetric, which Wikipedia says is a somewhat nonstandard term. I will point out that this model of your problem loses the 1-dimensionality that you wanted. One way to get it back would be simply to put an ordering on the underlying set, which would be completely independent of the quasimetric. And it is indeed possible to generalize even further. One well-known generalization is the topological space, which can be described as a set $X$ together with a collection of symmetric relations $r,s,t,...$ where $r(x,y)$ can be interpreted as "$x$ is $r$-close to $y$". These relations satisfy some axioms to make them reasonably space-like, but can encompass much stranger phenomena even than the one you describe. Topological spaces aren't the only such possible generalization-it's a question of finding a balance between great generality and retaining something geometric. Others that come to mind mostly require notions from category theory, which might not fit your needs well. Anyway, as you already admit, we have to add more structure back in to get much of anything interesting-the diversity of topological spaces is far beyond comprehensibility. But you might want to pick up an introduction to topology, if this intrigues you.
Proof integral converging : $\int \sin ( \sin (x) )dx$
Let $F(t)=\int_0^t\sin(\sin(x))\,dx$. You have $\sin(\sin(x))\geq 0$ for all $0\leq x\leq\pi$ hence $a:=F(\pi)>0$. Since $x\mapsto\sin(\sin(x))$ is odd and $2\pi$-periodic we have $$\int_0^{2\pi}\sin(\sin(x))dx=\int_{-\pi}^\pi\sin(\sin(x))dx=0$$ and $$F(n\pi)=\int_0^{n\pi}\sin(\sin(x))dx= \begin{cases} a&2\nmid n\\ 0&2\mid n \end{cases}$$ hence $F(n\pi)$ does not converge for $n\to\infty$. Since $F(2n\pi)=0$ for $n\in\Bbb N$ and since $F$ is bounded, integration by parts yields for $n>0$ $$\left|\int_{2n\pi}^{2(n+1)\pi}\frac{\sin(\sin(x))}xdx\right| =\left|\int_{2n\pi}^{2(n+1)\pi}\frac{F(x)}{x^2}dx\right| \leq\frac{\sup|F|}{2\pi n^2}$$ Consequently, $$\int_0^\infty\frac{\sin(\sin(x))}xdx=\int_0^{2\pi}\frac{\sin(\sin(x))}xdx+\sum_{n=1}^\infty\int_{2n\pi}^{2(n+1)\pi}\frac{\sin(\sin(x))}xdx$$ converges: the integral over $[0,2\pi]$ is finite because $\sin (\sin (x))\sim x $ for $x\to 0$, and the series converges by the bound above.
What does it mean when all the values of a row in a matrix are 0?
Geometrically, if you have an all zero vector in a matrix (row or column; doesn't matter), it means that it represents a dimension-crushing transformation. For instance, a three-dimensional matrix with an all zero row will take three-dimensional vectors and project them into some two-dimensional plane in three-dimensional space. The converse isn't true: not all dimension-crushing matrices announce themselves by exhibiting such obvious zeros. Such transformations are trap door functions: they have no inverse. Therefore, the corresponding matrices are not invertible. Matrices which have no inverse have a determinant of zero (and it's obvious from the way the determinant is calculated that this is so for matrices that are cross-cut by an all-zero row or column). Non-invertible matrices are called singular, or degenerate. So, if you understand this dimension-crushing aspect, it's easy to see that $Ax = b$ might have no solutions, or many solutions, if $A$ is degenerate. Suppose that $A$ is a 3D matrix that crushes vectors into a 2D plane. Now suppose that vector $b$ lies outside of the plane. Well, then there are no solutions! There is no vector $x$ such that the matrix $A$ will project it onto $b$, because $A$ takes everything into some particular plane, and $b$ is not in that plane. Now suppose that $b$ is inside that plane. Then there are many solutions, because many vectors from 3D space project to any given point on that plane, including $b$. So all of this gives you a whole new geometric view on systems of linear equations.
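To make the picture concrete, here is a tiny NumPy sketch (with made-up entries): a $3\times3$ matrix with an all-zero row flattens every vector into the plane $z=0$ and has determinant $0$.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [0., 0., 0.]])   # all-zero third row

v = np.array([1., 1., 1.])
print(A @ v)              # [ 6. 15.  0.] -- third coordinate is always 0
print(np.linalg.det(A))   # 0.0 -- the matrix is singular
```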
Sufficient condition for the block matrix $\big(\begin{smallmatrix} B & A^T \\ A & 0 \end{smallmatrix} \big)$ to be invertible
Note that $\ker A = {\cal R}Z$. Suppose $Bu + A^T v =0, Au = 0$. Then $u \in \ker A= {\cal R} Z$, hence $u = Zw$ for some $w$. Then $Z^T B Zw + Z^T A^T v = Z^T B Zw + (AZ)^T v = Z^T B Zw = 0$. Hence $w=0$ since $Z^T BZ>0$. Hence $u=0$ and since $A^T v = 0$ and $A$ has full row rank, we have $v=0$.
How to integrate $\int_0^\pi\frac{dx}{\sqrt{3-\cos(x)}}$?
The integral can be written as an elliptic integral of the first kind with complex elliptic modulus: $$I=\int_{0}^{\pi}\frac{d\phi}{\sqrt{3-\cos{\phi}}}\\ =\int_{0}^{\pi}\frac{d\phi}{\sqrt{2+2\sin^2{\left(\frac{\phi}{2}\right)}}}\\ =\frac{1}{\sqrt2}\int_{0}^{\pi}\frac{d\phi}{\sqrt{1+\sin^2{\left(\frac{\phi}{2}\right)}}}\\ =\sqrt2\int_{0}^{\frac{\pi}{2}}\frac{d\theta}{\sqrt{1+\sin^2{\theta}}}\\ =\sqrt2\,K(-1)$$ The integral could also be written as a beta function by first substituting $x=\sin\theta$, $\theta=\arcsin{x}$, $d\theta=\frac{dx}{\sqrt{1-x^2}}$, and then substituting $t=x^4$, $x=t^{\frac14}$, $dx=\frac14t^{-\frac34}dt$: $$I=\sqrt2\int_0^1\frac{1}{\sqrt{1+x^2}}\frac{dx}{\sqrt{1-x^2}}\\ =\sqrt2\int_0^1\frac{dx}{\sqrt{1-x^4}}\\ =\sqrt2\int_0^1\frac{dt}{4t^{\frac34}\sqrt{1-t}}\\ =\frac{\sqrt2}{4}\int_{0}^{1}t^{-\frac34}(1-t)^{-\frac12}dt\\ =\frac{\sqrt2}{4}\text{B}(\frac14,\frac12).$$ Using the identity $\text{B}(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ and Euler's reflection formula $\Gamma(1-z)\Gamma(z)=\frac{\pi}{\sin{(\pi z)}}$ (which gives $\Gamma(\frac34)=\frac{\pi\sqrt2}{\Gamma(\frac14)}$), this becomes $$I=\frac{\sqrt2}{4}\text{B}(\frac14,\frac12)=\frac{\sqrt2}{4}\frac{\Gamma(\frac14)\Gamma(\frac12)}{\Gamma(\frac34)}=\frac{\Gamma(\frac14)^2}{4\sqrt{\pi}}$$
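A quick numerical cross-check of the closed form with SciPy (an illustrative sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

val, _ = quad(lambda x: 1/np.sqrt(3 - np.cos(x)), 0, np.pi)
print(val)                                  # ~1.854075
print(gamma(0.25)**2 / (4*np.sqrt(np.pi)))  # ~1.854075, the closed form
```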
Intervals of inverse of CDF
The "inverse" of the CDF is called the quantile function. It is really a generalized inverse because it exists even when $F$ is not a bijection from $[-\infty,\infty]$ to $[0,1]$. In this situation it is not an inverse because your CDF is not one-to-one. That said, you have $F_X^{-1}(y)$ correct for $0<y<1/2$ and $1/2<y<1$. For $y=0$ and $y=1$ really the only sensible way to define $F_X^{-1}(y)$ is as $-\infty$ and $+\infty$ respectively, so that monotonicity is retained. For $y=1/2$ the definition of $F_X^{-1}(y)$ becomes sensitive to how exactly you choose to define the quantile function. The most common definition is $F_X^{-1}(y)=\inf \{ x : y \leq F_X(x) \}$, so that here $F_X^{-1}(1/2)=1$.
Inverse integral transform of $\cos(t-u)$
$$ f(u) = \int_0^{2\pi} g(t)\cos(u-t)\,dt \tag 1$$ $$ f(u) = \cos(u)\int_0^{2\pi} g(t)\cos(t)\,dt +\sin(u)\int_0^{2\pi} g(t)\sin(t)\,dt $$ $\int_0^{2\pi} g(t)\cos(t)\,dt=c_1 \tag 2$ $\int_0^{2\pi} g(t)\sin(t)\,dt=c_2 \tag 3$ $$f(u)=c_1\cos(u)+c_2\sin(u) \tag 4$$ If $f(u)$ is not sinusoidal the equation $(1)$ is not valid. The problem has no solution. If $f(u)$ is sinusoidal the equation $(1)$ is valid. You can fit the data with the sinusoidal function $(4)$ which gives the values of $c_1$ and $c_2$ on the form of numerical approximates (Regression method). Then one can find an infinity of functions $g(t)$ which satisfies Eqs.$(2)$ and $(3)$. The solution is not unique.
Prove that all hyperbolic straight lines are congruent to $x$-axis
$z_o^*$ is the inverse of $z_o$. The inverse of $C$ is $C$ itself, since circle inversion is conformal and knowing the two points of intersection with $S^1$ already uniquely defines the single orthogonal circle through these two points. Inversion preserves incidence, so if $z_o$ lies on $C$, then $z_o^*$ lies on the image of $C$, i.e. on $C$ itself as well.
Uniqueness of the Adjoint operator
Fix $y \in Y$. Then $\langle T(-),y \rangle$ is a functional, for which there has to exist a unique vector $T^*(y)$ which verifies $$ \langle T(x),y \rangle = \langle x, T^*(y) \rangle $$ for all $x \in V$. Now, we have to see that $T^*$ is linear. By construction, $$ \begin{align} \langle T(x),\alpha z+y \rangle &= \overline{\alpha} \langle T(x),z \rangle + \langle T(x),y \rangle = \overline{\alpha} \langle x,T^*(z) \rangle + \langle x,T^*(y) \rangle \\ & = \langle x, \alpha T^*(z) + T^*(y)\rangle. \end{align} $$ and therefore since $\alpha T^*(z) + T^*(y)$ represents $\langle T(-),\alpha z+y \rangle$, by the uniqueness of such a vector we have that $$ \alpha T^*(z) + T^*(y) = T^*(\alpha z+y). $$ Plugging $\alpha = 1$ or $y = 0$ proves each condition of linearity. Now, uniqueness: suppose that you have another transformation $S$ that verifies $\langle T(x) , y \rangle = \langle x, S(y) \rangle$ for all $x,y$ in $V$. It suffices to see that $T^*(y) = S(y)$ for each $y \in V$, so let's fix $y \in V$. For any $x \in V$, $$ \langle x, T^*(y) -S(y) \rangle = \langle x, T^*(y) \rangle - \langle x,S(y) \rangle = \langle T(x),y \rangle - \langle T(x),y \rangle = 0 $$ and so $T^*(y) - S(y) = 0$, which concludes the proof: recall that a vector $v$ is zero if and only if $\langle v, z \rangle = 0$ for all $z \in V$.
Square root of x : $\sqrt{x}$ (Numerical Method)
Your problem asks for $p(x) = \frac{4-x}{4} + \frac{x}{2}$, for which $p(0) = 1 = f(1)$ and $p(4) = 2 = f(4)$. Then $p(2) = \frac{3}{2}$.
Can this be considered a "rigorous" definition of a limit?
No, this is not equivalent to the usual definition of a limit. Consider for example $$f(x) = \begin{cases} x \sin(1/x) & x \neq 0 \\ 0 & x = 0 \end{cases}$$ Then $\lim_{x \to 0} f(x) = 0$ as can be easily checked (note that $|f(x)| \le |x|$ for all $x \in \mathbb{R}$). However with your definition then $f$ wouldn't have this limit. Indeed, applied to $a = 0$ and $l = 0$, this would mean that we would have $$|x| < |y| \iff |x \sin(1/x)| < |y \sin(1/y)|.$$ This is false. Let $x = \frac{2}{3\pi}$ and $y = \frac{1}{\pi}$. Then $|x| < |y|$, however $|f(x)| = \frac{2}{3\pi} > |f(y)| = 0$. I guess what you were going for was "the closer $x$ is to $a$, then the closer $f(x)$ is to $l$"? The problem with that, as you can see, is that $f(x)$ can oscillate a lot around $l$ even though it gets closer and closer to it.
Proof of $n-$dimensional Brownian motion identities for components of $B_t$
"All I have to go on is that $B_{t,i}$ and $B_{t,j}$ are independent variables and similarly $B_{s,i}$ and $B_{s,j}$ are as well." That is not all you have to go on! The definition of $n$ dimensional Brownian motion includes the property that the processes $(B_{t,i} : t \ge 0)$ and $(B_{t,j} : t \ge 0)$ are independent (indeed, mutually independent as $i$ varies). This means that the $\sigma$-fields $\sigma(B_{t,i} : t \ge 0)$ and $\sigma(B_{t,j} : t \ge 0)$ are independent; in particular, given any multivariate Borel functions $f,g$ and any finite number of indices $t_1, \dots, t_n$, $s_1, \dots, s_m$, the random variables $$f(B_{t_1, i}, \dots, B_{t_n, i}), \quad g(B_{s_1,j}, \dots, B_{s_m,j})$$ are independent. (It also holds for countably many indices once you define what that means.) Observe that this is much stronger than merely saying that $B_{t,i}, B_{t,j}$ are independent for each $t$. So in fact, $B_{t,i}-B_{s,i}$ is independent of $B_{t,j}-B_{s,j}$, and so your proposed argument is perfectly well justified. The first problem really has very little to do with Brownian motion. You know that $B_{t,i}-B_{s,i}$ has a normal distribution with mean 0 and variance $t-s$, so this is really just asking you to compute the fourth moment of a normal random variable, i.e. $E[(\sqrt{t-s} Z)^4] = (t-s)^2 E[Z^4]$ where $Z \sim N(0,1)$. There are several ways to verify that $E[Z^4]=3$: write down an integral involving the Gaussian density and integrate by parts four times; use the moment generating function or characteristic function; or look it up on Wikipedia.
chromatic polynomial for helm graph
My guess is that the OP (incorrectly) used $n=7$ or $n=9$ in the calculation. In fact the correct value is $n=4$: $$z\left((1-z)^4(z-2)+(z-2)^4(z-1)^4\right)=\\=z^9-12 z^8+62 z^7-179 z^6+315 z^5-346 z^4+232 z^3-87 z^2+14 z$$ Note the result is monic of degree $9$, corresponding to $9$ vertices (or note that $n=4$ corresponds to $H_4$). Substituting $z=3$ we get $$3\left((1-3)^4(3-2)+(3-2)^4(3-1)^4\right)=3((-2)^4+2^4)=96$$
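A quick SymPy check of the expansion and of the value at $z=3$ (illustrative sketch):

```python
import sympy as sp

z = sp.symbols('z')
P = z*((1 - z)**4*(z - 2) + (z - 2)**4*(z - 1)**4)
print(sp.expand(P))   # z**9 - 12*z**8 + 62*z**7 - ... + 14*z
print(P.subs(z, 3))   # 96
```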
Is there a notation for least/greatest element of partially ordered set?
I’ve commonly seen the following: $0$ or $\bot$ for the minimal element of a partially ordered set, and $1$ or $\top$ for the maximal element.
Combinatorics - seating 7 people around table with 8 seats; two people have to be two seats apart
Decide the seating order of the people, starting from one of the brothers, say Ivan. Then position the other brother, Alexei, in one of the two slots (fourth and fifth) that fulfill the "separated by two others" condition - $2$ options. Then with Ivan and Alexei resolved, order the remaining five people in one of $5!=120$ ways. Finally add the empty chair to the right of someone, $7$ options, giving $2\cdot 120\cdot 7 = 1680$ options.
Is this the $Ax=B$ form for the following system of equations?
@MorganRogers and @AlexanderGeldhof provided the answers in their comments. Thank you! The next step of my problem is here
Proof that a linear operator is continuous.
Yes, the argument is fine. The hypothesis is a little weird, as the role of $f$ is, as you showed, limited to its value at $1$. And also, if $f(t)=0$ for any single $t$, that would force $T=0$.
Can certain things never *ever* be proved?
It is always possible to extend a first-order logical system by adding axioms to make it a complete logic: one in which every sentence is either provable or disprovable. However, if your starting point is a system like first-order arithmetic, then you have no effective (i.e., computable) way of knowing whether an axiom you have added is true in the standard model of the original system. So the resulting "logic" is useful for some theoretical purposes, but fails to satisfy the important property that you can effectively check whether a sequence of statements is a proof.
Convergence $f^{k+1}(x)=f^k(x)\log(f^k(x))$
I will follow Michael's suggestion, and use lower indices. Let's inspect a few terms in the sequence $f_k(x)$: $$ f_1(x) = \log(x) \qquad f_2(x) = \log(x) \cdot \log(\log(x)) \qquad f_3(x)= \log(x) \cdot \log(\log(x)) \cdot \log(\log(\log(x))) $$ Therefore $f_k(x) = \prod_{n=1}^k \log^{(\circ n)}(x)$. The sum thus has the form: $$ g(x) = \frac{1}{h_1} + \frac{1}{h_1 h_2} + \frac{1}{h_1 h_2 h_3} + \ldots = \frac{1}{h_1} \left( 1 + \frac{1}{h_2} \left( 1 + \frac{1}{h_3} \left( 1+ \ldots \right)\right) \right) $$ with $h_1(x) = x \log(x)$, $h_2(x) = \log(\log(x))$, $h_3(x) = \log(\log(\log(x)))$, and so on. Thus $$ x \log(x) g(x) = h_1(x) g(x) = 1 + o(1) $$ and thus for $x \gg 0$, $g(x) = O \left( \frac{1}{x \log(x)} \right)$.
Proof that the Michael Line is Hausdorff ($T_{2}$)
Your proof is correct: you separate the points with sets that are standard-open (in the reals), and because the Michael line topology is a superset of the usual topology (we can always take $F = \emptyset$), these standard-open sets are still open in the new topology and are as required.
Showing $S^2/{\sim}$ (real projective plane) is Hausdorff
By definition of the quotient topology, $\pi(A)$ is open iff $\pi ^{-1}(\pi(A))$ is open. But this is just $A \cup -A$, which is a union of two open sets. Similarly for $B$.
Show that $\operatorname{div} X = - \delta X^\flat$
It seems that you are using $\delta \alpha = -g^{ij} \nabla_j\alpha_i$ as your definition. So your question is really why $\nabla_i X^i = g^{ij} \nabla _i X_j$, or in general why $$\nabla_i X^k = g^{jk} \nabla _i X_j\ .$$ There are several ways to answer your question. The first one is the laziest one (and the most useful one). Because both sides of your equation are independent of coordinates, it suffices to check the equation using any one coordinate system at any point $x$. In particular, we use normal coordinates such that $$g_{ij}=g^{ij} = \delta _{ij}\ ,\ \text{ and }\ \ \Gamma_{ij}^k = 0 = \partial_k g_{ij}\ \ \text{at }x\ .$$ Then $$ g^{jk} \nabla_i X_j = g^{jk} \nabla_i \big( g_{jl} X^l\big) = \frac{\partial X^k}{\partial x^i} = \nabla_i X^k\ \ \ \text{at }x\ .$$ The second method is to compute directly: $$g^{jk} \nabla_i X_j = g^{jk} (X_{j, i} - \Gamma_{ij}^l X_l) = g^{jk} \big((g_{jm}X^m)_{,i} - \Gamma_{ij}^l g_{lm}X^m\big)$$ which is $$g^{jk} \nabla_i X_j = X^k_{\ ,i} + g^{jk}X^m \big(g_{jm,i} - \Gamma_{ij}^l g_{lm} \big)\ .$$ Using the definition of $\Gamma$, $$g_{jm,i} - \Gamma_{ij}^l g_{lm} = g_{jm,i} - \frac{1}{2} g^{ln} \big( g_{jn, i} + g_{ni, j} - g_{ij,n} \big)g_{lm} = \frac{1}{2}\big(g_{ij,m} + g_{jm,i} -g_{mi,j}\big)$$ Hence $g^{jk} \nabla_i X_j = X^k_{\ ,i} + \Gamma_{im}^k X^m = \nabla_i X^k$. The last one is conceptual. The process of raising or lowering indices can be thought of as a composition of two operations. First you tensor $X$ with the metric $g$ (which is a $(0,2)$ tensor) to form a $(1, 2)$-tensor $$ (g\otimes X)^k_{\ ij} = g_{ij} X^k\ ,$$ followed by a contraction $C$ (summing over one upper and one lower index). Thus $$(X^\flat)_i = C(g\otimes X)_i= g_{ij}X^j \ .$$ As $\nabla$ commutes with any contraction and $\nabla g=0$ (metric condition), $$\nabla X^\flat = \nabla \big(C (g\otimes X)\big) = C\nabla(g\otimes X) = C\big( \nabla g \otimes X + g\otimes \nabla X\big) = C\big(g\otimes \nabla X\big)\ .$$ This is a coordinate-free expression of the equation $$\nabla_i X_j = g_{jk} \nabla_i X^k \Leftrightarrow g^{jk}\nabla_i X_j = \nabla_i X^k\ .$$
How do you know if an integral is impossible?
Integration is "difficult". There isn't one systematic approach that will work by hand for most integrals. When it comes to integrating in exams, your best bet is to just practice integrating lots of different things until you start to recognise patterns. Many integrable functions are of the form: $$\int f'(x)g'(f(x))\mathrm{d}x = g(f(x))+c$$ Your second example can be seen to be of this form: $$\frac{1}{\ln(x^2)} = \frac{1}{2\ln(x)}$$ $$\frac{1}{2x\ln(x)} = \frac{1}{2}\frac{1}{x}\frac{1}{\ln(x)} = \frac{1}{2}\ln'(x)\ln'(\ln(x))$$ $$\int \frac{1}{x\ln(x^2)} = \int \frac{1}{2}\ln'(x)\ln'(\ln(x)) = \frac{1}{2}\ln(\ln(x))+c$$ The key is to be able to notice these patterns, and I think the only way you can do that is through practice. As for integrability "in theory", this is entirely different to whether or not you can integrate by hand in an exam, and the comments below your question are quite good :)
A tetrahedron with known vertex coordinates is rotated and translated; knowing the coordinates of three transformed vertices, find those of the fourth
There is a unique representation $\vec{AD} = a \vec{AB} + b \vec {AC} + c \vec { AB } \times \vec {AC}$ for some constants $a,b,c$. Then, $ \vec {A_1 D_1} = a \vec{A_1B_1} + b \vec {A_1C_1} + c \vec { A_1B_1 } \times \vec {A_1C_1}$. So we can calculate $D_1$. Note: this approach also allows for cases where the tetrahedron is scaled.
Can this be integrated in any way?
Assume $b\neq0$ and $a\neq1$ for the key case: Hint: $\dfrac{dx}{dt}=\dfrac{1}{(1-x)x}\left(\dfrac{a-x}{1-a}+be^{-t}\right)$ $\left(be^{-t}+\dfrac{x-a}{a-1}\right)\dfrac{dt}{dx}=x(1-x)$ Let $u=e^{-t}$ , Then $t=-\ln u$ $\dfrac{dt}{dx}=-\dfrac{1}{u}\dfrac{du}{dx}$ $\therefore\left(bu+\dfrac{x-a}{a-1}\right)\left(-\dfrac{1}{u}\dfrac{du}{dx}\right)=x(1-x)$ $\left(u+\dfrac{x-a}{(a-1)b}\right)\dfrac{du}{dx}=\dfrac{x(x-1)u}{b}$ This belongs to an Abel equation of the second kind. Let $v=u+\dfrac{x-a}{(a-1)b}$ , Then $u=v-\dfrac{x-a}{(a-1)b}$ $\dfrac{du}{dx}=\dfrac{dv}{dx}-\dfrac{1}{(a-1)b}$ $\therefore v\left(\dfrac{dv}{dx}-\dfrac{1}{(a-1)b}\right)=\dfrac{x(x-1)}{b}\left(v-\dfrac{x-a}{(a-1)b}\right)$ $v\dfrac{dv}{dx}-\dfrac{v}{(a-1)b}=\dfrac{x(x-1)v}{b}-\dfrac{x(x-1)(x-a)}{(a-1)b^2}$ $v\dfrac{dv}{dx}=\dfrac{((a-1)x^2-(a-1)x+1)v}{(a-1)b}-\dfrac{x(x-1)(x-a)}{(a-1)b^2}$
Normal distribution of juice
$945$ mL is $5$ mL below the mean. $5$ mL is half of the standard deviation. So the question is, what is the probability that a standard normal random variable is less than $-1/2\ {}$? Usually you'll get that from a table or from software.
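Numerically (a small sketch; mean $950$ mL and standard deviation $10$ mL are the values implied above):

```python
from scipy.stats import norm

print(norm.cdf(-0.5))                    # ~0.3085
print(norm.cdf(945, loc=950, scale=10))  # same value, stated directly
```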
Convergence of the sum of a family of real-valued functions
Consider $\phi(x)=x$, and then consider $$\lim_{n\to\infty}\sum_{i=1}^n\phi(1/n) = \lim_{n\to\infty} n/n = 1.$$ I hope it's easy to see that $$\lim_{\delta \to 0^+}\sum_{i=1}^\infty \phi_j(\delta) \ge \lim_{n\to\infty}\sum_{i=1}^n\phi(1/n) = 1.$$ Of course you can also take the sum $$\lim_{n\to\infty}\sum_{i=1}^{n^2}\phi(1/n) = \lim_{n \to \infty}n =\infty.$$ This probably needs some low-level justification of why these particular sums can be chosen in this situation, but it should be correct.
Prove that the set of functions is uncountable using Cantor's diagonal argument
What you should realize is that each such function is also a sequence. The diagonal argument works as follows: you assume an enumeration of the elements, create from the diagonal an element that differs from the $n$-th one in the $n$-th position, and conclude that this element wasn't in the enumeration. To be concrete, assume that $f_n$ is such an enumeration and consider the function $$\phi(2n) = \begin{cases}b & \mbox{if } f_n(2n) = a\\ a & \mbox{otherwise}\end{cases}$$ Since $\phi(2n) \ne f_n(2n)$ we have that $\phi\ne f_n$.
Putnam Problem, Pigeonhole Principle
Call the sequence $a_1,\ldots,a_m$ and the $n$ distinct terms $t_1,\ldots,t_n$. For each $k$ from $1$ to $m$ define the set $$S(k)=\{j\mid \hbox{$t_j$ occurs an odd number of times among $a_1,\ldots,a_k$}\}\ .$$ Now consider two cases. Case 1: for some $k$ we have $S(k)=\varnothing$. Case 2: $S(k)$ is never empty. Then there are fewer than $m$ distinct sets $S(k)$, so at some point we have $S(k_1)=S(k_2)$ and then... Since you asked for hints rather than a solution I'll leave it there....
PRML - Predictive distribution - The line is not smooth
Without seeing your code what I will say is that it seems like your plots are correct and that any problems are in extrapolating these polynomial models onto regions where you don't have training data, and also the interpretation of what the jagged line plot displays. To help explain I tried to simulate a similar set of data to what you are working with, so here is a fit on the full data - which corresponds to your final figure and everything is looking pretty good. Now going back to the "jagged line" plot, what you seem to be doing here is the following: (1) computing the predicted mean and variance at $x$ based on a set of data $\mathbf{x}, \mathbf{t}$; (2) appending the predicted mean to an existing vector of mean points $\mathbf{m}$, and then appending $x,t$ to $\mathbf{x}$ and $\mathbf{t}$. However, the mean at all points $x \in \mathbf{x}$ will change as soon as you add the new training pair $(x,t)$ to $\mathbf{x},\mathbf{t}$. So what is the jagged line showing then? Well it is the collection of means, $m(x_n)$, conditional on only the training data up to $x_{n-1}$, that is $$ y_n = m(x_n) \;| \; (x_1,\ldots,x_{n-1}, t_1, \ldots, t_{n-1} ). $$ However there is no reason to think the sequence $(y_1,\ldots,y_n,\ldots)$ should vary smoothly because adding a new point will change the mean prediction over the whole range. In the next plot I have both the red line $\{ y_n \}$ equivalent to what you displayed in your fourth plot and I have also included plots of the predicted mean over all of the data points recalculated after updating the training set to include the points up to $x_n$ which are given by the (smooth) dotted black lines - note how the end points of the black lines are equal to the red line at $x_n$. Finally with regards to the explosive behavior, that is simply a case of the poor behaviour of these polynomials extrapolating a long way outside of regions for which you have data and I have plotted another example displaying the same phenomenon. In summary as far as I can tell what you are doing is working as intended.
Peirce’s law in a Hilbert system $\mathbf H$.
IMO, there is something missing in the description of the system: something linking $\text f$ with the negation sign $\lnot$. The simplest way to add it is to define the latter with the former: $\lnot A := A \to \text f$. If so, the derivation of Peirce's Law is straightforward, using the Deduction Theorem (provable with MP, Ax.1 and Ax.2: see many similar posts on this site):
1) $(A \to B) \to A$ --- premise
2) $\lnot A$ --- assumption [a]
3) $A$ --- assumption [b]
4) $\text f$ --- from 2) and 3) by MP, using the definition of $\lnot A$
5) $B$ --- from 4) and Ax.8, by MP
6) $A \to B$ --- from 3) and 5) by DT, discharging assumption [b]
7) $A$ --- from 6) and 1) by MP
8) $\text f$ --- from 2) and 7)
9) $\lnot \lnot A$ --- from 2) and 8) by DT, using again the def of $\lnot A$, discharging assumption [a]
10) $A$ --- from 9) and Ax.9, by MP
$((A \to B) \to A) \to A$ --- from 1) and 10) by DT
Is this a proposition?
The claim is not a claim of the form that is considered in propositional logic -- it belongs squarely in predicate logic. Some authors of introductory texts go to the trouble of defining a quasi-formal concept of what the word "proposition" means. Usually not much is actually done with this concept and it is forgotten about completely when you get to define propositional logic formally. Often it looks like the main purpose of offering the definition is to attempt to explain why it's called "propositional" logic. Unless you anticipate being asked "is such-and-such English sentence a proposition?" in an exam, I would not worry about a particular author's definition of the word. It is not going to be important.
Simple reduced rings which are not domains
Such rings do not exist. It was proved in 1968: Andrunakievič, V. A.; Rjabuhin, Ju. M. Rings without nilpotent elements, and completely prime ideals. Dokl. Akad. Nauk SSSR 180 (1968), 9–11. From the MathSciNet review: "...From here, it follows that a ring has no non-zero nilpotent elements if and only if it is isomorphic to a subdirect product of (non-commutative) integral domains. {This article has appeared in English translation [Soviet Math. Dokl. 9 (1968), 565–568].}" Reviewed by V. Dlab.
upper bound for $\int \limits_{a}^{b} \frac{t \ln \ln t}{\ln t} dt$?
Here's the best I could cook up: Let $t=e^{e^u}$ and $a,b>1$ so that we have $$I=\int_{\ln(\ln(a))}^{\ln(\ln(b))}ue^{2e^u}\ du$$ Notice then that $$e^{2e^u}=1+2e^u+2e^{2u}+\frac43e^{3u}+\frac23e^{4u}+\mathcal O(e^{5u})$$ And by integration by parts, we find that $$I=\frac12u^2+2(u-1)e^u+\left(u-\frac12\right)e^{2u}+\frac4{27}\left(3u-1\right)e^{3u}+\frac1{24}\left(4u-1\right)e^{4u}\bigg|_{\ln(\ln(a))}^{\ln(\ln(b))}\\+\mathcal O\left(\ln^5(b)\ln(\ln(b))\right)-\mathcal O\left(\ln^5(a)\ln(\ln(a))\right)$$
Explanation for how limit of a sum can be converted to definite integration
I was responding to your initial version; I'm a bit confused after your edit. You now have $\lim_{n\rightarrow \infty}(a/n)$ and $\lim_{n\rightarrow \infty}(b/n)$ as bounds for the integral, but those limits are both $0$...? I'll leave the answer below for now; perhaps you can clarify via comments. In my book it is written that $$\lim_{n\rightarrow\infty}\sum_{r=an}^{r=bn}f(\frac{r}{n})\left(\frac{1}{n}\right)$$ can be converted to $$\int_{a}^{b}f(x)dx$$ Can someone explain how the sum is being converted to the integration? Your book probably covers upper and lower and/or Riemann sums and the definite integral as a limit of (one of) these types of sums. The more general sum, a Riemann-sum, is formed by partitioning the interval $[a,b]$ into subintervals $[a=x_0,x_1],[x_1,x_2],\ldots,[x_{n-1},x_n=b]$ and picking an arbitrary point $x_i^*$ in each subinterval $[x_{i-1},x_i]$. Then consider the sum: $$\sum_{i=1}^n f\left(x_i^* \right) \left(x_i-x_{i-1} \right)=\sum_{i=1}^n f\left(x_i^* \right) \Delta x_i$$ Taking a limit is subtle in the sense that these sums not only depend on the chosen partition, but also on the selected points $x_i^*$. If (all) the lengths of the subintervals tend to $0$, the sum converges to the definite integral. The sums become simpler with some specific choices, such as: taking subintervals of equal length $\Delta x_n = \tfrac{b-a}{n}$, so the partition becomes $$[a,a+\Delta x_n],[a+\Delta x_n,a+2\Delta x_n], \ldots , [a+(n-1)\Delta x_n,a+n\Delta x_n=b]$$ and choosing the point $x_i^* \in [x_{i-1},x_i] = [a+(i-1)\Delta x_n,\color{purple}{a+i\Delta x_n}]$ as $\color{purple}{x_i^*=a+i\Delta x_n}$. The sum now simplifies to an expression which depends on $n$ only and you take the limit $n \to \infty$: $$\lim_{n \to \infty} \sum_{i=1}^n f\left( a+i\Delta x_n \right) \Delta x_n =\lim_{n \to \infty} \sum_{i=1}^n f\left( a+i \tfrac{b-a}{n} \right) \tfrac{b-a}{n} \tag{$*$}$$ Interpretation of the sums is now easier: you divide $[a,b]$ into $n$ subintervals of equal length and you pick an end point of each interval where you evaluate $f$. Now I have the feeling that your sum $$\lim_{n\rightarrow\infty}\sum_{r=an}^{r=bn}f(\frac{r}{n})\left(\frac{1}{n}\right)$$ mixes the general case where the interval is $[a,b]$ with a more specific case where the interval is $[0,1]$ since in that case, taking $\color{blue}{a=0}$ and $\color{red}{b=1}$, $(*)$ becomes: $$\lim_{n \to \infty} \sum_{i=1}^n f\left( \color{blue}{0}+i \tfrac{\color{red}{1}-\color{blue}{0}}{n} \right) \tfrac{\color{red}{1}-\color{blue}{0}}{n}=\lim_{n \to \infty} \sum_{i=1}^n f\left(\frac{i}{n} \right) \frac{1}{n}$$
Proof: Number theory: Prove that if $n$ is composite, then the least $b$ such that $n$ is not $b$-pseudoprime is prime.
Let $b$ be the minimum such element and assume it is not prime. Then $b=cd$ with $1<c,d<b$, and $b^{n-1}\not\equiv 1 \bmod n$. Then $b^{n-1}=c^{n-1}d^{n-1}\not \equiv 1$. So one of $c^{n-1}$ and $d^{n-1}$ is not $1\bmod n$, contradicting the minimality of $b$.
understanding of the number relation for all n, n^3 mod 9 is 0,1, or 8
You only need to cube each one: \begin{align} (3m+k)^3 &= 27m^3 + 27m^2k + 9mk^2 + k^3 \\ &= 9 (3m^3 +3m^2k+mk^2)+k^3 \end{align} You can now see that $(3m+k)^3 \equiv k^3 \text{ mod }9 $. So when $k=0,1,2$ you find every cube is $0,1 \text{ or }8 \text{ mod } 9$. Added: Every whole number is of the form $3m+k$ for some $m, k$ and where $k=0,1$ or $2$.
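A one-line numerical check of the conclusion (illustrative):

```python
print({n**3 % 9 for n in range(100)})   # {0, 1, 8}
```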
Linear independency of a group of vectors with one defined as a linear combination of the others
No. Take $n=2, k=3$ and $a_1=(1,0), a_2=(2,0)$ and $a_3=(3,0)$.
Let $H$ be a subgroup of $G$, and suppose that $G$ acts by multiplication over the set $X:=G/H$ of the left-hand side classes of $H$ over $G$.
The kernel of $\lambda$ should be $\bigcap_{g\in G}gHg^{-1}$. This is called the normal core of $H$. I think this is what you want. $A$ doesn't have a kernel, as $X$ is only a set. But $\operatorname{Sym}X$ is a group, and $\lambda$ a homomorphism. So we can talk about $\operatorname{ker}\lambda:=\{g\in G:\lambda(g)=e\}$.
Uniform distribution Measure
Let $U=\{(x,1)\,|\,x\in [0,1]\}$, $R=\{(1,y)\,|\,y\in [0,1]\}$, $L=\{(0,y)\,|\,y\in [0,1]\}$, and $D=\{(x,0)\,|\,x\in [0,1]\}$. It is clear that $\partial ([0,1]^2)=L\cup R\cup U\cup D$. Notice that for all $x,y\in [0,1)$ we have $$U\cup R\subseteq [0,1]^2\backslash [0,x]\times [0,y],$$ so $$\mu(U\cup R)\le \mu([0,1]^2)-\mu ([0,x]\times [0,y])=1-xy$$ for all $x,y\in [0,1)$, showing that $\mu(U\cup R)=0$. Since the Lebesgue measure is shift-invariant, we see that $\mu(U\cup R)=\mu(D\cup L)$. Thus, $$0\le \mu(\partial ([0,1]^2))=\mu(U\cup R\cup L\cup D)\le \mu(U\cup R)+ \mu(L\cup D)=0$$ so $\mu(\partial ([0,1]^2))=0$.
Converting from scientific notation to binary notation.
Aren't you allowed to convert the mantissa (fractional part) directly into binary? If you are allowed, then multiply by 2 at each step and remember the integer part:

.34375 x 2 = 0.6875
0.6875 x 2 = 1.375
0.375 x 2 = 0.75
0.75 x 2 = 1.5
0.5 x 2 = 1.0

Thus .34375 in decimal = .01011 in binary.
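The same multiply-by-2 procedure written as a small Python helper (an illustrative sketch; the function name is just for demonstration):

```python
def frac_to_binary(frac, max_bits=16):
    """Binary expansion of the fractional part of frac (0 <= frac < 1)."""
    bits = []
    while frac and len(bits) < max_bits:
        frac *= 2
        bit = int(frac)       # the integer part is the next binary digit
        bits.append(str(bit))
        frac -= bit
    return "." + "".join(bits)

print(frac_to_binary(0.34375))   # .01011
```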
solution of $y' = \exp \left(-\frac yx\right) + \frac yx$
This can be written in the form $y'=F(\frac{y}{x})$, so this is a homogeneous ODE. Let $z=\frac{y}{x}$, so that we obtain $y' = e^{-z} + z$. Observe that $z=\frac{y}{x} \implies y=zx \implies y'=z'x+z$ by the product rule. Therefore, $$ \begin{align} y' &= e^{-z} + z \\ z'x+z &=e^{-z}+z \\ z'x &=e^{-z} \\ z'&= \frac{e^{-z}}{x} \\ \frac{dz}{dx} &= \frac{e^{-z}}{x} \\ e^z dz &= \frac{dx}{x} \\ \int e^z dz &= \int \frac{dx}{x} \\ e^z &=\ln|x| + C \\ z &= \ln\left(\ln|x| + C \right). \\ \end{align} $$ Recall, from above, that $y=zx$. So the general solution is $y=x\ln(\ln|x|+C)$. Now we must solve for $C$ using the initial condition, $$ \begin{align} y(e) &= 0\\ e \ln(\ln|e|+C) &= 0\\ e \ln(1 + C) &= 0\\ \ln(1+C) &= 0\\ 1+C &= e^0\\ 1+C &= 1\\ C &= 0 \\ \end{align} $$ So the solution is, $$y=x\ln \left(\ln|x| \right).$$ $ \ $ You may find these resources helpful: Paul's Online Notes - Substitutions PatrickJMT Video Examples
Uniform boundedness principle for lower semicontinuous functions
No, the fact cannot be generalized to lower semi-continuous functions in the same form. A counterexample Take $X = \mathbb{R}$ and let $\{ q_n : n \in \mathbb{N} \}$ be an enumeration of all rationals. Consider the family $\mathcal{F} = \{ f_n : n \in \mathbb{N} \}$ where $f_n : X \to \mathbb{R}$ is defined as $$f_n(x) = \begin{cases} -n & \text{if } x = q_n \\ 0 & \text{otherwise} \end{cases} = -n \cdot \chi_{\{ q_n \}}(x).$$ Then every $f \in \mathcal{F}$ is lower semi-continuous. Also for each $x \in X$ there is at most one $f \in \mathcal{F}$ such that $f(x) \neq 0$, hence $\displaystyle \sup_{f \in \mathcal{F}} |f(x)| < \infty$. But for an arbitrary $n_0 \in \mathbb{N}$ the set $$A = \{ x \in X : (\exists f \in \mathcal{F}) \, |f(x)| > n_0 \} = \{ q_n : n > n_0 \}$$ is dense in $X$, so there is no open ball $B \subseteq X$ disjoint from $A$. Where does the proof fail? The proof fails, because for each $n_0 \in \mathbb{N}$ and $f \in \mathcal{F}$ the set $\{ x \in X : f(x) \leqslant n_0 \}$ is closed by the lower semi-continuity, but for the set $$\{ x \in X : |f(x)| \leqslant n_0 \} = \{ x \in X : -n_0 \leqslant f(x) \leqslant n_0 \}$$ to be closed, full continuity is required due to the inequality from below. A weaker generalization The proof goes through if we demand a weaker conclusion: Let $(X, d)$ be a complete metric space and $\mathcal{F}$ be a collection of lower semi-continuous, real-valued functions such that $\displaystyle (\forall x \in X) \, \sup_{f \in \mathcal{F}} f(x) < \infty$. Then there is $n_0 \in \mathbb{N}$ and an open ball $B \subseteq X$ such that $\displaystyle (\forall x \in B) \, \sup_{f \in \mathcal{F}} f(x) \leqslant n_0$.
Completion of $ Q(i)$ with respect to prime ideal (1+i)
Since $5=(2+i)(2-i)$, the only prime ideals above $5$ are the ideals $(2\pm i)$. But in an extension of number fields $L/K$ , you have $[L:K] =\sum [L_w:K_v]$, the sum bearing over all the $w$'s above a fixed $v$. Here $v$ is the 5-adic valuation and the global degree is 2, hence the two completions are $\mathbf Q_5$. This is actually the phenomenon called total splitting of a prime. Things are different for 2. Although $2 =(1+i)(1-i)$, this is not a genuine prime decomposition in $\mathbf Z[i]$ because the relation $i(1-i)=1+i$ shows that the two factors $1\pm i$ differ (multiplicatively) by a unit. In fact, $2$ decomposes as $2=-i (1+i)^2$, which means that $2$ is totally ramified, with ramification index $2$, so a fortiori the local degree is $2$. It follows that the completion at the prime $2$ is $\mathbf Q_2(i)$.
What is minimum speed needed to throw object over a wall that height h and at distance d?
Substituting $(d,h)$ into the standard projectile trajectory equation gives $$\begin{align}h&=d\tan\theta-\frac {gd^2}{2u^2\cos^2\theta}\\ u^2&=\frac {gd^2}{2\cos^2\theta(d\tan\theta-h)}\\ &=\frac {gd^2}{2(d\sin\theta\cos\theta-h\cos^2\theta)}\\ &=\frac {gd^2}{d\sin2\theta-h(1+\cos 2\theta)}\\ &=\frac {gd^2}{R(\frac dR\sin2\theta-\frac hR\cos 2\theta) -h} &&\scriptsize(R=\sqrt{d^2+h^2})\\ &=\frac {g(R^2-h^2)}{R(\sin2\theta\cos\alpha-\cos 2\theta\sin\alpha) -h} &&\scriptsize(\tan\alpha=h/d)\\ &=\frac {g(R^2-h^2)}{R\underbrace{\sin(2\theta-\alpha)}_{\le 1}-h}\\ {u^*}^2&=\frac {g(R^2-h^2)}{R-h} &&\scriptsize (u^*=\text{minimum launch velocity})\\ &=g(R+h)\\ &=\color{red}{g(\sqrt{d^2+h^2}+h)} \end{align}$$ Note that launch velocity is at a minimum when $2\theta-\alpha=\frac\pi2$, i.e. when $\theta=\frac\pi 4+\frac\alpha 2$.
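A numerical sanity check of the boxed result, minimizing $u^2$ over the launch angle on a grid (a sketch with arbitrary sample values for $g$, $d$ and $h$):

```python
import numpy as np

g, d, h = 9.81, 20.0, 5.0
theta = np.linspace(0.01, np.pi/2 - 0.01, 200000)
denom = 2*np.cos(theta)**2*(d*np.tan(theta) - h)
u2 = g*d**2/denom
u2 = u2[denom > 0]                    # keep angles that actually clear the wall

print(u2.min())                       # numerical minimum of u^2
print(g*(np.sqrt(d**2 + h**2) + h))   # closed form, should agree
```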
If a function maps an input to its inverse, is it bijective?
Your problem stems from the fact that the word "inverse" has (at least) two meanings. One describes the inverse of a function, another the inverse of a number (or element of some algebraic structure where that makes sense). These meanings are essentially different unless you move up one level of abstraction, where you "multiply" functions by composing them. Then they can occur in the same problem - with their two meanings. That's exactly what you encountered in your example with permutations. As @DustanLevenstein points out, the function from permutations to permutations that assigns to each permutation its inverse (as a permutation thought of as a function) has an inverse (as a function). (This is a good beginner's question.)
Problem in rth factorial moment about origin of hyper geometric distribution
I omit the terms which are the same in both lines. Second line: $x\cdot (x-1)\cdot \ldots \cdot (x-r+1)=\frac{x!}{(x-r)!} $ $^aC_x=\frac{a!}{x!(a-x)!}$ Product 1: $\frac{x!}{(x-r)!}\cdot \frac{a!}{x!(a-x)!}=\frac{1}{(x-r)!}\cdot \frac{a!}{(a-x)!}$ Third line: $a\cdot (a-1)\cdot \ldots \cdot (a-r+1)=\frac{a!}{(a-r)!}$ $^{a-r}C_{x-r}=\frac{(a-r)!}{(x-r)!\cdot (a-x)!}$ Product 2: $\frac{a!}{(a-r)!}\cdot \frac{(a-r)!}{(x-r)!\cdot (a-x)!}=\frac{a!}{(x-r)!\cdot (a-x)!}$ Product 1 and product 2 are equal.
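A quick numerical spot-check of the two products for some sample values (illustrative):

```python
from math import comb, factorial as f

a, x, r = 10, 6, 3
p1 = f(x)//f(x - r) * comb(a, x)          # x!/(x-r)! * C(a, x)
p2 = f(a)//f(a - r) * comb(a - r, x - r)  # a!/(a-r)! * C(a-r, x-r)
print(p1, p2)   # 25200 25200
```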
$\binom{n^2}{n}$ complexity (as a function of $n$)
By using Stirling's approximation $$n!\sim \sqrt{2\pi}\,\frac{n^{n+1/2}}{e^n},$$ we obtain $$\binom{n^2}{n}=\frac{n^2(n^2-1)\cdots (n^2-n+1) }{n!}\sim \frac{C n^{2n}}{n!}\sim C' n^{n-1/2}e^n$$ where $C$ and $C'$ are positive constants.
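A quick numerical look at the claimed growth rate (a sketch): the ratio $\binom{n^2}{n}\big/\big(n^{n-1/2}e^n\big)$ should settle toward a constant as $n$ grows.

```python
from math import comb, e

for n in (5, 10, 20, 40):
    print(n, comb(n*n, n) / (n**(n - 0.5) * e**n))
```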
Show tha triadiagonal is M-matrix.
It is well known that the eigenvalues of (the symmetric) $A$ are positive so $A$ is SPD. A symmetric Z-matrix is an M-matrix iff $A$ is SPD. See the third condition listed here.
Difference between $J$ and $j$
They're Bessel functions of different kinds; see Bessel function.
Is my graph and tree proofing correct for this degree sequence?
(2) is correct. For (1) you did not prove it is a graph. Can you construct a graph with such a sequence? Showing one would be a proof...
Covering space action on an orientable manifold $M$ implies $M/G$ orientable (Hatcher)
You have seen a couple of exercises concerning orientability so I suggest you should be able to solve it yourself with the following hint: Hint: You have to try to push down the orientation $\mu$ to $M/G$. To do this you have to use that there are no orientation reversing covering homeomorphisms, by noting that the preferred generators of any orbit in the covering give you the same preferred generator at the image point in $M/G$. For this just consider the commutative diagram of local homology groups, corresponding to a covering translation of $G$ together with the projection. Now note that under an orientation preserving homeomorphism, a preferred generator is mapped to a preferred generator. (Now the hint actually turned out to be a whole proof.)
Moments of diffusion processes
One of the methods is the backward Kolmogorov equation: $$ \begin{cases} \frac{\partial m}{\partial t} &= a(x)\frac{\partial m}{\partial x} + b(x)\frac{\partial^2 m}{\partial x^2}, \\ m(0,x) &= f(x) \end{cases} $$ where $m(t,x) = \mathsf E_xf(X_t) = \mathsf E[f(X_t)|X_0=x]$. In your case you should make calculations for $f_1 = x$ and $f_2 = x^2$. Then the variance will be given by $$ V[X_T] = m_2(T,x) - (m_1(T,x))^2 $$
How to identify and classify singularities of $\frac{\sin(z)}{1-\tan(z)}$
Note that $\displaystyle \lim_{z \to \frac{(2n+1)\pi}{2}} \frac{\sin z}{1-\tan z} = \lim_{z \to \frac{(2n+1)\pi}{2}} \frac{\sin z \cos z}{\cos z-\sin z} = 0$. So those points are removable singularities. You can check that $\displaystyle \tan z = 1 \iff z = \frac{\pi}{4} + n\pi, \, n\in\mathbb{Z}$ and those points are simple poles. Hence the Laurent series becomes $\displaystyle \frac{a_{-1}}{z-w_n} + a_0 + a_1(z-w_n) + a_2(z-w_n)^2 + \cdots$ where $\displaystyle w_n = \frac{\pi}{4} + n\pi$. This gives: $\displaystyle \frac{\sin z}{1-\tan z} = \frac{a_{-1}}{z-w_n} + a_0 + a_1(z-w_n) + a_2(z-w_n)^2 + \cdots$ $\displaystyle \sin z = (1-\tan z) \left( \frac{a_{-1}}{z-w_n} + a_0 + a_1(z-w_n) + a_2(z-w_n)^2 + \cdots \right)$. Use this equality to find Laurent coefficients by considering Taylor series of $\sin z$ and $1-\tan z$ at $\displaystyle z=w_n$. For example, let $n=0$ so $w_0 =\dfrac{\pi}{4}$ $\displaystyle \sin z = \frac{1}{\sqrt{2}} + \frac{z-\frac{\pi}{4}}{\sqrt{2}} - \frac{\left( z-\frac{\pi}{4} \right)^2}{2\sqrt{2}} - \frac{\left( z-\frac{\pi}{4} \right)^3}{6\sqrt{2}} + \frac{\left( z-\frac{\pi}{4} \right)^4}{24\sqrt{2}} + \frac{\left( z-\frac{\pi}{4} \right)^5}{120\sqrt{2}} - \cdots $ $\displaystyle = \left( -2\left( z-\frac{\pi}{4} \right) - 2\left( z-\frac{\pi}{4} \right)^2 - \frac{8}{3}\left( z-\frac{\pi}{4} \right)^3 + \cdots \right) \left( \frac{a_{-1}}{z-\frac{\pi}{4}} + a_0 + a_1\left(z-\frac{\pi}{4}\right) + a_2\left(z-\frac{\pi}{4}\right)^2 + \cdots \right)$ $\displaystyle \implies -2a_{-1} = \frac{1}{\sqrt{2}}, \, -2a_0 -2a_{-1} = \frac{1}{\sqrt{2}}, \, -2a_1 -2a_0 -\frac{8}{3}a_{-1} = -\frac{1}{2\sqrt{2}}, \, \dots$ by multiplying and comparing the terms of same degree.
Understanding of extension fields with Kronecker's thorem
Question 1) Yes, the definition in Gallian is equivalent to "$F$ is a subfield of $E$." When he says that the operations of $F$ are the operations of $E$ restricted to $F$, that does indeed mean that you can add/subtract/multiply elements of $F$ just by viewing them as elements of $E$ and performing the same operations. Question 2) I like that you are being picky here, and making sure that the definition is indeed satisfied. Although Gallian does not explicitly say so here, you can identify $F$ with the set of cosets of constant polynomials, and now you really do have a copy of $F$ as a subfield of $E$. Your alternative definition looks pretty good, but you need to specify that $\phi$ is a nonzero homomorphism (unless you require homomorphisms to send 1 to 1, in which case this is already guaranteed). Also, since you have a homomorphism, the operations on $\phi(F)$ are automatically those inherited from $E$.
$A$ be a real symmetric matrix of size $n$ ; is $I_n+A$ always non-singular ? Is $I_n - A$ always singular ?
Even simpler counter-examples: $$A = 0 \implies I_n - A = I_n \text{ is nonsingular}$$ $$A = -I_n \implies I_n + A = 0 \text{ is singular}$$
Eliminating all removable discontinuities
Let's use this definition of a removable singularity (from StinkingBishop's answer): $f$ has a removable discontinuity at $a$ iff $\lim_{x \to a}f(x)$ exists but $\lim_{x \to a}f(x) \neq f(a)$. Since all of the discontinuities of $f$ are removable, $\lim_{x \to a} f(x)$ exists for all $a \in \mathbb R$. (If $f$ is continuous at $a$, then $\lim_{x \to a}f(x)$ exists and $\lim_{x \to a}f(x) = f(a)$. If $f$ has a removable discontinuity at $a$, then $\lim_{x \to a}f(x)$ exists but $\lim_{x \to a}f(x) \neq f(a)$.) This means that it makes sense to define a function $g(x) := \lim_{y \to x} f(y)$. Claim: For all $a \in \mathbb R$, $\lim_{x \to a} g(x) = g(a)$. (This proves that $g$ is continuous everywhere.) Proof: Fix an $a \in \mathbb R$. Fix an $\epsilon > 0$. Since $\lim_{x \to a} f(x) = g(a)$, there exists a $\delta > 0$ such that $$x \in (a - \delta, a) \cup (a, a + \delta) \implies f(x) \in \left( g(a) - \tfrac 1 2 \epsilon, g(a) + \tfrac 1 2 \epsilon \right) \subset \left[ g(a) - \tfrac 1 2 \epsilon, g(a) + \tfrac 1 2 \epsilon \right].$$ But then, $$x \in (a - \delta, a) \cup (a, a + \delta) \implies g(x) = \lim_{y \to x} f(y) \in \left[ g(a) - \tfrac 1 2 \epsilon, g(a) + \tfrac 1 2 \epsilon \right] \subset \left( g(a) - \epsilon, g(a) + \epsilon \right).$$ [To spell this out, if $x \in (a - \delta, a) \cup (a, a + \delta)$, then there exists an open neighbourhood $U$ of $x$ such that $f(y) \in \left[ g(a) - \tfrac 1 2 \epsilon, g(a) + \tfrac 1 2 \epsilon \right]$ for all $y \in U$. Hence $\lim_{y \to x} f(y) \in \left[ g(a) - \tfrac 1 2 \epsilon, g(a) + \tfrac 1 2 \epsilon \right]$.] This shows that $\lim_{x \to a} g(x) = g(a)$.
How is this formula arrived at?
The total number of copies at each array size $n$ looks to me like it follows this table: $$\begin{array}{ccccc} \lceil \log_2 n \rceil & n & \text{array size} & \text{copies on resize} & \text{total copies} \\ 0 & 1 & 1 & 0 & 0 \\ 1 & 2 & 2 & 1 & 1 \\ 2 & 3-4 & 4 & 2 & 3\\ 3 & 5-8 & 8 & 4 & 7 \\ 4 & 9-16 & 16 & 8 & 15 \\ \vdots & \vdots & \vdots &\vdots &\vdots \\ \lceil \log_2 n \rceil & n & 2^{\lceil \log_2 n \rceil} & 2^{\lceil \log_2 n \rceil-1} & 2^{\lceil \log_2 n \rceil}-1 \end{array} $$ The formula for the total number of copies is evidently $$2^{\lceil \log_2 n \rceil} - 1.$$ I don't see a connection between this formula and the one you posted, without assuming typos or other errors. Is he perhaps counting something other than the total number of copies?
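Here is a small simulation of the doubling strategy (an illustrative sketch) that counts element copies while appending $n$ items and compares the count with $2^{\lceil\log_2 n\rceil}-1$:

```python
from math import ceil, log2

def copies_when_appending(n):
    size, used, copies = 1, 0, 0
    for _ in range(n):
        if used == size:      # array full: copy everything into a doubled array
            copies += used
            size *= 2
        used += 1
    return copies

for n in (1, 2, 3, 4, 5, 8, 9, 16, 100):
    print(n, copies_when_appending(n), 2**ceil(log2(n)) - 1)
```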
Does $Ax + By = C$ pass through any lattice point?
Assuming you mean the lattice of integers, that is equivalent to checking if the equation has integer solutions. Assume that $k\times GCD(A,B)=C$. Then by Euclid's algorithm, there are $X,Y$ such that $AX+BY=GCD(A,B)$, so $AkX+BkY=C$, where $kX$ and $kY$ must be integers. On the other hand, assume $Ax+By=C$ has a solution. Then $GCD(A,B)\left(\frac{A}{GCD(A,B)}X+\frac{B}{GCD(A,B)}Y\right)=C$ so $GCD(A,B)$ divides $C$.
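The same test in code (an illustrative sketch; the function name is hypothetical): the extended Euclidean algorithm produces an explicit integer point whenever $\gcd(A,B)$ divides $C$.

```python
def lattice_point(A, B, C):
    """Return integers (x, y) with A*x + B*y = C, or None if none exist (A, B not both 0)."""
    def ext_gcd(a, b):                 # returns (g, x, y) with a*x + b*y = g
        if b == 0:
            return a, 1, 0
        g, x, y = ext_gcd(b, a % b)
        return g, y, x - (a // b)*y

    g, x0, y0 = ext_gcd(A, B)
    if C % g:
        return None
    return (C//g)*x0, (C//g)*y0

print(lattice_point(6, 4, 10))   # (5, -5): 6*5 + 4*(-5) = 10
print(lattice_point(6, 4, 7))    # None, since gcd(6,4) = 2 does not divide 7
```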
Prove that this $1$-differential form on $S^1$ is well-defined
Yes, in spirit this is OK. You should be specifying, however, that $U_i = \{x\in S^1: x_i\ne 0\}$. And your "then $v=$ ..." is not totally correct, as $v$ could be any scalar multiple of this vector. An alternative approach (more amenable to generalization) is to observe that since $S^1$ is defined by the equation $x_1^2+x_2^2=1$, the $1$-form $$\frac 12 d(x_1^2+x_2^2) = x_1\,dx_1+x_2\,dx_2$$ is identically $0$ on $S^1$. It follows by simple algebra that $\dfrac{dx_1}{x_2}+ \dfrac{dx_2}{x_1} = 0$ wherever $x_1x_2\ne 0$, i.e., on $U_1\cap U_2$. It's better to learn to work with differential forms without always having to evaluate them on tangent vectors.
How to define a morphism from the Spec of the completion of $O_{Y,y}$ to $Y$?
By taking affine neighbourhood of $y$, we may assume $Y=\operatorname{Spec}B$, and define it as $B\to O_{Y,y}\to\hat{O}_{Y,y}$ where the first map is localization $O_{Y,y}=B_{m_y}$ and the second map is the completion.
order-isomorphism can send chain to other chain
OUTLINE: The simplest way to prove this would be by induction on $n$. For $n=1$ this is obvious, $f(x)=x+(c_1-a_1)$ is an automorphism of $\Bbb Q$ mapping $a_1$ to $c_1$. Now suppose this is true for $n$, and take $(n+1)$-tuples $a_i,c_i$. Then there is an automorphism of $\Bbb Q$ mapping $a_i$ to $c_i$ for $i\leq n$, call it $F$. Now find a way to "cut and paste" isomorphism on the domain after $a_n$, and define $f$ as wanted.
Counting ways to put objects(balls) in a row with some restrictions
For the first one, let’s ignore the white balls for a moment. There are $\binom{m-1+k}k$ ways to arrange $m-1$ red and $k$ black balls, so there are also $\binom{m-1+k}k$ ways to arrange $m$ red balls and $k$ black balls so that a red ball is first. The $n$ white balls must now be inserted into this string of $m+k$ balls in such a way that each of them is immediately followed by a red or a black ball. This means that they must go into the $m+k$ slots before the first red ball or between adjacent red and black balls, and only one white ball can go into any of those slots. Thus, there are $\binom{m+k}n$ ways to choose positions for the white balls and therefore $$\binom{m-1+k}k\binom{m+k}n$$ possible arrangements altogether. The second problem is actually a little simpler. Requiring that the first red ball precede the last white ball rules out only one arrangement of the red and white balls, namely, the one in which all $n$ white balls precede all $m$ red balls. Thus, there are $\binom{n+m}n-1$ possible arrangements of the red and white balls. The $k$ black balls must now be inserted into the $n+m+1$ slots before the first of the red and white balls, between adjacent red and white balls, or after the last of the red and white balls. Again we can put at most one black ball into each slot, so there are $\binom{n+m+1}k$ ways to insert the black balls into the string and therefore $$\left(\binom{n+m}n-1\right)\binom{n+m+1}k$$ arrangements altogether.
Limit of an infinite sum (error function)
You're trying to compute $\lim_{x\to\infty}\text{erf}(x)$ using the series expansion: $$\text{erf}(x)=\frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^{n}x^{2n+1}}{n!(2n+1)}$$ and finding that the terms of the series don't converge as $x\to\infty$. This is not a contradiction, because you've simply established that $$\lim_{x\to\infty}\lim_{m\to\infty}\sum_{n=0}^{m}\frac{(-1)^{n}x^{2n+1}}{n!(2n+1)} \ne\lim_{m\to\infty}\lim_{x\to\infty}\sum_{n=0}^{m}\frac{(-1)^{n}x^{2n+1}}{n!(2n+1)}\ . $$ In other words, interchanging the two limits doesn't yield the same result. The interchange of limits is not legal (i.e., the interchange won't give the same result) unless additional conditions are satisfied, such as uniform convergence. See Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \dfrac{\sqrt \pi}{2}$ for alternative ways to evaluate this integral.
Solution of Bessel equation
Anyway if it's about $$u''+\left(1+\frac{1-4p^2}{x^2}\right)u=0$$ here's my solution: Compare it with $$u''+u=0.$$ Observe that $$u(x) = A \sin(x-a)$$ is a solution of $u''+u=0$, and has zeros at $a$ and $a+\pi$. If $p>\frac{1}{2}$, then $1-4p^2<0$, so $1+\frac{1-4p^2}{x^2}<1$ for $x\in[a,a+\pi)$. Now, by the Sturm comparison theorem, a solution to the proposed ODE cannot have more than one zero in $[a,a+\pi)$, as $u(x) = A \sin(x-a)$ does not have a zero in $(a,a+\pi)$.
Solution to Hamilton-Jacobi differential equations
By integral curve they mean a solution to the system. Suppose $t \mapsto (x(t),y(t))$ is a solution, then let $\phi(t) = H(x(t),y(t))$. Then \begin{eqnarray} \dot{\phi}(t) &=& \frac{\partial H(x(t),y(t))}{\partial x} \dot{x}(t) + \frac{\partial H(x(t),y(t))}{\partial y} \dot{y}(t) \\ &=& (-\dot{y}(t)) \dot{x}(t) + (\dot{x}(t)) \dot{y}(t) \\ &=& 0 \end{eqnarray} It follows that $\phi$ is constant.
Help with Taylor Polynomial Estimation Solution.
You should finish the error bound something like this, to avoid using a pre-computed value of $e^{.1}$. The degree-$4$ Taylor polynomial at $0.1$ equals $1.10517083\ldots$, so we have $$1.1051708<e^{.1}<1.1051709+\frac{10^{-5}}{120}e^{.1}$$ The second inequality gives $$e^{.1}<\frac{1.1051709}{1-\frac{10^{-5}}{120}}\approx1.1051710$$ so that $$1.1051708<e^{.1}<1.1051710$$
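A quick numerical confirmation of these bounds (illustrative sketch):

```python
from math import exp, factorial

P4 = sum(0.1**k / factorial(k) for k in range(5))   # degree-4 Taylor polynomial
print(P4)                     # 1.1051708333...
print(exp(0.1))               # 1.1051709180...
print(P4 / (1 - 1e-5/120))    # ~1.1051709254, an upper bound
```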
Bounded distance of powers of a matrix from identity (Frobenius norm) implies that the matrix is identity
Yes. First notice that $A^k$ is bounded. If you write $A$ in the Jordan form, you can see that all eigenvalues must satisfy $|\lambda^k-1| \le {1 \over 2}$. In particular, we must have $|\lambda| =1$, and hence $\lambda =1$. Now consider a Jordan block $J$ of $A$, I claim that it must have size one. If not, then the $(1,2)$ element of $J^k$ is $k$, which is a contradiction. Hence $A$ consists of Jordan blocks of size one with eigenvalue one and so $A=I$.
Showing that a sequence converges
For $a_n>\sqrt7$, $$a_{n+1}=\frac{3a_n+7}{a_n+3}=3-\frac2{a_n+3}>\sqrt7$$ so the sequence is bounded below, while $$a_{n+1}-a_n=\frac{7-a_n^2}{a_n+3}<0$$ and the sequence is decreasing. The value $\sqrt7$ is nothing but the limit of $a_n$, found as the positive root of $$a=\frac{3a+7}{a+3}.$$
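Iterating the recurrence from a starting value above $\sqrt7$ illustrates the monotone decrease (a small sketch):

```python
a = 5.0
for _ in range(8):
    a = (3*a + 7) / (a + 3)
    print(a)
print(7 ** 0.5)   # ~2.6457513, the limit
```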
What is the laplace transform of the following function
$$\begin{align}f(t)&=t^{10}\cosh t\\&=t^{10}\left(\dfrac{e^t+e^{-t}}2\right)\\&=\dfrac12t^{10}e^t+\dfrac12t^{10}e^{-t}\\\mathcal{L}(f(t))&=\dfrac12\cdot\dfrac{10!}{(s-1)^{11}}+\dfrac12\cdot\dfrac{10!}{(s+1)^{11}}\end{align}$$
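A quick SymPy check of the result (illustrative sketch):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
F = sp.laplace_transform(t**10*sp.cosh(t), t, s, noconds=True)
expected = sp.factorial(10)/2*(1/(s - 1)**11 + 1/(s + 1)**11)
print(sp.simplify(F - expected))   # 0
```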
Prove sum of two convex funtions is convex using first and second derivatives
Since you're talking about second derivatives, I'll assume $f$ and $g$ are twice differentiable. Then $f, g$ are convex if and only if their second derivatives are nonnegative. But then the second derivative of $f+g$ is $f''+g'' \geq 0$ since $f'' \geq 0, g'' \geq 0$. So $f+g$ is convex, since it has nonnegative second derivative.
Upper Bound of Difference of Exponentials
You can use the Mean Value Theorem $$\frac{f(b)-f(a)}{b-a}=f'(\xi)$$ with $\xi \in [a,b]$
Let $S$ a subspace and $V$ a vector space. Show that the additive identity of $S$ is the additive identity of $V$.
My spiel is not too heavy on the logic, but I think it conveys the essential ideas: Consider: $\forall v \in V, \; 0_V + v = v + 0_V = v; \tag 1$ now let $s \in S \subset V; \tag 2$ then $s \in V, \tag 3$ whence, via (1), $0_V + s = s; \tag 4$ also, by (2), $0_S + s = s; \tag 5$ combining (4) and (5) we find $0_V + s = 0_S + s; \tag 6$ we may now write $(0_V + s) + (-s) = (0_S + s) + (-s), \tag 7$ from which $0_V + (s + (-s)) = 0_S + (s + (-s)); \tag 8$ now, $s + (-s) = 0_V, \tag 9$ so (8) becomes $0_V + 0_V = 0_S + 0_V, \tag{10}$ and thus by virtue of (1), $0_V = 0_V + 0_V = 0_S + 0_V = 0_S. \tag{11}$
The shape of an orthonormal matrix
It should be clear that the first entry in $UX$ is given by $( \frac{1}{\sqrt{n}}, \ldots \frac{1}{\sqrt{n}} ) \cdot(\mu, ... \mu)^{T}= \sqrt{n} \mu$. Now suppose that the $j$-th row of $U$, for $j\ge 2$, has the form $(a_1,a_2,...,a_n)$. Since $U$ is orthonormal we have $0=(a_1,a_2,...,a_n) \cdot ( \frac{1}{\sqrt{n}}, \ldots \frac{1}{\sqrt{n}} )^T =\frac{1}{\sqrt{n}}(a_1+a_2+...+a_n)$, thus $a_1+a_2+...+a_n=0$. The $j$-th entry in $UX$ is therefore $$\mu(a_1+a_2+...+a_n)=0.$$
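A NumPy illustration (a sketch with arbitrary $n$ and $\mu$): build an orthonormal matrix whose first row is $(1/\sqrt n,\dots,1/\sqrt n)$ and apply it to a constant vector.

```python
import numpy as np

n, mu = 5, 2.3
M = np.column_stack([np.ones(n)/np.sqrt(n), np.random.randn(n, n - 1)])
Q, _ = np.linalg.qr(M)        # orthonormal columns; first column keeps its direction
U = np.sign(Q[0, 0]) * Q.T    # rows of U orthonormal; first row = (1/sqrt(n), ..., 1/sqrt(n))

X = np.full(n, mu)
print(U @ X)                  # first entry ~ sqrt(n)*mu, all other entries ~ 0
print(np.sqrt(n)*mu)
```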
Sigma Algebra where M is any function?
1) Set $E_1 := A$, $E_2 := B\backslash A$, $E_3 := E_4 := \dots := \emptyset$ and see $$ m(A) \leq m(A) + m(E_2) + m(\emptyset) + \dotso = m(\cup_n E_n) = m(B)$$ 2) Define $E_1 := A_1$, $E_2 := A_2\backslash A_1$, $E_3 := A_3\backslash (A_1\cup A_2)$ and so on, i.e. $E_n := A_n\backslash(\cup_{i=1}^{n-1}A_i)$. This gives you a disjoint partition of $\cup_n A_n$. Thus $$ m(\cup_n A_n) = \sum_n m(E_n) \leq \sum_n m(A_n)$$ where the last step is true by 1), since $E_n \subset A_n$.
Tricky proof of a result of Michael Nielsen's book "Neural Networks and Deep Learning".
Goal: We want to minimize $\Delta C \approx \nabla C \cdot \Delta v$ by finding some value for $\Delta v$ that does the trick. Given: $\|\Delta v\| = \epsilon$ for some small fixed $\epsilon > 0$ (this is our fixed "step size" by which we'll move down the error surface of $C$). How should we move $v$ (what should $\Delta v$ be?) to decrease $C$ as much as possible? Claim: The optimal value is $\Delta v = -\eta\nabla C$ where $\eta = \epsilon/\|\nabla C\|$, or, $\Delta v = -\epsilon\nabla C/\|\nabla C\|$. Proof: 1) What is the minimum of $\nabla C \cdot \Delta v$? By the Cauchy-Schwarz inequality we know that $|\nabla C \cdot \Delta v| \le \|\nabla C\|\,\|\Delta v\|$, therefore $\min(\nabla C \cdot \Delta v) = -\|\nabla C\|\,\|\Delta v\|$ (recall: if $|x| \le 2$, then $x \in [-2,2]$). 2) By substitution, we want some value for $\Delta v$ such that $\nabla C \cdot \Delta v = -\|\nabla C\|\,\|\Delta v\| = -\epsilon\|\nabla C\|$. 3) Consider the following: $\nabla C \cdot \nabla C = \|\nabla C\|^2$ (because $\|\nabla C\| = \sqrt{\nabla C \cdot \nabla C}$), therefore $(\nabla C \cdot \nabla C)/\|\nabla C\| = \|\nabla C\|$. 4) Now multiply both sides by $-\epsilon$: $(-\epsilon\,\nabla C \cdot \nabla C)/\|\nabla C\| = -\epsilon\|\nabla C\|$. Notice that the right-hand side of this equality is the same as in (2). 5) Rewrite the left-hand side of (4) to separate one of the $\nabla C$'s; the other factor will be our $\Delta v$ such that $\nabla C \cdot \Delta v = -\epsilon\|\nabla C\|$: $\left(\frac{-\epsilon\nabla C}{\|\nabla C\|}\right)\cdot\nabla C = -\epsilon\|\nabla C\|$. Therefore, because the Cauchy-Schwarz inequality told us what the minimum of $\nabla C \cdot \Delta v$ could be (by considering $|\nabla C \cdot \Delta v|$), we know $\Delta v$ must be $\frac{-\epsilon\nabla C}{\|\nabla C\|}$. $\blacksquare$
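A small numerical illustration of the claim (a Python/NumPy sketch with a made-up gradient vector; it only checks that $\Delta v = -\epsilon\nabla C/\|\nabla C\|$ attains the Cauchy-Schwarz lower bound $-\epsilon\|\nabla C\|$ and that random steps of the same length do no better):

```python
import numpy as np

rng = np.random.default_rng(1)
grad_C = rng.standard_normal(10)            # a made-up gradient vector
eps = 0.01                                  # fixed step size, ||dv|| = eps

dv_opt = -eps * grad_C / np.linalg.norm(grad_C)
bound = -eps * np.linalg.norm(grad_C)       # Cauchy-Schwarz lower bound on grad_C . dv
print(grad_C @ dv_opt, bound)               # equal up to rounding

for _ in range(5):                          # any other step of length eps does no better
    d = rng.standard_normal(10)
    dv = eps * d / np.linalg.norm(d)
    assert grad_C @ dv >= grad_C @ dv_opt - 1e-12
```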
figure 8 homeomorphism to line
Here's a hint: You probably know that one invariant of a topological space $X$ is its number of connected components. One way to generalize this notion is to ask: "after removing a point $x$ from $X$, how many connected components does the resulting space $X \setminus x$ have?" Can you think of a way to bootstrap this and distinguish the spaces?
Irreducibility of Polynomials without Eisenstein
Hint: For the first one, suppose $f$ is reducible in $\mathbb{Q}[X]$. There is no rational root of the form $p/q$ (you can check this, e.g. with the rational root theorem), so $f$ has no linear factor and the only remaining possibility is that $f(X)$ is the product of two irreducible quadratics. Suppose $f(X) = (X^2+aX+b)(X^2 + cX+d)$, where $a,b,c,d \in \mathbb{Q}$. Comparing coefficients, you can arrive at a contradiction. For the second one, as has already been mentioned in the comments, $g$ is irreducible in $\mathbb{Q}[X,Y]$. Note that you could use the fact that $\mathbb{Q}[X,Y]= (\mathbb{Q}[X])[Y]$, i.e. treat $g$ as a polynomial in $Y$ with coefficients in $\mathbb{Q}[X]$.
Text books on computability
Cutland, Computability (CUP). A beautifully lucid and elegant classic text.
How to prove the midpoint of a chord is also the midpoint of the line segment defined by the points of intersection of other two chords with it?
The theorem is known as the "Butterfly Theorem". One fairly simple proof is the following (from Selected Problems and Theorems of Elementary Mathematics by D. O. Shklyarsky, N. N. Chentsov and I. M. Yaglom). Let O be the center of the given circle. Since OM ⊥ CD, in order to show that CM = MD, we have to prove that ∠COM = ∠DOM. Drop perpendiculars OK and ON from O onto PS and QR, respectively. Obviously, K is the midpoint of PS and N is the midpoint of QR. Further, ∠PSR = ∠PQR and ∠QPS = ∠QRS, as angles subtending equal arcs. Triangles SPM and QRM are therefore similar, and SP/SM = QR/QM, or SK/SM = QN/QM. In other words, in triangles SKM and QNM two pairs of sides are proportional. Also, the angles between the corresponding sides are equal. We infer that the triangles SKM and QNM are similar. Hence, ∠SKM = ∠QNM. Now, have a look at the quadrilaterals OKCM and ONDM. Both have a pair of opposite right angles, which implies that both are inscribable in a circle. In OKCM, ∠SKM = ∠COM. In ONDM, ∠QNM = ∠DOM. From which we get what we've been looking for: ∠COM = ∠DOM.
Prove $(A×C)\cap(B×D)=(A\cap B)×(C\cap D)$
$\begin{aligned} &(x, y)\in (A\times C)\cap (B\times D) &\iff\\ &(x, y)\in (A\times C) \wedge (x, y)\in (B\times D) &\iff\\ &x\in A \wedge y\in C \wedge x\in B \wedge y\in D &\iff\\ &x\in A\cap B \wedge y\in C\cap D &\iff\\ &(x, y)\in (A\cap B)\times (C\cap D) & \end{aligned}$ so $(A\times C)\cap (B\times D) = (A\cap B)\times (C\cap D)$
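As a finite sanity check of the identity (a small Python sketch with toy sets of my own choosing; the chain of equivalences above is of course the actual proof):

```python
from itertools import product

A, B = {1, 2, 3}, {2, 3, 4}
C, D = {'x', 'y'}, {'y', 'z'}

lhs = set(product(A, C)) & set(product(B, D))
rhs = set(product(A & B, C & D))
print(lhs == rhs)   # True
```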
Does a (not-uniformly) convergent sequence of bounded functions converge to a bounded function?
For a somewhat contrived but simple example, define $$f_n(x) = \begin{cases} -n & \text{ if }x < -n \\ x & \text{ if }-n \leq x \leq n \\ n & \text{ if }x > n \\ \end{cases}$$ Then each $f_n$ is bounded ($|f_n(x)| \leq n$ for all $x \in \mathbb R$), and $f_n$ converges pointwise to $f(x) = x$.
Ratio of two sides of a triangle
Note that the triangles AON and ANM share the same base line AN and use the ratios below $$\frac32 = \frac{OP}{PM}=\frac{Area_{AON}}{Area_{ANM}} = \frac{\frac{ON}{OB}Area_{ABC}}{\frac12 \frac{NB}{OB}Area_{ABC}} =2\frac{ON}{NB}$$ Thus, $$\frac{ON}{NB} = \frac34$$
Closest Positive-Definite Matrix Subject to a Contraint
Let me formulate the optimization problem: $$ \min_{B\in {\Re}^{2n\times 2n}} \lVert A - B \rVert_{F}^2 ~~~~~{\rm s.t.}~~~ B + Q \geq 0.$$ The objective function is convex, since it is a (squared) norm. The constraint set is convex. Therefore, we have a convex optimization problem. Convex optimization problems are very well understood and there is software available to solve them. For software, I would recommend you the cvx-interface for Matlab http://cvxr.com/cvx/. For further reading on convex optimization see http://web.stanford.edu/~boyd/cvxbook/.
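For concreteness, here is roughly what the problem looks like in code, using the Python package CVXPY rather than the Matlab interface mentioned above (a sketch; the symmetry assumption on $B$, the random data, and minimizing the norm instead of its square, which has the same minimizer, are my own illustrative choices):

```python
import numpy as np
import cvxpy as cp

d = 6                                          # matrix size (2n in the notation above)
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
Q = rng.standard_normal((d, d)); Q = Q @ Q.T   # some symmetric Q, for illustration only

B = cp.Variable((d, d), symmetric=True)        # assuming B is meant to be symmetric
constraints = [B + Q >> 0]                     # B + Q positive semidefinite
prob = cp.Problem(cp.Minimize(cp.norm(A - B, 'fro')), constraints)
prob.solve()
print(prob.status, prob.value)
```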
Prove that $\|{e^{At}x_o}\| \geq e^{-\lambda t}\|{x_o}\|$
Write $\|x_0\|_2=\|e^{-At}e^{At}x_0\|_2\leq \|e^{-At}\|_2\|e^{At}x_0\|_2$. Now, $\|e^{-At}\|_2\leq e^{\|At\|_2}=e^{\lambda t}$, where $\lambda:=\|A\|_2$. Notice that if $A$ is nonzero, then $\lambda>0$. It's not clear what norm you want here, but the same proof should work for any consistent norm, that is, one satisfying $\|Ax\|\leq \|A\|\|x\|$.
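A quick numerical check of the resulting inequality $\|e^{At}x_0\|_2 \geq e^{-\lambda t}\|x_0\|_2$ with $\lambda := \|A\|_2$ (a Python/SciPy sketch with random data, purely illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x0 = rng.standard_normal(4)
lam = np.linalg.norm(A, 2)                     # spectral norm of A

for t in (0.1, 0.5, 1.0, 2.0):
    lhs = np.linalg.norm(expm(A * t) @ x0)
    rhs = np.exp(-lam * t) * np.linalg.norm(x0)
    assert lhs >= rhs
    print(t, lhs, rhs)
```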
To find annihilator of given module
If $\{x_1,x_2,\dots,x_n\}$ is a set of generators of the $R$-module $M$, then $r\in R$ annihilates $M$ if and only if it annihilates all the generators. The $\mathbb{Z}$-module $\mathbb{Z}_{14}$ has $1+14\mathbb{Z}$ as generator. The $\mathbb{Z}$-module $\mathbb{Z}_4\times\mathbb{Z}_6$ has $(1+4\mathbb{Z},0+6\mathbb{Z})$ and $(0+4\mathbb{Z},1+6\mathbb{Z})$ as generators. The integer $r$ annihilates $1+n\mathbb{Z}$ if and only if $r\in n\mathbb{Z}$.
Annulus without ambient space?
First of all, "loops" will not detect something as subtle as you mean. However, it is also true that taking the closure of an open subset and considering the boundary of the resulting space depends on how you embed the space. For example, the subset $\{(x,y,z) \mid z \neq \pm 1\} \subset S^2$ is topologically a cylinder $S^1 \times \mathbb R$, which is homeomorphic to the open annulus. Likewise, the subset $\{(x,y,z) \mid |z|<1/2\} \subset S^2$ is also an annulus. The "boundary" of the first is two points and the boundary of the latter is $S^1 \coprod S^1$, despite the two subsets being homeomorphic. On the other hand, "loops" will only detect things up to homotopy. For example, as spaces unto themselves, $S^1$ and $S^1 \times \mathbb R$ have the same fundamental group.
Finite measure divided into small measures
Prove it when $E$ is a $G_\delta$, that is, a countable intersection of open sets. In the general case, prove that there is a $G_\delta$, say $F$, such that $\lambda(F)=\lambda^*(E)$ and $E\subset F$.
How many functions $ f : A \rightarrow A$ can be defined such that $\text{f(1) < f(2) < f(3)?}$
There are fewer than $8$ options for $f(3)$. For example, you cannot choose $f(3)=1$, because then $f(2)$ cannot be defined so that $f(2)<f(3)$ is satisfied. Instead, think about it this way: it's not important what $f(4),\dots, f(8)$ are, you just need to count the number of ways you can define $f(1),f(2), f(3)$ and then multiply that by $8^5$. NOTE: it's $8^5$ because each of the $5$ elements $4,5,6,7,8$ can be mapped to any of $8$ options; so, $8$ options for $4$, $8$ for $5$ and so on, a total of $8^5$ options. For the function values on $\{1,2,3\}$, every function that satisfies the condition $f(1)<f(2)<f(3)$ uniquely determines (and is determined by) a three-element subset $\{f(1), f(2), f(3)\}\subseteq A$.
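The count of admissible triples $(f(1),f(2),f(3))$ can be confirmed by brute force (a short Python sketch; it also prints the resulting total $\binom{8}{3}\cdot 8^5$):

```python
from itertools import product
from math import comb

A = range(1, 9)
triples = sum(1 for a, b, c in product(A, repeat=3) if a < b < c)
print(triples, comb(8, 3))     # 56 56
print(triples * 8**5)          # 1835008 functions in total
```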
Where does the following series converge?
$$ \sum_{i=1}^n\frac1 {n+i} = \sum_{i=1}^n\frac1n\,\frac1 {1+\frac in} \to\int_0^1\frac1 {1+x}\,dx=\log2. $$
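Numerically (a minimal Python check of the Riemann-sum limit; the value of $n$ below is just a moderately large illustrative choice):

```python
import math

n = 10**6
s = sum(1 / (n + i) for i in range(1, n + 1))
print(s, math.log(2))   # agree to about 6 decimal places
```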
How to solve $ \tan (m \theta) + \cos(n \theta) =0$
As marty cohen answered, for the most general case, only numerical methods will be able to solve the equation $$\tan (2m \theta) + \cos(2n \theta) =0$$ which, as he wrote, would be better written as $$\tan(t)+\cos(rt)=0$$ The problem is that the function $$f(t)=\tan(t)+\cos(rt)$$ presents an infinite number of discontinuities at $t=(2k+1)\frac \pi 2$, and this is never very good. Assuming $\cos(t)\neq 0$ at a solution, I suggest that you look instead for the zeros of $$g(t)=\sin(t)+\cos(t)\cos(rt)$$ which is continuous everywhere. Starting from a "reasonable" guess $t_0$, Newton's method will update it according to $$t_{n+1}=t_n+\frac{\cos (t_n) \cos (r t_n)+\sin (t_n)}{\sin (t_n) \cos (r t_n)+\cos (t_n) (r \sin (r t_n)-1)}$$
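A minimal implementation of this iteration (a Python sketch; it applies Newton's step in the equivalent form $t \mapsto t - g(t)/g'(t)$, and the values of $r$ and the initial guess $t_0$ are arbitrary illustrative choices):

```python
import math

def newton_root(r, t0, iters=30):
    """Newton's method for g(t) = sin(t) + cos(t)*cos(r*t)."""
    t = t0
    for _ in range(iters):
        g = math.sin(t) + math.cos(t) * math.cos(r * t)
        dg = math.cos(t) - math.sin(t) * math.cos(r * t) - r * math.cos(t) * math.sin(r * t)
        t -= g / dg                      # same update as the closed formula above
    return t

r, t0 = 2.5, -0.5
t = newton_root(r, t0)
print(t, math.tan(t) + math.cos(r * t))  # residual of the original equation, ~ 0
```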
A question about Möbius transformation in Complex Variables
The particular coefficients do not really matter; any (invertible) transformation has $ad-bc \neq 0.$ Such a mapping, written as a function, is the composition of a finite string of just three types: First $f(z) = A z$ for some nonzero (real or complex) $A$ takes circles with center at $0$ to others, lines through $0$ to others. Second $g(z) = z + B$ takes lines to lines and circles to circles, it is just a translation. Third, $h(z) = \frac{-1}{z}$ does a little of both. Since $0$ is sent out to $\infty$ and $\infty$ is brought back to $0,$ the following bunch of things happen: A) a line through $0$ is sent to a line through $0.$ B) a line not through $0$ is sent to a circle that passes through $0,$ with center elsewhere. C) A circle that passes through $0$ is sent to a line that does not pass through $0.$ D) A circle that does not pass through $0$ is sent to another circle that does not pass through $0.$ You really ought to draw some things yourself for $h(z) = -1/z.$ There is a good reason for the minus sign, but if you prefer draw $H(z) = 1/z.$
Reflection of a graph about the line $y=2x$.
Write $t= 2x$, then we have to find a reflection of $y= {4+2t\over 4+t^2}$ across $y=t$. So swap $y$ and $t$ in the given equation and solve for $y$ in this new equation: $$ t={4+2y\over 4+y^2}$$ So $$ 4t+ty^2=4+2y\implies ty^2-2y+4t-4=0$$ so $$ y= {2\pm 2\sqrt{1-4t^2+4t}\over 2t} = {1\pm \sqrt{-x^2+2x+1}\over 2x}$$ so none of the offered options.
Proof a rule to be admissible.
In general, no. You can do so if you have a completeness meta-theorem and invoke it. Say you work with a system which just has a conditional introduction rule and a conditional elimination rule: {(cond $\alpha$ $\beta$), $\alpha$} $\vdash$ $\beta$ You might show that model-theoretically (semantically/using truth tables) the following holds: |= (cond (cond (cond $\alpha$ $\beta$) $\alpha$) $\alpha$) But, you can't admit the rule of inference (cond (cond $\alpha$ $\beta$) $\alpha$) $\vdash$ $\alpha$ for the above system. If you did, then $\vdash$ (cond (cond (cond $\alpha$ $\beta$) $\alpha$) $\alpha$) would hold, which doesn't work for the above system: you can find a model which satisfies the conditional introduction rule and the conditional elimination rule, but doesn't satisfy the last formula.
I can't understand the algebraic simplification below
Consider $$ \frac{1}{2 \sqrt{3}} = \frac{\sqrt{3}}{\sqrt{3}} \frac{1}{2 \sqrt{3}} = \frac{\sqrt{3}}{2 \sqrt{3} \sqrt{3}} = \frac{\sqrt{3}}{6}$$ As can be seen I have multiplied the fraction by $1 = \sqrt{3} \ / \sqrt{3}$.
Confused about Wikipedia page on differential forms
For a vector $y=(y_1,\ldots,y_n) \in \textbf{R}^n$, the i-th coordinate function $x^i$ simply picks out the i-th entry, that is, $x^i(y)=y_i$. Thus, if $e_1,\ldots,e_n$ is the standard basis for $\textbf{R}^n$, then any vector $y=(y_1,\ldots,y_n)$ can be written as $y=\sum_{i=1}^n x^i(y)e_i$.
What is the cumulative binomial distribution, on the probability of "at least one"
Hint $$P(X\geq k) = 1-P(X \lt k)$$ Solution $$P(X\geq 1) = 1-P(X\lt 1) = 1- {1000\choose 0}\left(\tfrac1 6\right)^0\left(1-\tfrac1 6\right)^{1000} = 1-\left(\tfrac5 6\right)^{1000}\approx 1$$ This "trick" is pretty obvious as it stands, and it's commonly used when calculating the cumulative probability of at least $m$ successes over a large number $n$ of trials with $m\ll n$.
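For completeness, the same number computed directly and via SciPy's binomial CDF (a sketch assuming $n=1000$ independent trials with success probability $1/6$):

```python
from scipy.stats import binom

n, p = 1000, 1 / 6
p_at_least_one = 1 - binom.cdf(0, n, p)      # 1 - P(X = 0)
print(p_at_least_one, 1 - (5 / 6)**n)        # both print 1.0 to double precision
```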