How does this list of optimal values prove Farkas' lemma?
Is there a feasibility relationship between a linear program and its dual? Yes: if the primal is infeasible, the dual is infeasible or unbounded. If the primal is unbounded, the dual is infeasible. You can switch primal and dual in these statements. These statements follow from the weak duality theorem. The two lines of reasoning here are: If the dual is feasible, the dual has optimal value $0$, and therefore the primal also has optimal value $0$ (by strong duality), and therefore you cannot have $x$ such that $c^Tx < 0$, $Ax \leq 0$. If there is an $x$ for which $c^Tx < 0$ and $Ax \leq 0$, then the primal problem is unbounded (substitute $tx$ and let $t\to\infty$), therefore the dual is infeasible.
How to read this in simple English?
$\exists x P(x)$ is read as "there exists $x$ such that $P(x)$ holds". Likewise, $\forall x P(x)$ is read as "for every $x$, $P(x)$ holds". $A \implies B$ is read as "if $A$ then $B$". Thus, bringing all of this together, $\exists x P(x) \implies \forall x P(x)$ is read as "if there exists $x$ such that $P(x)$, then $P(x)$ holds for every $x$", which is the same as what you have.
When is a scheme-theoretic fiber reduced?
If you are over a field of char. $0$, then yes, it is true that the fibre over a generic point of the image is reduced. To see this, first replace $W$ with the closure of the image of $\phi$, so as to assume that $\phi$ is dominant. Then the morphism $\phi$ corresponds to an injection of finite-type domains over $k$, say $A \hookrightarrow B$. (The injectivity follows from the dominance of $\phi$.) Now let's look at the generic fibre of $\phi$: this is Spec of the tensor product $K(A) \otimes_A B$ (here $K(A)$ denotes the fraction field of $A$), which is a localization of $B$, and so a domain (and hence reduced). Now if we are in char. zero, the reduced $K(A)$-algebra $K(A)\otimes_A B$ is necessarily geometrically reduced (i.e. stays reduced after extending to an algebraic closure of $K(A)$), and the property of fibres being geometrically reduced is open on the base, i.e. on Spec $A$; thus the fibres over an open subset of the image of $\phi$ will be reduced. (See e.g. Thm. 2.2 of these notes. You can find this in many places (though maybe not Hartshorne); these notes are just what turned up near the top of a quick Google search.) In char. $p$, this need not be true. E.g. consider the Frobenius map $\mathbb A^1 \to \mathbb A^1$, given by $x \mapsto x^p$. Then the fibre over every (closed) point is non-reduced. (The fibre over the generic point is reduced, but not geometrically reduced.) Nevertheless, if the generic fibre (in the scheme-theoretic sense) is geometrically reduced, then the fibre over an open set of closed points will be reduced too.
Proof that (S(X),$\circ$) is a group.
You are almost there. For the neutral element: indeed you need the identity map. Then $\mathrm{Id}\circ f(x)=\mathrm{Id}(f(x))=f(x)=f(\mathrm{Id}(x))$ for all $x\in X$, by definition of the identity map. Then for the inverse, every bijective function has an inverse, so because $f$ is a bijection there exists a two-sided inverse $f^{-1}$. Note that you also need to check that the composite of two bijections $X\to X$ is again a bijection $X\to X$. Then, together with associativity, which you already checked, you conclude $S(X)$ is a group, using composition of maps $\circ$ as the operation.
Limit of sum as definite integral
Consider the function $$f(x)=\frac{1}{1+x+x^2}$$ and note that your inequality holds because $f$ is strictly decreasing. Indeed for $n\geq 1$, $k\geq 0$, and $x\in [\frac{k}{n},\frac{k+1}{n}],$ $$f(\frac{k}{n})> f(x)> f(\frac{k+1}{n}).$$ By integrating over the interval $[\frac{k}{n},\frac{k+1}{n}]$, we get $$\frac{f(\frac{k}{n})}{n}=\int_{\frac{k}{n}}^{\frac{k+1}{n}}f(\frac{k}{n})dx> \int_{\frac{k}{n}}^{\frac{k+1}{n}}f(x)dx> \int_{\frac{k}{n}}^{\frac{k+1}{n}}f(\frac{k+1}{n})dx=\frac{f(\frac{k+1}{n}) }{n}.$$ Finally we take the sum for $k=0,\dots,n-1$, $$\frac{1}{n}\sum_{k=0}^{n-1}f(\frac{k}{n})>\int_0^1 f(x)\,dx >\frac{1}{n}\sum_{k=0}^{n-1}f(\frac{k+1}{n}).$$ Note that the last sum on the right is equal to $$\frac{1}{n}\sum_{k=1}^{n}f(\frac{k}{n})= \frac{1}{n}\sum_{k=0}^{n-1}f(\frac{k}{n})+\frac{f(1)-f(0)}{n}.$$ P.S. Once we have the double inequality, we may conclude that $$\lim_{n\to \infty}\sum_{k=0}^{n-1} \dfrac{n}{n^2+kn+k^2}=\lim_{n\to \infty}\sum_{k=1}^{n} \dfrac{n}{n^2+kn+k^2}=\int_0^1\frac{dx}{1+x+x^2}=\dfrac{\pi}{3\sqrt{3}}.$$
A definite integration problem on the law of large numbers
(This was already posted on the site.) Consider an i.i.d. sequence $(U_n)$ uniform on $(0,1)$, then the $n$th integral on the LHS is $E[W_n]$, where $$ W_n=\frac{g(U_1)+\cdots+g(U_n)}{h(U_1)+\cdots+h(U_n)}. $$ You already know that $W_n\to w$ almost surely, where $w$ is the RHS. Furthermore, $0\leqslant W_n\leqslant M$ almost surely hence $W_n\to w$ in $L^1$, which implies that $E[W_n]\to w$.
Solving a recurrence relation
You're almost done. All that remains is to find $a,b,c$. Use the initial conditions to get a linear system for $a,b,c$. You probably won't get nice numbers, because $s,t,r$ are not very nice (see WolframAlpha). On the other hand, the matrix in the linear system is a Vandermonde matrix, and its inverse can be computed explicitly.
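If a numerical check helps: a minimal Python sketch of this step (the roots and initial values below are hypothetical placeholders; substitute the actual $s,t,r$ and initial conditions from your problem):

```python
import numpy as np

# Hypothetical roots s, t, r and initial values x_0, x_1, x_2 --
# substitute the actual values from your problem.
roots = np.array([1.8392868, -0.4196433 + 0.6062907j, -0.4196433 - 0.6062907j])
x_init = np.array([0.0, 1.0, 1.0])

# x_n = a*s^n + b*t^n + c*r^n for n = 0, 1, 2 is a Vandermonde system.
V = np.vander(roots, 3, increasing=True).T   # row n holds [s^n, t^n, r^n]
a, b, c = np.linalg.solve(V, x_init.astype(complex))
print(a, b, c)

# Sanity check: the closed form reproduces the initial conditions.
for n in range(3):
    print(n, sum(w * z**n for w, z in zip((a, b, c), roots)))
```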
If the absolute value of an analytic function $f$ is a constant, must $f$ be a constant?
Since there has been no posted/accepted answer, I'll post my own solution. Let $f = u + iv$, so $|f| = |u + iv| = \sqrt{u^2 + v^2}$. This implies $u^2 + v^2 = k$ for some constant $k$. If $k = 0$ then we are done, so consider $ k \ne 0$. Now taking partial derivatives we find $$uu_x + vv_x = 0$$ $$uu_y + vv_y = 0$$ Using the Cauchy-Riemann equations ($u_x = v_y$, $u_y = -v_x$), $$uv_y + vv_x = 0$$ $$-uv_x + vv_y = 0$$ Viewed as a linear system in $v_x$ and $v_y$, the coefficient determinant is $v^2 + u^2 = k \ne 0$, so $v_x = v_y = 0$. By the Cauchy-Riemann equations again, $u_x = u_y = 0$ as well, so $u$ and $v$ are constant and hence $f$ is constant.
Differentiability through paths
A broad hint, but (deliberately) not a complete solution Suppose that $\alpha$ and $\beta$ are two differentiable paths with $\alpha(0) = \beta(0) = (0,0)$ and $\alpha'(0) = \beta'(0) \ne 0$. In short form, '$\alpha$ and $\beta$ agree to first order at $0$'. You can prove that if $f \circ \alpha$ is differentiable at $0$, then so is $f \circ \beta$, and the derivatives agree. (I think I'd use an epsilon-delta argument to show this. The proof should not depend on $f$, by the way, but the assumption that the derivative $\alpha'(0)$ is nonzero will be essential.) Once you know that, you need only consider a few possible curves through the origin, namely those of the form $$ \alpha(t) = t \mathbf{v} $$ where $\mathbf{v} = (a, b) = r (\cos \theta, \sin \theta) $ is some nonzero vector, i.e., $r \ne 0$. For each such vector, it's easy to write down $f \circ \alpha(t)$ and observe that it's differentiable, and then (with the help of the second paragraph) you're done.
$V=\min \{X,Y\}$ is independent of $W=|X-Y|$
$V$ and $W$ are not independent in general. Suppose for example that $X$ and $Y$ are independent and share the same distribution, with $P(X=1)=P(X=2)=P(Y=1)=P(Y=2)=\frac{1}{2}$. Then $P(V=2,W=1)=0$, while $P(V=2)=\frac14$ and $P(W=1)=\frac12$, so $P(V=2,W=1)\neq P(V=2) \times P(W=1)$.
Hint to find angle $\hat{C}$
Drawing the diagonal $BD$, we find that $BD=a\sqrt{2}$ and $\angle CDB=105^{\circ}$. We can use the Cosine Rule to find $BC$: $$BC^2=a^2+(a\sqrt{2})^2-2a^2\sqrt{2}\cos 105^{\circ}$$ Thus, $BC=a\sqrt{2+\sqrt{3}}$. We can now use the Cosine Rule again to find $\hat{C}$: $$\hat{C}=\cos^{-1}\left(\frac{a^2+a^2(2+\sqrt{3})-2a^2}{2a^2\sqrt{2+\sqrt{3}}}\right)=\cos^{-1}\left(\frac{1+\sqrt{3}}{2\sqrt{2+\sqrt{3}}}\right)= 45^{\circ}$$
How to calculate volume of 3d convex hull?
Since you have the plane intersections too, it is equivalent to computing the volume of a polyhedron (the fact that yours is convex doesn't matter). http://en.wikipedia.org/wiki/Polyhedron#Volume
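As a practical aside (assuming Python is available), scipy does exactly this: it triangulates the hull and sums the tetrahedron volumes for you:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Eight corners of the unit cube; its convex hull volume should be 1.
points = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])

hull = ConvexHull(points)
print(hull.volume)  # -> 1.0
print(hull.area)    # surface area, -> 6.0
```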
Maximum Likelihood Estimator of scaled beta
If you have just a single observation, $X$, then $\theta$ is necessarily $\ge X$, so the decreasing function $\theta\mapsto 2X/\theta^2$ has as its domain the interval $[X,\infty)$. And there is indeed a value of $\theta$ in that domain where $\theta\mapsto 2X/\theta^2$ assumes its maximum value, namely the left endpoint $\theta = X$. If there are $n$ i.i.d. observations $X_1,\ldots,X_n$, then the domain is $\left[\max\{X_1,\ldots,X_n\},\infty\right)$, and the MLE is $\hat\theta=\max\{X_1,\ldots,X_n\}$.
Laplace approximation of Poisson posterior from MacKay
According to the wikipedia page, the derivative of $\psi_0(x+1)$ is $\psi^{(1)}(x+1)$. Its series representation can be written as $$\sum_{k=1}^\infty\frac1{(x+k)^2}=\zeta(2,x+1).$$
clarification on binary and decimal representations of integers
The uniqueness of the integers $a_i$ and $q_i$ means that there is a unique base-$b$ representation of $N$ that is obtainable by the described algorithm — that is, the algorithm doesn't involve any arbitrary choices. It doesn't rule out the possibility that there could be some other representation that was not so obtainable. On the other hand, one way of proving uniqueness would be to argue that any representation of $N$ must be obtainable by applying the algorithm.
Divergent subsequence of an unbounded sequence
The construction part is correct and nice. In the construction of the required sequence, you have picked $n_k$ satisfying a certain condition. Use the same index $n_k$ to prove that the subsequence diverges.
$f(X^p)$ irreducible or $p$th power if $f$ irreducible
I did not manage to give a proof considering those field extensions, so I hope you don't mind if my suggestion for a proof uses another method: Let $f(x) = (x-\theta_1)\dots(x-\theta_n)$ in an algebraic closure $\overline K$ of $K$. Let $\lambda_i$ be the unique element of $\overline K$ with $\lambda_i ^p=\theta_i$. Then $$f(x^p) = (x^p-\theta_1) \dots (x^p-\theta_n) =(x^p-\lambda_1^p) \dots (x^p-\lambda_n^p) = (x-\lambda_1)^p \dots (x-\lambda_n)^p=\Big ((x-\lambda_1) \dots (x-\lambda_n) \Big )^p.$$ Now let $(x-\lambda_1) \dots (x-\lambda_n) = x^n+a_nx^{n-1}+...+a_0 \in \overline K[X]$, then we find $$f(x^p) = (x^n)^{p}+a_n^p(x^{n-1})^p+...+a_0^p.$$ Now suppose all coefficients of $f(x^p)$ are already in $K^p$; then all $a_i$ are in $K$ and thus $g = (x-\lambda_1) \dots (x-\lambda_n)\in K[X].$ To show that $g$ is irreducible it is sufficient to see that if $(x-\lambda_1) \dots (x-\lambda_m)$ with $m<n$ were already in $K[X]$, then so would be $(x-\lambda_1^p) \dots (x-\lambda_m^p)=(x-\theta_1) \dots (x-\theta_m)$, which contradicts the irreducibility of $f$. This completes one direction of our proof. Suppose otherwise that there is a coefficient of $g$ not belonging to $K^p$. We show that $f(x^p)$ is irreducible: Suppose that is not the case. We know that $f(x^p)$ is not of the form $g^p$. However, the factors of $f(x^p) = g^p \cdot h^p$ with some non-trivial $g^p,h^p \in K[X]$ have to be $p$-th powers too, because they must not share any roots in $\overline K$. So $g,h$ are of the form $g'(x^p),h'(x^p)$ with $g',h' \in K[X]$, showing $g'(x)\cdot h'(x)=f(x)$, a contradiction to the irreducibility of $f$.
Uniform and Pointwise convergence. Basic questions.
For your third question, JohnD's answer on the page you are quoting contains a trick that's often used in Analysis courses. That is, try to find: $$\sup_{x\in[0,1]}|f_n(x)-f(x)|$$ The supremum occurs at $x$ such that: $$\frac{df_n(x)}{dx}=0$$ Solving the above for $x$, you get: $$x=\pm\frac{1}{n}$$ $\frac{1}{n}\in[0,1]$, so substitute back into the function to get: $$\sup_{x\in[0,1]}|f_n(x)|=f_n\left(\frac{1}{n}\right)=\frac{1}{2}$$ And this is fixed and does not vanish, therefore convergence in $[0,1]$ is not uniform. Addendum for comment: I am adding a graphic, so you can see what's happening, as a response to your second question. This is the graph of $f_1(x)$, $f_2(x)$, ..., $f_5(x)$, from right to left. Note that the supremum is given by $\left(\frac{1}{n},f_n\left(\frac{1}{n}\right)\right)=\left(\frac{1}{n},\frac{1}{2}\right)$ and is moved to the left on each iteration, but always stays at $\frac12$.
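As a numerical illustration, here is a sketch assuming the sequence in question is $f_n(x)=\frac{nx}{1+n^2x^2}$ (an assumption on my part, but it matches the critical points $x=\pm\frac1n$ and the supremum $\frac12$ computed above; check it against your actual $f_n$):

```python
import numpy as np

def f(n, x):
    # Assumed sequence: f_n(x) = n*x / (1 + n^2 x^2); pointwise limit is 0 on [0, 1].
    return n * x / (1 + n**2 * x**2)

x = np.linspace(0.0, 1.0, 100001)
for n in (1, 5, 25, 125):
    print(n, f(n, x).max())  # stays near 0.5: convergence is not uniform
```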
Transformation matrices with 2 different bases
Not exactly, but almost. You have to take the inverse of $T$. Nevertheless, you can verify your solution by calculating $Ab_i$ (which are just the columns of $AM$) and expressing them in basis $B'$. (This last step is equivalent to multiplying by $T^{-1}$ from the left. Can you see why?)
Why aren't all chaotic sets also chaotic attractors?
But my issue is, $\omega(x_0)$ is clearly contained in $\omega(x_0)$, so isn't $\{ f^n(x_0)\}$ attracted to $\omega(x_0)$? Going by the definition, yes it’s attracted to itself, but that doesn’t necessarily mean it’s an attractor – as $\omega(x_0)$ may have zero measure. Yes, this is in conflict with the common use of language, but that’s academics for you. To understand non-attracting chaotic sets, first consider fixed points. There are three important types of these: Stable fixed points: The orbit from any initial condition in a sufficiently small neighbourhood of the fixed point will converge to the fixed point. A simple example would be a ball in a potential well with friction, with the fixed point (in state space) corresponding to the ball being at rest at the bottom of the potential well. These fixed points are attractors. Unstable fixed points: The orbit from any initial condition in the vicinity of the fixed point will move away from the fixed point. A simple example is a ball on a perfect dome, with the fixed point corresponding to the ball being at rest on the top of the dome. These fixed points are not attractors. Saddle fixed points: The orbit from almost any initial condition in the vicinity of the fixed point will move away from the fixed point. A simple example is a ball with friction on a, well, saddle surface. There is a line of initial conditions which will yield orbits converging to the point, but all others will move away from it. These fixed points are not attractors either, as the states attracted to them (the points on the line) have zero measure. To understand attracting and non-attracting chaotic sets, just replace the fixed point with a chaotic set: these are a set of points describing a chaotic motion. In case of a chaotic attractor, any initial condition in a sufficiently close vicinity will yield an orbit moving towards this set (the attractor). In case of a chaotic saddle (a.k.a. chaotic repeller, non-attracting chaotic set), almost all initial conditions in the vicinity will yield orbits that move away from the set (the saddle). Either way, the transient motion towards or away from the set will be chaotic.
How to prove that a function is convex.
Hint: use the Mean Value Theorem: $$f'(c)= f(1)-f(0)=\text{RHS}$$
What should I substitute?
Hint: For $$F(n)=\int_0^1 x^{2n+1}\sqrt{\dfrac{1+x^2}{1-x^2}}\ dx$$ set $x^2=\cos2t\implies2x\ dx=-2\sin2t\ dt,0\le2t\le\dfrac\pi2$ $x=1\implies t=0,\ x=0\implies t=\dfrac\pi4$ $\sqrt{\dfrac{1+x^2}{1-x^2}}=+\cot t$ $$F(2)=-\int_{\pi/4}^0\cos^22t\cot t\sin 2t\ dt$$ $$=-\int_{\pi/4}^0(1+\cos4t)\cos^2t\ dt$$ $$=\int_0^{\pi/4}\dfrac{(1+\cos4t)(1+\cos2t)}2\ dt\text{ as }\int_a^bf(x)\ dx=-\int_b^af(x)\ dx$$ Use the Werner Formulas to complete the assignment.
every open set in the extended real line ($\overline{\mathbb R}$) is a countable union of segments
Yes, it's correct. For a simple proof: observe that the subspace topology on $\Bbb R\subset\bar{\Bbb R} $ is just the usual one, so for any open set $U\subseteq\bar{\Bbb R} $ we have that $U\cap\Bbb R$ is a countable union of disjoint intervals. Then it only remains to consider the cases when $\pm\infty\in U$.
Proof that $(1-x)^n\cdot \left ( \frac{1}{1-x} \right )^n=1$
In general, $$ \left(\sum\limits_{k=0}^{+\infty}a_kx^k\right)\cdot\left(\sum\limits_{k=0}^{+\infty}b_kx^k\right)=\sum\limits_{k=0}^{+\infty}c_kx^k\qquad\text{with}\qquad c_k=\sum\limits_{i=0}^ka_ib_{k-i}. $$ In your case, $$ \sum\limits_{k=0}^{+\infty}c_kx^k=1\qquad\text{if and only if}\qquad c_0=1\ \text{and}\ c_k=0\ \text{for every}\ k\geqslant1. $$ Edit: Here, choosing $a_k=(-1)^k{n\choose k}$ and $b_k=D(n,k)$, one gets $(1-x)^n=\sum\limits_{k=0}^{+\infty}a_kx^k$ and $\dfrac1{(1-x)^n}=\sum\limits_{k=0}^{+\infty}b_kx^k$ hence $(1-x)^n\cdot\dfrac1{(1-x)^n}=1=\sum\limits_{k=0}^{+\infty}c_kx^k$ with $c_k=\sum\limits_{i=0}^ka_ib_{k-i}$ for every $k\geqslant0$. Since the series expansion of the function $x\mapsto 1$ is $1=1+0\cdot x+0\cdot x^2+0\cdot x^3+\cdots$, one gets $c_0=1$ and $c_k=0$ for every $k\geqslant1$.
Create 'smooth breakpoint function' by using integral?
I'm not quite sure that I understand the question, but if you're simply looking for $f_4(x)$ with $f_4'(x)$ given by your expression, you can find it by using $$ \int \frac{1}{1+e^{-kx}} dx = \int \frac{e^{kx}}{e^{kx}+1}dx = \frac{1}{k} \int \frac{(e^{kx}+1)'}{e^{kx}+1} dx = \frac{1}{k} \ln(e^{kx}+1) + C. $$
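For what it's worth, the resulting $f_4(x)=\frac1k\ln(e^{kx}+1)$ is the well-known "softplus" smooth ramp; here is a small numerical sketch (the function name is mine):

```python
import numpy as np

def smooth_ramp(x, k=10.0):
    # f_4(x) = (1/k) * ln(e^{kx} + 1), a smooth version of max(0, x).
    # np.logaddexp(0, k*x) = ln(e^0 + e^{kx}), computed without overflow.
    return np.logaddexp(0.0, k * x) / k

for x in (-2.0, -0.1, 0.0, 0.1, 2.0):
    print(x, smooth_ramp(x))  # approaches max(0, x) as k grows
```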
Is the projection of an ellipse still an ellipse?
A parallel projection from a plane to another is an affine transformation (it turns a parallelogram into another parallelogram). If you plug the affinely transformed coordinates into the equation of the ellipse, you still get a quadratic equation. And as the point set is bounded, it must be another ellipse. Addendum: Even a perspective projection can preserve an ellipse. Indeed, analytically it is expressed by a homographic transformation, which is of the form $$x'=\frac{ax+by+c}{gx+hy+i},y'=\frac{dx+ey+f}{gx+hy+i}$$ and when you plug this into a quadratic equation, you still get a quadratic equation. Anyway, due to the possibly cancelling denominators, you can get a hyperbola as well.
Cauchy product application with Euler's number
You used the right approach: the two series are absolutely convergent, thus the product of the series is the series of the Cauchy products, whose coefficient is $$u_n = \sum_{k=0}^n \frac{x^k}{k!}\frac{(-x)^{n-k}}{(n-k)!}$$ But, for $n\geq 1$, it can be rewritten as $$u_n = \frac{x^n}{n!}\sum_{k=0}^n {n\choose k} (-1)^{n-k}$$ which is equal to $0$ by the binomial theorem. Edit: I'll answer your comment. For $n\geq 1$, $u_n = 0$ because $\sum_{k=0}^n {n\choose k} (-1)^{n-k} = (1-1)^n = 0$. Then, $$\left( \sum_{k=0}^\infty \frac{x^k}{k!}\right)\left( \sum_{k=0}^\infty \frac{(-x)^k}{k!}\right) = \sum_{k=0}^\infty u_k = u_0 = 1$$
Number of ways 1a,1b,5 can add up to n (with this being a permutation)
First you need to build the recurrence relation and then you need to solve that. Recurrence relation: Let $a_n$ be the number of ways to dispense $\$n$ using the given currencies. To pay $\$n$ exactly one of the following should happen: Suppose $\$n-1$ have been paid then to pay the total amount $\$n$, the last denomination used can be $\$1$ coin. Suppose $\$n-1$ have been paid then to pay the total amount $\$n$, the last denomination used can be $\$1$ bill. Suppose $\$n-5$ have been paid then to pay the total amount $\$n$, the last denomination used can be $\$5$ bill. The number of ways in which the first two cases can happen is $a_{n-1}$ and the number of ways in which the last case can happen is $a_{n-5}$. Thus $$a_n=2a_{n-1}+a_{n-5}.$$ Solving the recurrence relation: Let us assume that the solution is of the form $a_n=r^n$. Then substituting this into the recurrence relation yields: $$r^{n}=2r^{n-1}+r^{n-5}.$$ This results in: $$r^5=2r^{4}+1.$$ Now solutions to this will help you construct the closed form of $a_n$.
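Here is a quick dynamic-programming sketch to sanity-check the recurrence (assuming the usual convention $a_0=1$, one way to pay $\$0$):

```python
def count_ways(n):
    # a_k = 2*a_{k-1} + a_{k-5}, with a_0 = 1 (assumed convention) and
    # a_k = 0 for k < 0 (the $5 bill cannot be used yet).
    a = [1]
    for k in range(1, n + 1):
        a.append(2 * a[k - 1] + (a[k - 5] if k >= 5 else 0))
    return a[n]

print([count_ways(n) for n in range(9)])  # 1, 2, 4, 8, 16, 33, 68, 140, 288
```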
Need help finding inverse under $a\circ b = a^b b^a$
The equation $2^{-x}=x^2$ admits three real solutions: $x=-2,\;x=-4\;$ and a more sophisticated solution involving the LambertW function as returned by WolframAlpha (the only positive solution will be this last one). These solutions may be obtained by writing $\,2^{-x}=x^2\,$ as $\,1=x\,e^{x\frac{\ln(2)}2}\,$ or $\,-1=x\,e^{x\frac{\ln(2)}2}\,$ that we may express as: $\;\displaystyle \frac{\ln(2)}2=\left(x\frac{\ln(2)}2\right)\,e^{\left(x\frac{\ln(2)}2\right)}\;$ or $\;\displaystyle \frac{-\ln(2)}2=\left(x\frac{\ln(2)}2\right)\,e^{\left(x\frac{\ln(2)}2\right)}$ Since the LambertW function is defined implicitly by $\displaystyle z=W(z)e^{W(z)}$ we see that the answers are given by $\;W\left(\frac{\ln(2)}2\right)=x\frac{\ln(2)}2\;$ and $\;W\left(\frac{-\ln(2)}2\right)=x\frac{\ln(2)}2\;$ that is by $$\tag{1}\boxed{\displaystyle x=\frac2{\ln 2} W\left(\frac{\ln 2}2\right)}$$ and $$\tag{2}\;\boxed{\displaystyle x=\frac2{\ln 2} W\left(\frac{-\ln 2}2\right)}$$ The subtle point is that this second solution $(2)$ will be split into two sub-solutions, since the $\rm LambertW$ function admits two branches for $x \in \left(-\dfrac1e,0\right)\;$ (i.e. for $ W\left(\dfrac{-\ln 2}2\right)$): the upper one gives $W_0\left(\dfrac{-\ln 2}2\right)=-\ln 2\;$ (divided by $\ln 2$ and multiplied by $2$, this returns the $x=-2\,$ solution), while the lower branch gives the value noted $W_{-1}\left(\dfrac{-\ln 2}2\right)$ (after multiplication by $\dfrac2{\ln 2}$ this returns the $x=-4\;$ solution). The solution $(1)$ is the only real and positive solution: $$\boxed{\displaystyle x=\frac2{\ln 2} W\left(\frac{\ln 2}2\right)\approx 0.766664695962123}$$ To clarify further: you probably had to find an approximation of the answer (possibly using the plot of $x\to 2^{-x}$ and $x \to x^2$ on the same graphics, as shown on WolframAlpha), or find it numerically (say by iterations or with a Taylor series), or play with the LambertW function...
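If you want to double-check all of this numerically, here is a sketch using scipy's lambertw (its second argument selects the branch):

```python
import numpy as np
from scipy.special import lambertw

ln2 = np.log(2.0)

# Principal-branch solution (1): the unique positive root.
x_pos = (2 / ln2) * lambertw(ln2 / 2, 0).real
print(x_pos)                      # -> 0.766664695962123
print(2**(-x_pos), x_pos**2)      # both sides of 2^{-x} = x^2 agree

# Solution (2) splits over the two real branches at -ln(2)/2.
print((2 / ln2) * lambertw(-ln2 / 2, 0).real)   # -> -2.0
print((2 / ln2) * lambertw(-ln2 / 2, -1).real)  # -> -4.0
```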
Prove that for every point in one-sheeted hyperboloid, there exists at least one line which is full contained in it
Using rotational invariance, you need to show it only for the points of the form $(r, 0, \sqrt{r^2-1})$ for $r \geq 1$. Then $x=r+ \sqrt{r^2 -1}\,t$, $y= \pm t$, $z=\sqrt{r^2-1} +rt$ give two lines that go through the point and are contained in the surface.
Short question to a pole-zero plot
It's just a double pole at the origin (you can think of it as two poles that coincide). Regarding the meaning: in general, it's nothing remarkable; but to get a definite answer you should specify what you are speaking of. Is this the Z transform of a discrete signal? If so, the normal convention is to write it in terms of $z^{-1}$, instead of $z$ - and the mere formula (or pole-zero spec, if it's a rational function) is not enough, you need to specify the region of convergence.
Probability that a random bridge board does not contain a sequence?
If I am reading your question right, the relevant collection of events is $E=\{S_2,S_3,S_4,\dots,S_{12}\}$, where each $S_n$ is a subset of three consecutive numbers $s=[n-1,n,n+1]$. Your sample space of all possible options has size $2^{11}$, and one of these outcomes meets the required criterion, which is $E$ in your case. So the probability is $\Pr(E) = 1/2^{11}$. Hope it helps.
Substitution in integral, how shall I proceed
Hint: $$\begin{align}\int_2^\infty\frac1{\big(\log n\big)^{\log n}}dn&~=~\int_2^{\exp\big(e^2\big)}~\frac1{\big(\log n\big)^{\log n}}dn~+~\int_{\exp\big(e^2\big)}^\infty~\frac1{\big(\log n\big)^{\log n}}dn~\le\\&~\le~\int_2^{\exp\big(e^2\big)}~\frac1{2^{\log n}}dn~+~\int_{\exp\big(e^2\big)}^\infty~\frac1{\big(e^2\big)^{\log n}}dn~=~\\&~=~\int_2^{\exp\big(e^2\big)}~\frac1{n^{\log2}}dn~+~\int_{\exp\big(e^2\big)}^\infty~\frac1{n^2}dn,\end{align}$$ since $a^{\log b}=b^{\log a}$. Can you take it from here?
Is it true that $Gal(K/F)\cong S_{n_1}\times \cdots S_{n_k}$?
This is not always true, as the answer to the question linked below shows: for $X^n-2$, the order of the Galois group is $n\phi(n)$ or $n\phi(n)/2$. https://mathoverflow.net/questions/143739/galois-group-of-xn-2
Identical indistinguishable items Probability doubt
Each event in a sample space doesn't have to have equal probability. This is true for many of the famous distributions like Binomial, Poisson, etc. The probability of getting the red is $1/2$. Your approach looks good and is leveraging the fact that RW and WR are mutually exclusive and then applying the multiplication rule. $$ P(\text{draw the red ball}) = P(RW \cup WR) = P(RW) + P(WR) = P(R)P(W|R) + P(W)P(R|W) = 1/4(1) + 3/4(1/3) = 1/2$$ You could also see this from the pmf of a hypergeometric RV. Let $X =$ # of red balls you get from 2 draws. $$ P(X=1) = \frac{{1 \choose 1}{3 \choose 1}}{{4 \choose 2}} = 1/2$$ If you want each event to have equal probability in your sample space it could help to label the balls. So there are $12$ events in the sample space $R_1 W_1, R_1 W_2, R_1 W_3, ...$ and then you can treat these events as equiprobable and can count the events. Then the probability of one red will be $6/12 = 1/2$
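If a sanity check helps, here is a tiny simulation sketch (the ball labels are arbitrary):

```python
import random

trials, hits = 100_000, 0
balls = ["R", "W1", "W2", "W3"]  # one red, three labeled whites
for _ in range(trials):
    draw = random.sample(balls, 2)  # draw two without replacement
    hits += ("R" in draw)
print(hits / trials)  # close to 0.5
```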
About a metric over $C^{\infty}(\Omega)$
Hints: (most of these I expect you've seen) Let $(X,d)$ be a (pseudo)metric space. Show that $\delta(x,y) = \frac{d(x,y)}{1+d(x,y)}$ defines a bounded (pseudo)metric on $X$. Let $(d_n)_{n\geq 1}$ be a family of pseudometrics on $X$ such that for all $x,y\in X$ the sequence $(d_n(x,y))_{n\geq 1}$ is bounded. Show that $\eta(x,y) = \sum_{n\geq 1}2^{-n}d_n(x,y)$ defines a pseudometric on $X$, and that if the family is separating then $\eta$ is a metric. It might be worthwhile to represent $$q_m(\Phi) = \max\left\lbrace \sup\{|D^\alpha\Phi(x)|:x\in K_m\} : |\alpha|\leq m\right\rbrace$$
Listing all sets in sigma-algebra of a random variable when there are similar mappings?
Any real-valued function is measurable on a space if the sigma-algebra contains all subsets. So $Y$ is measurable. $\sigma (Y)$ consists of sets of the type $Y^{-1} (A)$. To write down these sets you have to see which of the values $0,2,4$ is in $A$. The answer is $\sigma (Y)=\{ \emptyset,\Omega,\{b\},\{d\},\{a,c\},\{b,d\},\{a,b,c\}, \{a,c,d\}\}$.
Prove $\lim_{n\rightarrow\infty} x_n^k = \left(\lim_{n\rightarrow\infty} x_n\right)^k$
Assume $\displaystyle\lim_{n\to\infty}x_n=L$ and $\displaystyle\lim_{n\to\infty}y_n=M$. Then, $(x_n)$ is bounded and thus there is a constant $c>0$ such that $$|x_n|\leq c,\quad\forall\ n\in\mathbb N.$$ Furthermore, given $\varepsilon>0$ there is $n_0\in\mathbb N$ such that $$n>n_0\quad\Rightarrow\quad |x_n-L|<\frac{\varepsilon}{2(|M|+1)},\quad|y_n-M|<\frac{\varepsilon}{2c}$$ (writing $|M|+1$ rather than $|M|$ avoids dividing by zero when $M=0$). As a consequence, $$n>n_0\quad\Rightarrow\quad |x_ny_n-LM|\leq |x_n||y_n-M|+|M||x_n-L|<c\frac{\varepsilon}{2c}+|M|\frac{\varepsilon}{2(|M|+1)}<\varepsilon.$$ This proves that $$\lim_{n\to\infty} (x_n\cdot y_n)=LM=\left(\lim_{n\to\infty} x_n\right)\left(\lim_{n\to\infty} y_n\right)$$ Therefore, $$\lim_{n\to\infty}x_n^k=\lim_{n\to\infty}(x_n\cdot x_n\cdots x_n)=\left(\lim_{n\to\infty} x_n\right)\left(\lim_{n\to\infty} x_n\right)\cdots \left(\lim_{n\to\infty} x_n\right)=\left(\lim_{n\to\infty} x_n\right)^k$$
Order of convergence of Newton's method for $g(x) = xe ^{x/2} - 1 + x$
Yes, that is correct, and it should result in

k   x[k]                 f(x[k])        dx[k]           dx[k]/dx[k-1]^2
0   2.500000000000000    10.2259        -1.155037e+00   -1.155037e+00
1   1.344962880055704    2.97987        -6.967936e-01   -5.222906e-01
2   0.648169320860770    0.544435       -1.923188e-01   -3.961079e-01
3   0.455850509909319    0.0283948      -1.116912e-02   -3.019781e-01
4   0.444681389050070    8.70351e-05    -3.444617e-05   -2.761232e-01
5   0.444646942882529    8.23361e-10    -3.258703e-10   -2.746395e-01
6   0.444646942556658    -1.11022e-16    4.394048e-17    4.137855e+02

for the iteration formula in the title ($g(x)=\dots+x$) and

k   x[k]                 f(x[k])        dx[k]           dx[k]/dx[k-1]^2
0   2.500000000000000    5.22586        -7.625347e-01   -7.625347e-01
1   1.737465307480700    1.40446        -4.065176e-01   -6.991336e-01
2   1.330947690963831    0.258294       -1.153082e-01   -6.977523e-01
3   1.215639529705251    0.0167891      -8.598166e-03   -6.466745e-01
4   1.207041363790715    8.83367e-05    -4.572034e-05   -6.184403e-01
5   1.206995643450354    2.48783e-09    -1.287697e-09   -6.160199e-01
6   1.206995642162657    4.44089e-16    -2.298597e-16   -1.386231e+02

for the iteration as formulated in the text ($g(x)=\dots-x$). Both nicely show quadratic convergence until the number of available digits runs out.
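For reference, a minimal sketch of the iteration that produced the first table above, for $f(x)=xe^{x/2}-1+x$ (the second table just flips the sign of the trailing $x$):

```python
import math

def f(x):
    return x * math.exp(x / 2) - 1 + x

def fprime(x):
    # d/dx [x e^{x/2}] = e^{x/2} (1 + x/2)
    return math.exp(x / 2) * (1 + x / 2) + 1

x = 2.5
for k in range(7):
    print(k, f"{x:.15f}", f"{f(x):.6g}")
    x -= f(x) / fprime(x)  # Newton step
```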
How does one find the reduced Singular Value Decomposition of a row or column vector?
$$a=(a/\| a \|) \begin{bmatrix} \| a \| \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix}.$$ This pretty much follows by knowing the shape of a reduced SVD: if $A \in \mathbb{C}^{m \times n}$ then $U \in \mathbb{C}^{m \times r}, \Sigma \in \mathbb{R}^{r \times r}, V^* \in \mathbb{C}^{r \times n}$, where $r=\text{rank}(A)$. Now take $m=k,r=1,n=1$ (where $a$ is $k \times 1$).
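You can confirm the shapes numerically; a short sketch (numpy returns the reduced SVD with full_matrices=False):

```python
import numpy as np

a = np.array([[3.0], [0.0], [4.0]])  # a column vector, k = 3, rank 1

U, s, Vt = np.linalg.svd(a, full_matrices=False)
print(U.shape, s, Vt)  # (3, 1), [5.], [[1.]] up to sign
print(np.allclose(U * s @ Vt, a))
```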
Prove the theorem of Nicomachus by induction
Notice that by adding your equations you get \begin{align*} 1^3&=1\\ 1^3+2^3&=1+3+5\\ 1^3+2^3+3^3&=1+3+5+7+9+11\\ 1^3+2^3+3^3+4^3&=1+3+5+7+9+11+13+15+17+19 \end{align*} And notice also that these equations imply the equations you want. (You get them simply by subtracting the two consecutive equations.) So you actually want to prove that the sum of the first $n$ cubes is the same as the sum of the first $1+2+\dots+n=\frac{n(n+1)}2$ odd numbers. $$1^3+2^3+\dots+n^3 = \sum_{k=1}^{n(n+1)/2} (2k-1)$$ If you also know that the sum of the first $n$ odd numbers gives a square, this is the same as $$1^3+2^3+\dots+n^3 = \left(\frac{n(n+1)}2\right)^2$$ See also: Proving $1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ using induction Direct Proof that $1 + 3 + 5 + \cdots+ (2n - 1) = n\cdot n$
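A quick numerical check of the reformulated identity (pure illustration):

```python
for n in range(1, 8):
    cubes = sum(k**3 for k in range(1, n + 1))
    odds = sum(2 * k - 1 for k in range(1, n * (n + 1) // 2 + 1))
    print(n, cubes, odds, (n * (n + 1) // 2) ** 2)  # all three agree
```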
Asymptotic Normality of MLE when data is modelled with covariates
I guess, one way to convince myself of the asymptotic normality of the MLE when $X_1,\cdots, X_N$ are independent but not identical is to use roughly the same arguments as in the i.i.d. scenario, except for the following two changes: [Change 1] The classical i.i.d. CLT is replaced with the Lyapunov CLT (http://en.wikipedia.org/wiki/Central_limit_theorem#cite_ref-6), which does not require identical observations (but independence is required). It states that under some conditions, if $Z_i \overset{ind}{\sim} (\mu_i,\sigma^2_i)$ then: $$ \frac{1}{(\sum_{i=1}^{n}Var(Z_i))^{1/2}}\sum_{i=1}^{n} (Z_i-\mu_i) \overset{d}{\rightarrow} N(0,1) $$ [Change 2] Let $Z_i \overset{ind}{\sim} (\mu_i,\sigma^2_i)$ and let $c_i,i=1,2,\cdots$ be a sequence of constants. Instead of using the regular SLLN, use arguments similar to the WLLN with the condition that $\frac{\sum_{i=1}^n \sigma_i^2}{ (\sum_{i=1}^n c_i)^2} \rightarrow 0 $ as $ n\rightarrow \infty$. By Chebyshev's inequality, for all $\epsilon$: $$ Pr(|\frac{\sum_{i=1}^n Z_i}{\sum_{i=1}^n c_i}-\frac{\sum_{i=1}^n\mu_i}{\sum_{i=1}^n c_i}|\ge \epsilon) \le \frac{Var(\frac{\sum_{i=1}^n Z_i}{\sum_{i=1}^n c_i}-\frac{\sum_{i=1}^n\mu_i}{\sum_{i=1}^n c_i})}{\epsilon^2} =\frac{Var(\sum_{i=1}^n Z_i)}{(\sum_{i=1}^n c_i)^2\epsilon^2}=\frac{\sum_{i=1}^n \sigma_i^2}{ (\sum_{i=1}^n c_i)^2\epsilon^2} $$ So if the condition is satisfied, then we have a WLLN for the non-identical scenario for the sequence of random variables $Z_1,Z_2,\cdots$. The LLN used in Steps (1) and (4) of the proof for the i.i.d. case above is replaced with this, assuming the condition is satisfied. In summary, we can (roughly) show the asymptotic normality of the MLE when the observations are not identically distributed by taking the following steps: (1) $\hat{\theta}\rightarrow \theta$ in probability under the condition discussed in [Change 2]. (2) The following approximation holds for large $n$ because of (1): $$ [\sum_{i=1}^n Var(S_i(\theta))]^{1/2} (\hat{\theta}-\theta) \approx- \frac{ \frac{1}{[\sum_{i=1}^n Var(S_i(\theta))]^{1/2}} S_n(\theta) }{ \frac{1}{[\sum_{i=1}^n Var(S_i(\theta))]} S_n'(\theta) } =- \frac{ \frac{1}{[\sum_{i=1}^n Var(S_i(\theta))]^{1/2}} \sum_{i=1}^n S_i(\theta) }{ \frac{1}{[\sum_{i=1}^n Var(S_i(\theta))]} \sum_{i=1}^n S'_i(\theta) } $$ (3) The numerator converges to $N(0,1)$ by the Lyapunov CLT [Change 1]. (4) By the LLN of [Change 2], for large $n$ the denominator is approximately $\frac{1}{\sum_{i=1}^n Var(S_i(\theta))} \sum_{i=1}^n E(S'_i(\theta))$ (5) $ Var(S_i(\theta)) = E(S'_i(\theta))= I_i(\theta) $ for all $i$. (6) By Slutsky's theorem and (2),(3),(4),(5), for large $n$, $$ [\sum_{i=1}^n Var(S_i(\theta))]^{1/2} (\hat{\theta}-\theta)\approx N(0,[\frac{ \sum_{i=1}^n E(S'_i(\theta)) }{\sum_{i=1}^n Var(S_i(\theta))} ]^{-2}) $$ Therefore: $$ \hat{\theta}\approx N(\theta,\frac{\sum_{i=1}^n Var(S_i(\theta))}{ [\sum_{i=1}^n E(S'_i(\theta))]^2 } ) = N(\theta,\frac{1}{ \sum_{i=1}^n I_i(\theta) } ) $$
What is a cyclic ideal?
This link suggests it means principal ideal: http://books.google.de/books?id=RjkWZs-6zg8C&pg=PA19&lpg=PA19&dq=%22cyclic+ideal%22&source=bl&ots=U1eT61QUAk&sig=INCuWZWxulLSZEyxTpmzVcnm7CI&hl=de&ei=83m7TuCRI4Ol8QOz8uDZBw&sa=X&oi=book_result&ct=result&resnum=6&ved=0CFYQ6AEwBQ#v=onepage&q=%22cyclic%20ideal%22&f=false I.e., it is an ideal generated by one element. You usually call a module generated by one element cyclic. Since ideals are also submodules of a ring, the same terminology may be applied to them. Edit: your ideal, however, seems to be principal, since it is the square of a principal ideal. In fact, if I am not mistaken, $(x+y)^2$ is the zero ideal in $\mathbb{C}[x,y]/(x^2,xy,y^2)$...
Regarding $X \sim N_{0,1}$ and $Y = \left( \begin{array}{ccc} X \\ X \end{array} \right)$.
Part 1 is incorrect. For your choice of $H$, you have $P(Y \in H) = P(X \in \mathbb R, X = 0) = 0$. Instead, you should take $H = \{(x, x) \mid x \in \mathbb R\}$. For part 2, just remember the definition of the characteristic function $\phi(v) = E[\exp(iv^t Y)]$ and use the fact that $X \sim \mathcal{N}(0, 1)$. Part 3 then follows from part 2.
Advanced Integral: $\int_0^1\frac{\text{Li}_2(x^2)\arcsin^2(x)}{x}dx$
I wasn't able to find a closed-form for this, but I was able to simplify it to $$\frac{\pi^2}{48} \left( 2\pi^2 \ln(2) - 7\zeta(3) \right) - \sum_{n=1}^{\infty} \frac{2^{2n-2} H_n}{n^4 \binom{2n}{n}}$$ Evaluate $$I = \int_0^1\frac{\text{Li}_2(x^2)\arcsin^2(x)}{x}dx$$ Expanding $\arcsin^2(x)$ using the power series yields: $$\int_0^1 \text{Li}_2(x^2) \sum_{n=1}^{\infty} \frac{2^{2n-1}}{n^2 \binom{2n}{n}} x^{2n-1} dx$$ Swapping integration and sum: $$\sum_{n=1}^{\infty} \frac{2^{2n-1}}{n^2 \binom{2n}{n}}\int_0^1 \text{Li}_2(x^2) x^{2n-1} dx$$ Making the substitution $u = x^2$: $$\sum_{n=1}^{\infty} \frac{2^{2n-2}}{n^2 \binom{2n}{n}}\int_0^1 \text{Li}_2(u) u^{n-1}du$$ The inner integral would be $$\int_0^1 \sum_{k=1}^{\infty} \frac{u^k}{k^2} u^{n-1} du = \sum_{k=1}^{\infty} \frac{1}{k^2} \frac{1}{k+n} = \frac{\pi^2}{6n} - \frac{H_n}{n^2}$$ Which makes the overall integral into $$\sum_{n=1}^{\infty} \frac{2^{2n-2}}{n^2 \binom{2n}{n}}\left(\frac{\pi^2}{6n} - \frac{H_n}{n^2}\right)$$ Or splitting the sums up: $$\frac{\pi^2}{24}\sum_{n=1}^{\infty} \frac{2^{2n}}{n^3 \binom{2n}{n}} - \sum_{n=1}^{\infty} \frac{2^{2n-2} H_n}{n^4 \binom{2n}{n}}$$ Let $f(x) = \sum_{n=1}^{\infty} \frac{x^{2n}}{n^3 \binom{2n}{n}}$. Then $f'(x) = 2\sum_{n=1}^{\infty} \frac{x^{2n-1}}{n^2 \binom{2n}{n}} = \frac{4\arcsin^2\left( \frac{x}{2} \right)}{x}$ Then the integral to solve for the first sum is $$\int_{0}^{2}\frac{4\arcsin^{2}\left(\frac{x}{2}\right)}{x}dx = 4\int_{0}^{1}\frac{\arcsin^{2}\left(x\right)}{x}dx$$ Making the substitution $x \to \arcsin(x)$ yields $$4\int_0^{\pi/2} x^2 \cot(x) dx$$ This can be done by complex methods (substituting $u = e^{2ix}-1$ and then doing partial fractions) to get the indefinite integral in closed form. Then the integral would be $$\pi^2 \ln(2) - \frac{7}{2}\zeta(3)$$ This then makes the original integral $$\frac{\pi^2}{48} \left( 2\pi^2 \ln(2) - 7\zeta(3) \right) - \sum_{n=1}^{\infty} \frac{2^{2n-2} H_n}{n^4 \binom{2n}{n}}$$ I will start from your second attempt: $$I=\sum_{n=1}^\infty\frac{1}{n^2}\underbrace{\int_0^{\pi/2}x^2\cot x \sin^{2n}(x) dx}_{I_n}$$ Using integration by parts, $I_n$ is equal to $$I_n = x^2 \frac{\sin^{2n}(x)}{2n} \Big|^{\pi/2}_0 - \int_0^{\pi/2} x \frac{\sin^{2n}(x)}{n} dx$$ Which simplifies to $$\frac{\pi^2}{8n} - \frac{1}{n} \int_0^{\pi/2} x\sin^{2n}(x) dx$$ Splitting the $\sin^{2n}(x)$ as $\sin^{2n-1}(x)\sin(x)$ so that I can integrate by parts: $$J_n = \int_0^{\pi/2} x\sin^{2n}(x) dx = \int_0^{\pi/2} \sin^{2n-1}(x) x \sin(x)dx$$ Integrating by parts: $$1-\int_{0}^{\frac{\pi}{2}}\left(-x\cos\left(x\right)+\sin\left(x\right)\right)\left(2n-1\right)\cos\left(x\right)\sin\left(x\right)^{\left(2n-2\right)}dx$$ Separating and evaluating gives the relation $$J_n = \frac{1}{2n} - (2n-1) J_n + (2n-1)J_{n-1}$$ which has the solution $$J_n = \frac{1}{4n^2} + \frac{2n-1}{2n} J_{n-1}$$ with $J_0 = \frac{\pi^2}{8}$ The explicit solution to this is $$\frac{\binom{2n}{n}}{4^n}\left(\frac{\pi^2}{8} + \sum_{m=1}^{n} \frac{4^{m-1}}{\binom{2m}{m} m^2}\right)$$ Which then makes $I_n$ $$\frac{\pi^2}{8n} - \frac{1}{n} \frac{\binom{2n}{n}}{4^n}\left(\frac{\pi^2}{8} + \sum_{m=1}^{n} \frac{4^{m-1}}{\binom{2m}{m} m^2} \right)$$ The original integral/sum is then $$\sum_{n=1}^{\infty} \frac{1}{n^2} \left( \frac{\pi^2}{8n} - \frac{1}{n} \frac{\binom{2n}{n}}{4^n}\left(\frac{\pi^2}{8} + \sum_{m=1}^{n} \frac{4^{m-1}}{\binom{2m}{m} m^2} \right) \right)$$ This can be simplified to $$\frac{\pi^2}{8} \zeta(3) - \frac{\pi^2}{8}\underbrace{\sum_{n=1}^{\infty} \frac{\binom{2n}{n}}{4^n n^3}}_{S_1} - \underbrace{\sum_{n=1}^{\infty}\frac{\binom{2n}{n}}{4^n n^3} \sum_{m=1}^{n} \frac{4^{m-1}}{\binom{2m}{m} m^2}}_{S_2} \tag 1$$ Focusing on $S_2$, $\sum_{n=1}^{\infty}\frac{\binom{2n}{n}}{4^n n^3} \sum_{m=1}^{n} \frac{4^{m-1}}{\binom{2m}{m} m^2}$: This can be rewritten as $$\sum_{m=1}^{\infty} \frac{4^{m-1}}{\binom{2m}{m} m^2}\left(\sum_{n=1}^{\infty}\frac{\binom{2n}{n}}{4^n n^3} - \sum_{n=1}^{m-1} \frac{\binom{2n}{n}}{4^n n^3} \right) = S_1\underbrace{\sum_{m=1}^{\infty} \frac{4^{m-1}}{\binom{2m}{m} m^2}}_{S_3} - \sum_{m=1}^{\infty} \frac{4^{m-1}}{\binom{2m}{m} m^2}\sum_{n=1}^{m-1} \frac{\binom{2n}{n}}{4^n n^3} $$ $S_3$ can be simplified using the series expansion of $\arcsin^2(x)$ to get $S_3 = \frac{\pi^2}{8}$ This then simplifies the overall integral/sums to $$\frac{\pi^2}{8} \zeta(3) - \frac{\pi^2}{4}\sum_{n=1}^{\infty} \frac{\binom{2n}{n}}{4^n n^3} + \sum_{m=1}^{\infty} \frac{4^{m-1}}{\binom{2m}{m} m^2}\sum_{n=1}^{m-1} \frac{\binom{2n}{n}}{4^n n^3} \tag 2$$ Using Mathematica, I found $S_1 = \frac{-\pi^2 \ln(4) + \ln^3(4) + 12\zeta(3)}{6}$, but don't have a proof for this. I feel like there might be a proof of this somewhere on MSE, but unfortunately Approach0 is down right now (so I can't search as effectively).
Why is some power of a permutation matrix always the identity?
There are only finitely many ways to permute finitely many things. So in the sequence $$P^1,\ P^2,\ P^3,\ldots$$ of powers of a permutation $P$, there must eventually be two powers that give the same permutation, meaning that $P^i=P^j$ for some $i>j\geq0$. Permutations are reversible so $P$ is invertible, hence $$P^{i-j}=P^iP^{-j}=P^j(P^j)^{-1}=I.$$ And yes, a $2\times2$-block means a $2\times2$-matrix here. The hint suggests choosing a $5\times5$-matrix that has a $2\times2$-matrix and a $3\times3$-matrix on its diagonal, and zeroes elsewhere.
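Here is a small sketch of the pigeonhole argument in action (the permutation chosen is arbitrary):

```python
import numpy as np

perm = [1, 2, 0, 4, 3]  # a permutation of {0,...,4}: a 3-cycle and a 2-cycle
P = np.eye(5)[perm]     # the corresponding permutation matrix

Q, k = P.copy(), 1
while not np.array_equal(Q, np.eye(5)):
    Q, k = Q @ P, k + 1
print(k)  # 6 = lcm(3, 2), the order of the permutation
```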
Convergence of $ \sum_{n=1}^\infty \frac{\log n}{n^q+1} $
Try the limit comparison test with $\displaystyle \sum_{n=1}^\infty \frac{1}{n^p}$ for various values of $p$.
Why is it true that if $0\leq{}x\leq{1\over2}$ then $(1-x)\geq(2e)^{-x}$?
There appears to be an implicit assumption that $x\geq 0$. Take logarithms on both sides and note that $\log(1-x) +x \log(2e)$ is concave in $x$. Hence we only need to check the inequality for $x=1/2$ and $x=0$. Now, $$\frac{1}{2} \geq \frac{1}{2} \sqrt{\frac{2}{e}} = \frac{1}{\sqrt{2e}} = (2e)^{-1/2},$$ and $$1 \geq (2e)^0.$$
Poincaré-Bendixson Theorem and Limit Cycle:find the trapping region
Dynamical system

Find the periodic solution for the dynamical system: $$\begin{aligned} \dot{x} &= y - x^{3} + x \\ \dot{y} &= -x - y^{3} + y \end{aligned} \tag{1}$$

Fixed points

The single fixed point is at the origin: $$\left[ \begin{array}{c} \dot{x} \\ \dot{y} \end{array} \right]_{(0,0)} = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]$$

Compute $\dot{r}$

The polar coordinate transform $$\begin{aligned} x &= r \cos \theta \\ y &= r \sin \theta \end{aligned} \tag{2}$$ implies $$r^{2} = x^{2} + y^{2} \tag{3}$$ Differentiate (3) with respect to time: $$2r\dot{r} = 2x \dot{x} + 2y \dot{y}$$ Therefore $$\dot{r} = \frac{x \dot{x} + y \dot{y}}{r} \tag{4}$$ Transform $\dot{x}$ and $\dot{y}$ to $r$ and $\theta$ using (2): $$\begin{aligned} \dot{x} &= y - x^{3} + x = r\left(\sin \theta - r^{2}\cos^{3}\theta + \cos \theta \right) \\ \dot{y} &= -x - y^{3} + y = r\left(-\cos \theta - r^{2}\sin^{3}\theta + \sin \theta \right) \end{aligned}$$ Inserting these identities in (4) produces the final differential equation: $$\dot{r} = -\frac{1}{4} r \left(r^{2} \left(\cos(4\theta)+3 \right)- 4 \right) \tag{5}$$

Trapping region

Identify regions where the radius is expanding $\left(\dot{r}>0 \right)$ or shrinking $\left(\dot{r}<0 \right)$. Examine the bounding values $$-1 \le \cos 4\theta \le 1$$ Case 1: $\cos 4\theta = 1$. Equation (5) becomes $$\dot{r} = r \left( 1 -r^{2} \right)$$ The zones of $\dot{r}$ $\color{blue}{increasing}$ and $\color{red}{decreasing}$ are $$\begin{cases} \color{blue}{\dot{r} > 0} & \color{blue}{r < 1} \\ \color{red}{\dot{r} < 0} & \color{red}{r > 1} \end{cases}$$ Case 2: $\cos 4\theta = -1$. Equation (5) is now $$\dot{r} = r \left( 1 - \frac{r^{2}}{2} \right)$$ The two zones are $$\begin{cases} \color{blue}{\dot{r} > 0} & \color{blue}{r < \sqrt{2}} \\ \color{red}{\dot{r} < 0} & \color{red}{r > \sqrt{2}} \end{cases}$$ The zones are shown in the figure below. Case 1 on the left, case 2 on the right. The third case combines the first two. Red regions are where the flow is inward; blue regions mark outward flow. You can think of the process as adding the first two figures and using the rules red + red = red, blue + blue = blue, and red + blue = gray. The trapping region is $1 < r <\sqrt{2}$. When $r<1$, $\dot{r}>0$ and $r$ will $\color{blue}{increase}$. When $r > \sqrt{2}$, $\dot{r} < 0$, and $r$ will $\color{red}{decrease}$. But in the trapping region the sign of $\dot{r}$ $\color{gray}{oscillates}$.

Results

The flow field is plotted with the trapping region shown as the shaded annulus and the null clines as red, dashed lines.
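In case it is useful, here is a short symbolic verification of equation (5) (a check sketch, not part of the derivation above):

```python
import sympy as sp

r, th = sp.symbols("r theta", positive=True)
x, y = r * sp.cos(th), r * sp.sin(th)
xdot = y - x**3 + x
ydot = -x - y**3 + y

rdot = (x * xdot + y * ydot) / r                                   # equation (4)
target = -sp.Rational(1, 4) * r * (r**2 * (sp.cos(4 * th) + 3) - 4)  # equation (5)

print(sp.simplify(sp.expand_trig(rdot - target)))  # -> 0
```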
Determinant property
Your first sentence is correct, since you can make use of the property $\mbox{det}(AB)=\mbox{det}(A)\mbox{det}(B)$, finding $$0=\mbox{det}(A^2)=\mbox{det}(A)^2\to \mbox{det}(A)=0$$ However, your last reasoning is incomplete, since there are nonzero matrices whose determinant equals zero. For example $$\left(\begin{array}{cc} 1&1\\1&1\end{array}\right)$$
Prove that if $a+b-c=1$ then $a^2+b^2-c^2=1-2ab+2c$
\begin{align} a^2+b^2-c^2 & = (a+b)^2 - 2ab - c^2 \\ & = (1+c)^2 - 2ab - c^2 \\ & = 1 - 2ab + 2c \end{align} where the transition from the second to the third expression derives from the condition $a+b-c=1$.
Question regarding differentiability of a composite function
It is allowed, but not necessary, to look at the $A_i$. We are given $$A(x)=f\bigl(|x|\bigr)\>x\ .$$ Since $x\mapsto|x|:=\sqrt{x_1^2+x_2^2+\ldots+x_n^2}$ is differentiable at all points $x\ne0$ it follows from general principles about differentiability that $A$ is differentiable at all points $x\ne0$. It remains to consider the point $x=0$. For small $|x|$ we have the approximation $$A(x)-A(0)=A(x)\doteq f(0)x\ .$$ This leads to the conjecture that in fact$$dA(0).X=f(0)X\qquad(X\in T_0)\ .$$ In order to prove this we look at $$A(0+X)-A(0)-f(0)X=\bigl(f\bigl(|X|\bigr)-f(0)\bigr)\>X\ .$$ Here the right hand side is indeed $=o\bigl(|X|\bigr)$ when $X\to0$, since $f$ is continuous at $0$.
Invariant Inner Product on Lie Algebra
Here's how to prove one direction, namely if the inner product $\langle,\rangle$ on $V$ is invariant under $\mathcal{D} : G \to \text{GL}(V)$ then it is invariant under $d : \mathfrak{g} \to \mathfrak{gl}(V)$. Take any $X \in \mathfrak{g}$ and $v,w \in V$. Then we want to show that $$\langle d(X)v,w\rangle = - \langle v,d(X)w\rangle.$$ Now $e^{tX} \in G$ for all $t \in \Bbb{R}$ and so $$\langle \mathcal{D}(e^{tX})v,\mathcal{D}(e^{tX})w \rangle = \langle v,w\rangle.$$ Since every bilinear form $B(v,w)$ on a finite dimensional vector space is given as $$B(v,w) = v\cdot Aw$$ for some $\dim V \times \dim V$ matrix $A$ (where the $\cdot$ denotes scalar dot product) we can say that $$ \mathcal{D}(e^{tX})v \cdot A\left( \mathcal{D}(e^{tX})w \right) = v \cdot Aw$$ for some suitable matrix $A$. The secret now is to differentiate both sides with respect to $t$ and set $t = 0$ to get $$\mathcal{D}(e^{tX})v \cdot \frac{d}{dt} A\left( \mathcal{D}(e^{tX})w \right)\bigg|_{t=0} + A\left( \mathcal{D}(e^{tX})w \right) \cdot \frac{d}{dt}\mathcal{D}(e^{tX})v \bigg|_{t=0} =\\ \hspace{1in} \mathcal{D}(e^{tX})v \cdot A\left( \frac{d}{dt}\left(\mathcal{D}(e^{tX})w\right) \right)\bigg|_{t=0} + A\left( \mathcal{D}(e^{tX})w \right) \cdot \frac{d}{dt}\mathcal{D}(e^{tX})v \bigg|_{t=0}=0.$$ We now use the identity $\frac{d}{dt}\mathcal{D}(e^{tX})\bigg|_{t=0}= d(X)$ and the relationship between $G$ and $\mathfrak{g}$ to get $$ v \cdot A(d(X)w) + A(w) \cdot d(X)v = v \cdot A(d(X)w) + d(X)v \cdot A(w)$$ and so $$\langle v,d(X)w \rangle + \langle d(X)v,w \rangle = 0 $$ as desired.
Find Fourier series coefficients of $f(x)$.
Hint 1: You only need to solve the indefinite integral $$\int x\cos(ax)\,dx$$ where $a$ is a constant. You can solve this integral using integration by parts. Hint 2: For your integral, remember to divide it into two integrals (where the sign of $x$ is known).
How do I place the limits of integration for double integrals?
Tip for the order of $dA$: check this link. Next, the limits of integration are easy to recognize thanks to Fubini's Theorem.
Weighted space with two weight functions
We have $$ C_{w_1,w_2}=C_{w_1}\cap C_{w_2}. $$ $C_{w_1,w_2}$ is the intersection of two Banach spaces, and is itself a Banach space with the norm $$ \|f\|_{w_1,w_2}=\max\bigl(\|f\|_{w_1},\|f\|_{w_2}\bigr). $$
Rewriting 2D stationary heat equation.
You're not really missing anything. You are given a specific stationary differential equation. Since it represents a heat equation, we can conclude that $u_t=f$, where $f$ would be independent of $t$ since the heat distribution is supposed to be stationary. That is, you're supposed to match the given equation to the heat equation and draw conclusions.
Why WLLN implies this convergence?
If $Y_1, \ldots, Y_n$ are i.i.d., the WLLN implies $\frac{1}{n} \sum_{i=1}^n Y_i$ converges in probability to $E[Y_1]$. Here, with $Y_i=X_i^2$ you have $E[Y_1]=E[X_1^2] = \text{Var}(X_1) + E[X_1]^2 = \sigma^2 + \mu^2$.
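A quick simulation sketch of this convergence (the choice of distribution is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0  # so E[X^2] = sigma^2 + mu^2 = 13

for n in (10, 1_000, 100_000):
    x = rng.normal(mu, sigma, size=n)
    print(n, np.mean(x**2))  # approaches 13
```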
Correct name for multi-dimensional array/matrix/tensor
"Array" is not used much in math, except possibly by some computational math authors. I generally think of that as computer science terminology. I think they also allow for non-number objects to appear as entries (like strings, for example), and some people are uncomfortable calling something with string entries a matrix. "Array" seems to be used to encompass any sort of list of numbers in one shape or another, usually when you identify an object by numerical components. Vectors, linear transformations and tensors (=multilinear transformations) all have coordinate representations, if you fix a basis. linear transformations generalize to tensors. matrices represent linear transformations the way "multidimensional arrays" represent tensors (Here is an informal diagram that I hope is not too misleading, and which I hope people do not take too seriously.) "vector $\rightarrow$ linear transformation $\rightarrow$ tensor" is analogous to: "coordinates of vector $\rightarrow$ matrix $\rightarrow$ multidimensional matrix" I'm using "array" in "multidimensional array" because some people have a hard time thinking of a matrix as anything besides a flat object, but you can also say "multidimensional matrix". While matrices have rows and columns, the multidimensional versions can "go in more directions". A matrix has componets with two indices. If your array has items with three indices, you could make them into a cube-matrix. If it has four indicies on each entry, you could make it into a hypercube-matrix etc.
Little question about a complex equation
Since $|z|^2 = z \bar z$, the equation factorizes as $z(1+\bar z) = 0$, from which it is immediately clear that $z=0$ and $\bar z = -1$ (i.e. $z=-1$) are solutions and that these are the only ones.
Solve this equation : $y'(x)+\frac{1}{x}=\frac{1}{y}$
$y'(x)+\dfrac{1}{x}=\dfrac{1}{y}$ $y\dfrac{dy}{dx}+\dfrac{y}{x}=1$ This belongs to an Abel equation of the second kind. Let $x=e^{-t}$ , Then $\dfrac{dy}{dx}=\dfrac{\dfrac{dy}{dt}}{\dfrac{dx}{dt}}=\dfrac{\dfrac{dy}{dt}}{-e^{-t}}=-e^t\dfrac{dy}{dt}$ $\therefore-e^ty\dfrac{dy}{dt}+e^ty=1$ $y\dfrac{dy}{dt}-y=-e^{-t}$ This belongs to an Abel equation of the second kind in the canonical form. Please follow the method in https://arxiv.org/ftp/arxiv/papers/1503/1503.05929.pdf
Fundamental groups of Grassmann and Stiefel manifolds
Grassmannians are homogeneous spaces. In the real case, the oriented Grassmannian $G^0(k,\mathbb{R}^n)$ of oriented $k$-planes in $\mathbb{R}^n$ is diffeomorphic to $SO(n)/\left(SO(k)\times SO(n-k)\right)$, where $SO(n)$ is the collection of $n\times n$ special orthogonal matrices. Likewise, the nonoriented Grassmannian $G(k,\mathbb{R}^n)$ of nonoriented $k$-planes in $\mathbb{R}^n$ is diffeomorphic to $SO(n)/S(O(k)\times O(n-k))$. Finally, the complex Grassmannian, $G(k,\mathbb{C}^n)$, is diffeomorphic to $SU(n)/S(U(k)\times U(n-k))$, where $SU(n)$ denotes the $n\times n$ special unitary matrices. (Some slight modifications may be necessary when $k=0$ or $k=n$.) Once you have written them like this, you have a general theorem that given compact Lie groups $G$ and $H$, then $H\rightarrow G\rightarrow G/H$ is a fiber bundle. In particular, we can use the long exact homotopy sequence. It follows immediately that the complex Grassmannian is simply connected because $SU(n)$ is both connected and simply connected. In the real case, a bit more work needs to be done. For the oriented Grassmannian, it's enough to note that the canonical map $SO(k)\rightarrow SO(n)$ is a surjection on $\pi_1$ as soon as both $n$ and $k$ are bigger than $1$ (an isomorphism when $n,k>2$), and is always an isomorphism on $\pi_0$. Thus, the real oriented Grassmannian is simply connected. This also gives the answer for the unoriented real Grassmannian, because there is a natural double covering $G^0(k,\mathbb{R}^n)\rightarrow G(k,\mathbb{R}^n)$ given by forgetting the orientation. Hence, the real unoriented Grassmannian has $\pi_1=\mathbb{Z}/2\mathbb{Z}$. Alternatively, note that the induced map from $S(O(k)\times O(n-k))$ to $SO(n)$ is an isomorphism on $\pi_1$, but that $S(O(k)\times O(n-k))$ has more than one component.
Is this element in the ring of integers?
Assume $\tau' = \frac{x'+y'\alpha + z' \alpha^2}{3} \in \mathcal{O}_K$. If we prove that $\tau' = 0$ then taking the contrapositive of this statement means $\tau' \neq 0 \implies \tau' \not\in \mathcal{O}_K$. We have $\alpha = \theta +2$ and so, as Connor Harris suggested in the comments, we can rewrite this as $\tau' = \frac{(x'+2y'+4z') + (y'+4z')\theta + z'\theta^2}{3}$. Since $\tau' \in \mathcal{O}_K$, by your previous results this forces the coefficients of $\frac{1}{3},\ \frac{\theta}{3},$ and $\frac{\theta^2}{3}$ to be zero. Hence we get the equations \begin{array}{rcl} x'+2y'+4z' & = & 0 \\ y'+4z' & = & 0 \\ z' & = & 0 \end{array} Therefore we must have $x' = y' = z' = 0$, and so $\tau' = 0$, as required.
Without Foundation, can axiom of choice be derived from $\forall\alpha\in\mathbf{Ord} (P(\alpha)\text{ can be well ordered})$?
I don't think it works with just ZF minus foundation ($\mathbf{ZF}^-$). To get your choice function (which you only know to work in the wellfounded part of the universe) to work in illfounded part, you would require a version of Coret's axiom Every set has the same size as a wellfounded set (or something similar) which is not a theorem of $\mathbf{ZF}^-$.
If two closed plane curves are outside each other, can there be a point inside both of them?
(Thanks to Moishe Cohen for his very helpful hint. He is not to blame for the use I have made of it, however!) Lemma If $E$ is a connected, closed subset of a connected, normal topological space $X$, and $G$ is a connected component of $X \setminus E$, then $G \cup E$ is connected. Proof Define $H = \overline{G} \cap E$. If $H = \emptyset$, then $\overline{G}, E$ are disjoint closed subsets of $X$, whence there exist disjoint open subsets $U, V$ of $X$ such that $\overline{G} \subseteq U$ and $E \subseteq V$. We cannot have $U = \overline{G}$, because then $U$ would be a closed, open, non-empty proper subset of $X$, so $X$ would be disconnected, contrary to hypothesis. On the other hand, we cannot have $U \ne \overline{G}$, because then $G$ would be properly contained in the open subset $U$ of $X \setminus E$, contrary to its definition. Therefore, $H \ne \emptyset$. We have $G \subseteq G \cup H \subseteq \overline{G}$, therefore $G \cup H$ is connected. Also, $(G \cup H) \cap E = H \ne \emptyset$. Since $G \cup H$ and $E$ are both connected, it follows that their union $(G \cup H) \cup E = G \cup E$ is connected. $\square$ Corollary The union of $E$ with any collection of connected components of $X \setminus E$ is connected. $\square$ Proposition If $\sigma$ is a closed curve in $\mathbb{C}$, then $[\sigma] \cup I(\sigma)$ is connected. Proof Let $z, w \in \mathbb{C}$. If $z \in I(\sigma)$, then by definition $n(\sigma, z) \ne 0$. If $z, w$ are in the same connected component of $\mathbb{C} \setminus [\sigma]$, then $n(\sigma, w) = n(\sigma, z) \ne 0$, whence $w \in I(\sigma)$. Therefore, $I(\sigma)$ is a union of connected components of $\mathbb{C} \setminus [\sigma]$. The result now follows from the above corollary. $\square$ Theorem For closed curves $\sigma$ and $\tau$ in $\mathbb{C}$, if $[\sigma] \subset O(\tau)$ and $[\tau] \subset O(\sigma)$ then $I(\sigma) \subset O(\tau)$. Proof Because $[\tau] \subset O(\sigma)$, we have $I(\sigma) \subseteq \mathbb{C} \setminus [\tau]$, therefore $[\sigma] \cup I(\sigma) \subseteq \mathbb{C} \setminus [\tau]$. It follows that the connected set $[\sigma] \cup I(\sigma)$ is contained in the same connected component of $\mathbb{C} \setminus [\tau]$ as $[\sigma]$. By hypothesis, this component is a subset of $O(\tau)$, whence $I(\sigma) \subset O(\tau)$. $\square$
$f=x^4+1$ is reducible over infinite field $F$ with characteristic $p>0$
If $\sqrt{-1} \in F$ then $x^4+1 = \left(x^2+\sqrt{-1}\right)\left(x^2-\sqrt{-1}\right)$. If $\sqrt{2} \in F$ then $x^4+1 = \left(x^2+\sqrt{2}x+1\right)\left(x^2-\sqrt{2}x+1\right)$. If $\sqrt{-2} \in F$ then $x^4+1 = \left(x^2+\sqrt{-2}x-1\right)\left(x^2-\sqrt{-2}x-1\right)$. Since $\text{char }F = p$, we have the subfield $\mathbb{F}_p$ generated by $1$. If $-1$ is a square mod $p$ we are done since $\sqrt{-1} \in \mathbb{F}_p \subseteq F$. Otherwise, $\left(\frac{-2}{p}\right) = \left(\frac{-1}{p}\right)\left(\frac{2}{p}\right) = -\left(\frac{2}{p}\right)$, so if $p$ is odd then either $2$ or $-2$ is a square mod $p$. If $p = 2$ then $-1$ is a square mod $p$ so in all cases we are done and $x^4+1$ is reducible in $F$ (in fact, over the subfield $\mathbb{F}_p$).
What is inheriting of topology? What is the use of studying it?
Yes, if $(M, \mathcal{T})$ is a topological space, and $S \subseteq M$ is a subset, we can make $S$ into a topological space $(S,\mathcal{T}_S)$ by defining $$\mathcal{T}_S = \{O \cap S\mid O \in \mathcal{T}\}$$ This is a natural topology, in the sense that it is the smallest one that makes the inclusion map $i_S:S \to M$, $i_S(s)=s$, continuous between $S$ and $(M,\mathcal{T})$. It allows us to talk about continuity of functions on $S$ too, etc. In many branches of mathematics, substructures (subspaces in linear algebra, subgroups in group theory, subgraphs in graph theory, etc.) play an important role. You can ask which spaces occur as subspaces of other spaces, and whether properties of the large $M$ "inherit" to subspaces $S$ (some do, some don't, some under conditions on $S$, etc.).
Example of a Dedekind domain that has only finitely many prime ideals, and is not a field?
Very simply, $\mathbb Z_{(2)} = $ the ring of all rationals with odd denominator, or any DVR (a local PID that is not a field). Recall that Dedekind domains may be thought of as globalizations of DVRs, since for local domains we have Dedekind $\iff$ PID $\iff$ DVR.
Can the level set of a singular value be an embedded submanifold?
No. It is well known that any closed subset $A \subseteq \mathbb{R}^n$ can be realized as the zero set $F^{-1}(0)$ of a smooth function. If $\overline{A^{\circ}} = A$ then $F|_{A^{\circ}} = 0$ implies that $dF|_{A^{\circ}} = 0$ and then by continuity we also have $dF|_{A} = 0$. Hence, $F$ has constant rank $0$ on $A$ but $A$ need not be an embedded manifold. For example, you can take $A = [0,1]^2 \subseteq \mathbb{R}^2$ or if you want an example which is not even a manifold with corners, just take the interior region of a sufficiently ugly simple closed loop in $\mathbb{R}^2$.
Is there a topology so that $f(x)=x$ is continuous but $g(x)=x^2$ is not?
Try the lower limit topology on $\mathbb{R}$. It's also called the Sorgenfrey line. A basis for the open sets is given by half-open intervals $\{[a,b)\mid a<b\in\mathbb{R}\}$. Here's an easier example. Consider the topology $T$ with three open sets $T=\{\emptyset,\mathbb{R},(10,20)\}$. Then $f$ is continuous, but $g$ is clearly not since $g^{-1}((10,20))=(-\sqrt{20},-\sqrt{10})\cup(\sqrt{10},\sqrt{20})$ which is not open.
Prediction Using Linear Regression
The slope coefficient is $\hat{\beta}_1$ in $Y = \beta_0 + \beta_1 X$, so its $99\%$ CI is $$ \left( 1.21 - 0.1265\,t_{0.995}(60),\ 1.21 + 0.1265\,t_{0.995}(60) \right), $$ where $t_{0.995}(60)$ denotes the $0.995$ quantile of the $t$-distribution with $60$ degrees of freedom.
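For a numeric value, here is a minimal sketch with scipy, assuming (as in the question) the point estimate $1.21$, standard error $0.1265$, and $60$ degrees of freedom.

```python
# Sketch: 99% CI for the slope; 1.21, 0.1265, 60 are the estimate,
# standard error, and degrees of freedom taken from the question.
from scipy import stats

beta1_hat, se, df = 1.21, 0.1265, 60
t_crit = stats.t.ppf(0.995, df)          # t_{0.995}(60) ~ 2.66
lo, hi = beta1_hat - t_crit * se, beta1_hat + t_crit * se
print(f"t = {t_crit:.4f}, 99% CI = ({lo:.4f}, {hi:.4f})")
```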
Find cubic Bézier control points given four points
Your problem, as stated, does not have a unique solution. Suppose that point $P_j$ is at location $(3j, 0)$, for each integer $j$, so that they're equi-spaced on the $x$-axis. Now let $y$ be any real number. Then by adding control points at locations $$(6i+1, y)\\ (6i+2, y)\\ (6i + 4, -y)\\ (6i+5, -y)$$ for each integer $i$, you get two "control points" between any two of your original points. For instance, near the origin, for $y = 2$, we have points $$ (-3, 0) \leftarrow (\mathrm{one~ of~ the}~ P_i)\\ (-2, -2)\\ (-1, -2)\\ (0, 0) \leftarrow (\mathrm{one~ of~ the}~ P_i)\\ (1, 2)\\ (2, 2)\\ (3, 0) \leftarrow (\mathrm{one~ of~ the}~ P_i) $$ These determine two Bézier segments that glue up nicely at the origin, with a slope of $2$ at the origin.

You may say "But it's obvious that the control points in this case should be on the $x$-axis!" and I say "but your problem statement doesn't require it." Indeed, I chose this example because it was easy to write, but given any set of $P_i$, I can again find an infinite family of ways to place the intermediate control points so as to join the $P_i$ with Bézier segments.

I'm going to suggest that you consider looking at Catmull-Rom splines, which are piecewise cubics passing through a sequence of points like your $P_i$. Each segment of a CR-spline can be expressed as a Bézier curve, because the Bézier basis functions span the space of cubic curves. One detailed reference on this is Computer Graphics: Principles and Practice, 3rd edition, of which I am a coauthor, but there are plenty of other references as well.

Here are somewhat brief details on CR spline construction from a sequence of points $M_0, M_1, \ldots$. I'm going to describe how to find the control points for the part of the curve between $M_1$ and $M_2$, so as to avoid any negative indices. The four control points will be $P_0, P_1, P_2, P_3$. Two of these are easy: \begin{align} P_0 &= M_1 \\ P_3 &= M_2 \end{align} so that the Bézier curve starts and ends at $M_1$ and $M_2$, respectively. The other two are only slightly trickier. We compute \begin{align} v_1 &= \frac{1}{2} (M_2 - M_0)\\ v_2 &= \frac{1}{2} (M_3 - M_1) \end{align} which are the velocity vectors at $M_1$ and $M_2$. We then have \begin{align} P_1 &= P_0 + \frac{1}{3} v_1 = M_1 + \frac{1}{6} (M_2 - M_0)\\ P_2 &= P_3 - \frac{1}{3} v_2 = M_2 - \frac{1}{6} (M_3 - M_1) \end{align}

Applying these rules in the example I gave earlier, with $$ M_i = (3i, 0) $$ we have \begin{align} M_0 &= (0, 0)\\ M_1 &= (3, 0)\\ M_2 &= (6, 0)\\ M_3 &= (9, 0) \end{align} so that \begin{align} P_0 &= (3, 0)\\ P_3 &= (6, 0)\\ v_1 &= \frac{1}{2}((6, 0) - (0, 0)) = (3, 0)\\ v_2 &= \frac{1}{2}((9, 0) - (3, 0)) = (3, 0)\\ P_1 &= P_0 + \frac{1}{3} v_1 = (3, 0) + (1, 0) = (4, 0)\\ P_2 &= P_3 - \frac{1}{3} v_2 = (6, 0) - (1, 0) = (5, 0) \end{align} as expected.

Hint for the start and end points: Assuming you have a sequence of points $$ M_0, M_1, \ldots, M_{n-1} $$ you can let \begin{align} M_{-1} &= M_0 - (M_1 - M_0) = 2M_0 - M_1 \\ M_n &= M_{n-1} + (M_{n-1} - M_{n-2}) = 2M_{n-1} - M_{n-2} \end{align} to extend your list just enough that the CR scheme above provides interpolation all the way from $M_0$ to $M_{n-1}$.
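Here is a minimal Python sketch of the Catmull-Rom recipe above; the function name is mine, and the point values are chosen to reproduce the worked example.

```python
# Given interior points M1, M2 (with neighbours M0, M3), produce the four
# Bezier control points for the Catmull-Rom segment from M1 to M2.
import numpy as np

def cr_segment(M0, M1, M2, M3):
    """Bezier control points P0..P3 for the Catmull-Rom segment M1 -> M2."""
    M0, M1, M2, M3 = map(np.asarray, (M0, M1, M2, M3))
    P0 = M1
    P3 = M2
    P1 = M1 + (M2 - M0) / 6.0   # P0 + v1/3 with v1 = (M2 - M0)/2
    P2 = M2 - (M3 - M1) / 6.0   # P3 - v2/3 with v2 = (M3 - M1)/2
    return P0, P1, P2, P3

# Reproduces the worked example: equally spaced points on the x-axis.
M = [(0, 0), (3, 0), (6, 0), (9, 0)]
print(cr_segment(*M))  # P1 = (4, 0), P2 = (5, 0), as computed above
```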
Sufficient conditions for the convergence of Newton's Method
If $x_{n+1}=g(x_n)$ is the iteration, it is sufficient that $x$ belong to a closed interval $I\subset\mathbb{R}$ such that $$ g(I)\subset I, \qquad\text{and}\qquad |g'(x)|<1 \text{ for all }x\in I. $$ This then allows you to use a fixed-point theorem to show that $g$ has a unique fixed point in $I$ and that $x_n$ must converge to it. One difference between this condition and the one you wrote down in the question is that it says nothing about $\epsilon_1$. And indeed you won't ever know $\epsilon_1$, because that amounts to knowing the exact root.
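As a concrete illustration (my example, not from the question): for Newton's method on $f(x)=x^2-2$, the iteration map $g(x)=x-\frac{f(x)}{f'(x)}=\frac12\left(x+\frac2x\right)$ satisfies both conditions on $I=[1,2]$, since $g(I)=[\sqrt2,\,3/2]\subset I$ and $|g'(x)|=\frac12\left|1-\frac2{x^2}\right|\le\frac12<1$ there.

```python
# Newton iteration for f(x) = x^2 - 2 seen as a fixed-point iteration of
# g(x) = (x + 2/x)/2 on I = [1, 2]; converges to sqrt(2) from any x0 in I.
def g(x):
    return 0.5 * (x + 2.0 / x)

x = 1.0
for _ in range(6):
    x = g(x)
print(x)  # 1.414213562..., i.e. sqrt(2)
```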
Integral formula for $f(\sqrt x)$, where $f $ is smooth and even
Make the substitution $t=xy$. We get \begin{align*} w^{(k)}(x^2)&=\frac{(2x)^{-2k+1}}{(k-1)!}\int_0^1(x^2-x^2y^2)^{k-1}f^{(2k)}(xy)xdy\\ &=\frac{2^{-2k+1}}{(k-1)!}x^{-2k+1}x^{2(k-1)}x\int_0^1(1-y^2)^{k-1}f^{(2k)}(xy)dy\\ &=\frac{2^{-2k+1}}{(k-1)!}\int_0^1(1-y^2)^{k-1}f^{(2k)}(xy)dy, \end{align*} and by the dominated convergence theorem $$\lim_{x\to 0}\: w^{(k)}(x^2)=f^{(2k)}(0)\int_0^1(1-y^2)^{k-1}dy.$$ You can get the integral formula by induction: show it for $k=1$ and if it's true for $k$ then $$2xw^{(k+1)}(x^2)=\frac{2^{-2k+1}}{(k-1)!}\int_0^1(1-y^2)^{k-1}f^{(2k+1)}(xy)ydy,$$ and integrating by parts \begin{align*} w^{(k+1)}(x^2)&=\frac{2^{-2k-1}}{x(k-1)!}\int_0^12y(1-y^2)^{k-1}f^{(2k+1)}(xy)dy \\ &=\frac{2^{-2k-1}}{x(k-1)!}\left(\left[-\frac{(1-y^2)^k}kf^{(2k+1)}(xy)\right]_{y=0}^{y=1}+\frac 1k\int_0^1(1-y^2)^kxf^{(2k+2)}(xy)dy\right)\\ &=\frac{2^{-2(k+1)+1}}{k!}\int_0^1(1-y^2)^kf^{(2(k+1))}(xy)dy. \end{align*} (since $f^{(2k+1)}(0)=0$ for all $k\geq 1$).
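A quick symbolic spot check of the displayed formula (my verification, for $k=2$ and the even function $f=\cos$, so that $w(u)=\cos\sqrt u$ and $f^{(4)}=\cos$):

```python
# Check w''(x^2) = (2^{-3}/1!) * int_0^1 (1 - y^2) f^{(4)}(xy) dy for f = cos.
import sympy as sp

u, x, y = sp.symbols('u x y', positive=True)
w = sp.cos(sp.sqrt(u))
lhs = sp.diff(w, u, 2).subs(u, x**2)
rhs = sp.Rational(1, 8) * sp.integrate((1 - y**2) * sp.cos(x*y), (y, 0, 1))
print(sp.simplify(lhs - rhs))  # 0
```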
Fundamental group of complement in $\mathbb{R}^3$ of $z$-axis and two circles
Your post seems to have two questions, the Seifert-Van Kampen theorem and the group presentation; I'll answer them separately.

Use the Seifert-Van Kampen theorem to compute $\pi_1(X)$: For convenience, you can regard $X$ as $X'=B\setminus(\{(x,y,z)\mid x,y=0\}\cup (S^1\sqcup S^1))$, where $B$ is homeomorphic to a 3-dimensional ball (there is an obvious deformation retraction). Take $U=X'\cap\{(x,y,z)\mid z<2\}$ and $V=X'\cap\{(x,y,z)\mid z>1\}$; then $U\cap V\simeq\{\text{punctured plane}\}\simeq S^1$, which means $\pi_1(U\cap V)\cong\Bbb{Z}=\langle \alpha\rangle$. Also, we have $U\approx V$ (you can easily see this by drawing a picture), and $U\simeq T^2=S^1\times S^1$, which means $\pi_1(U)\cong\Bbb{Z}^2=\langle a,b\rangle$ and $\pi_1(V)\cong\Bbb{Z}^2=\langle c,d\rangle$. This conclusion doesn't seem to be obvious, but if you observe $U$, it has a vertical hole and a removed ring inside of it. By expanding the vertical hole and the tube inside of it, do you see the homotopy equivalence? If you feel confused, I can draw a picture to illustrate it. Alternatively, you can think of it as a punctured plane rotating about the $z$-axis, so there is a homotopy equivalence between this space and $S^1\times S^1$ because the punctured plane deformation retracts onto $S^1$ (hope it's clear now).

Now, consider the following mappings. The map $i_*:\pi_1(U\cap V)\to\pi_1(U)$ induced by the inclusion sends $\alpha$ to $b$ (it is a loop that encloses the central vertical hole). Similarly, $j_*:\pi_1(U\cap V)\to\pi_1(V),\alpha\mapsto d$. Applying the Seifert-Van Kampen theorem, we get $$\pi_1(X)\cong\pi_1(X')\cong(\pi_1(U)*\pi_1(V))/N\cong(\Bbb{Z}^2(a,b)*\Bbb{Z}^2(c,d))/\langle b^{-1}d\rangle$$ Note that $a$ represents the basic loop that encloses the lower tube, whereas $c$ represents the loop that encloses the upper tube created by the removal of $S^1$.

Group presentation: Claim: $(\Bbb{Z}^2(a,b)*\Bbb{Z}^2(c,d))/{\langle b^{-1}d\rangle}\cong(\Bbb{Z}*\Bbb{Z})\times\Bbb{Z}$. Indeed, $$(\Bbb{Z}^2(a,b)*\Bbb{Z}^2(c,d))/\langle b^{-1}d\rangle=\langle a,b,c,d\mid b=d,ab=ba,cd=dc\rangle=\langle a,b,c\mid ab=ba,bc=cb\rangle$$ We see that $a$ and $c$ generate a free group on two generators and do not commute with each other. So this is $(\Bbb{Z}*\Bbb{Z})\times\Bbb{Z}$, which agrees with your answer that solves it from a different perspective.

It's also possible to derive the same group presentation from your answer. Let $a$ be the loop that encloses the lower point of $A$ (resp. $c$ the loop that encloses the upper one), and $b$ be the loop around the vertical hole. Then $a$ and $c$ are the generators of the group $\Bbb{Z}*\Bbb{Z}$ and $b$ commutes with them. So we have $\langle a,b,c\mid ab=ba,bc=cb\rangle$.
What *can* Euclid prove?
You can see: Ian Mueller, Philosophy of Mathematics and Deductive Structure in Euclid's Elements (1981; Dover reprint). It is a detailed analysis of the logical structure of Euclid's Elements. In particular, Ch. 1.2, "Book I of the Elements", is devoted to the reconstruction of Book I, with graphical maps depicting the logical dependencies between propositions. You can use them to check which propositions depend on the "flawed" ones: I mean those propositions of Book I which have some "hidden assumptions" requiring additional axioms not stated by Euclid.
Construct a continuous function $f$ over $[0,1]$ satisfying $f(0) = f(1)$ but $f(x) \neq f(x+a)$
Let $n$ be the largest integer such that $na < 1$. Let $g$ be any continuous function on $[0, a]$ such that $$ g(0) = 0 $$ $$ g(1-na) = -n $$ $$ g(a) = 1 $$ Then choose $f(ka+x) = g(x) + k$ for $k \in \mathbb{N}, x \in [0,a)$.

Edit: I drew a picture of $f$. Basically, $f$ is found by first setting $f(x+a) = f(x) + 1$, with $f(0) = f(1) = 0$. This gives you all the points in the drawing. Then choose a continuous $g$ through the first three points, copy it, and translate it by $(a,1)$ a bunch of times to obtain $f$.
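A small numerical sketch of this construction (my choices: $a=0.3$, so $n=3$, with $g$ piecewise linear through the three prescribed points):

```python
# Construct f for a = 0.3 (so n = 3) and check f(0) = f(1) = 0 while
# f(t + a) - f(t) = 1 for every t in [0, 1 - a].
import math

a = 0.3
n = math.floor((1 - 1e-12) / a)        # largest n with n*a < 1; here n = 3

def g(x):
    # piecewise-linear through (0, 0), (1 - n*a, -n), (a, 1)
    xs, ys = [0.0, 1 - n * a, a], [0.0, -float(n), 1.0]
    for i in range(2):
        if xs[i] <= x <= xs[i + 1]:
            return ys[i] + (ys[i+1] - ys[i]) * (x - xs[i]) / (xs[i+1] - xs[i])

def f(t):
    k = min(int(t // a), n)            # t = k*a + x with x in [0, a)
    x = min(max(t - k * a, 0.0), a)    # clamp against float round-off
    return g(x) + k

print(f(0.0), f(1.0))                  # both 0 (up to round-off)
print({round(f(t + a) - f(t), 9) for t in
       [i * (1 - a) / 1000 for i in range(1001)]})   # {1.0}
```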
How to show the formula for the sine of two vectors with linear complex structure
In two dimensions we have $\langle u,Jv\rangle=\det(u,v)$. Now use that $$\det(u,v)^2=\det\begin{pmatrix}\langle u,u\rangle& \langle u,v\rangle\\ \langle v,u\rangle &\langle v,v\rangle\end{pmatrix}$$ together with $\langle u,v\rangle=\|u\|\|v\|\cos(\theta)$ and the Pythagorean theorem.
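A numerical sanity check of the two-dimensional identity (the matrix below is my convention for $J$; the sign in $\langle u,Jv\rangle=\det(u,v)$ depends on the orientation chosen for $J$):

```python
# Check <u, Jv> = det(u, v) in R^2 with J = [[0, 1], [-1, 0]].
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
for _ in range(5):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    lhs = u @ (J @ v)                                  # u1*v2 - u2*v1
    rhs = np.linalg.det(np.column_stack([u, v]))
    assert np.isclose(lhs, rhs)
print("identity verified on random vectors")
```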
Is the Axiom of Finite Additivity even an axiom?
You repeatedly used the Axiom of Finite Additivity in your proofs (starting with the proof of $P(\emptyset)=0$). To correctly show that finite additivity follows from countable additivity, just let $A_1=A$, $A_2=B$ and $A_i=\emptyset$ for $i>2$. Then the equality $$P(A\cup B)=P(\bigcup_i A_i)=P(A)+P(B)+``\infty\cdot P(\emptyset)\!"$$ can only hold if $P(\emptyset)=0$. However, nobody said that the given collection of axioms is minimal. For example the group axioms are also often listed in a redundant (=non-minimal) way.
Is it true that $\text{If Pr$(\alpha)=1$, then $\alpha\equiv\top$}$
No. For example, if $X$ is a random variable with a continuous distribution, then for each possible value $x$, $\Pr(X = x) = 0$ and $\Pr(X \ne x) = 1$. But $X \ne x$ is not "logically true". In fact, the random variable must have some value, so there must be some real $x$ such that $X=x$. It's just that any particular $x$ has probability $0$ of being that value.
How to remember which function is concave and which one is convex?
I think it just depends on how you learn. When I took calculus, we didn't use "concave" and "convex" - rather, we (and the AP exam) used "concave up" and "concave down." I still use these as a grad student. One can also remember that concave functions look like the opening of a cave.
Probability with replacement.
Using the negative binomial you get $$\mathbb{P}[X=x]=\binom{x-1}{2-1}0.75^2\cdot0.25^{x-2}$$ for $x=2,3,4,5,\dots$ Thus $$\mathbb{P}[X\geq8]=\sum_{x=8}^{\infty}\binom{x-1}{2-1}0.75^2\cdot0.25^{x-2}=$$ $$=1-\sum_{x=2}^{7}\binom{x-1}{2-1}0.75^2\cdot0.25^{x-2}$$
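For a numeric value, the finite sum can be evaluated directly and cross-checked against scipy's negative binomial (which counts failures before the 2nd success, hence the shift):

```python
# P[X >= 8] where X = number of trials up to and including the 2nd success.
from math import comb

p = 0.75
prob = 1 - sum(comb(x - 1, 1) * p**2 * (1 - p)**(x - 2) for x in range(2, 8))
print(prob)                                   # ~0.0013428

from scipy.stats import nbinom
# scipy's nbinom counts failures F = X - 2, so P[X >= 8] = P[F >= 6] = sf(5).
print(nbinom.sf(5, 2, p))
```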
$G=HK$ then the index of a subgroup is determined by $H$ and $K$
Suppose that $G=HK$ with $H \cap K=1$. Then (assuming $G$ is finite), the conclusion holds for all subgroups $R$ if and only if $H,K \unlhd G$ and $(|H|,|K|)=1$. You can do the "if" part. For the "only if" part, if $H \not\unlhd G$, then there exists $k \in K \setminus N_G(H)$, and the conclusion fails for $R = k^{-1}Hk$. Similarly if $K \not\unlhd G$. If $H,K \unlhd G$ (so $G \cong H \times K$), but $(|H|,|K|) \ne 1$, then there is a prime $p$ dividing both $|H|$ and $|K|$, and the conclusion fails for $R = \langle hk\rangle$, where $h \in H$ and $k \in K$ both have order $p$.
If a function fg is surjective under composition and f is surjective, is g surjective?
You are right on both counts. Suppose $g:A\to B$, $f:B\to C$. For the first question, suppose $C$ has only one element and $B$ has more than one element. Let $g$ be the function that takes every element of $A$ to a fixed element of $B$: $g(x) = b_0$ for all $x\in A$. Then $g$ is not surjective, but $f\circ g$ and $f$ both are. For the second question, let $A$, $B$, $C$ be arbitrary and suppose $g(a_0) = g(a_1)$. Then $f(g(a_0)) = f(g(a_1))$, or $(fg)(a_0) = (fg)(a_1)$. Since $f\circ g$ is injective, $a_0 = a_1$, so that $g$ is injective. Note that you don't actually need the hypothesis that $f$ is injective (for example, $B$ could be some huge set. $g$ injects into $B$, and its image injects into $C$ under $f$, but the rest of $B$ does not).
Question regarding the convexity of sets
Let us restate the problem, using $T$ for one of the sets to avoid the confusion above: let $S \subset \mathbb{R}^{n}\times \mathbb{R}$ and consider its projection onto $\mathbb{R}^n$, $T =\{ x\in \mathbb{R}^n\mid(x, y)\in S, y\in \mathbb{R}\}$. (a) Assuming $S$ is convex, prove that $T$ is convex. Let $x_1\in T$ and $x_2\in T$. To establish that $T$ is convex, we want to show that for $\theta\in (0,1)$, $\theta x_1 + (1-\theta)x_2 \in T$. Since $x_1\in T$, there is some $y_1\in \mathbb{R}$ such that $(x_1,y_1)\in S$. Similarly, there is some $y_2\in \mathbb{R}$ such that $(x_2,y_2)\in S$. For any $\theta\in(0,1)$, convexity of $S$ implies that $$ \left(\theta x_1 + (1-\theta)x_2, \theta y_1 + (1-\theta)y_2 \right) \in S. $$ We conclude that $\theta x_1 + (1-\theta)x_2$ is the projection of some point in $S$, i.e., $\theta x_1 + (1-\theta)x_2 \in T$. Thus $T$ is convex.
How to prove $2^{\sqrt{f(n)}} \in O\ (2^{f(n)})$ if $f:\Bbb{N}\rightarrow \Bbb{R^+}$?
Hint: $f(n)+1 \ge \sqrt{f(n)}$ no matter what $f(n)$ is.
Different ways to calculate the expected value of getting all faces of a 6 sided die atleast once
Condition on how many distinct faces have been seen so far. Once $k$ distinct faces have appeared, each roll shows a new face with probability $\frac{6-k}{6}$, so the waiting time for the next new face is geometric: the probability that it arrives on the $n^{\text{th}}$ further attempt is $\left(\frac{k}{6}\right)^{n-1} \cdot \frac{6-k}{6}$, with expected value $\frac{6}{6-k}$. (For the last face, $k=5$, this is the familiar ${\left(\frac{5}{6}\right)}^{n-1} \cdot \frac{1}{6}$.) By linearity of expectation, we sum these waiting times over $k=0,1,\dots,5$: $$\sum_{k=0}^{5}\frac{6}{6-k}=6\left(1+\tfrac12+\tfrac13+\tfrac14+\tfrac15+\tfrac16\right)=14.7.$$
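A quick cross-check of the value $14.7$, both exactly and by simulation (a sketch; the helper name is mine):

```python
# Exact value 6*(1 + 1/2 + ... + 1/6) = 14.7 versus a Monte Carlo estimate.
import random
from fractions import Fraction

exact = 6 * sum(Fraction(1, k) for k in range(1, 7))
print(float(exact))   # 14.7

def rolls_to_see_all():
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(random.randint(1, 6))
        rolls += 1
    return rolls

print(sum(rolls_to_see_all() for _ in range(100_000)) / 100_000)  # ~14.7
```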
Given that $G/H=\{xH:x \in G\}$ is group under operation $(xH)(yH)=(xyH)$ . Then $H$ is normal subgroup of $G$
We clearly have $xhH=xH$ for $x\in G, \ h\in H$. Then, since the operation is well defined, we arrive at $$(xhx^{-1})H=xhH\cdot x^{-1}H=xH\cdot x^{-1}H=(xx^{-1})H=eH=H$$ Thus, $xhx^{-1}\in H$.
Countable countable models
If you already know that $\omega$-categoricity is equivalent to the condition that every $S_n(T)$ is finite, then there's an easy argument that doesn't use [*]. Suppose for contradiction that there is some $n\in \omega$ such that $S_n(T)$ is infinite. Pick some countable set of these types and realize them: then you have a countable model $A\models T$ which realizes infinitely many $n$-types. But $n$-tuples which realize different types cannot be in the same orbit of the action of $\text{Aut}(A)$ on $A^n$, so there are infinitely many orbits, contradiction. The point is that [*] is really two statements: (1) If $a$ and $b$ are in the same orbit, they realize the same type. (2) If $a$ and $b$ realize the same type, they are in the same orbit. Point (1) is true in any model, while point (2) is only true in certain models, e.g. saturated ones. And the argument above only uses (1), not (2).
Are there Lebesgue-measurable functions not almost everywhere equal to a continuous function
Yes. Fix any measurable set $A$ such that both $A$ and its complement have non-null intersection with each nonempty open interval. Examples are discussed here. Then the characteristic function of $A$ is as desired, since removing a null set does not change this intersection property, which rules out having a continuous extension.
Find the sum of infinite series
Let $\frac{1}{5}=x$ and $f(x)=x+\frac{1}{3}x^3+\frac{1}{5}x^5+...$ Thus, $$f'(x)=1+x^2+x^4+...=\frac{1}{1-x^2}$$ and $$f(x)=\int\limits_0^x\frac{1}{1-t^2}dt=\ln\sqrt{\frac{1+x}{1-x}},$$ which after substitution $x=\frac{1}{5}$ gives the answer: $$\frac{1}{2}\ln1.5.$$
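A quick numeric check of the value (the sum also equals $\operatorname{artanh}\frac15$, since $\sum_{k\ge0}\frac{x^{2k+1}}{2k+1}=\operatorname{artanh}x$ for $|x|<1$):

```python
# Partial sums of x + x^3/3 + x^5/5 + ... at x = 1/5 versus (1/2) ln 1.5.
import math

x = 1 / 5
s = sum(x**(2*k + 1) / (2*k + 1) for k in range(40))
print(s, 0.5 * math.log(1.5), math.atanh(x))   # all 0.20273255405...
```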
subgroup generated by a permutation
Because your permutation has order $6$, you have found all the elements of the subgroup generated by $\alpha$. The subgroup is $\langle \alpha \rangle = \{\alpha^0 = id, \alpha^1, \alpha^2, \alpha^3, \alpha^4, \alpha^5\}$. It's an easy exercise (if you don't see it right away) to verify that this really is a subgroup. It's also worth checking that your permutation really has order $6$: you can see this by noting that $\operatorname{lcm}(3,2,3) = 6$ (the numbers $3,2,3$ are the lengths of the cycles in $\alpha$).
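This is easy to confirm with sympy; the permutation below is a stand-in with the same cycle type $(3,2,3)$, since the original $\alpha$ isn't reproduced here.

```python
# A permutation of cycle type (3, 2, 3) has order lcm(3, 2, 3) = 6,
# and its powers give 6 distinct subgroup elements.
from sympy.combinatorics import Permutation

alpha = Permutation([[0, 1, 2], [3, 4], [5, 6, 7]])   # cyclic notation
print(alpha.order())                                  # 6
subgroup = {alpha**i for i in range(alpha.order())}
print(len(subgroup))                                  # 6 distinct elements
```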
Find the first derivative of the following function
$$f(x)= \frac{1}{1+e^{3x}}$$ $$f(x)= (1+e^{3x})^{-1}$$ (just rewritten) $$f'(x)= -1(1+e^{3x})^{-2}(e^{3x})(3)$$ $$f'(x)= -\frac{3e^{3x}}{(1+e^{3x})^2}$$ This is a simple application of the chain rule. As for a vertical asymptote: one would need $1 + e^{3x} = 0$, i.e. $e^{3x} = -1$. But $e^{3x} > 0$ for every real $x$ (equivalently, $\ln(-1)$ is undefined, since you cannot take the logarithm of a negative number), so the denominator never vanishes and there is no vertical asymptote.
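A one-line check with sympy:

```python
# Verify f'(x) = -3 e^{3x} / (1 + e^{3x})^2 symbolically.
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 + sp.exp(3*x))
print(sp.simplify(sp.diff(f, x) + 3*sp.exp(3*x) / (1 + sp.exp(3*x))**2))  # 0
```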
What does domain reduction mean
You are reducing the set of inputs on which the function is allowed to act: instead of considering $F$ on its whole natural domain, you restrict it to a smaller subset, i.e. you pass from $F:A\to B$ to its restriction $F|_{A'}$ for some $A'\subseteq A$.
Can someone give me some clue on how to show that rationals are well ordered? Thank you in advance.
The rational numbers are not well-ordered. A set is well-ordered by a partial order $\prec$ if every non-empty set has a minimum element. The rational numbers with their natural order are not well-ordered, since the set itself has no minimum. There is no smallest rational number. But the rational number can be well-ordered. Simply by showing that there is a bijection $f\colon\Bbb Q\to\Bbb N$, and then defining $q\prec p\iff f(q)<f(p)$. You can show that the order $\prec$ is indeed a well-ordering of $\Bbb Q$, but it is incompatible with the natural order of the rational numbers.
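For concreteness, here is a sketch of one such enumeration (one of many possible bijections): list reduced fractions $a/b$ with $b>0$ in order of increasing $|a|+b$; the position of a rational in this list then defines a well-ordering $\prec$ of $\Bbb Q$.

```python
# Enumerate Q by increasing "height" |a| + b of the reduced fraction a/b.
from fractions import Fraction
from math import gcd

def enumerate_rationals(n):
    out, height = [], 1
    while len(out) < n:
        for a in range(-height, height + 1):
            b = height - abs(a)
            if b > 0 and gcd(abs(a), b) == 1:   # reduced fractions only
                out.append(Fraction(a, b))
                if len(out) == n:
                    break
        height += 1
    return out

print(enumerate_rationals(12))
# [0, -1, 1, -2, -1/2, 1/2, 2, -3, -1/3, 1/3, 3, -4]
```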
Maximal logarithmic likelihood function of the Student-t distribution
The following used to contain a mistake, found by Hans. I have found a fix, and have incorporated it into the following proof.

Here is a sketch of an argument that the log likelihood function has at most one critical point. It should be read in conjunction with Hans's partial answer. The idea is to use the "variation diminishing property of the Laplace transform", according to which the Laplace transform of a function with $k$ sign changes cannot have more than $k$ sign-changing zero-crossings. For a function $\phi:\mathbb R^+\to\mathbb R$, let $S(\phi)$ be the maximal $k$ for which there exist $0<x_0<x_1<x_2<\cdots < x_{k}$ for which $\phi(x_i)\phi(x_{i+1})<0$ for all $0\le i < k$. Then the Laplace transform $g(s)=\int_0^\infty e^{-sx} G(x)dx$ of $G$ obeys $S(g)\le S(G)$. This topic is not well explained in Wikipedia articles, but the result used here is Theorem V.80 in vol. 2 of Pólya and Szegő's Problems and Theorems in Analysis; it is discussed at length in Karlin's book Total Positivity (see Theorem 3.1, page 21, and pages 233 and 237), in papers by I. J. Schoenberg, etc. One can think of it as a continuous analogue of Descartes' Rule of Signs. I used it in answering this MSE problem.

If the logarithm of the likelihood function had two or more local maxima, its derivative would have three or more roots, since between every two local maxima lies a local minimum. So it suffices, by the variation diminishing property of the LT, to show that what the OP, in his draft answer, calls $\tilde f$ has at most two sign changes. This seems evident numerically, but deserves proof just as much as the original problem does. Here is one way of seeing this, using another application of the variation diminishing property of the Laplace transform.

Here is the argument. First, I will change notation, using $s$ instead of $t$ and setting $y=x^2$. The claim is that, for fixed real $y\ge0$, $$\tilde f(s) = \frac{1-e^{-ys}}s +(1-y)e^{-ys}-\frac 2{1+e^{-s}}$$ has at most two sign changes as a function of $s\in\mathbb R^+$. Let $g(s)=\dfrac{1+e^{-s}}{s^2}\tilde f(s)$; clearly $g$ has as many sign changes as $\tilde f$ does. But $g$ is itself a Laplace transform: \begin{align}g(s)&=\frac{1+e^{-s}-e^{-ys}-e^{-(y+1)s}}{s^3}+(1-y)\frac{e^{-ys}+e^{-(y+1)s}}{s^2} - \frac2{s^2}\\ &=\int_0^\infty e^{-sx} G(x) dx,\end{align} from which one reads off \begin{align}G(x)&=\frac 1 2\left((x)_+^2 - (x-y)_+^2 +(x-1)_+^2 - (x-(y+1))_+^2\right) \\&+ (1-y)\left((x-y)_++(x-y-1)_+\right)-2x.\end{align} Here $(x)_+=\max(x,0)$. Since $x\mapsto (x)_+$ is continuous, so is $G$. If $y<1$ the function $G$ is piecewise quadratic on each of the intervals $(0,y)$, $(y,1)$, $(1,y+1)$, $(y+1,\infty)$; if $y>1$ then $G$ is piecewise quadratic on the intervals $(0,1)$, $(1,y)$, $(y,y+1)$, $(y+1,\infty)$, so verification of the lemma is in principle easy in a case-by-case manner. In practice, tedious and error prone.
If $y<1$ the formula for $G(x)$ reduces to $$G(x)=\begin{cases} x^2/2 -2x&0\le x<y\\ y^2/2-x-y&y\le x<1\\ x^2/2-2x+(y-1)^2/2&1\le x<1+y\\ y^2-2y-1&1+y\le x\end{cases}$$ and if $y>1$, the formula reduces to $$G(x)=\begin{cases} x^2/2 -2x&0\le x<1\\ x^2-3x+1/2&1\le x<y\\ x^2/2-2x+(y-1)^2/2&y\le x<1+y\\ y^2-2y-1&1+y\le x.\end{cases}$$ These can be merged into the following, where the cases are referred to below: $$ G(x)=\begin{cases} x^2/2 -2x&\text{A: if }0\le x<\min(1,y)\\ y^2/2-x-y&\text{B: if }y\le x< 1\\ x^2-3x+1/2&\text{C: if }1\le x< y\\ x^2/2-2x+(y-1)^2/2&\text{D: if }\max(1,y)\le x< 1+y\\ y^2-2y-1&\text{E: if }1+y< x \end{cases} $$ Note that cases B and C are mutually exclusive.

Computations show that for fixed $y$ the function $G(x)$ has at most one sign change; I sketch an argument for this below. (Omitting an analysis of the possibility of sign changes at the case boundaries.) $G$ has no sign changes in cases A or E (in A, the only possibilities are $x=0$ or $x=4$; the former is not a sign change, and $x=4$ does not obey $0\le x<\min(1,y)$. Constant functions, as in case E, do not have sign changes.) Case B has no sign changes, for the value $x=y^2/2-y$ violates $y<x<1$. In case C, a sign change could only occur at $x=(3\pm\sqrt 7)/2$, and then $1<x<y$ implies $x=(3+\sqrt7)/2$ and $y>(3+\sqrt7)/2$. In case D, a sign change can only occur at $x=2\pm\sqrt{3+2y-y^2}$, and $\max(1,y)<x<1+y$ is only possible if $x=2+\sqrt{3+2y-y^2}$ and $1+\sqrt 2<y<(3+\sqrt 7)/2$.

Putting these together: if $y<1$ then there can be no sign changes in the relevant cases A, B, D, E. If $y>1$ there might be at most one sign change in each of C, D (out of the relevant A, C, D, E), but not actually both, since that would require both $y>(3+\sqrt 7)/2$ and $y<(3+\sqrt 7)/2$. Hence, $G$ has at most one sign change among A, B, C, D, E. Finally, since $G(0)=0$, $G'(0)=-2<0$, and $G(\infty)=y^2-2y-1$, we see $G$ has exactly one sign change if $y^2-2y-1>0$ and none if $y^2-2y-1\le0$.

The meta-motivation is to shoehorn the original question into an application of the variation diminishing machinery given in my first paragraphs. The micro-motivation for my choice of $g$ (and hence of $G$) comes from the realization that $\tilde f$ is the Laplace transform of the signed measure $$\mu = \lambda_{[0,y]} + (1-y)\delta_y -2\sum_{k\ge0}(-1)^k \delta_k,$$ where $\lambda_{[0,y]}$ is Lebesgue measure restricted to $[0,y]$ and $\delta_k$ represents the unit point mass at $k$. The signed measure $\mu$ has infinitely many sign changes, but the telescoping series $\mu*(\delta_0+\delta_1)$ does not, where $*$ denotes convolution of measures, so $1+e^{-s}$ times $\tilde f$ is a better candidate for the variation diminishing trick sketched above. Dividing by a power of $s$ has the effect of smoothing the signed measure, and eliminating some small oscillations that create their own extraneous sign changes. The mistake Hans found in an earlier version of this answer was to divide by $s$, which allowed for 3 sign changes for a certain range of $y$. Dividing by $s^2$ fixed this problem, at the price of making $G$ piecewise quadratic instead of piecewise linear.
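The case analysis can also be spot-checked numerically (a sanity check on a grid, not a proof; the code just implements the displayed definition of $G$ and counts strict sign changes):

```python
# Count sign changes of G(x) on a fine grid, for a sweep of y values.
import numpy as np

def G(x, y):
    pos = lambda t: np.maximum(t, 0.0)   # (t)_+ = max(t, 0)
    return (0.5 * (pos(x)**2 - pos(x - y)**2 + pos(x - 1)**2
                   - pos(x - (y + 1))**2)
            + (1 - y) * (pos(x - y) + pos(x - y - 1)) - 2 * x)

xs = np.linspace(1e-6, 10, 200_001)
for y in np.linspace(0.1, 4.0, 40):
    s = np.sign(G(xs, y))
    changes = np.count_nonzero(s[:-1] * s[1:] < 0)
    assert changes <= 1, (y, changes)
print("at most one sign change for all sampled y")
```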
How to prove this inequality $ \Big| \frac{-a + \sqrt{a^2-b^2}}{b} \Big| < 1$
Note that $$\left|-a+\sqrt{a^2-b^2}\right|=a-\sqrt{a^2-b^2},$$ since $0<b<a$, and that \begin{align}a-\sqrt{a^2-b^2}<b&\iff a-b<\sqrt{a^2-b^2}=\sqrt{(a-b)(a+b)}\\&\iff(a-b)^2<(a-b)(a+b)\\&\iff a-b<a+b,\end{align} which is true, since $b>0$.
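A brute-force numeric confirmation (my sketch):

```python
# Random check of |(-a + sqrt(a^2 - b^2)) / b| < 1 for 0 < b < a.
import math, random

random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.01, 100.0)
    b = random.uniform(1e-4 * a, (1 - 1e-9) * a)   # ensures 0 < b < a
    assert abs((-a + math.sqrt(a*a - b*b)) / b) < 1
print("holds on all samples")
```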
Factoring a quadratic mod $p$
You need an element $a$ of order $3$; then $$x^2 -x+1=(x+a)(x+a^{-1})\in \Bbb{F}_p[x],$$ since $a+a^{-1}=-1$ (because $a^2+a+1=0$) and $a\cdot a^{-1}=1$. $x^{p-1}-1$ (of degree $p-1$) has $p-1$ distinct roots in $\Bbb{F}_p$, thus $$x^{p-1}-1=\prod_{n=1}^{p-1}(x-n)\in \Bbb{F}_p[x].$$ Assuming $p\equiv 1 \pmod 3$ (so that $3\mid p-1$), take any root $b$ of $x^{p-1}-1$ which is not a root of $x^{(p-1)/3}-1$; the order of $b$ divides $p-1$ but not $(p-1)/3$, thus it is $3m$ for some $m$, so that $a=b^m$ works.
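A small sympy sketch for a concrete prime $p\equiv1\pmod 3$ (the choice $p=13$ is mine):

```python
# Find an element of order 3 mod p and verify x^2 - x + 1 = (x+a)(x+a^{-1}).
from sympy import symbols, expand, GF, Poly

p = 13
a = next(b for b in range(2, p) if pow(b, 3, p) == 1)   # order-3 element
a_inv = pow(a, -1, p)
x = symbols('x')
lhs = Poly(x**2 - x + 1, x, domain=GF(p))
rhs = Poly(expand((x + a) * (x + a_inv)), x, domain=GF(p))
print(a, a_inv, lhs == rhs)   # 3, 9, True
```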
Convergence in probability of sample variance
$$ \frac 1n \sum (X_i - \bar{X})^2 = \frac 1n \sum (X_i^2 - 2 X_i \bar{X}) + \bar{X}^2 $$ Now apply the strong law of large numbers: $$ \frac 1n \sum X_i^2 \to EX_i^2 = \sigma^2 + \mu^2\\ \bar{X}^2\to \mu^2\\ \frac 1n \sum2 X_i \bar{X} \to 2\times \mu\times \mu $$ because algebraic operations are compatible with almost sure convergence. Summing everything, we find $(\sigma^2+\mu^2)-2\mu^2+\mu^2=\sigma^2$. Thus $\frac1n\sum (X_i - \bar X )^2\to\sigma^2$ almost surely. Since almost sure convergence implies convergence in probability, this proves that $\frac1n\sum (X_i - \bar X )^2\to\sigma^2$ in probability, as desired.
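A quick simulation of this convergence (my sketch, with $\mu=3$ and $\sigma^2=4$):

```python
# The (biased) sample variance tends to sigma^2 = 4 as n grows.
import numpy as np

rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    x = rng.normal(loc=3.0, scale=2.0, size=n)
    print(n, np.mean((x - x.mean())**2))   # -> 4.0
```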
Maximal matching without weights
The important word here is maximal (that is the reason that the "al" is underlined): you do not want a matching of maximum size. A maximum matching could not have size 3 anyway, because, for example, the edges ag, bh, ci, and df are independent, giving a matching of size 4. The idea in this exercise is that you want to find 3 edges that are pairwise independent (and thus form a matching), but at the same time every other edge must be incident with at least one of those 3. In the picture the vertices e and j are cut off (I suppose); if they are adjacent, the 3 edges bg, di, and ej seem to form a maximal matching: they clearly form an independent set, and each of a, c, f, and h has neighbours only in {b, d, e, g, i, j}.