The set of frontier points of a set is always closed
If for every $\delta>0$ there exists $\textbf{b} \in B_{\delta}(\textbf{a}) \cap S_F$, then $\textbf{a}\in (S_F)_F$. But $(S_F)_F\subset S_F$, so $\textbf{a}\in S_F$. (You can assume beforehand that $S_F\neq \mathbb{R}^n$ and $S_F \neq \emptyset$, because both cases are immediate.) Another way to see this: we can translate your definition of $x\in S_F$ as $$\forall\ \epsilon>0: B_{\epsilon}(x) \cap S \neq \emptyset \wedge \forall\ \epsilon>0: B_{\epsilon}(x) \cap (\mathbb{R}^n-S) \neq \emptyset. $$ If $y\not\in S_F$, then $$\exists\ \epsilon_0>0 : B_{\epsilon_0}(y)\cap S = \emptyset \vee \exists\ \epsilon_0>0 : B_{\epsilon_0}(y)\cap (\mathbb{R}^n - S) = \emptyset. $$ If $$\exists\ \epsilon_0>0 : B_{\epsilon_0}(y)\cap (\mathbb{R}^n - S) = \emptyset,$$ then $$\exists\ \epsilon_0 >0 : B_{\epsilon_0}(y) \subset S,$$ so no point of that ball meets $\mathbb{R}^n-S$; that is, the whole ball is contained in $\mathbb{R}^n - S_F$. Otherwise, if $$\exists\ \epsilon_0>0 : B_{\epsilon_0}(y)\cap S = \emptyset,$$ then $$\exists\ \epsilon_0 >0 : B_{\epsilon_0}(y) \subset \mathbb{R}^n-S,$$ and for the same reason the whole ball is again contained in $\mathbb{R}^n-S_F$. In either case $\mathbb{R}^n - S_F$ contains a ball around each of its points, so we can conclude that $\mathbb{R}^n - S_F$ is open.
How many three-digit odd numbers are there with no repeated digits?
Because you can't do it going last-middle-first. The multiplication principle only works when the number of options at each step is the same regardless of the earlier choices; but if we chose the middle digit before the first digit, then depending on whether the middle digit was a $0$ or not, we'd get a potentially different number of possibilities for the first digit.
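For a quick sanity check, a short Python brute force confirms the count that the last-first-middle ordering gives ($5 \cdot 8 \cdot 8 = 320$):

```python
# Count three-digit odd numbers with three distinct digits by brute force.
count = sum(1 for n in range(100, 1000)
            if n % 2 == 1 and len(set(str(n))) == 3)
print(count)   # 320, matching 5 * 8 * 8
```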
Complex analysis - existence of field $\mathbb{C}$
The smallest field extension of $\Bbb R$ in which the irreducible polynomial $x^2 + 1$ has a root is $\Bbb R[x] / (x^2 + 1)$. We want to find an isomorphism between $\Bbb R[x] / (x^2 + 1)$ and $\Bbb C$. To do this, define the evaluation homomorphism $\varphi : \Bbb R[x] \to \Bbb C$ with $\varphi(x) = i$, i.e. $\varphi(p) = p(i)$. It is surjective, since $\varphi(a + bx) = a + bi$. By direct computation, the ideal $(x^2 + 1)$ is contained in the kernel. Since $x^2 + 1$ is irreducible and $\Bbb R[x]$ is a PID, $(x^2 + 1)$ is maximal; as $\ker \varphi$ is a proper ideal containing it, $\ker \varphi = (x^2 + 1)$. Now the first isomorphism theorem gives the desired result.
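If it helps to see the quotient arithmetic concretely, here is a small sympy sketch: reducing products mod $x^2+1$ reproduces exactly the multiplication rule of $\Bbb C$.

```python
import sympy as sp

x = sp.symbols('x')
# The class of x in R[x]/(x^2+1) squares to -1, mirroring i^2 = -1:
print(sp.rem(x * x, x**2 + 1, x))   # -1
# Multiplication of general classes follows the complex multiplication rule:
a, b, c, d = sp.symbols('a b c d', real=True)
print(sp.rem(sp.expand((a + b*x) * (c + d*x)), x**2 + 1, x))
# -> a*c - b*d + x*(a*d + b*c)
```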
Geometric genus of a (possibly non-complete) intersection in P^n
I think all you can really say in general is the following. Let $N$ denote the normal bundle of $Y$ in $\mathbb P^n$. Then by the adjunction formula you have that $\omega_Y \cong \bigwedge^c N (-n-1)$ where $c$ is the codimension of $Y$ in $\mathbb P^n$. Hence, you must have $p_g(Y)=h^0(Y,\omega_Y)=h^0(Y,\bigwedge^c N (-n-1))$, which could be pretty much any nonnegative number. In order to say something substantive you need to make some assumptions on how $Y$ is embedded into $\mathbb P^n$ that are reflected in the positivity of the normal bundle $N$, specifically a bound on its first Chern class. You may also do this by making assumptions on the tangent bundle $T_Y$ of $Y$ and using the exact sequence $0 \rightarrow T_Y \rightarrow T_{\mathbb P^n} \rightarrow N \rightarrow 0$. But unless you make some more assumptions on $Y$ you can't say anything.
Combinatorial Proof of $|T| \leq 2 \binom{n}{\lfloor n/2 \rfloor}$
Not sure if this proof counts as combinatorial. Let $T_1=\{A\in T\mid \exists B\in T:B\subsetneq A\}$ and show that $T_1$ and $T\setminus T_1$ are anti-chains, which lets us apply Sperner’s Theorem to both.
Show that the following identity is formally correct
Suppose you have an arbitrary finite set of primes, e.g. $\{2,3,5\}$. Look at $$ \left(1 + \frac{1}{2^2} + \frac{1}{2^4} + \frac{1}{2^6} +\cdots \right)\left(1 + \frac{1}{3^2} + \frac{1}{3^4} + \frac{1}{3^6} +\cdots \right)\left(1 + \frac{1}{5^2} + \frac{1}{5^4} + \frac{1}{5^6} +\cdots \right). $$ This expands to $$ \sum_{a,b,c \ge 0} \frac{1}{2^{2a}} \frac{1}{3^{2b}} \frac{1}{5^{2c}} = \sum_{a,b,c \ge 0} \left(\frac{1}{2^a} \frac{1}{3^b} \frac{1}{5^c}\right)^2. $$ The fraction runs through the list of reciprocals of all possible products of $2$s, $3$s, and $5$s. Now instead of just those primes, $2$, $3$, and $5$, use the list of all primes. The set of all possible products of not-necessarily distinct primes is just the set of all positive integers.
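A quick numerical sketch of the three-prime case (truncation bounds chosen so the tails are far below double precision):

```python
# Truncated Euler product over {2, 3, 5} vs. the sum of 1/n^2 over 5-smooth n.
primes = (2, 3, 5)
product = 1.0
for p in primes:
    product *= 1 / (1 - p**-2)     # closed form of 1 + p^-2 + p^-4 + ...
smooth = sum(1 / (2**a * 3**b * 5**c)**2
             for a in range(60) for b in range(40) for c in range(30))
print(product, smooth)             # agree to double precision
```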
Field Extension, Splitting Field and Galois Theory
Let $\alpha_{1} = \alpha, \alpha_{2}, \dots, \alpha_{n}$ be the roots of $f$, and let $\Omega = \{ \alpha_{1}, \dots, \alpha_{n} \}$. The Galois group $G \cong S_{n}$ acts transitively on $\Omega$, so $\mathbb{Q}(\alpha)'$ (the subgroup of $G$ corresponding to $\mathbb{Q}(\alpha)$ in the Galois correspondence) is $S_{n-1}$. Now $S_{n}$ is $2$-transitive as $n \ge 2$, and thus primitive, so the $1$-point stabilizer $S_{n-1}$ is maximal in $S_{n}$. It follows that there are two possibilities for $\mathbb{Q}(\alpha^4) \subseteq \mathbb{Q}(\alpha)$: either it is $\mathbb{Q}(\alpha)$, or it is $\mathbb{Q}$. If the latter holds, then $\alpha$ is a root of a polynomial $g = x^{4} - c$, for some $c \in \mathbb{Q}$, and the roots of $g$ are $\alpha, -\alpha, i \alpha, -i \alpha$. Clearly $g$ has no roots in $\mathbb{Q}$ (otherwise $\pm \alpha \in \mathbb{Q}$, or $\pm i \alpha \in \mathbb{Q}$ so that $\alpha^{2} \in \mathbb{Q}$, and in any case $n < 3$), so either it splits in $\mathbb{Q}[x]$ as a product of two irreducibles of degree $2$ (which is excluded by assumption), or it is irreducible in $\mathbb{Q}[x]$, and thus $g = f$. But then the group $G$ is not $S_{4}$, but rather of order at most $8$, as $L = \mathbb{Q}(\alpha, i)$, so that $\lvert L : \mathbb{Q} \rvert \le \lvert \mathbb{Q}(\alpha)(i) : \mathbb{Q}(\alpha) \rvert \cdot \lvert \mathbb{Q}(\alpha) : \mathbb{Q} \rvert \le 2 \cdot 4 = 8$.
Similar, but different curve to critically damped harmonic oscillator solution
It turns out that you can generate the desired curve with any damped oscillator with a strong enough spring force (large values of $k$). (I may edit this answer with some graphs once I have them)
Statements regarding sequences and series
Most of your answers look alright. A few remarks: $7.$ "If $\sum a_n$ converges then $\sum \frac{1}{a_n}$ diverges to infinity." You marked this True. Do you know anything about the sign of $a_n$? If not, note that the convergence of $\sum a_n$ implies that $a_n \to 0$, but not that $a_n > 0$; i.e. it's possible that $a_n <0$. Also note that $\sum \frac1{a_n}$ doesn't even have to diverge to $\pm\infty$, for example: $$a_n = \frac{(-1)^n}{n} \implies \frac{1}{a_n} = (-1)^nn$$ $10.$ "If $\sum a_n$ diverges and $(b_n)$ is bounded then $\sum a_nb_n$ diverges to infinity." You marked this True. If $\sum a_n$ diverges, it doesn't necessarily diverge to infinity (think of $1-1+1-1+1-\ldots$ for example); now take $b_n \equiv 1$. And as pointed out by Akiva Weinberger in the comments, you can even make it converge (e.g. take $b_n = \tfrac{1}{n}$ and keep $a_n$ as above).
Equivalence Relation and Quotient Space
HINT: Let $X=\{0,1,2\}$ have the indiscrete topology. Find a map $f:\Bbb R\to X$ such that if $A\subseteq X$, then $f^{-1}[A]$ is open in $\Bbb R$ if and only if $A=X$ or $A=\varnothing$. Note that this will be the case if $f^{-1}[\{x\}]$ is neither closed nor open for each $x\in X$. (Why?)
Combination to find integers satisfying a condition
No, the sequence $x_1,x_2,\dots,x_{k}$ is not necessarily increasing, so your conclusion is not correct. Let $t_j=x_j-j\geq 0$ for each $j$; then we have to count the number of non-negative integer solutions of $$t_1+t_2+\dots+t_k=n-\frac{k(k+1)}{2}$$ which is, by Stars-and-Bars, given by $$\binom{n-\frac{k(k+1)}{2}+k-1}{k-1}=\binom{\frac{2n-k^2+k-2}{2}}{k-1}.$$ P.S. My answer is correct. Probably there is a typo in your book. The integer $\binom{\frac{2n-k^2+k-2}{2}}{k-1}$ is precisely the coefficient of $t^n$ in $$ (t + t^2 +t^3+ \dots)(t^2 + t^3 +t^4+\dots)\dots (t^k + t^{k+1}+t^{k+2}+\dots),$$ that is $$[t^n]\prod_{j=1}^k\frac{t^j}{1-t}=[t^{n-\frac{k(k+1)}{2}}](1-t)^{-k} =(-1)^{{n-\frac{k(k+1)}{2}}}\binom{-k}{n-\frac{k(k+1)}{2}}=\binom{n-\frac{k(k+1)}{2}+k-1}{k-1}.$$ Numerical example. Take $n=8$ and $k=3$; then it is easy to see that $$ (t + t^2 +t^3+ \dots)(t^2 + t^3+ t^4 +\dots)(t^3 + t^{4}+t^{5}+\dots) =t^6+3t^7+6t^8+\dots$$ and the coefficient of $t^8$ is $6$: $$\binom{\frac{16-9+3-2}{2}}{3-1}=\binom{4}{2}=6.$$
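The numerical example is easy to confirm by brute force (counting solutions with $x_j \ge j$, as in the generating function):

```python
from itertools import product
from math import comb

n, k = 8, 3
count = sum(1 for xs in product(range(1, n + 1), repeat=k)
            if all(x >= j for j, x in enumerate(xs, start=1)) and sum(xs) == n)
print(count, comb(n - k*(k+1)//2 + k - 1, k - 1))   # both 6
```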
Conditional probabilitiy exercise.
Clearly, $$P(X = 0) = \int_{0}^{+\infty}P(X = 0\mid m = t)f(t)\,dt$$ where $f(t)$ is the probability density of the Gamma$(2, 1)$ distribution. We have $$P(X = 0\mid m = t) = e^{-t}$$ and $$f(t) = te^{-t}$$ Substitution gives the simple convergent integral $\int_0^{+\infty} te^{-2t}\,dt$. And yes, it evaluates to $1/4$.
Why isn't the SVD simplified to USV (no transpose)?
One way is to write $A=USV^*$ as $AV=US$, which says that the image of the basis in the columns of $V$ is the basis in the columns of $U$ properly scaled. This is the exact content of the SVD. If you write $A=USV$, then the statement would talk about the image of the basis in the rows of $V$ being the columns of $U$, not very nice.
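A two-line numpy check of the $AV=US$ reading (random matrix, arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
U, s, Vh = np.linalg.svd(A, full_matrices=False)   # A = U S V*
V = Vh.T
# A V = U S: each right-singular vector maps to the scaled left-singular vector
assert np.allclose(A @ V, U * s)
```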
Uses: Total Variation
One of the several theorems known as the Riesz Representation Theorem tells us that if $X$ is a locally compact Hausdorff topological space, then any continuous linear functional $\phi$ on the space $C_c(X)$ of compactly supported continuous functions on $X$ can be represented uniquely by a bounded Radon measure $\mu_\phi$ such that $\phi(f) = \int f \mathrm{d} \mu_\phi$ for all $f \in C_c(X)$. This isomorphism has some cool properties. Firstly, the total variation of $\mu_\phi$ is equal to the norm of $\phi$. Secondly, the positive functionals correspond to the nonnegative measures (as opposed to the “signed measures”). Thirdly, the positive linear functionals with $\| \phi \| = 1$ correspond to the Radon probability measures on $X$. Finally, note that if $X$ is compact, then $C_c(X) = C(X)$.
Differential Equation with Integral
Differentiating both sides of $y'+4y+5\int_0^x y\,dt = e^{-x}$ with respect to $x$, we have: $$\tag 1 y''+4y'+5y = -e^{-x}$$ We now have a second order DEQ, so we need two initial conditions and only have one. We can use the first IC, $y(0)=0$, in the original equation to find the second one: $$y'(0) + 4 y(0) + 5\int_0^0 y\,dt = e^{-0} \rightarrow y'(0) = 1$$ Now, we can use the characteristic equation to find the homogeneous solution and undetermined coefficients (guessing) to find the particular solution. For the homogeneous, we have: $$m^2 + 4m + 5 = 0 \rightarrow m_{1,2} = -2 \pm i$$ This gives a homogeneous solution of: $$y_h(x) = e^{-2x} (c_1 \cos x + c_2 \sin x)$$ For the particular solution, we guess a solution of the form $y_p = ae^{-x}$, substitute and solve for $a$, yielding: $$y_p(x) = -\dfrac{1}{2}e^{-x}$$ Our solution is: $$y(x) = y_h(x) + y_p(x) = e^{-2x} (c_1 \cos x + c_2 \sin x)-\dfrac{1}{2}e^{-x}$$ Now, substitute in the two ICs and solve for $c_1$ and $c_2$. Spoiler $y(x) = \dfrac{1}{2} e^{-2x}(\cos x + 3 \sin x - e^x)$
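The spoiler is quick to verify symbolically; a sympy sketch checking both the differentiated equation and the original integro-differential equation:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(1, 2) * sp.exp(-2*x) * (sp.cos(x) + 3*sp.sin(x) - sp.exp(x))
# differentiated equation (1): y'' + 4y' + 5y = -e^{-x}
assert sp.simplify(y.diff(x, 2) + 4*y.diff(x) + 5*y + sp.exp(-x)) == 0
# original equation and initial conditions
assert sp.simplify(y.diff(x) + 4*y + 5*sp.integrate(y, (x, 0, x)) - sp.exp(-x)) == 0
assert y.subs(x, 0) == 0 and y.diff(x).subs(x, 0) == 1
```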
A square root of i with negative imaginary part
It is clearest to use polar coordinates first. If $z = re^{i\theta}$ (polar form of a complex number), then the $n^{th}$ power of $z$ is simply $z^n = r^ne^{in\theta}$. Now, to solve $z^2 = i$: Since $i = e^{i\pi/2}$, we are looking for $z = re^{i\theta}$ such that $r^2 = 1$ and $2\theta = \pi/2$. Well if we take $r = 1$, then $\theta = \pi/4$ OR $\theta = 5\pi/4$ (since $5\pi/2$ is the same angle measure as $\pi/2$). A little thought shows there are no other distinct complex numbers that work, so $z = e^{i\pi/4}$ and $z = e^{i5\pi/4}$ are the square roots of $i$. Then use Euler's formula to express these in rectangular, and you find that $e^{i5\pi/4} = (-1 - i)/\sqrt{2}$, having negative imaginary part.
Given particle undergoing Geometric Brownian Motion, want to find formula for probability that max-min > z after n days
The following horrible formula for the joint distribution of max, min and end value of a Brownian motion was copied without guarantees from the Handbook Of Brownian Motion (Borodin/Salminen), 1.15.8, p.271. First, for simplicity, this is only written for $\sigma=1,t=1$, and the more general case comes directly from scaling. If we write $W$ for the Brownian motion at $t=1$, $m$ for the minimum and $M$ for the maximum over $[0,1]$, then for $a < \min(0,z) \le \max(0,z) < b$ it holds that $$ P(a < m, M < b, W \in dz) = \frac{1}{\sqrt{2\pi}}e^{(\mu z-\mu^2/2)} \cdot \sum_{k =-\infty}^{\infty} \Bigl(e^{-(z+2k(b-a))^2/2} - e^{-(z-2a + 2k(b-a))^2/2} \Bigr) dz\; . $$ (Apologies for using $z$ here in a different context.) If one really wants to, one can compute from this an even more horrible formula for the above probability. It is now in principle possible to derive from this a formula for what you want, by finding the density function $p_{m,M,W}$, and using $$ P(e^M-e^m\le r) = \int_{(x,y,z)\ :\ e^x \le e^z \le e^y \le e^x + r} p_{m,M,W}(x,y,z)\, d(x,y,z)\;, $$ but I shudder at the monster I expect to fall out from this. It might be better to give up, simulate the probability in question, and find some asymptotics. However, if you would like to proceed with it, I suggest you look not into the Handbook Of Brownian Motion, but rather into this paper, as it is much more readable.
Help with PDE/Green's formula question.
I am assuming $b>0$. Let's first examine the smallest eigenvalue of $-\Delta$ with $u=0$ on the boundary: if $$ -\Delta u = \lambda u $$ multiply both sides by $u$ and integrate: $$ \int_D -u\Delta u \,dx= \int_D \lambda u^2\,dx \tag{1} $$ Green's formula comes from using the divergence theorem $$ \int_D \nabla \cdot \boldsymbol{F} \,dx = \int_{\partial D} \boldsymbol{F}\cdot\boldsymbol{n} \,dS\tag{$\dagger$} $$ on $\boldsymbol{F} = \psi \nabla \varphi$, thus $\nabla \cdot \boldsymbol{F} = \psi \Delta \varphi + \nabla \varphi \cdot \nabla \psi$ and $(\dagger)$ becomes: $$\int_D \psi \Delta \varphi \,dx = -\int_D \nabla \varphi \cdot \nabla \psi\, dx +\int_{\partial D} \psi ( \nabla \varphi \cdot \boldsymbol{n} )\, dS $$ Plugging this formula back into (1), and using $u=0$ on the boundary: $$ \int_D \nabla u\cdot\nabla u \,dx - \int_{\partial D} u ( \nabla u \cdot \boldsymbol{n} )\, dS= \int_D |\nabla u|^2 \,dx= \int_D \lambda u^2\,dx $$ If $a<\lambda_1$, the smallest possible eigenvalue, this implies for ANY nonzero permissible $u$ in some space (in this case $u$ solves the original PDE) $$ \int_D |\nabla u|^2 \,dx \geq \int_D \lambda_1 u^2\,dx > \int_D a u^2\,dx\tag{2} $$ Now multiply both sides of $$ \Delta u+au-bu^2 = 0 $$ by $u$ and integrate over $D$: $$ \int_D(u\Delta u + au^2 -bu^3) dx = 0 $$ Again by Green's identity $$ \int_D(-\nabla u\cdot \nabla u + au^2 -bu^3) dx + \int_{\partial D} u ( \nabla u \cdot \boldsymbol{n} )\, dS = 0 $$ By $u=0$ on the boundary, the above integral is: $$ \int_D(-|\nabla u|^2 + au^2 -bu^3) dx = 0 $$ which gives: $$ \int_D bu^3 \,dx = \int_D(-|\nabla u|^2 + au^2) dx < 0 $$ by (2). Therefore there does not exist a positive $u$ on $D$, since otherwise $\displaystyle\int_D bu^3 \,dx >0$.
Show that $f$ has absolute minimum at $x=0$.
If $x\ne 0$, $x^4>0$ and $2+\sin(1/x)\ge1$, so certainly $x^4(2+\sin(1/x))>0=f(0)$.
Analogue of Leibniz Rule for Stochastic Integrals
I am probably very late in answering this, but maybe the proof is useful for someone like me coming to this question just now. Using the integral form of $f$: \begin{align*} Y_t = & - \int_t^T \left[ f(0, u) + \int_0^t \alpha(s, u)ds + \int_0^t \sigma(s, u) dw_s \right] du \,, \end{align*} and using Fubini's theorem (twice): \begin{align*} Y_t = & - \int_t^T f(0, u) du - \int_0^t \int_t^T \alpha(s, u) du ds - \int_0^t \int_t^T \sigma(s, u) du dw_s \,, \\ = & - \int_t^T f(0, u) du - \int_0^t \int_s^T \alpha(s, u) du ds - \int_0^t \int_s^T \sigma(s, u) du dw_s \,, \\ & + \int_0^t \int_s^t \alpha(s, u) du ds + \int_0^t \int_s^t \sigma(s, u) du dw_s \,, \\ = & - \int_0^T f(0, u) du - \int_0^t \int_s^T \alpha(s, u) du ds - \int_0^t \int_s^T \sigma(s, u) du dw_s \,, \\ & + \int_0^t f(0, u) du + \int_0^t \int_0^u \alpha(s, u) ds du + \int_0^t \int_0^u \sigma(s, u) dw_s du \,. \end{align*} Given that $r(u) = f(u, u)$ and using the definition of $Y_0$, we have: \begin{align*} Y_t = & Y_0 - \int_0^t \int_s^T \alpha(s, u) du\,ds - \int_0^t \int_s^T \sigma(s, u) du\,dw_s + \int_0^t r(u) du \,, \end{align*} which is the desired result in integral form.
A little help proving this statement?
Since we are given that $b_n>0$, for $\sqrt{a_nb_n}$ to exist we need $a_n > 0$. Now since $a_n$ is a monotone decreasing sequence bounded below, we have that $\lim_{n \to \infty} a_n$ exists. We shall now prove that $b_n$ is bounded above by $a_1$. If $b_1 > a_1$, we then obtain that $b_2 \in (a_1,b_1)$, which gives us $b_2 < b_1$, contradicting the fact that $b_n$ is a monotone increasing sequence. Hence, $b_1 < a_1$. Now by induction if $b_k < a_1$, then $b_{k+1} = \sqrt{a_kb_k} \le \sqrt{a_1 b_k} < a_1$. Hence, we now have that $b_n$ is a monotone increasing sequence bounded above by $a_1$. Hence, $\lim_{n \to \infty} b_n$ exists. Now let's prove that the limits are equal. Let $\lim_{n \to \infty} a_n = A$ and $\lim_{n \to \infty} b_n = B$. Since we have $$b_{n+1} = \sqrt{a_n b_n} \implies \lim_{n \to \infty} b_{n+1} = \lim_{n \to \infty} \sqrt{a_n b_n} \implies \lim_{n \to \infty} b_{n+1} = \sqrt{\lim_{n \to \infty} a_n \lim_{n \to \infty} b_n}$$ this gives us that $$B = \sqrt{AB} \implies B = 0 \ \text{or}\ B = A.$$ However, $B=0$ is not possible, since $b_n$ is a positive monotone increasing sequence. Hence, we obtain that $B=A$, i.e., $$\lim_{n \to \infty} b_{n} = \lim_{n \to \infty} a_n.$$
Weak law and strong law of large numbers and modes of convergence
The fact that the strong law of large numbers implies the weak law of large numbers is contained in the following property: if $(\Omega,\mathcal F,\Bbb P)$ is a probability space and $\{X_n\}$ a sequence of real-valued random variables which converges almost everywhere to $X$, then it converges in probability to $X$. To see that, we can consider $X_n\to 0$ and $X_n\geq 0$, replacing $X_n$ by $|X_n-X|$ if necessary. The set $$C:=\bigcap_{p\geq 1}\bigcup_{n\geq 1}\bigcap_{k\geq n}\{\omega\in\Omega,X_k\leq \frac 1p\},$$ has, by hypothesis, measure $1$. Fix $\varepsilon>0$ and $p_0$ with $p_0^{-1}\leq \varepsilon$. We have \begin{align} \limsup_k\Bbb P\{X_k\geq \varepsilon\}&\leq \limsup_k\Bbb P\{X_k\geq p_0^{-1}\}\\ &\leq\inf_{n\in\Bbb N}\Bbb P\left(\bigcup_{k\geq n}\{X_k\geq p_0^{-1}\}\right)\\ &\leq \Bbb P\left(\bigcap_{n\in\Bbb N}\bigcup_{k\geq n}\{X_k\geq p_0^{-1}\}\right)\\ &\leq \Bbb P(\Omega\setminus C)=0, \end{align} which gives the wanted result.
Does my insurance company commit Gambler's Fallacy, or do I?
The probability of your car being damaged on any given day is not increased just because it was or was not damaged on a previous day. But over a period of time, as time grows large, the probability of your car being damaged at least once approaches 1: with a fixed daily probability $p>0$ and independent days, the probability of at least one damage in $n$ days is $1-(1-p)^n \to 1$. The gambler's fallacy is when one assumes that an independent event has a higher probability due to it not having occurred previously. In this case the gambler's fallacy is not being committed, as what you're concerned with is the probability of your car being damaged over a period of time.
Show that a given equation has no solutions in integers
By my comment above, we have that $(x-y)=\pm 1, \pm 7, \pm 11$ or $\pm 77$. Similarly, $(x-2y)=\pm 1, \pm 7, \pm 11$ or $\pm 77$. Case I: Assume $x-y=1=x-2y$, then $y=0$ and $x=1$, but then $(x-3y)(x-2y)(x-y)(x+2y)(x+3y)=1\neq 77$. Case II: Assume $x-y=-1$ and $x-2y=1$, then $y=-2$ and $x=-3$, but then $x+3y=-9$ which is not a divisor of $77$. Case III: Assume $x-y=7$ and $x-2y=1$, then $y=6$ and $x=13$, but then $x+2y=25$ is not a divisor of $77$. And so on. In principle you can proceed in this fashion and exclude all $64$ cases. You can remove many cases by making some more observations. You can also work in a reverse way. Notice that $77=1\cdot 77$ or $ 7\cdot 11$. (or $-1\cdot -77$ or $-7\cdot -11$). Thus either one factor is $77$ and all others are $1$. Or one factor is $7$, one other factor is $11$ and all others are $1$ (up to signs). I admit that this seems like a brute force method, but it is doable and will work.
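If you want reassurance before grinding through the cases, an exhaustive search is cheap. A Python sketch, assuming (as the case analysis above suggests) that the equation is $(x-3y)(x-2y)(x-y)(x+2y)(x+3y)=77$; the search window is large enough because each factor must divide $77$:

```python
# Every factor divides 77, so |x - y| <= 77 and |x - 2y| <= 77 bound x and y.
sols = [(x, y)
        for x in range(-400, 401) for y in range(-200, 201)
        if (x - 3*y)*(x - 2*y)*(x - y)*(x + 2*y)*(x + 3*y) == 77]
print(sols)   # [] : no integer solutions
```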
Integral of measure of super level sets
One possible way to rewrite the first integral is by applying Tonelli's Theorem. Indeed, \begin{align} \int_a^{\infty}\mu(\{|f| > r\})\,dr = &\ \int_a^{\infty}\int_{\{x:|f(x)| > r\}}\,d\mu(x)\,dr \\ = &\ \int_{\{x:|f(x)| > a\}}\int_a^{|f(x)|}\,dr\,d\mu(x)\\ = &\ \int_{\{|f| > a\}}(|f| - a)\,d\mu \\ = &\ \int_{X}(|f| - a)_+\,d\mu. \end{align} I hope this helps!
Mathematical Induction Question - forgetting a simple rule?
$\frac{3(5^{k+1}-1+4 \cdot 5^{k+1})}{4}= \frac {3[(1 + 4)5^{k+1} - 1]}{4}=...$ Hint: $1+4 = 5$.
Equivalence of limit at infinity and one-sided limit at zero
The theorem might say the following: Suppose $g$ is continuous at $a$ and $g(x)$ differs from $g(a)$ for $x$ sufficiently close to $a$. Further suppose $\lim\limits_{u\to g(a)} f(u)$ exists. Then $$ \lim_{x\to a} f(g(x)) = \lim_{u\to g(a)} f(u) $$ In this case you would have $g(x) = \dfrac 1 x$ and $a=+\infty$. You could regard $+\infty$ as a point in a space $\mathbb R\cup\{+\infty,-\infty\}$ and regard intervals $(m,+\infty]$ as its open neighborhoods and write a proof in the language of topology. Or you could write an $\varepsilon$-$\delta$ proof.
Find an angle in a triangle with cevians
Yes, you can solve it with any angles. I'll only use the Sine Law. Let $CP$ intersect $AB$ at $K.~$ Let $AC=a.~$ Then $AB=a,~$ $$ BC=\dfrac{a\sin 40^{\circ}}{\sin70^{\circ}},\quad BP=\dfrac{a\sin 20^{\circ}\sin 40^{\circ}}{\sin 120^{\circ}\sin 70^{\circ}},\quad BK=\dfrac{a\sqrt{3}\sin 20^{\circ}\sin 40^{\circ}}{2\sin 120^{\circ}\sin 70^{\circ}}, $$ $$ KP=\dfrac{a\sin 20^{\circ}\sin 40^{\circ}}{2\sin 120^{\circ}\sin 70^{\circ}},\quad AK=\dfrac{a\sin(90^{\circ}-x)\sin 20^{\circ}\sin 20^{\circ}\sin 40^{\circ}}{2\sin x\sin 120^{\circ}\sin 70^{\circ}}. $$ Substitute those values of $AK$ and $BK$ into $AK+BK=a.~$ You'll get an equation in terms of $x$. The solution is $x=10^{\circ}.~$ See WolframAlpha.
A family of countable sets
This is not true. The least uncountable ordinal, $\omega_1$, is the union of all the smaller ordinals, all of which are countable, and all of which make a linearly ordered family as in your question. However, you can prove that under the requirements you pose, the only counterexamples are those of cardinality $\aleph_1$. So without additional hypotheses (specifically the continuum hypothesis) you cannot use the real numbers as a counterexample. Also, your attempt to use the rational numbers is futile, since the set of rational numbers is countable.
Putnam and Beyond 2e Section 1.5 Example
I think the removed markers leave no actual, empty spots behind. Or, equivalently, "reverse the two neighbouring markers" looks past these open spots to the next remaining marker. Otherwise, specifying that there are two of them seems a bit strange. We have the following sequence of moves: $$\begin{array}{llllll} W&W&W&W&W&W\\ W&W&B&O&B&W\\ B&O&W&O&B&W\\ W&O&O&O&W&W\\ B&O&O&O&O&B\end{array} $$ where $W$ is a marker with the white side up, $B$ is a marker with the black side up, and $O$ is a removed marker.
2 questions about counting and permutation
Question 2: We use the Principle of Inclusion/Exclusion. There are $\frac{12!}{3!3!3!3!}$ permutations (words). We now count the bad words, in which there are $3$ consecutive occurrences of a or of b or of c or of d. We count the words with $3$ consecutive a. Group the three a into a superletter, which we will call A. Then we have $10$ "letters," which we can arrange in $\frac{10!}{3!3!3!1!}$ ways. The same is true for $3$ consecutive b, c, or d. But if we add these $4$ numbers (or equivalently multiply the answer for aaa by $4$), the number $4\cdot \frac{10!}{3!3!3!1!}$ double-counts the words that have both aaa and bbb, and the same for all $\binom{4}{2}$ ways of choosing $2$ letters. By replacing aaa by the "letter" A, and bbb by B, we can see that there are $\frac{8!}{3!3!1!1!}$ words that have both a triple a and a triple b. So our new estimate of the number of bad words is $4\cdot \frac{10!}{3!3!3!1!}-\binom{4}{2}\cdot \frac{8!}{3!3!1!1!}$. However, we have subtracted too much, for we have subtracted once too many times the number of words that have three triple letters. Our new estimate of the number of bad words is $4\cdot \frac{10!}{3!3!3!1!}-\binom{4}{2}\cdot \frac{8!}{3!3!1!1!}+\binom{4}{3}\cdot \frac{6!}{3!1!1!1!}$. However, we have added back once too often the bad words with all $4$ letters occurring in groups of $3$. So we need to subtract $\binom{4}{4}\cdot\frac{4!}{1!1!1!1!}$. Now put things together. The number of good words is $$\binom{4}{0}\cdot \frac{12!}{3!3!3!3!}-\binom{4}{1}\cdot \frac{10!}{3!3!3!1!}+\binom{4}{2}\cdot \frac{8!}{3!3!1!1!}-\binom{4}{3}\cdot \frac{6!}{3!1!1!1!}+\binom{4}{4}\cdot\frac{4!}{1!1!1!1!}.$$ Remark: If $k_1+k_2+\cdots +k_t=n$, then $\frac{n!}{k_1!k_2!\cdots k_t!}$ is often denoted by the multinomial symbol $\binom{n}{k_1,k_2,\dots,k_t}$. That notation would make our formulas look better. We do not wish to write out solutions for Question 1. But the answer to (a) will turn out to simplify to $\frac{18!}{3!3!3!3!3!3!}$. Part (b) is ambiguous, as it is not clear whether we still want $3$ for each person. But then the problem is trivial. If there is no restriction on the numbers given to each person, then a Stars and Bars argument gives the answer $\binom{23}{5}$.
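The inclusion/exclusion count for Question 2 can be verified by brute force over the $369600$ multiset permutations (a Python sketch; it takes a few seconds):

```python
from math import factorial
from sympy.utilities.iterables import multiset_permutations

def good(word):
    s = ''.join(word)
    return not any(ch * 3 in s for ch in 'abcd')

brute = sum(1 for w in multiset_permutations('aaabbbcccddd') if good(w))
formula = (factorial(12)//6**4 - 4*factorial(10)//6**3
           + 6*factorial(8)//6**2 - 4*factorial(6)//6 + factorial(4))
print(brute, formula)   # both 308664
```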
Irreducibility of a polynomial over rationals.
No that is not enough. That would only show that it has no linear factors. As Daniel pointed out, it could factor as a product of two quadratic polynomials. You could try a couple of approaches: 1) Use the factorization of this polynomial over $\mathbb{C}$ to conclude that it cannot factor over $\mathbb{Q}$. Factor it completely over $\mathbb{C}$ and show that no product of these factors (other than the product of all four of them) lies in $\mathbb{Q}[x]$. 2) Make a change of variables and try to apply the Eisenstein Criterion.
Showing $\lim_{m\rightarrow\infty} \left(1-\frac1{m^2}\right)^m=1$
$$\left(1-\frac{1}{m^2}\right)^m=\left(1-\frac{1}{m}\right)^m\times \,\,\left(1+\frac{1}{m}\right)^m$$
Find the minimal polynomial of $\sqrt[3]{3} + \sqrt{5}$ over $\mathbb{Q}$.
If $x=\sqrt[3]3+\sqrt5$, then $\left(x-\sqrt5\right)^3=3$. In other words $x^3-3 \sqrt{5} x^2+15 x-5 \sqrt{5}-3=0$. But\begin{align}x^3-3 \sqrt{5} x^2+15 x-5 \sqrt{5}-3=0&\iff x^3+15x-3=(3x^2+5)\sqrt5\\&\implies(x^3+15x-3)^2=5(3x^2+5)^2\\&\iff x^6-15 x^4-6 x^3+75 x^2-90 x-116=0.\end{align}
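(One still has to check this degree-$6$ polynomial is irreducible; sympy can confirm both that and the computation above.)

```python
from sympy import cbrt, sqrt, symbols, minimal_polynomial

x = symbols('x')
print(minimal_polynomial(cbrt(3) + sqrt(5), x))
# -> x**6 - 15*x**4 - 6*x**3 + 75*x**2 - 90*x - 116
```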
KKT condition for the proximal algorithm
I don't know why this is being called a "KKT condition", but it is straightforward to say that whenever $Q$ is a convex set, $g$ is a function $Q \to \mathbb R$, and $x^* = \arg \min_{x \in Q} g(x)$, we must have $\langle \nabla g(x^*), y - x^*\rangle \ge 0$ for all $y \in Q$. All this is saying is that, when moving from $x^*$ in the direction of $y$, the rate of change of $g$ is non-negative. This is true even when $x^*$ is just a local minimizer. The hypothesis that $Q$ is convex, though, is necessary. Without that, it's possible that an arbitrarily small change from $x^*$ in the direction of $y$ immediately leaves $Q$ - and in that case, it's fine if $g$ decreases in that direction. In this particular example, $g(x) = f(x_k) + \langle f'(x_k), x-x_k\rangle + \frac1{2h_k}\|x - x_k\|^2$, and we can compute that $$\nabla g(x) = f'(x_k) + \frac1{h_k}(x-x_k).$$ Going the other way for a convex function such as this example's $g(x)$: the condition says that $x_{k+1}$ is a local minimizer along any line through $x_{k+1}$; therefore (by convexity) it's a global minimizer along any line through $x_{k+1}$; therefore it is a global minimizer on all of $Q$.
Problem understanding invariant subspaces and foliations
I think you're asking this in the context of the controlled dynamical system $(\mathcal{U},\Sigma): \dot{x}(t)=Ax(t)+Bu(t)$, but I'll try to keep the context general. First of all, your definition of invariant subspace doesn't seem right to me (it's correct if you denote a proper subset by the symbol $\subset$). If $A \in \text{Hom}(V,V)=\mathcal{L}(V,V)$ is a linear map, a subspace $\mathcal{S} \subseteq V$ is called $A$-invariant if $A\mathcal{S} \subseteq \mathcal{S}$. For now let's consider $V:=\mathbb{R}^n$; then we can also restate $A$-invariance as: a subspace $\mathcal{S} \subseteq \mathbb{R}^n$ is $A$-invariant if the matrix $T_0 \in \mathbb{R}^{n \times r}$ whose columns are basis elements of $\mathcal{S}$ satisfies $AT_0=T_0M$ for some $M\in \mathbb{R}^{r \times r}$, $r \le n~,~r,n \in \mathbb{Z}_{>0}$. It literally means that under the action of the operator $A$, vectors in $\mathcal{S}$ remain in $\mathcal{S}$. Some exercises for you: if $A$ is an isomorphism (i.e. an algebraic isomorphism) then show $\text{(a)}\{0\}~\text{(b)}V ~\text{(c)}\text{Null}(A)~\text{(d)}\text{Im}(A)$ are $A$-invariant. Now, to show the connection between invariant subspaces and eigenspaces, I'll take the simplest case; the rest is up to you to figure out. Consider a one-dimensional invariant subspace: suppose $0 \neq v \in V$ and let $U:=\left\{ \lambda v : \lambda \in \mathbb{K}\right\}=\text{Span}(v)$, where $\mathbb{K}:=\mathbb{R} ~\text{or}~\mathbb{C}$. Now if $U$ is invariant under $A$ then from the definition we have $Av=\lambda v~,~\lambda \in \mathbb{K}$, which gives us the motivation to define a quantity called an eigenvalue: if $A \in \mathcal{L}(V,V)$, $\mathbb{K} \ni \lambda$ is called an eigenvalue of the operator $A$ if $\exists v \in V~,~v \neq 0$ : $Av=\lambda v$, and $(\lambda,v)$ is called an eigenpair. Hope this makes things clear. Further, we know that $\mathcal{S}=\text{Span}\{v_1,v_2,\ldots,v_r\}$, and define $T_0:=\left[ v_1 \cdots v_r \right]$; now consider the case when the eigenvalue matrix $M$ is not diagonalizable (the diagonalizable case can be handled similarly), then for some matrix $W$ s.t. $\text{det}(W) \neq 0$ we have $MW=W \Lambda$, where $\Lambda$ is the Jordan matrix of eigenvalues. Now define $V:=T_0W$; to this end, we look at $T_0MW=T_0W\Lambda=V\Lambda$, but we know that $T_0M=AT_0$, which gives us $T_0MW=AT_0W=AV$, so from these two equations we see that $AV=V\Lambda$, which is the eigenvalue equation, implying that the columns of $V$ actually form eigenvectors (and generalized eigenvectors) of $A$ associated with $\Lambda$. Now let $T_1$ be a matrix whose columns are the eigenvectors of $\mathcal{S}$-perp, i.e. $\mathcal{S}^{\perp}$. Then the matrix formed by stacking columns of $T_0$ and $T_1$ is, say, $T:=\left[T_0 ~~T_1 \right]$ and $\text{det}(T) \neq 0$. Then we have $T_{i}:=T^{-1}=\begin{pmatrix}T_{i1} \\T_{i2} \end{pmatrix}$ and $TT_{i}=T_0T_{i1}+T_1T_{i2}=\mathbb{I}$. Also $$T_iT=\begin{pmatrix} T_{i1}T_0 &T_{i1}T_1 \\T_{i2}T_0&T_{i2}T_1\end{pmatrix}=\begin{pmatrix}\mathbb{I}_r &0\\0&\mathbb{I}_{n-r}\end{pmatrix} $$ Finally $$\begin{align}T^{-1}AT&=T_{i}AT=T_i\left[AT_0~~AT_1\right]=\begin{pmatrix}T_{i1} \\T_{i2} \end{pmatrix}\left[T_0M~~AT_1\right]\\&=\begin{pmatrix}T_{i1}T_0M&T_{i1}AT_1\\T_{i2}T_0M &T_{i2}AT_1\end{pmatrix}=\begin{pmatrix}M&T_{i1}AT_1\\0&T_{i2}AT_1\end{pmatrix}\end{align}$$ This is the block-triangular matrix that comes out; a small numerical sketch of this decomposition follows.
Similar matrices arise in the context of controllability analysis as a consequence of the reachability space, i.e. $$\mathfrak{R}_0\underbrace{=}_{[1]}\langle A|B\rangle:=B+AB+\cdots+A^{n-1}B$$ is $A$-invariant, i.e. $A\mathfrak{R}_0 \subseteq \mathfrak{R}_0$; in fact it's the smallest $A$-invariant subspace containing $\operatorname{Im} B$. Similarly the unobservable subspace is the biggest $A$-invariant subspace (contained in the kernel of the output map). I guess foliation is a bit of an over-kill in this context. A foliation basically comes from an equivalence relation on an $n$-manifold, which in this case is $\mathbb{R}^n$; surely it's a smooth manifold with a single-chart atlas, i.e. $\left(\mathbb{R}^n,(\text{id}:\mathbb{R}^n \to \mathbb{R}^n) \right)$. Take any subspace $Y\subseteq \mathbb{R}^n $ and define an equivalence relation: $$\left( x \sim y ~\text{if}~x-y \in Y ~\forall x,y \in \mathbb{R}^n \right)$$ and define $\left[x\right]:=\left\{y \in \mathbb{R}^n:y \sim x\right\}$; the classes are connected, injectively immersed submanifolds. I think this is how the equivalence classes have been defined on the initial conditions, which makes the solutions stay in the same algebraic structure. [1]: Note this is not an inner product or "bra-ket" notation; Wonham, Murray: Linear Multivariable Control uses this abundantly.
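Here is the promised sketch (hypothetical matrices, using numpy) of the $AT_0=T_0M$ characterization and the resulting block-triangular form:

```python
import numpy as np

# Hypothetical example: S = span(e1, e2) is A-invariant because the lower-left
# block of A is zero, matching the A T0 = T0 M characterization above.
A = np.array([[1., 2., 5.],
              [3., 4., 6.],
              [0., 0., 7.]])
T0 = np.eye(3)[:, :2]                              # basis of S
M = np.linalg.lstsq(T0, A @ T0, rcond=None)[0]     # solve A T0 = T0 M
assert np.allclose(A @ T0, T0 @ M)
T = np.hstack([T0, np.eye(3)[:, 2:]])              # complete to a basis of R^3
B = np.linalg.inv(T) @ A @ T
assert np.allclose(B[2:, :2], 0)                   # block upper-triangular
print(B)
```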
recurrence relation dependent inversly on n
Another way of writing $F(n)$ is: $F(n) = F(0) + \displaystyle \sum_{k=1}^{n} \frac{1}{k} = F(0) + H_{n}$ where $H_n$ is the $n$th harmonic number. The exact value of a harmonic number can't be calculated in $\mathcal{O}(\log{n})$, but you can compute an approximation. Assuming this is a programming problem, that should be enough. Referencing Wikipedia: http://en.wikipedia.org/wiki/Harmonic_number $H_n \sim \ln{n} + \gamma + \frac{1}{2n} - \displaystyle \sum_{k=1}^\infty \frac{B_{2k}}{2kn^{2k}} = \ln{n} + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120n^4} - \cdots$ Where $\gamma$ is the Euler-Mascheroni Constant and $B_k$ are the Bernoulli numbers. This series should give the desired precision within a few terms.
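A minimal sketch of that approximation in Python (using just the first few terms of the series, which is already far below double precision for moderate $n$):

```python
from math import log

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def harmonic_approx(n):
    # first terms of the asymptotic series quoted above
    return log(n) + GAMMA + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

exact = sum(1/k for k in range(1, 1001))
print(exact, harmonic_approx(1000))   # agree to ~15 significant digits
```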
3-D equation of a circle
To solve this problem, we can use quite basic tools. $1$. Locate the centre $C$ of the circle. This is the midpoint of the line segment you were given. $2$. Compute the radius $r$ of the circle. $3$. Which one of the candidate points is at distance $r$ from $C$? Once you know the coordinates $(c_1,c_2,c_3)$ of the centre $C$, and the radius $r$, then the equation of the sphere with centre $C$, radius $r$ is given by $$(x-c_1)^2+(y-c_2)^2+(z-c_3)^2=r^2.$$ A candidate point $P$ lies on a circle with the given line segment as a diameter if and only if it lies on the sphere with the above equation. Or else we can use the Pythagorean Theorem. Find the square of the length of the diameter. For which of your candidate points $P$ is the sum of the squares of the distances from $P$ to the diameter ends equal to the square of the diameter? Or else we can use perpendicularity directly. Let our given points be $A$ and $B$, and let $P$ be a candidate point. For $P$ to lie on a circle that has $AB$ as a diameter, we need the "dot product" of $A-P$ and $B-P$ to be $0$. That might be fast enough for the $1$ minute restriction.
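All three tests are one-liners once the points are vectors; a small sketch with hypothetical coordinates:

```python
import numpy as np

# Hypothetical data: diameter endpoints A, B and a candidate point P.
A = np.array([1.0, 0.0, 0.0])
B = np.array([-1.0, 0.0, 0.0])
P = np.array([0.0, 1.0, 0.0])    # lies on the sphere with AB as diameter

C = (A + B) / 2                  # centre
r = np.linalg.norm(B - A) / 2    # radius
print(np.isclose(np.linalg.norm(P - C), r))   # sphere test
print(np.isclose(np.dot(A - P, B - P), 0))    # perpendicularity (dot product) test
```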
Existence of function everywhere unbounded but with each point a local minimum
Assume $f$ is such a function. For $n\in\mathbb N$, the set $U_n:=\{x\in\mathbb R\mid f(x)\ge n\}$ is open because by property $2$, $x\in U_n$ implies that there is an open interval $S$ with $x\in S\subseteq U_n$. Moreover, $\overline {U_n}=\mathbb R$ because for every $x\in \mathbb R$ and every open neighbourhood $U$ of $x$, property $1$ says that $U_n\cap U\ne \emptyset$. Let $$A=\bigcap_{n\in\mathbb N} U_n.$$ Then by what we have just seen, $A$ is a countable intersection of dense open sets. By the Baire category theorem, $\overline A=\mathbb R$; in particular, $A\ne\emptyset$. If $a\in A$ then we find $a\in U_n$ for all $n$, that is $f(a)\ge n$ for all $n\in\mathbb N$, which is absurd. Therefore, no such $f$ exists.
In $(\mathbf Z/p^r \mathbf Z)^*$, finding an element with order $p-1$.
I think it is possible by induction; for $r=1$ it is the classical result, which I assume to be true. Take $0\leq x\leq p-1$ such that $x$ is of order $p-1$ in $(\mathbb{Z}/p \mathbb{Z})^*$. We will be looking for an element of order $p-1$ in $(\mathbb{Z}/p^2 \mathbb{Z})^*$ of the form: $$x_2=x+\alpha p $$ where $\alpha$ is to be determined. We have: $$x_2^{p-1}=x^{p-1}+(p-1)\alpha x^{p-2} p\text{ mod } p^2 $$ $$x_2^{p-1}=x^{p-1}-\alpha x^{p-2} p\text{ mod } p^2 $$ Now because $x$ is of order $p-1$ we have $x^{p-1}=1+t_1p$, hence: $$x_2^{p-1}=1+(t_1-\alpha x^{p-2}) p\text{ mod } p^2 $$ Hence $\alpha$ must verify the equation $t_1-\alpha x^{p-2}=0$ mod $p$. This is clearly solvable because $x^{p-2}$ can be inverted mod $p$, hence one can construct $\alpha$ such that: $$x_2:=x+\alpha p\text{ is of order dividing } p-1 $$ Now $x_2=x$ mod $p$, so its order must be divisible by the order of $x$ mod $p$, which is $p-1$; hence $x_2$ is exactly of order $p-1$. Assume now that $x_{r-1}$ is of order $p-1$ in $(\mathbb{Z}/p^{r-1} \mathbb{Z})^*$ and define: $$x_r:=x_{r-1}+\alpha p^{r-1} $$ Likewise, this defines an element in $(\mathbb{Z}/p^{r} \mathbb{Z})^*$ and: $$x_r^{p-1}=x_{r-1}^{p-1}+(p-1)\alpha x_{r-1}^{p-2} p^{r-1}\text{ mod } p^r $$ $$x_r^{p-1}=x_{r-1}^{p-1}-\alpha x_{r-1}^{p-2} p^{r-1}\text{ mod } p^r $$ But $x_{r-1}^{p-1}=1+t_{r-1}p^{r-1}$, hence we have that: $$x_r^{p-1}=1+(t_{r-1}-\alpha x_{r-1}^{p-2}) p^{r-1}\text{ mod } p^r $$ Now the equation in $\alpha$: $t_{r-1}-\alpha x_{r-1}^{p-2}=0$ mod $p$ is solvable, hence one can find $\alpha$ such that $x_r$ is of order dividing $p-1$; now by construction $x_r$ mod $p^{r-1}$ is $x_{r-1}$, hence its order must be divisible by $p-1$. Finally $x_r$ is an element of order $p-1$ in $(\mathbb{Z}/p^r \mathbb{Z})^*$.
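The lifting step translates directly into a few lines of Python (a sketch; the function names are mine, and I start from a primitive root, which in particular has order divisible by $p-1$):

```python
from sympy.ntheory import n_order, primitive_root

def element_of_order_p_minus_1(p, r):
    x = primitive_root(p)                    # order p-1 mod p (the r = 1 case)
    for s in range(2, r + 1):                # lift from p^(s-1) to p^s
        t = (pow(x, p - 1, p**s) - 1) // p**(s - 1)
        # choose alpha with t - alpha * x^(p-2) = 0 (mod p)
        alpha = t * pow(pow(x, p - 2, p), -1, p) % p
        x += alpha * p**(s - 1)
    return x

p, r = 7, 4
x = element_of_order_p_minus_1(p, r)
print(x, n_order(x, p**r) == p - 1)   # True
```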
sum of a telescoping series
We have: $$\sum_{n=1}^{\infty} \ln \left(1+\frac{2}{n+1} \right)^{n+1} - \ln \left(1+\frac{2}{n} \right)^n = \lim_{n \to +\infty} \ln \left(1+\frac{2}{n+1} \right)^{n+1} - \ln \left(3 \right),$$ which after applying the limit is: $$= \ln e^2 - \ln 3 = 2- \ln 3.$$
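A numeric check of the telescoping (the partial sum is just the last term minus the first):

```python
from math import log

def a(n):
    return n * log(1 + 2 / n)          # = ln(1 + 2/n)^n

N = 10**6
partial = a(N + 1) - a(1)              # telescoping partial sum up to N
print(partial, 2 - log(3))             # both ~0.901388
```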
Please fix this incorrect proof from Kunen
There is a mistake in the definition of filter. Condition (b) has to read: $\forall p\in G\forall q\in\Bbb P(\color{orange}{p\leq q} \implies q\in G)$ That is, a filter is closed upwards. I would personally add a condition (c) that says that a filter is nonempty, but this is not really important. This should clarify the problems with the proof, but in case it doesn't here is some more: In the proof, $\Bbb P$ is defined to be the set of opens in $\Bbb R$ with measure $<\epsilon$ ordered reversely by inclusion. A filter $F$ of $\Bbb P$ is a set of opens such that that any finite union $\bigcup p_i$ of opens $p_i\in F$ is also in the filter (this is what condition (a) gives us), and such that for any open subset $p\subset q$ (with $p\in \Bbb P$) of some open $q\in F$ we also have $p\in F$ (this is what (b) gives us). So a stronger condition $q\leq p$ gives us a larger open set than $p$. This is what we want, since we want a large open set (so that it covers the union of all $M_\alpha$) that still has measure $\leq\epsilon$. So the first mistake in the proof is not a mistake: If $p,q\in G$, then they have a common $r\leq p$ and $r\leq q$. This is the same as saying $r\supset p$ and $r\supset q$. Clearly then $r\supset p\cup q$ as well, which means $r\leq p\cup q$. By point (b) from the definition of filter, then $p\cup q\in G$. The second mistake follows directly from the corrected condition (b). Note that the set $\mathscr{B}$ of open rational intervals should be the set of open rational intervals with measure $<\epsilon$, since otherwise we cannot guarantee that $q\in \Bbb P$.
How many years the world's oil reserves will be enough?
I used Excel to calculate this, and found that after 37 years, 235 B will be extracted, so it appears your calculation is correct. And doing a rough estimate: with 2% annual growth, production after 26 years would be about $e^{0.02\cdot 26}=e^{0.52}\approx 1.68$ times today's, i.e. about 68% higher than today. But 26 is about half of 55, which would mean that production would have to be, on average, about twice current levels. So even without a calculator, 26 is clearly wrong. As far as speculating as to where 26 came from, if you divide the current reserves by the current rate plus the reserves times 2%, you get 240/(4.36+240*.02) = 26.2, which rounds to 26. So this could be a coincidence, or perhaps your teacher did something along those lines.
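The Excel computation is a three-line loop in Python (assuming, as above, 240 B barrels of reserves, 4.36 B barrels/year of current production, and 2% annual growth):

```python
rate, total = 4.36, 0.0                 # billions of barrels
for year in range(1, 60):
    total += rate
    rate *= 1.02
    if year == 37:
        print(year, round(total, 1))    # 37 235.6
    if total >= 240:
        break
print(year)                             # 38: reserves run out during year 38
```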
Newton iteration method
When using $f(x)=x^3$ the recursion becomes $x_{n+1}=x_n-\frac{x_n^3}{3 x_n^2}=\frac{2}{3}x_n$ and hence can explicitly be solved as $$ x_n = \left(\frac{2}{3}\right)^n x_0.$$ Now you just have to plug this into the inequality $x_n\leq 10^{-5}$, take the logarithm and solve for $n$, which gives $$n\ln\frac{2}{3}\leq\ln\frac{10^{-5}}{0.9}\quad\Rightarrow\quad n\geq\frac{\ln (10^{-5}/0.9)}{\ln(2/3)}\approx 28.2$$ Note that $\leq$ turns to $\geq$ as you divide through a negative logarithm. This means, the 29th iteration is the first one to be inside the $10^{-5}$-neighborhood.
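A quick check of the iteration count (with the $x_0 = 0.9$ implicit in the formula above):

```python
x, n = 0.9, 0
while abs(x) > 1e-5:
    x -= x**3 / (3 * x**2)   # Newton step for f(x) = x^3, i.e. x <- (2/3) x
    n += 1
print(n)   # 29, matching the ceiling of 28.2
```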
What is the greatest integer that divides $p^4-1$ for every prime number $p$ greater than $5$?
I suppose this is more of a brute force method. By Fermat's little theorem, you know $p^4\equiv 1\pmod{5}$, so $5$ is a factor. Also, factoring, you get $$p^4-1=(p-1)(p+1)(p^2+1),$$ and since $p$ is odd, each factor is divisible by $2$. Moreover, one of the factors $p-1$ or $p+1$ is divisible by $4$, so $4\cdot2\cdot 2=16$ is another factor. Finally, of the three consecutive integers, $p-1$, $p$, and $p+1$, $3$ is a factor of one. Since $p$ is prime and larger than $5$, $3\nmid p$, so $3|p^4-1$. Hence $16\cdot 3\cdot 5=240$ is a factor of $p^4-1$ for any prime $p\gt 5$. It still remains to see that $240$ is the greatest such divisor. This can be confirmed by looking at $7^4-1=2400$ and $11^4-1=14640$.
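For extra reassurance, the gcd over many primes stabilizes at $240$:

```python
from math import gcd
from sympy import primerange

g = 0
for p in primerange(7, 10**4):
    g = gcd(g, p**4 - 1)     # gcd(0, x) = x starts the accumulation
print(g)                     # 240
```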
What's the variance of intercept estimator in multiple linear regression?
Usually your $X$ will look like this \begin{equation} X = \begin{bmatrix} \mathbf{1} & X_1 & \ldots & X_{k-1} \end{bmatrix} \end{equation} where $\mathbf{1}$ is an all-ones vector of size $N$ and $X_{j}$ is the $(j+1)^{th}$ column in $X$. Then \begin{equation} X^T X = \begin{bmatrix} \mathbf{1}^T\mathbf{1} & \bar{y}^T \\ \bar{y} & Y^T Y \end{bmatrix} \end{equation} Notice that we have a block matrix here. Also notice that $\mathbf{1}^T\mathbf{1} = N$, with \begin{equation} \bar{y} = \begin{bmatrix} \sum_i X_{i1} \\ \vdots \\ \sum_i X_{i,k-1} \\ \end{bmatrix} \end{equation} the vector of column sums, and $Y$ is the matrix $X$ with the first column omitted. \begin{equation} X^T X = \begin{bmatrix} N & \bar{y}^T \\ \bar{y} & Y^T Y \end{bmatrix} \end{equation} Using block matrix inversion, we get that the first element in the first row of $(X^T X)^{-1}$ is \begin{equation} [(X^T X)^{-1}]_{1,1} = N^{-1} + N^{-1} \bar{y}^T(Y^T Y - \bar{y} N^{-1} \bar{y}^T )^{-1}\bar{y}N^{-1} \end{equation} Since $N$ is a scalar this simplifies to \begin{equation} [(X^T X)^{-1}]_{1,1} = \frac{1}{N} + \frac{1}{N^2} \bar{y}^T(Y^T Y - \frac{1}{N}\bar{y} \bar{y}^T )^{-1}\bar{y} \tag{1} \end{equation} But \begin{equation} \operatorname{var} (\hat{\beta}_0) = \sigma^2 [(X^T X)^{-1}]_{1,1} \tag{2} \end{equation} So using $(1)$ in $(2)$ we get \begin{equation} \operatorname{var} (\hat{\beta}_0) = \frac{\sigma^2}{N} + \frac{\sigma^2}{N^2} \bar{y}^T(Y^T Y - \frac{1}{N}\bar{y} \bar{y}^T )^{-1}\bar{y} \end{equation}
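The block-inversion identity is easy to test numerically; a numpy sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 50, 4
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, k - 1))])
Y = X[:, 1:]
ybar = Y.sum(axis=0)                      # the column-sum vector from above
direct = np.linalg.inv(X.T @ X)[0, 0]
block = 1/N + (ybar @ np.linalg.inv(Y.T @ Y - np.outer(ybar, ybar)/N) @ ybar) / N**2
assert np.isclose(direct, block)
```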
validity of simple argument for showing $\sum 1/p$ grows like $\log \log x$
Euler was wrong. Why do you write "grow like" instead of $\sum_{n\le x} \frac1n \sim \log x$? It implies that $\log(\sum_{n\le x} \frac1n) \sim \log\log x$, which doesn't tell anything about $\sum_{p\le x} \frac1p$. There is only one elementary proof of the Mertens theorem $\sum_{p\le x} \frac1p \sim \log\log x$, whose main step is $$n\log n\sim \log n! =\sum_{p^k\le n} \lfloor n/p^k\rfloor\log p \sim n \sum_{p^k\le n}\frac{\log p}{p^k}$$ where the last step needs $\sum_{p^k\le n}\log p =O(n)$, which follows from $\sum_{p\in (n,2n]}\log p\le \log{2n\choose n}=O(n)$. This is followed by a partial summation to go from $\sum_{p\le x} \frac{\log p}{p}\sim\log x$ to $\sum_{p\le x} \frac1{p} \sim \log\log x$.
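Numerically the asymptotic is easy to see; the gap between the two sides tends to the Meissel-Mertens constant:

```python
from math import log
from sympy import primerange

x = 10**5
s = sum(1/p for p in primerange(2, x + 1))
print(s - log(log(x)))   # ~0.2618, close to the Mertens constant 0.26149...
```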
How to solve the dual problem of SVM
Being a concave quadratic optimization problem, you can in principle solve it using any QP solver. For instance you can use MOSEK, CPLEX or Gurobi. All of them come with a free trial or academic license. Due to its typically large dimension and its peculiar structure, it is usually handled by specialized packages built on first-order, gradient-based algorithms. I suggest you look at the LIBSVM package. The source code is available, with plenty of documentation, examples and specialized versions. Don't re-invent the wheel unless you really need a special one.
How to solve $(A-AX)^{-1}=X^{-1}B$ and prove $B$ is invertible?
Your part A is problematic. You can't start with $B^{-1}$ until you have proved that $B$ is invertible. Approach 1: propose a candidate and show that it works. Let $M=(A-AX)X^{-1}$. Then $$ MB=(A-AX)X^{-1}B=(A-AX)(X^{-1}B)=(A-AX)(A-AX)^{-1}=I, $$ and then use this result to infer that $BM=I$ also. Approach 2: Take determinant on both sides of $(A-AX)^{-1}=X^{-1}B$ to infer that $\det(B)=\det X/\det(A-AX)\neq 0$.
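Approach 1 is easy to sanity-check numerically with generic matrices (a sketch; random matrices are almost surely invertible where needed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, X = rng.normal(size=(n, n)), rng.normal(size=(n, n))
B = X @ np.linalg.inv(A - A @ X)        # defined via (A - AX)^{-1} = X^{-1} B
M = (A - A @ X) @ np.linalg.inv(X)      # the proposed inverse of B
assert np.allclose(M @ B, np.eye(n)) and np.allclose(B @ M, np.eye(n))
```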
Vector field and vector valued differential form - Notation
Since $\omega$ is a $\mathfrak g$-valued $1$-form, for every $x\in P$ we have $\omega_x(Y(x))\in\mathfrak g$, so $h_Y:P\rightarrow \mathfrak g$ defined by $x\mapsto\omega_x(Y(x))$ is a well-defined $\mathfrak g$-valued function. Then $X\omega(Y)$ is the differential of $h_Y$ evaluated at $X$, i.e. $dh_Y.X(x)$.
Elements of the ring $F[x,y]/(x^2-y^2)$
Try writing out the definitions to get started: For a polynomial $p(x,y) \in F[x,y]$, let $\overline{p}(x,y) := p(x,y) \pmod{(x^2 - y^2)}$. If $\overline{p}(x,y)$ is nilpotent, that means that $p(x,y)^N \equiv 0 \pmod{(x^2 - y^2)}$ for some $N > 1$, so $x^2 - y^2 \mid p(x,y)^N$. What can you conclude from here? Similarly $\overline{p}(x,y)$ is idempotent iff $x^2 - y^2 \mid p(x,y)^2 - p(x,y)$. What can you conclude from here? Lastly, keep in mind that $x - y = x + y$ in $F[x,y]$ if $2 = 0$ in $F$.
How many different words can be made using the letters of MISSISSIPPI?
$$ \frac{4}{4!} = \frac{1}{3!}. $$
Differentiate $\sqrt{1+f(x)^2}/(1+f(x))$
For this kind of problems, where you have products and ratios, I think that logarithmic differentiation is quite useful. Let $$y=\frac{\sqrt{1+f(x)^2}}{1+f(x)}$$ so $$\log(y)=\frac 12 \log\big(1+f(x)^2\big)- \log\big(1+f(x)\big)$$ Now, differentiate $$\frac{y'}{y}=\frac{f(x) f'(x)}{1+f(x)^2}-\frac{f'(x)}{1+f(x)}=\frac{(f(x)-1) f'(x)}{(1+f(x)) \left(1+f(x)^2\right)}$$ Replacing $y$, we then have $$y'=\frac{(f(x)-1) f'(x)}{(1+f(x))^2 \sqrt{1+f(x)^2}}$$ By the way, you could further continue starting from $$y'=\frac{(f(x)-1) f'(x)}{(1+f(x)) \left(1+f(x)^2\right)}y$$ express $\log(y')$ as a sum of logarithms and differentiate all the pieces ($y''$ will then be expressed as a function of $y$ and $y'$).
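If you want to double-check the algebra symbolically, a short sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
y = sp.sqrt(1 + f(x)**2) / (1 + f(x))
claimed = (f(x) - 1) * f(x).diff(x) / ((1 + f(x))**2 * sp.sqrt(1 + f(x)**2))
assert sp.simplify(y.diff(x) - claimed) == 0
```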
How do I solve the following limit with indeterminate form of $\frac 00$?
Considering $$y=\frac{e - (1+x)^{\frac{1}{x}}}{x}$$ first, work with $$A=(1+x)^{\frac{1}{x}}\implies \log(A)=\frac 1x\log(1+x)$$ Now, use Taylor series $$\log(A)=\frac1x\left(x-\frac{x^2}{2}+O\left(x^3\right) \right)=1-\frac{x}{2}+O\left(x^2\right)$$ Continue with Taylor $$A=e^{\log(A)}=e-\frac{e x}{2}+O\left(x^2\right)$$ Just finish.
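(Numerically, the expansion says the limit should be $e/2 \approx 1.35914$, which is quick to confirm:)

```python
from math import e

def y(x):
    return (e - (1 + x)**(1/x)) / x

for x in (1e-2, 1e-4, 1e-6):
    print(y(x))   # tends to e/2 = 1.35914...
```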
Chmmc 2012 team #9 $f(n)=n$ if $\exists x$ s.t. $4096 \mid x^2-n$, $0$ otherwise. Find $\sum_{i=1}^{4095} f(i)\bmod 4096$.
We have $$\sum_{n=1}^{2^k-1} f(n)=\sum_{\substack{1\le n\le 2^k-1\\ n\equiv x^2\ (2^k)}} n\equiv \sum_{\substack{1\le n\le 2^k-1\\ n\equiv x^2\ (2^k)}} x^2$$ But $i^2\equiv j^2\Leftrightarrow i=j$ or $i+j=2^k$ or $i=\ell \cdot 2^{k-2}+1, j=\ell \cdot 2^{k-2}-1, \ell=1,3$, so in the above expression $x^2$ and $(2^k-x)^2$ only count once, while $(\ell \cdot 2^{k-2}+1)^2$ and $(\ell \cdot 2^{k-2}-1)^2$ count only once too. So $\sum_{\substack{1\le n\le 2^k-1\\ n\equiv x^2\ (2^k)}} x^2\equiv \sum_{x=1}^{2^{k-1}-1}x^2-(2^{k-2}-1)^2\equiv \frac{1}{6}(2^{k-1}-1)2^{k-1}(2^k-1)+2^{k-1}-1 \pmod{2^k}$ And the rest is easy.
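If one just wants the number, a brute-force check is immediate (reading the condition as $4096 \mid x^2 - n$, i.e. $n$ is a quadratic residue mod $4096$):

```python
MOD = 4096
squares = {x * x % MOD for x in range(MOD)}        # all QRs mod 4096
print(sum(n for n in range(1, MOD) if n in squares) % MOD)
```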
Convergence of series with integral test
The integral to be compared with is $$\int_2^{\infty}\dfrac{dx}{x\ln^p(x)} = \int_{\ln2}^{\infty} \dfrac{dt}{t^p} = \left. \dfrac{t^{1-p}}{1-p}\right \vert_{\ln2}^{\infty} = - \dfrac{(\ln2)^{1-p}}{1-p} + \lim_{t\to \infty} \dfrac{t^{1-p}}{1-p}$$ (valid for $p \neq 1$; when $p = 1$ the antiderivative is $\ln t$, which also diverges). The limit exists only for $p>1$ and doesn't exist for $p \leq 1$. Hence, the series $$\sum_{n=2}^{\infty} \dfrac1{n\ln^p(n)}$$ converges for $p>1$ and diverges for $p \leq 1$.
Solving $ \frac{d^2}{dt^2}\theta = k\sin\theta$ for $t$
The solution of the initial value problem $\theta'' = k \sin \theta$, $\theta(0)= \pi/2$, $\theta'(0)=0$, is given implicitly (for $\pi/2 \le \theta(t) \le 3\pi/2$, i.e. the first swing of the pendulum) by $$ \int _{\pi/2 }^{\theta \left( t \right) }\!{\frac {1}{\sqrt {-2\,k \cos \left( s \right) }}}\ {ds}=t$$ The time to go from $\theta = \pi/2$ to $\theta=\pi$ is thus $$ \int_{\pi/2}^\pi \frac{1}{\sqrt{-2k\cos(s)}}\ ds = \frac{1}{\sqrt{k}} \int_{\pi/2}^\pi \frac{1}{\sqrt{-2\cos(s)}}\ ds$$ That last integral is non-elementary: its approximate value is $1.854074677$, and it can be expressed as ${\rm EllipticK}(1/\sqrt{2})$ in the convention used by Maple. Wolfram Alpha calls it $K(1/2)$. It can also be written as $\dfrac{\pi^{3/2}}{2 \Gamma(3/4)^2}$.
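Both closed forms for the constant are easy to confirm (a scipy/Python check; `ellipk` takes the parameter $m$, matching Wolfram Alpha's $K(1/2)$):

```python
from math import gamma, pi
from scipy.special import ellipk

print(ellipk(0.5))                       # 1.8540746773013719
print(pi**1.5 / (2 * gamma(0.75)**2))    # same value
```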
Isomorphic normal subgroups
No. Let $N_1= 3 \mathbb{Z}$, $N_2 =4 \mathbb{Z}$. Both are normal subgroups of the additive group $\mathbb{Z}$. They are isomorphic via $f: 3x \mapsto 4x$. However, $\mathbb{Z}/3 \mathbb{Z}$ is not isomorphic to $\mathbb{Z}/ 4 \mathbb{Z}$ since these are both finite groups with a different number of elements, hence no bijection exists between them.
What is a compact 2-dimensional surface without boundary?
You're mistaken: the two-sphere does not have a boundary, because every point is an interior point. This means it has no boundary points. Note that the two-sphere is merely the surface of a sphere; no other points are included. The interior and exterior you're describing are part of $\mathbb{R}^3$, not of the two-sphere.
Second part of the factorial sum divisibility question
Note that heuristically, if these numbers were equidistributed mod $p$ we would expect each one to be zero (i.e., expect $p$ to divide the result) with probability $\approx 1/p$; presuming that all the values are independent (which seems a reasonable assumption), we should expect to have $\sum_{p\lt n}\frac{1}{p}\approx M+\ln \ln n$ of them less than $n$, where $M$ is the Meissel-Mertens Constant. This sum is approximately 2.887 for $n=10^6$, so while we would heuristically 'expect' another one in that range it's not surprising that there aren't any; and despite the apparent lack of any more solutions, because of the divergence of the series the 'expected' number of solutions is still infinite! Added: I put together my own software implementation in C++, confirming the results (that only $p=3$ and $p=11$ work) for $p\lt 5\times 10^5$, but also looking at the distribution of results $\bmod p$ to see if any patterns arose. I computed the value of $\dfrac{\left(\sum_{1}^{p-1}i!\right)\bmod p}{p}$ (the division by $p$ serving to 'normalize' values into the range $[0, 1)$) for all $p$ in that range and then grouped them in bins; these are the plots with 100 bins and 256 bins. (These bin counts should be small enough relative to the prime values in consideration that there shouldn't be any substantial aliasing effects in binning the data.) The results are a bit scattershot but there doesn't appear to be any skew towards particular values (e.g. $\frac{p-1}2$) which would show up as a spike in the bins. If I have more time I may take a look at the square of the sum and particularly even the inverse of the sum $\pmod p$, but since the latter would actually require me writing a quick GCD algorithm to find the inverse it'll have to wait for later. (Note: the sharp drop at the last element is an artifact of my adding a 0 value to the end of the bin data so my spreadsheet didn't automatically normalize the range; it's not an actual value.)
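For readers without the C++ implementation at hand, the same search is a few lines of Python (slower, so a smaller bound here):

```python
from sympy import primerange

hits = []
for p in primerange(2, 2000):
    s, f = 0, 1
    for i in range(1, p):
        f = f * i % p          # i! mod p, built incrementally
        s = (s + f) % p
    if s == 0:
        hits.append(p)
print(hits)   # [3, 11] in this range
```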
How would I go about computing this finite sum?
Hint: Without the need for solving the coefficients: $\dfrac{-k^2+2k+1}{k^2(k+1)^2} =\dfrac{k^2+2k+1-2k^2}{k^2(k+1)^2}=\dfrac{(k+1)^2}{k^2(k+1)^2}-\dfrac{2}{(k+1)^2}=\dfrac{1}{k^2}-\dfrac{2}{(k+1)^2}.$ Thus, the sum is reduced to $$\sum_{k=1}^{n}\dfrac{2^k}{k^2}-\sum_{k=1}^{n}\dfrac{2^{k+1}}{(k+1)^2}.$$ And then note that $$\sum_{k=1}^{n}\dfrac{2^{k+1}}{(k+1)^2}=\sum_{k=2}^{n+1}\dfrac{2^{k}}{k^2}=-2+\dfrac{2^{n+1}}{(n+1)^2}+\sum_{k=1}^{n}\dfrac{2^{k}}{k^2}.$$ The rest is for you to finish.
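A numeric check of the resulting closed form, $\sum_{k=1}^n \frac{2^k(-k^2+2k+1)}{k^2(k+1)^2} = 2 - \frac{2^{n+1}}{(n+1)^2}$ (the summand being the one reduced above):

```python
def term(k):
    return 2**k * (-k*k + 2*k + 1) / (k*k * (k + 1)**2)

n = 10
s = sum(term(k) for k in range(1, n + 1))
print(s, 2 - 2**(n + 1) / (n + 1)**2)   # both -14.9256...
```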
Urn problem (Probability)
25% of the time both balls drawn from the two urns will both be white, 25% of the time both will be black, and 50% of the time there will be one black and one white. After those balls have been added to the third urn, 25% of the time the third urn will contain 4 white and 2 black balls so the probability of drawing a white ball is 4/6= 2/3, 50% of the time it will contain 3 white and 3 black balls so the probability of drawing a white ball is 3/6= 1/2, and 25% of the time it will contain 2 white and 4 black balls so the probability of drawing a white ball is 2/6= 1/3. The overall probability of drawing a white ball from the third urn is (.25)(2/3)+ (.50)(1/2)+ (.25)(1/3)= 1/6+ 1/4+ 1/12= (2+ 3+ 1)/12= 6/12= 1/2. Actually, it should have been clear from the start that, since each urn contains the same number of white as black balls, the probabilities of drawing a white or black ball from the third urn are the same, so 1/2.
Modelling question using completing the square
It's hard to answer such questions when we aren't given any context. I have several answers in mind, but they each depend on what you already know. Just taking a stab: The function $m=-r(r-4)= -r^2+4r$ is quadratic and its graph is a parabola. Since the leading coefficient is negative, we know it's parabola with arms pointing down. So the maximum of the function is at the vertex of the parabola. The usual way to find the vertex is by completing the square, as was done in your example. Other ways are by taking the derivative $\frac{dm}{dr} = -2r+4$ and setting it equal to zero. When you solve, you get $r=2$, which you can plug back into the function to get $m=4.$
binomial (undefined function)
Suppose the books are labeled $a,b,c,d$. Let $A,B,C,D$ represent the event of taking books $a,b,c,d$ respectively. Let our sample space be all ways of selecting one book. Then clearly $A,B,C,D$ form a partition of the sample space. (I.e. $P(A\cup B\cup C\cup D)=P(A)+P(B)+P(C)+P(D)=1$). Let $K$ represent the event that the book selected has at least 200 pages. In the first example, note that $K\cap A=A$: if you selected book $a$, then since $a$ has 500 pages it certainly has at least 200 pages. Similarly for the other books. As such $P(K)=P(K\cap \Omega) = P(K\cap (A\cup B\cup C\cup D)) = P((K\cap A)\cup (K\cap B)\cup (K\cap C)\cup (K\cap D)) = P(A\cup B\cup C\cup D)=1$ Similarly in the second example, $K\cap A=\emptyset$ since selecting book $a$, which has 100 pages, does not satisfy the requirement that the book selected has at least 200 pages. In a similar manner as before, we get $P(K)=0$. The specific probability distribution used to select which book is completely irrelevant to the problem.
Please help me check my metric definition of isolated point
Looks good. Another formulation would be: A point $x\in A$ is called an isolated point of A if there is an $\epsilon>0$ such that $B_\epsilon(x)\cap A=\{x\}.$ Or: $x\in A$ is called an isolated point of $A$ if $\{x\}$ is open in the subspace $(A,d)$.
Some basic conceptual question in multivariable partial derivative
Differentiable here means that there is a linear map $A : \mathbb{R}^2 \to \mathbb{R}^2$ so that $$ \lim_{(h,k) \to (0,0)} \frac{ \lVert f(x+h,y+k) - f(x,y) - A (h,k) \rVert }{\lVert (h,k) \rVert} = 0. $$ You can in this case construct the map directly (hint: it's zero), and then it is easy to prove that this condition works at zero. On the other hand, $C^1$ normally means that the first partial derivatives should be continuous. You can compute these explicitly in the usual way at a general point to show that this is not the case (you'll find discontinuities on the lines $x=0$ and $y=0$, I expect). Reminder: as any first course in multivariable analysis should tell you, you have the proper inclusions $\{$Continuous partial derivatives at $x \} \subset \{ $Differentiable at $x \} \subset \{ $Partial derivatives exist at $x \} $, so the implications only go in one direction between these sets.
Limits with trig, log functions and variable exponents
Yes those are both right. $$\lim_{x\to\infty}\frac{2\cdot 3^{5x}+5}{3^{5x}+2^{5x}}=\lim_{x\to\infty}\frac{2+5\cdot3^{-5x}}{1+\left(3/2\right)^{-5x}}=2$$ and $$\lim_{x\to0}\frac{e^{2x}-e^{x\ln\pi}}{\sin 3x}=\lim_{x\to0}\frac{(2-\ln\pi)x+O(x^2)}{3x}=\frac{1}{3}(2-\ln\pi).$$ In the second one, you can use that $e^u=1+u+O(u^2)$.
How to prove that $\sum\limits_{i=0}^p (-1)^{p-i} {p \choose i} i^j$ is $0$ for $j < p$ and $p!$ for $j = p$
Use generating functions. Call $s_{j,p}$ the $j$th sum and consider the exponential generating function $S_p$ of the sequence $(s_{j,p})_{j\geqslant0}$ defined as $$ S_p(x)=\sum\limits_{j=0}^{+\infty}s_{j,p}\frac{x^j}{j!}=\sum\limits_{i=0}^p(-1)^{p-i}{p\choose i}\sum\limits_{j=0}^{+\infty}i^j\frac{x^j}{j!}=\sum\limits_{i=0}^p(-1)^{p-i}{p\choose i}\mathrm e^{ix}. $$ By the binomial theorem, the last sum is $$ \sum\limits_{i=0}^p{p\choose i}a^ib^{p-i}=(a+b)^p, $$ for $a=\mathrm e^x$ and $b=-1$, hence $$ S_p(x)=(\mathrm e^x-1)^p. $$ The series expansion of $\mathrm e^x-1$ has no $x^0$ term hence the valuation of $S_p(x)$ is at least $p$. This proves that $s_{j,p}=0$ for every $0\leqslant j\leqslant p-1$. Furthermore $\mathrm e^x-1=x+o(x)$ hence the $x^p$ term in $S_p(x)$ is exactly $1$ and $s_{p,p}=p!$. This method provides $s_{p+j,p}$ for every positive $j$ as well, for example, $$ \frac{s_{p+1,p}}{(p+1)!}=\frac{p}2,\qquad \frac{s_{p+2,p}}{(p+2)!}=\frac{p(3p+1)}{24}, $$ and, more generally, for every nonnegative $j$, the ratio $\dfrac{s_{p+j,p}}{(p+j)!}$ is a polynomial in $p$ of degree $j$ and with rational coefficients. Remark To compute $S_p(x)$, one can also note that the sum at the end of our first displayed equation is a multiple of $\mathrm E((-1)^{X_p}\mathrm e^{xX_p})$, where $X_p=Y_1+\cdots+Y_p$ is the sum of $p$ i.i.d. Bernoulli random variables $Y_k$ with parameter $\frac12$, hence $$ S_p(x)=2^p(-1)^p\mathrm E((-1)^{X_p}\mathrm e^{xX_p}). $$ By independence the expectation is a product of $\mathrm E((-1)^{Y_k}\mathrm e^{xY_k})=\frac12(1-\mathrm e^x)$, hence $$ S_p(x)=(-1)^p(1-\mathrm e^x)^p=(\mathrm e^x-1)^p. $$
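These identities are simple to spot-check numerically, e.g. for $p=5$ (including the $s_{p+1,p}$ ratio quoted above):

```python
from math import comb, factorial

def s(j, p):
    return sum((-1)**(p - i) * comb(p, i) * i**j for i in range(p + 1))

p = 5
assert all(s(j, p) == 0 for j in range(p))   # zero for j < p
assert s(p, p) == factorial(p)               # p! for j = p
assert s(p + 1, p) == factorial(p + 1) * p // 2
```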
How to find the coordinates of a cylindrical helix whose curvature and torsion are given?
It'll be a conical helix, namely \begin{align*} \mathbf{r}(s) &= \begin{pmatrix} \dfrac{s}{\sqrt{6}} \, \cos (\sqrt{2}\, \ln s) \\[2pt] \dfrac{s}{\sqrt{6}} \, \sin (\sqrt{2}\, \ln s) \\[2pt] \dfrac{s}{\sqrt{2}} \end{pmatrix} \\[5pt] \kappa &= \frac{1}{s} \\[5pt] \tau &= \frac{1}{s} \end{align*} For the more general case, see another answer here and also the ODE for a space curve here.
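One can verify $\kappa=\tau=1/s$ from the standard Frenet formulas; a sketch assuming sympy (the printed expressions should simplify to $1$, $1/s$ and $1/s$):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
r = sp.Matrix([s/sp.sqrt(6) * sp.cos(sp.sqrt(2)*sp.log(s)),
               s/sp.sqrt(6) * sp.sin(sp.sqrt(2)*sp.log(s)),
               s/sp.sqrt(2)])
r1, r2, r3 = r.diff(s), r.diff(s, 2), r.diff(s, 3)
norm = lambda v: sp.sqrt(v.dot(v))   # real entries, so this is the 2-norm

print(sp.simplify(norm(r1)))                                      # 1: arc-length parameter
print(sp.simplify(norm(r1.cross(r2)) / norm(r1)**3))              # kappa = 1/s
print(sp.simplify(r1.cross(r2).dot(r3) / norm(r1.cross(r2))**2))  # tau = 1/s
```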
How many orthogonal matrices are there
Let us count the number of constraints: (a) to make the first column orthogonal to the remaining $n-1$ columns, you need $n-1$ constraints; altogether one needs $(n-1)+(n-2) + \cdots + 2 + 1=\frac12 n(n-1)$. (b) To make all columns of length $1$, one needs $n$ constraints. Therefore, to make an $n \times n$ orthonormal matrix, you will have $$n^2 -\frac12n(n-1) - n=\frac{n(n-1)}{2}$$ free parameters. For example, you have only one free variable to make a $2 \times 2$ orthonormal matrix; for $n = 3$, it is $3$ free variables. If the matrix is only required to have mutually orthogonal columns, not columns of length one, then there will be $$\frac{n(n+1)}2$$ free variables.
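The constraint count can be checked numerically: linearising $Q\mapsto Q^{\mathrm T}Q$ at $Q=I$ gives the map $E\mapsto E^{\mathrm T}+E$, whose rank is the number of independent constraints. A sketch assuming numpy:

```python
import numpy as np

n = 4
J = np.zeros((n*n, n*n))
for k in range(n*n):
    E = np.zeros((n, n))
    E.flat[k] = 1.0
    J[:, k] = (E.T + E).flatten()   # first-order change of Q^T Q at Q = I

rank = np.linalg.matrix_rank(J)
print(rank, n*(n + 1)//2)           # 10 10: independent constraints
print(n*n - rank, n*(n - 1)//2)     # 6 6: free parameters for n = 4
```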
Non-isomorphic structures with equal cardinality
If $f : \mathfrak{A} \to \mathfrak{B}$ preserves the successor function, then an easy induction argument shows that $f(n) = (n,0)$ for every natural number $n$. Therefore $f$ is not surjective.
What about $\mathbb{Z}[\sqrt[n]{2}]$?
The discriminant of $\alpha=\sqrt[3]{2}$ is $\pm 27\cdot 4$, so only $2$ and $3$ may ramify, since $\pm 27\cdot 4=[\mathcal{O}_K:\mathbb{Z}[\alpha]]^2 d_K$, where $K=\Bbb Q\!\left(\sqrt[3]2\right)$. Now $\mu_\alpha=X^3-2$ is $2$-Eisenstein, so $2\nmid [\mathcal{O}_K:\mathbb{Z}[\alpha]]$, and $\mu_{\alpha}(X-1)$ is $3$-Eisenstein, so $3\nmid [\mathcal{O}_K:\mathbb{Z}[\alpha +1]]= [\mathcal{O}_K:\mathbb{Z}[\alpha]]$. Therefore, $\mathcal{O}_K=\mathbb{Z}[\alpha]$ and $d_K=\pm 27\cdot 4$. It follows that the factorisation of $(p)$ into prime ideals is reflected by the decomposition of $X^3-2$ modulo $p$. The Minkowski bound is $\approx 2.94$, hence the class group is generated by the classes of prime ideals lying above $2$. Now, since $(2)=(\alpha,2)^3=(\alpha)^3$, the unique prime ideal above $2$ is $(\alpha)$, which is principal, so $\mathbb{Z}[\alpha]$ is a PID. A conjecture (supported by a lot of positive results) suggests that a ring of integers is a PID if and only if it is Euclidean (but not necessarily norm-Euclidean, so the Euclidean function may be very hard to find). Since $\mathbb{Z}[\alpha]$ is a PID, irreducible elements are exactly the generators of prime ideals. Depending on the factorisation of $X^3-2$ mod $p$, you will have the following. For $p\neq 2,3$:

- If $2$ is not a cube modulo $p$, then $p$ is irreducible.
- If $2$ is a cube modulo $p$ but $X^2+X+1$ is irreducible modulo $p$, we have $(p)=P_1P_2$, where $P_1=(\alpha-m,p)$ and $P_2=(\alpha^2+m\alpha+m^2,p)$, where $m^3\equiv 2 \ [p]$. The generators $\pi_1,\pi_2$ of $P_1,P_2$ will give you non-associate irreducible elements.
- If $2$ is a cube modulo $p$ and $X^2+X+1$ is reducible modulo $p$, we have $(p)=P_1P_2P_3$, where $P_k=(\alpha-j^km,p)$ with $j^2+j+1\equiv 0 \ [p]$ and $m^3\equiv 2 \ [p]$. The generators $\pi'_1,\pi'_2,\pi'_3$ of $P_1,P_2,P_3$ will give you non-associate irreducible elements.

If $p=2$, we get that $\alpha$ is irreducible. If $p=3$, then $(3)=P^3$, where $P=(\alpha-1,3)=(\alpha-1)$, hence $\alpha-1$ is irreducible. Hence, up to association, the irreducible elements are $\alpha$, $\alpha-1$ and the various $\pi_1,\pi_2,\pi'_1,\pi'_2,\pi'_3$. I doubt that you can find the last ones explicitly for general $p$...
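The three splitting behaviours for $p\neq 2,3$ can be observed by factoring $X^3-2$ over $\mathbb F_p$; a quick sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')
for p in [5, 7, 11, 31]:
    print(p, sp.factor(x**3 - 2, modulus=p))
# p = 7: irreducible (2 is not a cube mod 7)
# p = 5, 11: linear times quadratic
# p = 31: three linear factors
```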
Finding the extrema of function without differentiating
An idea might be to look at $f(x,y)=x^2-4xy+3y^2=(x-2y)^2-y^2=(x-3y)(x-y)$ and observe the roots of this function. Remember that the $x$-coordinate of the vertex of a quadratic function lies midway between its two roots.
Show that if $\operatorname{trace}(AB) = 0$ and $\operatorname{rank} (A)=1$ then $ABA=0$
Hint: Since $A$ has rank $1$, we can write $$A = \mathbf{u}\mathbf{v}^\mathrm{T}$$ for some vectors $\mathbf{u}$ and $\mathbf{v}$. Now use the cyclicity of the trace with respect to its arguments and see what you get with $\mathrm{Tr}(AB)=0$.
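A quick numerical illustration of the conclusion (assuming numpy); here $B$ is projected so that $\operatorname{Tr}(AB)=0$ holds:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((3, 1))
v = rng.standard_normal((3, 1))
A = u @ v.T                                             # rank-1 matrix

B0 = rng.standard_normal((3, 3))
B = B0 - (np.trace(A @ B0) / np.trace(A @ A.T)) * A.T   # now Tr(AB) = 0

print(np.isclose(np.trace(A @ B), 0.0))   # True
print(np.allclose(A @ B @ A, 0.0))        # True: ABA = 0
```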
Given $\Delta ABC$ is such that $a^3+b^3=c^3$, prove that $\Delta ABC$ is acute.
We distinguish two cases, assuming that the triangle is not acute. In the first case the triangle has a right angle: then $$a^2+b^2=c^2,$$ or $$\sqrt{a^2+b^2}=c=\sqrt[3]{a^3+b^3}.$$ Raising to the sixth power and rearranging we get $$2=3\frac{a}{b}+3\frac{b}{a}.$$ Setting $$t=\frac{a}{b},$$ we have the quadratic equation $$2t=3t^2+3,$$ or $$t^2-\frac{2}{3}t+1=0.$$ This equation has no real solution (its discriminant is $\frac49-4<0$); indeed, since $t+\frac1t\geq 2$ for $t>0$, we even have $3\left(\frac{a}{b}+\frac{b}{a}\right)\geq 6>2$. The same estimate handles the second case, $$a^2+b^2<c^2,$$ which would give $2>3\frac{a}{b}+3\frac{b}{a}$, again impossible.
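Numerically, one can also check that for triangles with $a^3+b^3=c^3$ the angle opposite $c$ (the largest angle) stays below $90^\circ$; a sketch assuming numpy:

```python
import numpy as np

# discriminant of 3t^2 - 2t + 3: negative, so no real root
print((-2)**2 - 4*3*3)   # -32

b = 1.0
for a in np.linspace(0.1, 1.0, 5):
    c = (a**3 + b**3) ** (1/3)          # note c < a + b, so this is a triangle
    cosC = (a*a + b*b - c*c) / (2*a*b)  # law of cosines, angle opposite c
    print(round(float(np.degrees(np.arccos(cosC))), 2))  # always < 90
```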
What is wrong with my application of the Myhill-Nerode theorem on this language?
$L$ is the language of words such that the first letter is equal to the last letter (together with the empty word). So there are only 5 equivalence classes:

- the empty word
- $0$ and $0w0$ for any word $w$
- $1$ and $1w1$ for any word $w$
- $0w1$ for any word $w$
- $1w0$ for any word $w$
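These five classes are exactly the states of the minimal DFA for $L$; a minimal sketch in Python (state names are mine, one per class above):

```python
# States, one per Myhill-Nerode class:
#   'e'  = empty word                     (accepting)
#   'a0' = starts with 0, last letter 0   (accepting)
#   'd0' = starts with 0, last letter 1
#   'a1' = starts with 1, last letter 1   (accepting)
#   'd1' = starts with 1, last letter 0
delta = {
    ('e', '0'): 'a0', ('e', '1'): 'a1',
    ('a0', '0'): 'a0', ('a0', '1'): 'd0',
    ('d0', '0'): 'a0', ('d0', '1'): 'd0',
    ('a1', '1'): 'a1', ('a1', '0'): 'd1',
    ('d1', '1'): 'a1', ('d1', '0'): 'd1',
}
accepting = {'e', 'a0', 'a1'}

def accepts(w):
    q = 'e'
    for c in w:
        q = delta[(q, c)]
    return q in accepting

print(accepts(''), accepts('0110'), accepts('1'), accepts('01'))
# True True True False
```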
Divergence test for real series
Let $\sum_{n = 1}^\infty a_n$ be a convergent series. Then the sequence of summands $a_n$ is a null sequence, i.e. $\lim_{n \to \infty} a_n = 0$. This gives you a necessary condition for a series to converge. The contrapositive of this statement reads: if $a_n$ is not a null sequence, then $\sum_{n = 1}^\infty a_n$ cannot converge. Now let's take a look at the sequence $a_n = \frac{(-1)^n n}{n + 1}$. This sequence does not converge at all (why?); in particular, $a_n$ is not a null sequence. Thus by the second highlighted statement, the series $\sum_{n = 1}^\infty a_n$ cannot converge.
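A few terms make this visible at a glance; a quick sketch:

```python
a = lambda n: (-1)**n * n / (n + 1)
print([round(a(n), 3) for n in range(1, 7)])
# [-0.5, 0.667, -0.75, 0.8, -0.833, 0.857]: oscillates near +/-1, no limit

s, sums = 0.0, []
for n in range(1, 9):
    s += a(n)
    sums.append(round(s, 3))
print(sums)   # partial sums keep oscillating, so the series diverges
```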
What is the Künneth formula for complete varieties?
It's always hard to reconstruct someone else's train of thought, but here's one attempted explanation. As aginensky says, the first step seems to be to use the projection formula. This shows that $(p_2)_* L_1 = M_1 \otimes (p_2)_* O_Z$ where I have written $Z$ to denote the product (to save typing). So now it is enough to show that $(p_2)_* O_Z = O_{Y_1}$. To do this, recall that $(p_2)_* O_Z$ is defined by \begin{align*} U &\mapsto H^0((p_2)^{-1}(U), O_{(p_2)^{-1}(U)})\\ &= H^0(U \times X, O_{U \times X}) \end{align*} Now Künneth tells us that this is isomorphic to \begin{align*} &H^0(U,O_U) \otimes_k H^0(X,O_X) =H^0(U,O_U)\end{align*} using the fact that $X$ is complete.
Why does $d(x,y)=|x^3-y^3|$ define a metric space but $d(x,y)=|x^2-y^2|$ does not on the real numbers.
Intuitively, the property in question says the distance between any pair of different points is nonzero. For the proposed metric $d(x,y)=|x^2-y^2|$ that is not true because $d(-1,1)=|(-1)^2-1^2|=|1-1|=0$. We have found two different points at distance zero, violating the property. Having found one example that violates the axiom is sufficient to show this is not a metric. Your first comment is incorrect. The first equivalence, $|x^2-y^2|=0 \iff x^2 = y^2$ is true, but $x^2=y^2 \iff x=y$ is not. Take $x=-1,y=1$ and the left is true but the right is false.
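A one-line check of both claims (the cube is injective on $\mathbb R$, the square is not):

```python
d_sq = lambda x, y: abs(x**2 - y**2)
d_cube = lambda x, y: abs(x**3 - y**3)

print(d_sq(-1, 1))    # 0 for two distinct points: not a metric
print(d_cube(-1, 1))  # 2: x -> x^3 is injective, so d(x,y) = 0 only if x = y
```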
An integration question.
As Daniel said, $f(x)=\pi/2$ is the solution. Let $g(x)=f'(x)x^{2014!}$. Then, for any $n=0,1,2,\ldots$ we have $$ \int_{-1}^1g(x)x^ndx=\int_{-1}^1f'(x)x^{n+2014!}\,dx=0. $$ So $g$ is orthogonal to $x^n$ for all $n$, hence to every polynomial; since $g$ is continuous, the Weierstrass approximation theorem gives $\int_{-1}^1 g^2=0$, so $g(x)=0$ for all $x\in[-1,1]$, and thus $f'(x)=0$. So $f$ is constant, and now the first equality gives $f(x)=\pi/2$, $x\in[-1,1]$.
Doubt in understanding a question of basic probability theory
On each die the player can expect to win ${1\over6}$ units, which makes ${1\over2}$ units per simultaneous throw of the three dice. On the other hand, the probability that none of the three dice shows the number he has bet on amounts to $\left({5\over6}\right)^3=0.5787$. As $0.5787>{1\over2}$, the game is unfair to the betting player.
Almost all trees are cospectral (Allen Schwenk's 1973 article)
Results that imply Theorem 2 appear in most books that treat graph spectra - note that the formula holds for all graphs, not just trees. Second, to extract coefficients, you need tools for manipulating power series. Schwenk would have been programming in Fortran on a computer that was probably less powerful than your cell phone, but packages such as sage/maple/mathematica... fortunately have this stuff built in now. You can find an example of how to do this sort of calculation in sage at http://linear.ups.edu/eagts/eagts.html (chapter 8). (I realise that this is not as complete an answer to your question as you might have hoped for, but such an answer requires more time and space than are available.)
Texts on Coxeter groups
A classical reference is Reflection Groups and Coxeter Groups by James Humphreys. The book is self-contained, and only assumes some knowledge of algebra.
What is the hessian of l2 norm squared?
This might be easier if we first rewrite the squared norm as a sum: $$f(\vec{x}) = \sum_i x_i^2$$ It is pretty clear that $\frac{\partial f}{\partial x_i} = 2x_i$. We can then see that $$\frac{\partial^2 f}{\partial x_i \partial x_j} = \begin{cases}2 & \text{if } i = j\\0 & \text{if } i \neq j\end{cases}$$ So, the Hessian is just $2I$.
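A finite-difference check of the Hessian (assuming numpy); up to discretisation error it returns $2I$:

```python
import numpy as np

def f(x):
    return float(np.dot(x, x))   # squared l2 norm

def hessian_fd(f, x, eps=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            # second difference approximating d^2 f / dx_i dx_j
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / eps**2
    return H

x = np.random.randn(3)
print(np.round(hessian_fd(f, x), 4))   # approximately 2 * I
```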
Can a linear functional be infinite at a point?
The value infinity is not allowed. Your example functional $$ f(x) = \sum_{i=1}^\infty x_i $$ is not defined on the whole $l^2$, $D(f)\subsetneq l^2$. Unbounded functionals are somewhat different: they are defined on the whole space, but are unbounded. In every neighborhood of the origin there are points, where the unbounded functional admits arbitrarily large values.
Given a characteristic polynomial of a Matrix, prove a certain Matrix is equal to 0
This is the famous Cayley-Hamilton theorem, but we can also prove this special diagonalizable case directly. Writing $A = P^{-1} D P$, we use a standard trick to compute out \begin{align*} p(A) = p( P^{-1} D P) &= ( P^{-1} D P)^n + a_{n - 1}( P^{-1} D P)^{n - 1} + \dots + a_1 ( P^{-1} D P) + a_0I \\ &= P^{-1} D^n P + a_{n - 1}P^{-1} D^{n - 1} P + \dots + a_1 ( P^{-1} D P) + a_0I\\ &= P^{-1}( D^n + a_{n - 1} D^{n - 1} + \dots + a_1 D + a_0I ) P. \end{align*} Now we've reduced the problem to the case of a diagonal matrix. Given what you know about the characteristic polynomial of a diagonal matrix, can you go from here?
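Although the general theorem needs no diagonalizability, it is easy to spot-check $p(A)=0$ numerically; a sketch assuming numpy, whose `np.poly` returns the monic characteristic polynomial's coefficients:

```python
import numpy as np

np.random.seed(0)
A = np.random.randn(4, 4)
coeffs = np.poly(A)   # [1, a_{n-1}, ..., a_1, a_0], highest power first

pA = sum(c * np.linalg.matrix_power(A, k)
         for c, k in zip(coeffs, range(len(coeffs) - 1, -1, -1)))
print(np.allclose(pA, 0))   # True: A satisfies its characteristic polynomial
```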
Query about proof of homotopy lifting property
The proof fails to state one last condition on $h$, namely, that $p$ on $p^{-1}(U)$ factors through $h$ - that is, the composition map $p^{-1}(U)\rightarrow U\times D \rightarrow U$ is $p_{|p^{-1}(U)}$, where $U\times D \rightarrow U$ is the natural projection. So, $p(h^{-1}(x,d))=x$ for all $x\in U$. So, if $p(\hat{u}(t)) = u(t)$ for all $t$, then $h(\hat{u}(t)) = (u'(t),d(t))$, for some $d:I\rightarrow D$. Since $D$ is discrete, though, $d(t)$ must be constant. On the other hand, $u(t)=p(\hat{u}(t)) = p(h^{-1}(h(\hat{u}(t)))) = p(h^{-1}(u'(t),d)) = u'(t)$. So, $u'(t)=u(t)$, and $h(\hat{u}(t)) = (u(t),d)$.
What is the reason for considering a Banach space in the following theorem?
Yes, $S=\lim_{n\to\infty}S_n=\sum_{n=0}^\infty T^n$. You need the fact that the space is complete in order to prove that this series converges. And, yes, $ST=TS\ne\operatorname{Id}$. My guess is that the statement of the theorem is that$$(\operatorname{Id}-T)S=S(\operatorname{Id}-T)=\operatorname{Id}$$because that's what was proved.
How does the backward/forward algorithm work if there is no end?
The forward-backward algorithm is used to train HMM model probabilities from observations iteratively. It moves towards something like a maximum likelihood estimator (only a local maximum, though). If it converges, chances are you will keep getting infinitesimally small improvements not worth your computing time, until changes fall below your floating point precision. Therefore, you stop computing when your probabilities have stabilised enough for your taste.
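A minimal sketch of such a stopping criterion; `update` is a hypothetical callback performing one forward-backward (E-step + M-step) pass and returning the new parameters with their log-likelihood:

```python
def run_em(update, params, tol=1e-6, max_iter=1000):
    """Iterate until the log-likelihood improvement falls below tol."""
    ll_old = float('-inf')
    for _ in range(max_iter):
        params, ll = update(params)   # one full forward-backward pass
        if ll - ll_old < tol:
            break                     # probabilities have stabilised
        ll_old = ll
    return params
```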
$ T = V|T| $ in a finite-dimensional Hilbert space.
Let $T = U \Sigma V^*$ be the SVD of $T$. Then $T = (U V^*) (V \Sigma V^*)$ will suffice.
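A numerical sketch of this construction (assuming numpy): the factor $UV^*$ is unitary and $V\Sigma V^*$ is positive semidefinite, recovering $T=V|T|$ in the question's notation:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(T)       # T = U @ diag(s) @ Vh
V = Vh.conj().T

W = U @ Vh                        # the unitary factor U V*
P = V @ np.diag(s) @ Vh           # |T| = V Sigma V*, positive semidefinite

print(np.allclose(W @ P, T))                    # True
print(np.allclose(W.conj().T @ W, np.eye(4)))   # True: W is unitary
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))  # True: P >= 0
```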
Solution to Laplace equation in $\mathbb{R}^n$
Apart from the geometry of $\partial \Omega$, the boundary values $f$ would have to be very exceptional. Indeed, a harmonic function on $\mathbb R^n$ is real-analytic on $\mathbb R^n$, and this confers a lot of rigidity on the values it can attain on $\partial \Omega$. For example, let $\Omega$ be the unit disk in $\mathbb R^2$. Expand $f$ into Fourier series $f(\theta)\sim \sum_{n\in\mathbb Z} c_n e^{in\theta}$. (The series need not converge; so far this is a formal expansion.) Then in $\Omega$ the (unique) solution of the Dirichlet problem is given by $$u(r,\theta) = \sum_{n\in\mathbb Z} c_n r^{|n|} e^{in\theta} \tag{1}$$ If $u$ extends to a harmonic function on all of $\mathbb R^2$, then it must be represented by the series (1) on all of $\mathbb R^2$ (indeed, write $u$ as the real part of a holomorphic function; that function must be entire). This implies that the sequence $c_n$ is rapidly decaying on both ends: $$\lim_{|n|\to \infty} R^{|n|}c_n = 0 \quad \forall\ R>0 \tag{2}$$ This requires $f$ to be analytic and more. Conversely, if (2) holds, then (1) solves your problem in all of $\mathbb R^2$. If $\Omega$ does not have a nice boundary, the rigidity is still there ($f$ has to be very special), but it's no longer feasible to describe what "special" is, without tautologies.
Intersection of Simple Closed Curves in $\mathbb{R}P^2$
If I have two transversely intersecting $n$-dimensional submanifolds $A,B$ of $M^{2n}$, then (everything here is mod 2) the parity of the number of geometric intersections is given by the cup product $P([A])P([B])$ evaluated on the fundamental class $[M]$, where $P$ denotes the Poincaré isomorphism and $[A],[B],[M]$ are the fundamental classes. It is easy enough to argue (either via standard surgery arguments or ad hoc methods) that a curve of the second type represents the nontrivial loop in $\pi_1(\mathbb{R}P^2)$, hence under the mod 2 Poincaré isomorphism it represents the nontrivial class in $H^1 (\mathbb{R}P^2)$. Since we know the mod 2 cohomology ring of $\mathbb{R}P^2$ is truncated polynomial on this class, we deduce that the number of intersections is $1$ mod 2, as desired.
Convergence of series $A_n = \sqrt{\sum_{k=n}^{\infty} a_k} - \sqrt{\sum_{k=n+1}^{\infty}a_k} $ if series $a_n$ converges
Fix some sequence $(b_k)_{k\geqslant1}$ and define $c_k=b_k-b_{k+1}$ for every $k\geqslant1$. Then, for every $n\geqslant0$, $$ \sum_{k=1}^nc_k=b_1-b_{n+1}. $$ If the sequence $(b_k)$ converges, this implies that the series $\sum\limits_kc_k$ converges and that its sum is $b_1-b_\infty$, where $b_\infty=\lim\limits_{k\to\infty}b_k$. Applying this to $$ b_k=\sqrt{\sum_{i=k}^\infty a_i}, $$ one sees that the series $\sum\limits_kA_k$ converges and that its sum is $$ \sum_{k=1}^\infty A_k=\sqrt{\sum_{i=1}^\infty a_i}. $$ To prove that $a_n\ll A_n$, note that $$ a_n=A_n\cdot\left(b_n+b_{n+1}\right),\qquad b_n+b_{n+1}\to0. $$
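A quick numerical confirmation of the telescoping identity, say with $a_k=2^{-k}$ (so $\sum_{i\geqslant1}a_i=1$); a sketch using only the standard library:

```python
from math import sqrt

a = lambda k: 2.0**(-k)
tail = lambda n: sum(a(k) for k in range(n, 60))   # truncated tail sum
A = lambda n: sqrt(tail(n)) - sqrt(tail(n + 1))

print(sum(A(k) for k in range(1, 50)), sqrt(tail(1)))
# both approximately 1.0, as the telescoping argument predicts
```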
Is there a standard notation for the product from right to left?
I don't think there is such a notation. However, you won't need one, as you can reindex the product: $$\prod_{i=1}^n A_{n-i+1}$$ or equivalently $$\prod_{i=0}^{n-1} A_{n-i}$$
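For instance, with matrices (a sketch assuming numpy), this indexing produces the right-to-left product $A_n\cdots A_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
mats = [rng.standard_normal((2, 2)) for _ in range(4)]   # A_1, ..., A_4

prod = np.eye(2)
for i in range(1, len(mats) + 1):
    prod = prod @ mats[len(mats) - i]    # multiply by A_{n-i+1}

expected = np.linalg.multi_dot(mats[::-1])   # A_4 A_3 A_2 A_1
print(np.allclose(prod, expected))           # True
```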
$f: \mathbb{D} \to \mathbb{D}$ holomorphic, $f(\frac{1}{2}) + f(-\frac{1}{2}) = 0$. Prove $|f(0)| \leq \frac{1}{4}$
Define $h \colon \mathbb{D} \to \mathbb{D}$ by $h(z) = \frac{f(z)+f(-z)}{2}$ (an average of two values in the convex set $\mathbb{D}$); then $h$ is holomorphic, $h(0)=f(0)$, and $h$ has zeros at $\frac{1}{2}$ and $-\frac{1}{2}$. The same is true of $$g \colon z \mapsto \frac{z-\frac{1}{2}}{1 - \frac{1}{2}z}\cdot \frac{z + \frac{1}{2}}{1 + \frac{1}{2}z} = \frac{4z^2-1}{4-z^2}\,.$$ Moreover, we have $\lvert g(z)\rvert = 1$ for $\lvert z\rvert = 1$, and $g$ has no other zeros than the simple zeros at $\pm \frac{1}{2}$. It follows that $q \colon \mathbb{D} \to \overline{\mathbb{D}}$ is holomorphic, where $$q \colon z \mapsto \frac{h(z)}{g(z)}\,.$$ In particular, we have $$1 \geqslant \lvert q(0)\rvert = \biggl\lvert \frac{h(0)}{g(0)}\biggr\rvert = 4\lvert f(0)\rvert\,.$$
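One can check the key properties of $g$ numerically (a sketch using only the standard library): $|g|=1$ on the unit circle, and $|g(0)|=\frac14$ is the extremal value.

```python
import cmath

g = lambda z: (4*z**2 - 1) / (4 - z**2)

print(abs(g(0)))   # 0.25: the extremal value of |f(0)|
vals = [abs(g(cmath.exp(1j * 0.01 * k))) for k in range(629)]
print(min(vals), max(vals))   # both approximately 1.0 on |z| = 1
```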
How to Prove : $\frac{2}{(n+2)!}\sum_{k=0}^n(-1)^k\binom{n}{k}(n-k)^{n+2}=\frac{n(3n+1)}{12}$
Here’s a fairly straightforward combinatorial argument. Rewrite the equation as $$\sum_{k=0}^n(-1)^k\binom{n}{k}(n-k)^{n+2}=\frac{n(3n+1)(n+2)!}{24}\;.\tag{1}$$ The lefthand side of $(1)$ is an inclusion-exclusion calculation of the number of surjections from $[n+2]$ to $[n]$ (where as usual $[m]=\{1,\ldots,m\}$ for each $m\in\Bbb Z^+$). We now count these surjections in a different way. If $f:[n+2]\to[n]$ is a surjection, either there is a $k\in[n]$ such that $|f^{-1}[\{k\}]|=3$, or there are distinct $k,\ell\in[n]$ such that $|f^{-1}[\{k\}]|=|f^{-1}[\{\ell\}]|=2$; call these surjections Type $1$ and Type $2$, respectively. To construct a Type $1$ surjection we first choose $k$ in $n$ ways, then choose $f^{-1}[\{k\}]$ in $\binom{n+2}3$ ways, and then define $f$ on $[n+2]\setminus f^{-1}[\{k\}]$ in $(n-1)!$ ways, for a total of $$n(n-1)!\binom{n+2}3=\frac{n(n+2)!}{3!}$$ Type $1$ surjections. To construct a Type $2$ surjection we first choose $k$ and $\ell$ in $\binom{n}2$ ways, then choose the preimage of the smaller of $k$ and $\ell$ in $\binom{n+2}2$ ways, then choose the preimage of the larger of $k$ and $\ell$ in $\binom{n}2$ ways, and finally define the rest of $f$ in $(n-2)!$ ways, for a total of $$\binom{n}2^2\binom{n+2}2(n-2)!=\frac{n(n-1)(n+2)!}8$$ Type $2$ surjections. The total number of surjections is therefore $$\frac{n(n+2)!}6+\frac{n(n-1)(n+2)!}8=\frac{n(3n+1)(n+2)!}{24}\;,$$ as desired.
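The identity $(1)$ is easy to confirm for small $n$; a quick sketch:

```python
from math import comb, factorial

def lhs(n):
    return sum((-1)**k * comb(n, k) * (n - k)**(n + 2) for k in range(n + 1))

def rhs(n):
    return n * (3*n + 1) * factorial(n + 2) // 24

print(all(lhs(n) == rhs(n) for n in range(1, 9)))   # True
```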
In how many ways can $5$ teaching posts be filled if $2$ posts are reserved for American candidates?
This is because the number of ways you can select $2$ American candidates out of $7$ is $\binom{7}{2}=M$, and the number of ways you can fill the remaining posts from the other candidates is $\binom{23-7}{5-2}=\binom{16}{3}=N$. Thus, the total number of ways is $M\cdot N = \binom{7}{2}\cdot\binom{16}{3}$.
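A one-line check of the count:

```python
from math import comb

print(comb(7, 2), comb(16, 3), comb(7, 2) * comb(16, 3))
# 21 560 11760
```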
Problem understanding Quasi-Newton method: BFGS
Yes, you put that as $B$ and then solve $s_k = \arg\min_{\Delta x} \left\{f(x_k) + \nabla f(x_k)^{\mathrm T} \,\Delta x + \frac{1}{2} \Delta x^{\mathrm T} B \,\Delta x\right\}$. Since $B$ is positive definite, the solution is obtained by setting the derivative to $0$: $s_k = -B_k^{-1} \nabla f(x_k)$.
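In code, the step is a single linear solve; a minimal sketch (assuming numpy, with made-up numbers for $B_k$ and $\nabla f(x_k)$):

```python
import numpy as np

def quasi_newton_step(grad, B):
    # minimiser of the quadratic model: set its gradient B s + grad to zero
    return np.linalg.solve(B, -grad)

B = np.array([[2.0, 0.3],
              [0.3, 1.0]])      # current positive-definite approximation B_k
g = np.array([1.0, -2.0])       # gradient at x_k
print(quasi_newton_step(g, B))  # s_k = -B_k^{-1} grad f(x_k)
```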
Request help on proof of irrationality of $\sqrt {8}$.
Why don't you just use $2\sqrt{2} = \sqrt{8}$? Once you do this just prove by contradiction that if $\sqrt{8}$ is rational, so must $\sqrt{2}$ be.
Hölder type inequality
Use the generalized Hölder inequality with counting measure.