a proof concerning fundamental group and lifting of paths
Yes, exactly, but its lift is the given path $\tilde f:e_0\leadsto e_1$ (because $f=p\circ\tilde f$) and its endpoint is $e_1$.
Applying Itô Lemma to Stochastic Differential Equation
As usual, you consider $Y_t=\ln Z_t$ to get $$ dY_t=\frac{dZ_t}{Z_t}-\frac12\frac{d\langle Z\rangle_t}{Z_t^2}=-\theta_t\,dB_t-\frac12\theta_t^2\,dt $$ which you now can integrate.
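As an added numerical sketch (not part of the original answer): assuming the underlying SDE is $dZ_t=-\theta_t Z_t\,dB_t$ (consistent with the $dZ_t/Z_t$ term above) and taking a hypothetical constant $\theta_t\equiv\theta$, an Euler-Maruyama simulation can be compared with the closed form $Z_0\exp(-\theta B_T-\tfrac12\theta^2T)$ obtained by integrating $dY_t$.

    import numpy as np

    # Sketch: simulate dZ = -theta * Z dB (assumed form) and compare with the
    # closed form Z_0 * exp(-theta*B_T - 0.5*theta^2*T) from integrating dY.
    rng = np.random.default_rng(0)
    theta, T, n_steps, Z0 = 0.7, 1.0, 100_000, 1.0
    dt = T / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), n_steps)

    Z = Z0
    for db in dB:
        Z += -theta * Z * db        # Euler-Maruyama step

    B_T = dB.sum()
    print(Z, Z0 * np.exp(-theta * B_T - 0.5 * theta**2 * T))  # close to each other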
Is there a continuous bijection from $\mathbb{R}$ to $\mathbb{R}^2$
Suppose $f(x)$ were such a function. Note that each $A_n = f([-n,n])$ is a closed (actually compact) set, with $\cup A_n = {\mathbb R}^2$. By the Baire category theorem, there is one such $A_n$ that contains a closed ball $B$. Since $[-n,n]$ is compact, the image of any relatively closed subset of $[-n,n]$ is compact and thus closed. Hence $f^{-1}$ is continuous when restricted to $A_n$, and thus when restricted to $B$. So in particular $f^{-1}(B)$ is a connected subset of ${\mathbb R}$. Since all connected subsets of ${\mathbb R}$ are intervals, $f^{-1}(B)$ is a closed interval $I$. Let $x$ be any point in the interior of $B$ such that $f^{-1}(x)$ is not an endpoint of $I$. Then $B - \{x\}$ is still connected, but $f^{-1}(B - \{x\})$ is the union of two disjoint intervals, which is not connected. Since $f^{-1}$ when restricted to $B - \{x\}$ is continuous, you have a contradiction.
Why is $\operatorname{Var}(X_{(1)}) = \operatorname{Var}(X_{(n)})$ for i.i.d $X_1, \ldots, X_n \sim U(0,1)$?
Let $Y_k=1-X_k$. Then $(Y_1,\dots,Y_n)$ has the same joint distribution as $(X_1,\dots,X_n)$ (define $Y_{(1)}$ and $Y_{(n)}$ using the same min and max notation). Therefore $\operatorname{var}(Y_{(1)})=\operatorname{var}(X_{(1)})$. Meanwhile $Y_{(1)}=1-X_{(n)}$, and adding a constant does not change the variance, so $\operatorname{var}(Y_{(1)})=\operatorname{var}(X_{(n)})$. As a result $\operatorname{var}(X_{(n)})=\operatorname{var}(Y_{(1)})=\operatorname{var}(X_{(1)})$.
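An added Monte Carlo check of the equality:

    import numpy as np

    # Estimate Var(min) and Var(max) for n i.i.d. U(0,1) samples; they should agree.
    rng = np.random.default_rng(1)
    n, reps = 5, 200_000
    x = rng.uniform(size=(reps, n))
    print(x.min(axis=1).var(), x.max(axis=1).var())  # both ~ n/((n+1)^2 (n+2))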
If $M$ is a $kG$-module, $H\leq G$, why is the module of coinvariants isomorphic to $k(G/H)\otimes_{kG} M$?
The right action of $G$ on $G/H$ is given by inverse left multiplication, that is, $gH \cdot g' := g'^{-1}gH$. Given that action, the desired isomorphism is given by $$k(G/H) \otimes_{kG} M \to M_H,\quad gH \otimes m \mapsto g^{-1}m + \langle m' - hm' \:|\: m' \in M, h \in H\rangle.$$ Of course, some things have to be checked here. First of all, the map is well-defined, that is, different representatives for the coset $gH$ give the same element in $M_H$ and also $gH \otimes g'm$ maps to the same element as $g'^{-1}gH \otimes m$. Both things can be easily verified. To show that the given map is an isomorphism, we can consider the map $$ M_H \to k(G/H) \otimes_{kG} M, \quad m + \langle m' - hm' \:|\: m' \in M, h \in H\rangle \mapsto 1H \otimes m$$ This map can be shown to be well-defined too. Now, all that is left to do is to show that these two maps are inverses of one another. Also note that instead of working with $G/H$ you could have considered the set of left cosets $H\backslash \! G$ with the usual right action. That might be a bit more intuitive since you immediately know the right action. Also note that the isomorphisms above are very similar to and generalize the isomorphism $kG \otimes_{kG} M \cong M$ (the case where $H = \{1\}$).
Finding stationary points without being able to solve for x?
The stationary points are those points $x$ such that $f'(x)=0$. Therefore: there are no stationary points if $\lambda>0$; there is exactly one stationary point when $\lambda=0$, namely $x=0$; and there are exactly two stationary points when $\lambda<0$, namely $x=\pm\sqrt{-\frac{2\lambda}3}$.
Gram-Schmidt method to get a basis for $P_3$
Gram-Schmidt. Pick a vector to make it a candidate for your first basis vector: $w_0 = 1$. Normalize it. Since $\|w_0\| = 1$, that step is already done: $e_0 = w_0 = 1$. Your second basis vector: $w_1 = x$. Subtract the projection of $w_1$ onto $e_0$: $e_1^* = x - \langle e_0,x\rangle e_0$ $e_1^* = x - \int_0^1 x \ dx = x-\frac 12$ Normalize it... $e_1 = \frac {e_1^*}{\|e_1^*\|}$ $\|e_1^*\|^2 = \langle e_1^*,e_1^*\rangle = \int_0^1 (x-\frac 12)^2 \ dx\\ \int_0^1 x^2 -x + \frac 14\ dx = \frac 13 - \frac 12 + \frac 14 = \frac 1{12}\\ e_1 = \sqrt {12}\, x - \sqrt 3$ $w_2 = x^2\\ e_2^* = w_2 - \langle e_0,w_2\rangle e_0 - \langle e_1,w_2\rangle e_1$ Normalize it... lather, rinse, repeat.
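An added symbolic sketch of the same procedure, assuming the inner product $\langle f,g\rangle=\int_0^1 fg\,dx$ and starting from the monomials $1,x,x^2,x^3$:

    from sympy import symbols, integrate, sqrt, simplify

    x = symbols('x')
    inner = lambda f, g: integrate(f * g, (x, 0, 1))   # assumed inner product on [0,1]

    basis, ortho = [1, x, x**2, x**3], []
    for w in basis:
        e_star = w - sum(inner(e, w) * e for e in ortho)              # subtract projections
        ortho.append(simplify(e_star / sqrt(inner(e_star, e_star))))  # normalize
    print(ortho)   # the second entry equals sqrt(12)*x - sqrt(3), matching the answer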
show that $(A'A + B'B)^{-1}$ is a g inverse of A'A
As you have already shown, $C(A'A)\subseteq C(A')$ and $C(B'B)\subseteq C(B')$. So $C(A')\cap C(B')=\{0\}$ implies $C(A'A)\cap C(B'B)=\{0\}$. In the same way, you can prove that $$C(A'A(A'A+B'B)^{-1}B'B)\subseteq C(A'A)$$ and $$C(A'A(A'A+B'B)^{-1}B'B)\subseteq C(B'B)$$ (using the symmetry of $A'A$ and $B'B$). Thus $$C(A'A(A'A+B'B)^{-1}B'B)\subseteq C(A'A)\cap C(B'B) = \{0\},$$ i.e. $A'A(A'A+B'B)^{-1}B'B = 0$.
Need help discerning category isomorphism
The definitions of the categories are fundamentally flawed. I suggest that you go back to the definition of a category and try to understand why what you've written down doesn't work out. A morphism is an arrow from one object to another, so "its only morphism is addition modulo $3$" doesn't make sense: addition modulo $3$ assigns a third number to two numbers, which isn't what we need to define a morphism. The same for path composition, which assigns a third path to two paths. Here are three categories that have some similarity with what you've written; perhaps one or two of them are what you were trying to get at. Category $A'$ has one object, the set $\mathbb Z_3=\{0,1,2\}$. For every $n\in\mathbb Z_3$, there is one morphism from $\mathbb Z_3$ to $\mathbb Z_3$, namely the function that sends $x$ to $x+n\bmod3$. Composing the morphisms corresponding to $n_1$ and $n_2$ yields the morphism corresponding to $n_1+n_2\bmod3$. The identity morphism for the only object is the one corresponding to $n=0$. Category $A''$ has three objects, $0$, $1$ and $2$. There is one morphism for each pair of objects. Composition yields the only morphism between the appropriate objects, and the identity morphism for an object is the only morphism from that object to itself. Category $B'$ has three objects, $a$, $b$ and $c$. There is one morphism from $x$ to $y$ for each path from $x$ to $y$ in the complete graph on $\{a,b,c\}$. Composition of morphisms is defined by composition of the corresponding paths. The identity morphism for an object is the path beginning and ending at that object without any edges. Now we can define a functor $F$ from $B'$ to $A''$ that assigns to the objects $a$, $b$, $c$ the objects $0$, $1$, $2$, respectively, and to the morphism corresponding to a path from $x$ to $y$ the only morphism from $F(x)$ to $F(y)$. We can also define a functor $G$ from $A''$ to $A'$ that assigns to each object in $A''$ the only object in $A'$, and to the morphism from $x$ to $y$ the morphism that maps $x$ to $y$. We can also define the composition $H=G\circ F$ from $B'$ to $A'$.
What do I do wrong with Möbius method of inversion?
You can't use Mobius inversion here because you need $$p_{2n}(x)=\prod_{d\mid n} q_d(x)^{\mu(n/d)}$$ for all $n$, but it is not true for $n=1$. If you defined $r_n(x)=p_{2n}(x)$ for $n>1$ and $r_1(x)=p_{2}(x)/2=q_1(x)$, then you'd have: $$r_n(x)=\prod_{d\mid n} q_d(x)^{\mu(n/d)}$$ for all $n$. Then you'd get that: $$q_n(x)=\prod_{d\mid n} r_d(x) = \frac 1 2 \prod_{d\mid n} p_{2d}(x)$$ The mistake is thinking that the statement of MI (additive form) is: $$\left(g(n)=\sum_{d\mid n} f(d)\right)\iff\left(f(n)=\sum_{d\mid n} \mu\left(\frac n d\right)g(d)\right)$$ The correct statement is: $$\left(\forall n:g(n)=\sum_{d\mid n} f(d)\right)\iff\left(\forall n:f(n)=\sum_{d\mid n} \mu\left(\frac n d\right)g(d)\right)$$ One of my favorite fake proofs makes this error.
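An added numerical illustration of the correct (additive) statement, using an arbitrary test function $f$:

    from sympy import divisors
    from sympy.ntheory import mobius

    # Additive Mobius inversion: if g(n) = sum_{d|n} f(d) for ALL n,
    # then f(n) = sum_{d|n} mu(n/d) g(d).
    f = {n: n * n + 1 for n in range(1, 31)}            # arbitrary test data
    g = {n: sum(f[d] for d in divisors(n)) for n in f}
    recovered = {n: sum(mobius(n // d) * g[d] for d in divisors(n)) for n in f}
    print(all(recovered[n] == f[n] for n in f))          # True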
Multivariable calculus: hard problems with solutions
Usually Schaum's outlines are a good source for lots of problems with solutions, in this case Elliott Mendelson: Schaum's 3,000 Solved Problems in Calculus and Robert Wrede and Murray Spiegel: Schaum's Outline of Advanced Calculus come to mind. I don't know if that's hard enough, however :-)
Bounding the function $(-z)^{s-1}$ over the square with vertices $(\pm(2n+1) \pi,\pm(2n+1) \pi)$
$s = \sigma + it$ is fixed. When $z$ traverses the contour $C_n'$, the argument is constrained between $-\pi$ and $\pi$, therefore, you have the bound $$\lvert (-z)^{s-1}\rvert = \lvert z\rvert^{\sigma-1} \cdot e^{-t\cdot \arg (-z)} \leqslant e^{\pi \lvert t\rvert}\cdot \lvert z\rvert^{\sigma-1} \leqslant e^{\pi \lvert t\rvert}\cdot (20n)^{\sigma-1}.$$
Can I prove that 2n+1 = O(2n)?
For $n \geq 1$, $$2n+1 \leq 3n = \frac{3}{2}\cdot 2n.$$ Take $c=\frac{3}{2}$ and $n_0=1$. Also, for the record: writing things like $O(2n)$ is "morally wrong." The whole point of the $O(\cdot)$ notation and its cousins ($\Omega(\cdot)$, $\Theta(\cdot)$, and so on) is to hide the constants to be able to focus on the asymptotic growth.
How to use Kullback-Leibler Divergence if probability distributions have different support?
Statistically, Kullback-Leibler divergence measures the difficulty of detecting that a certain distribution, say H1, is true when it is thought initially that a certain other distribution, say H0, is true. It essentially gives the so-called Bahadur slope for this problem of discrimination (i.e. of testing). If, as in your example, each distribution has some support on a set for which the other distribution has no support, then perfect inference becomes possible and the divergence is legitimately infinite. The more interesting case is where one support is fully contained in the other. In that case one of the hypotheses can be confirmed with certainty, while the other can only be confirmed exponentially fast, so the divergence will be infinite only in one direction.
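An added numeric sketch with two hypothetical discrete distributions, making the support issue concrete:

    import numpy as np

    def kl(p, q):
        """KL(p || q) = sum p_i log(p_i / q_i), with 0*log(0/q) treated as 0."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        with np.errstate(divide='ignore'):
            return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    p = [0.5, 0.5, 0.0]      # no mass on the third point
    q = [0.25, 0.25, 0.5]    # support strictly larger than p's
    print(kl(p, q))          # finite: supp(p) is contained in supp(q)
    print(kl(q, p))          # inf: q puts mass where p has none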
Is it true that $A_x$ has even order for all $x\in G$?
I would give a proof of the converse of Derek Holt's conclusion. If $G$ is of odd order then there is no element of even order. Therefore $A^*_x = \{y : \gcd(o(x),o(y)) = 1 \text{ or prime} \}$ has, for all $x \in G$, the properties that 1. $e \in A^*_x$; 2. $y \in A^*_x \leftrightarrow y^{-1} \in A^*_x$. Note that $y \neq y^{-1}$ except for $y = e$, since $o(y)$ is never even. Therefore $A^*_x$ is automatically of odd order for all $x$. If $o(x) = 1$ or a prime, then $x \in A^*_x$ and $A_x = A^*_x \setminus \{x\}$, so $A_x$ is of even order. Otherwise $x \notin A^*_x$, therefore $A_x = A^*_x$ is of odd order.
Condition for tangency
Firstly you can see a good thing happening in this question: we can substitute as follows. Let $x^{1/3}=\sin\theta$ and $y^{1/3}=\cos\theta$, so $x=\sin^3\theta$ and $y=\cos^3\theta$. Then $\frac{dx}{d\theta}=3\sin^2\theta\cos\theta$, $\frac{dy}{d\theta}=-3\sin\theta\cos^2\theta$, and hence $\frac{dy}{dx}=-\cot\theta$. Therefore $$y-\cos^3\gamma=-\cot\gamma\,(x-\sin^3\gamma)$$ is the tangent equation (here $\gamma$ is the fixed value of the parameter $\theta$ at the point of tangency). Now put $(a,0)$ and $(0,b)$ into this equation. You will get $a=\sin\gamma\,[\sin^2\gamma+\cos^2\gamma]=\sin\gamma$, and similarly you will get $b=\cos\gamma$. Therefore $a^2+b^2=1$ is the required condition.
How do the products of two orthogonal projection matrices relate to their respective column spaces?
It does not make much sense in this generality to carry around the matrices for their column spaces; we should just use orthogonal projection onto a subspace. So, let now $A$ and $B$ rather denote subspaces (what you denoted by $C(A)$ and $C(B)$). [The properties that $P_A^2=P_A^*=P_A$ and ${\rm ran\,}P_A=C(P_A)=C(A)$ already uniquely determine the projection $P_A$; no need to use its explicit form composed of variants of the matrix $A$.] ii. Of course, if $A=B$, then $P_A=P_B$, and $P_A$ commutes with $P_B$. The same holds if $A\subseteq B$ or $B\subseteq A$, or if $A\perp B$. i. This is not true as it stands, e.g. if $A\perp B$ then $P_AP_B=P_BP_A=0$. Moreover, if $A=U\oplus A'$ and $B=U\oplus B'$ with $U=A\cap B$ and $A'\perp B'$, then we again have $P_AP_B=P_BP_A=P_U$. iii. I think you are right in that we should consider the other direction: if $P_AP_B=P_B$, then for $b\in B$ we have $b=P_Bb=P_AP_Bb=P_Ab$, so $b\in{\rm ran\,}P_A=A$. The other equation $P_AP_B=P_A$ would mean a similar relation for the row spaces of the original matrices, I guess. iv. Yes, the converse of iii. also holds: if $B\subseteq A$, then with arbitrary $x\in B\oplus B^\perp,\ x=b+y$, we have $$P_AP_Bx=P_Ab=b=P_Bx\,.$$
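To illustrate iii. and iv. numerically, here is a small added sketch (the subspaces are hypothetical, generated at random): with $B\subseteq A$ one indeed gets $P_AP_B=P_BP_A=P_B$.

    import numpy as np

    def proj(cols):
        """Orthogonal projector onto the column space of `cols`."""
        Q, _ = np.linalg.qr(cols)
        return Q @ Q.T

    rng = np.random.default_rng(0)
    A_basis = rng.normal(size=(5, 3))                    # A = span of 3 random vectors in R^5
    B_basis = A_basis[:, :2] @ rng.normal(size=(2, 2))   # B = 2-dim subspace of A

    P_A, P_B = proj(A_basis), proj(B_basis)
    print(np.allclose(P_A @ P_B, P_B), np.allclose(P_B @ P_A, P_B))  # True True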
Where do the variables of a quadratic form live?
A quadratic form is, by definition, a function $x \mapsto f(x,x)$, where $f$ is a symmetric bilinear form on the vector space you're working with over the field that you specify. The coefficients $a_{ij}$ correspond to the entries of the representative matrix of the quadratic form, which is also the representative matrix of the symmetric bilinear form which induces our quadratic form. The representative matrix of a symmetric bilinear form is a symmetric matrix.
Can I assume that a biologist will know what "lhs" and "rhs" mean? Or what are some other ways of indicating the left/right hand sides of an equation?
If you even have to think if your audience knows what a particular abbreviation means, then you must explain it. In any case, writing LHS/RHS in anything but very informal contexts seems simply unacceptable to me.
Strictly diagonally dominant matrices are non singular
The proof in the PDF (Theorem 1.1) is very elementary. The crux of the argument is that if $M$ is strictly diagonally dominant and singular, then there exists a vector $u \neq 0$ with $$Mu = 0.$$ $u$ has some entry $u_i$ of largest magnitude, which (after scaling $u$ if necessary) we may take to satisfy $u_i > 0$ and $|u_j|\le u_i$ for all $j$. Then \begin{align*} \sum_j m_{ij} u_j &= 0\\ m_{ii} u_i &= -\sum_{j\neq i} m_{ij}u_j\\ m_{ii} &= -\sum_{j\neq i} \frac{u_j}{u_i}m_{ij}\\ |m_{ii}| &\leq \sum_{j\neq i} \left|\frac{u_j}{u_i}m_{ij}\right|\\ |m_{ii}| &\leq \sum_{j\neq i} |m_{ij}|, \end{align*} using $|u_j/u_i|\le 1$ in the last step, a contradiction. I'm skeptical you will find a significantly more elementary proof. Incidentally, though, the Gershgorin circle theorem (also described in your PDF) is very beautiful and gives geometric intuition for why no eigenvalue can be zero.
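As an added numerical illustration (not part of the original proof): force strict diagonal dominance on a random matrix and check that it is nonsingular, as the argument (and Gershgorin's theorem) predicts.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    M = rng.normal(size=(n, n))
    # Force strict diagonal dominance: |m_ii| > sum_{j != i} |m_ij| on every row.
    off_row_sums = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
    np.fill_diagonal(M, off_row_sums + 1.0)

    print(np.min(np.abs(np.linalg.eigvals(M))))   # >= 1: no eigenvalue at the origin
    print(np.linalg.matrix_rank(M) == n)          # True: M is nonsingular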
Maximum principle and initial conditions (easy)
The maximum and minimum principles for elliptic and parabolic equations belong to the class of what is called a priori estimates. To be more precise, these types of results are of the form: If a solution were to exist for a certain equation, and if the coefficients of the equation were to satisfy certain properties, then the solution must also satisfy a certain property. One of the nice things about these types of estimates/principles/theorems/lemmas is that one can easily adapt them to nonlinear cases from the linear case results. In your case: Assume that $u$ is a fixed given function which exists as a solution to your equation. For that $u$, define the coefficient functions $A(x,t) = a(u)$ and $B(x,t) = b(u)$ and $C(x,t) = c(u)$. Then clearly $u$ also solves the linear equation $$ u_t(x,t) = A(x,t)u_{xx} + B(x,t) u_x + C(x,t) $$ Hence if the functions $A,B,C$ can be assumed to satisfy also the conditions for the classical maximum principle for linear parabolic equations, $u$, now being a solution to the linear equation, must also follow the maximum principle. That is to say: if the maps $u\mapsto a(u)$ and $u\mapsto c(u)$ satisfy the condition that $a(u)$ is uniformly positive and $c(u)$ is non-positive, the classical maximum principle will apply for any classical solution of the quasilinear equation just as well as how it applies to the linear equation. Remark: Philosophically this is another instance of the method of freezing coefficients. For qualitative and quantitative results about a (presumed to exist) solution to a quasilinear equation, if one can make certain assumptions on how the coefficients behave, one can treat it just like a solution to an appropriate linear equation. It is usually in the step for proving existence of solutions that the quasilinearity becomes a problem.
Transformations problem
We have $Y=y$ if and only if $n-X=y$ if and only if $X=n-y$. From the given probability distribution function of $X$, it follows that $$\Pr(Y=y)=\binom{n}{n-y}p^{n-y}(1-p)^{n-(n-y)}=\binom{n}{y}p^{n-y}(1-p)^y.$$ The above is a very mechanical approach. Better, I think, is to note that $X$ is the number of heads when a coin that has probability $p$ of giving head is tossed $n$ times. Then $Y=n-X$ is the number of tails. Since the probability of tail is $1-p$, we have $$\Pr(Y=y)=\binom{n}{y}(1-p)^y p^{n-y}.$$
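A quick check of the resulting pmf (an added sketch using hypothetical values $n=10$, $p=0.3$):

    import numpy as np
    from scipy.stats import binom

    n, p = 10, 0.3
    y = np.arange(n + 1)
    # pmf of Y = n - X, where X ~ Binomial(n, p), versus Binomial(n, 1 - p)
    pmf_Y = binom.pmf(n - y, n, p)
    print(np.allclose(pmf_Y, binom.pmf(y, n, 1 - p)))   # True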
Simplifying $\tan^{-1} {\cot(\frac{-1}4)}$
Note that $$\cot \left(-\frac14\right)=-\tan\left(\frac{\pi}{2}-\frac14\right)=\tan\left(\frac14-\frac{\pi}{2}\right)$$ thus $$\tan^{-1}\cot \left(-\frac14\right)=\tan^{-1}\tan\left(\frac14-\frac{\pi}{2}\right)=\frac14-\frac{\pi}{2}$$
Is there a way to manipulate a hyperbola like $\frac{x^2 - 4}{x^2 + 4}$ into a readily sketch-able form?
Why don't you do the same procedure: $$\frac{x^2-4}{x^2+4}=1-\frac8{x^2+4}$$ The last term is a Lorentzian function. So this will be like a horizontal line at $y=1$, but has a dip around $0$ to $y=-1$.
What is the absolute value of the following?
Note $$|e^{-jkd\cos\theta/2} + e^{j\phi}e^{jkd\cos\theta/2}|^2$$ $$=(e^{-jkd\cos\theta/2} + e^{j\phi}e^{jkd\cos\theta/2}) (e^{jkd\cos\theta/2} + e^{-j\phi}e^{-jkd\cos\theta/2})$$ $$=1+ e^{j\phi}e^{2jkd\cos\theta/2} + e^{-j\phi}e^{-2jkd\cos\theta/2} +1$$ $$=2+ 2\cos(\phi+2kd\cos\theta/2) $$ Thus, the absolute value is $$\frac{\sqrt2}r\sqrt{1+\cos(\phi+2kd\cos\theta/2) }$$
If $(x'_n)$ is Cauchy in the quotient space $X/Y$, then the corresponding sequence $(x_n)$ may not be Cauchy in $X$
Let $X = \mathbb{R}^2$ and $Y = \{(0,y) \mid y \in \mathbb{R}\}$. It's not hard to see that $X/Y$ is isomorphic to $\mathbb{R}$. Let $p : X \to X/Y$ denote the quotient map. Now suppose $(x_n)$ is Cauchy in $\mathbb{R}$ and observe that the sequence $((x_n,n))$ is not Cauchy in $\mathbb{R}^2$. However, $p(x_n,n) = x_n$ and we know that $(x_n)$ is Cauchy.
Prove that $(a_1-b_1)^2(a_2-b_2)^2\cdots (a_n-b_n)^2$ is an even number.
Note that it suffices to show that $a_i - b_i$ is even for some $i$. Using $$\begin{split} (a_1-b_1) + (a_2 - b_2) +\cdots + (a_n - b_n) &= (a_1+ \cdots +a_n)-(b_1+ \cdots +b_n) \\ &=(1+\cdots +n) -(1+\cdots + n)\\ &= 0. \end{split}$$ As $n$ is odd, one of the $a_i - b_i$ must be even (since adding $n$ odd terms would give you an odd number, not $0$).
Why isn't $\{(1, 1), (1, 2), (2, 1)\}$ transitive
$R_2$ isn't transitive because $(2,1) \in R_2$ and $(1,2) \in R_2$, but $(2,2) \notin R_2$. [Here, I've taken $(x,y) = (2,1)$, $(y,z) = (1,2)$, $(x,z) = (2,2)$.] Another example: $R_1$ isn't transitive because $(3,4) \in R_1$ and $(4,1) \in R_1$, but $(3,1) \notin R_1$.
How can I prove this statement about the BC Lemma?
Independence is not required for this. Let $B_n=A_n\setminus (A_1\cup A_2\cup \cdots \cup A_{n-1})$. If possible let the conclusion be false. Then $c \equiv P (\cup_n A_n) <1$. It is easy to verify that the events $B_n$ are disjoint and their union is same as $\cup_n A_n$. Hence $\sum_n P(B_n)<1$. Now $\sum_n P(A_n|\cap_{k<n}A_k^{c}) \leq \sum_n \frac {P(B_n)} {1-P(A_1\cup A_2 \cup \cdots )}=\frac c {1-c} <\infty$.
Calculate $\int_{-\infty}^{\infty}\;\left( \frac{x^2}{1+4x+3x^2-4x^3-2x^4+2x^5+x^6}\right) \;dx$
Let $F(x) = \frac{x^2}{P(x)}$ where $$P(x) = x^6+2x^5-2x^4-4x^3+3x^2+4x+1 = (x^3+x^2-2x-1)^2 + (x^2+x)^2$$ Change variable to $u = \frac{1}{x+1} \iff x = \frac{1}{u}-1$. The integral at hand becomes $$\int_{-\infty}^\infty F(x) dx = \left(\int_{-\infty}^{-1^{-}} + \int_{-1^{+}}^\infty\right) F(x) dx = \left(\int_{0^{-}}^{-\infty} + \int_{+\infty}^{0^{+}}\right) F\left(\frac{1}{u} - 1\right)\left(-\frac{du}{u^2}\right)\\ = \int_{-\infty}^\infty \frac{1}{u^2} F\left(\frac{1}{u}-1\right) du $$ By direct substitution, we have $$\frac{1}{u^2}F\left(\frac{1}{u}-1\right) = \frac{(u^2-u)^2}{u^6-2u^5-2u^4+4u^3+3u^2-4u+1} = \frac{(u^2-u)^2}{(u^3-u^2-2u+1)^2+(u^2-u)^2}$$ Notice the function defined by $$g(u) \stackrel{def}{=} \frac{u^3-u^2-2u+1}{u^2-u} = u - \frac{1}{u}-\frac{1}{u-1}$$ has the form to which Glasser's Master Theorem applies, so we get $$\int_{-\infty}^\infty F(x) dx = \int_{-\infty}^\infty \frac{du}{g(u)^2 + 1} = \int_{-\infty}^\infty \frac{dx}{x^2+1} = \pi $$ NOTE Please note that the statement about Glasser's Master theorem in the above link is slightly off. The coefficient $|\alpha|$ in front of $x$ there needs to be $1$. Otherwise, there will be an extra scaling factor on the RHS of the identity. When in doubt, please consult the original paper by Glasser, Glasser, M. L. "A Remarkable Property of Definite Integrals." Math. Comput. 40, 561-563, 1983. and an online copy of that paper can be found here.
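An added numerical cross-check of the final value:

    import numpy as np
    from scipy.integrate import quad

    P = lambda x: x**6 + 2*x**5 - 2*x**4 - 4*x**3 + 3*x**2 + 4*x + 1
    F = lambda x: x**2 / P(x)

    val, err = quad(F, -np.inf, np.inf, limit=200)
    print(val, np.pi)   # both ~ 3.14159...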
Proving that $\lfloor a^2/2 \rfloor$ is even for all $a\in\mathbb{Z}$.
Do it by cases. Either $a$ is even, or it’s odd. If $a$ is even, write $a=2n$; then $a^2/2=2n^2$ is certainly even. If $a$ is odd, write $a=2n+1$; then $$\left\lfloor\frac{a^2}2\right\rfloor=\left\lfloor\frac{4n^2+4n+1}2\right\rfloor=2n^2+2n=2(n^2+n)$$ is also even. Don’t shy away from case-by-case arguments; sometimes they’re the most straightforward way to prove a result.
Poisson bracket proof
Going from the first line to the second, they've just used the product rule on the first factor of each term. When they say "repeat this for the second entry", this is what they mean. The computation they showed gave an equation where, on the RHS, they "pulled out" a function from the first slot of the bracket. They want you to do an analogous thing to "pull out" a factor from the second slot (this will mean doing the product rule on the second factors in line 1 and grouping terms). As you say, we could do all of this in one fell swoop rather than slot by slot. Can you take it from here?
Given the second derivatives of y and x wrt. a third variable, what is the second derivative of y wrt. x?
What you are doing is absolutely wrong. The chain rule means: first differentiate the term with respect to whatever extra variable you have, then multiply by the derivative of that variable with respect to $x$. Applying the chain rule, $$\frac{dy}{dx}=\frac{dy}{dt} \cdot \frac{dt}{dx}=\frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$ Differentiating again, you get $$\frac{d^2y}{dx^2}=\frac{\frac{dx}{dt}\frac{d^2y}{dt^2}-\frac{d^2x}{dt^2}\frac{dy}{dt}}{({\frac{dx}{dt}})^2}\cdot{\frac{dt}{dx}}$$
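As an added sanity check, here is a small symbolic sketch with the hypothetical parametrization $x=t^2$, $y=t^3$ (so that $y=x^{3/2}$ can also be differentiated directly):

    import sympy as sp

    t, X = sp.symbols('t X', positive=True)
    x, y = t**2, t**3          # so y = X**(3/2) on this branch

    xd, yd = sp.diff(x, t), sp.diff(y, t)
    xdd, ydd = sp.diff(x, t, 2), sp.diff(y, t, 2)

    # The formula above: d2y/dx2 = (x'*y'' - x''*y') / (x')**3
    formula = sp.simplify((xd*ydd - xdd*yd) / xd**3)

    # Direct differentiation of y = X**(3/2) with respect to X
    direct = sp.diff(X**sp.Rational(3, 2), X, 2).subs(X, t**2)
    print(sp.simplify(formula - direct))   # 0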
Finding the transformation matrix of a linear map
You have $$\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right).\left( \begin{array}{cc} 1 & -1 \\ 2 & -1 \\ \end{array} \right)=\left( \begin{array}{cc} 0 & 2 \\ -1 & 1 \\ \end{array} \right)$$ so $$\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right)=\left( \begin{array}{cc} 0 & 2 \\ -1 & 1 \\ \end{array} \right).\left( \begin{array}{cc} 1 & -1 \\ 2 & -1 \\ \end{array} \right)^{-1}=\left( \begin{array}{cc} -4 & 2 \\ -1 & 0 \\ \end{array} \right)$$ in agreement with the solution you wrote
The $2^{nd}$, $4^{th}$ and $9^{th}$ terms of an AP
Hint: $$\dfrac AC=\dfrac BD=\dfrac{A-B}{C-D}$$ for $A\ne B$
Computing of $\int_{-1}^1\frac{e^{ax}dx}{\sqrt{1-x^2}}, \: a \in \mathbb{R}$
By setting $x=\sin\theta$ we have $$ I(a)=\int_{-1}^{1}\frac{e^{ax}}{\sqrt{1-x^2}}\,dx = \int_{-\pi/2}^{\pi/2}\exp\left(a\sin\theta\right)\,d\theta \tag{1}$$ and we may expand the exponential function as its Taylor series at the origin. Since the integral of an odd integrable function (like $\sin^3$ or $\sin^5$) over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ is zero, we get: $$\begin{eqnarray*} I(a) &=& \sum_{n\geq 0}\frac{a^{2n}}{(2n)!}\int_{-\pi/2}^{\pi/2}\sin^{2n}(\theta)\,d\theta\\ (2i \sin\theta=e^{i\theta}-e^{-i\theta})\qquad &=&\sum_{n\geq 0}\frac{\pi a^{2n}}{4^n (2n)!}\binom{2n}{n}\\&=&\pi\sum_{n\geq 0}\frac{a^{2n}}{4^n(n!)^2}=\color{red}{\pi\cdot I_0(a)}.\end{eqnarray*} \tag{2}$$
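An added numerical confirmation of $(2)$:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import i0

    a = 1.3
    # Integrate in the theta variable from (1): smooth integrand, no endpoint singularity.
    val, _ = quad(lambda th: np.exp(a * np.sin(th)), -np.pi/2, np.pi/2)
    print(val, np.pi * i0(a))   # both ~ 4.616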
Explanation of this percentile GRE problem.
There are only $61$ possible different scores, so it is clear that the $300$ scores are not divided into $100$ blocks of $3$ scores each. But it is possible (in fact, in reality quite likely) that people who got scores closer to the $50$th percentile are in blocks of scores much larger than those who are around the $80$th percentile. The distributions of a finite set of test scores also do not generally form a perfect bell curve. It can be expected that there will be a few places where more than the expected number of people got the same score, or where fewer than expected got the same score. So for there to be one particular score that only three people received is not too unusual. In any case, the question here is not asking you for a model of the test scores. It is not even asking how many people had the same score as Dominick. I believe it is asking whether one of the following numbers can be greater than, equal to, or less than the other: The number of other scores (numbers in the range $15$ to $75$, excluding the score that Dominick got) that were received by someone and are in the $80$th percentile. The number of people (other than Dominick) in the $80$th percentile.
If $a_{n+1}=a_n+1/a_n$ and $a_0 = 1$, show $H_n^4/a_n\to 0$ but $a_n/H_n^5\to \infty$
$a_{n+1}^2 = a_{n}^2 + 2 + \frac{1}{a_n^2}$ gives $a_n\geq \sqrt{2n-1}$ as well as $$ a_{n+1}^2\leq a_n^2+2+\frac{1}{2n-1}$$ from which $a_n\leq \sqrt{2n+O(\log n)}$. The given limits are simple to compute given these bounds, but $$ \lim_{n\to +\infty}\frac{H_n^4}{a_n} = 0,\qquad \lim_{n\to +\infty}\frac{a_n}{H_n^5}=\color{red}{+\infty}$$ since $H_n=\log(n)+O(1)$. Thanks to Clement C., here is a plot of $a_n$ versus $\sqrt{2n}$:
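Since the plot itself is not reproduced here, the following added sketch shows numerically how $a_n$ is squeezed between the two bounds:

    import numpy as np

    N = 100_000
    a = 1.0
    for n in range(N):
        a += 1.0 / a

    # The bounds above give sqrt(2N - 1) <= a_N <= sqrt(2N + O(log N));
    # the last value is a generous instance of that upper bound.
    print(np.sqrt(2 * N - 1), a, np.sqrt(2 * N + 2 * np.log(N)))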
Midpoint with a connected subset and continuous function
The continuous image of a connected set is connected. Thus $f(A)$ is a connected subset of $\mathbb{R}$, and hence must be an interval. Since $f(a), f(b) \in f(A)$, this interval must contain the interval $[f(a),f(b)]$. In particular $x \in f(A)$, i.e. there is $c \in A$ such that $f(c)=x$.
system of differential equation
Let $y_i=x_i/c_i$ and $g(y)=f(c_1y_1,\dots,c_ny_n)$. Then the system can be written as $$ \frac{dy_i}{dt}=g(y),\quad1\le i\le n.\tag{1} $$ If $\{y_1(t),\dots, y_n(t)\}$ is a solution, then there exist constants $k_2,\dots,k_n$ such that $y_i=y_1+k_i$, $2\le i\le n$. Conversely, if the real valued function $z(t)$ satisfies the differential equation $$ \frac{dz}{dt}=g(z,z+k_2,\dots,z+k_n)\tag{2} $$ for some constants $k_2,\dots,k_n$, then $y=\{z,z+k_2,\dots,z+k_n\}$ is a solution of (1). For any choice of constants $k_2,\dots,k_n$ and $\zeta\in\mathbb{R}$, since $g$ is smooth, equation (2) has a unique solution such that $z(0)=\zeta$. This gives you an infinite number of different solutions of (1).
Infinite Collection of Sets with These Properties
Hint: use the Fundamental Theorem of Arithmetic and Euclid's proof that there are infinitely many primes. Good luck!
Nonlinear system, how to find Lyapunov
We can find a Liapunov function around the equilibrium point $(1,1)$ so that it can be shown to be asymptotically stable. Let $V(x_1,x_2)=a(x_1-1)^2+b(x_2-1)^2$. Then $$\dot{V}=2a(x_1-1)\dot{x_1}+2b(x_2-1)\dot{x_2}\\ =5ax_1-2bx_1-4ax_1x_2-3ax_1^2+2bx_1x_2+2ax_1^2x_2-5a+2b+4ax_2+3ax_1-2bx_2-2ax_1x_2$$ Let $b=3a$ to eliminate the $x_1x_2$ term. Then let $a=1$ since each term contains an $a$ and we want the square term to be negative. $$\dot{V}=(-3+2x_2)x_1^2+2x_1-6x_2+1$$ Now if you choose $\frac{3}{4}<x_2<\frac{5}{4}$, $\frac{3}{4}<x_1<\frac{5}{4}$, it would be negative.
When is a point considered inside/outside a polygon?
Within discrete and computational geometry, a polygon $P$ is usually defined to include its boundary $\partial P$. So then the vertices and points on edges are "in" $P$. Often the phrase "strictly interior" to $P$ is used to indicate a point in $P$ but not on $\partial P$; a boundary point is "on" (not "in") the boundary $\partial P$. Sometimes a polygon is defined to be just the vertices and edges, and other times to be the closed region of $\mathbb{R}^2$ bounded by $\partial P$. Generally authors try to make clear their assumptions to avoid ambiguity.
Transport functions, lemmas between isomorphic structures using Univalence
Welcome to Math.SE! Please try to use $\LaTeX$ in your questions to make the math more readable, and also to give as precise citations as possible for future reference. Could you improve your question in these regards? To the math: As stated in the book, the key is to view the structure or theorem you are interested in as a type-indexed family and to use transport to move the structure/theorem along the fibres. Here, you'd like to construct a double function $\text{double}^{\prime}: {\mathbb N}^{\prime}\to {\mathbb N}^{\prime}$ from the known double function $\text{double}: {\mathbb N}\to {\mathbb N}$. The point is that these can be viewed as elements in the fiber of ${\mathbb N}^{\prime}$ resp. ${\mathbb N}$ of the family $(\sum\limits_{T:{\mathcal U}} T\to T)\to{\mathcal U}$ corresponding to $P: {\mathcal U}\to {\mathsf {Type}},\ T\mapsto (T\to T)$, and a witness of equality $e: T=T^{\prime}$ gives rise to an equivalence of the fibres $P(T)=(T\to T)\simeq P(T^{\prime})=(T^{\prime}\to T^{\prime})$ by transport. If you apply this to the equality ${\mathbb N}={\mathbb N}^{\prime}$ which you have deduced from some equivalence ${\mathbb N}\simeq {\mathbb N}^{\prime}$ via univalence, you get $({\mathbb N}\to {\mathbb N})\simeq ({\mathbb N}^{\prime}\to {\mathbb N}^{\prime})$ into which you can plug $\text{double}: {\mathbb N}\to {\mathbb N}$ to get some $\text{double}^{\prime} : {\mathbb N}^{\prime}\to {\mathbb N}^{\prime}$. Let me know if you need more explanations.
Deductions from monotonic functions
If $g$ is decreasing, $-g$ is increasing. Moreover, $1-x$ is decreasing. Can you conclude something about $g(1-x)$ and in turn about $f$?
Probability of even number of events occuring
Writing $X=\sum_{k=1}^m\mathbf1_{A_k}$ we are dealing with a sum of iid Bernoulli random variables with parameter $p$. That gives $X$ a binomial distribution with parameters $m$ and $p$. To be found is: $$P(X\text{ is even})=\sum_{k\text{ even}}P(X=k)=\sum_{k\text{ even}}\binom{m}{k}p^kq^{m-k}\tag1$$where $q:=1-p$. Working out the answer in the book by means of: $$\frac12[1+(q-p)^m]=\frac12[(p+q)^m+(q-p)^m]$$ you will arrive at the RHS of $(1)$. P.S. I haven't checked your answer completely (yet), but in the case $m=4$ there are $\binom40+\binom42+\binom44=1+6+1=8\neq7$ possibilities.
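An added numerical check of the closed form:

    import numpy as np
    from scipy.stats import binom

    m, p = 7, 0.3
    q = 1 - p
    k = np.arange(0, m + 1, 2)                                # even values of k
    print(binom.pmf(k, m, p).sum(), 0.5 * (1 + (q - p)**m))   # both the same value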
How is analysis beautiful? -- confusion from an algebraist
Of course, it is a matter of taste. However I think it is helpful to understand the history of analysis. I myself really enjoyed the book Mathematics: The Loss of Certainty, by Morris Kline. In the good old days (pre-19th Century), people did calculus willy-nilly. But it was realized that rigor was required, because contradictory results came up (like several different values for $\sum_{n=1}^\infty \frac{(-1)^{n+1}}n$). So they made it rigorous with the use of $\epsilon$-$\delta$ proofs, and later a rigorous notion of integration. These kinds of proofs are now a rite of passage for anyone who wants to do analysis, before they do something that is applicable (like differential equations, probability theory, and harmonic analysis). Now some people fall in love with these $\epsilon$-$\delta$ proofs, and these people can go on to study abstract Banach space theory. But other people, like Norbert Wiener, used analysis to develop more interesting and applicable stuff like mathematical Brownian motion (that is, the Wiener process). Indeed I remember a quote by Norbert Wiener where he compared himself to Stefan Banach, but I am unable to locate this quote. So there is a side to analysis that does have more structure, and in this sense, it does have more of the flavor of algebra.
Barycentric Coordinates of Incenter
For triangle $ABC$, Let $a,b,c$ denote the lengths of sides $BC,CA,AB\;$, respectively.$\\[4pt]$ Let $h_a,h_b,h_c$ denote the lengths of the altitudes from vertices $A,B,C$, respectively.$\\[4pt]$ Let $r$ denote the inradius.$\\[4pt]$ Let $k$ denote the area.$\\[4pt]$ Since the distance from the incenter to each of the lines $BC,CA,AB\;$is $r$, it follows that The "$A$" coordinate of the incenter is $\dfrac{r}{h_a}$. The "$B$" coordinate of the incenter is $\dfrac{r}{h_b}$. The "$C$" coordinate of the incenter is $\dfrac{r}{h_c}$. From $$k = \frac{1}{2}\,a\,h_a = \frac{1}{2}\,b\,h_b = \frac{1}{2}\,c\,h_c$$ we get $$a = \frac{2k}{h_a},\;\;b = \frac{2k}{h_b},\;\;c = \frac{2k}{h_c}$$ hence $$ \frac{r}{h_a}:\frac{r}{h_b}:\frac{r}{h_c} = \frac{1}{h_a}:\frac{1}{h_b}:\frac{1}{h_c} = \frac{2k}{h_a}:\frac{2k}{h_b}:\frac{2k}{h_c} = a:b:c$$ It follows that the incenter has barycentric coordinates $(a,b,c)$, as was to be shown.
Using definition of derivative to find $\frac{d}{dx} (e^x)$ -- circular reasoning?
There are many, many different ways you can approach this. (1) Define $e^x = \sum_{k=0}^\infty \frac{x^k}{k!}$. Then taking a derivative and passing the limit through (this needs to be justified but it can be done by showing uniform convergence) we can show that $\frac{d}{dx}e^x = e^x$. Alternatively, we can use the limit definition of a derivative and this definition to show this. (2) Define $e^x = \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$. See Rene's answer for finding the bound $$\limsup_{h \to 0} \frac{e^h - 1}{h} \leq \frac{k+1}{k}$$ to see how we can compute the limit and hence the derivative. (3) Define $e^x$ to be the number $y(x)$ such that $\int_1^{y(x)} \frac{1}{t} dt = x$. Use the fundamental theorem of calculus and the chain rule when taking a derivative of the above with respect to $x$ to get $$\frac{1}{y(x)} y'(x) = 1$$ and conclude $y'(x) = y(x)$ or $(e^x)' = e^x$. Now all these definitions are equivalent. That is, starting with one definition, we can show that the others must hold. It is clear that (2) and (3) imply (1) because knowing the derivative of $e^x$ you can find its Taylor series. But this then means that (2) and (3) give the same function. So they are all equivalent. The truly circular way to compute the limit of the finite difference is to use L'Hospital's rule: $$\lim_{h \to 0} \frac{e^h - 1}{h} = \lim_{h \to 0} \frac{e^h}{1} = 1.$$ The problem is that you use what the derivative of $e^h$ is to show what the derivative of $e^x$ is.
Volume of a divisor?
If $L$ is a nef line bundle on a smooth projective complex variety $X$ of dimension $n$, then the volume equals the top self-intersection $(L^n)$, which in turn is the integral of the volume form $c_1(L)^n$, hence the name.
Is $L:\mathbb P_4\to \mathbb{R}$, with $L(p)=p''(0)$, a linear function?
Your answer is correct: you prove that the function is linear as the composition of two linear functions: taking the second derivative, then evaluating at $0$. Of course, there is a different path, by describing the operation with respect to the "canonical" basis $1,x,x^2,x^3,x^4$, where your linear function (a linear form, more precisely) is: $$(ax^4+bx^3+cx^2+dx+e) \ \to \ (12ax^2+6bx+2c) \ \to \ (2c)$$ which can be proven to be linear by exhibiting its $1 \times 5$ matrix: $$\left(\begin{array}{c}a\\b\\c\\d\\e\end{array}\right)\to \left(\begin{array}{ccccc}0&0&2&0&0\end{array}\right)\left(\begin{array}{c}a\\b\\c\\d\\e\end{array}\right)$$
Evaluating definite integral $\int_ 3^6 \frac1{\sqrt{27+6x-x^2}} dx$
Hint: Write $$\frac { 1 }{ \sqrt { 27+6x-{ x }^{ 2 } } } = \frac{1}{6\sqrt{1-\frac{\left(x-3\right)^2}{36}} }$$ And substitute $$x = 3 + u$$
Combinatorial proof of $x^{(n)} = \sum_{k = 1}^n L(n,k)(x)_k$
Noticing that $\binom{a}{b}=\frac{(a)_b}{b!}, $ you can write your expression as $$\binom{x+n-1}{n}=\sum _{k=1}^n\binom{n-1}{n-k}\binom{x}{k}$$ which is Vandermonde, so the standard combinatorial proof of Vandermonde suffices. Notice that the explicit formula comes from shuffling the elements of $[n]$ in a line, using stars and bars to see that $\binom{n-1}{n-k}$ is the number of ways to partition the $n$ into positive $k$ parts $a_1+a_2+\cdots +a_k=n$ where order matters, and so you partition your line using first $a_1$ numbers, then $a_2$ until you get the $n$ numbers, and taking out the order of the $k$ parts. For example: Given $a_1=4,a_2=2,a_3=1,a_4=3$ then $$\underbrace{1\,9\,10\,2}_{a_1}\,\underbrace{5\,6}_{a_2}\,\underbrace{4}_{a_3}\,\underbrace{7\,3\,8}_{a_4}\text{ gives the ordered partition.}$$ It suffices to show it just for $x$ an integer because these are polynomials in $x$, and if you have two polynomials $P_1(x)=P_2(x)$ for $x\in \mathbb{N}$ then $$P_1(x)-P_2(x)=0$$ implies that the polynomial on the LHS has infinitely many roots, which is only possible if the polynomial on the left is identically $0.$
Proving that quadratic form is convex in (vector, matrix) arguments
This seems to be a little trickier than I first thought! I'll be using the following lemma: A function $f$ is convex if and only if its epigraph $$\text{epi}(f) = \{(p, t) | \; p \; \text{in domain of} \;f \; \text{and} \; f(p) \leq t\}$$ is a convex set. Let $f(x, Q) = x^T Q^{-1}x$ with domain $\mathbb{R}^n \times S^n_{++}$ where $S^n_{++}$ is the set of $n$ by $n$ symmetric positive definite matrices over $\mathbb{R}.$ Now, its epigraph is $$\{((x, Q), t) | \; Q \succ 0, \; x^TQ^{-1}x \leq t\}\stackrel{\text{Schur Complement}}{=}\{((x, Q), t) | \; Q \succ 0, \; \begin{pmatrix} Q & x \\ x^T &t \end{pmatrix} \succeq 0\}.$$ The last condition is an LMI in $(x,Q,t)$, so the epigraph is a convex set.
Is $f: ( \mathbb{ Z}_{10},+) \rightarrow ( \mathbb{ Z}_{5} \times \mathbb{ Z}_{2},+), n\pmod {10} \mapsto (n \pmod 5,n \pmod 2) $ surjective?
So you want an $x \in \Bbb{Z}_{10}$ such that \begin{align*} x &\equiv a \pmod{5}\\ x &\equiv b \pmod{2} \end{align*} So $x=a+5k=b+2s$ for some $s,k \in \Bbb{Z}$. Thus $\boxed{5k-2s=b-a}$. But $\gcd(2,5)=1$ so we can express \begin{align*} 5(1)-2(2)&=1\\ 5(b-a)-2(2b-2a)&=b-a. \end{align*} So if we take $k=b-a$ (see boxed equation and the last equation above) we get $x=a+5(b-a)=\color{red}{5b-4a}$. You can verify that $$f(5b-4a)=([a]_{5},[b]_2)$$ In general use Chinese Remainder theorem.
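An added brute-force check of surjectivity and of the formula $x=5b-4a$:

    # Check that n -> (n mod 5, n mod 2) hits every pair, and that x = 5b - 4a works.
    pairs = {(n % 5, n % 2) for n in range(10)}
    print(len(pairs) == 10)                                   # True: f is surjective
    print(all(((5*b - 4*a) % 5, (5*b - 4*a) % 2) == (a, b)
              for a in range(5) for b in range(2)))           # True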
Show that if $A, B$ are groups, then $A \times B$ is solvable if and only if both $A$ and $B$ are solvable?
Suppose $A \times B$ is solvable. Since any subgroup of a solvable group is solvable we have that $A \times \{e_B\} \cong A$ and $\{e_A\} \times B \cong B$ are solvable. Conversely, suppose $A$ and $B$ are both solvable: There exist sub-normal series $$1=A_0 \trianglelefteq A_1 \trianglelefteq \dots \trianglelefteq A_t=A $$ $$1=B_0 \trianglelefteq B_1 \trianglelefteq \dots\trianglelefteq B_s=B $$ with abelian quotients. You can make both series have the same length by repeating one of subgroups. Consider then the subgroups $\{A_i \times B_i \}_{i=0}^{t=s}$.
Dimension of $R$ over $Z_p$
A vector space $V$ over $F=\mathbb{Z}/p\mathbb{Z}$ is, first of all an abelian group with respect to addition. But not any abelian group can be made into a vector space over $F$, for a very simple reason: for each vector $v$ you can write $v=1v$ (where $1\in F$) and do $$ \underbrace{v+v+\dots+v}_{\text{$p$ summands}}= \underbrace{1v+1v+\dots+1v}_{\text{$p$ summands}}= (\underbrace{1+1+\dots+1}_{\text{$p$ summands}})v=0v=0 $$ because $p\alpha=0$ for all $\alpha\in F$. Note that the same is true when $F$ is any field having characteristic $p\ne0$. Thus $V$, with respect to addition, must have the property that $pv=0$, for all $v\in V$. The abelian group $\mathbb{R}$ with respect to addition is torsion free, so it can't be made a vector space under any field of nonzero characteristic. The same technique shows that $\mathbb{Z}/p^2\mathbb{Z}$ cannot be made into a vector space over $\mathbb{Z}/p\mathbb{Z}$. The structure of vector spaces over fields of characteristic $p\ne0$ is very strict: they are direct sums of copies of $\mathbb{Z}/p\mathbb{Z}$. Conversely, such a direct sum is in a natural way a vector space over $\mathbb{Z}/p\mathbb{Z}$.
Uncountable Point exclusion topology: Arc connectedness
The concept of arc connected seems to be inconsistently defined in the literature: that link refers to two definitions. In one definition, "arc connected" requires the path to be a homeomorphism onto its image, in which case the topology $\tau$ is not arc connected: there are no subspaces whatsoever that are homeomorphic to $[0,1]$, because any $t \in [0,1]$ which does not map to $0 \in \mathbb{R}$ would have to form a singleton open set $\{t\} \subset [0,1]$, which is absurd. In the other definition, "arc connected" is a synonym for "path connected", in which case the topology $\tau$ is arc connected, for instance if $x_1,x_2 \ne 0$ then they are connected by the path $$\gamma(t) = \begin{cases} x_1 & \quad\text{if $t \in [0,1/2)$} \\ 0 &\quad\text{if $t=1/2$} \\ x_2 &\quad\text{if $t \in (1/2,1]$} \end{cases} $$ whereas $x_1=0$, $x_2 \ne 0$ are connected by the the path $$\gamma(t) = \begin{cases} 0 &\quad\text{if $t=0$} \\ x_2 &\quad\text{if $0 < t \le 1$} \end{cases} $$ and similarly for $x_1 \ne 0$, $x_2 = 0$.
Connecting a $n, n$ point grid
Interesting question. In what follows consider the $n\times n$ square grid. Notice that the trivial solution obtained by following a square spiral towards the center starting from an outside corner yields a solution with $2n-1$ lines. To see why, notice that 2 lines reduce the grid to a $(n-1)\times (n-1)$ grid, and since the $1\times 1$ grid requires only 1 line, induction yields $2n-1$ lines. Can we do better? Based on the posts in the forum and my own attempts, I believe the answer is that $2n-2$ lines is optimal. Showing this is possible is again easy. Start at a corner, and spiral towards the center until there is only a $3\times 3$ grid remaining. Recall from above that 2 lines in the spiral will reduce the grid by a dimension, so thus far we will have used $2\cdot (n-3)=2n-6$ lines. On the last line, end it so that we are in a position to go through the diagonal of the $3\times 3$ grid. Since the $3\times 3$ grid has a solution with $4$ lines starting diagonally from a corner, we have found a solution to the $n\times n$ grid using only $2n-2$ lines. Now, the question remains, is $2n-2$ optimal? The more I think about it, the more I believe it, but a proof does not leap into mind. I will think more. Edit: Of course $n=1,2$ are exceptions, and require $2n-1$ lines. The method I presented can be modified slightly to produce a closed path. All that must be changed is the way the final $3\times 3$ grid is traversed, and perhaps moving the starting position of the first line to a spot slightly outside of the original $n\times n$ grid. In other words, Toshi Kato's conjecture is true. Edit 2: For a proof that $2n-2$ is optimal, see Joriki's answer to this question Not lifting your pen on the $n\times n$ grid.
Application of the Hahn-Banach Theorem
Yes, assuming furthermore that $X$ and $Y$ are not equal to $\{0\}$, you can first show that $B(X,Y)$ has a closed subspace which is isometrically isomorphic to $Y$, by setting $T_{\Lambda,y}(x)=\Lambda(x)y$, for $\Lambda \in X^*$, the dual space of $X$ and $y \in Y$. $T_{\Lambda,y} \in B(X,Y)$ with norm $\|\Lambda\|_{X^*}\|y\|_Y$. Then, if $Y$ is not complete, $B(X,Y)$ is not Banach, which is the contrapositive of the statement you are trying to prove. If you want more details, I have them, but this is the general idea.
Visualizing vectors in matrices
It depends on what do you want to visualize. If you want to visualize some elements of the column space, then each column is a vector in $\mathbb{R}^2$, and you can plot $3$ vectors there. If you are interested in visualizing some elements of the row space, then each row is a vector in $\mathbb{R}^3$, and you have two vectors in $\mathbb{R}^3$.
Find the integral values for which $\left(\pi(x+y)\right)^2=4\pi(x)\pi(y)$
For every prime $p$ we have $$ \pi(p+k) = \pi(p + k + \ell), $$ where both $k$ and $\ell$ are positive, and $k + \ell < n_p$, and $n_p$ is defined as $$ \pi(p) + 1 = \pi(p+n_p) = \pi(p') $$ We look for solutions $$ \pi^2(p + k + \ell) = 4 \pi(p + k) \pi(\ell), $$ such that $$ \pi(p + k + \ell) = \pi(p + k) = 4 \pi(\ell) $$ which is easier to solve. Case $\ell = 2$. $$ \ell=2 \Rightarrow \pi(\ell) = 1 : \pi(p) = 4 \Rightarrow p = 7, n_7 = 4, \ell < 4. $$ So these solutions are $$ (x,y) \in \big\{ (2,7), (2,8), (7,2), (8,2) \big\}. $$ Case $\ell = 3$. $$ \ell=3 \Rightarrow \pi(\ell) = 2 : \pi(p) = 8 \Rightarrow p = 19, n_{19} = 4, \ell < 4. $$ So these solutions are $$ (x,y) \in \big\{ (3,19), (19,3) \big\}. $$ Higher solutions do not seem to appear, as $\pi(p)$ grows faster than $n_p$...
Converting a primal LP to a dual LP with a constant in the question
The set of $x$ that minimizes the objective function $f(x)$, minimizes also $f(x)+c$. So in your case instead of $\min z = -3x_1 + x_2 - 20$ you can solve the problem using $\min z = -3x_1 + x_2 $. $$ \min z = -3x_1 + x_2 \\ s.t. \quad -3x_1 + 3x_2 \le 6 \\ \quad\quad\qquad -8x_1 + 4x_2 \le 4 \\ \qquad\qquad x_1,x_2 \ge 0 $$
Fallacy in the argument to show that $F$ is uniformly continuous in some punctured neighbourhood of $a$.
The only flaw in your argument is at the very end: You need the inequality whenever $0<|x-y|<\delta$, and your argument does not give you that. But actually, you started going wrong a bit earlier, when you said to choose $x,y\in B(a,\delta_0)$. But that will constrain you to a neighbourhood of $a$, smaller and smaller as $\epsilon$ shrinks, and that spells disaster for your proof idea. Edited to add: You have proved, quite correctly, that given $\epsilon>0$, there is a $\delta>0$ so that $|F(x)-F(y)|<2\epsilon$ whenever $|x-a|<\delta$ and $|y-a|<\delta$ (you left out the last part). But that is not the definition of uniform continuity, so you haven't shown that. For uniform continuity, you need to show that given $\epsilon>0$, there is a $\delta>0$ so that $|F(x)-F(y)|<2\epsilon$ whenever $|x-y|<\delta$. But since $|x-y|<\delta$ does not imply either one of $|x-a|<\delta$ or $|y-a|<\delta$, much less both at the same time, your proof does not work. Edit the second, in response to the edit of the question on June 9: Note that you are supposed to show that $F$ is uniformly continuous in some punctured neighbourhood of $a$. That means: There exists such a punctured neighbourhood, say $U$, so that, for every $\epsilon>0$, there is some $\delta>0$ so that $x,y\in U$ and $|x-y|<\delta$ imply $|F(x)-F(y)|<\epsilon$. Note the order: The neighbourhood is chosen first. You have chosen a neighbourhood depending on $\delta$, and hence on $\epsilon$. But that is an error of logic. Choosing a smaller $\epsilon$ may, in principle, force you to pick a smaller $\delta$, thus changing your neighbourhood. For reference, here is a correct proof. As you can see, a somewhat bigger gun is needed. First, note that the differentiability at $a$ implies $\lim_{x\to a}F(x)=f'(a)$. This means that if we extend the definition of $F$ by setting $F(a)=f'(a)$, then $F$ is continuous at $a$. By the (omitted) assumption that $f$ is continuous, it now follows that $F$ is continuous everywhere. Next, recall that a continuous function on a closed and bounded interval is uniformly continuous. So $F$ is uniformly continuous on $[a-1,a+1]$, say. But then it is also uniformly continuous on any subset of that interval, in particular, on the punctured neighbourhood $(a-1,a+1)\setminus\{a\}$.
Given a map $f:X\rightarrow Y,$ and a path $h:I\rightarrow X$ from $x_0$ to $x_1$, show that $f_*\beta_h=\beta_{fh}f_*$.
If $\alpha,\beta:I\to X$ are two paths with $\alpha(1)=\beta(0)$ then the path composition is defined as: $$\alpha\beta(t):=\begin{cases} \alpha(2t) &\text{if } 0\le t\le 1/2 \\ \beta(2t-1) &\text{if } 1/2\le t\le 1 \end{cases}$$ So if $f:X\to Y$ is a function then $$f\circ(\alpha\beta)(t)=\begin{cases} f\circ\alpha(2t) &\text{if } 0\le t\le 1/2 \\ f\circ\beta(2t-1) &\text{if } 1/2\le t\le 1 \end{cases}$$ and $$(f\circ\alpha)(f\circ\beta)(t)=\begin{cases} f\circ\alpha(2t) &\text{if } 0\le t\le 1/2 \\ f\circ\beta(2t-1) &\text{if } 1/2\le t\le 1 \end{cases}$$ Meaning $f\circ(\alpha\beta)$ is literally equal to $(f\circ\alpha)(f\circ\beta)$. By induction this extends to any number of (compatible) paths, regardless of the order (bracketing) of composition.
Commutative ring and maximal ideal problem
You can use part (a) to prove the second part. Let $x\in A\setminus M.$ Consider $N=M+xA=\{m+xa\mid m\in M, a\in A\}$. Clearly $N$ is an ideal of $A$ and contains $M$ properly. Thus it has to be the whole ring $A.$ In particular $m+xa=1$ for some $m\in M,\, a\in A$; thus $xa=1-m.$ Now you can apply part (a); it follows that $a$ is a unit. Done!
Difference between PDE Optimization and Control Theory Applied to PDES?
Let's make things a little more concrete. Suppose you wish to choose a function $u$ to maximise $$ \int_0^T f(t,x(t),u(t)) \, \mathrm{d}t $$ subject to $$ \dot{x}(t) = g(t,x(t),u(t)) $$ and $u(t) \in U$ for $t\in[0,T]$. You also have that $x(0)=x_0$. Here, $x$ is the state variable while $u$ is the control. The dynamics of the state variable are given by an ODE for simplicity. Subject to some regularity conditions (see your favourite optimal control reference), you can find an optimal control which we will denote by $u^*$. The chosen $u^*$ induces a function $x^*$ determined by the dynamics of $x$ along with its initial condition. That is, $x^*$ solves the ODE above with the initial condition $x^*(0)=x_0$, and $u(t) = u^*(t)$. If we evaluate our objective using $x^*$ and $u^*$, we obtain a value function $$ V(x_0) = \int_0^T f(t,x^*(t),u^*(t))\,\mathrm{d}t. $$ You can then look for the value of $x_0$ that maximises the function $V$. (One way to make this easy is to find conditions that guarantee differentiability of $V$. Again, see your favourite reference.) Note however that this is a one-dimensional optimisation problem, and hence is not really thought of as an "optimal control" problem. Problems in optimal control, at least to the best of my knowledge, are always multidimensional, and in fact are often infinite dimensional.
Equation system modulo prime
You have a linear system $$ \begin{cases} 8k_1+k_2=9\\ 6k_1+k_2=32\\ 11k_1+k_2=45 \end{cases} $$ in the field with $p$ elements. Let's look at the rank of the matrix: $$ \begin{bmatrix} 8 & 1 & 9\\ 6 & 1 & 32\\ 11 & 1 & 45 \end{bmatrix} $$ whose determinant is $141=3\cdot47$. So, if $p$ is neither $3$ nor $47$, the rank of the complete matrix of the system is $3$ and the system has no solution, by the Rouché-Capelli theorem. If $p=3$ the matrix can be written $$ \begin{bmatrix} 2 & 1 & 0\\ 0 & 1 & 2\\ 2 & 1 & 0 \end{bmatrix} $$ and a simple elimination gives the reduced row echelon form $$ \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix} $$ so we have a unique solution $k_1\equiv_3 2$, $k_2\equiv_3 2$. If $p=47$, the elimination is not as easy. But $6\cdot 8=48$, so we can multiply the first row by $6$, getting (remember we're working modulo $47$): $$ \begin{bmatrix} 1 & 6 & 7 \\ 6 & 1 & 32 \\ 11 & 1 & 45 \end{bmatrix} $$ Subtract the first row multiplied by $6$ from the second row and the first row multiplied by $11$ from the third row, getting $$ \begin{bmatrix} 1 & 6 & 7 \\ 0 & 12 & 37 \\ 0 & 29 & 15 \end{bmatrix} $$ Since $12\cdot4\equiv_{47}1$, we can multiply the second row by $4$: $$ \begin{bmatrix} 1 & 6 & 7 \\ 0 & 1 & 7 \\ 0 & 29 & 15 \end{bmatrix} $$ and now we can subtract the second row multiplied by $29$ from the third row, getting $$ \begin{bmatrix} 1 & 6 & 7 \\ 0 & 1 & 7 \\ 0 & 0 & 0 \end{bmatrix} $$ Now subtract the second row multiplied by $6$ from the first row, getting $$ \begin{bmatrix} 1 & 0 & 12 \\ 0 & 1 & 7 \\ 0 & 0 & 0 \end{bmatrix} $$ which means you have $k_1\equiv_{47}12$ and $k_2\equiv_{47}7$.
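An added quick check of both modular solutions:

    def check(p, k1, k2):
        eqs = [(8, 9), (6, 32), (11, 45)]   # (coefficient of k1, RHS); k2 has coefficient 1
        return all((a * k1 + k2 - r) % p == 0 for a, r in eqs)

    print(check(3, 2, 2), check(47, 12, 7))  # True True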
Two methods for estimate $s_{\bar{X}}$?
The first equation is $not$ for standard error of $\bar X$, it is for the sample standard deviation $S$. The standard error of $\bar X$ is $S/\sqrt{n}.$ The second equation looks as if you are sampling $n$ individuals from a finite population of size $N$. Careful reading of the context of these equations will probably clarify things for you. Ross is pretty careful about specifying conditions. Here are computations from R, verifying what you say about the column of differences:

    d = c(2,4,10,12,16,15,4,27,9,-1,15)
    mean(d)                ## 10.27273
    sd(d)                  ## 7.9761     # S
    sd(d)/sqrt(length(d))  ## 2.404885   # SE

In this case the standard error seems to be for sampling with replacement or for sampling without replacement from a population viewed as (essentially) infinite. When $n$ is much smaller than $N$ the second factor under the square root sign (second formula) is very nearly unity.
In the LU decomposition with pivoting proof $L = PM^{-1}$ is unit lower triangular
Given that we are talking about LU factorisation, I assume that the permutation matrix $P_i$ exchanges line $j$ and $k$ with $j, k \ge i$ (typically $j=i$ and $k>i$). With this assumption, we can rewrite the matrix $M$ in the following form: $$ M = M_{n-1}P_{n-1} M_{n-2}P_{n-2} \dots M_2P_2M_1P_1 = L_{n-1}L_{n-2}\dots L_2L_1 \cdot P_{n-1}P_{n-2}\dots P_2P_1$$ with $$L_{n-1} = M_{n-1}, \quad L_{n-2} = P_{n-1}M_{n-2}P_{n-1}^{-1},\quad L_{n-3} = P_{n-1}P_{n-2}M_{n-3}P_{n-2}^{-1}P_{n-1}^{-1}, \quad \text{etc}$$ thus having $L_i$ equal to $M_i$, but with the sub-diagonal nonzero entries permuted by $P_{n-1}\dots P_{i+2}P_{i+1}$. This is a result of the assumption that $P_i$ does not change the order of the rows before row $i$. If this hypothesis is not respected, then $L$ is no longer unit triangular. It follows that the matrices $L_i$ are lower triangular and unit diagonal, and thus so is their product $L^{-1}$. $$ M = \underbrace{L_{n-1}L_{n-2}\dots L_2L_1}_ {L^{-1}} \cdot \underbrace{P_{n-1}P_{n-2}\dots P_2P_1}_{P}$$ The inverse of a unit diagonal triangular matrix is also a unit triangular matrix which implies that $L$ is unit triangular. Complementary note N°1: Here is an example (with n = 4) showing that $M$ is actually equal to $L_{n-1}L_{n-2}\dots L_2L_1 \cdot P_{n-1}P_{n-2}\dots P_2P_1$, as it might be useful to convince yourself: Using the definition for $L_i$: $$L_3 L_2 L_1 \cdot P_3 P_2 P_1 = M_3 (P_3 M_2 P_3^{-1})(P_3 P_2 M_1 P_2^{-1} P_3^{-1}) \cdot P_3 P_2 P_1$$ cancelling the permutation matrices gives the expected matrix $M$: $$L_3 L_2 L_1 \cdot P_3 P_2 P_1 = M_3 P_3 M_2 P_2 M_1 P_1 = M$$ Complementary note N°2: Example showing why $L_i$ and $M_i$ have the same structure. Let's have a look at $L_2$ in the same example: $$ L_2 = P_3 M_2 P_3^{-1}$$ with $M_2$ being of the form \begin{pmatrix} 1 & & & \newline & 1 & & \newline & x & 1& \newline & x && 1 \end{pmatrix} ($x$ means some nonzero entry). $P_3$ will shuffle rows 3 and 4, thus $P_3 M_2$ will be of the form: \begin{pmatrix} 1 & & & \newline & 1 & & \newline & x & 1 \text{ or } 0 & 1 \text{ or } 0\newline & x & 1 \text{ or } 0 & 1 \text{ or } 0 \end{pmatrix} $P_3^{-1}$ will then shuffle columns 3 and 4 in an opposite way, resetting the 2x2 lower right block back to an identity sub-matrix. Edit: if you want more details, I would suggest reading lectures 20 and 21 of the book "Numerical linear algebra" by Lloyd N. Trefethen & David Bau III, the whole algorithm for LU is explained and it's surprisingly pleasant to read
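An added numerical illustration: for scipy's pivoted LU factorisation the factor $L$ comes out unit lower triangular, as argued above.

    import numpy as np
    from scipy.linalg import lu

    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 5))
    P, L, U = lu(A)                                # A = P @ L @ U with partial pivoting

    print(np.allclose(np.diag(L), 1.0))            # True: L has unit diagonal
    print(np.allclose(L, np.tril(L)))              # True: L is lower triangular
    print(np.allclose(P @ L @ U, A))               # True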
Sequences and indexed families.
A quick answer: keep in mind that the sentence let $\{a_n\}_n$ be a sequence is as correct as saying let $f(x)$, $x \in X$, be a function. A sequence is a function, no matter how you define functions. Any function can be thought of as its graph, and this is exactly what you are doing with your sections. Notation is often lazy, and it is meaningful as long as we understand its meaning. So a sequence is definitely not the same as the set of its values, although we tend to use a misleading notation.
Invariant hyperplanes that does not pass through the origin
Assume a linear transformation $\alpha:V\to V$, $d\neq 0$ and a subspace $E$ such that $E+d$ is invariant. We claim that $E$ must be invariant. Indeed, if $\alpha(E+d)=E+d,$ then $d-\alpha(d)\in E\cap \alpha(E)$. Therefore, if $v\in E\setminus \alpha(E),$ then $v+d= v'+\alpha(d)$ for some appropriate $v'\in \alpha(E),$ but then $v=v'+\alpha(d)-d\in \alpha(E)$ by the above, which is a contradiction. The other inclusion is similar, or you could simply say that $\dim(E)\geq \dim(\alpha(E))$, but we just showed $E\subseteq \alpha(E)$. Now, we know that $\alpha(d)=d+w$ for some $w\in E$, so a necessary condition is the existence of an invariant subspace $E$ and a linearly independent one-dimensional subspace $E'$ such that $\alpha(E')\subseteq E\oplus E'$. In case $V$ is finite dimensional over $\mathbb{C},$ apply the Jordan Normal Form to $\alpha$ restricted to $E\oplus E',$ yielding a basis of generalised eigenvectors. Note that one such generalised eigenvector $d'$ must lie outside $E$ and hence $d'=\lambda d+w'$ for some $\lambda\in \mathbb{C}$ and $w'\in E$. Then, $\alpha(d')=\lambda d+\lambda w+\alpha(w'),$ and $\lambda w+\alpha(w')\in E,$ implying that, in fact, $d'$ must be a generalised eigenvector with corresponding eigenvalue $1$. As you seem to know that this is sufficient (which should also follow from the above), $\alpha$ has these properties if and only if one of its eigenvalues is $1$.
A problem concerning action via automorphisms
You have covered the case that $G$ is solvable. If $A$ is solvable, let $P\lhd A$ be a nontrivial $p$-subgroup of $A$, and consider $C_G(P)$, the fixed points of $P$ acting on $G$. Since for all $x\in P$ and $a\in A$, we have $x^a\in P$, it's not hard to see $C_G(P)$ is $A$-invariant. Applying induction on $|G|$ then reduces to the case of $C_G(P)=\lbrace1\rbrace$. Since $|G|\equiv |C_G(P)|\equiv 1\pmod{p}$ in this case, we have that $|G|$ and $|P|$ are coprime, and thus for every prime $q$ dividing $|G|$, there is a unique Sylow $q$-subgroup $Q$ of $G$ which is $P$-invariant (this is theorem 3.23 in FGT). Again, because for all $x\in P$ and $a\in A$, we have $x^a\in P$, it follows that $Q$ is actually $A$-invariant.
Need Help on Solving Mixed Linear/Nonlinear System of Equations
Hint:   assuming $\,c_j \ne 0\,$, introduce the new variables $\,u_j=r_j/c_1, v_j = s_j / c_2, w_j=t_j / c_3\,$, then dividing equations $\,(10)$ - $(12)\,$ and $\,(13)$ - $(15)\,$ by the respective RHSs: $$ \begin{align} u_{1}^2 + u_{2}^2 + u_{3}^2 = 1 \tag{10.a} \\ v_{1}^2 + v_{2}^2 + v_{3}^2 = 1 \tag{11.a} \\ w_{1}^2 + w_{2}^2 + w_{3}^2 = 1 \tag{12.a} \\ \end{align} $$ $$ \begin{align} u_1v_1 + u_2v_2 + u_3v_3 = 1 \tag{13.a} \\ u_1w_1 + u_2w_2 + u_3w_3 = 1 \tag{14.a} \\ v_1w_1 + v_2w_2 + v_3w_3 = 1 \tag{15.a} \\ \end{align} $$ Combining $\,(10.a)+(11.a)-2 \cdot (13.a)\,$ gives: $$ (u_1 - v_1)^2 + (u_2 - v_2)^2 + (u_3 - v_3)^2=0 $$ Therefore $\,u_j=v_j\,$, and by symmetry $\,u_j=v_j=w_j=\lambda_j\,$. Since $\,c_3=c_1+c_2\,$, it follows that the last three equations $\,(16)$-$(18)\,$ are satisfied automatically: $$ t_j = \lambda_jc_3 = \lambda_j c_1+\lambda_jc_2=r_j+s_j $$ Therefore the non-linear part $\,(10)$-$(18)\,$ of the system has the general solution $\,r_j=\lambda_jc_1\,$, $\,s_j=\lambda_jc_2\,$, $\,t_j=\lambda_jc_3\,$, where $\,\lambda_1, \lambda_2, \lambda_3\,$ are some constants such that $\,\lambda_1^2+\lambda_2^2+\lambda_3^2=1\,$. Substituting these in equations $\,(1)$-$(9)\,$ leaves a linear system of $9$ equations in $9$ unknowns which can be solved by the usual methods.
Discovering unknown function input by evaluation
If you don't know anything about $U$, the product $Uf$ carries very little information about $f$. Namely, all we can conclude is that $f\ne 0$ whenever $Uf\ne 0$. This little information might still be helpful in ruling out possible candidates for $Q$. If you suspect that $Q=Q_1$ and can find $\tau,\omega$ such that $f(\tau, \omega,Q_1)=0$, then the evaluation of $\tilde U(\tau, \omega )$ can refute the hypothesis. To get more, you should carefully consider the nature of $U$ and extract as much analytic information about $U$ as possible.
How do you plot vectors between two start and end points along a sphere?
Given two unit vectors $p_0$ and $p_1$, $|p_0|=|p_1|=1$, such that if $c:=p_0\cdot p_1$ then $|c|\ne 1$, compute $q:=p_1-c\:p_0$ and a third point $p_2:=q/|q|$. We now have $p_0\cdot p_2=0$ and $|p_2|=1$. The circle passing through the three points is given by $p:=\cos(\theta)p_0+\sin(\theta)p_2$, where the angle $\theta=0$ for $p=p_0$, $\;\theta=\pi/2\;$ for $p=p_2,\;$ and $\cos(\theta)=c$ for $p=p_1$. Another approach is to use linear interpolation and then normalize to a unit vector, similar to the comment about $\texttt{Slerp()}$: let $0\le t\le 1$, $q:=(1-t)p_0+tp_1$, and $p:=q/|q|$ is the new point.
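If code helps, here is a minimal Python/NumPy sketch of the first construction (the function name is my own): it builds the orthonormal pair $(p_0,p_2)$ and sweeps $\theta$ from $0$ to $\arccos(p_0\cdot p_1)$, so every generated point stays on the unit sphere.

```python
# Great-circle arc from p0 to p1 via the orthonormal pair (p0, p2).
import numpy as np

def arc_points(p0, p1, num=20):
    p0 = np.asarray(p0, float); p0 /= np.linalg.norm(p0)
    p1 = np.asarray(p1, float); p1 /= np.linalg.norm(p1)
    c = np.clip(np.dot(p0, p1), -1.0, 1.0)   # cos of the angle between p0 and p1
    q = p1 - c * p0                          # component of p1 orthogonal to p0
    p2 = q / np.linalg.norm(q)               # requires |c| != 1 (p0, p1 not (anti)parallel)
    thetas = np.linspace(0.0, np.arccos(c), num)
    return np.array([np.cos(t) * p0 + np.sin(t) * p2 for t in thetas])

pts = arc_points([1, 0, 0], [0, 1, 0], num=5)
assert np.allclose(np.linalg.norm(pts, axis=1), 1.0)   # every point lies on the sphere
```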
Why functions on the complex plane are "analytically biased"?
Complex analytic functions are very rigid. One striking property they have is that if $U$ is a connected, open domain, $\{ x_n \}$ is a sequence of distinct points of $U$ converging to a point $x \in U$, $f, g$ are analytic functions on $U$, and $f(x_n) = g(x_n)$ for all $n$, then we can conclude $f = g$. I'm pretty sure the same is true for multi-variable analytic functions. One application of this is that if the domain $U$ contains a real interval, then any analytic function $f$ on $U$ is completely determined by its values on that real interval. That is, if you have an analytic expression for the values of $f$ on the real interval and $U$ is contained in the domain of that expression, then the same analytic expression gives $f$ everywhere on $U$. Applying this to $f(x) = x^z$ (and taking care to deal with branch cuts), the identity $f'(x) = z x^{z-1}$, which holds for positive real $x$ and real $z$, extends to all $x$ and $z$ where it makes sense.
Prove that, given positive integers m and n, if m | n then 2^m − 1 | 2^n − 1. In particular, deduce that if 2^n − 1 is prime then n is prime.
Induction is the easy way to go here. Show that if $(2^m-1)| (2^{km}-1)$ for some $k$, then $(2^m-1)| (2^{(k+1)m}-1)$. $2^{(k+1)m}-1 = 2^{km}\cdot 2^m - 1 = (2^{km} - 1)2^m + 2^m - 1$ Can you complete the proof? For the last part, consider the contrapositive of the statement.
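If you want a quick numerical sanity check of the divisibility (not part of the proof, just reassurance), a short loop does it:

```python
# 2^m - 1 divides 2^(k*m) - 1 for a range of m and k
for m in range(1, 10):
    for k in range(1, 8):
        assert (2**(k*m) - 1) % (2**m - 1) == 0
```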
Distributional solutions and test functions with non-compact support
For any smooth function $\phi$ vanishing at infinity there is a sequence of test functions $\phi_n \in C^\infty_0$ such that $\phi_n$ converges to $\phi$ in, say, the max norm. The only question is whether $\int \phi f \, dx$ exists or not. Since $f$ is only locally integrable, it need not decay at infinity, and hence it can happen that $f\phi$ fails to be integrable. So the answer is no in general.
Prove that if $ \lim_{x\to a}f(x)=l$, then $ \lim_{x\to a}|f|(x)=|l|$
Note that by the reverse triangle inequality, we have $||a|-|b||\leq |a-b|$. So, let $\epsilon>0$, as $\lim_{x\to a}f(x)=l$, there exists $\delta>0$ such that $|x-a|<\delta $ implies $|f(x)-l|<\epsilon$. It follows that $$|x-a|<\delta\qquad \implies \qquad ||f(x)|-|l|| \leq |f(x)-l|<\epsilon$$ implying $$\lim_{x\to a}|f|(x)=\lim_{x\to a}|f(x)| =|l|.$$ For an example of function such that $\lim_{x\to a}|f|(x)$ exists but not $\lim_{x\to a}f(x)$, you can take $$f(x) = \begin{cases} 1 & \text{if } x\geq a\\ -1 &\text{if }x<a\end{cases}$$
Understanding the group operation $U_n$
Mod $40$, we compute: $7^2 = 49 \equiv 9$, $7^3 = 343 \equiv 23$, and $7^4 = 2401 \equiv 1 \pmod{40}$. Therefore the order of $7$ is $4$.
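The same computation as a one-line check in Python, using modular exponentiation:

```python
print([pow(7, k, 40) for k in range(1, 5)])   # [7, 9, 23, 1] -> 7 has order 4 in U_40
```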
Are the fundamental groups of $X$ and $X/A$ isomorphic when $A$ is contractible?
The result isn't true in general. Take $X=S^1$, $A=S^1 -\{N\}$, $N$ being the north pole. $A$ is contractible and $X/A$ is the Sierpiński space, which is contractible (and thus has a trivial fundamental group), but $X$ has a non-trivial fundamental group.
An alternative succinct proof needed for trivial cardinality fact
Let $f_X\colon X\to\alpha_X$ be some bijection from $X$ to $|X|$, and similarly define $f_Y$. Since $X\preccurlyeq Y$ we have some injection $g\colon X\to Y$. Therefore $f_Y\circ g\circ f_X^{-1}\colon\alpha_X\to\alpha_Y$ is an injection. However, the definition of a cardinal is an ordinal $\alpha$ such that there is no injection from $\alpha$ into any $\beta\in\alpha$ (note that if there is such an injection, then there is a bijection, by the Cantor–Bernstein theorem). Since $\alpha_X$ and $\alpha_Y$ are cardinals, this means that $\alpha_X\leq\alpha_Y$.
Not understanding the third solution to an Exponential Equation
The $0.5$ came from the fact that $1$ raised to any power remains $1$: it gives $(2\cdot 0.5)^a=(2 \cdot 0.5)^b$, i.e. $1^a=1^b$, which holds without demanding that $a=b$.
Gradient of distance-squared on Riemannian manifold
A straightforward computation one could do is the following. Let $v\in (TM)_x$ and let $x(t)$ be a curve such that $x(0)=x$ and $x'(0)=v$. Then $df(x)(v)= \frac{d}{dt} d(p,x(t))^2|_{t=0}$. Take a variation of the geodesic $\gamma_0$ joining $p$ to $x$, given by $H(s,t)=\gamma_t(s)$, where $\gamma_t$ is the geodesic joining $p$ to $x(t)$. Notice that $$\frac{d}{dt} d(p,x(t))^2|_{t=0}= 2 \frac{d}{dt} E(\gamma_t)|_{t=0}.$$ Use the first variation formula for the energy (see e.g. Gallot–Hulin–Lafontaine) to find that $$\frac{d}{dt} E(\gamma_t)|_{t=0}=g(v,\gamma_0'(d(p,x))),$$ where $g$ is the metric. But $\gamma_0'(d(p,x))$ is the quantity we wanted to get. The claim now follows from the definition of the gradient: $g_x(\nabla f,v)=df(x)(v)$ for all $v\in (TM)_x$.
Different values of $A^n$ using Cayley-Hamilton Theorem And Direct Multiplication
$(A-I)^{2}=0$ does not imply that $A=I$. [For example, $M=\left[\begin{array}{ll}0 & 1 \\ 0 & 0 \end{array}\right]$ is a matrix whose square is $0$, but $M$ itself is not $0$.] The Cayley–Hamilton theorem gives $A^{2}=2A-I$. A simple induction argument gives $A^{n}=nA-(n-1)I$ for all $n$.
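For completeness, the induction step is a single line of algebra, using $A^{2}=2A-I$ once more: $$A^{n+1}=A\,A^{n}=A\bigl(nA-(n-1)I\bigr)=nA^{2}-(n-1)A=n(2A-I)-(n-1)A=(n+1)A-nI.$$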
Minimum number of conjugacy classes of a finite non-abelian group
If $H=G/Z(G)$, and if you show that $k(H)>2$, then you're done. The only way we might have $k(H)=2$ is that $H\setminus\{1\}$ is a conjugacy class. As the number of elements of a conjugacy class divides the order of the group, $|H|-1$ would divide $|H|$, forcing $|H|=2$, i.e. $H=C_2=\{1,-1\}$ (the cyclic group with $2$ elements). We thus have a group extension $1\to Z(G) \to G \to C_2\to1$. If $a\in G$ is mapped to $-1\in C_2$, then $G$ is generated by $Z(G)$ and $a$, and $a$ commutes with $Z(G)$, so $G$ is commutative, giving a contradiction.
Is the density of $\mathbb{Q}$ in $\mathbb{R}$ equivalent to the Least Upper Bound Property of $\mathbb{R}$?
(comment turned answer) $\mathbb{Q}$ is an ordered archimedean field which has $\mathbb{Q}$ dense in it, but it doesn't have the LUB property, so density alone is not enough.
Using Fourier Transforms to solve $3u_x + 5u_t = 0$
Note that the Fourier transform is given by $$ \tilde u(k,t) = \mathcal F_x[u(x,t)] = \int_{\Bbb R} u(x,t)e^{-ikx}\,dx $$ Thus, applying Leibniz's rule, we have $$ \mathcal F_x[u_t(x,t)] = \int_{\Bbb R} \frac{\partial }{\partial t}u(x,t)e^{-ikx}\,dx = \frac{\partial }{\partial t}\int_{\Bbb R} u(x,t)e^{-ikx}\,dx = \tilde u_t(k,t) $$ Perhaps you could take it from there.
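In case it is useful, here is a sketch of how the computation could continue (my own continuation, assuming the equation $3u_x+5u_t=0$ from the title and the convention above; not necessarily the intended route). Since $\mathcal F_x[u_x]=ik\,\tilde u$, transforming the PDE gives $$3ik\,\tilde u(k,t)+5\,\tilde u_t(k,t)=0 \quad\Longrightarrow\quad \tilde u(k,t)=\tilde u(k,0)\,e^{-3ikt/5},$$ and since multiplication by $e^{-ika}$ corresponds to translation by $a$, inverting yields $u(x,t)=u\!\left(x-\tfrac{3t}{5},\,0\right)$: the initial profile transported to the right with speed $3/5$.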
Finding infimum of a class of functions in $C[0,1]$
Let $x_0\in (0,1]$ be a maximum point for $|u|$, i.e. $|u(x_0)| = 1$. Then, by Hölder's inequality, $$ (*) \qquad 1 = |u(x_0)| \leq \int_0^{x_0} |u'| \leq \sqrt{x_0} \left(\int_0^{x_0}|u'|^2\right)^{1/2}. $$ Hence [simplified version suggested by zhw] $$ F(u) := \int_0^1 |u'|^2 \geq \int_0^{x_0}|u'|^2 \geq \frac{1}{x_0}\,. $$ Now it is easily seen that the minimum of $F$ is achieved for $x_0=1$ by the function $\overline{u}(x) = x$. [Long version, maybe it clarifies the conclusion above.] It is easily seen that the function $$ v(x) := \begin{cases} u(x), & \text{if}\ x \in [0,x_0],\\ \text{sign}\, u(x_0), & \text{if}\ x \in [x_0, 1], \end{cases} $$ satisfies $F(v) := \int_0^1 |v'|^2 \leq \int_0^1 |u'|^2$ (with strict inequality if $x_0<1$ and $u$ is not constant in $[x_0, 1]$), hence it is not restrictive to consider, in our minimization problem, only those functions which are constant after $x_0 \in \text{argmax}\, |u|$. On the other hand, among all competing functions of this kind with the maximum of $|u|$ attained at $x_0\in (0,1]$, we have that $$ \overline{u}(x) := \begin{cases} x/x_0, & \text{if}\ x\in [0, x_0],\\ 1, & \text{if}\ x\in [x_0, 1], \end{cases} $$ attains the minimum of $F$, since $$ (**) \qquad F(\overline{u}) = \int_0^{x_0} \frac{1}{x_0^2}\, dx = \frac{1}{x_0} $$ whereas, by (*), $$ F(u) \geq \frac{1}{x_0} $$ with strict inequality if $u'$ is not constant in $[0, x_0]$. Finally, by (**), the functional is minimized when $x_0 = 1$.
vector calculus question: finding equation of surface tangential to plane
That's fine, although in your expression for $\nabla z$ there should be no term $\frac{\partial z}{\partial z}$ (what would that even mean?). A very useful property of the gradient vector is that if you move in the horizontal plane by small distances $dx,dy$, then the vertical movement is the inner product of the gradient vector with the vector $(dx,dy)$, i.e. $$dz=\frac{\partial z}{\partial x}dx+\frac{\partial z}{\partial y} dy$$ and your mouse wants to make $dz=0$. As for the plane, let it be given by $$z=ax+by+c$$ and you want to choose $a,b,c$ so that, at the place concerned, this plane has the same gradient and the same elevation as the given surface.
how does this translate to a circle with radius 5: $\sqrt{24-2x-x^2}$
$$y=\sqrt{24-2x-x^2}\\y^2=24-2x-x^2\\y^2=25-(x+1)^2\\(x+1)^2+y^2=5^2$$ From the second to the third line, I completed the square. Notice now that we have the equation of a circle centered at $(-1,0)$ of radius $5$. However, your original equation only represents the top half of this circle, because $y$ is always non-negative.
Solve $a_n - 4a_{n-1} + 4a_{n-2} = 2^n$
No, $n^22^{n^2}C$ does not work. Try with $n^22^{n}C$. Then $$2^n=a_n - 4a_{n-1} + 4a_{n-2} = n^22^{n}C-4(n-1)^22^{n-1}C+4(n-2)^22^{n-2}C=2^{n}\cdot 2C$$ which implies that $C=1/2$. In general, if the r.h.s. is $r^n$ and $r$ is a solution of the characteristic polynomial of multiplicity $m$, then a particular solution has the form $n^m C r^n$. Note that $n^i r^n$ is a solution of the homogeneous recurrence for $0\leq i<m$.
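A quick numerical check of the resulting particular solution $a_n=\tfrac12 n^2 2^n = n^2 2^{n-1}$ (just reassurance, not part of the argument):

```python
# verify that a_n = n^2 * 2^(n-1) satisfies a_n - 4 a_{n-1} + 4 a_{n-2} = 2^n
a = lambda n: n * n * 2 ** (n - 1)
assert all(a(n) - 4 * a(n - 1) + 4 * a(n - 2) == 2 ** n for n in range(2, 25))
```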
How to solve $\int \frac{dx}{\sqrt{9x^2+18x+2}}$?
You can use the fact that$$9x^2+18x+2=(3x+3)^2-7.$$In other words, do the substitution $3x+3=y$ and $3\,\mathrm dx=\mathrm dy$. Can you take it from here?
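In case it helps, one way to finish (a sketch of my own, using the substitution above): with $y=3x+3$ and $dy=3\,dx$, $$\int \frac{dx}{\sqrt{9x^2+18x+2}}=\frac13\int\frac{dy}{\sqrt{y^2-7}}=\frac13\ln\left|y+\sqrt{y^2-7}\right|+C=\frac13\ln\left|3x+3+\sqrt{9x^2+18x+2}\right|+C.$$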
An Euclidean geometry question with circles.
Inversion solution: Make an inversion at $P$ with arbitrary radius $r$. Then the circle $\theta$ maps to a line $\ell$ which is parallel to its tangent, and the images $A'$ and $B'$ of $A$ and $B$ lie on $\ell$. Similarly, $\varepsilon$ maps to a line $a$ which is parallel to line $PB$ and $\delta$, so $A'$ and the image $C'$ of $C$ lie on $a$. Since $C'$ lies on $CD$ and $B'$ lies on $PB$, we see that $A'B'PC'$ is a parallelogram, so $C'P = A'B'$. In the same way we deduce that $D'P=A'B'$, so $P$ halves $C'D'$ and thus it also halves $CD$.

Synthetic solution: Because of the angle between a tangent and a chord we have $$\angle ACP = \angle APB = \angle PDB = :\alpha$$ Similarly we have $$\angle APC = \angle ABP =:\beta \;\;\;\;{\rm and}\;\;\;\;\angle BAP = \angle BPD =: \gamma$$ Since $\alpha +\beta+\gamma = 180^{\circ}$ (see the angles at $P$) we have $$\triangle ACP \sim \triangle APB \sim \triangle PDB$$ so we have $${CP\over BP} = {AP\over AB} \;\;\;\;{\rm and}\;\;\;\;{DP\over AP} = {BP\over AB}$$ and thus $$ CP = {AP\cdot BP\over AB} = DP$$
Check linear independence
Suppose $\{A,B, rA+sB+C\}$ is dependent. Then we can find constants $c_1, c_2, c_3$ (at least one of which is nonzero) such that $$c_1A+c_2B+c_3(rA+sB+C)=0$$ Rearranging this gives us $$(c_1+c_3r)A+(c_2+c_3s)B+c_3C=0$$ If all three of the above coefficients are zero, then $c_1=c_2=c_3=0$, a contradiction. Therefore, we have found a nontrivial linear combination of $A,B,$ and $C$ equal to zero, and so $\{A,B,C\}$ is dependent.
How to find if 3 non-collinear points exist in set of N 3D points?
Pick two distinct points $a_1,a_2$ (for numerical stability, prefer them far apart). Find two vectors $v,w$ orthogonal to $a_2-a_1$ (e.g., let $v$ be the longest of the $e_i\times(a_2-a_1)$, where the $e_i$ are the standard basis, and then let $w=v\times (a_2-a_1)$) and compute $c_1=v\cdot a_1$, $c_2=w\cdot a_1$. Then loop over all other points and check whether $v\cdot a_i=c_1$ and $w\cdot a_i=c_2$ (or $\approx$, with suitably chosen error allowances); any point failing either equality does not lie on the line through $a_1$ and $a_2$.
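Here is a possible rendering of this procedure in Python/NumPy (the function name and tolerance handling are my own choices):

```python
import numpy as np

def find_non_collinear(points, tol=1e-9):
    """Return indices (i, j, k) of three non-collinear points, or None if no such triple exists."""
    pts = np.asarray(points, dtype=float)
    # two distinct points a1, a2 (take a2 as far from a1 as possible)
    i = 0
    j = int(np.argmax(np.linalg.norm(pts - pts[i], axis=1)))
    d = pts[j] - pts[i]
    if np.linalg.norm(d) < tol:
        return None                          # all points coincide
    # two vectors orthogonal to d: the longest e_i x d, and w = v x d
    crosses = np.cross(np.eye(3), d)
    v = crosses[int(np.argmax(np.linalg.norm(crosses, axis=1)))]
    w = np.cross(v, d)
    c1, c2 = v @ pts[i], w @ pts[i]
    for k in range(len(pts)):
        if abs(v @ pts[k] - c1) > tol or abs(w @ pts[k] - c2) > tol:
            return i, j, k                   # pts[k] is off the line through a1, a2
    return None                              # every point lies on that line

print(find_non_collinear([[0, 0, 0], [1, 1, 1], [2, 2, 2], [0, 1, 0]]))   # e.g. (0, 2, 3)
```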
Solving differential equation $2xy + 2x +(x^2-1)y'=0$ and singular solutions
$$(x^2-1)y'+2x(y+1)=0.$$ Let $y+1=z$ and $x^2\ne 1$; then $$z'+\frac{2x}{x^2-1}z=0 \implies \frac{dz}{z}=\frac{2x\, dx}{1-x^2} \implies \int\frac{dz}{z}=\int\frac{2x\, dx}{1-x^2} \implies \ln [z(1-x^2)]=C_1 $$ $$\implies z=\frac{C_2}{1-x^2} \implies y=\frac{C_2-1+x^2}{1-x^2}, \quad x^2 \ne 1.$$ Clearly $y=-1$ for $C_2=0$, so the line $y=-1$ is not a singular solution; it is part of the general solution.
Proof verification : Show that $\sup \left\{3-\frac{2}{n^2}:n\in\mathbb{N}\right\} =3$
As Jack already mentioned, there are some parts you need to fix. In general the best way is to guess what the supremum or infimum of the set is and then prove it, e.g. by contradiction. First note that $3$ is an upper bound, since $3-\frac{2}{n^2}<3$ for every $n$. Now assume $3$ is not the supremum. This means there exists $a>0$ such that $s:=3-a$ is the supremum. But since $2/n^2 \to 0$ as $n\to \infty$, you can find an index $N_a$ such that $2/n^2 <a$ for all $n\geq N_a$, and hence $s<x_n$, where $x_n:=3-2/n^2$ and $n\geq N_a$. This contradicts $s$ being an upper bound, and hence $3$ is the supremum of the set.
Power series of $\sqrt {z- \mathrm i}$ around $z=1$.
$$\sqrt{z-i} = \sqrt{z\left(1 - \frac{i}{z}\right)} = \sqrt{z}\sqrt{1 + \frac{1}{iz}}$$ Then use the series expansion of $\sqrt{1 + X}$ where $X = \frac{1}{iz}$.