How can I find the probability that a sample mean exceeds the population mean if I'm not given the means?
The sample mean $\bar{X}$ has a normal distribution, with mean the population mean $\mu$ and standard deviation $\tau=\frac{8}{\sqrt{4}}$. We want $\Pr(\bar{X}-\mu\gt 2)$, which is $\Pr(Z\gt \frac{2}{\tau})$, where $Z$ is standard normal. This looks like precisely the approach you suggested.
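A quick numerical sanity check (assuming $\sigma=8$ and $n=4$, which is what $\tau=8/\sqrt{4}$ presupposes):

```python
import math

sigma, n = 8, 4             # population sd and sample size implied by tau = 8/sqrt(4)
tau = sigma / math.sqrt(n)  # sd of the sample mean
z = 2 / tau                 # standardized threshold for P(Xbar - mu > 2)

# P(Z > z) for standard normal Z, via the complementary error function
p = 0.5 * math.erfc(z / math.sqrt(2))
print(round(p, 4))  # 0.3085
```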
Find normal vector without using cross product
If $n=(n_1,n_2,n_3)$, then $$\begin{cases} \overrightarrow{AB}\cdot n=0 \\ \overrightarrow{AC}\cdot n=0\end{cases}\Leftrightarrow \begin{cases} n_2-n_3=0 \\ 2n_1-n_3=0\end{cases}$$ $$\Leftrightarrow \begin{cases}n_1=\lambda \\ n_2=2\lambda\\n_3=2\lambda\end{cases}\quad (\lambda\in\mathbb{R})\Leftrightarrow n=\lambda(1,2,2)\quad (\lambda\in\mathbb{R}).$$ Choosing for example $\lambda=1$, we get $n=(1,2,2).$
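A quick check of the answer $n=(1,2,2)$, using the direction vectors $\overrightarrow{AB}=(0,1,-1)$ and $\overrightarrow{AC}=(2,0,-1)$ read off from the linear system above:

```python
# Direction vectors inferred from the system: AB.n = n2 - n3, AC.n = 2*n1 - n3
AB, AC = (0, 1, -1), (2, 0, -1)
n = (1, 2, 2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot(AB, n), dot(AC, n))  # 0 0, so n is orthogonal to both directions
```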
Understanding the injectivity of an isomorphism regarding the Galois group of a cyclic extension
Suppose $\sigma$ is an element of the kernel. Then $\sigma(\alpha)=\alpha$ and so $\sigma$ acts trivially on $E$ because $\alpha$ generates $E$ over $F$. So $\sigma$ is the identity of $G$, and so the (unnamed) homomorphism is injective.
How many Scythians were there?
The smelted arrowheads are used to produce a hemispherical shell, the inner radius of which is given by the 600 amphorae ≈ 23,400,000 cubic centimeters (since this is the volume the bowl holds). So the inner radius is given by $$ \frac{2 \pi}{3} r^3 \ = \ 23,400,000 \ \text{cm.}^3 \ \ \Rightarrow \ \ r \ \approx \ 223.4 \ \text{cm.} \ , $$ which is about 7-1/3 feet. (So it is fairly large.) The thickness of the bowl is the given 10 cm., making the outer radius of the bowl 233.4 cm. The volume of bronze (sorry, I thought it was gold in my comment) used in producing the bowl is then $$ \frac{2 \pi}{3} (233.4)^3 \ - \ \frac{2 \pi}{3} (223.4)^3 \ \approx \ 3,278,200 \ \text{cm.}^3 \ . $$ If each arrowhead contains 1.9 cubic centimeters of bronze, this requires about 1,725,400 of them. EDIT -- As a side note to further aid in imagining this "bowl", bronze alloys have densities in the vicinity of 8.2 grams per c.c., so the mass of this object is around 27 metric tons (or roughly 30 English tons).
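The arithmetic can be reproduced in a few lines (the $23{,}400{,}000$ cm³ figure corresponds to taking one amphora $\approx 39{,}000$ cm³, which is this answer's assumption):

```python
import math

V_inner = 600 * 39_000                        # 600 amphorae at ~39,000 cm^3 each
r = (3 * V_inner / (2 * math.pi)) ** (1 / 3)  # inner radius of the hemispherical bowl
R = r + 10                                    # outer radius: 10 cm wall thickness
V_bronze = (2 * math.pi / 3) * (R**3 - r**3)  # bronze in the shell
arrowheads = V_bronze / 1.9                   # 1.9 cm^3 of bronze per arrowhead
print(round(r, 1), round(V_bronze), round(arrowheads))
```

This reproduces the numbers in the answer up to rounding: $r\approx 223.5$ cm, about $3.28$ million cm³ of bronze, and roughly $1.73$ million arrowheads.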
What is the largest positive integer less than $(\sqrt{6}+\sqrt{5})^6$
Let $x_n=(\sqrt6+\sqrt5)^{2n}+(\sqrt6-\sqrt5)^{2n}$. Show that $x_0=2$, $x_1=22$, and $$x_{n+2}=22x_{n+1}-x_n.$$ Then compute $x_2$ and $x_3$ from this formula. You can then find $$\lfloor(\sqrt6+\sqrt5)^6\rfloor=x_3-1.$$
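The recurrence and the final answer are easy to confirm (the floating-point value of $(\sqrt6+\sqrt5)^6$ is used only as a sanity check):

```python
import math

# Recurrence from the hint: x0 = 2, x1 = 22, x_{n+2} = 22 x_{n+1} - x_n
x = [2, 22]
for _ in range(2):
    x.append(22 * x[-1] - x[-2])
print(x)  # [2, 22, 482, 10582]

# x_3 - 1 should be the floor of (sqrt 6 + sqrt 5)^6
val = (math.sqrt(6) + math.sqrt(5)) ** 6
print(x[3] - 1, math.floor(val))  # 10581 10581
```

Note how close the call is: $(\sqrt6+\sqrt5)^6 = 10582 - (\sqrt6-\sqrt5)^6 \approx 10581.9999$, which is exactly why the algebraic argument is preferable to naive numerics.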
How is zero order phase correction applied?
I think the question is probably poorly phrased. This is the answer I was looking for (and consequently found!): The signal from an NMR experiment is usually represented as a stream of complex numbers. You can imagine a complex number like this: where $x$ is the real value and $y$ is the imaginary value. This means that the magnitude of the number is $\sqrt{x^2 + y^2}$ and a phase is $\tan^{-1}\left(\frac yx \right)$. In NMR, experimenters want the beginning of the return signal to be at maximum for the real part, and zero for the imaginary. However the reading could start before the maximum, so to correct this the phase is changed and the magnitude kept the same. To do this the new values for $x$ and $y$ are calculated thus: $\ x = \text{magnitude} \cdot \cos \theta $ $\ y = \text{magnitude} \cdot \sin \theta $ Where $\theta$ is the new phase you want the data to have. This calculation is applied across all points in the signal. This is known as adjusting the zero order phase.
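Here is a toy sketch of this in Python (the signal values and the $0.7$ rad offset are made up for illustration):

```python
import cmath

# Toy zero-order phase correction: every point of the complex signal is rotated
# by the same angle, which preserves magnitudes and shifts all phases equally.
true_signal = [2.0 + 0.0j, 1.0 + 0.5j, 0.3 - 0.2j]
offset = 0.7  # hypothetical unwanted phase offset, in radians
measured = [z * cmath.exp(1j * offset) for z in true_signal]

# Correct by rotating back: new_x = mag*cos(theta), new_y = mag*sin(theta),
# which is exactly multiplication by exp(-1j * offset)
corrected = [z * cmath.exp(-1j * offset) for z in measured]
print(all(abs(z - w) < 1e-12 for z, w in zip(true_signal, corrected)))  # True
```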
relative sign in Hodge star of tensor product
I studied that part in Huybrechts' book some time ago. Looking at my notes, I think that a decent way to prove this is to use the standard property of the Hodge star, i.e. $ \star \alpha \wedge \beta = \langle\alpha, \beta\rangle \eta$ where $\eta$ is the volume form. With that in mind, the trick seems to be to use this property once on the combined space $W_1 \oplus W_2$ and once on each of the $W_i$. $$ \star (\alpha_1 \otimes \alpha_2) \wedge (\beta_1 \otimes \beta_2) = \langle\alpha_1, \beta_1\rangle_1 \langle\alpha_2, \beta_2\rangle_2 \eta_1 \eta_2 = (-1)^{k_1 k_2} \star_1 \alpha_1 \wedge \star_2 \alpha_2 \wedge \beta_1 \wedge \beta_2 $$ I.e. the overall sign comes from commuting the differential forms in the last step.
Does the fiber of a projection, under a surjective *-homomorphism, contain a projection?
Not sure about the general case. In your particular case, yes, it's true. Suppose that $\phi(T)$ is a projection. Since $\phi(T^*)=\phi(T)^*=\phi(T)$, you get that $$ \phi\Big(\frac{T+T^*}2\Big)=\phi(T), $$ so you may assume without loss of generality that $T$ is selfadjoint. From $\phi(T-T^2)=0$, we get that $T-T^2$ is compact. The spectrum of a compact selfadjoint operator is either finite or a sequence that converges to zero. And $$ \sigma(T-T^2)=\{\lambda-\lambda^2:\ \lambda\in\sigma(T)\}. $$ If $t-t^2=s-s^2$, we rewrite this as $t-s=t^2-s^2=(t-s)(t+s)$. So either $s=t$ or $s=1-t$. Thus, since $\sigma(T-T^2)$ is countable, so is $\sigma(T)$. If $\lambda$ is an accumulation point for $\sigma(T)$, there exists a sequence $\{\lambda_n\}\in\sigma(T)$ with $\lambda_n\ne\lambda_m$ and $\lambda_n\to\lambda$. Then $\lambda_n-\lambda_n^2\to\lambda-\lambda^2$, so $\lambda-\lambda^2$ is an accumulation point for $\sigma(T-T^2)$ (note that infinitely many points in the sequence $\{\lambda_n-\lambda_n^2\}$ are distinct). So $\lambda-\lambda^2=0$, which forces $\lambda=0$ or $\lambda=1$. We have shown that the only two possible accumulation points of $\sigma(T)$ are $0$ and $1$. Let $r\in (0,1)\setminus\sigma(T)$. Then $1_{(-\infty,r)}$ and $1_{(r,\infty)}$ are continuous on $\sigma(T)$. So by continuous functional calculus we write $$ T=R+I-S, $$ where $R=1_{(-\infty,r)}(T)$ and $S=I-1_{(r,\infty)}(T)$ are selfadjoint operators with $R(I-S)=0$ and with countable spectrum with only zero as a possible accumulation point. This allows us to write $$ R=\sum_k\alpha_kP_k,\qquad S=\sum_j\beta_jQ_j $$ where $\{\alpha_k\}$ and $\{\beta_j\}$ are sequences that converge to zero, and the $\{P_k\}$ and $\{Q_j\}$ are families of pairwise orthogonal projections. With some relabelling, we may write $$ R=\sum_k\alpha_kP_k+\alpha_k'P_k',\qquad S=\sum_j\beta_jQ_j+\beta_j'Q_j', $$ where the projections $P_k$ and $Q_j$ have infinite rank, and the projections $P_k'$ and $Q_j'$ have finite rank. 
Now $$\tag1 \phi(R)=\sum_k\alpha_k\phi(P_k),\qquad\phi(S)=\sum_j\beta_j\phi(Q_j). $$ We have $\phi(T^2)=\phi(T)$, which is $$ \phi(R^2)+\phi((I-S)^2)=\phi(R)+\phi(I-S). $$ Multiplying by $\phi(R)$ we get $\phi(R)^3=\phi(R)^2$; being a selfadjoint operator, this equality tells us that the spectrum is $\{0,1\}$ and so $\phi(R)$ is a projection. Similarly, $\phi(I-S)$ is a projection, and so is $\phi(S)$. Going back to $(1)$ we get that $\alpha_k,\beta_j\in\{0,1\}$, for all $k$ and $j$. If we now form $$ P=\sum_k\alpha_kP_k+\sum_j(1-\beta_j)Q_j, $$ this is a projection (note that $P_kQ_j=0$ for all $k,j$, this comes from $R(I-S)=0$). And $$ \phi(P)=\phi(R)+\phi(I-S)=\phi(T). $$
Density function of an (iid) stochastic process
"Probability density function" as a concept only really works when we have a random variable defined on Euclidean space, i.e. a single real-valued random variable or the joint law of finitely many. There is the concept of density with respect to a measure, but there isn't a canonical measure on $\mathbb R^{[0,1]}$ (the space of functions from $[0,1]$ to $\mathbb R$, which is where your process lives) in the way that there is on $\mathbb R^n$. For infinitely many random variables, the best we have is Kolmogorov's theorem which tells us that the joint law is uniquely given by the finite dimensional distributions: for any $t_1,\ldots,t_n\in[0,1]$, we have $$\mathbb P(X_{t_1}\le x_1,\ldots,X_{t_n}\le x_n)=\prod_{i=1}^n\left(\frac1{\sqrt{2\pi}}\int_{-\infty}^{x_i}e^{-t^2/2}dt\right).$$
Finding the fundamental solution of an ODE
This does look like an exercise from a homework... anyway, just some tips: the reasoning you do is correct, $\Gamma(x,\epsilon)$ indeed is a linear combination of $e^{k(x-\epsilon)}$ and $e^{-k(x-\epsilon)}$ away from $x=\epsilon$. However, one must remember that $\Gamma$ is not differentiable at $x=\epsilon$, so you should expect the general form of $\Gamma$ to be $$ \Gamma(x,\epsilon)= \begin{cases} A e^{k(x-\epsilon)}+B e^{-k(x-\epsilon)} & x<\epsilon\,, \\ C e^{k(x-\epsilon)}+D e^{-k(x-\epsilon)} & x>\epsilon\,. \end{cases} $$ To find the coefficients $A,B,C,D\in\mathbb{R}$, you need to use the definition of fundamental solution. Namely, for any test function $\varphi\in\mathcal{C}_c^\infty(\mathbb{R})$ it must hold that \begin{equation} \varphi(\epsilon)\,=\,\int_{-\infty}^{\infty}\Gamma(x-\epsilon)\left[-\varphi''(x)+k^2\varphi(x)\right]\,dx\,. \end{equation} Split the integral on the right-hand side into two parts (from $-\infty$ to $\epsilon-\delta$ and from $\epsilon+\delta$ to $+\infty$, with $\delta>0$ a small number that you will eventually send to zero) and integrate by parts a couple of times. You will be left with some boundary terms. Impose that these boundary terms equal $\varphi(\epsilon)$ in the limit $\delta\to 0$ and you will find some conditions on your coefficients $A,B,C,D$. As LutzL mentioned, at the end you will find that these conditions force $\Gamma\,'(x-\epsilon)$ to have a jump of height $1$ at $x=\epsilon$. Also, the coefficients $A,B,C,D$ will not (and cannot) be completely fixed, but you will have some freedom in their choice, due to the fact that the fundamental solution is not uniquely determined (if $\Gamma$ is a fundamental solution, then $\Gamma+h$ is also a fundamental solution for any function $h$ such that $-h''+k^2 h=0$). Good luck!
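For a concrete check: with $k=1$ and $\epsilon=0$, one standard (bounded) choice of the coefficients gives $\Gamma(x)=e^{-k|x|}/(2k)$. A numerical verification of the defining identity, using the trapezoid rule and the test function $\varphi(x)=e^{-x^2}$:

```python
import math

# Bounded fundamental solution of -u'' + k^2 u = delta (take k = 1, epsilon = 0):
# Gamma(x) = exp(-k|x|)/(2k); its derivative jumps at x = 0.
k = 1.0

def Gamma(x):
    return math.exp(-k * abs(x)) / (2 * k)

# Test function phi(x) = exp(-x^2); then -phi'' + k^2 phi = (2 - 4x^2 + k^2) exp(-x^2)
def rhs(x):
    return (2 - 4 * x * x + k * k) * math.exp(-x * x)

# Trapezoid rule on [-10, 10], with the kink x = 0 landing on a grid node
N = 200_000
h = 20.0 / N
total = 0.0
for i in range(N + 1):
    x = -10.0 + i * h
    w = 0.5 if i in (0, N) else 1.0
    total += w * Gamma(x) * rhs(x)
total *= h
print(round(total, 6))  # 1.0, i.e. phi(0), as the defining identity requires
```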
Is there any greedy algorithm to solve polygon problem with 3 polygons?
I have an attempt at the problem. We transform P2 into a convex polygon by connecting the concave parts; call it P2', and let N be its number of edges. We remark that P2' is inside P1 and contains P2, so P2' satisfies the conditions. Now we need to find the polygon with the minimum number of edges (between 3 and N). We try to decrease the number of edges of P2' iteratively. There are two ways to do that: linking two edges by adding a point, or linking two edges by adding a line. Then verify that the new polygon is still inside P1; otherwise test the next pair of edges until all of them have been tested. We stop when we cannot decrease the number any more.
Lie derivative of a two-form
I'm going to use capital letters for the vector fields and $\mathcal{L}$ for the Lie derivative. By Cartan's formula, we have $$(\mathcal{L}_X\alpha)(Y, Z) = (d(i_X\alpha))(Y, Z) + (i_X(d\alpha))(Y, Z) = (d(i_X\alpha))(Y, Z) + (d\alpha)(X, Y, Z).$$ If $\beta$ is a one-form, $(d\beta)(Y, Z) = Y\beta(Z) - Z\beta(Y) - \beta([Y, Z])$, so for $\beta = i_X\alpha$ we see that \begin{align*} (d(i_X\alpha))(Y, Z) &= Y(i_X\alpha)(Z) - Z(i_X\alpha)(Y) - (i_X\alpha)([Y, Z])\\ &= Y\alpha(X, Z) - Z\alpha(X, Y) - \alpha(X, [Y, Z])\\ &= Y\alpha(X, Z) - Z\alpha(X, Y) + \alpha([Y, Z], X). \end{align*} As for the other term, we have \begin{align*} &\ (d\alpha)(X, Y, Z)\\ =&\ X\alpha(Y, Z) - Y\alpha(X, Z) + Z\alpha(X, Y) - \alpha([X, Y], Z) + \alpha([X, Z], Y) - \alpha([Y, Z], X). \end{align*} Combining the two, we obtain \begin{align*} (\mathcal{L}_X\alpha)(Y, Z) &= X\alpha(Y, Z) - \alpha([X, Y], Z) + \alpha([X, Z], Y)\\ &= X\alpha(Y, Z) - \alpha(\mathcal{L}_X Y, Z) + \alpha(\mathcal{L}_X Z, Y)\\ &= X\alpha(Y, Z) - \alpha(\mathcal{L}_X Y, Z) - \alpha(Y, \mathcal{L}_X Z). \end{align*} Now note that if $f$ is a smooth function, then $\mathcal{L}_Xf = d(i_Xf) + i_X(df) = d(0) + df(X) = Xf$. In particular, $$\mathcal{L}_X(\alpha(Y, Z)) = X\alpha(Y, Z).$$ Therefore $$(\mathcal{L}_X\alpha)(Y, Z) = \mathcal{L}_X(\alpha(Y, Z)) - \alpha(\mathcal{L}_X Y, Z) - \alpha(Y, \mathcal{L}_X Z)$$ and hence $$\mathcal{L}_X(\alpha(Y, Z)) = (\mathcal{L}_X\alpha)(Y, Z) + \alpha(\mathcal{L}_X Y, Z) + \alpha(Y, \mathcal{L}_X Z).$$ More generally, if $\alpha$ is a covariant $k$-tensor (for example, a $k$-form), $$\mathcal{L}_X(\alpha(Y_1, \dots, Y_k)) = (\mathcal{L}_X\alpha)(Y_1, \dots, Y_k) + \sum_{i=1}^k\alpha(Y_1, \dots, Y_{i-1}, \mathcal{L}_XY_i, Y_{i+1}, \dots, Y_k).$$ In fact, one usually defines the Lie derivative to act on tensors by precisely this rule. An algebraic way of expressing the above equation is: the Lie derivative obeys the Leibniz rule with respect to contractions. 
In fact, this can be used as one of the axioms that uniquely determines the action of the Lie derivative on tensor fields - see here.
Second order integer cone and polar
For an arbitrary set $C \subset \mathbb R^3$, we define the polar cone via $$C^\circ = \{y \in \mathbb R^3 \mid \forall x \in C : y^\top x \le 0\}.$$ Then, we have $$C_2^\circ = C_1$$ from the bipolar theorem and, similarly, $$C_3^\circ = \operatorname{clcone}( C_1 \cap \mathbb Z^3 ),$$ where $\operatorname{clcone}(B)$ is the closed, convex, conical hull, i.e., the smallest closed convex cone containing $B$. Finally, one can check $$C_1 = \operatorname{clcone}( C_1 \cap \mathbb Z^3 ).$$
Logarithm transformation for a function in Machine Learning
As noted in the comments, one major reason is to avoid numerical problems with the floating point numbers involved (overflow/underflow). Logs transform products into sums and scale large/small numbers into more similar orders of magnitude, without changing the location of the optima. But what is the underlying reason for using a logarithmic function? Beyond the practical reason above, there is a theoretical reason. It comes from the log-likelihood of a probabilistic model being the objective for a statistical model, giving rise to all sorts of natural loss functions. For instance, under a Gaussian error assumption, minimizing the $L_2$ error is equivalent to maximizing the log likelihood. In the case of binary classification with a sigmoid output (most similar to your case), maximizing the log likelihood leads to minimizing the binary cross entropy loss. This loss quantity is closely related to information theory and the KL divergence, which provides additional motivation for its use. In general, maximizing (log) likelihood is well-studied and thought to endow models with nice statistical properties. Related: [1].
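A tiny sketch of the underflow point (the numbers are made up):

```python
import math

# Likelihood of 1000 i.i.d. observations whose density values are each 0.01:
# the raw product underflows double precision, the log-likelihood does not.
densities = [0.01] * 1000

prod = 1.0
for q in densities:
    prod *= q
print(prod)  # 0.0 -- underflow; the true value is 1e-2000

log_lik = sum(math.log(q) for q in densities)
print(round(log_lik, 2))  # -4605.17, i.e. 1000 * ln(0.01), perfectly representable
```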
Prove $\{(x,y): x>0\}$ is connected
This may not be exactly what you are looking for, but here's one way: The function $f: \mathbb{R}^2\to\mathbb{R}^2$ given by $f(x, y)=(e^x, y)$ is continuous. The continuous image of a connected set is connected. The image of $f$ is precisely $\{(x, y)\in\mathbb{R}^2: x>0\}$.
Change of variable induces divergence in the integral
Your change of variable is not one-to-one, which invalidates the RHS. But note that the appearance of a singularity is not by itself a problem: a legitimate change of variable can introduce an integrable singularity without making the integral diverge, e.g. $$\int_0^\infty e^{-x^2/2} dx = \int_0^\infty e^{-y} (2y)^{-1/2} dy$$ (via $y=x^2/2$), because the integrand on the right is still (improperly Riemann) integrable.
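A numeric sanity check that both sides of the displayed example agree, despite the singularity at $y=0$ (the midpoint rule is used so the integrand is never evaluated at the singular endpoint; the common value is $\sqrt{\pi/2}\approx 1.2533$):

```python
import math

target = math.sqrt(math.pi / 2)  # known value of the Gaussian integral on [0, inf)

def midpoint(f, a, b, n):
    # Midpoint rule: avoids evaluating f at the endpoints
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lhs = midpoint(lambda x: math.exp(-x * x / 2), 0, 30, 100_000)
rhs = midpoint(lambda y: math.exp(-y) / math.sqrt(2 * y), 0, 30, 1_000_000)

# lhs is very accurate; rhs converges more slowly because of the y^{-1/2} factor
print(abs(lhs - target) < 1e-6, abs(rhs - target) < 2e-2)  # True True
```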
How do you justify a preference for the positive purely real integer in a GCD computation?
Short answer: the $G$ in GCD means "greatest". There is no natural order on $\Bbb C$, so there is no greatest of two complex numbers. Long answer: I assume that you are talking about Gaussian integers here. The gcd still exists in a way, but it is not unique any more: it is unique up to multiplication by a unit (a Gaussian integer of norm $1$). Therefore, "the" gcd of two Gaussian integers is a collection of four Gaussian integers. If one of them is a positive real number, this may be your "preferred" gcd, by choice/definition, for the only reason that it may seem natural. However, it may very well happen that none of them be a real number. In this case, your question can't even be raised.
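For concreteness, a sketch of the Euclidean algorithm in $\Bbb Z[i]$ (floating-point complex numbers stand in for Gaussian integers here, which is fine at this size; which of the four associates you get depends on the rounding):

```python
# Euclidean algorithm in Z[i]: remainder via rounding the complex quotient
# to the nearest Gaussian integer, which keeps the remainder's norm small.
def gauss_divmod(a, b):
    q = a / b
    q = complex(round(q.real), round(q.imag))
    return q, a - q * b

def gauss_gcd(a, b):
    while b != 0:
        _, r = gauss_divmod(a, b)
        a, b = b, r
    return a

def norm(z):
    return round(z.real) ** 2 + round(z.imag) ** 2

d = gauss_gcd(complex(5, 0), complex(3, 1))
print(d, norm(d))  # a gcd of norm 5; its four associates are d, -d, i*d, -i*d
```

Here $5=(2+i)(2-i)$ and $3+i=(1+i)(2-i)$ share (up to units) the factor $2-i$, and indeed none of this gcd's four associates is "preferred" by the arithmetic itself.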
Equivalence of two $\sigma$-finite measures
I suspect that the result is not true. I suspect that there is an implied (or missing) hypothesis that the measures are bounded on bounded sets. Let $\mu A = \int_{A \cap (0,1]} \frac{dt}{t}$, and $\nu A = \mu A + \delta_0 A$, where $\delta_0$ is the Dirac measure with point mass at $x=0$. Clearly $\mu \ne \nu$. However, if $f$ is continuous, then either $f(0) = 0 $ or not. If $f(0) = 0$, then $\int f d \mu = \int f d \nu$, and if $f(0) \neq 0$, then both integrals are infinite with the same sign. Hence $\int f d \mu = \int f d \nu$ for all continuous $f$. Both measures are finite on each element of the collection $\{ (-\infty, 0] \} \cup \{(\frac{1}{n+1}, \frac{1}{n}] \}_{n} \cup \{ (1,\infty) \} $.
Condition number of a matrix bounded from below and above?
For a normal matrix $A$, the condition number (with respect to the spectral norm) can be characterized by $$\kappa(A)=\frac{\lambda_{\max}}{\lambda_{\min}}$$ where $$\lambda_\min := \min \{|\lambda |: \lambda \in \sigma(A)\},\quad \lambda_\max := \max \{|\lambda |: \lambda \in \sigma(A)\},$$ and $\sigma(A)$ is the spectrum of $A$ (for a general matrix one uses singular values instead of eigenvalues). So it is clear that $\kappa(A) \geq 1$ for every such $A$ since $\lambda_\min\leq \lambda_\max$. However, consider the matrix $$A=\begin{pmatrix} \delta & 0 \\ 0 & 1 \end{pmatrix}$$ then for $\delta > 1$ we have $\lambda_\min=1$ and $\lambda_\max= \delta$, so that $$\kappa(A) = \delta \to \infty$$ as $\delta \to \infty$. So it is not bounded from above.
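Both bounds can be seen numerically on the diagonal example:

```python
# kappa for the diagonal matrix diag(delta, 1): ratio of extreme eigenvalue moduli.
for delta in (1, 10, 1000, 10**6):
    eigs = [abs(delta), 1.0]
    kappa = max(eigs) / min(eigs)
    print(delta, kappa)  # kappa equals delta: always >= 1, unbounded as delta grows
```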
I am stuck on Fermat's Little Theorem. I know how to apply it, but does it apply here: $15^{48} \bmod 53$?
Since $15^{48}$ is nearly $15^{52}$, we can write \begin{align} 15^{48} &= 15^{52} \cdot 15^{-4}\\ &\equiv 15^{-4} &\pmod{53}, \end{align} using Fermat's little theorem. With the extended Euclidean algorithm, one can compute $15^{-1} \equiv -7 \pmod{53}$, and so \begin{align} 15^{-4} &\equiv (-7)^4 &\pmod{53}\\ &\equiv 49^2 &\pmod{53}\\ &\equiv (-4)^2 &\pmod{53}\\ &\equiv 16. &\pmod{53} \end{align}
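All of this is easy to confirm in Python (the built-in `pow` handles both the modular power and, from Python 3.8, the modular inverse):

```python
p = 53
# Fermat: 15^(p-1) = 1 (mod p), so 15^48 = 15^(-4) (mod p)
direct = pow(15, 48, p)
via_flt = pow(pow(15, -1, p), 4, p)  # modular inverse needs Python 3.8+
print(direct, via_flt)               # 16 16
print(pow(15, -1, p))                # 46, i.e. -7 mod 53
```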
How can I get maximal ideal containing an ideal using Macaulay2?
The scheme $X\subseteq {\Bbb A}^3$ defined by the ideal $I=\langle x^2y+z,zx-y\rangle$ is supported on the lines $y=z=0$, $x+1=y+z=0$, and the lines $x^2-x+1=xz-y=0$:

```
R = QQ[x,y,z]
I = ideal(x^2*y+z, z*x-y)
minimalPrimes I
-- {ideal(z,y), ideal(x+1,y+z), ideal(x^2-x+1,-x*z+y)}
```

The lines intersect as follows:

```
minimalPrimes(ideal(z,y) + ideal(x+1,y+z))
-- {ideal(z,y,x+1)}
minimalPrimes(ideal(z,y) + ideal(x^2-x+1,-x*z+y))
-- {ideal(x^2-x+1,z,y)}
minimalPrimes(ideal(x+1,y+z) + ideal(x^2-x+1,-x*z+y))
-- {}
```

So $X$ has as singular locus the three ($1+2$) points {ideal(z,y,x+1), ideal(z,y,x^2-x+1)}. These give rise to the distinguished maximal ideals, or points. If you want any other points on the lines, just pick points on, say, ideal(z,y) by choosing a third plane, say $x-a=0$, giving the maximal ideal ideal(z,y,x-a) for any $a$ in your field.
Rotation of a hyperbola in affine geometry
Try scaling the $x$-axis with $x\mapsto \frac{1}{\sqrt{5}}x$ and then rotating by $-\frac{\pi}{4}$.
Which dyadic rationals can be built on domineering boards?
All of them. In the 2015 Theoretical Computer Science paper "New results for Domineering from combinatorial game theory endgame databases" (arXiv link) by Uiterwijk and Barton, a construction is provided to produce any dyadic rational in a (connected!) board of Domineering. They cite the 1996 paper New values in Domineering by Yonghoan Kim, which showed a construction for connected boards of value $2^{-n}$.
Convergence of $\sum \frac{1}{a_n}$ given convergence of $\sum a_n$
Hint: you might think about the series for $\ln 2: \sum_{n=1}^{\infty}\frac{(-1)^n}{n}$, which converges to a non-zero value.
Heat Equation: Initial value boundary value problem
The solution consists of two parts: the stationary solution $-x^2+1$ plus a transient part, which is simply the solution to the homogeneous heat equation with the same boundary conditions. This solution can be easily found in explicit form (note that the initial condition will change). Do this and show that the transient solution is always less than $0$.
Polynomial ring isomorphisms
You're on the right track, but it is not generally the case that the ideal generated by $f$ itself is one of those three. Luckily, we can fix this by using the right changes of variables. In the following, I will repeatedly use the following fact: If $R$ is a ring, $I$ is an ideal and $\sigma:R\to R$ is an automorphism, then $R/I \cong R/\sigma(I)$. I will also use the terms "change of variables" and "automorphism of $\mathbb R[X]$" interchangeably. Note that all isomorphisms are actually $\mathbb R$-algebra isomorphisms, not just ring isomorphisms. I'll assume $a\neq 0$. $f=aX^2+bX+c=\frac{1}{4a}(4a^2X^2+4abX+b^2-(b^2-4ac))=\frac{1}{4a}((2aX+b)^2-(b^2-4ac))$. We have that $f(X)\mapsto f(2aX+b)$ is an automorphism of $\mathbb R[X]$. After we apply the inverse of this automorphism, we get the ideal $g=(\frac{1}{4a}(X^2-D))$ where $D=b^2-4ac$. Now $\frac{1}{4a}$ is just a unit, so we are actually dealing with the ideal $g=(X^2-D)$. If $D=0$, then obviously $\mathbb R[X]/(g) \cong \mathbb R[\epsilon]$. If $D\neq 0$, then we make another change of variables via $X \mapsto \sqrt{|D|}X$. After this change of variables, we get the ideal $(X^2-1)$ or $(X^2+1)$, depending on the sign of $D$. The rest should be obvious.
Checking if $p$ tautologically implies $q$
Usually, $\to$ is the conditional (i.e. the connective "if __, then __") while "tautologically implies" is denoted with $\vDash$. The relation between the two is the following: $p \vDash q$ iff $p \to q$ is a tautology.
A concrete example of involution in group theory
Example: in the cyclic group $\mathbb{Z}_{12}$, $6$ is an involution since $6+6=0$ in $\mathbb{Z}_{12}$. More generally, in $\mathbb{Z}_{2n}$, the element $n$ is an involution. Notice that the requirement $a=a^{-1}$ is equivalent to $aa=1$ (just multiply both sides by $a$). Therefore an involution must satisfy $a^2=1$. So the only way it does not have order $2$ is if it has order smaller than $2$, i.e., $a=1$.
show that $\cos(\theta_{2}-\theta_{3})+\cos(\theta_{3}-\theta_{1})+\cos(\theta_{1}-\theta_{2})+1=0$
Let $\bar{z}$ denote the complex conjugate of $z$. Multiplying $a+b+c = abc$ by $\bar{a}$, we get \begin{align*} 1 + \bar{a}b + \bar{a}c = bc \end{align*} Similarly, \begin{align*} 1+\bar{b}c+\bar{b}a &= ca\\ 1+\bar{c}a + \bar{c}b & = ab \end{align*} Adding we get, \begin{align*} 3 + (\bar{a}b +a\bar{b}+ \bar{b}c +b\bar{c}+ \bar{c}a + c\bar{a}) = ab+bc+ca \end{align*} Also, from $a+b+c = abc$, we get $\bar{a} +\bar{b}+\bar{c} = \bar{a}\bar{b}\bar{c}$ and hence dividing throughout by $\bar{a}\bar{b}\bar{c}$ we get \begin{align*} bc+ca+ab = 1 \end{align*} Thus, \begin{align*} (\bar{a}b +a\bar{b}+ \bar{b}c +b\bar{c}+ \bar{c}a + c\bar{a}) = -2 \end{align*} and hence \begin{align*} \cos(\theta_2-\theta_3) + \cos(\theta_3 - \theta_1) + \cos(\theta_1 - \theta_2) + 1 = 0 \end{align*}
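A quick numeric spot check, assuming the usual setup $a=e^{i\theta_1}$, $b=e^{i\theta_2}$, $c=e^{i\theta_3}$ with $a+b+c=abc$; the witness $a=1$, $b=i$, $c=-i$ satisfies the constraint since $1+i-i=1=1\cdot i\cdot(-i)$:

```python
import cmath, math

# Unit-modulus witness of a + b + c = abc: theta_1 = 0, theta_2 = pi/2, theta_3 = -pi/2
t1, t2, t3 = 0.0, math.pi / 2, -math.pi / 2
a, b, c = (cmath.exp(1j * t) for t in (t1, t2, t3))
print(abs((a + b + c) - a * b * c) < 1e-12)  # True: constraint holds

total = math.cos(t2 - t3) + math.cos(t3 - t1) + math.cos(t1 - t2) + 1
print(abs(total) < 1e-12)  # True: the identity holds for this witness
```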
Could I get a critique of this epsilon-delta limit proof?
You pretty much have all the right pieces. Note that we're not dealing with a biconditional though; we want to show that: $$ 0 < |x - 3| < \delta \implies |x^2 - 9| < \epsilon $$ Here's a cleaned up version of your proof. Given any $\epsilon > 0$, consider $\delta = \min\{1, \epsilon/7\} > 0$. Then observe that if $0 < |x - 3| < \delta$, then: \begin{align*} |x^2 - 9| &= |x - 3||x + 3| \\ &< \frac{\epsilon}{7}|x + 3| &\text{since }|x - 3| < \delta \leq \frac{\epsilon}{7} \\ &= \frac{\epsilon}{7}|(x - 3) + (6)| \\ &\leq \frac{\epsilon}{7}\left(|x - 3| + |6|\right) &\text{by the triangle inequality} \\ &< \frac{\epsilon}{7}\left(1 + |6|\right) &\text{since }|x - 3| < \delta \leq 1 \\ &= \epsilon \end{align*} as desired. $~~\blacksquare$
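An empirical spot check of the choice $\delta=\min\{1,\epsilon/7\}$ (random samples; purely a sanity check, not a substitute for the proof):

```python
import random

random.seed(0)
failures = 0
for _ in range(10_000):
    eps = random.uniform(1e-6, 50.0)
    delta = min(1.0, eps / 7)
    x = 3 + random.uniform(-delta, delta)  # any x with |x - 3| < delta
    if not abs(x * x - 9) < eps:
        failures += 1
print(failures)  # 0
```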
Matrix computing of $a(i,j)a(j,i)$
You could use the trace formula: $\mathrm{Tr}(AA)=\sum_i[AA]_{ii}=\sum_i\sum_j[A]_{ij}[A]_{ji}$
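A quick check of the identity on a concrete $3\times 3$ matrix (plain lists, no external libraries):

```python
# Check Tr(AA) = sum_{i,j} a_ij * a_ji on a small example matrix
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
n = len(A)

AA = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
trace_AA = sum(AA[i][i] for i in range(n))
double_sum = sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))
print(trace_AA, double_sum)  # 261 261
```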
Prove that a presheaf is a sheaf
Hint: Since $X$ is irreducible, any two non-empty open sets always intersect. And for any inclusion $V \subseteq U$ of non-empty open sets, the restriction map $\mathcal{F}(U) \rightarrow \mathcal{F}(V)$ is the identity map. Sheaf Property: (1) Let $U \subseteq X$ be open and let $\{U_i\}$ be an open cover of $U.$ Suppose $s \in \mathcal{F}(U)$ and $s|_{U_i} = 0, \forall i.$ The restriction maps are identity maps. So we have $s = 0.$ (2) Let $U \subseteq X$ be open and let $\{U_i\}$ be an open cover of $U.$ Suppose for each $i,$ we have $s_i \in \mathcal{F}(U_i)$ such that $s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}, \forall i, j.$ Since $X$ is irreducible, $U_i \cap U_j \neq \emptyset, \forall i, j$ and the restriction maps $\mathcal{F}(U_i) \rightarrow \mathcal{F}(U_i \cap U_j)$ are identity maps, this shows that $s_i = s_j \in \mathbb{Z}, \forall i, j.$ Take $s \in \mathcal{F}(U)$ to be $s_i.$
No $\Delta$-system on a subset of a singular cardinal.
$\newcommand{\cf}{\operatorname{cf}}$Let $\cf\kappa=\lambda<\kappa$, and let $\langle\alpha_\xi:\xi<\lambda\rangle$ be a strictly increasing sequence cofinal in $\kappa$ such that $\alpha_0=0$. For $\xi<\lambda$ let $K_\xi=[\alpha_\xi,\alpha_{\xi+1})$; then $\kappa=\bigcup_{\xi<\lambda}K_\xi$, and $|K_\xi|<\kappa$ for each $\xi<\lambda$. Let $$A=\bigcup_{\xi<\lambda}[K_\xi]^2\;;$$ clearly $|A|=\kappa$, and I leave to you the straightforward verification that $A$ has no $\Delta$-system of power $\kappa$.
Let $V$ be a vector space of dimension $m\geq 2$ and $T: V\to V$ be a linear transformation such that $T^{n+1}=0$ and $T^{n}\neq 0$
$0=T^{n+1}(V)=T(T^n(V)) \implies \operatorname{rank}(T^n)\le \operatorname{nullity}(T)$. But of course $\operatorname{nullity}(T)\le \operatorname{nullity}(T^{n+1})$, so (2) is true. Oh, and (1) is true also... For (1), see here...
Oblique asymptote coincides with the function
Say that you have a function $f(x) = p(x)/q(x)$ with oblique asymptote $\ell(x)$ and remainder $r(x)$ (i.e., $f(x) = \ell(x) + r(x)$, where $\ell(x)$ is linear and $r(x) = \tilde p(x)/q(x)$ tends to $0$ as $x$ tends to $\infty$). If there is a solution to $r(x) = 0$, call it $x_0$, then we have $$f(x_0) = \ell(x_0) + r(x_0) = \ell(x_0).$$ So wherever the remainder vanishes, the original function agrees with the oblique asymptote $\ell(x)$ (meaning that the function will touch or cross the asymptote at that point).
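A concrete instance (my example, not from the question): $f(x)=x^3/(x^2+1)$ has oblique asymptote $\ell(x)=x$ and remainder $r(x)=-x/(x^2+1)$, which vanishes at $x_0=0$:

```python
# f(x) = x^3/(x^2+1) = x - x/(x^2+1): asymptote l(x) = x, remainder r(x) = -x/(x^2+1)
def f(x):
    return x**3 / (x**2 + 1)

def l(x):
    return x

def r(x):
    return -x / (x**2 + 1)

x0 = 0.0
print(f(x0) == l(x0), abs(r(x0)) < 1e-15)  # True True: the curve meets its asymptote
```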
Prove: $p$ is not prime as an element of $Z[i] \implies$ there exist $a,b \in Z$ such that $p = a^2 + b^2$
So, $p$ is composite in $\Bbb Z[i]$, so that $p=\alpha\beta$ for some $\alpha,\beta\in\Bbb Z[i]$ where $1<N(\alpha)<N(p)$ and $1<N(\beta)<N(p)$. Hence, $$p^2=N(\alpha)N(\beta).$$ This implies that $$N(\alpha)\mid p^2.$$ Because $p$ is prime, $N(\alpha)$ is either $1$, $p$, or $p^2$. The inequalities above rule out $N(\alpha)=1$ and $N(\alpha)=p^2$ (note $N(p)=p^2$). Hence, $N(\alpha)=p$. Write $\alpha=a+bi$; then $p=N(\alpha)=a^2+b^2$, and we are done.
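As a concrete illustration of the conclusion $p=a^2+b^2$, a brute-force search (hypothetical helper; it succeeds exactly for the primes that are composite in $\Bbb Z[i]$, i.e. $p=2$ and $p\equiv 1 \pmod 4$):

```python
def two_squares(p):
    # Search for a, b with p = a^2 + b^2; returns None when no representation exists
    a = 1
    while a * a <= p:
        b2 = p - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            return a, b
        a += 1
    return None

print(two_squares(13), two_squares(53))  # (2, 3) (2, 7)
print(two_squares(7))                    # None: 7 stays prime in Z[i]
```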
Let $\{a_n\}_{n \in \Bbb N}$ be convergent. Prove/Disprove: $\lim\limits_{n\to\infty}$ $|a_n - a_{2n}| = 0$
Yes, $a_{2n}$ refers to the even entries of the sequence $\{a_n \in \Bbb R\}_{n \in \Bbb N}$. And you are essentially being asked to prove that the sequence $b_n = |a_n - a_{2n}|$ derived from our original one converges to $0$. And this can be accomplished by a standard $\varepsilon$-$N$ argument: Well fix any $\varepsilon > 0$. If $a_n$ is convergent (to $a \in \Bbb R$ say), then by definition of convergence there is a natural $N$ s.t. $$|a_n - a| < \frac{\varepsilon}{2}$$ for all naturals $n > N$. But look: for any natural $n$ obviously $2n \geq n$. Hence if $n > N$, then certainly $2n > N$ also. Thus for that same $\varepsilon$, you also get $$|a_{2n} - a| < \frac{\varepsilon}{2}$$ as long as $n > N$ remains true. So now by the triangle inequality, \begin{align*} |b_n - 0| = |a_n - a_{2n}| &\leq |a_n - a| + |a - a_{2n}| \\ &= |a_n - a| + |a_{2n} - a| \\ &< \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon \end{align*} for all naturals $n > N$. But that is exactly what $\lim\limits_{n \to \infty} b_n = 0$ means.
Is it possible to realize the induced homomorphism easier?
I don't have any complete answer for it, but you can get at least a commutative diagram involving these homologies, due to the naturality of the long exact sequence in homology: you have a commutative diagram of exact sequences of chain complexes $$\begin{array}{ccccccccc} 0 & \rightarrow & D & \rightarrow & A & \rightarrow & B & \rightarrow & 0 \\ & & \downarrow & & \downarrow & & \downarrow \\ 0 & \rightarrow & A & \rightarrow & B & \rightarrow & C & \rightarrow & 0 \end{array}$$ which yields a commutative diagram between homologies, and in particular you have the commutative square $$\begin{array}{ccc} H_{n+1}(B) & \rightarrow & H_n(D) \\ \downarrow & & \downarrow \\ H_{n+1}(C) & \rightarrow & H_n(A) \end{array}$$
Tangent Plane to the Surface $x^3y+2y^3z+3xz^3=16$ at the Point $(0,2,1)$
Write $$ f \equiv x^3y + 2y^3z + 3xz^3 $$ Recall $$ \nabla = \left(\begin{matrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \end{matrix}\right)$$ We apply the gradient operator to $f$ which will give us the normal vector at some $(x,y,z)$ $$\nabla f = \left(\begin{matrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \\ \frac{\partial}{\partial z} \end{matrix}\right) x^3y + 2y^3z + 3xz^3 $$ $$\nabla f = \left(\begin{matrix} \frac{\partial}{\partial x}\left[x^3y + 2y^3z + 3xz^3\right] \\ \frac{\partial}{\partial y}\left[x^3y + 2y^3z + 3xz^3\right] \\ \frac{\partial}{\partial z} \left[x^3y + 2y^3z + 3xz^3\right] \end{matrix}\right)$$ $$\nabla f = \left(\begin{matrix} 3x^2y + 3z^3 \\ x^3+6y^2z \\ 2y^3+9xz^2 \end{matrix}\right)$$ And evaluate it at $(0,2,1)$ to get the normal vector to be $ (3,24,16)$. Now we have a point on the plane and the normal we can write the plane as $$3x+24y+16z = 64$$
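A numerical double check of the gradient and the plane's constant term (central differences; the step size is arbitrary):

```python
# f(x,y,z) = x^3 y + 2 y^3 z + 3 x z^3; gradient at (0,2,1) should be (3, 24, 16)
def f(x, y, z):
    return x**3 * y + 2 * y**3 * z + 3 * x * z**3

def grad(p, h=1e-6):
    # Central-difference approximation of the gradient at point p
    out = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        out.append((f(*q1) - f(*q2)) / (2 * h))
    return out

g = grad((0.0, 2.0, 1.0))
print([round(v, 4) for v in g])  # [3.0, 24.0, 16.0]
print(3 * 0 + 24 * 2 + 16 * 1)   # 64, the plane's constant term
```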
If $\gamma$ is spherical, then the equation $\frac{\tau}{\kappa}=\frac{d}{ds}(\frac{\dot{\kappa}}{\tau \kappa^2})$ holds.
It is really a matter of chasing definitions. I have supplied the info needed to derive each step below. I hope that is clear enough. Let $s$ be the arc length along the curve $\vec{\gamma}(s)$, and suppose $\vec{\gamma}$ lies on the sphere with center $\vec{\alpha}$ and radius $r$. Recall the definition of tangent vector and the Frenet-Serret formulas $$ \vec{t} = \frac{d}{ds}\vec{\gamma}\quad(*1) \quad\text{ and }\quad \left\{\begin{array}{rrrrl} \dot{\vec{t}} =& &\kappa \vec{n}& &(*2a)\\ \dot{\vec{n}} =& -\kappa \vec{t}& &+ \tau\vec{b}&(*2b)\\ \dot{\vec{b}} =& &-\tau\vec{n} & &(*2c) \end{array}\right.$$ We have $$\begin{array}{rrll} & (\vec{\gamma} - \vec{\alpha})\cdot(\vec{\gamma} - \vec{\alpha}) & = r^2\\ \text{diff. once, (*1)} \implies & \vec{t} \cdot(\vec{\gamma} - \vec{\alpha}) & = 0 &(*3a)\\ \text{diff. again, (*2a, *1)} \implies & \kappa\vec{n} \cdot(\vec{\gamma} - \vec{\alpha}) + \vec{t}\cdot\vec{t}& = 0\\ \vec{t}\cdot\vec{t} = 1\implies & \vec{n}\cdot( \vec{\gamma} - \vec{\alpha} )& = -\frac{1}{\kappa} &(*3b)\\ \text{diff. again, (*2b,*1)} \implies & (-\kappa\,\vec{t} + \tau\,\vec{b})\cdot(\vec{\gamma}-\vec{\alpha}) + \vec{n}\cdot\vec{t} & = -\frac{d}{ds}\frac{1}{\kappa} = \frac{\dot{\kappa}}{\kappa^2}\\ \vec{t}\cdot\vec{n} = 0\text{ and (*3a)} \implies & \tau\,\vec{b}\cdot(\vec{\gamma}-\vec{\alpha}) & = \frac{\dot{\kappa}}{\kappa^2}\\ \iff & \vec{b}\cdot(\vec{\gamma}-\vec{\alpha}) & = \frac{\dot{\kappa}}{\tau\kappa^2}\\ \text{diff. again, (*2c,*1)} \implies & -\tau\,\vec{n}\cdot(\vec{\gamma}-\vec{\alpha}) + \vec{b}\cdot\vec{t} &= \frac{d}{ds}\left(\frac{\dot{\kappa}}{\tau\kappa^2}\right)\\ \vec{b}\cdot\vec{t} = 0\text{ and (*3b)} \implies & \frac{\tau}{\kappa} & = \frac{d}{ds}\left(\frac{\dot{\kappa}}{\tau\kappa^2}\right) \end{array}$$
Are rings $\mathbb{R}[x]/(x^2+2)$ and $\mathbb{R}[x]/(x^2+3)$ isomorphic?
Hint: they are both field extensions of $\mathbb R$ of degree $2$.
Epsilon limit question proof verification/help (self study)
In order to have rigor, we want to algebraically show that there exists an $N$ for each $\epsilon$. What you are doing right now looks a bit like trial and error, so we need to find some trick that works well. Here is a trick (which Jair Taylor also pointed out) that turns out to be very helpful here: $$\frac{2n^2}{n^3+3}<\frac{2n^2}{n^3}=\frac{2}{n}$$ Now if we pick an $\epsilon$, then we can find an $N$ which works, since we want $\frac{2}{N}<\epsilon$, we can rearrange as $N>\frac{2}{\epsilon}$, so just take $N=\lfloor\frac{2}{\epsilon}\rfloor + 1$, and we're done!
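A quick empirical check of this choice of $N$, testing $n>N$ for a few values of $\epsilon$:

```python
import math

# For each eps, N = floor(2/eps) + 1 should guarantee 2 n^2/(n^3+3) < 2/n < eps for n > N
violations = 0
for eps in (0.5, 0.1, 0.003):
    N = math.floor(2 / eps) + 1
    for n in range(N + 1, N + 1000):
        term = 2 * n**2 / (n**3 + 3)
        if not (term < 2 / n < eps):
            violations += 1
print(violations)  # 0
```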
The relationship of derived subgroup and absolute center of a group $G$
The smallest group $G$ with cyclic center for which $G'<L(G)<Z(G)$ is $G:=\Bbb{Z}/4\Bbb{Z}$, which is a finite $p$-group for $p=2$, as $$G'=0,\qquad L(G)=2\Bbb{Z}/4\Bbb{Z},\qquad Z(G)=\Bbb{Z}/4\Bbb{Z}.$$
Clarification of: $f$ has no multiple root in any field extension of $F$ unless $f'$ is the zero polynomial
You're right that the conclusion doesn't follow from the statement you quoted first. But it is reasonably easy to prove from first principles: Let $f(x)=x^{15}+ax^{10}+bx^5+c$ in some field of characteristic $5$, and suppose $\xi$ is a root of $f$. Now consider the function $$ g(y)=f(y+\xi)=(y+\xi)^{15}+a(y+\xi)^{10}+b(y+\xi)^5+c$$ This is a polynomial in $y$. If we use the binomial theorem to expand $(y+\xi)^{5n}$, the only terms whose binomial coefficient don't vanish modulo $5$ are those where the power of $y$ is a multiple of $5$. So $g$ must actually have the form $$ g(y)=y^{15}+a'y^{10}+b'y^5+c'$$ and since $g(0)=0$ (because $\xi$ was a root of $f$) we have that $c'=0$. Thus $$ g(y) = p(y)y^5 $$ for some polynomial $p$. But then $f(x)=g(x-\xi) = p(x-\xi)\cdot(x-\xi)^5$ which by definition means that $\xi$ is an at least fivefold root of $f$.
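As a side check (not part of the argument above): for coefficients in the prime field $\Bbb F_5$, the Frobenius map gives the stronger factorization $x^{15}+ax^{10}+bx^5+c=(x^3+ax^2+bx+c)^5$, since $a^5=a$ for $a\in\Bbb F_5$. A brute-force verification with plain coefficient arithmetic:

```python
# Verify x^15 + a x^10 + b x^5 + c == (x^3 + a x^2 + b x + c)^5 over F_5,
# for all a, b, c in F_5.  Coefficient lists are lowest-degree first.

def polymul(p, q, m):
    """Multiply two coefficient lists modulo m."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] = (r[i + j] + pi * qj) % m
    return r

def polypow(p, e, m):
    r = [1]
    for _ in range(e):
        r = polymul(r, p, m)
    return r

for a in range(5):
    for b in range(5):
        for c in range(5):
            cubic = [c, b, a, 1]            # x^3 + a x^2 + b x + c
            f = [0] * 16
            f[0], f[5], f[10], f[15] = c, b, a, 1
            assert polypow(cubic, 5, 5) == f
```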
Perpendicular distance from origin to the triangle
The distance of a point $(x',y',z')$ from the plane $Ax+By+Cz=D$ is given by $$\left| \frac{Ax'+By'+Cz'-D}{\sqrt{A^2+B^2+C^2}} \right|$$ The equation of a plane with $x$-, $y$- and $z$-intercepts $a$, $b$ and $c$ respectively is $$\frac{x}{a}+\frac{y}{b}+\frac{z}{c}=1$$ Now the $x$, $y$ and $z$ intercepts are $1$. Can you proceed?
Showing that the alternating group of degree n is normal
One quick approach is to show that $[S_n : A_n] = 2$, since a subgroup of index $2$ is always normal. Alternatively, the fact that conjugation preserves the length of a cycle will give you the result. If $\mu$ is the $m$-cycle $(x_1 x_2 \dots x_m)$, then \begin{align*} \tau \mu \tau^{-1} = (\tau(x_1) \tau(x_2) \dots \tau(x_m)) \end{align*} since $\tau\mu\tau^{-1}(\tau(x_i)) = \tau(\mu(x_i)) = \tau(x_{i+1})$. So $\tau \mu \tau^{-1}$ is again an $m$-cycle. Therefore, conjugation by $\tau \in S_n$ preserves the length of $\mu$. Now, recall that if $n \geq 3$ ($n\leq 2$ implies $S_n$ is abelian), $A_n$ is generated by the $3$-cycles. Hence, if $\sigma \in A_n$ then $\sigma = \mu_1\dots \mu_k$ where the $\mu_i$ are $3$-cycles. Then \begin{align*} \tau \sigma \tau^{-1} & = \tau(\mu_1\dots \mu_k)\tau^{-1} \\ & = \tau \mu_1 \tau^{-1} \dots \tau\mu_k\tau^{-1} \end{align*} which is a product of $3$-cycles, hence in $A_n$. (Note that the $[S_n:A_n]=2$ proof is much easier.)
$\prod_{i=1}^{p-1} (i^2+1) \equiv 4 \pmod p$ if $p$ is a prime $\equiv3\pmod 4$
Let $u$ be a square root of $-1$ in the finite field $\Bbb F_{p^2}$. Then your product equals $$\prod_{a=0}^{p-1}(a+u)(a-u)=f(u)f(-u)$$ where $$f(X)=\prod_{a=0}^{p-1}(a+X)=X^p-X$$ over the field $\Bbb F_{p^2}$. Thus $f(u)=u^p-u=-2u$, as $p\equiv3\pmod 4$, and similarly $f(-u)=2u$. Your product is then $-4u^2=4$ in $\Bbb F_{p^2}$.
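A quick numerical check of the statement (pure Python, small primes only; note that for $p=3$ the right-hand side $4$ reduces to $1$):

```python
# For primes p ≡ 3 (mod 4), check prod_{i=1}^{p-1} (i^2 + 1) ≡ 4 (mod p).

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def prod_mod(p):
    r = 1
    for i in range(1, p):
        r = r * (i * i + 1) % p
    return r

for p in range(3, 200):
    if is_prime(p) and p % 4 == 3:
        assert prod_mod(p) == 4 % p
```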
partial derivative with respect to a log of a vector
If I understood your notation you have a function $$f:\mathbb R^n\times \mathbb R^n\rightarrow \mathbb R $$ with $(x,y)\mapsto f(x,y):=\langle x, \ln y\rangle + \langle (1-x),\ln (1-y)\rangle$, denoting by brackets the scalar product of vectors in $\mathbb R^n$. Equivalently, we can write $$f(x,y)=\sum_{i=1}^nx_i\ln y_i + (1-x_i)\ln (1-y_i), (*) $$ as the vectors $1-x$ and $1-y$ are defined by $$1-x := (1-x_1, 1-x_2,\dots, 1-x_n) $$ and $$1-y := (1-y_1, 1-y_2,\dots, 1-y_n) $$ Then, for all $j=1,\dots, n$ $$\frac{\partial f}{\partial y_j } = x_j \frac{d \ln y_j}{d y_j } + (1-x_j)\frac{d \ln (1-y_j)}{d y_j },$$ as all the terms in $(*)$ with $i\neq j$ do not depend on $y_j$. Computing the derivatives of the logarithms we arrive at $$\frac{\partial f}{\partial y_j } = x_j \frac{1}{y_j } - (1-x_j)\frac{1}{1-y_j }.$$
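If you want to double-check the formula numerically, here is a central-difference test; the particular vectors $x$ and $y$ are my own made-up values, chosen only so that all logarithms are defined:

```python
import math

# f(x, y) = sum_i x_i ln y_i + (1 - x_i) ln(1 - y_i); the answer gives
# df/dy_j = x_j / y_j - (1 - x_j) / (1 - y_j).  Check by finite differences.

x = [0.2, 0.7, 0.5]
y = [0.3, 0.6, 0.9]

def f(yv):
    return sum(xi * math.log(yi) + (1 - xi) * math.log(1 - yi)
               for xi, yi in zip(x, yv))

h = 1e-6
for j in range(3):
    yp = list(y); yp[j] += h
    ym = list(y); ym[j] -= h
    numeric = (f(yp) - f(ym)) / (2 * h)
    analytic = x[j] / y[j] - (1 - x[j]) / (1 - y[j])
    assert abs(numeric - analytic) < 1e-6
```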
There is isomorphism $T:X \rightarrow Y$ such that $Tx_n = y_n$ for each $n$ implies $(x_n)_{n=1}^{\infty}$ and $(y_n)_{n=1}^{\infty}$ are equivalent
Since $Tx_n=y_n$ then by linearity we get $$ T\left(\sum_{k=1}^Na_kx_k\right)=\sum_{k=1}^Na_ky_k. $$ If the partial sum for $x$ converges (to $x=\sum_{k=1}^{+\infty}a_kx_k$) then by continuity of $T$ the partial sum for $y$ must converge too (to $T(x)=\sum_{k=1}^{+\infty}a_ky_k$). The other way round is similar, because $T^{-1}$ is continuous as well.
Passing a derivative through a limit.
From Bartle and Sherbert, Introduction to Real Analysis, 3rd ed.: 8.2.3 Theorem $\quad{}$ Let $J \subseteq \mathbb{R}$ be a bounded interval and let $(f_n)$ be a sequence of functions on $J$ to $\mathbb{R}$. Suppose that there exists $x_0 \in J$ such that $\left(f_{n}\left(x_0\right)\right)$ converges, and that the sequence $\left(f^{\prime}_{n}\right)$ exists on $J$ and converges uniformly on $J$ to a function $g$. Then the sequence $(f_n)$ converges uniformly on $J$ to a function $f$ that has a derivative at every point of $J$ and $f' = g$.
Solids produced from finite constructive solid geometry operations
Well, if the primitives are, say, quadratic surfaces, then the combinations will be piecewise quadratic. Similarly, if the primitives are linear surfaces (i.e. planes) then the combinations will be piecewise linear (i.e. polyhedra). More generally, the surfaces of objects formed by a finite number of CSG operations will consist of a finite number of pieces of the primitive surfaces, with the edge of each piece consisting of a finite number of pieces of pairwise intersections of the primitive surfaces.
Doubts in showing following $\lim_{n\to \infty}p^{1/n}=1$
If $0<p<1$ then $(\frac 1 p)^{1/n} \to 1$ because $\frac 1 p>1$. Hence $\frac 1 {p^{1/n}} \to 1$ so $p^{1/n} \to 1$.
Integration problem that may use DCT
First suppose $f$ is a continuous function on $[0,1)$ with compact support. Then $f$ is uniformly continuous, so given $\varepsilon > 0$, there corresponds a $\delta > 0$ such that for all $x$ and $y$, $|x - y| < \delta$ implies $|f(x) - f(y)| < \varepsilon$. Let $N$ be a positive integer such that $\frac{1}{N} < \delta$. If $n \ge N$, then for $k = 0,1,\ldots, n-1$, $$\int_{k/n}^{(k+1)/n}|f_n(x) - f(x)|\,dx \le \int_{k/n}^{(k+1)/n} n\int_{k/n}^{(k+1)/n} |f(y) - f(x)|\, dy\, dx < \int_{k/n}^{(k+1)/n}\varepsilon\, dx = \frac{\varepsilon}{n}.$$ Hence $\|f_n - f\|_1 < \varepsilon$ for all $n \ge N$. This shows that $f_n \xrightarrow{L^1} f$. Now suppose $f \in L^2(0,1)$; then $f \in L^1(0,1)$ as well. Let $\varepsilon > 0$. Since $C_c([0,1))$ is dense in $L^1(0,1)$, choose $g \in C_c([0,1))$ such that $\|f - g\|_1 < \frac{\varepsilon}{3}$. Furthermore, for $n \in \Bbb N$ and $k = 0,1,\ldots, n-1$, $$\int_{k/n}^{(k+1)/n} |f_n(x) - g_n(x)|\, dx \le \int_{k/n}^{(k+1)/n} n\int_{k/n}^{(k+1)/n} |f(y) - g(y)|\,dy\, dx = \int_{k/n}^{(k+1)/n} |f(y) - g(y)|\,dy.$$ Summing over $k$ gives $\|f_n - g_n\|_1 \le \|f - g\|_1 < \frac{\varepsilon}{3}$ for all $n \in \Bbb N$. Since $g_n \xrightarrow{L^1} g$, there exists an $N \in \Bbb N$ such that $\|g_n - g\|_1 < \frac{\varepsilon}{3}$ for all $n \ge N$. So if $n \ge N$, $$\|f_n - f\|_1 \le \|f_n - g_n\|_1 + \|g_n - g\|_1 + \|g - f\|_1 < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.$$ Since $\varepsilon$ is arbitrary, $f_n \xrightarrow{L^1} f$.
How to find a matrix $X$ such that $X+X^2+X^3 = \begin{bmatrix} 1&2005\\ 2006&1 \end{bmatrix}$?
There's no such $X$, even with rational entries. If there were, then it would have an eigenvalue that's either rational or a quadratic irrationality. But if $\lambda$ is an eigenvalue of $X$ then $\lambda + \lambda^2 + \lambda^3$ is an eigenvalue of $\left[\begin{array}{cc}1&2005\cr2006&1\end{array}\right]$. But those eigenvalues are the roots $x = 1 \pm \sqrt{2005\cdot 2006}$ of $(x-1)^2 = 2005 \cdot 2006$, and the polynomial $(\lambda^3+\lambda^2+\lambda-1)^2 - 2005 \cdot 2006$ turns out to be irreducible, so none of its roots can be the eigenvalue of a $2 \times 2$ matrix with rational entries, QED.
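The irreducibility claim is best left to a computer algebra system, but the eigenvalue computation is easy to sanity-check numerically (and $2005\cdot2006$ is not a perfect square, so the eigenvalues really are quadratic irrationalities):

```python
import math

# Eigenvalues of [[1, 2005], [2006, 1]] are the roots of (x-1)^2 = 2005*2006,
# i.e. 1 ± sqrt(2005*2006); also check 2005*2006 is not a perfect square.
n = 2005 * 2006
assert math.isqrt(n) ** 2 != n

r = math.sqrt(n)
for lam in (1 + r, 1 - r):
    # characteristic polynomial of the matrix, evaluated at lam
    assert abs((1 - lam) ** 2 - n) < 1e-5
```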
Visualizing that fundamental group is not abelian in general
The loops $ab^{-1}$ and $ab$ are not homotopic. Imagine the holes are instead pegs, like the left figure here: Source:https://www.tinkercad.com/things/11tjAfAiQNw-two-pegs-two-holes The loop $ab^{-1}$ is equivalent to an open loop $\mathsf{O}$ around both pegs. First, you wrap string around the pegs to make the shape of $ab^{-1}$; tie the start and end of the string into a knot at the base point. Then, notice you can simply nudge the string into an $\mathsf{O}$ shape without moving the base point knot or lifting up the string. In contrast, the loop $ab$ is different. If you wrap the string around the pegs to make the shape of $ab$, you create a figure $\mathsf{8}$. There is no way to nudge the string into an $\mathsf{O}$ shape without moving the base point or lifting up the string over the pegs. "Aren't they welded to the base point". Note that you are allowed to nudge any part of the string except the knot where the string starts and ends. The string is allowed to cross over itself and cross over the base point. If part of the string crosses over itself at the base point, you can still move that part; just don't move the base knot itself. You can think about homotopies like this to help your intuition. When you make any loop out of string, try nudging the string without (a) moving the base point, or (b) lifting up the string over the pegs. The result is another homotopically equivalent loop, and all homotopically equivalent loops can be made in this way. The pegs are obstacles. Wrapping a string around them creates a loop that you can't remove unless you lift that loop over the peg. In this way, just by recording which strings can be homotopically transformed into other strings, you can discover where the pegs are, even if the pegs are invisible. Thus this loop-wrapping approach (homotopy theory) uses strings within the space to reveal the invisible obstacles/holes outside the space.
How does Hilbert's Nullstellensatz generalize the "fundamental theorem of algebra"?
Let's prove that the Nullstellensatz implies the fundamental theorem of algebra in the 1D case. Let $p \in \Bbb C[z]$. The Nullstellensatz says that if a polynomial $f \in \Bbb C[z]$ vanishes on the common zero set of $\langle p \rangle$, then $f^r \in \langle p \rangle$ for some $r \in \Bbb N$. Now assume there exists a polynomial $p \in \Bbb C[z]$ that has no zeroes. Clearly the polynomial $1$ vanishes on the common zero set of $\langle p \rangle$ (the empty set!); the Nullstellensatz says that $1 = 1^r \in \langle p \rangle$ for some $r \in \Bbb N$. Since $1 \in \langle p \rangle$, $p$ is a unit, i.e., a nonzero constant. Thus by contraposition every nonconstant polynomial in $\Bbb C[z]$ has a root.
Bounding the integral of a $C^1$ function using its gradient
WLOG assume $\phi$ is supported in the unit ball (it has compact support, so it is supported in some ball). First look at $$ |f(x) - f*\phi_\epsilon(x)| \leq \int_{|z|\leq 1} \phi(z) |f(x) - f(x-\epsilon z)| dz $$ and replace the integrand using $$ f(x) - f(x-\epsilon z) = - \int_0^{\epsilon|z|} D_rf(x - r\omega) dr $$ with $\omega = z / |z|$, by the fundamental theorem of calculus. So integrating the whole thing over $x$, and changing the order of integration, you have $$ \int_{\Omega}|f(x) - f*\phi_\epsilon(x)|dx \leq \int_{|z|\leq 1} \phi(z) \int_0^{\epsilon|z|} \int_{\Omega} |D_rf(x - r\omega)| dx~ dr~ dz $$ The innermost integral for fixed $r\omega$ gives you $\int_\Omega |\nabla f| dx$. The integral over $r$ gives you the factor of $\epsilon$. And integrating $\phi(z)$ over the ball of radius $1$ gives you $1$.
What is induction, really?
For your "side question": You're missing something absolutely essential about induction - the base case. In a proof by induction, one must show not only that $P(n) \Rightarrow P(n+1)$, but also that $P(1)$ holds (or $P(0)$, or $P(2)$, or wherever you want $P$ to start being true). The standard analogy is of a sequence of dominoes; by showing $P(n) \Rightarrow P(n + 1)$, we've shown that each domino is heavy enough to knock the next one down, but we don't know that any of them fall at all until we've shown that the first one does. We could use some random other formula in the induction step, sure - for example, we could do the following attempted induction: Suppose that $1 + 2 + 3 + \cdots + n = 1 + \frac{n(n+1)}{2}$. Adding $n + 1$ to both sides and simplifying, we obtain $1 + 2 + 3 + \cdots + n + (n + 1) = 1 + \frac{(n+1)(n+2)}{2}$, which seems to complete the induction. The problem is that the missing step doesn't work - $A_1$, which in this case is the statement $1 = 1 + \frac{1(1+1)}{2}$, is false. Analogously, the first domino does not fall. To respond to your main question: The second proof is not really a proof. You've successfully demonstrated that the claim is true when $n = 3$, and you've given a good reason to believe that it will be true for other values of $n$; but that's not "proof". How do we know that there's not some very large $n$ with just the right properties so that the "teeth" of your triangles don't quite line up right? The proof by induction, however, is a proof, because all that's required in order to be sure that no $n$ escapes it is the axiom "all natural numbers are reachable by induction". The key difference is that the second "proof" hinges heavily on intuition; that's why it "feels" more natural to you, but that's also why it isn't proof - intuitions can be wrong, sometimes very badly wrong. The first proof uses only logic, which is as unarguable as we can reasonably expect to achieve. 
As for the speak/see distinction, I sort of see (forgive the pun) what you're getting at, but it's not quite right. Just because we can "say" something clearly doesn't mean we assume it's true - there are plenty of obviously false things we can say clearly, like "$1+1=3$". But it is the case that we (or rather, mathematicians in general) do not consider something proven unless we can "speak" the proof clearly and correctly.
How to solve equation
First, you should notice that this is an equation to be solved for $x>0$. Now compute the derivative of $f(x)=\log_4x+2x-9$. You have $f'(x)=1/(x\ln4)+2$. For $x>0$ this is positive so $f$ is strictly increasing on $(0,\infty)$. As you know one solution that is reasonably easy to guess, you should be able to conclude yourself. In case you don't know any solution to this type of equation, the reasoning I have written is only able to say if there exists a solution or not. As @TooOldFor Math pointed out, the general solution of this kind of equation ($\ln x=ax+b$) is given by the Lambert W function, which is a special (i.e. complicated) function.
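To illustrate: since $f$ is strictly increasing, bisection locates the unique root, which here is the easy-to-guess $x=4$ (indeed $\log_4 4 + 2\cdot 4 = 9$). A minimal sketch:

```python
import math

def f(x):
    return math.log(x, 4) + 2 * x - 9

# f is strictly increasing on (0, inf), so bisection finds the unique root
lo, hi = 1e-9, 100.0
assert f(lo) < 0 < f(hi)
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(root - 4) < 1e-9
```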
Evaluate $\lim_{n \to \infty} \sqrt[n]{3^n+4^n}$
$$\lim _{ n\rightarrow \infty }{ \sqrt [ n ]{ 3^{ n }+4^{ n } } } =4\cdot \lim _{ n\rightarrow \infty }{ \sqrt [ n ]{ \left( \frac { 3 }{ 4 } \right) ^{ n }+1 } } =4$$
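A quick numerical illustration of the squeeze $4 < \sqrt[n]{3^n+4^n} \le 4\cdot 2^{1/n}$ (which follows from $4^n < 3^n + 4^n \le 2\cdot 4^n$):

```python
import math

def a(n):
    # (3^n + 4^n)^(1/n), computed via logs to avoid float overflow
    return math.exp(math.log(3**n + 4**n) / n)

for n in (5, 50, 500):
    # small tolerances absorb floating-point rounding
    assert 4 - 1e-9 < a(n) <= 4 * 2 ** (1 / n) + 1e-9
assert abs(a(500) - 4) < 1e-9
```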
Is this simplification correct, and if so, what law does it illustrate?
The distributive law is the factoring law. You have factorised the expression by taking out the common factor of F. So it is the distributive law.
Proving two torus maps are homotopic
I will give an elementary proof of the problem using the fact that $T^2$ is a topological group and that its universal cover is contractible. We will start with some constructions in homotopy theory of topological groups, which are required to understand the proof given below and added here for convenience. First note that $\mathbb R^2$ forms a topological group under addition: $(x,y) + (x',y') := (x+x',y+y')$ and that $\mathbb Z^2$ is a discrete normal subgroup thereof. We identify $T^2$ with the quotient of $\mathbb R^2$ by $\mathbb Z^2$, so that $T^2$ again becomes a topological group under addition. Moreover the quotient map $p: \mathbb R^2 \to T^2$ becomes a group homomorphism and is easily seen to be the universal covering projection ($\mathbb Z^2$ discrete subgroup $\Rightarrow$ $p$ is covering projection; $\mathbb R^2$ is contractible $\Rightarrow$ $p$ is universal). Next, we observe that for $[f],[g]\in [(X,\ast),(T^2,0)]$ the sum $[f]+[g] := [f+g]$ is well defined, turning $[(X,\ast),(T^2,0)]$ into a group. The same arguments show that $[(X,\ast),(\mathbb R^2,0)]$ is a group under pointwise addition of representatives as well (as is $[(X,\ast),(G,1)]$ for any topological group $G$ with unit $1$), and that the map $p_\sharp : [(X,\ast),(\mathbb R^2,0)] \to [(X,\ast),(T^2,0)]$ given by $p_\sharp([f]) := [p \circ f]$ is a group homomorphism. Now $\pi_1(T^2,0) = [(S^1,1), (T^2,0)]$ is a group in two ways, by means of composition of (representatives of) loops $[\alpha], [\beta] \mapsto [\alpha \ast \beta]$ and by means of pointwise addition of (representatives of) loops $[\alpha], [\beta] \mapsto [\alpha + \beta]$. Both operations share the same unit, the (class of the) constant loop sending everything to $0 \in T^2$ and denoted simply by $0: (S^1,1) \to (T^2,0)$. We can also observe that for any loops $\alpha, \beta, \gamma, \delta$ we have $(\alpha + \beta) \ast (\gamma + \delta) = (\alpha + \gamma) \ast (\beta + \delta)$. 
Therefore $$[\alpha]+[\beta] = ([\alpha] \ast [0]) + ([0] \ast [\beta]) = ([\alpha] + [0]) \ast ([0] + [\beta]) = [\alpha] \ast [\beta],$$ hence the two operations are in fact the same on $\pi_1(T^2,0)$. The same argument can be used to show the analogous statement for $\pi_1(\mathbb R^2,0)$ (or $\pi_1(G,1)$ for any topological group $G$ with unit $1$). Now back to the problem: Given two maps $\varphi, \psi: T^2 \to T^2$, such that for some point $x \in T^2$, we have $\varphi(x) = \psi(x) = x$ and $\pi_1(\varphi) = \pi_1(\psi): \pi_1(T^2,x) \to \pi_1(T^2,x)$, we want to show $\varphi \simeq \psi$, where the homotopy can be taken relative to $x$. Replacing $\varphi$ with $\xi \mapsto \varphi(\xi + x) - x$ and $\psi$ with $\xi \mapsto \psi(\xi + x) - x$ if necessary, we may assume $x=0$. It will then suffice to show, that $\chi \simeq 0$, where $\chi := \varphi - \psi$. Since the induced map $\pi_1(\chi): \pi_1(T^2,0) \to \pi_1(T^2,0)$ on fundamental groups is trivial (this is where we need all the constructions for topological groups), we can lift $\chi$ to a map $\bar{\chi}: (T^2,0) \to (\mathbb R^2,0)$ with $\chi = p \circ \bar{\chi}$. We now define $H: T^2 \times I \to T^2$ by $H(x,t) = p(t\bar\chi(x))$, which is easily checked to be the required homotopy $0 \simeq \chi$.
Is this implication in my solution valid?
For $x\ge 0$ you have already computed $$f'(x) = \ln(2x+1)+\frac{2x+3}{2x+1}$$ and both terms are positive. Therefore $f$ is strictly increasing for $x\ge 0.$
Is the space $C^1[a,b]$ with its usual 'sup-norm ' topology is separable?
If $f$ is continuously differentiable on $[a,b]$ and $\epsilon >0$ then by the Weierstrass approximation theorem there exists a polynomial $q$ such that $\|f'-q\|_{\infty} <\epsilon$. Since $f(x)=f(a)+\int_a^{x} f'(t)dt$, setting $p(x)=f(a)+\int_a^{x} q(t) dt$ gives $|f(x)-p(x)| \le \int_a^x |f'(t)-q(t)|\,dt \le (b-a)\epsilon$ for all $x$. Thus $\|f-p\|_{\infty} \leq (b-a)\epsilon$ and $\|f'-p'\|_{\infty} = \|f'-q\|_{\infty} < \epsilon$. Note that $p$ is also a polynomial. Now approximate the coefficients of $p$ by rational numbers to see that polynomials with rational coefficients form a countable dense set.
Proof: $||u+v||^2≤(||u||+||v||)^2$ Using Cauchy Schwarz Inequality
Suppose $X\subset \mathbb R$ and consider two elements of that set $a,b\in X$ and a function $f:X\to X$. If $f$ is weakly increasing then $a\ge b \Rightarrow f(a)\ge f(b)$. In your case $X=\mathbb R_+$, $a=(||u||+||v||)^2$, $b=||u+v||^2$, and $f(x)=\sqrt{x}$. The square root is well defined in $\mathbb R_+$ and actually strictly increasing in it.
Solve the recurrence relation: $T_n=\sqrt nT_{\sqrt n} +1$
For $n = 1$ you have $T_1 = \sqrt{1} T_1 + 1 = T_1 + 1$, which is impossible.
How to find the magnitude of two vectors given the magnitude of their sum?
Use the Sine Law of triangles: In a triangle with sides of length $a_1,a_2,a_3$ that are opposite to angles $A_1,A_2,A_3$ we have $$a_1/\sin A_1=a_2/\sin A_2=a_3/\sin A_3.$$ The third angle in the green triangle is $180^\circ-30.8^\circ-18.1^\circ=131.1^\circ.$ Therefore $$|v_{tot}|/\sin 131.1^\circ=|v_A|/\sin 18.1^\circ=|v_B|/\sin 30.8^\circ.$$
Writing an interval with infinite unions or infinite intersection
The first two are wrong: $(a+1,b-1)$ is the smallest of the intervals, so it equals the intersection. The third is wrong since $a$ and $b$ are not members of any of the sets forming the union. The fourth is correct since every point $x$ with $a<x<b$ is included in some set of the union. Note: I assume my comment about notation above is correct.
Optimal time to renew a library book
First assume that I may extend only once by $n$ days. If I have held the item for $1\le k\le n$ days (since the beginning or my last successful renewal), and I try to extend, then this will succeed with probability $(1-p)^k$ and hence my total time (since the beginning or my last successful renewal) will be $=n+k$ with probability $(1-p)^k$ and $=n$ with probability $1-(1-p)^k$. Hence the expected value is $n+k(1-p)^k$, and I want to maximize this. Clearly, $k+1$ is better than $k$ iff $(k+1)(1-p)^{k+1}>k(1-p)^k$, i.e., $k<\frac1p-1$. Hence the (or a) best $k$ is $k=\left\lceil\frac1p-1\right\rceil$ if this is valid, i.e., if $\frac1{n+1}\le p<1$; it is $k=n$ if $p<\frac1{n}$; and the choice is irrelevant if $p=1$. In other words, $k$ is given by $\frac1{k+1}\le p<\frac1k$ if possible. At any rate, this will give me an expected time of $n'= n+k(1-p)^k$. Note that if the optimal $k$ is $\gg1$ (and $p$ not much smaller than $\frac1n$) then we will have $n'\approx n+\frac ke$. Next assume we still have two attempts to extend by $n$ days available. If I have held the item for $1\le k'\le n$ days and try to extend, this will again succeed with probability $(1-p)^{k'}$, but as the extension will be $n'$ (in expectation) instead of just $n$, the total expected lease time will be $n$ with probability $1-(1-p)^{k'}$ and $k'+n'=n+k'+k(1-p)^k$ with probability $(1-p)^{k'}$, i.e., $$n+(1-p)^{k'}(k'+k(1-p)^k). $$ This time, $k'+1$ is better than $k'$ iff $ \frac1p-1-k(1-p)^k>k'$. Hence we will pick $$k'=\left\lceil \frac1p-1-\left\lceil\frac1p-1\right\rceil^{\vphantom|}(1-p)^{\left\lceil\frac1p-1\right\rceil}\right\rceil $$ for "moderate" $p$, but $$ k'=\min\left\{\left\lceil\frac1p-1-n(1-p)^n\right\rceil,n\right\} $$ for small $p<\frac1n$.
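The first-stage rule ($k$ given by $\frac1{k+1}\le p<\frac1k$) can be checked against brute force; the parameter values below are my own picks for illustration:

```python
import math

def best_k_bruteforce(p, n):
    # maximize the expected time n + k*(1-p)**k over k = 1..n
    return max(range(1, n + 1), key=lambda k: k * (1 - p) ** k)

for p in (0.15, 0.3):
    n = 30
    formula = min(math.ceil(1 / p - 1), n)
    assert best_k_bruteforce(p, n) == formula
```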
Is it possible to generate $A$ from a linear system of the form $Ax=b$ given $x$ and $b$?
Not sure if this is what you're looking for, but take the case $b = (b_{1}, b_{2}, b_{3}), x = (x_{1}, x_{2}, x_{3})$ with each $x_{i}$ invertible, then a trivial answer is $A = (a_{i,j})$ where $a_{i,i} = b_{i}x_{i}^{-1}$ and other entries are $0$.
If $p$ prime and $0<x<y<z<p$ with squares congruent mod $p$, then $x+y+z\mid x^2+y^2+z^2$
If $x^2\equiv y^2\pmod p$ then $p$ divides $y^2-x^2=(y-x)(y+x)$. By assumption, $0<y-x<p$, hence $p\nmid y-x$ and $p\mid x+y$. Again from $0<x<y<p$, we conclude $0<x+y<2p$, hence $x+y=p$. By the same argument, $x+z=p$, hence $y=z$, contradicting the assumption $y<z$.
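A brute-force check of the conclusion: for each odd prime, every nonzero square mod $p$ has exactly the two roots $x$ and $p-x$, so no three distinct residues in $(0,p)$ can share a square.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in [q for q in range(3, 50) if is_prime(q)]:
    squares = {}
    for x in range(1, p):
        squares.setdefault(x * x % p, []).append(x)
    # each nonzero square has exactly the two roots x and p - x
    assert all(len(v) == 2 for v in squares.values())
```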
Is Hlawkas Inequality holds for sobolev space
This is a long comment. Quote from Hlawka's functional inequality: Moreover, Witsenhausen showed that the space $L^p(0, 1)$ is a Hlawka space for $1\le p\le 2$. Therefore, one can see that all Banach spaces having the property that all its finite dimensional subspaces can be embedded linearly and isometrically in the space $L^p([0, 1])$, with some $1\le p\le 2$, are Hlawka spaces (see Niculescu and Persson, and Lindenstrauss and Pełczyński). Further, Witsenhausen proved that a finite-dimensional real space with piecewise linear norm is embeddable in $L^1$ if and only if it is a Hlawka space. However, Neyman showed that in the general case embeddability in $L^1$ does not characterize Hlawka spaces. Concluding, to the best of the author’s knowledge, no characterization of Hlawka spaces is presently known.
For $a,b,c>0$, prove that $\frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}\ge a+b+c+\frac{4(a-b)^2}{a+b+c}$
From your last one: $\text{LHS}\ge (|a-b|+|b-c|+|c-a|)^2 \ge (|a-b|+|b-c+c-a|)^2=4(a-b)^2$
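If you want to double-check the target inequality itself numerically before polishing the write-up, a quick grid test (floats, illustrative only):

```python
# gap(a,b,c) = LHS - RHS of the inequality; it should be >= 0 for a,b,c > 0
def gap(a, b, c):
    lhs = a * a / b + b * b / c + c * c / a
    rhs = a + b + c + 4 * (a - b) ** 2 / (a + b + c)
    return lhs - rhs

vals = [0.5, 1.0, 1.5, 2.0, 3.0]
for a in vals:
    for b in vals:
        for c in vals:
            assert gap(a, b, c) >= -1e-9   # tolerance for float rounding
```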
Is there a name for this "mean"?
The quantity $XM$ lies between the arithmetic and geometric means, that is $$AM\geq XM\geq GM.$$ Notice that $$3AM^2=2XM^2+QM^2,\tag{1}$$ and so since $AM\leq QM$, it follows that $$3AM^2=2XM^2+QM^2\geq 2XM^2 +AM^2\Rightarrow AM\geq XM.$$ The AM-GM inequality implies that $XM\geq GM$. From $(1)$ we may write $$XM=\sqrt{\frac{3AM^2-QM^2}{2}},$$ but a nicer way to express $XM$ is given in Thomas Andrews comment: $$XM=GM\sqrt{\frac{GM}{HM}}.$$ See also: Newton's inequalities. More generally quantities such as $XM$ are often referred to as Elementary Symmetric Means.
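For what it's worth, taking $XM=\sqrt{(ab+bc+ca)/3}$ for three positive numbers (my reading of the definition, inferred from the identities above), the relations can be checked numerically:

```python
import math

# means of three positive numbers; XM = sqrt((ab+bc+ca)/3) is an assumption
def AM(a, b, c): return (a + b + c) / 3
def QM(a, b, c): return math.sqrt((a * a + b * b + c * c) / 3)
def GM(a, b, c): return (a * b * c) ** (1 / 3)
def HM(a, b, c): return 3 / (1 / a + 1 / b + 1 / c)
def XM(a, b, c): return math.sqrt((a * b + b * c + c * a) / 3)

for (a, b, c) in [(1, 2, 3), (0.5, 0.5, 4.0), (2, 2, 2)]:
    # identity (1): 3 AM^2 = 2 XM^2 + QM^2
    assert abs(3 * AM(a, b, c) ** 2 - (2 * XM(a, b, c) ** 2 + QM(a, b, c) ** 2)) < 1e-9
    # XM = GM * sqrt(GM / HM)
    assert abs(XM(a, b, c) - GM(a, b, c) * math.sqrt(GM(a, b, c) / HM(a, b, c))) < 1e-9
    # GM <= XM <= AM
    assert GM(a, b, c) - 1e-9 <= XM(a, b, c) <= AM(a, b, c) + 1e-9
```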
Cartan subalgebra of compact group as "annihilator" of a single element
Here is a possible method. We'll show that almost all $X$'s have the property. First of all, the connected Lie subgroup $H\subset G$ generated by $\mathfrak h$ is closed. If not, then $\bar H$ is bigger, still connected and Abelian, so its Lie algebra would still be Abelian and bigger. So $H$ is a compact Abelian Lie group, i.e. a torus, i.e. (isomorphic to) $\mathbb R^n/\mathbb Z^n$. Now take any $X\in\mathfrak h =\mathbb R^n$ such that the 1-parameter group $L$ it generates is dense in $\mathbb R^n/\mathbb Z^n$ (in other words, the coefficients of $X\in\mathbb R^n$ should be linearly independent over $\mathbb Q$). If $Y\in\mathfrak g$ is such that $[X,Y]=0$ then $Ad_g Y=Y$ for every $g\in L$, hence $Ad_gY=Y$ for every $g\in\bar L=H$, hence $[Z,Y]=0$ for every $Z\in\mathfrak h$, hence (by maximality of $H$) $Y\in\mathfrak h$. Edit: Here is a proof with roots. Take any $X\in\mathfrak h$ such that $\alpha(X)\neq0$ for every root $\alpha$. Now use the decomposition $\mathfrak g=\mathfrak h\oplus\bigoplus_\alpha R_\alpha$, where $R_\alpha=\{Z\in\mathfrak g|[Y,Z]=\alpha(Y)Z\,\forall Y\in\mathfrak h\}$ is the $\alpha$-root space, to see that if $[X,Y]=0$ for some $Y\in\mathfrak g$ then all the $R_\alpha$-components of $Y$ vanish, i.e. $Y\in\mathfrak h$.
Calculus.Integration of definite integral
Let $y=b\cos(t)$. Hence, we have $$\int_{-b}^{b} \left(y^2-b^2\right)^4 dy = \int_{0}^{\pi} b^9 \sin^9(t)dt = 2b^9\int_0^{\pi/2} \sin^9(t)dt$$ From here, we have $$\int_0^{\pi/2} \sin^9(t)dt = \dfrac89 \cdot \dfrac67 \cdot \dfrac45 \cdot \dfrac23$$
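The value can be confirmed exactly by expanding $(y^2-b^2)^4$ with the binomial theorem and integrating term by term (the common factor $b^9$ cancels, so exact rational arithmetic suffices):

```python
from fractions import Fraction
from math import comb

# ∫_{-b}^{b} (y^2-b^2)^4 dy = 2 b^9 Σ_k C(4,k)(-1)^{4-k}/(2k+1),
# which should equal 2 b^9 · (8/9)(6/7)(4/5)(2/3).
series = 2 * sum(Fraction(comb(4, k) * (-1) ** (4 - k), 2 * k + 1)
                 for k in range(5))
wallis = 2 * Fraction(8, 9) * Fraction(6, 7) * Fraction(4, 5) * Fraction(2, 3)
assert series == wallis == Fraction(256, 315)
```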
show for every homomorphism $\varphi$, there is a pointed map that induces it.
A group homomorphism $\varphi:\Bbb Z\to\Bbb Z$ is determined by its value $n=\varphi(1)$, and must be of the form $\varphi=x\mapsto nx$. Now consider $S^1$ as the set of unit complex numbers, with $1$ as the distinguished point, and verify that $$f=x\mapsto x^n$$ induces $\varphi$ on the homotopy group.
How to show that $\frac{d^n}{dx^n} (x^2-1)^n = 2^n \cdot n!$ for $x=1$
By the Leibniz rule, $$\frac{d^n}{dx^n}(x^2-1)^n=2nx\frac{d^{n-1}}{dx^{n-1}}(x^2-1)^{n-1}+2n(n-1)\frac{d^{n-2}}{dx^{n-2}}(x^2-1)^{n-1}$$ The second term is $0$ at $x=1$, since $(x^2-1)^{n-1}$ has $(x-1)^{n-1}$ as a factor and we only differentiate $n-2$ times. So we have the recursion $I_n=2nI_{n-1}$.
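A direct check of $I_n = 2^n\,n!$ from the binomial expansion $(x^2-1)^n = \sum_k \binom nk(-1)^{n-k}x^{2k}$, for small $n$:

```python
from math import comb, factorial

def nth_deriv_at_1(n):
    # n-th derivative of (x^2-1)^n at x = 1, term by term:
    # the term x^j contributes j!/(j-n)! when j >= n
    total = 0
    for k in range(n + 1):
        j = 2 * k
        if j >= n:
            total += comb(n, k) * (-1) ** (n - k) * factorial(j) // factorial(j - n)
    return total

for n in range(1, 9):
    assert nth_deriv_at_1(n) == 2 ** n * factorial(n)
```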
How this covariance come?
You should have mentioned that he's applying a lemma – how did you think we'd be able to guess that without the book? Here's a copy of the book. This calculation is intended “as an illustration of the application of Lemma $5.3.3$”. Upon comparison with item a. in Lemma $5.3.3$ (p. $220$), it seems that it's a typo and he forgot the factor $\sigma^2$.
Is this an even function?
Since $1$ and $-1$ satisfy the condition $|x|\leq 1$, their images are determined by the expression $3-x$, so $f(1)=2\neq f(-1)=4$, hence $f$ isn't even. You can also plot the curve of this function and check that it isn't symmetric about the ordinate axis.
Show that the function |P|: $\mathbb{C}\to\mathbb{R^+_0}$ has a minimum
If $P$ is a polynomial it is also continuous and so it has a minimum on every closed disk. Let $M$ be the minimum on the disk of radius $1$. On the other hand you know that $\lim_{z\rightarrow\infty} P(z) = \infty$ (note that, as DonAntonio commented, you may want to take $|z|\rightarrow\infty$, but I consider these two notions equivalent here). This means that if $z$ lies outside of a sufficiently large disk (say of radius $r$) then $|P(z)|>M$. Let $N$ be the minimum over the disk of radius $r$. Then $\min |P(z)|$ is the minimum over three regions (the disk of radius $1$, the disk of radius $r$, and the outside of the disk of radius $r$). But by the choice of $r$ the minimum can't be attained outside of the disk of radius $r$, and so the minimum is $\min(M,N)$.
Expectation related to sparse Gaussian random vectors
For fixed $v$ the random variable $T=\langle X,v\rangle$ is distributed as $N(0,\|v\|^2)$. (It is gaussian, because it is a linear combination of independent gaussians; its variance is the sum of the squares of the coefficients.) By elementary calculus (or by the method of believing the internet) we know $E|T|=\|v\|\sqrt{\frac 2\pi}$. The calculus: assuming $T\sim N(0,1)$, we have (where $\phi$ is the standard normal density) $$E|T|=\int_{-\infty}^\infty |t|\phi(t)\,dt=2\int_0^\infty t\phi(t)\,dt=\frac 2{\sqrt{2\pi}} \int_0^\infty t e^{-t^2/2} \,dt=\sqrt{\frac 2 \pi}\int_0^\infty e^{-t^2/2}d(t^2/2)=\sqrt{\frac 2 \pi} $$ and so on. This is for fixed $v$. Your $\langle Z,x\rangle=\langle X,V\rangle$ where $V=(Y_1x_1,Y_2x_2,\ldots,Y_nx_n)$, and your desired $E|\langle Z,x\rangle|=E ( E[|\langle X,V\rangle|\mid V])=\sqrt{2/\pi}E\| V\|$. This last expectation is hard to compute: $$E\|V\| = \frac{\sum_S \sqrt{\sum_{i\in S} x_i^2}}{\binom n s}$$ where $S$ ranges over all size-$s$ subsets of $\{1,\ldots,n\}$.
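A seeded Monte Carlo check of $E|\langle X,v\rangle| = \lVert v\rVert\sqrt{2/\pi}$ with an arbitrary test vector (my own choice, purely illustrative):

```python
import math
import random

# E|<X, v>| = ||v|| * sqrt(2/pi) for X a standard Gaussian vector in R^3
random.seed(0)
v = [1.0, -2.0, 0.5]
norm_v = math.sqrt(sum(t * t for t in v))

trials = 200_000
est = sum(abs(sum(vi * random.gauss(0, 1) for vi in v))
          for _ in range(trials)) / trials

# generous tolerance: the Monte Carlo standard error here is about 0.003
assert abs(est - norm_v * math.sqrt(2 / math.pi)) < 0.02
```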
Showing the Heisenberg Group is Isomorphic to the set $\mathbb{R}^3$
Suppose that $f:(s,x,y)\mapsto(s+c_1xy/2,x,y)$ is a mapping between the two group laws: $$(s,x,y)\cdot(s',x',y') = (s+s' + \frac12(c_2xy' +c_3 x'y), x+x', y+y') $$ and $$(s,x,y)\star(s',x',y') = (s+s'+\frac12((c_1+c_2)xy'+(c_1+c_3)x'y),x+x',y+y')$$ which are both associative. Then verify the equation $$f((s,x,y)\cdot(s',x',y'))=f(s,x,y)\star f(s',x',y').$$ In your case, you want to specialize to $\;c_1=c_2=1,c_3=-1.$
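The homomorphism property in the special case $c_1=c_2=1$, $c_3=-1$ (where the first law has cocycle $\frac12(xy'-x'y)$ and the second simplifies to $xy'$) can be verified mechanically with exact rational arithmetic:

```python
from fractions import Fraction as F

half = F(1, 2)

def dot(u, v):
    # first group law with c2 = 1, c3 = -1
    s, x, y = u; t, xp, yp = v
    return (s + t + half * (x * yp - xp * y), x + xp, y + yp)

def star(u, v):
    # second group law with c1 = c2 = 1, c3 = -1: cocycle reduces to x*y'
    s, x, y = u; t, xp, yp = v
    return (s + t + x * yp, x + xp, y + yp)

def f(u):
    # candidate isomorphism (s, x, y) -> (s + xy/2, x, y)
    s, x, y = u
    return (s + half * x * y, x, y)

triples = [(F(1), F(2), F(3)), (F(-1), F(4), F(1)), (F(0), F(-2), F(5))]
for u in triples:
    for v in triples:
        assert f(dot(u, v)) == star(f(u), f(v))
```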
Proving a sequence is convergent
Hint: Try to write $y_{n+1} $ as $$y_{n+1}= \sum_{k=1}^{n} (-1)^{n-k} \dfrac{x_k}{2^{n-k+1}} + Cy_0$$ where $C$ could depend on $n$.
Topology of complex projective plane
The key theorem you need from elementary topology is that any surjective map from a compact space to a Hausdorff space is a quotient map. To apply that theorem you must set up the appropriate quotient maps. This is a purely topological problem, requiring you to guess the correct formulas for the appropriate quotient maps. You have described constructions 1 and 2, but let me also describe a construction which which is closest to the definition of $\mathbb{C}P^n$: Construction 0: $\mathbb{C}P^n$ is the quotient of $\mathbb{C}^{n+1}-\{0\}$ defined by the equivalence relation $x \sim \lambda x$ for each $x \in \mathbb{C}^{n+1}-\{0\}$ and each $\lambda \in \mathbb{C} - \{0\}$. Let $f : \mathbb{C}^{n+1}-\{0\} \to \mathbb{C}P^n$ be the function such that $f(x)$ equals the equivalence class of $x$. The topology on $\mathbb{C}P^n$ is defined to be the unique one such that $f$ is a quotient map. Next let me compare construction 0 and construction 1. Let $g : S^{2n+1} \to \mathbb{C}P^n$ be the restriction of the function $f$ to the unit sphere $S^{2n+1} \subset \mathbb{C}^{n+1}-\{0\} = \mathbb{R}^{2n+2}-\{0\}$. According to the definition of $f$, two points $x,y \in S^{2n+1}$ satisfy $g(x)=g(y)$ if and only if there exists $\lambda \in \mathbb{C}-\{0\}$ such that $x=\lambda y$ (notice that $\lambda$ must also have norm $1$ since $x$ and $y$ have norm $1$). Since the domain of $g$ is compact and its range is Hausdorff, the above theorem applies, and therefore $g$ is a quotient map. It follows that $\mathbb{C}P^n$ is homeomorphic to the quotient space obtained from $S^{2n+1}$ by identifying $x,y$ if and only if there exists $\lambda \ne 0$ such that $x=\lambda y$ (again the $\lambda$ must have norm $1$). Now, to get to the heart of your question, let's compare construction 1 and construction 2. 
The method is similar: construct a quotient map $h : \mathbb{D}^{2n} \to \mathbb{C}P^n$ such that $h(x)=h(y)$ if and only if $x,y \in \partial\mathbb{D}^{2n}$ and $x=\lambda y$ for a nonzero complex constant (which, again, must have norm $1$). We will construct $h$ using $g$.

To do this, consider the inclusion $\mathbb{D}^{2n} \subset \mathbb{C}^{n} \subset \mathbb{C}^{n+1}$. Map $\mathbb{D}^{2n}$ to $S^{2n+1}$ using the function $$p(a_1,b_1,...,a_n,b_n,0,0) = (a_1,b_1,...,a_n,b_n,\,\sqrt{1 - (a_1^2+b_1^2+...+a_n^2+b_n^2)}\,\,,\,\,0) $$ We then have a map $$h = g \circ p : \mathbb{D}^{2n} \to \mathbb{C}P^n $$

Now check that this map $h$ is surjective, its domain is compact, and its range is Hausdorff. Applying the key theorem, $h$ is a quotient map. Also check that $h(x)=h(y)$ if and only if $x=y$, or $x,y \in \partial \mathbb{D}^{2n}$ and $x = \lambda y$ for some $\lambda \in \mathbb{C}$ (which, as said, must have norm $1$). It follows that $\mathbb{C}P^n$ has the description in your construction 2.
Solving the congruence $7x \equiv 41 \mod{13}$
Reduce the congruence to $7x \equiv 2$. Since $\gcd(7,13) = 1$, there exists a unique inverse of $7$ modulo $13$. Note that the inverse is just $2$ since $7 \cdot 2 \equiv 1 \pmod {13}$. Then it follows that $(7)(7^{-1})x \equiv x \equiv 2(7^{-1}) \equiv 4 \pmod {13}$ and that is the solution set.
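As a quick sanity check (not part of the original answer), the same computation can be done in Python; `pow(7, -1, 13)` computes the modular inverse (Python 3.8+):

```python
# Solve 7x ≡ 41 (mod 13): reduce, invert 7 mod 13, multiply.
inv7 = pow(7, -1, 13)   # modular inverse of 7 mod 13; here inv7 == 2
x = (41 * inv7) % 13    # 41 ≡ 2 (mod 13), so x ≡ 2 * 2 ≡ 4
print(inv7, x)          # → 2 4
```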
Hunting for a basis: Rational Canonical Form
Unfortunately, I did all of the following work prior to reading the first paragraph of your post. I apologize if this exposition is therefore unhelpful, but I figured I would just post it regardless, since I had already gone through the trouble of writing it all out.

We begin by finding the Smith Normal Form of $A$ by using elementary row and column operations to transform the matrix $xI - A$ into a diagonal matrix with each diagonal entry dividing the next. Be sure to keep track of your elementary row operations, and use elementary column operations whenever possible because they do not affect your basis. We find that the Smith Normal Form of $A$ is given by $$\begin{pmatrix} 1 & 0 & 0 \\ 0 & x-2 & 0 \\ 0 & 0 & (x-2)^2 \end{pmatrix}.$$

We note that the elementary row operations used to find the Smith Normal Form were (1.) $R_1 - \frac{1}{9}(x-2) R_3 \mapsto R_1,$ (2.) $R_2 + R_3 \mapsto R_2,$ (3.) $R_1 + \frac{5}{9} R_2 \mapsto R_1,$ and (4.) swap $R_1$ and $R_3.$ We perform the inverse elementary row operations on the $3 \times 3$ identity matrix to obtain the generators of the cyclic factors in the invariant factor decomposition of $V$ as an $F[x]$-module. We have that \begin{align*} (1.) \, [e_1, e_2, e_3] &\to \biggl[e_1, e_2, e_3 + \frac{1}{9}(x-2) e_1 \biggr], \\ \\ (2.) \, \biggl[e_1, e_2, e_3 + \frac{1}{9}(x-2) e_1 \biggr] &\to \biggl[e_1, e_2, e_3 + \frac{1}{9}(x-2) e_1 - e_2 \biggr], \\ \\ (3.) \, \biggl[e_1, e_2, e_3 + \frac{1}{9}(x-2) e_1 - e_2 \biggr] &\to \biggl[e_1, e_2 - \frac{5}{9}e_1, e_3 + \frac{1}{9}(x-2) e_1 - e_2 \biggr], \\ \\ (4.) \, \biggl[e_1, e_2 - \frac{5}{9}e_1, e_3 + \frac{1}{9}(x-2) e_1 - e_2 \biggr] &\to \biggl[e_3 + \frac{1}{9}(x-2) e_1 - e_2, e_2 - \frac{5}{9}e_1, e_1 \biggr].
\end{align*} Considering that $x$ acts on $V$ via $x \cdot v = T(v),$ and that the matrix of $T$ with respect to the standard basis is $A,$ it is easy to see that this list of generators reduces to $\bigl[0, e_2 - \frac{5}{9} e_1, e_1 \bigr].$ We conclude that $\bigl\{e_2 - \frac{5}{9} e_1, e_1, T(e_1) \bigr\} = \bigl\{\bigl(-\frac{5}{9}, 1, 0 \bigr), (1,0,0), (2,9,-9) \bigr\}$ is the desired basis.
Associating a variety to a cone?
This is called the theory of toric varieties. Canonical references are:

- Fulton, *Introduction to Toric Varieties* (very concise)
- Cox, Little, Schenck, *Toric Varieties* (very detailed)

The point of this theory is that describing these varieties in terms of combinatorial data, i.e. cones, makes it straightforward to compute a huge number of things that typically in algebraic geometry are difficult or even intractable --- for example, cohomology groups of sheaves.
An example of Higman
Let $m=\text{ord}(b)$. We know $p\mid 2^m-1$, that is, $2^m\equiv1\pmod p$. The positive integers $m$ solving $2^m\equiv1\pmod p$ are exactly the multiples of $k$, where $k$ is the multiplicative order of $2$ modulo $p$, that is, the least positive integer with $2^k\equiv1\pmod p$. Therefore $k\mid m$.
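The multiplicative-order fact can be illustrated in Python (a minimal sketch; the prime $p = 31$ and the range bound are arbitrary choices for the demo):

```python
def mult_order(a, p):
    """Least k >= 1 with a**k ≡ 1 (mod p); assumes gcd(a, p) = 1."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

p = 31
k = mult_order(2, p)    # 2**5 = 32 ≡ 1 (mod 31), so k == 5
# every exponent m with 2**m ≡ 1 (mod p) is a multiple of k:
solutions = [m for m in range(1, 100) if pow(2, m, p) == 1]
print(k, solutions[:3])   # → 5 [5, 10, 15]
```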
Is there a way to simplify $(\sin(su))(\cos u)^s$?
This is a partial answer. We must have $\Re s>-1$ for the integral to converge. For $0<r<1$, let the contour $C_r$ be the boundary of $\{z\in\mathbb{C} : r<|z|<1, 0<\arg z<\pi/2\}$ (with the usual "counterclockwise" orientation), consisting of two quarter-circles and two line segments. Then, assuming the principal values of $(\ldots)^s$ are taken, we have (by Cauchy's integral theorem) $$0=\int_{C_r}(1+z^2)^s\frac{dz}{z}=\int_r^1\frac{(1+x^2)^s-(1-x^2)^s}{x}\,dx\\+i\int_0^{\pi/2}(1+e^{2i\phi})^s\,d\phi-i\int_0^{\pi/2}(1+r^2 e^{2i\phi})^s\,d\phi.$$ Taking $r\to 0$, and substituting $x^2=t$, we get the "$+$" version of $$2^{s+1}\int_0^{\pi/2}(\cos\phi)^s e^{\pm is\phi}\,d\phi=\pi\pm if(s),\quad f(s)=\int_0^1\frac{(1+t)^s-(1-t)^s}{t}\,dt,$$ and the "$-$" version is obtained similarly. Hence, the given integral is equal to $2^{-s-1}f(s)$. The equality $f(s)=f(s-1)+2^s/s$ allows one to compute $f(s)$ for $s\in\mathbb{Z}_{\geqslant 0}$.
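The resulting identity $\int_0^{\pi/2}(\cos u)^s\sin(su)\,du = 2^{-s-1}f(s)$ can be spot-checked numerically (a rough sketch with a naive midpoint rule, not part of the original argument; the value $s = 1.5$ is an arbitrary test case):

```python
import math

def midpoint(f, a, b, n=20000):
    # naive midpoint-rule quadrature, good enough for a sanity check
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

s = 1.5
lhs = midpoint(lambda u: math.sin(s * u) * math.cos(u) ** s, 0.0, math.pi / 2)
f_s = midpoint(lambda t: ((1 + t) ** s - (1 - t) ** s) / t, 0.0, 1.0)
rhs = 2 ** (-s - 1) * f_s
print(lhs, rhs)   # the two values should agree to several decimal places
```

For $s = 1$ the check can even be done by hand: the left side is $\int_0^{\pi/2}\sin\phi\cos\phi\,d\phi = \tfrac12$, while $f(1) = \int_0^1 2\,dt = 2$ and $2^{-2}\cdot 2 = \tfrac12$.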
Confusion about the range of the sum of i.i.d. random variables
You assumed the $X_i$ are uniformly distributed in $[0,1]$ in the first place, so why are you later puzzled that "the $X_i$ all have the same range (in this case $0 \le X_i \le 1$ for all $i$)"? If you add up $n$ numbers, each in the interval $[0,1]$, then you get a number in the range $[0,n]$. There is no assumption of units (amount of fuel, number of passengers, etc.) here, but implicitly, writing down $X_1 + \cdots + X_n$ implies that for whatever physical quantity $X_i$ is supposed to model, the sum should make sense. Moreover, the physical quantity should follow the probabilistic assumption (uniform distribution): number of passengers does not make sense though, since presumably the number of passengers is a nonnegative integer, while $X_i$ takes on any value between $0$ and $1$.
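A quick simulation illustrates the point (a minimal sketch; the choices $n = 10$ and $10{,}000$ trials are arbitrary):

```python
import random

n, trials = 10, 10_000
# each X_i is uniform on [0, 1], so the sum of n of them lies in [0, n]
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]
print(min(sums), max(sums))   # both values fall inside [0, n]
```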
Do a pair of orthogonal directions with slopes equal to zero imply $\nabla f = 0$?
Remember that having an extremum of a differentiable function $f : \mathbb{R}^k \to \mathbb{R}$ at a point $p$ implies $\nabla_p f = 0$. The converse, however, is not true, and so even if your question can be answered affirmatively, this says nothing about the former. Now, as for your question, the answer is yes if $f$ is differentiable and $k = 2$, for the following reason: since $v \perp w$, the set $\{v,w\}$ is linearly independent and therefore a basis of $\mathbb{R}^2$. Since $f$ is differentiable, we have that for any point $p$ the directional derivative with direction $u$ is $$ f_u(p) = \langle\nabla_p f, u \rangle \tag{1} $$ Now, if $u$ is any vector, there exist $a,b \in \mathbb{R}$ so that $u = av+bw$ and thus, $$ f_u(p) = \langle\nabla_p f, u \rangle = \langle\nabla_p f, av+bw \rangle = a\langle\nabla_p f,v\rangle+b\langle \nabla_p f, w \rangle = 0. $$ In particular, the partial derivatives are zero, by taking $u = e_j$ with $1\le j\le 2$. The other direction comes directly from $(1)$. If $k>2$, the map $$ f(x_1, \dots, x_k) = x_k $$ has nonzero gradient, but the partial derivatives $f_1$ and $f_2$ (which in particular are directional) are zero and $e_1 \perp e_2$.
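The counterexample for $k > 2$ can be checked numerically with forward differences (a minimal sketch; the sample point and step size are arbitrary):

```python
def f(x):
    # f(x1, x2, x3) = x3: gradient is (0, 0, 1), which is nonzero
    return x[2]

def directional(f, p, u, h=1e-6):
    # forward-difference approximation of the directional derivative
    return (f([pi + h * ui for pi, ui in zip(p, u)]) - f(p)) / h

p = [0.3, -1.2, 0.7]
d1 = directional(f, p, [1, 0, 0])  # along e1: zero
d2 = directional(f, p, [0, 1, 0])  # along e2: zero, and e1 is orthogonal to e2
d3 = directional(f, p, [0, 0, 1])  # along e3: one, so the gradient is nonzero
print(d1, d2, d3)
```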
On the Composition of simple Projections
No, it is not. $H=\{x\;|\;\underline 1^\top x=1\}$ is a hyperplane, $[0,1]^n$ a hypercube, and $X$ is a simplex which results from intersecting these two. But the problem is that the second projection may well leave the hyperplane and therefore result in a point outside the simplex. Here is an example for $n=3$: Consider the point $(8,8,-15)$ which already lies in $H$. Its projection onto $X$ would be $(\frac12,\frac12,0)$ which has distance $\frac{15}2\sqrt6\approx18.4$. But the corner of the cube at $(1,1,0)$ has distance $\sqrt{323}\approx18.0$ so it is closer.
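The example can be verified numerically (a minimal sketch; `math.dist` is Python 3.8+, and the simplex projection $(\frac12,\frac12,0)$ is taken from the answer rather than recomputed):

```python
import math

x = (8.0, 8.0, -15.0)                 # lies on H: its coordinates sum to 1

# projecting onto the cube [0,1]^3 is coordinatewise clipping...
clipped = tuple(min(1.0, max(0.0, xi)) for xi in x)
# ...but the result (1, 1, 0) sums to 2, so it has left the hyperplane H

proj_simplex = (0.5, 0.5, 0.0)           # projection of x onto the simplex X
d_simplex = math.dist(x, proj_simplex)   # = (15/2) * sqrt(6) ≈ 18.4
d_corner = math.dist(x, clipped)         # = sqrt(323) ≈ 18.0, strictly closer
print(clipped, d_simplex, d_corner)
```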
Why is $\sum_{i=0}^{n-5} 4(n-i-5)^3 = 4 \sum_{i=5}^n (n-i)^3$?
$$\sum_{i=0}^{n-5}4\,(n-i-5)^3=4\cdot\sum_{i=0}^{n-5}(n-i-5)^3=4\cdot\sum_{i=0}^{n-5}[n-(i+5)]^3=4\cdot\sum_{j=5}^n(n-j)^3$$
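The index shift $j = i + 5$ is easy to confirm in code (a quick check for one arbitrary value of $n$):

```python
n = 12
lhs = sum(4 * (n - i - 5) ** 3 for i in range(0, n - 4))   # i = 0 .. n-5
rhs = 4 * sum((n - j) ** 3 for j in range(5, n + 1))       # j = 5 .. n
print(lhs, rhs)   # → 3136 3136
```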
Proof for domino tiling over $m \times n$ checkerboard
Without loss of generality, let the even number be the width. Apply double induction.

First induction hypothesis: any $n\times 2$ grid can be filled, regardless of the value of $n$.

Base case: $n=1$, i.e. a $1\times 2$ grid. Clearly it can be filled.

First induction step: supposing that an $n\times 2$ grid can be filled, we try to fill an $(n+1)\times 2$ grid. Clearly this is doable by filling the $n\times 2$ grid and then placing one additional tile on top of the rest. Hence, any $n\times 2$ grid can be tiled for any value of $n$.

Second induction hypothesis: any $n\times (2m)$ grid can be filled for any value of $m$, given a specific value of $n$.

Base case: $m=1$, i.e. an $n\times 2$ grid. This was proven in the previous step.

Second induction step: supposing that an $n\times (2m)$ grid can be filled, we try to fill an $n\times (2(m+1))$ grid. This can be accomplished by filling an $n\times (2m)$ grid first and then filling the remaining $n\times 2$ grid.

These two steps together prove that any $n\times (2m)$ grid can be tiled regardless of the values of $n$ and $m$, hence any grid with even width can be tiled. By symmetry, any grid with even height (and not necessarily even width) can also be tiled.

Finally, we can look at the case where both height and width are odd. As there are an odd number of squares and any tiling covers an even number of squares, there can be no proper tiling. This proves that it is both a necessary and sufficient condition for a tiling to exist that at least one of the sides have even length.
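The constructive half of the argument can be sketched in code (a minimal illustration, not a full tiling algorithm; dominoes are returned as pairs of cells):

```python
def tile(height, width):
    """Tile a height x width grid with 1x2 dominoes, following the answer:
    possible iff at least one side is even."""
    if height % 2 and width % 2:
        return None                      # odd number of cells: no tiling
    if width % 2:                        # WLOG make the even side the width
        height, width = width, height    # (cells below refer to the rotated grid)
    # repeat the n x 2 base case: pair columns 2j and 2j+1 in every row
    return [((r, 2 * j), (r, 2 * j + 1))
            for r in range(height) for j in range(width // 2)]

print(len(tile(3, 4)))   # → 6  (6 dominoes cover the 12 cells)
print(tile(3, 5))        # → None  (both sides odd)
```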
Stereographic projection: line element
OK, this problem is a mess. Now that we have one of the standard stereographic projection mappings, we know that the mapping is conformal, and so the formula as given must be incorrect. Let $a$ be the diameter of the sphere, and let $(\psi,\phi)$ be the usual spherical coordinates (with $\psi$ the angle from the vertical and $\phi$ the polar angle, as given in the problem). Then we all know that $$ds^2 = \big(\frac a2\big)^2(d\psi^2 + \sin^2\psi\,d\phi^2).$$ We observe that we have a right triangle with legs $a$ and $\rho$, and angle $\psi/2$ at the south pole. Thus, $$\rho = a\tan\big(\frac\psi 2\big) \quad\text{and}\quad \sin\psi = 2\sin\big(\frac\psi 2\big)\cos\big(\frac\psi 2\big)=\frac{2a\rho}{a^2+\rho^2}.$$ Now easy computation gives $$\frac{d\rho}{1+\big(\frac\rho a\big)^2}=\frac a2\,d\psi.$$ Thus, \begin{align*} ds^2 &= \big(\frac a2\big)^2(d\psi^2 + \sin^2\psi\,d\phi^2) = \frac{d\rho^2}{\big(1+(\frac\rho a)^2\big)^2} + \frac{\rho^2\,d\phi^2} {\big(1+(\frac\rho a)^2\big)^2} \\ &= \frac1{\big(1+(\frac\rho a)^2\big)^2}(d\rho^2 + \rho^2\,d\phi^2), \end{align*} which shows that the metric is conformally equivalent to the usual metric on the tangent plane at the north pole, as expected.
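The two key identities in the computation can be spot-checked numerically (a sketch; the values of $a$ and $\psi$ are arbitrary):

```python
import math

a, psi = 2.0, 1.1                     # arbitrary diameter and angle
rho = a * math.tan(psi / 2)           # rho = a tan(psi/2)

# half-angle identity: sin(psi) = 2*a*rho / (a^2 + rho^2)
assert abs(math.sin(psi) - 2 * a * rho / (a**2 + rho**2)) < 1e-12

# metric coefficient: (a/2)^2 sin^2(psi) = rho^2 / (1 + (rho/a)^2)^2
lhs = (a / 2) ** 2 * math.sin(psi) ** 2
rhs = rho ** 2 / (1 + (rho / a) ** 2) ** 2
print(lhs, rhs)   # the two coefficients agree
```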
Diameter of Nested Compact Sets
I don't have "Baby Rudin" handy at the moment, but if $\bigcap_{n=1}^\infty K_n$ were to contain two distinct points $x, y$, then $\mathrm{diam} ( \bigcap_{n=1}^\infty K_n ) \geq d (x,y) > 0$. As $\bigcap_{n=1}^\infty K_n \subseteq K_n$ for all $n$, it would follow that $\mathrm{diam} (K_n) \geq d(x,y)$ for all $n$, contradicting that $\mathrm{diam} (K_n) \rightarrow 0$.
Probability and Combinations
(a) This follows the binomial distribution. If the probability of having a boy is the same as the probability of having a girl, then it is $\binom{6}{3} (\frac{1}{2})^{6}$. The $\binom{6}{3}$ term chooses the boys. (b) This is similar to part (a). To choose the boys, we have $\binom{6}{4} * (\frac{1}{2})^{6}$. However, there is a symmetry case for the girls, so we multiply this by $2$. Note $\binom{6}{4} = \binom{6}{2}$, so multiplying by $2$ gives us the same quantity as if we considered the case of choosing $2$ boys.
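A minimal numerical check of both parts (using `math.comb`, Python 3.8+):

```python
from math import comb

p_a = comb(6, 3) * 0.5 ** 6        # (a) exactly 3 boys: 20/64
p_b = 2 * comb(6, 4) * 0.5 ** 6    # (b) exactly 4 of one sex: 2 * 15/64
print(p_a, p_b)                    # → 0.3125 0.46875
# symmetry used in (b): choosing 4 boys is the same count as choosing 2 boys
assert comb(6, 4) == comb(6, 2)
```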
Minimum Tournament Required
51 players, 1 winner: 50 players need to go out. With 50 players out at 2 losses each, we need 100 losses. Each game produces exactly one loss, so 100 losses means 100 games (achieved when the winner never loses). So the minimum is 100 games.
One of $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$ for odd $n$
Observe that there are $n$ such numbers, while only $n-1$ remainders are possible, namely $1,2,\cdots,n-2,n-1$: for if $0$ were the remainder for some $r,$ then $n|(2^r-1)$ and we are done. Otherwise, by the Pigeonhole Principle, at least two distinct values of $r$ must yield the same remainder. Let $2^t-1\equiv2^s-1\pmod n$ where $t>s$ $\implies n|\{(2^t-1)-(2^s-1)\}\implies n| 2^s(2^{t-s}-1)\implies n|(2^{t-s}-1)$ as $(2,n)=1$ Clearly, $0<t-s<n$ as $n\ge t>s\ge1$
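The statement is easy to verify exhaustively for small odd $n$ (a sketch; the cutoff $1000$ is an arbitrary choice):

```python
def witness(n):
    """For odd n, return some r with 1 <= r <= n and n | 2**r - 1."""
    for r in range(1, n + 1):
        if (2 ** r - 1) % n == 0:
            return r
    return None   # never reached for odd n, by the pigeonhole argument

# every odd n below the cutoff has a witness r <= n, as the proof promises
results = {n: witness(n) for n in range(1, 1000, 2)}
print(results[7], results[9], results[15])   # → 3 6 4
```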