How to properly apply the Lie Series
A relevant reference is found here: Exponential of a function times derivative It is advised to absorb this one-dimensional theory first, before proceeding to 2-D. $c)\; X = -y\, \partial_x+x\, \partial_y$ Disclaimer. In our (LaTeX) notes we have $f$ instead of $F$ , $(x_1,y_1)$ instead of $(\hat{x},\hat{y})$ , $\theta$ instead of $\varepsilon$ , $k$ instead of $j$ , and more. I didn't replace notations because my eyes are bad and it is expected that the danger of making mistakes is greater than the advantage of being consistent with the question. An example of a Continuous Transformation in two dimensions is a Rotation over an angle $\theta$: $$ \left\{ \begin{array}{c} x_1 = \cos(\theta) . x - \sin(\theta) . y \\ y_1 = \sin(\theta) . x + \cos(\theta) . y \end{array} \right. $$ It might be asked how rotation of the coordinate system works out for a function of these variables. In other words, how the following function would be expanded as a Taylor series around the original $f(x,y)$: $$ f_1(x,y) = f(x_1,y_1) = f(\,\cos(\theta).x - \sin(\theta).y\, , \, \sin(\theta).x + \cos(\theta).y\, ) $$ Define other (polar) variables $(r,\phi)$ as: $$ x = r.\cos(\phi) \quad \mbox{and} \quad y = r.\sin(\phi) $$ Giving for the transformed variables: $$ x_1 = r.\cos(\phi).\cos(\theta) - r.\sin(\phi).\sin(\theta) = r.\cos(\phi+\theta) \\ y_1 = r.\cos(\phi).\sin(\theta) + r.\sin(\phi).\cos(\theta) = r.\sin(\phi+\theta) $$ We see that $\phi$ is a proper canonical variable. Another function $g(\phi)$ is defined with this canonical variable as the independent one: $$ g(\phi) = f(\,r.\cos(\phi)\, ,\,r.\sin(\phi)\,) = f(x,y) $$ Now rotating $f(x,y)$ over an angle $\theta$ corresponds with a translation of $g(\phi)$ over a distance $\theta$. Therefore $g(\phi+\theta)$ can be developed into a Taylor series around the point of departure: $$ g(\phi+\theta) = g(\phi) + \theta.\frac{dg(\phi)}{d\phi} + \frac{1}{2} \theta^2.\frac{d^2g}{d\phi^2} + ... $$ Working back to the original variables $(x,y)$ with a well-known chain rule for partial derivatives: $$ \frac{dg}{d\phi} = \frac{\partial g}{\partial x}\frac{dx}{d\phi} + \frac{\partial g}{\partial y}\frac{dy}{d\phi} $$ Where: $$ \frac{dx}{d\phi} = - r.\sin(\phi) = - y \quad \mbox{and} \quad \frac{dy}{d\phi} = + r.\cos(\phi) = + x \quad \Longrightarrow \\ \frac{dg}{d\phi} = \frac{\partial g}{\partial x}.(-y) + \frac{\partial g}{\partial y}.(+x) \quad \Longrightarrow \quad \frac{d}{d\phi} = x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x} $$ Herewith we find that $X = (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x})$ is the infinitesimal operator for Plane Rotations. It is equal to differentiation with respect to the canonical variable, as expected. The end-result is: $$ f_1(x,y) = \sum_{k=0}^{\infty} \frac{1}{k!} \left[ \theta \left(x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x}\right) \right]^k f(x,y) = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } f(x,y) $$ This is true for any function $f(x,y)$. In particular, the independent variables themselves can be conceived as such functions.
Which means that: $$ x_1 = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } x \quad \mbox{and} \quad y_1 = e^{ \theta (x\, \partial / \partial y - y\, \partial / \partial x) } y $$ It is easily demonstrated that: $$ (x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}) x = - y \quad \mbox{and} \quad (x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}) y = x $$ Herewith we find: $$ \sum_{k=0}^{\infty} \frac{1}{k!}\left[ \theta (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x}) \right]^k x = x - \theta.y - \frac{1}{2} \theta^2.x + \frac{1}{3!} \theta^3.y + \frac{1}{4!} \theta^4.x + ... \\ = \cos(\theta).x - \sin(\theta).y = x_1 $$ Likewise we find: $$ \sum_{k=0}^{\infty} \frac{1}{k!}\left[ \theta (x.\frac{\partial}{\partial y} - y.\frac{\partial}{\partial x} ) \right]^k y = y + \theta.x - \frac{1}{2} \theta^2.y - \frac{1}{3!} \theta^3.x + \frac{1}{4!} \theta^4.y + ... \\ = \sin(\theta).x + \cos(\theta).y = y_1 $$ Thus, indeed, the formulas for a far-from-infinitesimal rotation over a finite angle $\theta$ can be reconstructed from the expansions. $a)\; X = x\, \partial_x-y\, \partial_y$ Read the 1-D reference. We have the following results there: $$ e^{\ln(\lambda) \,x \frac{d}{dx}} f(x) = f(\lambda\,x) $$ Where $\lambda$ is a positive scaling factor. We also have: $$ e^{-\ln(\lambda) \,x \frac{d}{dx}} f(x) = e^{\ln(1/\lambda) \,x \frac{d}{dx}} f(x) = f(x/\lambda) $$ These results translate to 2-D in the following manner: $$ e^{\ln(\lambda) \,x \frac{\partial}{\partial x}} f(x,y) = f(\lambda\,x,y) \\ e^{-\ln(\lambda) \,y \frac{\partial}{\partial y}} f(x,y) = f(x,y/\lambda) $$ The two exponents are commutative, so we can write, with $\;\ln(\lambda)=\mu\;\Longrightarrow\;\lambda=e^\mu=\exp(\mu)$ : $$ e^{\mu(x\, \partial_x - y\, \partial_y)}\; f(x,y) = f\left(e^\mu x,e^{-\mu} y\right) $$ In particular, with $\;X = x \frac{\partial}{\partial x} - y \frac{\partial}{\partial y}$ : $$ \exp(\mu X) x = \exp(\mu)\, x \quad \mbox{and} \quad \exp(\mu X) y = \exp(-\mu)\, y $$ $b)\; X = x^2\,\partial_x+xy\,\partial_y$ As for this case, I don't see how we can say more than, in the OP's notation: $$ F(\hat{x},\hat{y})=\sum_{j=0}^{\infty}\frac{\varepsilon^j}{j!}X^j F(x,y) = \exp{(\varepsilon X)} F(x,y) $$ Then it follows that, for special functions $F(x,y)=x$ or $F(x,y)=y$ : $$ \hat{x}=\exp{(\varepsilon X)}x \\ \hat{y}=\exp{(\varepsilon X)}y $$ Please don't tell me that's all you want ..
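Remark (a quick check, not in the original notes): one can verify the rotation formula by truncating the exponential series symbolically; below is a hypothetical sketch with sympy, where the truncation order 8 is an arbitrary choice.

import sympy as sp

x, y, theta = sp.symbols('x y theta')

def X(f):
    # the infinitesimal rotation operator X = x d/dy - y d/dx
    return x*sp.diff(f, y) - y*sp.diff(f, x)

term, total = x, x
for k in range(1, 9):
    term = X(term)                        # X^k applied to x
    total += theta**k * term / sp.factorial(k)

expected = sp.series(sp.cos(theta), theta, 0, 9).removeO()*x \
         - sp.series(sp.sin(theta), theta, 0, 9).removeO()*y
print(sp.expand(total - expected))        # 0: the truncations agree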
Some GRE questions (II)
Hints: $1$) The only thing that matters is the component of $(1,1,1)$ in the $(1,0,1)$ direction. $2$) Consider the function on $[0,1]$ and quote a standard theorem. $3$) The vector space $V$ has dimension $6$.
Find $\mathbb{P} [ \| Z+a\| \le r \mid \|Z \| \le r ]$ where $Z$ is an i.i.d. Gaussian vector
Assuming $\|v\|$ denotes the Euclidean length of $v\in\mathbb{R}^n$, you can use this method, involving a certain amount of calculus. By the rotational invariance of mean-$0$ i.i.d. Gaussians, you may as well assume $a$ is of the form $(A,0,\ldots,0)$, with $A\ge0$. Now it is easy to express the joint distribution of the pair $(R,S) = (\|Z+a\|, \|Z\|)$ in terms of the independent r.v.s $Z_1$ and $\sum_{i>1} Z_i^2$. The former is $N(0,1)$ and the latter is chi-squared with $n-1$ degrees of freedom. Your desired conditional probability is the ratio of two integrals, each over a 2-dimensional region. ADDED, edited 17 July 2017. Writing $R=\sqrt{(Z_1+A)^2+Q}$ and $S=\sqrt{Z_1^2+Q}$, where $Z_1$ is $N(0,1)$ and $Q$ is $\chi_{n-1}^2$, the numerator integral (for $P(R\le r, S\le r)$) is $$E \,P( R\le r, S\le r | Q) = E \,\,I_{Q\le r^2} P ( -(r^2-Q)^{1/2} \le Z_1 \le (r^2-Q)^{1/2}-A | \, Q).$$ (Here $I_{Q\le r^2}$ is the indicator r.v. for the event $[Q\le r^2].$) If $f$ denotes the $\chi_{n-1}^2$ density, the numerator integral is $$ \int_0^{r^2-A^2/4}f(q) \left( \Phi\left((r^2-q)^{1/2}-A\right) - \Phi\left(-(r^2-q)^{1/2}\right) \right) \,dq.$$ (The upper range of integration encodes the condition that the intersection of the events $[(Z_1+A)^2+Q\le r^2]$ and $[Z_1^2+Q\le r^2]$ is non-empty. Conditional on $Q$, each of these events bounds $Z_1$ to an interval: $-\Delta -A \le Z_1 \le \Delta -A$ and $-\Delta \le Z_1 \le \Delta$, respectively, where $\Delta=\sqrt{r^2-Q}$. The intersection is nontrivial when $\Delta-A \ge -\Delta$. Then the intersection is $[-\Delta \le Z_1 \le \Delta-A].$) The denominator integral is $$ \int_0^{r^2} f(q) \left( \Phi\left((r^2-q)^{1/2}\right) - \Phi\left(-(r^2-q)^{1/2}\right) \right) \,dq.$$ These integrals are formally 1-dimensional, but the use of the $\Phi$ function can be considered as sweeping 2-dimensional integral dirt under the rug.
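(Not part of the original answer: a quick numerical check, comparing a direct Monte Carlo estimate with the ratio of the two integrals; $n$, $A$, $r$ below are made-up example values.)

import numpy as np
from scipy import stats
from scipy.integrate import quad

n, A, r = 3, 0.5, 2.0
Phi = stats.norm.cdf
f = stats.chi2(df=n - 1).pdf    # chi-squared density with n-1 degrees of freedom

num, _ = quad(lambda q: f(q)*(Phi(np.sqrt(r**2 - q) - A) - Phi(-np.sqrt(r**2 - q))),
              0, r**2 - A**2/4)
den, _ = quad(lambda q: f(q)*(Phi(np.sqrt(r**2 - q)) - Phi(-np.sqrt(r**2 - q))),
              0, r**2)

rng = np.random.default_rng(0)
Z = rng.standard_normal((10**6, n))
a = np.zeros(n); a[0] = A                  # a = (A, 0, ..., 0) by rotational invariance
S = np.linalg.norm(Z, axis=1) <= r         # event ||Z|| <= r
R = np.linalg.norm(Z + a, axis=1) <= r     # event ||Z + a|| <= r
print(num/den, (R & S).sum()/S.sum())      # the two estimates should agree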
Formal proof for detection of intersections for constrained segments
Once the segments are numbered, try to use induction on $n$, the index of the segment, checking whether it intersects the next one. Good luck!
Find the coefficient of $x^3y^2z^3$ in the expansion $(2x+3y-4z+w)^9$
Every term in $(2x+3y-4z+w)^9$ is a product of nine factors, each of which is one of the four terms in parentheses. Thus, before you collect like terms each term will have the form $(2x)^i(3y)^j(-4z)^kw^\ell$, where $i+j+k+\ell=9$. Since the exponents in $x^3y^2z^3$ add up to only $8$, not $9$, there is no such term in the product, and its coefficient is $0$. If you actually meant the coefficient of $x^3y^3z^3$, each such term must arise as the product of three factors of $2x$, three of $3y$, and three of $-4z$, so it must be $(2x)^3(3y)^3(-4z)^3=2^33^3(-4)^3x^3y^3z^3$, with a coefficient of $2^33^3(-4)^3=-13824$. Your formula tells you that there are $$\frac{9!}{3!3!3!}=1680$$ such terms, so the total coefficient of $x^3y^3z^3$ is $-13824\cdot1680=-23~224~320$.
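(A quick verification sketch with sympy, not part of the original answer:)

import sympy as sp

x, y, z, w = sp.symbols('x y z w')
e = sp.expand((2*x + 3*y - 4*z + w)**9)
print(e.coeff(x, 3).coeff(y, 2).coeff(z, 3).coeff(w, 0))   # 0
print(e.coeff(x, 3).coeff(y, 3).coeff(z, 3).coeff(w, 0))   # -23224320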
Translate the statements
You can capture claim A) by claiming that for every number that is a prime gap, there is a greater number that is also a prime gap. Now, in order to do so, it helps to first state exactly the formula that states that '$a$ is a prime gap'. That would be: $\exists p (Prime(p) \land Prime(p + a) \land (\forall n \in \mathbb{N}, p < n < p + a \Rightarrow \lnot Prime(n)))$ OK, so then the claim that for every number that is a prime gap, there is a greater number that is also a prime gap becomes: $\forall a (PrimeGap(a) \to \exists b (b > a \land PrimeGap(b)))$ i.e.: $\forall a (\exists p (Prime(p) \land Prime(p + a) \land (\forall n \in \mathbb{N}, p < n < p + a \Rightarrow \lnot Prime(n))) \to \exists b (b > a \land \exists p (Prime(p) \land Prime(p + b) \land (\forall n \in \mathbb{N}, p < n < p + b \Rightarrow \lnot Prime(n)))))$ Now, B) is a strange claim ... I am not sure what they mean by it ...
Integrating a function from negative infinity to infinity
I'll give an approach different from all of the above using complex analysis, which saves you from a lot of trigonometry and elementary algebraic manipulations. (This is likely the intended approach for the problem, since it uses the variable $z$, which commonly denotes a complex variable.) Note that the denominator $z^2+25=(z+5i)(z-5i)$ contains complex roots. Hence the integral is susceptible to the technique of integrating over a closed curve on the complex plane and applying the residue theorem. More specifically, consider the half-circle $\gamma$ on the complex plane with the straight edge sitting on the real interval $[-R,R]$ and with the arc $Re^{i\theta}$ where $\theta\in[0,\pi]$. The function is rational, hence meromorphic, and the root $z=5i$ (with multiplicity $1$) is included inside this half-circle when $R>5$. Hence we compute the residue at $z=5i$: $$\operatorname{res}_{5i}f=\lim_{z\to5i}\frac{z-5i}{(z+5i)(z-5i)}=\frac{1}{10i}$$ Now apply the residue theorem: $$\int_\gamma f(z)\,dz=2\pi i\operatorname{res}_{5i}f=\frac{2\pi i}{10i}=\frac{\pi}{5}$$ It remains to prove that the integral around the arc $\gamma_R$ goes to $0$ as $R\to\infty$. The function is bounded on this arc by $1/(R^2-25)$ for $R>5$ (by the reverse triangle inequality, $|z^2+25|\ge R^2-25$ there), and the arc has length $\pi R$, so by the estimation lemma: $$\lim_{R\to\infty}\left|\int_{\gamma_R}f(z)\,dz\right|\le\lim_{R\to\infty}\frac{\pi R}{R^2-25}$$ The right hand side has growth rate $O(R)$ in the numerator and $O(R^2)$ in the denominator, so it's straightforward to show that it goes to $0$ as $R\to\infty$ (by e.g. L'Hôpital's rule). Hence, in the limit, the integral on the real line is equal to the integral around the half-circle, that is $$\int_{-\infty}^{\infty}\frac{1}{z^2+25}\,dz=\frac{\pi}{5}$$ as was required.
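(A numerical cross-check, not part of the original answer, using scipy:)

from scipy.integrate import quad
import numpy as np

val, err = quad(lambda t: 1.0/(t**2 + 25), -np.inf, np.inf)
print(val, np.pi/5)    # both ~ 0.62832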
Question about the orthogonal projection
In general, a projection matrix is any matrix $P$ that satisfies $P^2=P$. All orthogonal projections satisfy this property, but there are indeed projections which are not orthogonal. The easiest way to see this is perhaps geometrically. Consider, say, the projection on $\mathbb{R}^2$ onto the $x$-axis that is parallel to the line $y=x$.
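In coordinates, that projection sends $(x,y)$ to $(x-y,0)$, which gives a quick numerical illustration (a sketch, not from the original answer):

import numpy as np

P = np.array([[1.0, -1.0],
              [0.0,  0.0]])     # projection onto the x-axis parallel to y = x
print(np.allclose(P @ P, P))    # True: P^2 = P, so P is a projection
print(np.allclose(P, P.T))      # False: P is not symmetric, hence not orthogonal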
To obtain a closed form for the series related to special functions.
You are right, the series is a little messed up, but there's no reason to try guessing the proper form. We can always use the general way to get the hypergeometric form of a series. Consider: $$f(z)=z \Gamma(3)\left[\frac{1}{\Gamma(3)}+\frac{(\gamma)_1}{\Gamma(4)}\frac{z}{1!}+\frac{(\gamma)_{2}}{\Gamma(5)}\frac{z^2}{2!}+\frac{(\gamma)_{3}}{\Gamma(6)}\frac{z^3}{3!}+\cdots\right]=2 z \sum_{n=0}^\infty \frac{(\gamma)_n z^n}{(n+2)! n!}$$ First we extract the $n=0$ term: $$A_0= \frac{2z}{2}=z$$ And consider the ratio of consecutive terms: $$\frac{A_{n+1}}{A_n}= \frac{(n+\gamma)}{(n+3)} \frac{z}{n+1}$$ Which immediately allows us to write: $$f(z)=z {_1 F_1} (\gamma;3;z)$$
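(A numeric spot-check with mpmath, not part of the original answer; $\gamma$ and $z$ are arbitrary example values:)

from mpmath import hyp1f1, rf, factorial, nsum, inf

gamma, z = 0.7, 0.3
series = 2*z*nsum(lambda n: rf(gamma, n)*z**n/(factorial(n + 2)*factorial(n)), [0, inf])
print(series, z*hyp1f1(gamma, 3, z))   # the two values should agree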
If $1!+2!+\dots+x!$ is a perfect square, then the number of possible values of $x$ is?
Hint: Let $$a_n = \sum_{j=1}^nj!$$ Knowing that for $j\ge 5$ the term $j!$ ends with $0$ (since the product contains both $2$ and $5$), it should be straightforward for you to show that $$a_n = 33 + \sum_{j=5}^n j!$$ This sum therefore ends with digit $3$. Big Hint It is easy to check that no perfect square can end with digit $3$ (namely by checking all squares $\mod 10$). This shows that $a_n$ is not a perfect square for $n\ge 4$.
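(A quick empirical check of the hint in Python:)

import math

a = 0
for n in range(1, 12):
    a += math.factorial(n)
    print(n, a % 10)        # the last digit is 3 for every n >= 4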
Why is this true? A set is Dedekind infinite if and only if equipotent to the union with the set containing itself
If $h:A\to A\cup\{A\}$ is a bijection, the restriction of $h^{-1}$ to $A$ is a bijection between $A$ and a proper subset of $A$, so $A$ is Dedekind-infinite. Now suppose that $A$ is Dedekind-infinite, and let $h:A\to A$ be injective but not surjective. Fix $a_0\in A\setminus h[A]$, and recursively define a sequence $\langle a_n:n\in\Bbb N\rangle$ in $A$ by setting $a_{n+1}=h(a_n)$ for each $n\in\Bbb N$. Since $h$ is injective, so is the sequence: $a_m\ne a_n$ if $m,n\in\Bbb N$ and $m\ne n$. Now define $$f:A\cup\{A\}\to A:x\mapsto\begin{cases} a_0,&\text{if }x=A\\ a_{n+1},&\text{if }x=a_n\text{ for some }n\in\Bbb N\\ x,&\text{otherwise}\;; \end{cases}$$ it’s easy to verify that $f$ is a bijection. Added: By the way, this proves the harder direction of another useful characterization of Dedekind-infinite sets: $A$ is Dedekind-infinite if and only if there is an injection $f:\Bbb N\to A$.
Computing the first Chern class of a manifold from the metric
You need a complex line bundle (standardly with a hermitian metric) in order to compute Chern classes. In your case, $c_1(TM)$ is going to be closely related to the curvature 2-form of the associated Riemannian metric. And, no, in general, the first Chern form (class) of a Riemann surface is not $0$.
Show that if $p$ is a prime such that $p|(2^{64}+1)$ then $p \equiv 1 $ (mod 128)
Hint: Since $p \mid 2^{64} + 1$ you have $2^{64} \equiv -1 \bmod p$ and so $2^{128} \equiv 1 \bmod p$.
Study of a complex function (2)
Consider the function $f(z)=(4z^2+1) \tanh(\pi z)$. Determine its singularities, and, in particular, the residues in the poles. I think your argument for the singularities of $f(z)$ is mostly correct (as in, I think what you're trying to say is correct), however precisely what you've written doesn't quite make sense. Note that your limit $$\lim_{z \to \frac{i}{2}} \frac{e^{2 \pi z}-1}{e^{2 \pi z}+1}$$ does not exist, and the RHS of your equality contains $z$, despite taking $\lim_{z \to \frac{1}{2}}$. However, you're computing virtually the correct thing, that is, $$\tanh (\pi z) = \frac{1}{\pi} \left(z - \frac{i}{2}\right)^{-1} + \dots$$ near $z = \frac{i}{2}$, which has its pole cancelled by the corresponding simple zero of $4z^2+1$. Personally, my preferred argument here would be that $\cosh \pi z$ has simple roots, so $\tanh \pi z$ has simple poles at $z_k = i \left( \frac{1}{2} + k\right)$ for $k \in \mathbb{Z}$, with those at $z_0$ and $z_{-1}$ cancelled by the simple zeros of $4z^2+1$. For the remaining $z_k$, \begin{align*} \operatorname{res}\left(f(z), z_k \right) &= \lim_{z \to z_k} (z-z_k) f(z) \\ &= (4z_k^2+1) \sinh (\pi z_k) \cdot\lim_{z \to z_k} \frac{z-z_k}{\cosh \pi z} \\ &= (4z_k^2+1) \sinh (\pi z_k) \cdot \frac{1}{\pi \sinh \pi z_k} \\ &= \frac{4z_k^2+1}{\pi} = -\frac{4k(k+1)}{\pi} \end{align*} where we applied L'Hôpital to deduce the value of the limit (which also shows we do indeed have removable singularities at $z_0$ and $z_{-1}$), and in the last step used $4z_k^2+1 = 1-(2k+1)^2 = -4k(k+1)$. Then the exercise asks to prove that, if a function $g$ is holomorphic at infinity, then $g(z)$ is the sum of a series of the form $\sum_{j=0}^\infty a_jz^{-j} $, when $|z|$ is large enough. I'm going to state that holomorphic means analytic and the result essentially falls out immediately. It's entirely possible that this distinction hasn't even been noted to you, and often these terms get used interchangeably. If $g$ is holomorphic at infinity, then it is holomorphic at $u \equiv \frac{1}{z} = 0$, and so analytic in some disc $\lvert u \rvert < \frac{1}{R}$. Then, within this disc, we can expand it as a Taylor series about $u= 0$, $$g|_u = \sum_{j=0}^\infty a_j u^j$$ Translating this back to $z$, we have that for $\lvert z \rvert > R$, $$g(z) = \sum_{j=0}^\infty a_j z^{-j}$$ Finally, the exercise asks to (1) calculate $\int_{D_R(0)} f(z)dz$, where $D_R(0)$ is the disk centered in $0$ with radius $R$, when $R$ is large enough; and (2), to study how $\frac {f(z)} {z^3}$ behaves at infinity. For $R$ sufficiently large (note not in the limit $R \to \infty$, however), say $R = N+1$ for some $N \in \mathbb{N}$ (the poles lie at the half-integer radii $\lvert z_k \rvert$, so no pole sits on the contour; the enclosed ones are $z_k$ for $k = -(N+1), \dots, N$), we use the residue theorem. The residues vanish for $k \in \lbrace 0, -1 \rbrace$, so we have $$I = \int_{D_R(0)} f(z) \; dz = 2 \pi i \sum_{k = -(N+1)}^{N} \left(-\frac{4k(k+1)}{\pi}\right) = -8i \sum_{k = -(N+1)}^{N} k(k+1)$$ Noting that $k(k+1)$ is invariant under $k \mapsto -(k+1)$, so the terms pair up, and that $$\sum_{k=1}^N k(k+1) = \frac{N(N+1)(N+2)}{3}$$ we can simplify $I$ to $$I = -16 i \sum_{k=1}^N k(k+1) = -\frac{16i}{3} \cdot N(N+1)(N+2)$$ For the behaviour of $f(z) z^{-3}$ as $z \to \infty$ - well, this is somewhat more subjective of a question. The leading order behaviour (due to the factor $4z^2+1$) is going to be given by studying $$g(z) = \frac{4 \tanh(\pi z)}{z}$$ It should be clear that this is not holomorphic at $\infty$, since there are infinitely many poles - at each $z_k$ - and we have $\lvert z_k \rvert = k + \frac{1}{2} \to \infty$.
Moreover, if we consider $u = z^{-1}$, then we see that the singularities accumulate at $0$, i.e. every region $\lvert u \rvert < \delta \ll 1$ contains a countably infinite number of poles. That is, $u = 0$ is not an isolated singularity.
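(A numerical sanity check of the contour integral, not part of the original exercise: with $R=2$, i.e. $N=1$, the enclosed poles are $z_k$ for $k=-2,\dots,1$ and the formula predicts $I=-\frac{16i}{3}\cdot1\cdot2\cdot3=-32i$.)

from mpmath import mp, quad, tanh, exp, pi, j

mp.dps = 25
R = 2
f = lambda z: (4*z**2 + 1)*tanh(pi*z)
# integrate f over the circle |z| = R, splitting the contour for accuracy
I = quad(lambda t: f(R*exp(j*t))*j*R*exp(j*t), [0, pi/2, pi, 3*pi/2, 2*pi])
print(I)    # ~ (0.0 - 32.0j)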
Morita contexts and Noetherianity/affineness
Probably the way to think of it is to establish the functor $F(-):Mod-R\to Mod-S$ isomorphic to $-\otimes_R M$ for a progenerator $_RM_S$ and then look at the properties it preserves. First establish that if $N_R$ is finitely generated, then so is the right $S$-module $F(N)$. Then, identifying the submodules of $F(N)$ as images of submodules of $N$ via $F$, you can easily say that all submodules of $F(N)$ are finitely generated. Applying this to $N_R=R_R$ would finish the job. Actually it is probably possible to argue directly with a correspondence of submodules using $F$ and a counterpart inverse functor $G$, but I lack the experience to state that confidently.
Proving the existence of a proof without actually giving a proof
There are various ways to interpret the question. One interesting class of examples consists of "speed up" theorems. These generally involve two formal systems, $T_1$ and $T_2$, and a family of statements which are provable in both $T_1$ and $T_2$, but for which the shortest formal proofs in $T_1$ are much longer than the shortest formal proofs in $T_2$. One of the oldest such theorems is due to Gödel. He noticed that statements such as "This theorem cannot be proved in Peano Arithmetic in fewer than $10^{1000}$ symbols" are, in fact, provable in Peano Arithmetic. Knowing this, we know that we could make a formal proof by cases that examines every Peano Arithmetic formal proof with fewer than $10^{1000}$ symbols and checks that none of them proves the statement. So we can prove indirectly that a formal proof of the statement in Peano Arithmetic exists. But, because the statement is true, the shortest formal proof of the statement in Peano Arithmetic will in fact require more than $10^{1000}$ symbols. So nobody will be able to write out that formal proof completely. We can replace $10^{1000}$ with any number we wish, to obtain results whose shortest formal proof in Peano arithmetic must have at least that many symbols. Similarly, if we prefer another formal system such as ZFC, we can consider statements such as "This theorem cannot be proved in ZFC in fewer than $10^{1000}$ symbols". In this way each sufficiently strong formal system will have some results which we know are formally provable, but for which the shortest formal proof in that system is too long to write down.
Finding the Equation of a Trig Graph via both Sine and Cosine
Consider the sinusoidal graph shown below. We wish to express its equation in the form $$y = A\cos(Bx - C) + D$$ where $|A|$ is the amplitude, $\dfrac{2\pi}{|B|}$ is the period, $C$ is the phase shift, and $D$ is the vertical shift. The function has a maximum value of $2$ at $$x = \frac{\pi}{2} + n\pi, n \in \mathbb{Z}$$ and a minimum value of $-4$ at $$x = n\pi, n \in \mathbb{Z}$$ Its amplitude is $$|A| = \frac{1}{2}[2 - (-4)] = \frac{1}{2} \cdot 6 = 3$$ Its period is the distance between adjacent minima, which is $\pi$. Thus, \begin{align*} \pi & = \frac{2\pi}{|B|}\\ |B|\pi & = 2\pi\\ |B| & = 2 \end{align*} Since the average of the maximum and minimum values is $$\frac{2 + (-4)}{2} = \frac{-2}{2} = -1$$ the graph has a vertical shift $D = -1$. The cosine function attains its maximum value at $x = 0$. Since this graph has a minimum value at $x = 0$, it is inverted, which means we can either shift the graph by half a period or multiply the amplitude by $-1$. If we do the latter, we obtain the equation $$y = -3\cos(2x) - 1$$ If we do the former instead, we obtain the equivalent form $$y = 3\cos(2x - \pi) - 1$$ In terms of the sine function, the same graph is $$y = 3\sin\left(2x - \frac{\pi}{2}\right) - 1$$
Is the following optimization problem convex? If not, what is it?
It's a mixed-integer second order cone optimization problem. MOSEK is arguably the best solver for this problem. If $\ell$ is small, you could brute-force the solution.
I don't understand what a Euclidean Topology is
Here $(a,b) = \,]a,b[\,$ denotes the open interval, not the ordered pair in $\mathbb R^2$.
Why is the particular solution of $y'' - 4y' +3y = e^t$ not in the form of $Ae^t$
The homogeneous solution is dictated by the characteristic equation $m^2 - 4m + 3 = 0$, whose roots are $m = 1$ and $m = 3$; the root $m = 1$ gives the homogeneous solution $e^t$. Since $e^t$ already solves the homogeneous equation, any trial solution $Ae^t$ is annihilated by the left-hand side and can never produce $e^t$; try $Ate^t$ instead. Clear?
Sum of the series $\sum_{k=0}^{r}(-1)^k.(k+1).(k+2).\binom{n}{r-k} $
We shall use the combinatorial identity $$\sum_{j=0}^{k}{(-1)^j\binom{n}{j}}=(-1)^k\binom{n-1}{k}$$ This can be proven easily by induction, and there is also probably some combinatorial argument why it holds. We shall use the equivalent form $$\sum_{j=0}^{k}{(-1)^{k-j}\binom{n}{j}}=\binom{n-1}{k}$$ Now $(r-k+1)(r-k+2)=k(k-1)-(2r+2)k+(r^2+3r+2)$, so \begin{align} & \sum_{k=0}^{r}{(-1)^k(k+1)(k+2)\binom{n}{r-k}} \\ &=\sum_{k=0}^{r}{(-1)^{r-k}(r-k+1)(r-k+2)\binom{n}{k}} \\ & =\sum_{k=2}^{r}{(-1)^{r-k}k(k-1)\binom{n}{k}}-(2r+2)\sum_{k=1}^{r}{(-1)^{r-k}k\binom{n}{k}}+(r^2+3r+2)\sum_{k=0}^{r}{(-1)^{r-k}\binom{n}{k}} \end{align} We have \begin{align} \sum_{k=2}^{r}{(-1)^{r-k}k(k-1)\binom{n}{k}} & =\sum_{k=2}^{r}{(-1)^{(r-2)-(k-2)}n(n-1)\binom{n-2}{k-2}} \\ & =n(n-1)\sum_{k=0}^{r-2}{(-1)^{(r-2)-k}\binom{n-2}{k}} \\ & =n(n-1)\binom{n-3}{r-2} \\ & =\frac{r(r-1)(n-r)}{n-2}\binom{n}{r} \end{align} \begin{align} \sum_{k=1}^{r}{(-1)^{r-k}k\binom{n}{k}} & =\sum_{k=1}^{r}{(-1)^{(r-1)-(k-1)}n\binom{n-1}{k-1}} \\ & =n\sum_{k=0}^{r-1}{(-1)^{(r-1)-k}\binom{n-1}{k}} \\ & =n\binom{n-2}{r-1} \\ & =\frac{r(n-r)}{n-1}\binom{n}{r} \end{align} \begin{align} \sum_{k=0}^{r}{(-1)^{r-k}\binom{n}{k}} & =\binom{n-1}{r} \\ & =\frac{n-r}{n}\binom{n}{r} \end{align} Thus \begin{align} & \sum_{k=0}^{r}{(-1)^k(k+1)(k+2)\binom{n}{r-k}} \\ & =\frac{r(r-1)(n-r)}{n-2}\binom{n}{r}-(2r+2)\frac{r(n-r)}{n-1}\binom{n}{r}+(r^2+3r+2)\frac{n-r}{n}\binom{n}{r} \\ & =\binom{n}{r}\frac{(n-r)(r(r-1)n(n-1)-(2r+2)rn(n-2)+(r^2+3r+2)(n-1)(n-2))}{n(n-1)(n-2)} \\ & =\binom{n}{r}\frac{(n-r)(2r^2+(6-4n)r+(2n^2-6n+4))}{n(n-1)(n-2)} \end{align}
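(A quick sympy spot-check of the closed form, not in the original answer; $n=7$, $r=4$ is an arbitrary example:)

import sympy as sp

n, r = 7, 4
lhs = sum((-1)**k*(k + 1)*(k + 2)*sp.binomial(n, r - k) for k in range(r + 1))
rhs = sp.binomial(n, r)*(n - r)*(2*r**2 + (6 - 4*n)*r + (2*n**2 - 6*n + 4))/(n*(n - 1)*(n - 2))
print(lhs, rhs)    # both 2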
Radius function from length of curve
If I understand correctly, $l(\theta)$ expresses the length of the curve from its initial point to the point with polar angle $\theta$. Since the initial point is at $\theta=0$, the standard formula for length in polar coordinates gives $$ l(\theta)= \int_0^\theta \sqrt{r'(t)^2+r(t)^2}\,dt \tag1 $$ Differentiating both sides and squaring, we obtain a differential equation for $r$: $$ r'(\theta)^2+r(\theta)^2 = (l\,'(\theta))^2 \tag2 $$ where the function $(l'(\theta))^2$ is known. This is a nonlinear ODE with no obvious (to me) simplifications. I think numerical or approximate solutions are necessary here. For example, I took $K_0=K_1=1$, which simplifies a lot: $l\,'(\theta)=1/(1+\cos\theta)$. Then outsourced the job to Maple: sol:=dsolve([diff(r(theta),theta) = sqrt(1/(1+cos(theta))^2 - r(theta)^2), r(0)=0], numeric, range=0..Pi/2); odeplot(sol, [r(theta)*cos(theta),r(theta)*sin(theta)],thickness=3); With these constants $r(\theta)$ is pretty close to $\theta/2$, so the curve looks like an Archimedean spiral.
Elliptic curve group of $y^2 = x^3 + 2x + 3$ over $\mathbb{F}_5$
The curve $y^2=x^3+2x+3$ is not an elliptic curve over $\mathbb{F}_5$, since its discriminant $$ \Delta=-16(4\cdot 2^3+27\cdot 3^2)=-16\cdot 275\equiv 0 \pmod 5. $$ So it is a singular curve, hence not elliptic. References: Discriminant of Elliptic Curves
About construction of homeomorphism
Not necessarily. Let $X=[0,1]$ with the subspace topology and let $g_y$ be the identity for $y\neq 0$ and $g_0(x)=1-x$. Then $f^{-1}((1/2,1])=\{0\}\cup(1/2,1]$ which is not open, so the map $f$ is not even continuous.
When does locally closed imply closed?
If $X$ is quasi-compact (e.g. the underlying space of a Noetherian scheme), then $\{V_i\}$ has a finite subcover, and replacing it with such, we may assume it is a finite cover. If $Z$ is any closed subset of $X$, then $Z = \bigcup_i (Z\cap V_i)$, so $f(Z) = \bigcup_i f(Z\cap V_i)$ is the finite union of closed sets (using that $Z\cap V_i$ is closed in $V_i$ and $f_{| V_i}$ is closed). Thus $f$ is closed in this case. Just to give a counterexample in the non-quasi-compact case, let $Y$ be any space in which points are closed, and let $X$ denote the same underlying set as $Y$, but with the discrete topology. We take $f$ to be the identity $X \to Y$. If we let $\{V_i\}$ denote the set of singleton subsets of $X$, then $f_{| V_i}$ is closed (since each point in $Y$ is closed by assumption), but $f$ is typically not closed. (Any subset $Z$ of $X$ is closed, but unless $Y$ is also discrete, not all $Z$ will be closed as a subset of $Y$.)
Show that $T$ is an unbounded, injective operator, that its inverse is bounded, and that the range of $T$ is dense in $L^2(0,1)$.
Note that $T(X)=Y:=\textrm{span}\{f_k|k\in \mathbb{N}\}$ and $T^{-1}:Y\to L^2(0,1)$ is given by $T^{-1}(f_k)=\frac{1}{k}e_k$. I think you can take it from here to show that this map is bounded.
Chi-square distribution and uniform distribution
The chi-squared distribution with $2$ degrees of freedom is just the exponential with density function $\frac{1}{2}e^{-x/2}$ for $x\gt 0$, and $0$ elsewhere. In particular, $\Pr(X\ge x)=e^{-x/2}$ for $x\gt 0$. We first find the cdf $F_Y(y)$ of $Y$. Clearly $\Pr(Y\le y)=0$ if $y\le 0$, and $\Pr(Y\le y)=1$ if $y\ge 1$. For $0\lt y\lt 1$ we have $$\small F_Y(y)=\Pr(Y\le y)=\Pr(e^{-X/2}\le y)=\Pr\left(\frac{-X}{2}\le \ln y\right)=\Pr(X\ge -2\ln y)=e^{(2\ln y)/2}=y.$$ To find the density function $f_Y(y)$ of $Y$, differentiate $F_Y$. We get $f_Y(y)=1$ for $0\lt y\lt 1$.
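(A simulation sketch, not part of the original answer, confirming that $Y=e^{-X/2}$ looks uniform:)

import numpy as np

rng = np.random.default_rng(0)
X = rng.chisquare(df=2, size=10**6)
Y = np.exp(-X/2)
print(Y.mean(), Y.var())    # ~ 0.5 and ~ 1/12 = 0.0833, as for Uniform(0,1)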
How can I plot a 3d graph with data points in Maple?
You can look at the example worksheet on interpolation and smoothing ( http://www.maplesoft.com/support/help/Maple/view.aspx?path=examples%2fInterpolation_and_Smoothing ). At the bottom, I've also included a simple method using inverse-distance weighted averaging (which is fast, but which extrapolates poorly). Sorry, for some reason I am unable to attach images of the plots right now. I have used your third column as the dependent data, but it should be straightforward to change that. Just make sure that the dependent data is the 3rd column of the Matrix formed like, say, <P|T|M>. restart: P:=Vector([19.02,18.64,18.54,20.5,31.07,32.64,30.01,28.04,40.46, 40.34,39.36,39.18,50.17,51.03,51.39,52.65,51.14,60.66, 59.11,59.94,58.1,58.27,57.74], datatype=float[8]): T:=Vector([311.15,307.05,304.65,301.15,298.15,304.64,311.15,300.23, 298.5,304.65,308.6,310.81,300.75,302.63,304.55,311.2, 308.45,311.75,308.15,304.65,302.95,301.85,300.65], datatype=float[8]): M:=Vector([6.83,5.94,6.33,5.56,4.2588,4.4338,4.4338,4.2588, 0.684,1.1050,0.82132,0.88382,1.3934,0.57132, 0.92228,1.4588,0.97997,1.4878,1.3838,1.6723, 1.1765,1.3838,1.4588],datatype=float[8])*1E8: First, one can construct the basic 3D point plot from the data. Some version of the point plot (with some shading scheme) may or may not be displayed alongside the various surfaces constructed below. plots:-pointplot3d( <P|T|M>, symbolsize=20, axes=box ); Next, a simple triangular mesh can be used to interpolate linearly between the points. Ptriang := plots:-surfdata( <P|T|M>, view=0..7e8, source=irregular ): plots:-display( Ptriang, plots:-pointplot3d(<P|T|M>,symbolsize=20,color=red), axes=box ); Another approach is to smooth the data. This may not produce great results for such a small amount of data. There are several adjustable options for the method. It is not fast. Ploess := Statistics:-ScatterPlot3D( <P|T|M>, view=0..7e8, lowess, fitorder=2, bandwidth=1/2, rule=1, grid=[50,50], axes=box ): plots:-display( Ploess, plots:-pointplot3d(<P|T|M>,symbolsize=20,color=red) ); Another possible approach is to use a weighted average (weighted by inverse distance or some other metric). Below, a black-box procedure ff is created which takes any p-t point and returns the computed m-value. (A better scheme for this would be interpolation over Voronoi cells using so-called natural neighbors and radial basis functions, but I have only unfinished code for that at present.) f := proc(x::float, y::float, N::integer, X::Vector(datatype=float[8]), Y::Vector(datatype=float[8]), Z::Vector(datatype=float[8]), p::integer, R::float) local i::integer,j::integer,res::float,innerres::float,dist::float; innerres:=0.0; for j from 1 to N do dist:=sqrt((x-X[j])^2+(y-Y[j])^2); innerres:=innerres+(max(0, abs(R-dist))/(R*dist))^p; end do; res:=0.0; for i from 1 to N do dist:=sqrt((x-X[i])^2+(y-Y[i])^2); res := res + Z[i]*(max(0, abs(R-dist))/(R*dist))^p/innerres; end do; res; end proc: try ff:=Compiler:-Compile(f); catch: ff:=proc(x,y,N,P,T,M,p,R) evalhf(f(x,y,N,P,T,M,p,R)); end proc; end try: We can query procedure ff at any (p,t) point. ff(40.2, 304.7, 23, P, T, M, 3, 4.0); which produces 1.10586391484649837 × 10^8. And ff can now be used directly in the plot3d command, Pinvdist := plot3d('ff'(p, t, 23, P, T, M, 3, 20.0), p=min(P)..max(P), t=min(T)..max(T), numpoints = 900, view=0..7e8): plots:-display( Pinvdist, plots:-pointplot3d(<P|T|M>,symbolsize=20,color=red), axes=box );
let U be a set which is both closed and open
It indeed means that $U$ is open and $U$ is closed as well. This can surely happen for some sets $U$ ("sets are not doors, they can be open and closed at the same time", as a teacher of mine once said). So indeed $\operatorname{Int}(U) = U = \operatorname{Cl}(U)$ for this $U$ and so indeed $\operatorname{Int}(\operatorname{Cl}(U)) = \operatorname{Int}(U) = U$ and similarly for the other order. There is no more to it than that.
Possible Stacking Orders
The sequence appears in OEIS as A208716; however, neither a formula nor an asymptotic is given there, so I'll state a very general result (but only sketch the proof). Suppose you have a bracelet with $n$ charms, and you want to color these with $m$ colors, but only certain colors may appear next to each other. We capture the restrictions as a symmetric $m\times m$ matrix $M=(m_{ij})$, where $$m_{ij}=\begin{cases} 1,& \textrm{if colors $i$ and $j$ are allowed to be adjacent}\\ 0,& \textrm{otherwise.} \end{cases} $$ We'll call a string (or necklace or bracelet) legal if it satisfies these constraints. We will call a color repeatable if it is allowed to be adjacent to itself. The matrix $M$ in your example is $$M=\pmatrix{1&0&1\\0&1&1\\1&1&1};$$ the fact that all colors are repeatable is reflected in the fact that the diagonal of $M$ consists of all $1$'s. The key fact is Burnside's Lemma: if a finite group $G$ acts on a set $X$, the number of orbits is the average, over elements $g\in G$, of the number of fixed points of $g$: $$|X/G| = \frac1{|G|}\sum_{g\in G} |{\rm Fix}(g)|.$$ For us, $X$ is the set of "legal" colorings, as determined by $M$, and $G$ is a group of symmetries: for "necklaces" $G$ is the $n$-element cyclic group which rotates the necklace, and for "bracelets" we also consider reflections (turning the bracelet upside down.) (Bracelets are called "free necklaces" in this MathWorld article.) The orbits are just the equivalence classes of the jewelry under the relevant group action. We start with necklaces: the number $N(n)$ of inequivalent legal necklaces of length $n$ is $$N(n)=\frac1n\sum_{d|n}\phi(n/d)l(d),$$ where $\phi$ is the Euler phi-function and $l(d)$ is the number of legal cyclic strings of length $d$ (with a marked starting point). But the $(i,j)$ entry of $M^d$ is the number of legal strings of length $d+1$ which start at $i$ and end at $j$, so $l(d)$ is simply the trace of the matrix $M^d$. In principle you can get a closed form for $l(d)$ by computing the Jordan form of $M$, but in some cases it's easier to note that the sequence $\{l(d)\}$ satisfies a linear recurrence equivalent to the characteristic polynomial of $M$. (Note that if there are no adjacency restrictions, $M$ is the all-ones matrix, so $l(d)={\rm tr}M^d=m^d$ and the usual necklace formula falls out.) For bracelets, $G$ also contains $n$ reflections. If $n$ is odd, all the reflections have a unique fixed point; if $n$ is even, then half the reflections have $2$ fixed points and the rest have none. So the bracelet formulas are a little more complicated. Here's the general result: if $B(n)$ denotes the number of inequivalent legal bracelets, then $$B(n)= \frac12N(n)+\begin{cases} \frac12 s\left(\frac{n+1}2\right),& n{\rm\ odd}\\ \frac14 p\left(\frac{n}2+1\right) + \frac14 b\left(\frac{n}2\right),& n {\rm\ even,} \end{cases} $$ where $p(n)$ is the number of legal strings of length $n$, $s(n)$ is the number of legal strings of length $n$ ending at a repeatable color, and $b(n)$ is the number of legal strings of length $n$ which both begin and end at a repeatable color. (In your special case, $b(n)=s(n)=p(n)$ since all colors are repeatable.) In any case, these are all easily expressible in terms of $M$: if $\bf j$ is a column vector of $1$'s, and if $\bf r$ is the column vector with $r_i=1$ iff color $i$ is repeatable, then $$\begin{align} p(n)&={\bf j}^t M^{n-1} {\bf j}\\ s(n)&={\bf j}^t M^{n-1} {\bf r}\\ b(n)&={\bf r}^t M^{n-1} {\bf r} \end{align} $$ In particular, they satisfy the same recurrence as $l(n)$.
OK, specializing to your problem now: the characteristic polynomial of $M$ is $z^3-3z^2+z+1,$ with roots $1$, $1+\sqrt2$ and $1-\sqrt2$. We find $$\begin{align} l(n)&=\left(1-\sqrt{2}\right)^n+\left(1+\sqrt{2}\right)^n+1\\ p(n)=s(n)=b(n)&=\frac{1}{2} \left(\left(1-\sqrt{2}\right)^{n+1}+\left(1+\sqrt{2}\right)^{n+1}\right) \end{align} $$ Note $l(n)=3l(n-1)-l(n-2)-l(n-3).$ Plugging these into the formulas for $N(n)$ and $B(n)$ we can now count inequivalent legal bracelets: $$\begin{array}{ccccccccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ 3 & 5 & 7 & 12 & 18 & 34 & 56 & 111 & 207 & 427 & 859 & 1851 & 3930 & 8672 & 19092 \\ \end{array} $$ Asymptotically, it is clear that the dominant term is the $d=n$ term in $N(n)$, so $$B(n)\sim \frac{(1+\sqrt2)^n}{2n}$$ for large $n$. Finally, it appears you may be anxious to use all three of your colors. This count didn't require all the colors to be used, but you can use inclusion/exclusion to get the result in that case. For any subset $S$ of the colors, let $B_S(n)$ denote the number of bracelets using only colors in $S$ (but not necessarily all of them.) You can compute these by repeating the calculation with an appropriate submatrix of $M$. Then the number of inequivalent bracelets using all three colors is: $$B_{123}(n)-B_{12}(n)-B_{23}(n)-B_{13}(n)+B_1(n)+B_2(n)+B_3(n)-B_\emptyset(n).$$
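(A brute-force cross-check of the table, not part of the original answer: enumerate all legal colorings for small $n$ and count equivalence classes under rotation and reflection.)

from itertools import product

M = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]                 # the adjacency constraints above

def legal(c):
    n = len(c)
    return all(M[c[i]][c[(i + 1) % n]] for i in range(n))

def canonical(c):
    n = len(c)
    variants = []
    for s in (c, c[::-1]):      # the two reflections
        variants += [s[i:] + s[:i] for i in range(n)]   # all rotations
    return min(variants)

def B(n):
    return len({canonical(c) for c in product(range(3), repeat=n) if legal(c)})

print([B(n) for n in range(1, 11)])
# should print [3, 5, 7, 12, 18, 34, 56, 111, 207, 427], matching the table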
Strictly convex iff norm is strictly subadditive
According to Wikipedia you have to show that the line segment (except the endpoints) lies inside the unit ball $D$. Let $x,y \in \partial D$ with $x \neq y$. Let $t \in (0,1)$. Then $$ \|tx + (1-t)y\| < |t|\|x\| + |1-t|\|y\|\le |t| + |1-t| = 1$$ which shows the claim.
Bounded Linear Operator Between $L^p$ Spaces
For $p<2$ set $f = \frac{h}{\|h\|_2}$, where $h = g^{\frac p{2-p}}$. Then $\|Tf\|_p = \|g\|_{\frac{2p}{2-p}}$, so $\|T\|$ coincides with the latter value (and is even attained). For $p=2$ choose some $x_0\in [0,1]$ such that $|g(x_0)| = \|g\|_\infty$. Let $\epsilon > 0$. Then there exists $\delta > 0$ such that for $x$ in the $\delta$-neighborhood $B_\delta$ of $x_0$ we have $|g(x)|^2\ge\|g\|_\infty^2-\epsilon$. Let $f\in L^2$ be such that $\|f\|_2=1$ and $\operatorname{supp}(f)\subset B_\delta$. Then $$ \|T\|^2\ge\|Tf\|_2^2 = \int_{B_\delta}|g(x)|^2|f(x)|^2\,dx \ge \|g\|_\infty^2-\epsilon. $$ Letting $\epsilon\to 0$ yields $\|T\| = \|g\|_\infty$.
Cartesian Closed Categories & Exponential Objects
The condition $$\text{hom}(a\times b,c)\cong\text{hom}(a,c^b)$$ (the isomorphism being natural) is the definition of $c^b$. The map $$a\mapsto\text{hom}(a\times b,c)$$ is a contravariant functor from $C$ to $\textbf{Set}$. What Mac Lane is asserting here is that this functor is representable (for all $b$ and $c$) and that we denote a representing object (unique up to isomorphism) by $c^b$.
Counting sum of lattice points
For simplicity, assume the $k$-dimensional sphere is centered at a lattice point. Then we can apply a linear transformation of this lattice to $R^n$, where $n < k$ is the dimension of the lattice; the $n$-dimensional sphere which is a "slice" of the $k$-dimensional sphere by the $n$-dimensional plane containing the lattice is transformed into an $n$-dimensional ellipsoid. Thus, we have an $n$-dimensional ellipsoid $E$ with center in $R^n$, and your set $S$ is the set of integer points inside $E$: $$ E =\{x \in R^n: \frac{x^2_1}{a_1^2} + ... + \frac{x^2_n}{a_n^2} = 1\} $$ $$ S = \{(x_1,...,x_n) \in Z^n: \frac{x^2_1}{a_1^2} + ... + \frac{x^2_n}{a_n^2} \le 1\} $$ You want a lower estimate of the number of elements in $V = S \oplus S$. Maybe try to estimate the number of elements in $V \setminus S$? We can consider "extreme radial" elements of $S$. It's easy to see $2n$ such elements: $r_i = (0,...,0, \pm [a_i], 0,...,0)$, $i=1,...,n$ (for simplicity suppose that all $a_i \ge 1$). As doubles of the "extreme radial" elements, all $2r_i = r_i + r_i \in V \setminus S$. Hence, we slightly improve the trivial lower bound: $$|V| \ge |S| + 2n$$ If we compute the "extreme radial" elements of $S$ more precisely, we can improve the estimate.
For what values of $\gamma > 0$ does $n^{\gamma} (\sqrt[n]{n} - 1)^2$ converge?
Hint: $$ n^{\frac{1}{n}}-1= \exp\left(\frac{\log(n)}{n}\right)-1= \frac{\log(n)}{n}+O\left(\left(\frac{\log(n)}{n}\right)^2\right) $$
Least value of $ab-cd$ is
Let $a=2\sin\alpha$, $b=2\cos\alpha$, $c=2\sin\beta$ and $d=2\cos\beta$. Then $4\sin\alpha\sin\beta+4\cos\alpha\cos\beta=0$ and hence $\cos(\alpha-\beta)=0$. Therefore, $\alpha=(n+\frac12)\pi+\beta$. \begin{align*} ab-cd&=2\sin2\alpha-2\sin2\beta\\ &=4\cos(\alpha+\beta)\sin(\alpha-\beta)\\ &=4\cos\left(\left(n+\frac12\right)\pi+2\beta\right)\sin\left(\left(n+\frac12\right)\pi\right) \end{align*} The least possible value is $-4$.
Completeness and Compactness of Cartesian Product of Metric Spaces
The proof of this is rather long, but straightforward. To show this space is complete, take a Cauchy sequence. Show that it projects to a Cauchy sequence in each coordinate. Since each factor space is complete, you have a natural candidate $x$ for convergence. You can find an $N$ large enough that the $i$th coordinate of the $n$th term of the Cauchy sequence is within $\epsilon/2$ of the $i$th coordinate of $x$ whenever $i<N$ and $n>N$, and so that $\sum_{i=N}^\infty 2^{-i}<\epsilon/2$. Then apply a standard $\epsilon/2$ argument by breaking the series at $i=N$. Compactness is proved via sequential compactness with essentially the same argument. Start with an arbitrary sequence. It has subsequences which converge in each coordinate. Carefully build a subsequence which converges in every coordinate by first taking a subsequence that converges in the first coordinate, then taking a subsubsequence which converges in the first two coordinates, and so on. By the above argument, this entire subsequence converges. To prove the converse of each part, you have one $X_i$ that is not complete (resp. compact), so there exists a bad sequence: a Cauchy sequence which doesn't converge (resp. a sequence no subsequence of which converges). Choose a sequence in the product space which matches your bad sequence in the $i$th coordinate, and is constant in the other coordinates. This sequence will behave poorly in the product space too.
Poisson process and expected value in combination of waiting time
I thought, you just do 5 x 2 = 10 minutes as expected value to have 5 people at the cash desk. But you could also see it as independent variables which are exponentially distributed. Can someone explain to me which method I need to use? People arrive at an average rate of one per two minutes, so the expected time to arrival of the fifth customer is indeed ten minutes.   This is because the interarrival times are iid exponential random variables. Linearity of expectation says the expected time to the fifth arrival equals the sum of the expected interarrival times. $$\mathsf E(T_5)=\mathsf E(T_1)+\mathsf E(T_2{-}T_1)+\mathsf E(T_3{-}T_2)+\mathsf E(T_4{-}T_3)+\mathsf E(T_5{-}T_4)$$ Now, the memoryless property of the Poisson Process entails that each of those interarrival times is an independent and identically exponentially distributed random variable. $$\mathsf E(T_5)=5\mathsf E(T_1)$$ And we know that the expected time to the first arrival is two minutes. $$\mathsf E(T_5)=10$$
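(A simulation sketch, not part of the original answer:)

import numpy as np

rng = np.random.default_rng(0)
interarrivals = rng.exponential(scale=2.0, size=(10**6, 5))   # mean 2 minutes each
T5 = interarrivals.sum(axis=1)                                # time of the 5th arrival
print(T5.mean())                                              # ~ 10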
Hyperbolic Manifold Visualization
This picture, like many pictures, is not intended to be analytically exact. It is instead intended to convey a general intuition. The general intuition here is applicable to any complete Riemannian manifold whatsoever; a hyperbolic manifold is just a special case of a complete Riemannian manifold, namely it is a complete Riemannian manifold of constant sectional curvature $-1$. The picture could just as well have been used in any Riemannian geometry book, with an alternate caption: "An illustration of the exponential map $\exp_x(v)$, which maps the tangent space $\mathcal T_x M$ at the point $x$ to the complete Riemannian manifold $M$".
How to correctly write $\max$
If you want to express that the term on the RHS becomes maximal when you plug in $p=\pi$, you can use argmax, like: $$\pi = \underset{p}{\text{argmax}} ( pq(p) - pc(p)).$$ If you want to express that $\pi$ is the maximum of the RHS, write: $$\pi = \underset{p}{\text{max}} ( pq(p) - pc(p)).$$ The term $\underset{p}{\max} \pi$ is well-defined, too. And it is $$\underset{p}{\max} \pi = \pi.$$ Hence: $$ \underset{p}{\text{max}}\pi = pq(p) - pc(p) \qquad \Leftrightarrow \qquad \pi = pq(p) - pc(p)$$ which is most likely not what you want to express.
How to Sort Covariance Matrix with Python
The covariance matrix has by definition the entries $\Sigma_{ij} = \operatorname{cov}(x(t_i), x(t_j))$. Now, you are interested in the covariance matrix of the reordered time series $\tilde t_k = t_{\pi(k)}$, where $\pi$ is a given permutation (e.g. the permutation that unsorts the time series). Then correspondingly, $$ \tilde \Sigma_{ij} = \operatorname{cov}(x(t_{\pi(i)}), x(t_{\pi(j)})) = \Sigma_{\pi(i), \pi(j)}$$ You have 2 options: to permute the rows/columns of a matrix, you can multiply by permutation matrices from the left/right: $$\tilde \Sigma = P^T \Sigma P \iff \Sigma = P\tilde \Sigma P^T$$ where $P$ is the permutation matrix induced by $\pi$. Alternatively, you can use the indices obtained by p = np.argsort(t_obs) directly, avoiding the actual matrix multiplication, via the indexing S[p][:,p]. Short demo: import numpy as np N = 5 t_org = np.arange(N) pi = np.random.permutation(N) x_org = np.array([np.random.randn(100)*k for k in range(N)]) S_org = np.cov(x_org) print("Covariance of sorted time steps", S_org, sep="\n") t_obs = t_org[pi] x_obs = x_org[pi] S_obs = np.cov(x_obs) print("Covariance of unsorted time steps", S_obs, sep="\n") #%% Using indexing S[p][:,p] p = np.argsort(t_obs) print("Reconstruction equals original:", S_obs[p][:,p] == S_org, sep="\n") #%% Alternative using Permutation matrix P = np.eye(N)[p] print("Permutation matrix", P, sep="\n") print("Reconstruction equals original:", P@S_obs@P.T == S_org, sep="\n")
Construction of segment of given length through an intersection of two circles
Hint: $ACSR$ is a rectangle. Do you see why $AC=RS=\frac12 LM$? Solution: construct - as was suggested - a right triangle $ABC$ whose hypotenuse $AB$ is the segment between the centers of the given disks, and one of the legs ($AC$) is congruent to $\frac12EF$. Draw through the point $K$ a line parallel to $AC$. It will intersect the circles in the points $L$ and $M$. Then $LM=EF$. The construction is possible only if $\frac12EF<AB$, in which case there are two solutions according to the number of possibilities to construct the triangle $ABC$.
Use of Trigonometric functions of angles beyond 90 degrees
Trigonometric functions are very commonly used to describe the motion of a rotating object. But once the amount of rotation passes 90 degrees, ...
Induction Proof of Algorithm [Greedy Graph Coloring]
Induction proof proceeds as follows: Is the graph simple? Yes, because of the way the problem was defined, a range will not have an edge to itself (this rules out one of the easiest ways to prove that a graph is not n-colorable). Does it hold for 0 or 1 vertex? Yes, trivially. How about two vertices (with one edge, because disconnected vertices do not affect chromatic number)? You should be able to work out the justification for this step, and it is arguably the first non-trivial case and should be used as your base case. Suppose you know that it holds for n vertices. Those vertices correspond to particular ranges. You could choose to prove that it then holds for n+1 vertices by adding another range (and an unknown number of edges) but this seems hard and potentially complicated. Instead, suppose it holds true for n edges. There are two ways that we could add an edge: by extending the endpoint of a nearby range until they intersect or by adding a new range entirely. Consider the output of the algorithm in either scenario, and you will have your proof. In general, the more of your work you post, the better help you will be able to receive. This site is not intended to solve homework problems without some demonstrated effort and understanding or lack thereof.
Fairness division problem
I don't know what "fair" means to you. There's the Shapley Value ( https://en.wikipedia.org/wiki/Shapley_value ). It is an axiomatic approach that captures ideas about fairness, then shows there is a unique way to divide the surplus among the players. There's the core https://en.wikipedia.org/wiki/Core_(game_theory). The idea here is that any kid could refuse to participate and go to the store themselves, or even convince some of the other kids to join them and deviate from the group. In simple models, there is a result called "equal treatment in the core", where kids with similar amounts of money and similar tastes get the same outcome. This is another way to formalize the idea of fairness as coming from bargaining power. You could also look at the Nash bargaining solution. If the question is just about maximizing candy purchases subject to wealth constraint and then dividing up the spoils, you could just use Pareto optimality. Put a welfare weight on each child, solve the utility maximization problem, then trace out the Pareto frontier as the weights vary. Everything on the frontier is efficient, and you can pick whatever outcome seems most fair to you.
For every natural number $n$, $ 3^{3n} - 1$ is divisible by $26$.
From the assumption that for a particular $k$ we have $3^{3k}-1$ is a multiple of $26$ (which does not mean $3^{3k}=26k$), we have to prove that $3^{3(k+1)}-1$ is a multiple of $26$. Note that $3^{3k+3}=27\cdot 3^{3k}$. But by the induction assumption $3^{3k}-1=26q$ for some integer $q$. So $27\cdot 3^{3k}-1=27(26q+1)-1=26(27q+1)$, and therefore $3^{3k+3}-1$ is a multiple of $26$.
Probability of cells on a grid with axis probability known
There's no way to calculate a probability since you didn't specify a distribution. I'll assume that what you meant is that the cards are independently chosen as red or blue with probability $\frac12$ each and then the row and column counts are provided, and you want the conditional probability of a given card being red or blue given the row and columns sums. To determine this conditional probability, you need the counts of configurations that are compatible with the given constraints. The probability for the given card to be red is the number of compatible configurations in which that card is red divided by the total number of compatible configurations. I doubt that there's a simple expression for these numbers for arbitrary row and columns sums, but since there are only $2^{25}\approx3\cdot10^7$ different configuration, it's easy to count the compatible configurations on a computer.
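To make the counting concrete, here is a brute-force sketch (not part of the original answer) on a 3x3 grid instead of 5x5, where 1 = red, 0 = blue, and the row/column sums are made-up example inputs:

from itertools import product

n = 3
row_sums = (2, 1, 0)    # example: reds per row
col_sums = (1, 1, 1)    # example: reds per column
cell = (0, 0)           # query: P(card at (0, 0) is red | the sums)

total = red = 0
for bits in product((0, 1), repeat=n*n):
    grid = [bits[i*n:(i + 1)*n] for i in range(n)]
    if all(sum(row) == s for row, s in zip(grid, row_sums)) and \
       all(sum(grid[i][j] for i in range(n)) == col_sums[j] for j in range(n)):
        total += 1
        red += grid[cell[0]][cell[1]]

print(red, total, red/total)    # 2 3 0.666... for this example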
Types of singularities, why is this an essential singularity
Note that a function $f(z)$ can be expanded as a Laurent series about a singularity $z_0$ as: $$f(z) = \sum_{n=0}^{\infty} a_n(z-z_0)^n + \sum_{n=1}^{\infty} \frac{b_n}{(z-z_0)^n}$$ If the second sum (the principal part) has infinitely many terms, then the singularity $z=z_0$ of $f(z)$ is called an essential singularity. Hence, the given function: $$f(z) = e^{\frac1{z}} = \sum_{n=0}^{\infty} \frac{1}{n!z^n}$$ has $z=0$ as an essential singularity.
Coding theory - Linear codes
For 1, ask yourself: can $0$ be in $v + V$, if $v \notin V$? For 2, write $a = v + v_1$ for some $v_1 \in V$. Also $b = v + v_2$ for some $v_2 \in V$. Now what is $a - b$?
Proving that $X_n= 2^n 1_{(0,\frac{1}{n})}, n \geq 1 $ converges to zero in probability
Notice that for a fixed $n \in \mathbb{N}$, $X_n(\omega) > 0$ if and only if $\omega \in (0, \frac{1}{n})$. Thus for any $\epsilon > 0$, $P(|X_n| > \epsilon)= P((0,\frac{1}{n})) = \frac{1}{n}$. Edit: changed $x$ to $\omega$ to make notation a bit clearer
Find gradient of this implicit function
EDIT: Abhinav pointed out a mistake, which has been corrected for posterity. To find $\frac{\partial z}{\partial x}$ by implicit differentiation means to differentiate both sides of the equation with respect to $x$, while remembering that $z$ is implicitly a function of $x$ (we didn't write it with function notation such as $z(x)$). Don't forget to use the product rule when differentiating a product! $$ \begin{align} xz + yz^2 - 3xy - 3 &= 0 \\ \tfrac{\partial}{\partial x} \left[ xz + yz^2 - 3xy - 3 \right] &= \tfrac{\partial}{\partial x} \left[ 0 \right] \\ \tfrac{\partial}{\partial x} \left[ xz \right] + \tfrac{\partial}{\partial x} \left[ yz^2 \right] - \tfrac{\partial}{\partial x} \left[ 3xy \right] - \tfrac{\partial}{\partial x} \left[ 3 \right] &= 0 \\ \left[ 1 \cdot z + x \cdot \tfrac{\partial z}{\partial x} \right] + y \cdot 2z \tfrac{\partial z}{\partial x} - 3y - 0 &= 0 \\ z + x \tfrac{\partial z}{\partial x} + 2yz \tfrac{\partial z}{\partial x} - 3y &= 0 \end{align} $$ This yields $$ \frac{\partial z}{\partial x} = \frac{3y - z}{x + 2yz}. $$ Try to find $\tfrac{\partial z}{\partial y}$ in an analogous way, and post your result in the comments.
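(A symbolic cross-check with sympy's idiff, not part of the original answer:)

import sympy as sp

x, y, z = sp.symbols('x y z')
F = x*z + y*z**2 - 3*x*y - 3
print(sp.simplify(sp.idiff(F, z, x)))   # (3*y - z)/(x + 2*y*z)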
What is the smallest sigma algebra whose elements are all m*-measurable?
$A=\{\emptyset, \mathbb{R}\}$ is the counterexample here.
Stable Method of orthogonal projection onto a subspace with the help of Moore-Penrose inverse.
Assume that $A\in\mathbb{R}^{m\times n}$ has full column rank and that $A=QR$, $Q\in\mathbb{R}^{m\times n}$, $R\in\mathbb{R}^{n\times n}$, is its "economical" QR factorization. The dense QR factorization in Matlab is (most likely) implemented using a stable Householder orthogonalization which gives a computed $Q$ which is not exactly orthogonal but is very close to being an orthogonal matrix in the sense that there is an $m\times n$ orthogonal matrix $\hat{Q}$ such that $$ A+E=\hat{Q}R, \quad \|Q-\hat{Q}\|_2\leq c_1(m,n)\epsilon, \quad \|E\|_2\leq c_2(m,n)\epsilon\|A\|_2, $$ where $c_i$ are moderate constants possibly depending on $m$ and $n$ and $\epsilon$ is the machine precision (for double precision $\approx 10^{-16}$). In the finite precision calculation, we like the orthogonal matrices because they do not amplify the errors. Indeed, using the assumption above we can show that $$ \|\mathrm{fl}(QQ^Tu)-\hat{Q}\hat{Q}^Tu\|_2\leq c_3(m,n)\epsilon\|u\|_2. $$ Although Matlab uses SVD to compute the pseudo-inverse, we can assume that it is computed using the QR factorization. The final reasoning is the same. We have then $A^+=R^{-1}Q^T$ but this time with a little bit more technical work this gives $$ \|\mathrm{fl}(AA^{+}u)-\hat{Q}\hat{Q}^Tu\|_2= \|\mathrm{fl}(QRR^{-1}Q^Tu)-\hat{Q}\hat{Q}^Tu\|_2\leq c_4(m,n)\epsilon\kappa_2(A)\|u\|_2, $$ where $\kappa_2(A)=\|A\|_2\|A^+\|_2$ is the spectral condition number of $A$. Note that in finite precision $RR^{-1}\neq I$ and the error committed by this operation is proportional to the conditioning of $R$ which approximately (note that $R$ is not the exact R-factor) is equal to that of $A$. To conclude, both methods you tried are bad in the sense that the error depends on the condition number of $A$. You cannot see that much difference now since $A$ is random and quite well conditioned. The errors should be more visible if $A$ were more ill-conditioned. EDIT: Although it might seem that the second approach gives somewhat more accurate results, it is not true in general. The following snippet eventually finds a counterexample: m = 100; n = 10; kappa = 1e6; while true U = orth(rand(m, n)); D = diag(logspace(0, -log10(kappa), n)); V = orth(rand(n, n)); A = U * D * V; u = rand(m, 1); x_1 = pinv(A')*(A'*u); x_2 = A*(pinv(A)*u); [Q, ~] = qr(A, 0); x = Q*(Q'*u); error_1 = norm(x_1 - x) / norm(x); error_2 = norm(x_2 - x) / norm(x); if error_1 < error_2 disp('Gotcha!'); break; end end
What's the algorithm for computing polynomial regression
It is linear least squares with a design matrix built of monomials: $${\bf c_o}=\min_{\bf c}\|{\bf \Phi c - d}\|_2^2$$ where $${\bf\Phi} = [{\bf 1,x,x^2,\cdots,x^n}]$$ Here ${\bf 1, x, x^2}, \dots$ are column vectors of the sampled values, $\bf c$ is the column vector of coefficients combining them linearly, and $\bf d$ is the vector of data points to fit (the function values).
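A minimal numpy sketch of this (the data values are made up):

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # sample points
d = np.array([1.1, 2.9, 9.2, 19.1, 33.0])   # data to fit
n = 2                                        # polynomial degree

Phi = np.vander(x, n + 1, increasing=True)   # columns [1, x, x^2]
c, *_ = np.linalg.lstsq(Phi, d, rcond=None)  # solves min_c ||Phi c - d||_2^2
print(c)                                     # fitted coefficients c_0, c_1, c_2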
How to iterate the geometric mean
$$G_{n} = \bigg(\prod_{i=1}^n x_i\bigg)^{1/n} = \bigg(x_n\prod_{i=1}^{n-1} x_i\bigg)^{1/n} = x_n^{1/n}\bigg(\prod_{i=1}^{n-1} x_i\bigg)^{1/n} = \left(x_n\left(G_{n-1}\right)^{n-1}\right)^{\frac 1n}$$
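(A small sketch of the recurrence in Python, not part of the original answer:)

def running_geomean(xs):
    g = None
    for n, x in enumerate(xs, start=1):
        g = x if n == 1 else (x*g**(n - 1))**(1.0/n)   # G_n = (x_n G_{n-1}^{n-1})^{1/n}
        yield g

print(list(running_geomean([1, 2, 4, 8])))
# final value ~ 2.8284 = (1*2*4*8)**0.25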
How to simplify $(3x+2)\cdot2\cdot25\cdot2 \div 50 - (x+4)$?
In the order of operations, the first step is removing parentheses. However, this is necessary only if the terms within the parentheses can be simplified further, which is not true in this case. Removing parentheses implies simplifying the terms within them first. Therefore in your problem there is no need to open the parentheses, and your solution is correct.
How can I find the common axis of 2 cones in space that have the same base radius but different heights?
Let $\mathbf{v}$ be a unit vector along the common axis of the cones, let $\mathbf{a}, \mathbf{b}$ be the vectors joining the vertex to the points on the yellow and blue cones' bases. Since we know the radii and the height of these cones, we can compute $\theta_1, \theta_2$ the angles between $\mathbf{v}, \mathbf{a}$ and $\mathbf{v}, \mathbf{b}$ respectively. So $\mathbf{v} \cdot \mathbf{a} = |\mathbf{a}|\cos{\theta_1} $, $\mathbf{v} \cdot \mathbf{b} = |\mathbf{b}|\cos{\theta_2}$ and $|\mathbf{v}| = 1$ - Three equations with three unknowns so we can solve for $\mathbf{v}$.
Can we get the formula for $\prod\limits_{k=0}^n{(1+2^k)^2}$ in terms of $n$?
As Cameron Williams pointed out, it suffices to consider the non-squared version. But note that $$ \prod_{k=0}^{n} (2^{k} + 1) = \prod_{k=0}^{n} 2^{k}(1 + 2^{-k}) = \left( \prod_{k=0}^{n} 2^{k} \right)\left( \prod_{k=0}^{n} (1 + 2^{-k}) \right). $$ Taking logarithms, we find that $$ \log \prod_{k=0}^{n} 2^{k} = \sum_{k=0}^{n} k \log 2 = \frac{n(n+1)}{2}\log 2 \quad \Longrightarrow \quad \prod_{k=0}^{n} 2^{k} = 2^{n(n+1)/2}. $$ On the other hand, $$ \log \prod_{k=0}^{n} (1 + 2^{-k}) = \sum_{k=0}^{n} \log(1 + 2^{-k}), $$ which converges as $n \to \infty$ by the limit comparison test with $\sum_{k=0}^{n} 2^{-k}$. So if we denote $$C = \prod_{k=0}^{\infty} (1 + 2^{-k}),$$ then we get $$ \prod_{k=0}^{n} (2^{k} + 1) \sim C\, 2^{n(n+1)/2}. $$ A relatively fast-converging series expansion for $\log C$ can be obtained by Fubini's theorem: $$ \log C = \log 2 + \sum_{j=1}^{\infty} \frac{(-1)^{j-1}}{j(2^{j} - 1)}, \qquad C \approx 4.7684620580627434483 \cdots. $$
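If one wants to check this numerically, here is a small Python sketch comparing the partial product with the series for $\log C$ (the truncation points are arbitrary):

    from math import exp, log

    # Partial product for C = prod_{k>=0} (1 + 2^{-k})
    C_prod = 1.0
    for k in range(60):
        C_prod *= 1.0 + 2.0 ** (-k)

    # Series for log C from above: log 2 + sum_j (-1)^{j-1} / (j (2^j - 1))
    logC = log(2) + sum((-1) ** (j - 1) / (j * (2 ** j - 1)) for j in range(1, 60))

    print(C_prod, exp(logC))  # both ~ 4.76846205806...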
What's a good book on advanced linear algebra?
N. Jacobson, Lectures in Abstract Algebra, vol. 2: Linear Algebra.
M. M. Postnikov, Lectures in Geometry: Semester II. Linear Algebra and Differential Geometry.
Show ex$(n, H)\leq \left\lfloor\frac{n^2+n}{4}\right\rfloor$
Note that for any two adjacent vertices $u$ and $w$ we have $d(u) + d(w) \leq n + 1$ (otherwise $G$ would contain a copy of $H$), and so \begin{align*} \sum_{e = uw \in E} (d(u) + d(w)) \leq e(G)\cdot (n+1). \end{align*} On the other hand, this sum counts each degree $d(v)$ once for every edge incident to $v$, so it is also equal to $\displaystyle \sum_{v \in V} d(v)^2$. Thus, by Cauchy-Schwarz, we have \begin{align*} \frac{1}{n} \left(\sum_v d(v) \right)^2 \leq \sum_v d(v)^2 \leq e(G)\cdot (n+1), \end{align*} but, by Euler's degree sum formula, we also have $\displaystyle \frac{1}{n} \left(\sum_v d(v) \right)^2 = \frac{1}{n}\, 4\, e(G)^2$. Thus, \begin{align*} \frac{1}{n}4 e(G)^2 \leq e(G)\cdot(n+1) \Rightarrow e(G) \leq \frac{n(n+1)}{4}, \end{align*} as desired. Notice that this is exactly the proof of Mantel's theorem, but with $n + 1$ replacing $n$ where it makes sense.
Transcendental approximation
Let's write this as $$ \dfrac{h}{\ln h} = x $$ where $h = e h_c/r_0$ and $x = \dfrac{eb}{8 \pi r_0 (1+\mu) f}$. The solution of this is $h = - x W(-1/x)$ where $W$ is a branch of the Lambert W function. In this case I think you want the $-1$ branch, which gives a real value when $-1/e < -1/x < 0$ (this is the mathematical $e = \exp(1)$: I don't know if yours is supposed to be that, or if it really is $2.71$). As $x \to + \infty$, $h \to +\infty$ as well. But note that $h/x = \ln h \to \infty$, so your approximation could only possibly be good for a very restricted range of $x$. Essentially you're treating $\ln h$ as constant (about $3.3$), so this is only good if $h \approx \exp(3.3) \approx 27$.
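A hedged numerical check in Python/SciPy (the value of $x$ below is just a sample; the $-1$ branch is real only for $x > e$):

    import numpy as np
    from scipy.special import lambertw

    x = 10.0                                   # assumed sample value, x > e
    h = -x * lambertw(-1.0 / x, k=-1).real     # h = -x * W_{-1}(-1/x)
    print(h, h / np.log(h))                    # the second number reproduces x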
Questions on Limit with Logarithm
You can also rewrite it as: $$\lim_{n \rightarrow \infty} \frac{n^{1.001}}{n \log{n}} = \lim_{n \rightarrow \infty} \frac{n^{0.001}}{\log{n}} = \lim_{n \rightarrow \infty} \frac{0.001\, n^{-0.999}}{\frac{1}{n}} = \lim_{n \rightarrow \infty} 0.001\, n^{0.001} = \infty,$$ which might be a little clearer.
What does the intersection of curves on a contour plot signify?
Not necessarily. Consider $f(x,y)=x^2-y^2$ and the level curve $f(x,y)=0$. It is formed by the two straight lines $y=\pm x$, which intersect at $(0,0)$; the slopes of the lines are $\pm1$. What is happening is that $\nabla f(0,0)=(0,0)$. This implies that you cannot apply the implicit function theorem and represent the level curve as $y=y(x)$ or as $x=x(y)$.
Holomorphic symplectic manifold
1) The converse is not true. There has to be some compatibility between the complex structure and the symplectic structure. There are manifolds that admit both complex structures and symplectic structures, but without any such pair of structures being compatible. An example is the Kodaira-Thurston manifold. Let $K=T^3 \times \mathbb{R}/\mathbb{Z}$, where $j \in \mathbb{Z}$ acts by $$\psi: (x,y,z,t) \mapsto (x,y + jx, z, t+j).$$ The symplectic form $dx \wedge dy + dz \wedge dt$ is invariant under $\psi$. The projection $\pi : K \to T^2$ given by $(x,y,z,t) \mapsto (z,t)$ has fibers that are symplectic tori $T^2$, with symplectic form the restriction of $dx \wedge dy$. The universal cover is $\mathbb{R}^4$ with deck group $G$, which as a set is $\mathbb{Z}^4$ equipped with the product $$(\mathbf{j}',\mathbf{k}') * (\mathbf{j},\mathbf{k}) = (\mathbf{j}+\mathbf{j}',\, A_{\mathbf{j}'} \mathbf{k} + \mathbf{k}'),$$ where $\mathbf{j}=(j_1,j_2)$, $\mathbf{k}=(k_1,k_2)$ and $A_{\mathbf{j}}$ is the matrix $$A_{\mathbf{j}} = \begin{pmatrix} 1 & j_2 \\ 0 & 1 \end{pmatrix}.$$ So $\pi_1(K)=G$. Note that the subgroup $H=\langle (j_1,0,0,0) \rangle \cong \mathbb{Z} \leq G$ is a normal subgroup, and is the smallest normal subgroup such that the quotient $G/H$ is abelian; hence $H_1(K)$, which is the abelianization of $\pi_1(K)$, is equal to $\mathbb{Z}^3$. Hence the first Betti number is 3. But the first Betti number of a compact Kaehler manifold must be even, so the manifold $K$ is not Kaehler and, in particular, not hyper-Kaehler. 2) Any Kaehler manifold $M$ (and therefore any hyper-Kaehler manifold) has a polarization given by $\mathcal{P}:=T_{(1,0)}(M)=\{ (x,v) \in T_x(M)^{\mathbb{C}} \mid J_x(v)=iv \}$, where $J : TM^{\mathbb{C}} \to TM^{\mathbb{C}}$ is the integrable almost complex structure induced by the complex structure on $M$.
Inclusion Exclusion Probability Proof Using a Partition of the Space
Claim: $\qquad P(A_1 \cup A_2 \cup \dots \cup A_n)=s_1 - s_2 + s_3 - \dots + (-1)^{n+1} s_n \qquad\qquad\qquad\qquad(1)$ Take any set $B$ in the partition composed of the intersection of $k$ of the $A_i$ sets and $n-k$ of the $A_i^c$ sets, for some $0\leq k\leq n$. It's easy to see that the set $B$ occurs $\binom{k}{1}$ times in $s_1$, $\binom{k}{2}$ times in $s_2$, $\ldots$, $\binom{k}{k}$ times in $s_k$, and $0$ times in the remaining sums $s_{k+1},\ldots,s_n$. Thus, for $k=0$ (which means $B=A_1^c\cap\cdots\cap A_n^c$) the set $B$ is counted $0$ times on the RHS of $(1)$, and for $1\leq k\leq n$ the count is: $$\binom{k}{1} - \binom{k}{2} + \binom{k}{3} - \cdots +(-1)^{k+1}\binom{k}{k} = \binom{k}{0} - (1-1)^k = 1.$$ On the LHS of $(1)$ we have the same count: $B$ is counted exactly once if it is contained in at least one of the $A_i$ sets, and zero times if it is contained in none of them. This proves the result.
Significant digits/rounding
mean and standard deviation: round to one more decimal place than your original data. [...] This suggestion follows Sullivan, Michael, Fundamentals of Statistics 3/e (Pearson Prentice Hall, 2011) page 118. Source So, I suggest keeping one digit after the decimal point when reporting the weighted average.
Differentiability in the complex plane and in $\Bbb R^2$.
One can prove the following theorem (see any basic complex analysis textbook): the following two statements are equivalent for a function $$f:A \subseteq \mathbb{C} \to \mathbb{C}$$ given by $$f(z) = f(x+yi) = u(x,y) + iv(x,y).$$ (1) $f$ is complex differentiable at $a = c + di \in A$. (2) $u,v: A \subseteq \mathbb{R}^2 \to \mathbb{R}$ (viewing $A$ as a subset of $\mathbb{R}^2$) are differentiable (in the multivariate sense) at $(c,d)$ AND $f$ satisfies the Cauchy-Riemann equations at $a$. From this, one sees that complex differentiability is much stronger than regular differentiability.
How to find the unit normal vector for typical solid region?
Describe the surface of your region in the form $f(x,y,z)=0$. Then the unit normal vector is given by $$\hat n= \frac{\vec \nabla f}{ |\vec \nabla f| } = \frac{ \left< \frac { \partial f}{\partial x}, \frac { \partial f}{\partial y}, \frac { \partial f}{\partial z} \right >}{ \sqrt{ \left( \frac { \partial f}{\partial x} \right)^2+\left( \frac { \partial f}{\partial y} \right)^2 +\left( \frac { \partial f}{\partial z}\right)^2 } }$$
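This is easy to automate symbolically; here is a sketch in Python/SymPy (the unit sphere below is an assumed example surface):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x**2 + y**2 + z**2 - 1                              # assumed surface: unit sphere
    grad = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])    # gradient of f
    n_hat = grad / grad.norm()                              # unit normal field
    print(n_hat.subs({x: 1, y: 0, z: 0}))                   # normal at (1,0,0): (1, 0, 0)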
Discrete and combinatoric mathematics (Functions)
Assuming $f(x)=ax^2-b$ and $g(x)=cx+d$: $(f\circ g)(x)= a(cx + d)^2 -b = ac^2x^2 + 2acdx + ad^2 - b$ and $(g\circ f)(x) = c(ax^2 - b) + d = acx^2 - bc + d$. Requiring $(f\circ g)(x)=(g\circ f)(x)$ for all $x$ implies $ac^2x^2=acx^2$, $2acdx=0$ and $ad^2-b=-bc+d$. The first equation implies that $c=1$ or $c=0$ (or $a=0$), with $a$ otherwise arbitrary. The second equation, $2acd=0$, implies that $a=0$, $c=0$ or $d=0$. The third one, $ad^2-b=-bc+d$, becomes $ad^2-b=-b+d$ if $c=1$, which implies $d=0$ with $b$ arbitrary. If $c=0$ then $ad^2-b=d$, which is satisfied e.g. by $b=0$, $d=1$, $a=1$, or by $d=0$, $b=0$ and $a$ arbitrary. Putting it all together: if $c=1$ then $a$ and $b$ are any numbers and $d=0$; if $c=0$ then, for instance, $b=0$, $d=1$, $a=1$, or $b=0$, $d=0$, $a$ any number. So I think you are not lost: there is more than one answer to the problem. I just gave several of them; there are more.
What is the main difference between a free tree and a rooted tree?
A rooted tree comes with one of its vertices specially designated to be the "root" node, such that there's an implicit notion of "towards the root" and "away from the root" for each edge. In a free tree there's no designated root vertex. You can make a free tree into a rooted one by choosing any of its vertices to be the root.
How to find total possibilities of 4 events?
First, split it into disjoint events of equal probability:

    C1 | C2 | C3 | C4 | Score
    ---|----|----|----|------
     1 |  1 |  1 |  1 |   4
     1 |  1 |  1 |  3 |   6
     1 |  1 |  3 |  1 |   6
     1 |  1 |  3 |  3 |   8
     1 |  3 |  1 |  1 |   6
     1 |  3 |  1 |  3 |   8
     1 |  3 |  3 |  1 |   8
     1 |  3 |  3 |  3 |  10
     3 |  1 |  1 |  1 |   6
     3 |  1 |  1 |  3 |   8
     3 |  1 |  3 |  1 |   8
     3 |  1 |  3 |  3 |  10
     3 |  3 |  1 |  1 |   8
     3 |  3 |  1 |  3 |  10
     3 |  3 |  3 |  1 |  10
     3 |  3 |  3 |  3 |  12

Then, count the number of combinations with the specific score that you're interested in:

The number of combinations that sum up to $4$ is $1$.
The number of combinations that sum up to $6$ is $4$.
The number of combinations that sum up to $8$ is $6$.
The number of combinations that sum up to $10$ is $4$.
The number of combinations that sum up to $12$ is $1$.

Finally, divide this number by the total number of combinations, which happens to be $16$:

$P(\text{score}= 4)=\frac{1}{16}$
$P(\text{score}= 6)=\frac{4}{16}$
$P(\text{score}= 8)=\frac{6}{16}$
$P(\text{score}=10)=\frac{4}{16}$
$P(\text{score}=12)=\frac{1}{16}$
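The same enumeration can be scripted; a minimal Python sketch (assuming, as in the table, four independent equiprobable draws from $\{1,3\}$):

    from itertools import product
    from collections import Counter
    from fractions import Fraction

    # Enumerate all 2^4 = 16 outcomes and tally the scores.
    scores = Counter(sum(combo) for combo in product([1, 3], repeat=4))
    for score, count in sorted(scores.items()):
        print(score, Fraction(count, 16))  # 4: 1/16, 6: 1/4, 8: 3/8, 10: 1/4, 12: 1/16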
On a special type of ideal in local Artinian ring
No. For instance, let $k$ be a field and let $R=k[x]/(x^3)$, with maximal ideal $\mathfrak{m}=(x)$. Then $J=(x^2)$ satisfies $J^2=\mathfrak{m}J=0$ but $J^2\neq \mathfrak{m}^2$.
Why do we want *unitary* representations of locally compact groups into $B(H)$?
After thinking about this question for a few days, I've come to a few conclusions and observations which I will type here for posterity, I suppose. In the finite and compact cases, we can do a group-averaging trick to make the representations unitary. This is not guaranteed in the locally compact setting. In the finite-dimensional case, we can develop the group ring. This is extremely useful in understanding the irreducible representations of the group, since any irreducible representation appears in the left regular representation, and the left regular representation occurs naturally in the group ring. Formulating the group ring as the set of elements of the form $\sum_{g\in G}f(g)g$, we know that the product of two elements $\tilde{f} = \sum_hf(h)h$ and $\tilde{g}=\sum_hg(h)h$ is given by $\tilde{f}\tilde{g} = \sum_h\sum_{h'}f(h)g(h')hh' = \sum_h\left(\sum_{h'}f(h')g(h'^{-1}h)\right)h$. We naturally see a convolution arise in this expression, so we can identify the functions $f$ and $g$ with elements of the $L^1(G)$ algebra. This gives us two paths to take when generalizing: we can either define the group ring of a locally compact group to be given by elements of the form $\int_G f(g)g\,dg$, where this integral is understood as a formal integral and $f\in L^1(G)$, or we can start by asking that the group ring consist of elements of the form $\int_G f(g)g$, where $f$ is continuous and of compact support (as this is the closest analogy we have to "finiteness" without ostensibly being zero). But since we want to create a Banach algebra from this set and convolution lies at the heart of the group ring, we would be forced to complete $C_c(G)$ with respect to the $L^1$ norm, and we would end up with $L^1(G)$ again anyway. With this in mind, if one were to go through the machinery of looking at the representation lifted to the group ring, as one does in the finite case, one would find that if $\rho:G\rightarrow GL(H)$ for some Hilbert space $H$, then $\lVert\rho(g)\rVert\le M < \infty$ for all $g\in G$ in order to have meaningful operators in $B(H)$. Since the $\rho(g)$ are uniformly bounded in norm, taking cues from the finite and compact cases, we might as well ask that $\rho(g)$ be unitary. Suppose $\rho:G\rightarrow GL(H)$ is a group homomorphism. One can somewhat run this thought process in reverse and ask the question: what if we work with general (formal) objects of the "group ring" given by $$\mathbb{C}[G] = \left\{\int_Gf(g)g\,dg:\left\lVert\left(\int_Gf(g)\rho(g)\,dg\right)h\right\rVert<\infty\,\text{ for all }\,h\in H\right\}?$$ Here I have taken the approach that the formal object $\left(\int_Gf(g)g\,dg\right)h = \int_Gf(g)\rho(g)h\,dg$, akin to lifting the representation from the group to the group ring in the finite case. The restriction that $\left\lVert\left(\int_Gf(g)\rho(g)\,dg\right)h\right\rVert<\infty$ merely forces $\int_Gf(g)\rho(g)\,dg$ to be a well-defined operator in $B(H)$. By the uniform boundedness principle, I believe this is equivalent to $\left\lVert\int_Gf(g)\rho(g)\,dg\right\rVert<\infty$. No real restrictions have been placed on either $f$ or $\rho$ at this stage in terms of their analytic properties. From this we can see that, depending on how $\rho$ behaves, $f$ could have wildly varying behavior. For instance, consider $G = (\mathbb{R}^+,\cdot)$ and $\rho(x) = x^3I$. This is a representation, but in order for $\left\lVert\int_Gf(g)\rho(g)\,dg\right\rVert<\infty$, $f$ has to have rather restrictive decay properties.
Likewise if we consider the opposite situation: let $\rho(x) = x^{-3}I$, again this is a representation but $f$ could have linear growth and still $\int_Gf(g)g\,dg$ would be a well-defined element in $\mathbb{C}[G]$. So we see that without some restrictions on $\rho$, we cannot say anything meaningful about the group ring. Again, taking cues from the finite and compact cases, we might as well take $\rho$ to be unitary. Making this restriction nearly forces the $L^1(G)$ algebra to pop out naturally (upon considering some norm arguments and the like).
Dealing with an extra state in a Kalman filter
According to your description, I deduce that your system is not fully observable. So unless you add additional sensors, which would make your system fully observable, I believe there is nothing else you can do to improve the estimate of "north". One cheat that could artificially keep the covariance smaller is ensuring that the contribution of the process covariance to the north-state is zero (or really small). This, however, won't improve the estimate of north. Another option is to reset the covariance at regular time intervals, or when the largest eigenvalue of the covariance exceeds a certain threshold. There might be some clever ways to do this reset such that you lose no, or minimal, uncertainty information about the other states. A very different option might be to use a particle filter instead of a Kalman filter. Namely, I assume that north should be a direction, so it can always be represented by an angle in the interval between zero and 360 degrees; therefore the uncertainty should also be bounded by this interval. So for a particle filter you can use the modulo operation to enforce this interval.
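For the particle-filter route, the wrap-around is just a modulo after each propagation step; a minimal sketch in Python (the motion model and noise level are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def propagate(particles_deg, rate_deg_s, dt, noise_std, rng):
        # Assumed simple motion model plus process noise, then wrap to [0, 360).
        moved = particles_deg + rate_deg_s * dt \
                + rng.normal(0.0, noise_std, particles_deg.shape)
        return np.mod(moved, 360.0)

    def circular_mean(particles_deg):
        # Average on the circle, so that 359 and 1 average to 0, not 180.
        rad = np.deg2rad(particles_deg)
        return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360.0

    particles = rng.uniform(0.0, 360.0, 1000)   # initial "north" hypotheses
    particles = propagate(particles, rate_deg_s=0.0, dt=1.0, noise_std=0.5, rng=rng)
    print(circular_mean(particles))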
Convergence in probability for a particular sequence of r.v.'s
Recall that in order to show that $X_n \to 0$ in probability, you must show that for every $\epsilon > 0$, we have $P(|X_n| > \epsilon) \to 0$. Now go back to the definition of sequence convergence: you have to show that for any $\delta > 0$ there exists $N$ such that for all $n \ge N$, we have $P(|X_n| > \epsilon) < \delta$. Can you see how to choose such an $N$? Keep in mind that $N$ is allowed to depend on both $\delta$ and $\epsilon$!
What does a summation mean and how do you compute it?
It just means: for each $(2^k- 2^{k-1})$, substitute the value of $k$ and add it to the next $(2^k- 2^{k-1})$ with the next value of $k$ substituted in. $\sum_{k=1}^{10}(2^k- 2^{k-1})$ is shorthand for this, with $\sum$ meaning summation (do this 10 times). So $$\sum_{k=1}^{10}(2^k- 2^{k-1})= (2^1-2^{1-1}) + (2^2-2^{2-1}) + (2^3-2^{3-1})+ (2^4-2^{4-1})+ (2^5-2^{5-1})+ (2^6-2^{6-1})+ (2^7-2^{7-1})+ (2^8-2^{8-1})+ (2^9-2^{9-1})+ (2^{10}-2^{10-1}).$$ I wrote this out explicitly so you could get a 'feel' for what is going on. However, you could just note that $$2^k- 2^{k-1}=2^k- 2^{k}\times 2^{-1}=2^k(1-2^{-1})=2^k\left(1-\frac12 \right)=\cfrac{2^k}{2}=2^{k-1},$$ so $$\sum_{k=1}^{10}(2^k- 2^{k-1})=\sum_{k=1}^{10}2^{k-1},$$ which makes your life easier. You can even see this intuitively by looking at the sum I wrote out in full and observing the cancellation between corresponding terms: the series telescopes. Now suppose the upper limit is a very large number, say $k=1000$. The series $$1+2+4+8+16+\cdots=2^0+2^1+2^2+2^3+2^4+\cdots$$ is a geometric progression, so you can use the formula for the sum of the first $n$ terms of a geometric series, $\cfrac{a(1-r^{n})}{1-r}$, where $a$ is the first term and $r$ is the common ratio. Here $r=\cfrac{\mathrm{next}\space \mathrm{term}}{\mathrm{previous}\space \mathrm{term}}=\cfrac84=2$, $a=1$ and $n=1000$: $$\sum_{k=1}^{1000}2^{k-1} =\cfrac{1\cdot(1-2^{1000})}{1-2} =\cfrac{1-2^{1000}}{-1} =2^{1000}-1.$$ This is far too large to evaluate by hand, but a computer does it instantly, as in the snippet below.
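Here is that computation in Python (exact integer arithmetic, so $k=1000$ is no problem):

    n = 10
    direct = sum(2**k - 2**(k - 1) for k in range(1, n + 1))
    print(direct, 2**n - 1)   # 1023 1023: the closed form matches

    big = 2**1000 - 1         # the k = 1000 case
    print(len(str(big)))      # 302: the answer has 302 digits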
Simplifying $\left({\sqrt{x} + \frac{1}{\sqrt{x}}}\right)^2 - \left({\sqrt{x} - \frac{1}{\sqrt{x}}}\right)^2 $
Using $a^2-b^2 = (a+b)(a-b)$ we get $$\Big(\sqrt{x} + \frac{1}{\sqrt{x}} + \sqrt{x} - \frac{1}{\sqrt{x}}\Big)\Big(\sqrt{x}+\frac{1}{\sqrt{x}} - \sqrt{x}+\frac{1}{\sqrt{x}}\Big) = \big(2\sqrt{x}\big)\big(\frac{2}{\sqrt{x}}\big) = 4$$ Hence, we get our answer as $4$. Hope it helps.
Induced map by $\mathbb{R}P^2\to\mathbb{C}P^2$ in homology.
I like this strategy, though maybe just take $a_0 = 0$, $a_1 = 1$, $a_2 = -i$ so that we are intersecting $[x_0 : x_1 : x_2]$ with $[z_0 : z_1 : iz_1]$. The only way to scale the last two coordinates to both be real is if $z_1 = 0$; in that case we are left with the points $[z_0: 0 : 0]$. Up to rescaling, there is only one such point, $[1 : 0 : 0]$. This is of course real. We should now check that the intersection at $[1:0:0]$ is transversal. I would write $T_{[1:0:0]} \Bbb{CP}^2$ as $\Bbb C^2$, generated by the tangent vectors changing the second or third coordinate by some complex number. $T\Bbb{RP}^2$ is $\Bbb R^2 \subset \Bbb C^2$, the real elements; the complex line $\Bbb{CP}^1$ has tangent space given by the complex subspace $\Bbb C \subset \Bbb C^2$ generated by $(1,i)$. These two vector spaces are in direct sum: their intersection is zero (a vector $(\lambda, i\lambda)$ cannot have both coordinates real unless $\lambda = 0$), and they sum to the dimension of the total space. So this is a transversal intersection, as desired, and the intersection product is indeed 1.
"Eilenberg-MacLane property" for the classifying space of a groupoid
It is helpful to consider the fundamental groupoid $\pi_1(X,S)$ of a space $X$ for a set of base points $S$. Similarly we consider a groupoid $G$ as given with its object set say $S$. Its classifying space $BG$ should thus be considered as having a set of base points $S$. Then $\pi_1(BG,S)\cong G$. For more on this idea see the paper Modelling and computing homotopy types: I. Note that the homotopy groups $\pi_i(BG,x)$ at any base point $x$ are zero for $i >1$.
Finding invertible matricies without doing row operations
Note that in matrix $2$, the fifth row is exactly $-2$ times the first row, hence the determinant is zero. In matrix $1$, the second column and the third column are $4$ times and $3$ times the column vector $[1,3,-2,0,-1]^T$ respectively; two proportional columns force the determinant to be zero here too. The third matrix is invertible, since its inverse can be explicitly written down: it is the reverse rotation matrix. That is, there exists a matrix (the reverse rotation matrix) with which you can compose on the left and on the right to get the identity. The fourth matrix has rank $2$: it essentially projects onto a two-dimensional space, and is therefore not invertible by the fifteenth(!) point of the rank theorem. In general, if people ask you to inspect matrices and tell whether or not the determinant is zero, you will have to look for patterns of the kind I looked for. Moreover, the operations determined by these matrices (such as rotation / projection / scaling) carry a certain geometric meaning, which helps you propose candidates for a possible inverse. The key, of course, is the IMT, but there are some surprises in store as well. There are some interesting examples of matrices that look very complicated but are invertible. These include special Vandermonde matrices, Hankel matrices, and the (not so well known) strictly diagonally dominant matrices, which actually are more helpful than we might think. So I would also look for these properties in a question of the kind you were given.
Difference between "algebraically closed field" and the "relatively algebraically closed field"?
Suppose $K$ is a field extension of $F$. We say $F$ is algebraically closed in $K$ if for any polynomial $p(x) \in F[x]$ and any root of $p(x)$, $\alpha$, if $\alpha \in K$, then $\alpha \in F$. The point here is that there are no elements of $K$ algebraic over $F$ that are not already in $F$. We say $F$ is algebraically closed if for any polynomial $p(x) \in F[x]$, $F$ contains all the roots of $p(x)$. This can be considered the most extreme form of the first definition. We are basically saying $F$ is algebraically closed in every extension of $F$, which is equivalent to saying it is algebraically closed in each extension that is a splitting field for some polynomial $p(x) \in F[x]$, i.e. every normal extension.
What's wrong with this Kuhn-Tucker optimization?
Since the fourth constraint, $z\geq0$, will not bind at the maximum, you are correct that $\lambda_4=0$ (by the complementary slackness condition). However, since the fifth constraint will bind, $\lambda_5\geq0$. You are left with $xy=\lambda_5$ rather than $xy=0$.
Find all the pairs $(n,n)$ that we can obtain with these operations
Let $f(x) = x + 1$ and $g(x) = x^2 + 1$. Suppose we have a sequence $$(n, n) \leftarrow (a_1, b_1) \leftarrow (a_2, b_2) \leftarrow \cdots \leftarrow (1, 3)$$ for some $n$, so at each step we have either $(a_i, b_i) = (f(a_{i+1}), g(b_{i+1}))$ or $(a_i, b_i) = (g(a_{i+1}), f(b_{i+1}))$ and all $a_i, b_i$ are positive integers (here $(a_0, b_0) = (n, n)$ and $(a_m, b_m) = (1, 3)$). The key is to show that all of the steps leading to $(n, n)$ must be of the same "type", extending the argument you gave. Case 1: $(n, n) = (f(a_1), g(b_1))$. We'll show by induction on $k$ that $(n, n) = (f^k(a_k), g^k(b_k))$ for each $k$. We are given that it holds in the case $k = 1$. Now suppose $(n, n) = (f^k(a_k), g^k(b_k))$ for some $k$. If we had $(a_k, b_k) = (g(a_{k+1}), f(b_{k+1}))$, then this would mean $f^k(g(a_{k+1})) = g^k(f(b_{k+1}))$, hence $a_{k+1}^2 + (k+1) = d^2 + 1$, where $d = g^{k-1}(b_{k+1} + 1)$. Note $g(x) > x^2$, so inductively $g^{k-1}(x) \geq x^{2^{k-1}}$, giving $d \geq (b_{k+1}+1)^{2^{k-1}} \geq 2^{2^{k-1}}$. But then this gives $$k = d^2 - a_{k+1}^2 = (d + a_{k+1})(d - a_{k+1}) \geq d + a_{k+1} > d \geq 2^{2^{k-1}}$$ (which is impossible, as you can prove e.g. by induction). Thus we must have $(a_k, b_k) = (f(a_{k+1}), g(b_{k+1}))$, so $(n, n) = (f^{k+1}(a_{k+1}), g^{k+1}(b_{k+1}))$ as desired. Case 2: $(n, n) = (g(a_1), f(b_1))$. An identical argument to that given in case 1 shows that $(n, n) = (g^k(a_k), f^k(b_k))$ for each $k$. Thus we've shown that either $(n, n) = (f^m(1), g^m(3))$ or $(n, n) = (g^m(1), f^m(3))$ for some $m \geq 0$. The first case is impossible: another simple inductive argument shows that if $1 \leq x < y$ then $f^m(x) < g^m(y)$ for each $m \geq 0$, hence we must have $f^m(1) < g^m(3)$ for each $m \geq 0$. For the second case, consider the sequence of $(g^m(1), f^m(3))$ for $m \geq 0$: we have $(1, 3), (2, 4), (5, 5), (26, 6), \dots$, and then by the same fact as before, since $g^3(1) > f^3(3)$, we have $g^m(1) > f^m(3)$ for all $m \geq 3$. We conclude that the only pair $(n, n)$ reached from $(1, 3)$ is $\boxed{(5, 5)}$.
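A brute-force confirmation of the conclusion, sketched in Python (the search depth is an arbitrary cutoff; the proof above is what rules out deeper solutions):

    f = lambda t: t + 1        # steps: (x, y) -> (x + 1, y^2 + 1) or (x^2 + 1, y + 1)
    g = lambda t: t * t + 1

    frontier = {(1, 3)}
    for step in range(1, 7):
        frontier = {p for (x, y) in frontier
                      for p in ((f(x), g(y)), (g(x), f(y)))}
        for (x, y) in frontier:
            if x == y:
                print(step, (x, y))   # prints only: 2 (5, 5)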
Prove the curve is a part of circle
If you separate the variables in the equation for $g=f'$ you obtain $$ \frac{dg}{(1+g^2)^{3/2}}=cdx $$ With the substitution $g=\tan\theta$ the integral on the left is just $\sin\theta+const.$ so $$ \sin\theta=cx+D $$ If we choose as our initial direction $\theta=0$ (the curve starts horizontally at $x=0$) you get $D=0$. Since $\sin\theta=\tan\theta/\sqrt{1+\tan^2\theta}=g/\sqrt{1+g^2}$ we have $$ \frac{g}{\sqrt{1+g^2}}=cx. $$ Solving for $g$: $$ g(x)=\frac{cx}{\sqrt{1-c^2x^2}}, $$ for $|x|\in[0,1/c)$. Integrating once again $$ f(x)=-\frac{\sqrt{1-c^2x^2}}{c}+E $$ Choosing the initial point of the curve at the origin, we find $E=\frac 1{c}$. Finally, $$ f(x)=\frac 1{c}-\frac{\sqrt{1-c^2x^2}}{c} $$ which is the lower half of the circle centered at $(0,1/c)$ with radius $1/c$, as expected (the radius is the reciprocal of the curvature).
Do I need to understand Multi-Variable Calculus to study Linear Algebra?
Multivariable calculus is helpful because it gives many applications of linear algebra, but it's certainly not necessary. In fact, you probably need linear algebra to really start to understand multivariable calculus. To wit, one of the central objects in multivariable calculus is the differential of a function. In single-variable calculus, you are taught that the differential of a function $f:\mathbb{R}\to\mathbb{R}$ is a new map $f':\mathbb{R}\to\mathbb{R}$ which provides the slope of the tangent line to $f$ at each point in $\mathbb{R}$. This is strictly correct, but it is not the best way to understand single-variable calculus if you want to generalize easily. The better way to see single-variable calculus is to recall that the tangent line to $f$ at $x$ is the best affine-linear approximation to $f$ at $x$, i.e., $f$ is approximated by $f(y)\approx f'(x)(y - x) + f(x).$ This generalizes quite well! If $f:\mathbb{R}^n\to\mathbb{R}^m$, the differential of $f$ at $x$, $df_x$, is the best linear approximation to $f$ at $x$: $f(y)\approx df_x(y-x) + f(x)$. Now, we think of $x$ and $y$ as vectors in $\mathbb{R}^n$, and the differential $df_x$ is an $m\times n$ matrix. Even more generally, we think of $df$ as a map from $\mathbb{R}^n$ into $\operatorname{Hom}(\mathbb{R}^n,\mathbb{R}^m)$ which gives the best linear approximation of $f$ at each point $x\in\mathbb{R}^n$. Generalizing further requires the notion, from differential geometry, of a smooth manifold. Such manifolds carry objects called tangent bundles, which assign to each point of the manifold an abstract vector space. You can see how linear algebra is a little more helpful for multivariable calculus than the other way around.
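A small symbolic illustration in Python/SymPy (the map $F$ is an arbitrary example): the Jacobian below is exactly the matrix of $df$ at a general point.

    import sympy as sp

    x, y = sp.symbols('x y')
    F = sp.Matrix([x**2 * y, sp.sin(x) + y])   # an example map f: R^2 -> R^2
    J = F.jacobian([x, y])                     # 2x2 matrix of df at (x, y)
    print(J)
    print(J.subs({x: 0, y: 1}))                # the linear approximation at (0, 1)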
Find the slope of the graph of $xy-4y^2=2$ at $(9,2)$
Another approach. The point $(9,2)$ belongs to $$4y^2-xy+2=0 \implies y=\frac{x+\sqrt{x^2-32}}{8},$$ taking the root of the quadratic in $y$ that passes through $(9,2)$. The slope is then given by $$\frac{dy}{dx}=\frac 18\left(1+ \frac{x}{\sqrt{x^2-32}}\right).$$ If $x=9$ then $$\frac{dy}{dx}=\frac 18\left(1+ \frac 97\right)=\frac 27.$$
compactness property
I’ve left some of the details to you, but here’s the main outline. Suppose that $f[\Bbb R^n]$ is not closed in $\Bbb R^m$. Then there is a point $p\in\operatorname{cl}f[\Bbb R^n]\setminus f[\Bbb R^n]$, and there is a sequence $\langle x_n:n\in\Bbb N\rangle$ in $f[\Bbb R^n]$ converging to $p$. Let $K=\{p\}\cup\{x_n:n\in\Bbb N\}$, and show first that $K$ is compact, so that $f^{-1}[K]$ is compact in $\Bbb R^n$. Now for each $n\in\Bbb N$ choose $y_n\in f^{-1}[K]$ such that $f(y_n)=x_n$, and consider the sequence $\langle y_n:n\in\Bbb N\rangle$. This is a sequence in the compact set $f^{-1}[K]$, so it has a convergent subsequence $\langle y_{n_k}:k\in\Bbb N\rangle$, say with limit $y$. What must $f(y)$ be? Why is this a contradiction?
Geometry problem about two externally touching circles
Let $PQ=QR=RS=2x$, $AM$ and $O_2N$ be perpendiculars to $PS$ and $O_2K$ be a perpendicular to $AM$. Thus, since $$KO_2=MN=x+2x+x=4x,$$ $$AK=\sqrt{12^2-x^2}-\sqrt{3^2-x^2}$$ and $$AO_2=12+3=15,$$ by the Pythagoras's theorem for $\Delta AO_2K$ we obtain: $$(4x)^2+\left(\sqrt{12^2-x^2}-\sqrt{3^2-x^2}\right)^2=15^2.$$ Can you end it now? I got $PQ=QR=RS=\frac{3\sqrt{13}}{2}.$
Statistics: parameter estimation
For every $x_1\lt1$, the likelihood $L(\theta;x_1)=f(x_1;\theta)$ is an increasing function of $\theta$ hence, assuming that $\theta$ is restricted to some interval $[0,\theta_*]$, one gets $\hat\theta_{\text{MLE}}(x_1)=\theta_*$. Likewise, for every $x_1\gt1$, $\hat\theta_{\text{MLE}}(x_1)=0$.
Proof verification- $f$ is defined on $[0,1]$ and $f(x) = 0$ for $x$ irrational and $f(x) = 1/q$ for $x = p/q$ in lowest terms, then $\int_0^1 f = 0$
I think it is simple. Let $\mathbb{X}$ be the set of rational numbers in $[0,1]$ and $\mathbb{Y}$ the set of irrational numbers in $[0,1]$, and let $\mu (A)$ denote the measure of a set $A$. We know $\mu (\mathbb{X})=0$ and $\mu (\mathbb{Y})=1$. Lebesgue integral: $\int_{0}^{1} f(x)\, dx= \int_{\mathbb{X}}f(x)\,d\mu + \int_{\mathbb{Y}}f(x)\,d\mu =0,$ because $\int_{\mathbb{X}}f(x)\,d\mu =0$ (as $f$ is bounded and $\mu(\mathbb{X})=0$) and $\int_{\mathbb{Y}}f(x)\,d\mu =0$ (as $f$ vanishes on $\mathbb{Y}$).
Plücker matrix - Rank 2 proof
It's not that a generic $4\times4$ skew-symmetric matrix would have rank 2, but that a $4\times4$ skew-symmetric matrix of the form $uv^T-vu^T$ has rank 2. To prove this, suppose $u,v$ are linearly independent. Can you show that $u,v$ are in the image of $uv^T-vu^T$? Then, what is the image of $uv^T-vu^T$?
Can you substitute equivalent powers?
No, you can't, as $2^3=8 \equiv 2 \not\equiv 1=2^0 \pmod 3$. However, by Fermat's little theorem, if $p$ is prime then $n^p \equiv n\pmod p$, so if $a \equiv b\pmod p$ and $x \equiv y\pmod{ \color{red}{p-1}}$, then $a^x \equiv b^y\pmod p$.
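A quick check of both statements in Python (the sample numbers are arbitrary):

    # Exponents cannot be reduced mod the modulus itself:
    print(pow(2, 3, 3), pow(2, 0, 3))   # 2 vs 1

    # But mod p they can be reduced mod p - 1 (here p = 7, so mod 6):
    p = 7
    a, b = 3, 10                        # a = b (mod 7)
    x, y = 5, 11                        # x = y (mod 6)
    print(pow(a, x, p), pow(b, y, p))   # 5 5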
Strong induction confusion
Assume you have a proof of: if $n\in\Bbb N$ and $n-2\in \Bbb N$ and $\Phi(n-2)$ is true, then $\Phi(n)$ is true. Then $\Phi(7)$ follows from $\Phi(5)$. If you did not show $\Phi(5)$ as a base case, $\Phi(5)$ can also be concluded from $\Phi(3)$. And if $\Phi(3)$ is still not among the base cases, you can conclude it anyway from $\Phi(1)$. But (hopefully) you already know that $\Phi(1)$ is true (base case). Note that in this specific scenario, you need to show $\Phi(1)$ and $\Phi(2)$ "manually". Everything else follows by induction, as just illustrated by the argument for $n=7$. More formally, the set $$X:= \{\,n\in\Bbb N\mid\Phi(n)\text{ is false}\,\}$$ is a subset of $\Bbb N$. Assume $X\ne \emptyset$. Then $X$ has a minimal element $n_0$. Now $n_0$ cannot be $>2$, as we would arrive at a contradiction with the induction step statement. Hence either $n_0=2$ or $n_0=1$; but by the direct proofs of $\Phi(1)$ and $\Phi(2)$, we know that $1,2\notin X$. From this contradiction, we infer that $X=\emptyset$.
Estimate how many iterations would be needed to determine the root to 16 decimal places?
In this case you get the exact formula $$ (1-ax_{n+1})=(1-ax_n)^2\implies (1-ax_n)=(1-ax_0)^{2^n}. $$ This shows that the answer depends on the initial point. If $|1-ax_0|<\frac12$ (which can be arranged by choosing $x_0$ as a suitable power of $2$), then with $n=6$ the error bound $2^{-2^6}=(2^{-10})^6\cdot 2^{-4}\sim 10^{-19}$ is small enough for your purposes.
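Here is the iteration in Python under those assumptions (this presumes the recurrence in question is the standard Newton iteration $x_{n+1}=x_n(2-ax_n)$ for $1/a$, which is what produces the exact formula above):

    a = 7.0
    x = 0.125                  # x0 = 2^{-3}, so |1 - a*x0| = 0.125 < 1/2
    for n in range(1, 7):
        x = x * (2.0 - a * x)  # 1 - a*x_{n+1} = (1 - a*x_n)^2
        print(n, x, abs(1.0 - a * x))
    print(1.0 / a)             # converged to full double precision by n = 6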
Endomorphisms $\mathbb Z^n \to \mathbb Z^n$ and finiteness of the cokernel
There are perhaps simpler proofs; this one uses tensor products and the structure of finitely generated abelian groups. Consider the exact sequence $$\def\imphi{\operatorname{Im}\phi} 0\to\ker\phi\to\mathbb{Z}^n\xrightarrow{\phi}\mathbb{Z}^n\to\mathbb{Z}^n/\imphi\to0 $$ and tensor it with $\mathbb{Q}$; since $\mathbb{Q}$ is flat, tensoring preserves exactness. So we get the exact sequence $$ 0\to\ker\phi\otimes\mathbb{Q}\to\mathbb{Q}^n\xrightarrow{\hat\phi}\mathbb{Q}^n \to(\mathbb{Z}^n/\imphi)\otimes\mathbb{Q}\to0 $$ where $\hat\phi$ is the endomorphism obtained by identifying $\mathbb{Z}^n\otimes\mathbb{Q}$ with $\mathbb{Q}^n$. We have $\det\hat\phi=\det\phi$ because the matrices of the endomorphisms are the same. Thus $\det\phi\ne0$ if and only if $(\mathbb{Z}^n/\imphi)\otimes\mathbb{Q}=0$, that is, if and only if $\mathbb{Z}^n/\imphi$ is torsion. A finitely generated abelian group is torsion if and only if it is finite.
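For a concrete feel, one can compute the cokernel via the Smith normal form in Python/SymPy (the matrix is an arbitrary example):

    import sympy as sp
    from sympy.matrices.normalforms import smith_normal_form

    A = sp.Matrix([[2, 4], [0, 3]])
    print(A.det())                              # 6 != 0, so the cokernel is finite
    print(smith_normal_form(A, domain=sp.ZZ))   # diag(1, 6): Z^2 / im(A) = Z/6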
Delta symbol meaning in mean curvature equation for level set
Hint: use $$ Du=(u_{x_1}, \cdots, u_{x_n}), |Du|=\sqrt{u_{x_1}^2 \cdots+u_{x_n}^2},\text{Div}(v_1,\cdots,v_n)=\sum_{i=1}^n\frac{\partial v_i}{\partial x_i}$$
Is $\mathbb R$ of second category in itself?
$\mathbb{R}$ is a complete metric space with respect to the usual metric, so by Baire's theorem it is of second category in itself. A set of second category is a set which is not of first category, i.e. it cannot be written as a countable union of nowhere dense sets. So your counterexample is valid. Also remember that Baire's theorem has this corollary: let $(X,d)$ be a complete metric space and $\{F_n\}$ a countable collection of closed sets such that $X=\bigcup_{n=1}^{\infty}F_n$. Then at least one of these closed sets $F_n$ has nonempty interior.
Is the operator $Tf = f(\sqrt{x})$ continuous?
First of all note that $T$ is linear: $T(\alpha f+ \beta g)=\alpha Tf+ \beta Tg$. 1) As regards $C[0,1]$, by letting $x=t^2$, $$\sup_{x\in[0,1]}|(Tf)(x)|=\sup_{x\in[0,1]}|f(\sqrt{x})|= \sup_{t\in[0,1]}|f(t)|.$$ 2) On the other hand, in $L^2[0,1]$, again by letting $x=t^2$, $$\begin{align} \int_0^1((Tf)(x))^2 dx=\int_0^1(f(\sqrt{x}))^2 dx= \int_0^1(f(t))^2 2t dt\leq 2\int_0^1(f(t))^2 dt. \end{align}$$ Can you take it from here?
Find coefficient in quartic given product of roots
Suppose the roots are $a,b,c$ and $d$. Then by Vieta's formulas, $$a+b+c+d=11\to(1)$$ $$ab+ac+ad+bc+bd+cd=k\to(2)$$ $$abc+abd+acd+bcd=-269\to(3)$$ $$abcd=-2001\to(4).$$ By your additional condition, suppose $$ab=-69.$$ Then by $(4)$, $$cd=29,$$ and by $(3)$, $$ab(c+d)+cd(a+b)=-69(c+d)+29(a+b)=-269,$$ so $$-69(c+d)+29(11-(c+d))=-269.$$ Hence $$c+d=6$$ and $$a+b=5.$$ Finally $$ab+ac+ad+bc+bd+cd=ab+(a+b)(c+d)+cd=k.$$ Hence $k=-10.$
Prove that it is always possible to subdivide a given trapezium into two similar trapeziums.
It is enough to construct $PQ$ in such a way that $\frac{AB}{PQ}=\frac{PQ}{CD}$, i.e. to construct the geometric mean of $AB$ and $CD$. Here is a possible approach, exploiting the power of a point with respect to a circle. Let $E\in CD$ be a point such that $AE\parallel BC$; let $\Gamma$ be the circumcircle of $ADE$ and $CF$ (with $F\in\Gamma$) a tangent to $\Gamma$; let $G\in CD$ be such that $CG=CF$; let $P\in AD$ be such that $PG\parallel BC$; let $Q\in BC$ be such that $PQ\parallel AB$. As an alternative way based on the same principle: let $X=AD\cap BC$; let $\Gamma$ be a circle with diameter $AD$; let $T\in\Gamma$ be such that $XT$ is tangent to $\Gamma$; let $P\in AD$ be such that $XP=CT$; let $Q\in BC$ be such that $PQ\parallel AB$.