Prove that $\ln$ and $\exp$ are inverses
Here's a sketch of the proof: 1) $f(x) = \exp x$ is differentiable everywhere with $f'(x) = f(x)$. 2) $f$ is invertible. 3) The inverse satisfies $(f^{-1})'(y) = 1/f'(f^{-1}(y)) = 1/y$. 4) Hence $f^{-1}(x) = \log x$.
Clarification to proof regarding orthonormal basis of $L^2(X \times Y)$
These denote the same, namely $(x, y) \mapsto f_n(x)\cdot g_m(y)$.
Prove $A \bigtriangleup B = B \bigtriangleup A$
Well if you're allowed to use the fact that the union and intersection operations are commutative, then we have: $$ A \bigtriangleup B = (A \cup B )\setminus (A \cap B )=(B \cup A )\setminus (B \cap A )= B \bigtriangleup A $$
Proof that $\sum\limits_{k = 0}^n {n\choose k} =2^n$ using Binomial Expansion Formula
Hint: it should be mentioned, on p. 87, that $$\forall x,y\ \ \sum_{k=0}^n\binom nk x^ky^{n-k} = (x+y)^n, $$
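To complete the hint, set $x = y = 1$. A quick numeric sanity check (a minimal sketch; the range of $n$ is arbitrary):

```python
from math import comb

# Setting x = y = 1 in the binomial theorem gives sum_k C(n, k) = 2^n.
for n in range(10):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```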
Proof by induction - is this correct?
Your proof is correct but too verbose. Why not just write down $$ 7^{2k+2}+2^{2k+3} = 49(7^{2k})+4(2^{2k+1})=45(7^{2k})+4(7^{2k}+2^{2k+1}) $$ and you are done.
Composing dice throw probabilities
As many interesting points have already been pointed out in the comments, and they in essence already contain the answer to the post, I decided to write them up here as an answer for future readers. Mainly extracted from lulu's comments:

Since the individual probabilities in each experiment were obtained by dividing the total number of occurrences of the value $4$ by the total number of throws of that experiment, the weighted average given for $p$ in the question boils down to simply counting the overall occurrences of $4$'s from all experiments and dividing by the total number of throws $n.$ Thus one is not really composing probabilities, rather just doing the right counting. Moreover, as we were only interested in observing the value $4,$ we essentially treat all other values as failures, which means we can use the binomial distribution as the model of choice. Otherwise the multinomial distribution can be used in a more general context.

Finally, as to alternative ways of estimating the probabilities, lulu nicely points out: there are other sensible ways of estimating the probabilities. You could, for example, just start by assuming that $p=1/6$ and then re-estimate using Bayes' Theorem as you get new data. In that way, throwing a $4$ first wouldn't change your mind much, but starting out with five $4$'s in a row certainly would. Doing it this way would make your "composing" problem a lot more delicate... as you'd need to keep track of the individual orders (and declare what you meant by the composite order).

Important points brought up by joriki, as the post admittedly lacks rigour: From the scheme given in the post for computing the probabilities $p_i,$ it follows logically that the probabilities should be multiples of $1/n_i,$ with $n_i$ being the number of throws in experiment $i.$ But the chosen numerical values in the post do not seem to fulfill this condition, as a result of a blunder in arbitrarily choosing the values. This point is particularly relevant, as joriki points out: The discussion was based on the premise that these were estimates obtained by dividing the number of 4s by the total number of throws in each case, and then your weighted calculation reconstructs the optimal estimate for the combined experiment. But if this is not how the estimates were obtained, we can't say much about how they should be combined.
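A minimal numeric sketch of the pooling point above (the experiment sizes and counts are made-up values): the weighted average of the per-experiment frequencies equals the pooled count divided by the total number of throws.

```python
# Hypothetical experiments: (number of throws, number of 4s observed).
experiments = [(60, 9), (120, 22), (300, 48)]

n = sum(throws for throws, _ in experiments)   # total throws
fours = sum(c for _, c in experiments)         # total 4s
pooled = fours / n

# Weighted average of the individual estimates p_i = c_i / n_i,
# weighted by the share n_i / n of throws in each experiment.
weighted = sum((throws / n) * (c / throws) for throws, c in experiments)

assert abs(pooled - weighted) < 1e-12
```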
Proving that $(3n)!$ is divisible by $n! \times (n + 1)! \times (n + 2)!$ if $n$ is greater than 2
It is an easy exercise to show that for all real numbers $x$ we have $$ \lfloor 3x\rfloor=\lfloor x\rfloor+\lfloor x+\frac13\rfloor+\lfloor x+\frac23\rfloor. $$ Thus for all $n$ and all prime powers $p^t\ge3$ we have $$ \begin{aligned} \lfloor \frac{3n}{p^t}\rfloor&=\lfloor \frac{n}{p^t}\rfloor+ \lfloor \frac{n}{p^t}+\frac13\rfloor+\lfloor \frac{n}{p^t}+\frac23\rfloor\\ &=\lfloor \frac{n}{p^t}\rfloor+ \lfloor \frac{n+\frac{p^t}3}{p^t}\rfloor+\lfloor \frac{n+\frac{2p^t}3}{p^t}\rfloor\\ &\ge \lfloor \frac{n}{p^t}\rfloor+ \lfloor \frac{n+1}{p^t}\rfloor+\lfloor \frac{n+2}{p^t}\rfloor. \end{aligned} $$ This leaves us to deal with the case $p^t=2$. But because $n>2$, we see that $3n$ exceeds one power of two higher than any of $n,n+1,n+2$. If $n>4$ (thanks, Petite Etincelle!) we have $3n>2(n+2)$, and in the cases $n=3,4$ we have $3n>8>n+2$. This gives us a necessary extra term compensating for the deficiency at $p^t=2$. More precisely, if $n=2k+1$ is an odd integer, then $\lfloor \dfrac{3n}2\rfloor=3k+1$ in comparison to $$ \lfloor\frac n2\rfloor+\lfloor\frac{n+1}2\rfloor+\lfloor\frac{n+2}2\rfloor=k+(k+1)+(k+1)=3k+2. $$ On the other hand, if $n=2k$ is even, then $\lfloor \dfrac{3n}2\rfloor=3k$ and $$ \lfloor\frac n2\rfloor+\lfloor\frac{n+1}2\rfloor+\lfloor\frac{n+2}2\rfloor=k+k+(k+1)=3k+1. $$ In either case we are missing a single factor two, so having that single extra term suffices. Summing the above inequalities for $p^t\ge3$ and coupling the terms corresponding to $p^t=2$ and $p^t=2^\ell$, where $\ell$ is the largest integer such that $2^\ell\le 3n$ shows that for all primes $p$ we have $$ \sum_{t>0}\lfloor\frac{3n}{p^t}\rfloor\ge \sum_{t>0}\lfloor\frac{n}{p^t}\rfloor+\sum_{t>0}\lfloor\frac{n+1}{p^t}\rfloor+\sum_{t>0}\lfloor\frac{n+2}{p^t}\rfloor. $$ The claim follows from this.
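As a sanity check of the divisibility claim, here is a small brute-force sketch (the range of $n$ is arbitrary):

```python
from math import factorial

# Check that n! * (n+1)! * (n+2)! divides (3n)! for small n > 2.
for n in range(3, 40):
    d = factorial(n) * factorial(n + 1) * factorial(n + 2)
    assert factorial(3 * n) % d == 0
```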
Finding root of function, possible Lambert function?
Setting $y = x+2$ we have $$ c_1-\frac{4}{c_2}\frac{y}{2}e^{-\frac{(y-2)}{2}}=c_1-\frac{4}{c_2}\frac{y}{2}e^{-\frac{y}{2}}e = 0 $$ and then $$ c_1 + \frac{4e}{c_2}\left(-\frac y2\right)e^{-\frac y2}=0\Rightarrow -\frac y2 = W\left(-\frac{c_1c_2}{4e}\right)\Rightarrow x = -2\left(1+W\left(-\frac{c_1c_2}{4e}\right)\right) $$ Note the pair $\left(-\frac y2\right)e^{-\frac y2}$, which is exactly the form $we^{w}$ needed to apply $W$.
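A numeric spot-check of the closed form (hypothetical constants $c_1, c_2$; the argument of $W$ must lie in its real domain $[-1/e,\infty)$):

```python
import numpy as np
from scipy.special import lambertw

c1, c2 = 0.1, 1.0   # hypothetical constants

x = -2 * (1 + lambertw(-c1 * c2 / (4 * np.e)).real)

# Original equation, written in x (recall y = x + 2):
f = c1 - (4 / c2) * ((x + 2) / 2) * np.exp(-x / 2)
assert abs(f) < 1e-12
```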
When is $c v^\top y y^\top v \ge ||v||^2$
The way the product is written, $v^\top y^\top y v$ must be parsed as $v^\top (y^\top y) v$ since the products $v^\top y^\top$ and $yv$ don't make sense. Since $y^\top y = \|y\|^2$ this leads to $c \|v\|^2 \|y\|^2 \ge \|v\|^2$. The condition is $\|v\|= 0$ or $c \|y\|^2 \ge 1$.
Existence of homomorphism $\phi:\mathbb{Z}[[X]]\to\mathbb{Q}$ such that $\phi(X)=3/4$?
The central binomial coefficients have generating function $p(X)=\sum_{n=0}^{\infty} \binom{2n}n X^n$ which satisfies $p(X)^2(1-4X)=1$ in $\mathbb Z[[X]].$ So a homomorphism $\phi$ sending $X\mapsto \frac{3}{4}$ would have to have $\alpha=\phi(p(X))$ satisfying $$\alpha^2 \left(1-4\cdot\frac{3}{4}\right)=1,$$ or $\alpha^2=\frac{-1}{2}.$ There is no such $\alpha,$ rational or real.
Does a map between topologies determine a map between sets?
I'm going to answer my own question, and I'm madly delighted to say the answer is yes, there always is such an $f$. Note that for all $x\in X$ the set $\displaystyle N(x)=\bigcup_{O\in \mathcal{B}, x\notin \phi(O)}O$ has the form $Y\setminus\{y\}$. Indeed: Suppose that $N(x)$ is all of $Y$. Then we would have $X=\phi(Y)=\phi(N(x))=\bigcup_{O\in \mathcal{B}, x\notin \phi(O)}\phi(O)$ and $x\notin X$, which is absurd. Suppose there were distinct $y_{1},y_{2}$ not in $N(x)$. Then there are two disjoint sets $O_{1},O_{2}$ containing $y_{1}$ and $y_{2}$ respectively. The sets $\phi(O_{1})$ and $\phi(O_{2})$ are disjoint so they cannot both contain $x$. Without loss of generality $x\notin \phi(O_{1})$, so that $O_{1}\subset N(x)$ and $y_{1}\in N(x)$, which is again absurd. Define $f$ by letting $f(x)$ be the only element of $Y\setminus N(x)$. We have \begin{align*} &x\in f^{-1}(U) \\ \iff &U \not\subset N(x) \\ \iff &x\in\phi(U) \end{align*} hence $f$ has the desired property. I don't know what interpretations there might be of this in terms of pointless topology or topos theory (I suggest this because it seems to me the proof has a propositional logical flavour).
What is the galois group of $x+3$ or $(x+1)(x+2)$ ? How about $A(x)B(x)$?
Yes, the Galois group of $x+3$ is the trivial group because the splitting field of that polynomial is $\mathbb{Q}$ itself. The idea that the Galois group of a product of polynomials is the direct product of the groups is false, however (assuming that is what you mean). Consider $A(x)=B(x)=x^2+3$. Then $A(x)B(x)=(x^2+3)^2$ and this polynomial also splits completely over $\mathbb{Q}(\sqrt{-3})$. So the Galois group of $A(x)B(x)$ is still $C_2 \not \cong C_2 \times C_2$. For the composition consider $A(x)=x-3$, $B(x)=x^2+3$. Then $A(B(x))=x^2$ has trivial Galois group, but $B$ has Galois group $C_2$.
When is $V=U\oplus U^{\perp}$?
Construction. Here's a recipe to construct "bad" incomplete spaces: Start with a Hilbert space $\mathcal{H}$ with $\dim\mathcal{H}=\infty$. Choose a normalized vector $e_0$. Extend it to an ONB $\mathcal{E}\owns e_0$. Fix the independent vector $b_0:=e_0+\sum_{k=1}^\infty\frac{1}{k}e_k$. Extend this to a Hamel basis $\mathcal{B}\supseteq\mathcal{E}$ with $\mathcal{B}\owns e_0,b_0$. Rip it off to get an orthonormal system $\mathcal{S}:=\mathcal{E}\setminus\{e_0\}$. Rip it off to get a linearly independent system $\mathcal{L}:=\mathcal{B}\setminus\{e_0\}$. Span your incomplete space $X:=\langle\mathcal{L}\rangle$. Then the orthonormal system is maximal, $\mathcal{S}^\perp=(0)$, but not an ONB: $\overline{\langle\mathcal{S}\rangle}\neq X$. Example. For your query you can then split $\mathcal{S}=\mathcal{S}_1\sqcup\mathcal{S}_2$ to get subspaces $U_1:=\overline{\langle\mathcal{S}_1\rangle}$ and $U_2:=\overline{\langle\mathcal{S}_2\rangle}$. These are orthogonal complements of each other but don't reduce the space: $X\neq U_1\oplus U_2$. Moreover you see that any combination of dimension and codimension can turn out to be bad in an incomplete space.
Improper integral involving exponential and monotone function.
The assumption is not true. Counterexample: let $f(x) = \begin{cases} 1,& \text{if } x < 0 \\ e^{-\lambda x}, & \text{if } x \geq 0 \end{cases}$ Then $f$ is bounded and monotone and $$ \lim_{x\rightarrow \infty} f(x) e^{\lambda x} = 1,$$ but the integral is clearly infinite.
Background subtraction
Try adding to all the values the magnitude of the largest negative noise value plus $1$: a simple shift. If there is some problem with the result, then rescale to the maximum.
I can't derive the integrating factor of this first order ODE from the Dover textbook
Solve $(x^2-y^2-y)\,dx+(y^2-x^2+x)\,dy=0$ Let $z=x-y$. Then $x+y=2x-z$ so $x^2-y^2=2xz-z^2.$ Substituting gives \begin{equation} (2xz-z^2-x+z)\,dx+(z^2-2xz+x)(dx-dz)=0 \end{equation} which simplifies to \begin{equation} z\,dx+(2xz-z^2-x)\,dz=0 \end{equation} So \begin{eqnarray} \dfrac{\partial M}{\partial{z}}&=1\\ \dfrac{\partial N}{\partial{x}}&=2z-1 \end{eqnarray} Therefore \begin{equation} \dfrac{1}{M}\cdot\left(\dfrac{\partial N}{\partial{x}}-\dfrac{\partial M}{\partial{z}}\right)=2-\dfrac{2}{z} \end{equation} So $\mu=z^{-2}e^{2z}$ is an integrating factor and \begin{equation} z^{-1}e^{2z}\,dx+(2xz^{-1}-1-xz^{-2})e^{2z}\,dz=0 \end{equation} is exact since \begin{equation} \dfrac{\partial}{\partial z}\left(z^{-1}e^{2z}\right)=(-z^{-2}+2z^{-1})e^{2z}=\dfrac{\partial}{\partial x}\left(2xz^{-1}-1-xz^{-2}\right)e^{2z} \end{equation} Integrating the respective terms with respect to $x$ and $z$ yields \begin{equation} \phi(x,z)=xz^{-1}e^{2z}+C(z) \end{equation} and \begin{equation} \phi(x,z)=xz^{-1}e^{2z}-\frac{1}{2}e^{2z}+C(x) \end{equation} So the general solution in terms of $x$ and $z$ is \begin{equation} \left(\dfrac{x}{z}-\frac{1}{2}\right)e^{2z}=C \end{equation} Therefore the solution in terms of $x$ and $y$ is \begin{equation} \left(\dfrac{x}{x-y}-\frac{1}{2}\right)e^{2(x-y)}=C \end{equation} which can be simplified to \begin{equation} \left(\dfrac{x+y}{x-y}\right)e^{2(x-y)}=c \end{equation}
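As a check, here is a minimal SymPy sketch verifying that the final expression is a first integral: along solutions of $M\,dx+N\,dy=0$ one needs $F_x N - F_y M = 0$.

```python
import sympy as sp

x, y = sp.symbols('x y')

M = x**2 - y**2 - y          # coefficient of dx
N = y**2 - x**2 + x          # coefficient of dy
F = (x + y) / (x - y) * sp.exp(2 * (x - y))   # claimed first integral

# F is constant along solutions iff F_x * N - F_y * M vanishes identically.
assert sp.simplify(sp.diff(F, x) * N - sp.diff(F, y) * M) == 0
```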
Is this Diophantine equation?
Note that $$\log_a b=\frac{1}{\log_b a}$$ so the question is equivalent to $$(\log_a b)^2=1$$ so $a=b$ or $a=b^{-1}$.
is there a nicer way to $\int e^{2x} \sin x\, dx$?
We look for an antiderivative of the form $$ \int e^{2x}\sin x \,\mathrm d x = (A\cos x + B\sin x)e^{2x}. $$ Differentiating both sides with respect to $x$: $$ e^{2x}\sin x = 2e^{2x}(A\cos x +B \sin x) + (B\cos x -A\sin x)e^{2x} $$ Collecting terms: $$ e^{2x}\sin x =(2B-A)\sin x\,e^{2x} +(2A+B)\cos x\,e^{2x} $$ Comparing coefficients, we have $$ 2A+B = 0\\ 2B-A = 1 $$ Solving gives $$ A =-\frac{1}{5} \\ B=\frac{2}{5} $$
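A SymPy verification of the result (differentiating the candidate antiderivative must return the integrand):

```python
import sympy as sp

x = sp.symbols('x')
A, B = sp.Rational(-1, 5), sp.Rational(2, 5)
candidate = (A * sp.cos(x) + B * sp.sin(x)) * sp.exp(2 * x)

# d/dx of the candidate should equal e^(2x) sin x.
assert sp.simplify(sp.diff(candidate, x) - sp.exp(2 * x) * sp.sin(x)) == 0
```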
The polynomial $f (t) = t^7 + 10t^2 − 5$ has no roots in the field $\mathbb Q(\sqrt[3] 5, \sqrt[5] 5)$.
Hint 1. Find the degree of field extension $\mathbb Q\subset\mathbb Q(\sqrt[3] 5, \sqrt[5] 5)$. Hint 2. Prove that $f (t) = t^7 + 10t^2 − 5$ is irreducible over $\mathbb Q$.
$f(x,y)=(x^2-y^2,2xy)$ What is $f(A)=B$?
Hint: Consider transforming to polar coordinates ($x=r\cos\theta, y=r\sin\theta$)
Binomial Distribution of Random Variable
To get you started: The mean number of defective blades in a pack of $10$ is $1$, meaning the probability that a given blade in a pack is defective is $0.1$. What is the probability that in $10$ blades, at least $4$ of them are defective? Hint: Think Binomial Distribution and Complement Rule. Then use this answer, which will be the probability that a given pack will have at least $4$ defective blades, and multiply by $1000$ to find the expected number of packs that have at least $4$ defective blades.
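A numeric sketch of that computation, assuming the reading above ($p = 0.1$, packs of $n = 10$):

```python
from math import comb

p = 0.1   # probability a given blade is defective (mean 1 per pack of 10)
n = 10

# P(at least 4 defective) = 1 - P(0) - P(1) - P(2) - P(3)
p_at_least_4 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4))

print(p_at_least_4)          # ~0.0128
print(1000 * p_at_least_4)   # expected number of such packs in 1000, ~12.8
```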
Use an appropriate Half-Angle Formula to find the exact value of the expression $\cos \left(\frac{9\pi}{ 8}\right)$
From Gerry Myerson's comment, we have \begin{align*} \frac{18\pi}{8} &= \frac{9\pi}{4} \\ &=2\pi+\frac{\pi}{4} \\ \cos \frac{18\pi}{8} &= \cos \left(2\pi+\frac{\pi}{4}\right) \\ &= \cos \frac{\pi}{4} \\ &= \frac{\sqrt{2}}{2} \end{align*} We then have the half-angle formula $$ \cos \frac{1}{2}\theta = (-1)^{\lfloor (\theta + \pi) / (2\pi) \rfloor} \sqrt{\frac{1 + \cos \theta}{2}} $$ Setting $\theta = \frac{18\pi}{8}$, we have \begin{align*} \cos \left(\frac{1}{2}\cdot\frac{18\pi}{8}\right) &= (-1)^{\lfloor ((18\pi/8) + \pi) / (2\pi) \rfloor} \sqrt{\frac{1 + \frac{\sqrt{2}}{2}}{2}} \\ \cos \frac{9\pi}{8} &= (-1)^{\lfloor(13\pi/4)/(2\pi)\rfloor}\sqrt{\frac{2 + \sqrt{2}}{4}} \\ &= (-1)^{\lfloor 13/8\rfloor} \frac{\sqrt{2 + \sqrt{2}}}{2} \\ &= (-1)^1 \frac{\sqrt{2 + \sqrt{2}}}{2} \\ \cos \frac{9\pi}{8} &= - \frac{\sqrt{2 + \sqrt{2}}}{2} \end{align*}
Calculating $\cos^{-1}{\frac{3}{\sqrt10}} + \cos^{-1}{\frac{2}{\sqrt5}}$
Let $\alpha=\cos^{-1}\frac{3}{\sqrt{10}}$ and $\beta=\cos^{-1}\frac{2}{\sqrt 5}$, so $0\le\alpha,\beta\le\frac{\pi}{2}$. Use the trig identity $\sin^2\theta+\cos^2\theta=1$: $$\sin\alpha=\sqrt{1-\cos^2\alpha}=\sqrt{1-\left(\frac{3}{\sqrt{10}}\right)^2}=\frac{1}{\sqrt{10}}\quad \left(\text{since } 0\le\alpha\le \frac{\pi}{2}\right)$$ $$\sin\beta=\sqrt{1-\cos^2\beta}=\sqrt{1-\left(\frac{2}{\sqrt{5}}\right)^2}=\frac{1}{\sqrt{5}}\quad \left(\text{since } 0\le\beta\le \frac{\pi}{2}\right)$$ Now, use the trig identity $$\cos(\alpha+\beta)=\cos\alpha\cos\beta-\sin\alpha\sin\beta$$ $$\cos(\alpha+\beta)=\frac{3}{\sqrt {10}}\frac{2}{\sqrt 5}-\frac{1}{\sqrt {10}}\frac{1}{\sqrt 5} =\frac{1}{\sqrt2}$$ $$\implies \alpha+\beta=\cos^{-1}\frac{1}{\sqrt 2}=\frac{\pi}{4} \quad \quad\left(\text{since } 0\le\alpha+\beta\le\pi\right)$$ $$\therefore \cos^{-1}{\frac{3}{\sqrt {10}}} + \cos^{-1}{\frac{2}{\sqrt 5}}= \color{blue}{\frac{\pi}{4}}$$ Or alternatively use the trig identity $$\sin(\alpha+\beta)=\sin\alpha\cos\beta+\cos\alpha\sin\beta$$ $$\sin(\alpha+\beta)=\frac{1}{\sqrt {10}}\frac{2}{\sqrt 5}+\frac{3}{\sqrt {10}}\frac{1}{\sqrt 5} =\frac{1}{\sqrt2}$$ $$\implies \alpha+\beta=\sin^{-1}\frac{1}{\sqrt 2}=\frac{\pi}{4} \quad \quad\left(\text{since } \cos(\alpha+\beta)>0 \text{ places } \alpha+\beta \text{ in } \left[0,\tfrac{\pi}{2}\right]\right)$$ $$\therefore \cos^{-1}{\frac{3}{\sqrt {10}}} + \cos^{-1}{\frac{2}{\sqrt 5}}=\color{red}{\frac{\pi}{4}}$$
Determine if the following function is differentiable at 0
You need to compute $$ \lim_{x\to0}\frac{f(x)-f(0)}{x-0}=\lim_{x\to0}\frac{f(x)}{x} $$ But you can write, for $x\ne0$, $$ \frac{f(x)}{x}=\begin{cases} x & \text{if $x\in\mathbb{Q}$}\\[4px] x^2 & \text{if $x\notin\mathbb{Q}$} \end{cases} $$ For $0<|x|<1$, you have $\left|\dfrac{f(x)}{x}\right|\le |x|$. Then…
Questions for two exercises on visualising quotient spaces
1) a) Indeed a cylinder, which loops at every $[(0.9999, y)]$ to $[(0., y)] = [(1., y)]$. This divides $R^2$ into long vertical horizontally looping strips. By identifying them (i.e., saying they are all the same and keeping just one), you indeed get something homeomorphic to a cylinder (infinite in one direction). You could also say your equivalence is generated by $(x + 1, y) \equiv (x, y)$. 1) b) Here you divide your space both vertically and horizontally, which gives you a tiling of squares that loop on each side. Indeed they are all homeomorphic to tori (look up the concept of "fundamental polygon"; the one for the torus is exactly your case). 2) For a quotient of a space by a function, you have to think of it as "two elements $a$ and $b$ are equivalent/congruent (i.e., $a \equiv b$) iff $f(a) = f(b)$". So taking $f(x, y, z) = (x, y, -|z|)$, you can see: $u \equiv u'$ iff $f(x, y, z) - f(x', y', z') = (0, 0, 0)$ iff $x = x'$ and $y = y'$ and $|z| = |z'|$. So you're basically identifying the points of the sphere through a mirror symmetry with the points on your mirror/$xy$-plane as your invariants (i.e., the equator). This gives you something like a disc, but with particular edges. Within this disc, particles would reflect at the edges, like pool balls would bounce at a circular pool table. I don't know if it's homeomorphic to some better known topological structure. It's not $RP^2$ because $RP^2$ seen as a disc would warp the particles to the diametrically opposed edge of the disc. 2) b) I suppose you mean "$S^2$ and the identity". Basically, the outer boundary points of your ball are "linked" ("identified", made to be identical) as they are above in 2) a), and there is no new linking happening in the points interior to the ball (the quotient of any topological space by the identity function is the space itself). The trick here is to realize that paths starting on the "upper hemisphere" can now attain points on the lower hemisphere not only through the equator, but also by going through the interior of the earth. This is a bit hard to think about. My gut tells me it's something like $RP^3$, but I'm not at all convinced and wouldn't be sure how to prove that. Tell me if you find a better answer to this one. As a final note, a good way of understanding quotients is basically "injecting a desired property" (an equality defined by an equivalence relation) into a space. Though you have to verify that the equivalence relation gives rise to a "well-defined quotient": you have to ensure some category-theoretical commutativity relations. For example, the working quotients of vector spaces have to maintain coherence of restriction to equivalence classes, as well as linearity between an initial space $V$ and its quotient $W$: $(\forall x, y \in V, x \equiv y \iff [x] = [y] $ and $[x], [y]$ are well defined as elements of a vector space$)$ $\iff$ $(\forall \lambda \in K, x, y \in V, f_W([x + \lambda y]) = [f_V(x)] + \lambda [f_V(y)])$ where $f_W : W \to A$ is the obvious map induced by $f_V: V \to A$ on the quotient space $W$ (composed of identified elements of $V$), for any output vector space $A$.
How do I find the terms of an expansion using combinatorial reasoning?
Expanding $(x+y)(x+y)(x+y)$ amounts to adding up all the ways you can pick three factors to multiply together. For example, you could pick an $x$ from the first $(x+y)$, a $y$ from the second $(x+y)$, and another $x$ from the third $(x+y)$ to get $xyx=x^2 y$. You are right, the only possible products we can get are $x^3$, $x^2 y$, $xy^2$, and $y^3$. However, we do need to count how many ways to get each factor. For example there is only one way to get $x^3$ (pick $x$ from each $(x+y)$), but there are three ways to get $x^2 y$: $xxy$, $xyx$, and $yxx$. One way to count this is to realize that there are $3$ ways to choose which $(x+y)$ contributes a $y$ [and the rest will be $x$s]. Similar reasoning for $xy^2$ and $y^3$ shows that the expansion is $x^3 + 3x^2 y + 3 xy^2 + y^3$. In general, if you have $(x+y)^n$, the number of ways to obtain a product of the form $x^k y^{n-k}$ is the number of ways to choose $k$ of the $(x+y)$ factors from which to select an $x$. There are $\binom{n}{k}$ ways to make this choice. This proves the binomial theorem $(x+y)^n = \sum_{k=0}^n \binom{n}{k} x^k y^{n-k}$.
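A tiny enumeration sketch of this argument for $n=3$, picking one letter from each factor and tallying the products:

```python
from itertools import product
from collections import Counter

# One pick per factor of (x+y)^3; sorting normalizes xyx, xxy, yxx to one key.
counts = Counter(''.join(sorted(pick)) for pick in product('xy', repeat=3))
print(counts)   # coefficients: xxx:1, xxy:3, xyy:3, yyy:1
```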
Properties of Equilateral Triangles in Circles
Yes. If you're familiar with construction using compass and straight edge, one of the easiest ways to construct an equilateral triangle is to draw two circles where each circle's centre lies on the other circle's edge. Drawing a line between the two intersection points and then from each intersection point to the point on one circle farthest from the other creates an equilateral triangle. You can see from this construction that the side of the equilateral triangle between intersection points is equidistant from each centre, proving that the side is halfway between the circle's centre and its edge.
Tricky probability puzzle
$P(C)$ is the probability that the father would make that claim (whether it is true or false). Either it was snowing and he is telling the truth ($\frac{1}{8} \cdot \frac{1}{6}$) or it was not snowing and he is lying ($\frac{7}{8} \cdot \frac{5}{6}$). The probability that the first instance happened is therefore $$\frac{\frac{1}{8} \cdot \frac{1}{6}}{\frac{1}{8} \cdot \frac{1}{6} + \frac{7}{8} \cdot \frac{5}{6}} = \frac{1/48}{36/48} = \frac{1}{36}.$$
Norm of a functional in finite-dimensional space
If $f \ne 0$, consider $v = \sum_{i=1}^n a_ix_i$ where $$a_i = \begin{cases} \frac{\overline{f(x_i)}}{|f(x_i)|}, &\text{ if $f(x_i) \ne 0$}\\ 0, &\text{otherwise}\end{cases}$$ We have $\|v\| = \max_{1 \le i \le n}|a_i| = 1$ and $$f(v) = f\left(\sum_{i=1}^n a_ix_i\right) = \sum_{i=1}^n a_if(x_i) = \sum_{i=1}^n |f(x_i)|$$ Therefore $$\|f\| \ge \frac{|f(v)|}{\|v\|} = \sum_{i=1}^n |f(x_i)|$$ We conclude $\|f\| = \sum_{i=1}^n |f(x_i)|$.
Is the origin itself plus a line that doesn't go through the origin considered a linear subspace?
Lines that don't pass through the origin are not closed under addition. Consider the line given by $y=x+1$. It contains the points $(1,2)$ and $(2,3)$, but their sum is $(3,5)$, which is not on the given line. There are similar problems with closure under scalar multiplication. It's not just about containing the origin; linearity means much more than that.
If $F/L$ is normal and $L/K$ is purely inseparable, then $F/K$ is normal
Hint. Take a set of polynomials $f_i\in L[x]$ such that $F$ is generated by the roots of these polynomials. Since $L/K$ is purely inseparable, how can you make the $f_i$ into polynomials in $K[x]$ without affecting their roots? You might find the fact that if $\text{char}(K)=p>0$, then $x\mapsto x^p$ is an endomorphism of $K$ (called the Frobenius endomorphism) useful.
will maximum of convex set function intersect with Boundary?
This result is not so simple to prove. I will write you a sketch of the proof, and only in the case when the domain $D$ is compact; this ensures the existence of a maximum of $f$. Denote by $\partial D$ the boundary of $D$. Let $f: D \to \Bbb R$ be a convex function, where $D$ is a convex compact set ($D$ is a subset of some $\Bbb R^n$). Then we need two lemmas: Lemma 1: let $f:[a,b] \to \Bbb R$ be a convex function. Then the maximum of $f$ is attained at $x=a$ or at $x=b$. Proof: by contradiction, suppose that the maximum of $f$ is attained in the interior, and that it is strictly larger than $f(a)$ and $f(b)$. Try to show that this contradicts convexity. Lemma 2: let $x \in D$. Then there exists $y \in \partial D$ such that $f(x) \le f(y)$. Proof: if $x \in \partial D$, then pick $y=x$. Otherwise, if $x \notin \partial D$, consider any line passing through $x$. This will intersect $\partial D$ at two points $a,b$. In other words, the interior point $x$ belongs to the segment $[a;b]$ and $a,b$ belong to the boundary of $D$. Then $f$ is convex on $[a;b]$ and you can apply Lemma 1. Then you can conclude that the maximum of $f$ is achieved at some point on the boundary. Indeed, if the maximum is attained at some $x \in D$, then you can find $y \in \partial D$ with $f(y) \ge f(x)$, so that $y \in \partial D$ is a maximum point of $f$.
Prove that $ \lim\limits_{n \to \infty } \sum\limits_{k=1}^n f \left( \frac{k}{n^2} \right) = \frac 12 f'_d(0). $
You can use the following. If $a_n < b_n$ for all $n$ and $\lim a_n$, $\lim b_n$ exist, then $$\lim a_n \leq \lim b_n$$ This also holds for $\liminf$ and $\limsup$. In your case, since $\frac{n(n+1)}{2n^2} \to \frac12$, the inequality you wrote down implies (by applying $\lim_{n \to \infty}$ to all three terms) that $$\frac{f'_d(0)-\varepsilon}{2} \leq \liminf_{n \to \infty} \sum_{k=1}^n f\left( \frac{k}{n^2} \right) \leq \limsup_{n \to \infty} \sum_{k=1}^n f\left( \frac{k}{n^2} \right)\leq \frac{f'_d(0)+\varepsilon}{2}$$ for all $\varepsilon > 0$ (the second inequality is true because $\liminf a_n \leq \limsup a_n$ for all sequences $(a_n)$). Hence, it follows (by taking $\varepsilon \to 0$) that $$\frac{f'_d(0)}{2} \leq \lim_{n \to \infty} \sum_{k=1}^n f\left( \frac{k}{n^2} \right) \leq \frac{f'_d(0)}{2}$$ i.e. $$\lim_{n \to \infty} \sum_{k=1}^n f\left( \frac{k}{n^2} \right) = \frac{f'_d(0)}{2}.$$ So either the original statement is false or you forgot a factor of $2$ somewhere in the derivation of the inequality.
How to find explicit form of recurrence relation with four variables for combinatorical value
Suppose $q$ different values from the $d$ values appear in the selection. These create $$\mathfrak{S}_{=q}(\mathfrak{P}_{1\le\cdot\le 3}(\mathcal{Z}))$$ different possible configurations with generating function $$\left(z+\frac{z^2}{2}+\frac{z^3}{6}\right)^q.$$ Here the first set from the sequence describes where the smallest value from the $q$ values appears, the next set where the second smallest value appears and so on. Sum these to obtain $$\sum_{q=1}^d {d\choose q} \left(z+\frac{z^2}{2}+\frac{z^3}{6}\right)^q = -1+\left(1+z+\frac{z^2}{2}+\frac{z^3}{6}\right)^d.$$ The answer is then given by $$l! [z^l] \left(1+z+\frac{z^2}{2}+\frac{z^3}{6}\right)^d.$$ Remark. We could have started from the labeled species $$\mathfrak{S}_{=d}(\mathfrak{P}_{0\le\cdot\le 3}(\mathcal{Z}))$$ to get the same result. Observe that for the maximum possible value $l=3d$ we get $$(3d)! [z^{3d}] \left(1+z+\frac{z^2}{2}+\frac{z^3}{6}\right)^d = \frac{(3d)!}{(3!)^d} = {3d\choose 3,3,3,\ldots, 3}$$ which is the correct value.
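A brute-force cross-check of the coefficient formula for small hypothetical sizes ($d = 3$ values, selections of length $l = 4$, each value used at most $3$ times):

```python
from itertools import product
from math import factorial

import sympy as sp

z = sp.symbols('z')
d, l = 3, 4   # small hypothetical sizes

# l! [z^l] (1 + z + z^2/2 + z^3/6)^d
gf = (1 + z + z**2 / 2 + z**3 / 6) ** d
by_gf = factorial(l) * sp.expand(gf).coeff(z, l)

# Brute force: sequences of length l over d values, each value used at most 3 times.
by_force = sum(1 for s in product(range(d), repeat=l)
               if all(s.count(v) <= 3 for v in range(d)))

assert by_gf == by_force   # both give 78 for d = 3, l = 4
```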
Basic questions in Algebra
1) Yes, rings have to be closed under multiplication. 2) That is a proof, but you can go simpler. Assume $xy = 0$. If $x = 0$, we're done. Otherwise $x \neq 0$ and so $x^{-1}$ exists. Then $y = x^{-1}xy = (x^{-1})(0) = 0$. 3) The First Isomorphism Theorem essentially states that given a homomorphism $\psi \colon G \to H$, then $G/\operatorname{ker}\psi \cong \operatorname{im}\psi \leq H$. Basically, we use the kernel because it contains information about how $G$ is embedded into $H$. If you want the result with any other normal subgroup $N$, you'll have to find a homomorphism that has $N$ as the kernel. And there always is one, but it is trivial: simply take $H = G/N$ and define $\psi(g) = gN$.
Discrete Math - Counting
Let $r_i$ be the remainder when the $i$-th integer in our list is divided by $99$. There are only $99$ conceivable remainders, the numbers $0$ to $98$. Since there are $100$ integers in our list, and only $99$ conceivable remainders, there must be two different numbers in our list which have the same remainder. This is a consequence of the Pigeonhole Principle, but the fact is clear even without a name for it. Finally, if two numbers have the same remainder on division by $99$, their difference is divisible by $99$.
How to find the center of a roto-scaling?
Take two corresponding points in the transformation, e.g. $D$ and $H$. Under the rescaling $D$ goes to $D'$, which is then rotated by $\alpha=90°$ to $H$. If $O$ is the center of both transformations, then $\angle HOD'=\angle HOD=\alpha$, hence $O$ lies on the locus of points subtending an angle $\alpha$ with chord $DH$: in general this locus is formed by two circle arcs (a full circle of diameter $DH$ in this particular case). Take then three couples of corresponding points ($DH$, $BJ$ and $EG$ in the diagram): the three loci described above will intersect at center $O$. EDIT. In practice, the construction can be simplified because the circle passing through the intersection point of a line with its roto-translated image and a couple of corresponding points on those lines, also contains the transformation center. See diagram below for an example.
Principal, Gauss and Average Curvature
No. The trace and the determinant don't depend on your matrix representation. If the matrix is diagonal, you can read off the principal curvatures $k_1$ and $k_2$ as the entries, but the sum of the eigenvalues is always the trace of the matrix and the product of the eigenvalues is always the determinant of the matrix.
Number of Unordered Binary Tree and Its Recurrence Relation
The gist of the relation is that you're constructing trees of size $n$ by splitting $n$ nodes into two non-empty groups. Each group is made into a tree and their roots are connected by a new edge. Because the structures of the two trees are independent, the number of such tree pairs for some size $i$ is obtained by multiplying $b_i$, $b_{n-i}$, and the number of ways of splitting the nodes into a group of size $i$ and a group of size $n - i$. The summation, then, is over $i$, or the size of one of the sub-trees. One tree is made of $i$ nodes, and the other of $n - i$ nodes. All non-empty sub-tree sizes that sum to $n$ are possible, so we need to add them all up. You are correct in that we count each tree twice since the order of the children doesn't matter. A tree made of two children of sizes $k$ and $n - k$ is the same as a tree made of two children of sizes $n - k$ and $k$.
Number of square-full numbers less than $x$ is $\ll \sqrt{x}$
You are correct that the square-full numbers are of the form $a^2b^3$. The number of such numbers less than $x$ is less than $\sqrt x + \sqrt {\frac x8} + \sqrt {\frac x{27}} + \dots +\sqrt {\frac x{n^3}}$ where we should have a floor function on each term. We have overcounted a bit because terms of the form $a^9$ are counted in the $(a^3)^2a^3$ form and again in the $(a^3)^3$ form, for example. Each term comes from a value of $b$ and we stop when $n^3 \gt x$. This means the number less than $x$ is less than $\sqrt x \sum_{i=1}^{\infty}i^{-3/2}=\sqrt x \,\zeta\left(\frac 32\right) \approx 2.612\sqrt x $
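A brute-force sketch of the bound, generating all values $a^2b^3 \le x$ (the set removes the double-counting noted above):

```python
def count_powerful(x):
    # Every square-full number can be written as a^2 * b^3;
    # a set removes duplicates such as a^9 = (a^3)^2 * a^3 = (a^3)^3.
    vals = set()
    b = 1
    while b ** 3 <= x:
        a = 1
        while a * a * b ** 3 <= x:
            vals.add(a * a * b ** 3)
            a += 1
        b += 1
    return len(vals)

# zeta(3/2) ~ 2.612, so the count should stay below 2.612 * sqrt(x).
for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    assert count_powerful(x) <= 2.612 * x ** 0.5
```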
Is every primitive element of a finite field of characteristic $2$, a generator of the multiplicative group?
Not necessarily. For instance $f(x)=x^4+x^3+x^2+x+1$ is irreducible over $\Bbb F_2$, so a solution $\alpha$ of $f(x)=0$ generates $\Bbb F_{16}$. But $\alpha$ has multiplicative order $5$ and does not generate $\Bbb F_{16}^\times$.
Compactness proof of $\mathbb{R}^2$
$X$ bounded implies that for all $x\in X$, $|x|\leq M$ for some $M\in\mathbb{R}^+$. This means for all $x\in X$ we have $x\in [-M,M]$. Similarly for all $y\in Y$ we have $y\in [-N,N]$. Therefore for all $(x,y)\in X \times Y$ we have $(x,y)\in [-M,M]\times [-N,N]$. Therefore $X\times Y$ is bounded. To show closedness take a convergent sequence $(x_n,y_n)$. This sequence converges in the Euclidean metric iff $x_n$ and $y_n$ converge. However, since $X$ and $Y$ are compact, the limit points will be contained in $X$ and $Y$. This means, if $x_n\rightarrow x$ and $y_n\rightarrow y$, we have $x\in X$ and $y\in Y$, implying $(x,y)\in X\times Y$; hence the limit of $(x_n,y_n)$, an arbitrary convergent sequence in $X\times Y$, is contained in the Cartesian product.
Find minimal polynomial given a certain relation
Here is an illustration of a technique I sometimes use, which involves naming the expression we are working with and eliminating $a$. You will note that I have avoided using polynomial fractions by multiplying through by what would have been the denominator - this incidentally avoids accidentally dividing by zero. I think you will find it pretty much as efficient as the other suggestions. For the first we let $x=a^2+a+1$ and then $$0=a^3-a-1=ax-a^2-2a-1=ax-x-a$$ so $$a(x-1)=x$$ Multiply $a^3-a-1=0$ by $(x-1)^3$ to obtain $$\left(a(x-1)\right)^3-a(x-1)^3-(x-1)^3=x^3-x(x-1)^2-(x-1)^3=0$$ This gives you a monic cubic polynomial in $x$, and the minimal polynomial for $x$ will be a factor of this. For the second, we let $y=\frac 1{a^2+1}$ so that $y(a^2+1)=1$ and so $a^2y=1-y$ to use later. This expression involves only $a^2$ so we want to eliminate odd powers of $a$. This is done by using $a^3-a=1$ (gathering the odd powers on one side and the even powers on the other) and squaring it to get $a^6-2a^4+a^2=1$. Then we multiply through by $y^3$ to obtain $$a^6y^3-2a^4y^3+a^2y^3=y^3= (1-y)^3-2y(1-y)^2+y^2(1-y)$$ and that gives a cubic in $y$. This time the cubic is not monic, but if you are allowed rational coefficients you can divide through by the leading coefficient to obtain a monic polynomial.
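For comparison, a minimal SymPy sketch; `CRootOf` and `minimal_polynomial` do the algebraic-number bookkeeping directly:

```python
import sympy as sp

x, t = sp.symbols('x t')

# a is the real root of t^3 - t - 1
a = sp.CRootOf(t**3 - t - 1, 0)

# Should be the monic factor of x^3 - x(x-1)^2 - (x-1)^3 from above,
# i.e. x**3 - 5*x**2 + 4*x - 1.
print(sp.minimal_polynomial(a**2 + a + 1, x))

print(sp.minimal_polynomial(1 / (a**2 + 1), x))
```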
Sherman–Morrison formula
Yes, it is. In fact, you already have all the components to prove it. Both the 'if' and the 'only if' parts are provable using the Woodbury matrix identity. We have $$ (A + UV)^{-1} = A^{-1} - A^{-1}U(I + VA^{-1}U)^{-1}VA^{-1} $$ and $$ (I + VA^{-1}U)^{-1} = I - V(A + UV)^{-1}U $$ Both of the above identities are easily derived from the Woodbury matrix identity by appropriate substitution.
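A quick numeric spot-check of both displayed identities (random matrices; the sizes $n=5$, $k=2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)   # shifted to be well-conditioned
U = rng.normal(size=(n, k))
V = rng.normal(size=(k, n))

Ainv = np.linalg.inv(A)

# (A + UV)^(-1) = A^(-1) - A^(-1) U (I + V A^(-1) U)^(-1) V A^(-1)
lhs1 = np.linalg.inv(A + U @ V)
rhs1 = Ainv - Ainv @ U @ np.linalg.inv(np.eye(k) + V @ Ainv @ U) @ V @ Ainv
assert np.allclose(lhs1, rhs1)

# (I + V A^(-1) U)^(-1) = I - V (A + UV)^(-1) U
lhs2 = np.linalg.inv(np.eye(k) + V @ Ainv @ U)
rhs2 = np.eye(k) - V @ np.linalg.inv(A + U @ V) @ U
assert np.allclose(lhs2, rhs2)
```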
Projection onto the convex hull of a sequence
Let $p_n = (\cos x_n, \sin x_n)$. The orange polygon is the convex hull of $S$. The point $C$ is constructed in the following way: it is the intersection of the line $x=2$ with the line through $p_n$ perpendicular to the segment $p_{n-1}p_n$. So by construction $p_n$ is the projection of $C$ on the segment $p_{n-1}p_n$. Calculating the coordinates of $C$, we find that $C = \left(2, \sin x_n + (2-\cos x_n)\tan\left(\frac{x_n + x_{n-1}}{2}\right)\right)$.
Vandermonde-like sum
Except for trivial cases (if you sum just one term in your second expression, say), your sums do not admit a simple closed formula like the Vandermonde identity. More precisely, you can find a recurrence relation for your expression and use Petkovšek's algorithm to prove that it does not admit a hypergeometric solution.
How to find solutions to nonlinear ODE $\dot{z}=i|z|^2z$
The complex conjugate of your equation (which we denote by (1)) is $$ \overline z'(t)=-i|z|^2\overline z(t),\qquad (2). $$ Let's add $\overline z\cdot(1)$ to $z\cdot (2)$: $$ \frac{d}{dt}\left(z(t)\overline z(t)\right)=0. $$ This implies $|z(t)|=|z(0)|=:r$. Then (1) reduces to $$ z'(t)=ir^2 z(t) $$ This means $z(t)=e^{ir^2t}z(0)$, so the general solution is $$ z(t)=z(0)\exp(i|z(0)|^2 t)=:r e^{i(r^2t+\phi)} $$ if $z(0)=r e^{i\phi}$.
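A numeric cross-check of the closed form (a sketch; the initial value and evaluation time are arbitrary), integrating the equivalent real system $x' = -(x^2+y^2)\,y$, $y' = (x^2+y^2)\,x$:

```python
import numpy as np
from scipy.integrate import solve_ivp

z0 = 1.0 + 0.5j
r2 = abs(z0) ** 2

def rhs(t, u):
    # Real 2D form of z' = i|z|^2 z with z = x + iy.
    x, y = u
    s = x * x + y * y
    return [-s * y, s * x]

sol = solve_ivp(rhs, (0, 10), [z0.real, z0.imag],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = 7.3
z_num = complex(*sol.sol(t))
z_exact = z0 * np.exp(1j * r2 * t)   # z(t) = z(0) exp(i |z(0)|^2 t)
assert abs(z_num - z_exact) < 1e-6
```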
Number of paths that begin at vertex, traverse $3$ edges of cube and end furthest
Hint: Draw a cube, making sure to include the "invisible" edges. Now take advantage of symmetry. The first step is to any one of $3$ vertices. Pick one of these vertices, say $W$, and count the number of paths of length $2$ that get you where you want to go. Then multiply your count by $3$.
Show that in a set with at least 3 elements there exist two permutations that do not commute.
If you swap the 1st and 2nd elements, then you cannot also swap the 2nd and 3rd at the same time (with the same permutation). Maybe you want 1 to 2 and 2 to 3, or similar. Try a cycle of length 3, like $f = (a,b,c)$, together with $g = (a,b)$. Then you can calculate $$g \circ f \circ g^{-1} = (b,a,c) = (a,c,b) \ne f,$$ so $g \circ f \ne f \circ g$.
About the definition of $\Phi^*:\Omega^k(B)\longrightarrow \Omega^k(A)$?
Consider the linear algebra picture. If $V$ is a vector space, an element of $\Lambda^k(V^{*})$ acts on $\Lambda^k(V)$ by the formula $$ (\varphi^1 \wedge \dots \wedge \varphi^k)(v_1 \wedge \dots \wedge v_k) = \det(\varphi^i(v_j))_{i,j=1}^k $$ (extended in a multilinear way to a pairing $\Lambda^k(V^{*}) \times \Lambda^k(V) \rightarrow \mathbb{R}$). This action allows you to identify elements of $\Lambda^k(V^{*})$ with multilinear alternating maps $$\underbrace{V \times \dots \times V}_{k\text{ times}} \rightarrow \mathbb{R}. $$ Now, the vector $\Phi^{*}(\varepsilon)_p$ should be an element of $ \Lambda^k(A_p^{*})$ and so it should act on $\Lambda^k(A_p)$.
The derivative of $\rho e^{it}$
You're confusing three different things with each other: $$ \frac{df}{dz}, \qquad \frac{d}{dz}, \qquad\frac{d}{dt} $$ If you had written $$ \frac{d}{dt} \rho e^{it} = \rho ie^{it} $$ then it would be correct, but what you have written is at best a misunderstanding of notation. Applying the product rule, one gets $$ \begin{align} \frac{d}{dt} \rho e^{it} & = \rho\frac{d}{dt} e^{it} + e^{it}\frac{d\rho}{dt} \\[12pt] & = \rho\frac{d}{dt}e^{it} + e^{it}\cdot 0 \\[12pt] & = \rho\frac{d}{dt}e^{it}. \end{align} $$ Next, use the chain rule: $$ \rho\frac{d}{dt} e^{it} = \rho e^{it} \frac{d}{dt}(it). $$
Method for determing least period length of sequence that is the sum of two modulo sequences.
You're right: $f$ is periodic of period $L=\mbox{lcm}(N,M)$. To prove this: First establish that $L$ is a period for $f$: prove that $f(n+L)=f(n)$ for all $n$. Next establish that no number less than $L$ is a period for $f$: find all $n$ for which $f(n)=f(0)=0$.
If $\limsup_{n\to\infty} \ x_{n} = a$ then why does it exist a subsquence $s_{n}$, which $\lim_{n\to\infty} \ s_{n} = a$?
You want to prove $\limsup\limits_{n\to\infty}x_n=a\;$ implies $\;a$ is a subsequential limit of $\{x_n\}$ See if you can get a contradiction from $\limsup\limits_{n\to\infty}x_n=a\;$ and $\;a$ is not a subsequential limit of $\{x_n\}$ In particular, see if you can prove the following: If $a$ is not a subsequential limit of $\{x_n\},$ then there exists an open interval centered at $a$ that contains no terms of the sequence (or if you want, all you need to show is that there is an open interval containing $a$ that contains no terms of the sequence). From this result you should be able to obtain a contradiction to "$\limsup\limits_{n\to\infty}x_n=a$", regardless of what definition you use for $\limsup\limits_{n\to\infty}x_n.$ Regarding the "see if you can prove" part, note that if no such open interval exists, then you can pick terms from the sequence belonging to open intervals with center $a$ having arbitrarily small lengths, and these chosen terms will form a subsequence converging to $a.$
An orthonormal set cannot be a basis in an infinite dimension vector space?
Take any infinite dimensional inner product space $V$ and any orthonormal sequence $(w_n : n \in \mathbb{N})$. Let $W$ be the subspace generated by this sequence. Then $W$ is certainly an infinite dimensional vector space (because it has an infinite independent subset). Also $W$ has an orthonormal basis, because the inner product on $W$ is inherited from $V$ and thus $(w_n)$ is still an orthonormal sequence in $W$. This means that the theorem you have suggested, "an orthonormal set in an infinite dimension vector space is not a vector space basis", is not true. What I believe might be true is that no infinite dimensional complete inner product space has an orthonormal vector space (Hamel) basis. This is the question that Andrey Rekalo addressed in another answer.
Integration by substitution: What formula can I refer to?
There is none. For example, let $f(x)=e^x$, and $g(x)=x^2$. So, if your formula exists, there would be a solution to: $$\int e^{x^2} \mathop{}\!\mathrm{d}x$$ But, as it turns out, there isn't. (At least, there is no elementary solution. "Elementary" means that it can be written in terms of $+$, $-$, $\times$, $\div$, exponentiation, trig functions and their inverses, etcetera.)
Easier way to discover the area of a right triangle
Square both sides of the relationship $(x-y)=5$; then apply Pythagoras, giving $\tag{1}x^2+y^2-2xy=25 \Leftrightarrow a^2-2xy=25$ Besides, the area $S$ of the triangle can be computed in two ways: $\tag{2}S=\frac{xy}{2}=\frac{12a}{2}=6a$ Plugging the value of $a$ taken from (2) in (1), one gets a quadratic equation in variable $S$ which yields the looked for value for $S$. This equation is $$\left(\frac{S}{6}\right)^2-4S=25 \ \ \Leftrightarrow \ \ S^2-144S-900=0$$ whose roots are $S=150$ (the unique answer) and $S=-6$, this one having no geometrical meaning.
Riddle on implication
I have a different take on this. IMHO mathematical logic isn't that far off from regular speech in this instance. In what way can $A \implies B$ be false? It is false precisely when reality provides a counter-example, i.e. $A$ is true but $B$ is false. Therefore "$A \implies B$ is false" does indeed imply $A$ is true. However, what's happening here: The prosecutor can say $A \implies B$. If the prosecutor lied then $A$ is true (the defendant is guilty). But the prosecutor has not proven (beyond a reasonable doubt) that he is lying! Therefore the jury should not convict just based on the prosecutor's statement -- after all it might be a true claim :) in which case it does not establish guilt or innocence at all.
Show that the series converges point wise and converges uniformly.
Hints: For the first part, apply the root or ratio test. Let's use root for no particular reason: $$\bigg|\frac{x^n}{n2^n}\bigg|^{\frac{1}{n}}=\bigg|\frac{x}{2}\bigg|\frac{1}{n^{\frac{1}{n}}} \to \bigg|\frac{x}{2}\bigg|$$ For which $x$ is this limit less than 1? Consider the boundary cases separately. For the second part, apply the Weierstrass M-test. Specifically, if $-2<a<b<2$, then define $t=\max \{ |a| , |b| \}<2$. Then for any $x \in [a,b]$ we have that $|x|\le t$ so that $$\bigg|\frac{x^n}{n2^n}\bigg| \le \bigg(\frac{t}{2}\bigg)^n$$ Since $t<2$, the series $\sum_n (\frac{t}{2})^n$ converges absolutely, so the M-test tells you that the sequence of partial sums from the original sequence of functions converges uniformly on the given interval $[a,b]$
Rings with noncommutative addition
Vectors on a sphere? Their addition is essentially multiplication of rotations, which is non-commutative.
Integrate $ydx-xdy$ on ellipse
It looks like all you need to do is parametrize $x=2 \cos{t}$, $y=2 \sin{t}$. The integral is independent of $z$, so $t \in [0,2 \pi]$ over $C$. Thus the integral is $-8 \pi$ (given $C$ is positively oriented).
If closure of subalgebra $A$ which separates points vanishes nowhere, then $A$ contains constants
In $C[0,1]$ the subspace spanned by $e^{cx},c>0$ is an algebra which meets these requirements but it does not contain constants.
What should be the characteristic polynomial for $A^{-1}$ and adj$A$ if the characteristic polynomial of $A$ be given?
For the adjugate matrix: $$\operatorname{adj}(A)=\det(A)A^{-1}=:\lambda A^{-1}$$ and then $$\psi_{\operatorname{adj}(A)}(x)=\det(xI_n-\lambda A^{-1})=\lambda^n\det\left(\frac x{\lambda}I_n-A^{-1}\right)=\lambda^n\psi_{A^{-1}}\left(\frac x{\lambda}\right)$$ and for the inverse matrix your work is fine; there is just one mistake, and you should write: $$\psi_{A^{-1}}(x)=(-1)^n|A|^{-1}x^n\psi_A\left(\frac1x\right)$$
Matrix notation for element-wise raising to the power of $n$
Wikipedia's Hadamard Analogous operations gives the following notation for raising each element of $A$ to the power of $n$: $$\huge{A^{\circ n}}$$ This is called the "Hadamard power" for which Google has 2,960 results, or perhaps "Hadamard exponentiation" (19 google results).
Derive a method for approximating $f'''(x_0)$ whose error term is of order $h^2$ by expanding the function $f$ in a fourth Taylor polynomial
You are missing something in establishing your system. In extracting the rational constants, the powers of $h$ are still bound to the derivatives, thus the system has to be \begin{align*} \begin{bmatrix} 1 & -2 & 2 & -\frac{4}{3} \\ 1 & -1 & \frac{1}{2} & -\frac{1}{6} \\ 1 & 1 & \frac{1}{2} & \frac{1}{6} \\ 1 & 2 & 2 & \frac{4}{3} \\ \end{bmatrix} \begin{bmatrix} f(x_0) \\ f'(x_0)h \\ f''(x_0)h^2 \\ f'''(x_0)h^3 \\ \end{bmatrix} &\approx \begin{bmatrix} f(x_0 - 2h) \\ f(x_0 - h) \\ f(x_0 + h) \\ f(x_0 + 2h) \\ \end{bmatrix} \\ \end{align*} with the solution formula \begin{align*} \begin{bmatrix} -\frac{1}{6} & \frac{2}{3} & \frac{2}{3} & -\frac{1}{6} \\ \frac{1}{12} & -\frac{2}{3} & \frac{2}{3} & -\frac{1}{12} \\ \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3} & \frac{1}{3} \\ -\frac{1}{2} & 1 & -1 & \frac{1}{2} \\ \end{bmatrix} \begin{bmatrix} f(x_0 - 2h) \\ f(x_0 - h) \\ f(x_0 + h) \\ f(x_0 + 2h) \\ \end{bmatrix} &\approx \begin{bmatrix} f(x_0) \\ f'(x_0)h \\ f''(x_0)h^2 \\ f'''(x_0)h^3 \\ \end{bmatrix} \\ \end{align*} You should also recognize that because of the symmetry, the coefficients of the 4th derivative cancel so that the error term for $f'''(x_0)h^3$ is $O(h^5)$. As you have to divide by $h^3$, the error term for $f'''(x_0)$ is $O(h^2)$. You can greatly reduce the computations by recognizing the symmetry of the situation and start with the Taylor expansions of $$ f(x_0+kh)-f(x_0-kh)=2f'(x_0)kh+\frac13f'''(x_0)k^3h^3+\frac1{60}f^{(5)}(x_0)k^5h^5+O(h^7) $$ so that you get for the coefficients $$2A_2+A_1=0\\8A_2+A_1=3$$
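A numeric sketch of the resulting stencil (the test function $\sin$ and step sizes are arbitrary): halving $h$ should reduce the error by about a factor of $4$, confirming the $O(h^2)$ error term.

```python
import numpy as np

def d3(f, x0, h):
    # Row (-1/2, 1, -1, 1/2) of the inverse matrix above, divided by h^3.
    return (-f(x0 - 2*h) + 2*f(x0 - h) - 2*f(x0 + h) + f(x0 + 2*h)) / (2 * h**3)

x0 = 1.0
exact = -np.cos(x0)          # third derivative of sin
e1 = abs(d3(np.sin, x0, 1e-2) - exact)
e2 = abs(d3(np.sin, x0, 5e-3) - exact)
print(e1 / e2)               # ~4, consistent with an O(h^2) error
```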
Proving by induction
I would do it by strong induction on the height of the tree. A tree of zero height has one node; you should be able to show that it prints that node and stops. Then assume all trees of height $\le n$ are printed correctly. Given a tree of height $n+1$, when you call the other preorders they do not include the root, so it will not be repeated, and they will be printed correctly.
Problem in understanding natural isomorphism
Fix a non-degenerate symmetric bilinear form $\langle \, , \, \rangle$ in $\mathbb{R}^n$, for instance the standard scalar product. Then define $$\phi \colon \mathbb{R}^n \to \mathbb{R}^{n*}, \quad x \mapsto \langle x, \, \cdot \, \rangle$$
Why is the derivative not $\lim x\to x_0$ instead of $\lim h\to 0$
By definition the derivative of $f(x)$ at a point $x=x_0$ is given by (when the limit exists) $$f'(x_0)=\lim_{x\to x_0} \frac{f(x)-f(x_0)}{x-x_0}$$ now if we define $x-x_0=h\to 0$ the limit becomes $$f'(x_0)=\lim_{h\to 0} \frac{f(x_0+h)-f(x_0)}{h}$$ the two definitions are completely equivalent.
A disgruntled secretary problem - why my solution is incorrect
There are $n!$ ways to distribute $n$ different letters to $n$ different envelopes. An envelope receives the proper letter: There are $\binom{n}{1}$ ways to select an envelope that receives the proper letter and $(n - 1)!$ ways to distribute the remaining letters to the remaining envelopes. Thus, there are $$\binom{n}{1}(n - 1)!$$ such distributions. However, we have counted too much if $n > 1$. Observe that $\binom{n}{1}(n - 1)! = n!$. However, not all the distributions result in at least one envelope receiving the proper letter if $n > 1$. In particular, we have counted those distributions in which two envelopes receive the proper letter twice, once for each way we could have designated one of those envelopes as being the envelope that receives the proper letter. Thus, we need to subtract these from the total. Two envelopes receive the proper letter: There are $\binom{n}{2}$ ways to select the envelopes that receive the proper letter and $(n - 2)!$ ways to distribute the remaining letters to the remaining envelopes. Hence, there are $$\binom{n}{2}(n - 2)!$$ such distributions. Thus far, we have $$\binom{n}{1}(n - 1)! - \binom{n}{2}(n - 2)!$$ distributions. However, if $n > 2$, we have not counted distributions in which three envelopes receive the proper letter at all. The reason for that is we added such cases three times, once for each way we could designate one of those envelopes as the envelope that receives the proper letter, and then subtracted them three times, once for each of the $\binom{3}{2}$ ways we could designate two of those envelopes as the ones that receive the proper letter. Therefore, we need to add those cases to the total, which gives $$\binom{n}{1}(n - 1)! - \binom{n}{2}(n - 2)! + \binom{n}{3}(n - 3)!$$ distributions. However, if $n > 3$, we have added too much. By the Inclusion-Exclusion Principle, the number of distributions in which at least one envelope receives the proper letter is $$\sum_{k = 1}^{n} (-1)^{k - 1}\binom{n}{k}(n - k)!$$ Hence, the probability that at least one of the randomly distributed letters will be placed in the proper envelope is $$\frac{1}{n!}\sum_{k = 1}^{n} (-1)^{k - 1}\binom{n}{k}(n - k)! = \sum_{k = 1}^{n} (-1)^{k - 1}\frac{1}{k!}$$
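A brute-force cross-check of the final formula for small $n$ (a sketch; enumerating all $n!$ distributions):

```python
from itertools import permutations
from math import factorial

def p_at_least_one_match(n):
    # Inclusion-exclusion result: sum_{k=1}^{n} (-1)^(k-1) / k!
    return sum((-1) ** (k - 1) / factorial(k) for k in range(1, n + 1))

for n in range(1, 8):
    hits = sum(any(p[i] == i for i in range(n)) for p in permutations(range(n)))
    assert abs(hits / factorial(n) - p_at_least_one_match(n)) < 1e-12
```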
The distribution(or just expectation) of $\min\{U_{1}, U_{2}\}\times \min\{U_{1}, U_{3}\}$?
I can start you off, use the Law of Total Expectation and this link: Expectation of Minimum of $n$ i.i.d. uniform random variables. (for the case of two iid uniforms) \begin{align} E[\min\{U_1,U_2\}\min\{U_1,U_3\} ]=& \frac{1}{2}\Big(E[\min\{U_1,U_2\}\min\{U_1,U_3\} ~|~ U_1<U_2] \\ &+E[\min\{U_1,U_2\}\min\{U_1,U_3\} ~|~ U_2<U_1]\Big) \\ =& \frac{1}{2}\Big(E[U_1\min\{U_1,U_3\}] \\ &+E[U_2\min\{U_1,U_3\}~|~U_2<U_1]\Big). \end{align} Now let's deal with $E[U_2\min\{U_1,U_3\}~|~U_2<U_1]$. \begin{align} E[U_2\min\{U_1,U_3\}~|~U_2<U_1]=& E[U_2\min\{U_1,U_3\}~|~U_2<U_1,U_3<U_1] \\ &+E[U_2\min\{U_1,U_3\}~|~U_2<U_1,U_1<U_3] \\ =& E[U_2U_3]+E[U_2U_1~|~U_2<U_1] \end{align} Now solve $E[U_2U_1~|~U_2<U_1]$ using a double integral.
Finding $\pi_q(Z)$ where Z is the orbit space of an action $\Bbb Z_2 \times S^m \times S^n \to S^m \times S^n$
Well for $\pi_1$ you have an "absolute" expression, so you can consider that to be "in terms of $\pi_1(S^n), \pi_1(S^m)$" (since it's in terms of nothing, just as $f: x\mapsto 2$ expresses $2$ as a function of $x$) As for $\pi_n,n\geq 2$, you probably have a result that relates $\pi_n(X)$ to $\pi_n(Y)$ when $Y\to X$ is a covering map, don't you? If you don't, a hint is that $S^n,n\geq 2$ is simply-connected, so the lifting theorem allows you to lift maps $S^n\to X$ to maps $S^n\to Y$.
Is the $\Bbb{R}^2\setminus\{P_1,...,P_n\}$ plane homeomorphic to $\Bbb{R}^2\setminus\{Q_1,...,Q_n\}$?
As John Samples comments, it suffices to find a homeomorphism $h : \mathbb R^2 \to \mathbb R^2$ such that $h(P_i) = Q_i$. This is even a stronger result than $h(K) = K'$, which would also show that $\Bbb{R}^2\setminus K \cong \Bbb{R}^2\setminus K'$. Note that it suffices to consider the special points $Q_i = (i,0) \in X = \mathbb R \times \{0\} $. We can do it by induction. As the base case we can take $n = 0$ (nothing to prove). But also for $n =1$ it is obvious: take a translation shifting $P_1$ to $Q_1$. Now assume that we have $h : \mathbb R^2 \to \mathbb R^2$ such that $h(P_i) = Q_i$ for $i \le n$. In other words, we may assume w.l.o.g. that $P_i = Q_i$ for $i \le n$. For $(a, b) \in \mathbb R^2$ and $r > 0$ let us define $$\psi_{a,b,r} : \mathbb R^2 \to \mathbb R^2, \psi_{a, b,r}(x,y) = \begin{cases} (x,y) & \lvert x - a \rvert \ge r \\ (x,y + \frac{(\lvert x - a \rvert - r)b }{r}) & \lvert x - a \rvert \le r \end{cases}$$ This is a well-defined continuous map which shifts $(a,b)$ to $(a,0)$ and $(a,0)$ to $(a,-b)$, and moreover keeps all points $(x,0) \in X$ with $\lvert x - a \rvert \ge r$ fixed. It is a homeomorphism whose inverse is $\psi_{a,-b,r}$ (since $\psi_{a,b,r} \circ \psi_{a,-b,r} = id$ and $\psi_{a,-b,r} \circ \psi_{a,b,r} = id$). Case 1: $P_{n+1} \notin X$. Write $P_{n+1} = (a, b)$ with $b \ne 0$. There is a unique linear map $\phi : \mathbb R^2 \to \mathbb R^2$ such that $\phi(1,0) = (1,0)$ and $\phi(a,b) = (n+1,b)$, since $(1,0)$ and $(a,b)$ form a basis of $\mathbb R^2$. Clearly $\phi$ is a linear isomorphism, hence a homeomorphism. It keeps $X$ fixed. Then $h_{n+1} = \psi_{n+1,b,1} \circ \phi$ keeps the $P_i$ with $i \le n$ fixed and maps $P_{n+1}$ to $Q_{n+1}$. Case 2: $P_{n+1} \in X$. Write $P_{n+1} = (a, 0)$. We have $P_{n+1} \notin \{P_1,\dots,P_n\}$, hence there exists $r > 0$ such that for $\lvert x - a \rvert \le r$ we have $(x,0) \notin \{P_1,\dots,P_n\}$. Hence $\psi_{a,1,r}$ keeps the $P_i$ with $i \le n$ fixed and shifts $P_{n+1}$ to $(a,-1) \notin X$. Now proceed as in Case 1. By the way, this proof shows that $h$ can always be chosen to be orientation preserving.
prove or disprove $(n+2^k)^{2^k}\equiv 0(\mod 2^{k+1})$
The following paragraph is quoted from the link you provided: Now, if $n^n+47\equiv0\pmod{2^l}$, but $n^n+47\not\equiv0\pmod{2^{l+1}}$ we have $$(n+2^l)^{n+2^l}\equiv(n+2^l)^n \pmod{2^{l+1}}$$ I think what is meant here is that $$(n+2^k)^{2^k}\equiv 1 \pmod {2^{k+1}}$$ when $n$ is odd. (Knowing only $(n+2^l)^{2^l}\equiv 0$ would give you $(n+2^l)^{n+2^l} \equiv 0$, rather than $(n+2^l)^{n+2^l} \equiv (n+2^l)^n$.) Since $(n+2^k)^{2^k}\equiv n^{2^k}\pmod{2^{k+1}}$ (expanding by the binomial theorem, every term after $n^{2^k}$ is divisible by $2^{2k}$), this is equivalent to proving $n^{2^k}\equiv 1$. Use Euler's theorem: since $(n,2^{k+1})=1$, we know that $n^{\varphi(2^{k+1})}=n^{2^k}\equiv 1\pmod {2^{k+1}}$
Find the derivative of an integral with an integral as a boundary
Just use Leibniz' rule of differentiating under the integral sign to get: $$f'(x) =\frac1{1+\sin^2 g(x) } g'(x) $$ with $$g'(x) = \frac1{1+\sin^2 x^2}(2x)$$
Trying to Find How Many Combinations or Permutations of a group of items
There are 8 ways to choose one item from the first design group, and for each of those ways, there are 8 ways to choose one item from the second design group, and for each of those ways, there are ... and finally 2 ways to choose an item from the last group. Yes, use multiplication as you've specified it.
Choice of Particular Solution for an inhomogeneous ODE
Any correct particular solution is as good as any other. If I substitute yours into the equation I get $$y_p''-t^2y_p=\left(-Ct^2\cos(tx)+Dt^2\cosh(tx)\right)-t^2\left(C\cos(tx)+D\cosh(tx)\right)=-2Ct^2\cos(tx),$$ which doesn't solve the equation. You can certainly find particular solutions for each term on the right. When I feed the first to Alpha I get $y_p=\frac a{2t^2}\cos(tx)$ and the second gives $y_p=\frac {bx}{2t}\sinh(tx)$
Combinatorics - distribution of objects into boxes
If balls are distinguishable by color only then the first man can choose balls in $$w = \sum_{i = 0}^7 \sum_{j = 0}^8 [12 - 9 \le i + j \le 12]$$ ways, where $[P]$ is $1$ if $P$ is true and $0$ otherwise. Here $i$ is the number of red balls and $j$ is the number of yellow balls taken by the first man. The conditions on $i + j$ guarantee that there are enough green balls for the sum to reach $12$ and that the sum doesn't exceed $12$. The second man gets all remaining balls in only one way. So we can compute the desired number: $$w = \sum_{i = 0}^7\sum_{j} [0 \le j \le 8]\cdot [3 \le i + j \le 12] = \sum_{i = 0}^7\sum_j [\max \{\,0, 3 - i\,\} \le j \le \min \{\,8, 12 - i\,\}]\\ = \sum_{i = 0}^7 \left(\min \{\,8, 12 - i\,\} - \max \{\,0, 3 - i\,\} + 1\right) = 6 + 7 + 8 + 9 + 9 + 8 + 7 + 6 = 60.$$ P. S. I don't see a more elegant solution, but it may exist.
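A brute-force sketch of the count:

```python
# i red balls (0..7), j yellow balls (0..8); need 3 <= i + j <= 12
# so that the green balls can fill the selection up to 12.
count = sum(1 for i in range(8) for j in range(9) if 3 <= i + j <= 12)
assert count == 60
```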
Is this a valid proof that sine is continuous at the origin?
Provided you know properties of the arcsine your idea will be a proof. However, are you sure you do not need to know that $\sin$ is continuous to deduce properties of the arcsine? Spivak's calculus book has a note about a faulty proof he had in there in one of the pre-publication drafts. It used the square-root function in a proof that $x^2$ is continuous. But then, later, he used continuity of $x^2$ in the proof that the square-root exists. Fortunately, he caught the mistake before publication.
Compact metric space and homeomorphism
Suppose $f$ is not onto. Pick $x \in X \setminus f[X]$. Then let $\epsilon > 0$ be such that $\epsilon < d(x, f[X])$. $X$ can be covered by finitely many open sets of diameter $< \epsilon$, by compactness. Let $N$ be the smallest size of such a covering, and $\mathcal{U} = \{O_1,\ldots,O_N\}$ a witnessing cover. If $x \in O_i$, then $f[X]$ does not intersect $O_i$, so $f[X]$ is already covered by $\mathcal{U} \setminus \{O_i\}$, which has $N-1$ elements. Then $\{f^{-1}[O_j]: j \neq i \}$ covers $X$, consists of open sets, and the isometry property guarantees that all diameters are $< \epsilon$. This contradicts the minimality of $N$. $f$ is 1-1: If $f(x)=f(y)$ then $$0 = d(f(x),f(y))=d(x,y) \text{ so } x= y$$ And $f$ is indeed uniformly continuous. And as $f$ is bijective and an isometry, $f^{-1}$ is also an isometry and so also uniformly continuous. Or use that a continuous map on a compact space to a Hausdorff space is closed and a continuous closed bijection is a homeomorphism.
Infinite intersection is not empty. Say what's wrong about the proof of this assertion.
You won't get a one-to-one mapping from $\mathbb R_+$ to $\mathbb Q$ in this way. Every positive real is larger than some $\frac1n$, but nothing in what you say implies that two different positive reals are larger than different $\frac1n$s. (In fact, you don't get a concrete mapping at all unless you either appeal to the Axiom of Choice to pick out for each of the $p$s one of the many $\frac1n$s that are smaller than it, or decide on a concrete way to choose between them -- say, always pick the smallest $n$ such that $\frac1n<p$. But in the latter case it is easy to see that, for example, the real numbers $\frac12+\frac17\pi$ and $\frac12+\frac18\pi$ both map to $\frac12$, so your correspondence is not one-to-one).
Differential equation: existence of a unique solution?
This is a linear differential equation whose coefficients are continuous on its domain. On every interval $[a,b]\subset(0,\infty)$ the coefficient function $\frac1{x^2}$ is continuous and bounded, so it attains a maximum, which serves as a local Lipschitz constant; existence and uniqueness of the solution on $[a,b]$ then follow from the Picard–Lindelöf theorem.
Mathematical Analysis: Riemann Integration
Hint: $$x_i\gt \frac{x_i+x_{i-1}}{2}.$$ Then use telescoping.
Which statement is true about the following sequence: $f(1)=1,f(2n)=f(n),f(2n+1)=f(n)+f(n+1)$
Hint. Consider the values of $f(2^m)$, for $m \geq 0$. What values are taken on by that subsequence? Now consider the values of $f(2^m+1)$, for $m \geq 0$. What values are taken on by that subsequence?
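If you want to see the pattern before proving it, here is a small script (a sketch; the recurrence is the one in the title, with $f(1)=1$) that prints the two subsequences from the hint:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n == 1:
        return 1
    if n % 2 == 0:
        return f(n // 2)              # f(2n) = f(n)
    return f(n // 2) + f(n // 2 + 1)  # f(2n+1) = f(n) + f(n+1)

print([f(2**m) for m in range(8)])      # all 1s
print([f(2**m + 1) for m in range(8)])  # 1, 2, 3, 4, 5, ...
```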
complexity of matrix multiplication
As you say, evaluating $\operatorname{tr}(XY)$ is order $n^2$: you have $n$ diagonal terms, each of which takes $n$ multiplies and $n$ adds to evaluate, and the final $n$ additions are dominated. To do $\operatorname{tr}(ABCD)$ I don't see anything better than first finding $AB$ and $CD$, each of which costs $n^3$ operations, or, if you are more clever, $n^{2.373}$. Then use your $n^2$ trace calculation, giving order $n^3$ or $n^{2.373}$ overall. This will work for any number of factors: multiply all but the last together, then apply the $n^2$ trace-of-a-product calculation. There might be something more clever out there.
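A NumPy sketch of the pairing strategy, using the $n^2$ identity $\operatorname{tr}(XY)=\sum_{i,j}X_{ij}Y_{ji}$ to avoid forming the final full product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

AB = A @ B          # O(n^3), or faster with Strassen-like algorithms
CD = C @ D          # O(n^3)
# tr(XY) = sum_ij X_ij * Y_ji, an O(n^2) computation:
trace = np.sum(AB * CD.T)

print(trace, np.trace(A @ B @ C @ D))  # the two values should agree
```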
How can I solve this trigonometry question?
Hint: The area of a triangle can be expressed as half the product of two sides and the sine of the included angle, $\text{Area}=\frac12\,bc\,\sin A$. http://www.regentsprep.org/regents/math/algtrig/att13/areatriglesson.htm So the angle $A$ can be found. That, together with the given angle equation, reveals all the angles. You can then recover the length $|BC|$ by applying the area formula once more.
Continuity on $[a,b]$ implies uniform continuity on $[a,b]$
Your problem stems from the somewhat unfortunate circumstance that the author denotes a subsequence of the sequence $(x_n)_{n\geq1}$ by $(x_{k_n})$ instead of $(x_{n_k})_{k\geq1}$. He begins with two sequences $(x_n)_{n\geq1}$, $(y_n)_{n\geq1}$ behaving badly insofar as $|f(x_n)-f(y_n)|\geq\epsilon_0$ for all $n$, even though $|x_n-y_n|\to0$ as $n\to \infty$. In order to be able to invoke the continuity of $f$ we'd like to have all $x_n$, $y_n$ near some point $\xi\in[a,b]$. The Bolzano–Weierstrass theorem guarantees that there is some $\xi\in[a,b]$ such that "infinitely many" $x_n$, $y_n$ are in the immediate neighborhood of $\xi$. To be exact: There is a selection function $$\sigma:\quad{\mathbb N}\to{\mathbb N}, \qquad k\mapsto n_k$$ such that the "good" $x_n$, namely the selected $x_{n_k}$ $\>(k\geq1)$, actually converge to $\xi$. It is then easily seen that $$\lim_{k\to\infty} x_{n_k}=\lim_{k\to\infty} y_{n_k}=\xi\ .$$ Since $f$ is continuous at $\xi$ this implies $$\lim_{k\to\infty} \left(f\bigl(x_{n_k}\bigr)-f\bigl(y_{n_k}\bigr)\right)=f(\xi)-f(\xi)=0\ .$$ On the other hand, we have $|f(x_n)-f(y_n)|\geq\epsilon_0$ for all $n$, good or bad, so that we arrive at a contradiction.
Natural deduction: negation of quantifiers
For $¬∃xP(x)⊢∀x¬P(x)$, it is enough to assume $P(x)$ and derive a contradiction:

1) $¬∃xP(x)$ --- premise

2) $P(x)$ --- assumed [a]

3) $∃xP(x)$ --- from 2) by $∃$-introduction

4) $\bot$ --- from 1) and 3)

5) $¬P(x)$ --- from 2) and 4) by $¬$-introduction, discharging [a]

6) $∀x¬P(x)$ --- from 5), by $∀$-introduction ($x$ is no longer free in any open assumption once [a] is discharged, so the quantifier rule applies).
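For readers who like to machine-check such derivations, here is the same argument as a Lean 4 term (a sketch; the names `α`, `P`, `h` are just local hypotheses):

```lean
-- ¬∃x P(x) ⊢ ∀x ¬P(x): given x and a proof hp of P x,
-- form the witness ⟨x, hp⟩ : ∃ x, P x and apply the premise h.
example {α : Type} (P : α → Prop) (h : ¬ ∃ x, P x) : ∀ x, ¬ P x :=
  fun x hp => h ⟨x, hp⟩
```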
Can a Laurent series be found for $f(z)=\frac{1}{(z+1)(z+2)}$ in the region $0<|z+1|<2$?
Hint: Consider the partial fraction decomposition.
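To make the hint concrete: writing $w=z+1$, the decomposition is $$\frac{1}{(z+1)(z+2)}=\frac1{w}-\frac{1}{1+w}\,,$$ and $\frac{1}{1+w}=\sum_{k\ge0}(-1)^k w^k$ converges only for $|w|<1$, while $\frac{1}{1+w}=\sum_{k\ge1}(-1)^{k-1}w^{-k}$ converges for $|w|>1$. The pole at $w=-1$ (that is, $z=-2$) lies inside the punctured disk $0<|z+1|<2$, so no single Laurent series is valid on all of it; the region splits into the annuli $0<|w|<1$ and $1<|w|<2$, each with its own expansion.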
What exactly is a Haar measure
Summarizing some comments, and continuing: the main point is that Haar measure is translation-invariant (and for non-abelian groups, in general, left-invariant and right-invariant are not identical, but the discrepancy is intelligible). Unless you have intentions to do something exotic (say, on not-locally-compact, or not-Hausdorff "groups", which I can't recommend), you'll be happier later to have a regular measure, so, yes, the measure of a set is the inf of the measures of the opens containing it, and is the sup of the measures of the compacts contained in it, and, yes, the measure of a compact is finite. Probably you will also want completeness, especially when taking products, so subsets of measure-zero sets have measure zero. Probably you'll want your groups to be countably-based, too, to avoid some measure-theoretic pathologies. Then, for abelian topological groups (meaning locally compact, Hausdorff, probably countably-based), the basics of "Fourier series/transforms" work pretty well, as in Pontryagin and Weil. The non-abelian but compact case also turns out very well.
Prove that $\frac{1}{K(B)}\frac{\|A-B\|}{\|A\|} \le \frac{\|A^{-1}-B^{-1}\|}{\|B^{-1}\|} \le K(A)\frac{\|B-A\|}{\|A\|}.$
I don't know why you are doing $$\|A-B\| = \|A\| - \|B\|$$ at certain places, because it's certainly not true. Start with the condition number of $A$ and write the RHS of your inequality as: $$k(A) = \|A\|\|A^{-1}\|\geq\dfrac{\|A\|\|B^{-1} - A^{-1}\|}{\|B^{-1}\|\|B-A\|};$$ can you take it from here? You would need to use the fact that operator norms are sub-multiplicative: $$\|XY\|\leq\|X\|\|Y\|$$
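For intuition, a quick numerical check of the right-hand inequality (a sketch with random well-conditioned matrices; it holds because $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$ together with sub-multiplicativity):

```python
import numpy as np

# Check ||A^{-1}-B^{-1}|| / ||B^{-1}|| <= K(A) * ||B-A|| / ||A||
# in the spectral norm, for a few random perturbations B of A.
rng = np.random.default_rng(1)
norm = lambda M: np.linalg.norm(M, 2)
for _ in range(3):
    A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # well-conditioned
    B = A + 0.1 * rng.standard_normal((4, 4))
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    lhs = norm(Ai - Bi) / norm(Bi)
    rhs = norm(A) * norm(Ai) * norm(B - A) / norm(A)  # K(A)*||B-A||/||A||
    print(lhs <= rhs, round(lhs, 6), round(rhs, 6))
```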
About the proof of Hilbert theorem 90,$(Σf(τ)α^τ)^σ=Σf(τ)^σ {α^τ}^σ $
$\sigma$ is an element of the Galois group, so it is an automorphism of the field $E$. This means that $(a+b)^\sigma=a^\sigma+b^\sigma$ and $(ab)^\sigma=a^\sigma b^\sigma$. Therefore $$\left(\sum_\tau f(\tau)\alpha^\tau\right)^\sigma =\sum_\tau \left(f(\tau)\alpha^\tau\right)^\sigma =\sum_\tau f(\tau)^\sigma\left(\alpha^\tau\right)^\sigma.$$ The first equality holds as $\sigma$ respects addition, and the second as it respects multiplication.
Polar integration question
You are correct. To work out the correct result, change to spherical coordinates: $$ V = \int_S \mathrm{d}V = \int_{\mathbb{S}^{n-1}} \mathrm{d} \Omega(\alpha) \int_0^{R(\alpha)} r^{n-1} \mathrm{d} r = \int_{\mathbb{S}^{n-1}} \mathrm{d} \Omega(\alpha) \frac{1}{n} R(\alpha)^{n} = \frac{S_{n-1}}{n } \int_{\mathbb{S}^{n-1}} \frac{\mathrm{d} \Omega(\alpha)}{S_{n-1}} R(\alpha)^{n} $$ The last form is $ V = \left( \frac{S_{n-1}}{n} \right) \cdot \mathbb{E}( R^n )$, where the expectation is over a uniformly random direction $\alpha$. But $c = \frac{S_{n-1}}{n} $ is exactly the volume of the unit ball in $\mathbb{R}^n$.
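A minimal 2D instance as a sanity check ($R(\theta)=1+\frac12\cos\theta$ is an arbitrary example of a star-shaped boundary; here $S_1=2\pi$, so $V=\pi\,\mathbb{E}(R^2)$):

```python
import numpy as np

# Area of the region r <= R(theta) = 1 + 0.5*cos(theta):
# Monte Carlo estimate of pi * E[R^2] vs. the exact (1/2)∫R^2 dθ.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1_000_000)
R = 1 + 0.5 * np.cos(theta)
print(np.pi * np.mean(R**2))    # Monte Carlo estimate
print(np.pi * (1 + 0.5**2 / 2)) # exact value pi*(1 + c^2/2), c = 0.5
```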
Prove $\exists T\in\mathfrak{L}(V,W)$ s.t. $\text{null}(T)=U$ iff $\text{dim}(U)\geq\text{dim}(V)-\text{dim}(W)$.
You have basically done it: you know that $n\leq p$, so just define $T: V\to W$ by $$ T(u_i) = 0,\ 1\leq i\leq m, \text{ and } T(v_j) = w_j,\ 1\leq j\leq n $$ Then $T$ extends to a linear map $T \in \mathcal{L}(V,W)$. By this construction $$ U \subset \text{null}(T) $$ Furthermore, if $T(v) = 0$, then write $$ v = \sum_{i=1}^m \alpha_i u_i + \sum_{j=1}^n \beta_j v_j $$ Then $$ T(v) = \sum_{j=1}^n \beta_j w_j = 0 $$ By linear independence of $\{w_j\}_{j=1}^n$, it follows that $\beta_j = 0$ for all $j$, and hence $$ v = \sum_{i=1}^m \alpha_i u_i \in U $$ Hence, $U = \text{null}(T)$.
Solution to a Homogeneous Difference Equation
Computed by hand, $$a_0=1,\ a_1=3,\ a_2=3^2,\ a_3=3^3,\ \cdots$$ leaves little doubt that $a_n=3^n$; an induction then confirms it.
Finding the measure of a set using Lebesgue Integral
Following Cameron Williams, an inequality is given by $\mu(\{f\geq c\})= \int_{\{f\geq c\}} 1\, d\mu \leq \frac{1}{c} \int f\, d\mu,$ since $\frac{f}{c} \geq 1$ on $\{f\geq c\}$.
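A tiny numerical illustration of this inequality (Markov's inequality), using the empirical measure of a sample; the exponential distribution is just an arbitrary nonnegative example:

```python
import numpy as np

# mu({f >= c}) <= (1/c) * integral of f, checked on a random sample.
rng = np.random.default_rng(0)
f = rng.exponential(size=100_000)  # nonnegative values
for c in (1.0, 2.0, 4.0):
    print(c, np.mean(f >= c), f.mean() / c)  # LHS <= RHS each time
```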
Proving a transformation is not a linear transformation
Show that it does not map the zero function to the zero function: a linear transformation must send $0$ to $0$, since $T(0)=T(0+0)=T(0)+T(0)$.
Calculating arbitrary sines/cosines
The famous CORDIC algorithm (last seen discussed here) actually implements this idea. A more primitive version: one knows some trig values algebraically, with some effort for all multiples of $3°$. Also, the complex square root can be reduced to real square roots, so halving an angle is possible. So, for example, to compute (approximate) $\sin$ and $\cos$ of $59°$ you could split it as $60°-1°$ and then use the binary expansion $1°/3°=(0.010101\ldots)_2=\frac14+\frac1{16}+\frac1{64}+\ldots$ to compose $\cos1°+i\sin1°$: start with the values for $3°$, as obtained from the known exact expressions via $3°=90°-15°-72°$, and multiply together the fractions obtained by repeated angle bisection, $3°/4=0.75°$, $3°/16=0.1875°$, $3°/64=0.046875°$, etc.
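A rough Python sketch of this composition (it seeds the recursion with a floating-point $\cos 3°+i\sin 3°$ rather than the exact algebraic value, and truncates the binary expansion of $1/3$ after 16 digits):

```python
import cmath, math

# Build cis(1°) from cis(3°) by repeated halving (complex square roots),
# following 1/3 = 0.010101..._2, then cis(59°) = cis(60°) / cis(1°).
def cis(deg):
    return cmath.exp(1j * math.radians(deg))

z = 1 + 0j
half = cis(3.0)                      # in principle known exactly
for bit in "0101010101010101":       # digits of 1/3 after the binary point
    half = cmath.sqrt(half)          # bisect the angle
    if bit == "1":
        z *= half                    # accumulate 3°/4, 3°/16, 3°/64, ...
print(z, cis(1.0))                   # two approximations of cis(1°)
print(cis(60.0) / z, cis(59.0))      # cos 59° + i sin 59°
```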
Coordinate Functions and Coordinate Bases
The set of tangent vectors at $p$, denoted $T_pM$, is a vector space of dimension $n$. It has many bases, and what is being said is simply that you can use either basis to decompose any tangent vector. Remember that $\partial_{x_i}$ is a notation: for $f$ smooth, we have $$\partial_{x_i}f=\partial_i(f\circ x^{-1})(x(p))$$ where $x=(x_i)$ is the diffeomorphism that has the $x_i$ as coordinate functions.
What is the standard deviation of the expected maximum value of a set of n random numbers between 0 and 1?
The $k$th order statistic of a sample of size $n$ from $U[0,1]$ has a Beta distribution with parameters $\alpha=k, \beta=n-k+1$, with mean $\dfrac{k}{n+1}$ and variance $\dfrac{k(n-k+1)}{(n+1)^2(n+2)}$ So the maximum (i.e. the $n$th order statistic) has a Beta distribution with parameters $\alpha=n, \beta=1$, with mean $\dfrac{n}{n+1}$ and variance $\dfrac{n}{(n+1)^2(n+2)}$ The standard deviation of the maximum is then $\sqrt{\dfrac{n}{(n+1)^2(n+2)}}$
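A quick Monte Carlo check of these formulas (a sketch; $n=10$ and the sample size are arbitrary):

```python
import numpy as np

# Compare sample mean/SD of the maximum of n uniforms with the
# Beta(n, 1) formulas above.
rng = np.random.default_rng(0)
n, trials = 10, 200_000
maxima = rng.random((trials, n)).max(axis=1)

mean_theory = n / (n + 1)
sd_theory = np.sqrt(n / ((n + 1)**2 * (n + 2)))
print(maxima.mean(), mean_theory)
print(maxima.std(), sd_theory)
```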
How to sample from a copula?
In general, there is (as far as I know) only one universally applicable sampling method. It goes by the name of "conditional distribution method", "conditional inverse method" or "conditional sampling". Depending on the specific type of copula, other simulation methods can be available. 1) Conditional distribution method I'll limit myself to the 2D case, and specifically I follow chapter 2.9 (page 40ff) in: Nelsen. An introduction to copulas. Springer 2006, 2nd edition. For random variables $(U,V) \sim C$ (i.e. $U$ and $V$ have the copula you want to simulate from as joint distribution) denote by $c_u$ the conditional distribution of $V$ given $U=u$. It can be shown that $c_u$ has the following form \begin{align} c_u(v) = \Pr[V\leq v | U = u] = \frac{\partial C(u,v)}{\partial u}. \end{align} The function $c_u$ is furthermore non-decreasing, implying that a (generalized) inverse $c_u^{\leftarrow}$ exists. You can now simulate from $C$ as follows: Generate two independent standard uniform random numbers $u$ and $t$. Set $v = c_u^{\leftarrow}(t)$. The pair $(u,v)$ is now a sample from $C$. In principle this method also works in the $d$-dimensional case, but it can in general be quite difficult to derive the necessary inverses of the conditional distribution functions. The $d$-dimensional description is for example in Cherubini, Luciano, Vecchiato. Copula Methods in Finance. John Wiley & Sons, Ltd, 2004, chapter 6.3 (page 183ff), but is (in my opinion) often not the most practical way to simulate. So instead of the general approach, specialized approaches depending on the specific copula in question are often the easier way to go. 2) Copula specific methods Depending on the definition of the copula, more efficient simulation methods can be available. For example, in case of the Gaussian copula $C_R$ you can simply simulate from a multivariate normal distribution with standard normal margins $\Phi$ and variance-covariance matrix $R$. For a realization $\mathbf{x} = (x_1,\ldots,x_d)$ you can now apply $\Phi$ to each component to get $\mathbf{u} = (\Phi(x_1),\ldots,\Phi(x_d))$. Then $\mathbf{u}$ is a sample from the Gaussian copula $C_R$ with parameter matrix $R$. The same goes for the $t$-copula, where simulating from a multivariate $t$-distribution and then transforming with the margins is easier than simulating from the copula directly. For (nested) Archimedean copulas simulation is tied to the Laplace transform of the copula generator. You can find the details in: Hofert, M. (2007). Sampling Archimedean copulas (https://www.uni-ulm.de/fileadmin/website_uni_ulm/mawi/forschung/PreprintServer/2007/preprintmariushofert.pdf)
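As a concrete instance of method 1), here is a minimal sketch for the bivariate Clayton copula with parameter $\theta>0$, one of the cases where $c_u^{\leftarrow}$ is available in closed form, namely $v=\big(u^{-\theta}(t^{-\theta/(\theta+1)}-1)+1\big)^{-1/\theta}$ (the helper name `sample_clayton` is mine, not from the references above):

```python
import numpy as np

# Conditional inverse method for the bivariate Clayton copula (theta > 0):
# draw u, t ~ U(0,1) independently, then solve dC/du(u, v) = t for v.
def sample_clayton(n, theta, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(n)
    t = rng.random(n)
    v = (u**(-theta) * (t**(-theta / (theta + 1)) - 1) + 1)**(-1 / theta)
    return u, v

u, v = sample_clayton(10_000, theta=2.0)
print(np.corrcoef(u, v)[0, 1])  # positive dependence for theta > 0
```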
Use contour integral to show $\int_0^{2\pi}\frac{\mathrm dt}{(1+\cos(u)\cos(t))^2}=\frac{2\pi}{\sin^3(u)}$ for $u\in(0,\frac{\pi}{2})$
Let $$I(u):=\int_0^{2\pi}\,\frac{\text{d}t}{\big(1+\cos(u)\,\cos(t)\big)^2}\,.$$ Note that $I\left(\frac{\pi}{2}\right)=2\pi=I\left(\frac{3\pi}{2}\right)$, which agrees with the formula $I(u)=\frac{2\pi}{\left|\sin^3(u)\right|}$ for $u\in\left\{\frac{\pi}{2},\frac{3\pi}{2}\right\}$. From now on, we assume that $u\in(0,2\pi)\setminus\left\{\frac{\pi}{2},\frac{3\pi}{2}\right\}$. Let $z:=\cos(t)+\text{i}\,\sin(t)$; then, $$I(u)=\oint_\gamma\,\frac{4\,z}{\text{i}\,\cos^2(u)\,\left(z^2+2\,\sec(u)\,z+1\right)^2}\,\text{d}z\,,$$ where $\gamma$ is the positively oriented unit circle $\big\{w\in\mathbb{C}\,\big|\,|w|=1\big\}$. That is, $$I(u)=\frac{4}{\text{i}\,\cos^2(u)}\,\oint_\gamma\,\frac{z}{\big(z+\sec(u)-\tan(u)\big)^2\,\big(z+\sec(u)+\tan(u)\big)^2}\,\text{d}z\,.$$ Write $u_+:=-\sec(u)+\tan(u)$ and $u_-:=-\sec(u)-\tan(u)$. Define $$f_u(z):=\frac{z}{\big(z^2+2\,\sec(u)\,z+1\big)^2}=\frac{z}{\big(z+\sec(u)-\tan(u)\big)^2\,\big(z+\sec(u)+\tan(u)\big)^2}\,.$$ You can write $$(z-u_+)^2\,f_u(z)=\frac{z}{\left(z-u_-\right)^2}$$ so that $$\left.\frac{\text{d}}{\text{d}z}\right|_{z=u_+}\,\big((z-u_+)^2\,f_u(z)\big)=-\frac{u_++u_-}{(u_+-u_-)^3}=+\frac{\cos^2(u)}{4\,\sin^3(u)}\,.$$ Similarly, you can write $$(z-u_-)^2\,f_u(z)=\frac{z}{\left(z-u_+\right)^2}$$ so that $$\left.\frac{\text{d}}{\text{d}z}\right|_{z=u_-}\,\big((z-u_-)^2\,f_u(z)\big)=-\frac{u_-+u_+}{(u_--u_+)^3}=-\frac{\cos^2(u)}{4\,\sin^3(u)}\,.$$ If $u\in\left(0,\frac{\pi}{2}\right)$, then $u_-<-1<u_+<0$. Hence, by the Residue Theorem, $$I(u)=\frac{4}{\text{i}\,\cos^2(u)}\,\Biggl(2\pi\text{i}\,\text{Res}_{z=u_+}\big(f_u(z)\big)\Biggr)=\frac{8\pi}{\cos^2(u)}\,\left(+\frac{\cos^2(u)}{4\,\sin^3(u)}\right)=+\frac{2\pi}{\sin^3(u)}\,.$$ If $u\in\left(\frac{\pi}{2},\pi\right)$, then $0<u_+<+1<u_-$. Hence, by the Residue Theorem, $$I(u)=\frac{4}{\text{i}\,\cos^2(u)}\,\Biggl(2\pi\text{i}\,\text{Res}_{z=u_+}\big(f_u(z)\big)\Biggr)=\frac{8\pi}{\cos^2(u)}\,\left(+\frac{\cos^2(u)}{4\,\sin^3(u)}\right)=+\frac{2\pi}{\sin^3(u)}\,.$$ If $u\in\left(\pi,\frac{3\pi}{2}\right)$, then $0<u_-<+1<u_+$. Hence, by the Residue Theorem, $$I(u)=\frac{4}{\text{i}\,\cos^2(u)}\,\Biggl(2\pi\text{i}\,\text{Res}_{z=u_-}\big(f_u(z)\big)\Biggr)=\frac{8\pi}{\cos^2(u)}\,\left(-\frac{\cos^2(u)}{4\,\sin^3(u)}\right)=-\frac{2\pi}{\sin^3(u)}\,.$$ If $u\in\left(\frac{3\pi}{2},2\pi\right)$, then $u_+<-1<u_-<0$. Hence, by the Residue Theorem, $$I(u)=\frac{4}{\text{i}\,\cos^2(u)}\,\Biggl(2\pi\text{i}\,\text{Res}_{z=u_-}\big(f_u(z)\big)\Biggr)=\frac{8\pi}{\cos^2(u)}\,\left(-\frac{\cos^2(u)}{4\,\sin^3(u)}\right)=-\frac{2\pi}{\sin^3(u)}\,.$$ In all cases, $$I(u)=\frac{2\pi}{\left|\sin^3(u)\right|}=2\pi\,\left|\text{csc}^3(u)\right|\text{ for all }u\in\mathbb{R}\setminus \pi\mathbb{Z}\,.$$ In general, $$\int_0^{2\pi}\,\frac{\text{d}t}{\big(1+r\,\sin(t)\big)^2}=\int_0^{2\pi}\,\frac{\text{d}t}{\big(1+r\,\cos(t)\big)^2}=\frac{2\pi}{\big(\sqrt{1-r^2}\big)^3}$$ for every $r\in\mathbb{C}\setminus\big((-\infty,-1]\cup[+1,+\infty)\big)$. Similarly, $$\int_0^{2\pi}\,\frac{\text{d}t}{1+r\,\sin(t)}=\int_0^{2\pi}\,\frac{\text{d}t}{1+r\,\cos(t)}=\frac{2\pi}{\sqrt{1-r^2}}$$ for every $r\in\mathbb{C}\setminus\big((-\infty,-1]\cup[+1,+\infty)\big)$. Here, we pick the branch of $\sqrt{1-r^2}$ in such a way that $$\big|1-\sqrt{1-r^2}\big|<|r|$$ (i.e., the branch cuts are $(-\infty,-1]$ and $[+1,+\infty)$).
That is, for $r\in\mathbb{C}\setminus\big((-\infty,-1]\cup[+1,+\infty)\big)$, the complex number $\sqrt{1-r^2}$ is in the open right half-plane of complex numbers with positive real part; in short, $$\text{Re}\big(\sqrt{1-r^2}\big)>0\text{ for every }r\in\mathbb{C}\setminus\big((-\infty,-1]\cup[+1,+\infty)\big)\,.$$
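A numerical spot check of the closed form $I(u)=\frac{2\pi}{\left|\sin^3(u)\right|}$ (a sketch using SciPy; the values of $u$ are arbitrary points of $\mathbb{R}\setminus\pi\mathbb{Z}$):

```python
import numpy as np
from scipy.integrate import quad

# Compare the numerical integral with 2*pi / |sin(u)|^3.
for u in (0.3, 1.0, 2.0, 4.0):
    val, _err = quad(lambda t: (1 + np.cos(u) * np.cos(t))**(-2), 0, 2 * np.pi)
    print(round(val, 6), round(2 * np.pi / abs(np.sin(u))**3, 6))
```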