How to Factorise these expressions?
Remember, and you must learn it by heart: $$(a{+}b)^2=a^2+2ab+b^2$$ and $$(a-b)^2=a^2-2ab+b^2$$ So try to write each of the two expressions in one of these two forms.
Riemannian 1-manifold is flat
Your $\phi$ won't automatically be an isometry with $\mathbb{R}$, but there is the following trick to modify it properly: First, following the hint of Danu, note that if $\gamma:\mathbb{R}\to M$ is a smooth map, then $\gamma^*g=f\,dt^2$ for a smooth function $f$ (symmetric tensors of $\mathbb{R}$ are spanned by $dt^2$). Since $\gamma^*g=g(d\gamma(\bullet),d\gamma(\bullet))$ by definition, we read that $$f=f\,dt^2\Big(\frac{\partial}{\partial t},\frac{\partial}{\partial t}\Big)=g\Big(d\gamma\Big(\frac{\partial}{\partial t}\Big),d\gamma\Big(\frac{\partial}{\partial t}\Big)\Big)=g(\gamma'(t),\gamma'(t)).$$ So our goal in constructing an isometry will be to find a $\gamma$ such that $f=1$, i.e. $g(\gamma',\gamma')=1$. Write $\psi:=\phi^{-1}:(a,b)\to M$. The map $l:t\mapsto\int_c^t|\psi'(s)|\,ds$ - where $c\in(a,b)$ and $|\psi'(s)|=\sqrt{g(\psi'(s),\psi'(s))}$ - is differentiable, with derivative $l'(t)=|\psi'(t)|>0$. So $l$ has a differentiable inverse, say $h$. Note that $$h'(t)=\frac{1}{l'(h(t))}=\frac{1}{|\psi'(h(t))|}$$ and so $|(\psi\circ h)'(t)|=|\psi'(h(t))\cdot h'(t)|=\Big|\frac{\psi'(h(t))}{|\psi'(h(t))|}\Big|=1$. Thus $\gamma:=\psi\circ h$ is, as required, a unit-speed parametrization of $M$: since $\gamma^*g=g(\gamma'(t),\gamma'(t))\,dt^2=dt^2$, $\gamma$ is an isometry.
what is the relation between the number of elements in a geometric sequence and its summation?
For $x \ne 1$ we have $S=1+x+x^2+\dots+x^{n-1}= \frac{x^n-1}{x-1}$, hence $x^n=S(x-1)+1$. Can you proceed?
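A quick numerical illustration of the hint (a Python sketch; `geometric_sum` is my own name): since $x^n=S(x-1)+1$, one can recover the number of terms $n$ from the sum $S$ by taking a logarithm.

```python
import math

def geometric_sum(x, n):
    """Partial sum S = 1 + x + ... + x^(n-1), valid for x != 1."""
    return (x ** n - 1) / (x - 1)

# Recover n from S via x^n = S(x - 1) + 1, i.e. n = log_x(S(x - 1) + 1).
x, n = 3, 5
S = geometric_sum(x, n)                      # (3^5 - 1)/2 = 121
recovered = math.log(S * (x - 1) + 1, x)
assert abs(recovered - n) < 1e-9
```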
Confidence intervals for two proportions.
There are a variety of ways of tackling a problem of this kind/size. The problem with looking for differences on one variable (say in area) is that there are other important variables (like age). If you ignore such variables you invite a host of problems (not being able to pick up important differences, or finding differences that are illusory or even reversed from the true direction of difference; this is also familiar in regression and ANCOVA, where an effect can change direction from a univariate analysis once you take into account another important covariate; the simple analysis may be misleading). For count data, there are generalized linear models. For contingency tables, an analysis using loglinear models (a subset of GLMs) would be fairly common. The advantage of using GLMs is that you get the ability to use models for count data like you would use ANOVA, ANCOVA and regression for continuous variables (with assumptions of normality) - you can build suitable, interpretable models that will let you make conclusions about (say) relative odds of taking one mode of transport rather than another in the two towns, for a given age group. [It's probably better to work with your actual problem, avoiding jargon where convenient, explaining jargon where necessary, though. Incidentally, if you're going to explain the actual problem you have, stats.stackexchange.com is likely to have a greater concentration of people used to working on exactly this kind of problem; I'm a statistician myself (and there are statisticians here, certainly), but over there, that's pretty much everyone, and some of them will very likely have familiarity with problems like your actual one.] If you'd like (here or there), I could try to help come up with models that would answer the kind of questions you're talking about. Incidentally, in regard to your 'age' variable, it would be more typical to give the data as "Age <30" vs "Age 30+" rather than as "total" vs "subgroup".
You can't compare the total with the subgroup - you'd just have to split the 'total' group up to make the comparison anyway. Do you have suitable software - something that will fit GLMs/loglinear models? Is this something you're looking to publish? Is it for a thesis? Some coursework?
Composition of mappings on finite sets
Why shouldn't $g \circ f$ be defined? $f$ and $g$ are two well-defined functions and we have ${\rm dom}(g) = {\rm codom}(f) = \underline m$. These are all the conditions that have to be fulfilled. We have - as always for compositions - $(g\circ f)(j) = g\bigl(f(j)\bigr)$ for $j \in \{1, \ldots, n\}$. If we identify these functions with tuples, it may seem unusual to have compositions, but it changes nothing. Definedness of compositions is not changed by a change of notation. In your example we have - as $g$ is the identity - $$ (g \circ f) = f = (2,3,4). $$
How to compute the derivative of cosine calculated from cross-product?
Differentiating a scalar $A$ with respect to a vector $\mathbf{B}$ gives a vector whose $i$th entry is obtained by instead differentiating with respect to $B_i$. In this case $A=\dfrac{\sum_i W_i X_i}{\sqrt{\sum_j W_j^2}\sqrt{\sum_k X_k^2}}$, which you can differentiate with respect to either $W_l$ or $X_l$ using the product & chain rules, which I'll leave to you. By symmetry, we only need to do the $W_l$ case; if we get $f_l(\mathbf{W},\,\mathbf{X})$, the derivative with respect to $X_l$ is $f_l(\mathbf{X},\,\mathbf{W})$.
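If it helps, here is a sketch of the resulting formula in Python with a finite-difference check (the helper names are mine; the gradient formula is the one the product and chain rules give: $f_l = X_l/(\|W\|\|X\|) - (W\cdot X)\,W_l/(\|W\|^3\|X\|)$):

```python
import math

def cos_sim(W, X):
    """A = (W . X) / (|W| |X|)."""
    dot = sum(w * x for w, x in zip(W, X))
    nw = math.sqrt(sum(w * w for w in W))
    nx = math.sqrt(sum(x * x for x in X))
    return dot / (nw * nx)

def grad_W(W, X):
    """dA/dW_l = X_l/(|W||X|) - (W . X) W_l/(|W|^3 |X|); by the symmetry
    noted above, the X-gradient is grad_W(X, W)."""
    dot = sum(w * x for w, x in zip(W, X))
    nw = math.sqrt(sum(w * w for w in W))
    nx = math.sqrt(sum(x * x for x in X))
    return [x / (nw * nx) - dot * w / (nw ** 3 * nx) for w, x in zip(W, X)]

# Finite-difference check of each component.
W, X = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
h = 1e-6
for l in range(3):
    Wp = list(W)
    Wp[l] += h
    numeric = (cos_sim(Wp, X) - cos_sim(W, X)) / h
    assert abs(numeric - grad_W(W, X)[l]) < 1e-5
```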
What is a _least ancestor_ in Boyer Myrvolds paper?
While I haven't yet looked at your reference, I'll give you the typical graph-theoretic definitions. The terms "least ancestor" and "least common ancestor" are usually used in reference to a rooted tree, although you can extend the definitions I'm about to give to other contexts. Take your rooted tree $T$ with root $v$ and imagine it drawn "downward"; that is, $v$ is the highest vertex in the drawing, then $v$'s children are drawn below $v$, and then their children, and so on. This picture should be familiar to CS people, so I won't elaborate any more (let me know if you need more explanation). Let me first begin by defining a "descendant" of a vertex in our picture. A descendant of a vertex $u$ of $T$ is any vertex $w$ that appears "beneath" $u$, with the only route to the root being through $u$; if the path connecting $u$ and $w$ does not have to travel upwards in our picture, then $w$ is a descendant of $u$. The "ancestor" relation is the opposite of the descendant relation: $u$ is an ancestor of $w$ if and only if $w$ is a descendant of $u$. The "least ancestor" of $u$ is the lowest ancestor of $u$ in our picture. The least ancestor is also the ancestor of $u$ that has no other ancestors of $u$ as descendants; in other words, it is the vertex directly above $u$ in $T$. The least common ancestor of $u$ and $w$ is the lowest vertex that is an ancestor of both $u$ and $w$. The least ancestor is in reference to one vertex, whereas the least common ancestor is in reference to two vertices.
Power series representation
A hint: if $$g(t) = \sum_{n=0}^\infty a_nt^n$$ and $$h(t) = \sum_{n=0}^\infty b_nt^n$$ then $$g(t)h(t) = \sum_{n=0}^\infty\left(\sum_{k=0}^n a_kb_{n-k}\right)t^n.$$
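In code, this coefficient rule is just a (truncated) convolution; a small Python sketch:

```python
def cauchy_product(a, b):
    """First min(len(a), len(b)) coefficients of g(t) h(t), given the
    coefficient lists a of g and b of h (low degree first)."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

# Example: g = h = 1/(1 - t) has all coefficients 1, and
# 1/(1 - t)^2 = sum (n + 1) t^n.
assert cauchy_product([1] * 6, [1] * 6) == [1, 2, 3, 4, 5, 6]
```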
Motivation for a new definition of the derivative using the concept of average velocity
In the following it is assumed that $a<0<b$ whenever the letters $a$ and $b$ appear. Then one has the following Proposition. Let $f:\ U\to{\mathbb R}$ be defined in a neighborhood of $0$. If the limit $$\lim_{b-a\to0}{f(b)-f(a)\over b-a}=:p$$ exists then the one-sided limits $\lim_{x\to0-} f(x)$ and $\lim_{x\to0+} f(x)$ exist and are equal. If $f(0)$ equals this common limit then $f'(0)=p$. Proof. Subtracting a linear function from $f$ we may assume $p=0$. Let an $\epsilon>0$ be given. Then by assumption there is a $\delta\in\ \bigl]0,{1\over4}\bigr]$ such that for $$-\delta < a < 0 < b <\delta$$ one has $$|f(b)-f(a)|<\epsilon(b-a)<{\epsilon\over2}\ .\tag{1}$$ It follows that for arbitrary $x$, $x'\in\ ]0,\delta[\ $ and $a:=-{\delta\over2}$ we are sure that $$|f(x)-f(x')|\leq |f(x)-f(a)|+|f(a)-f(x')|<\epsilon\ .$$ This implies by Cauchy's criterion the existence of $\lim_{x\to0+} f(x)$. Similarly for $\lim_{x\to0-} f(x)$, and then the equality of the two limits is obvious. For the last statement we now assume $f(0)=\lim_{x\to0} f(x)$. Letting $a\to0-$ in $(1)$ we see that $$|f(b)-f(0)|\leq\epsilon b\qquad(0<b<\delta)\ ,$$ and as $\epsilon>0$ was arbitrary this implies $$\lim_{b\to0+}{f(b)-f(0)\over b}=0\ .$$ Arguing similarly about the left-hand limit we are done.$\qquad\square$ The converse of this proposition has been dealt with by coffeemath.
Is there any intuition behind why the derivative of $\cos(x)$ is equal to $-\sin(x)$ or just something to memorize?
Here is a geometric interpretation that is easy to remember: the unit circle is parametrized by $(\cos t, \sin t)$ and hence its tangent vector is orthogonal to the position vector. Rotating the position vector by 90 degrees gives you $(-\sin t, \cos t)$ and so $\cos'=-\sin$ and $\sin'=\cos$. This argument has a simple interpretation in terms of complex numbers. $\exp(it) = \cos t + i\,\sin t$ and so $(\exp(it))'= \cos' t + i\,\sin' t$. But $(\exp(it))' = i \exp(it) = -\sin t +i\,\cos t$. These two arguments are the same, really.
Proof that isomorphic graphs must have the same number of vertices
You just proved it. Anyway, this is a special case of a more general phenomenon, namely that: functors preserve isomorphisms hence functors preserve isomorphicness. In this case, we're talking about the underlying set functor $$U : \mathbf{Grph} \rightarrow \mathbf{Set}.$$
The roots of the equation $z^2+pz+q=0,$ where $p,q$ are complex numbers, are ..
Let $A(z_1)$, $B(z_2)$, $z_1=r(\cos\theta_1+i\sin\theta_1)$, $z_2=r(\cos\theta_2+i\sin\theta_2)$ and $\theta_1-\theta_2=2\beta$, where $r\geq0$ and $\{\theta_1,\theta_2\}\subset[0,2\pi).$ Thus, $$q=z_1z_2=r^2(\cos(\theta_1+\theta_2)+i\sin(\theta_1+\theta_2))$$ and $$-p=z_1+z_2=r(\cos\theta_1+\cos\theta_2+i(\sin\theta_1+\sin\theta_2))=$$ $$=2r\left(\cos\frac{\theta_1+\theta_2}{2}\cos\beta+i\sin\frac{\theta_1+\theta_2}{2}\cos\beta\right)=2r\cos\beta\left(\cos\frac{\theta_1+\theta_2}{2}+i\sin\frac{\theta_1+\theta_2}{2}\right).$$ Id est, $$p^2=4r^2\cos^2\beta\left(\cos\frac{\theta_1+\theta_2}{2}+i\sin\frac{\theta_1+\theta_2}{2}\right)^2=$$ $$=4r^2\cos^2\beta(\cos(\theta_1+\theta_2)+i\sin(\theta_1+\theta_2))=4q\cos^2\beta$$ and we are done!
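A numerical spot-check of the final identity $p^2=4q\cos^2\beta$ (a Python sketch; the roots of equal modulus are chosen arbitrarily):

```python
import cmath
import math

# Roots of equal modulus r with angles t1, t2; beta = (t1 - t2)/2.
r, t1, t2 = 2.0, 1.1, 0.3
z1 = r * cmath.exp(1j * t1)
z2 = r * cmath.exp(1j * t2)
p = -(z1 + z2)          # Vieta: z1 + z2 = -p
q = z1 * z2             # Vieta: z1 z2 = q
beta = (t1 - t2) / 2
assert abs(p ** 2 - 4 * q * math.cos(beta) ** 2) < 1e-12
```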
Prove that 1 has n distinct roots of order n
If $z_1, \ldots, z_n$ are pairwise distinct roots of an $n$-th degree polynomial $p(z)$ then they all have multiplicity 1 and there are no other roots. This I hope is clear, and if not I can explain it further. The numbers $z_1, \ldots, z_n$ defined by $$z_k = \cos(2 \pi k / n) + i \sin (2 \pi k / n)$$ are all roots of $z^n - 1$ by de Moivre's formula. They are also pairwise distinct. Therefore, they are the only roots of $z^n - 1$. In short, we guess $n$ pairwise distinct roots of $z^n = 1$, and then we conclude that there cannot be any others.
Are Chebyshev polynomials not monic for n $\geq$ 2?
Tchebyshev polynomials satisfy the recurrence relation: $$T_0(x)=1 \\ T_1(x)=x \\T_{n+1}(x)=2xT_n(x)-T_{n-1}(x).$$ So by inspection it isn't difficult to see that the polynomials will always have the form $$T_{n+1}(x) = a_{n+1}x^{n+1}+O(x^{n-1}),$$ where $a_{n+1}=2a_n$, hence $a_{n+1}=2^{n}$. Therefore Tchebyshev polynomials are not monic for $n \gt 1$.
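A small Python check of the leading coefficients computed from the recurrence (the helper name is mine): the leading coefficient of $T_n$ comes out as $2^{n-1}$ for $n\geq1$.

```python
def chebyshev_coeffs(n):
    """Coefficients (low to high degree) of T_n, computed from the recurrence."""
    T = [[1], [0, 1]]                        # T_0 = 1, T_1 = x
    for k in range(1, n):
        nxt = [0] + [2 * c for c in T[k]]    # 2x * T_k
        for i, c in enumerate(T[k - 1]):     # ... minus T_{k-1}
            nxt[i] -= c
        T.append(nxt)
    return T[n]

# Leading coefficient of T_n is 2^(n-1) for n >= 1: monic only for n = 1.
assert chebyshev_coeffs(2) == [-1, 0, 2]     # T_2 = 2x^2 - 1
assert all(chebyshev_coeffs(n)[-1] == 2 ** (n - 1) for n in range(1, 10))
```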
Probability of dominating set in random balanced tournament
Revised answer after your edit: I'm not sure where you're getting the numerator ${n-1 \choose (n-1)/2-k}$ in your expression for $P[D_v]$. It looks like you're trying to count the number of ways to choose the $(n-1)/2 - k$ remaining out-neighbors for $v$, since we're assuming that $v$ is adjacent to each of the $k$ vertices in $S$. But you're making this choice from a set of $(n-1)-k$ remaining vertices, not $n-1$. So, I think the numerator should instead be ${n-1-k \choose (n-1)/2 - k}$. Taking $n=5$ and $k=1$, your original expression says that $P[D_v] = {4 \choose 1}/{4 \choose 2} = 2/3$, but the corrected expression gives $P[D_v] = {3 \choose 1}/{4 \choose 2} = 1/2$, as intuition would suggest.
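A sanity check of the corrected expression with exact arithmetic (a Python sketch; `p_dominated` is my own name):

```python
from math import comb
from fractions import Fraction

def p_dominated(n, k):
    """P[D_v] with the corrected numerator: the remaining (n-1)/2 - k
    out-neighbours of v are chosen among the n-1-k vertices outside S."""
    return Fraction(comb(n - 1 - k, (n - 1) // 2 - k),
                    comb(n - 1, (n - 1) // 2))

assert p_dominated(5, 1) == Fraction(1, 2)   # the corrected value from the answer
```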
Are there more than $\beth_1$ non-homeomorphic topological subspaces of $\Bbb R$?
Every subset of $\mathbb{R}$ has a countable dense subset. If $X\subseteq\mathbb{R}$ and $A\subseteq X$ is a countable dense subset, a homeomorphism from $X$ to another subset $Y\subseteq\mathbb{R}$ is determined by its restriction to $A$. So there is an injection from the set of homeomorphisms from $X$ to other subsets of $\mathbb{R}$ to the set of functions from $A$ to $\mathbb{R}$. There are only $\beth_1$ functions from $A$ to $\mathbb{R}$ since $A$ is countable. So each subset of $\mathbb{R}$ can be homeomorphic to at most $\beth_1$ other subsets of $\mathbb{R}$. Since there are $\beth_2>\beth_1$ different subsets of $\mathbb{R}$, there must be $\beth_2$ different homeomorphism classes of subsets of $\mathbb{R}$.
Prove that $ax + by = a + c$ has solutions if and only if $ax + by = c$ has solutions
Well, if $x_s$ and $y_s$ solve $ax + by = c$, we have: $ax + by = a + c = a + ax_s + by_s \Rightarrow a(x-(x_s+1)) + b(y-y_s) = 0 \Rightarrow$ $\begin{align}x_r=x_s+1\\y_r=y_s\phantom{....}\end{align}\;$ solve $ax+by=a+c$ Now, if $x_r$ and $y_r$ solve $ax+by = a+c$, we have: $ax + by = c = ax_r-a+by_r \Rightarrow a(x-(x_r-1))+b(y-y_r) = 0 \Rightarrow$ $\begin{align}x_s=x_r-1\\y_s=y_r\phantom{....}\end{align}\;$ solve $ax+by=c$ I don't know if it is formal enough for you, but I hope it helps anyway. Good luck!
Function Field of Variety and Scheme
I think exactly how to answer your question may depend on the definitions you are using. To set definitions: On the side of varieties: $K(V') = \operatorname{Frac}(k[X]/I(V'))$, where $k[X]$ is the coordinate ring of $X$. (And $V'$ is serving double duty, here as an algebraic set.) The residue field of the scheme $X$ at $V'$ is the residue field of the ring $k[X]$ localized at the prime $V'$. So the commutative algebra side of your question is: Given a ring $R$ and a prime ideal $P$, is $\operatorname{Frac}(R/P) \cong R_P / PR_P$? The answer is yes: localization commutes with quotients ("localization is exact"), and localizing $R/P$ at $P$ is the same as constructing its field of fractions, because we are inverting all nonzero elements.
Skolem Functions vs Elementary Equivalence
Yes. Specifically, by elementary equivalence the same function symbols serve as Skolem functions in $\mathfrak{B}$: "$\forall\overline{w}((\exists v\phi(v,\overline{w}))\rightarrow\phi(f(\overline{w}),\overline{w}))$" is a first-order sentence, so if it's true in $\mathfrak{A}$ it's true in $\mathfrak{B}$.
Tower of Brahma (a.k.a. Tower of Hanoi)
German Wikipedia says (without reference) that Lucas invented the story. French Wikipedia adds enough information to make that quite plausible. Lucas attributed it to one N. Claus de Siam, supposedly a mandarin at the college of Li-Sou-Stian; the name is an anagram of Lucas d’Amiens (Lucas was born at Amiens), and the name of the college is an anagram of Saint Louis, the name of the lycée where he taught. He said that during his travels in connection with the publication of the works of the illustrious Fer-Fer-Tam-Tam, this supposed mandarin witnessed the priests in the great temple of Benares transferring the disks on their diamond needles. According to Andreas M. Hinz in The Tower of Hanoi — Myths and Maths, Lucas was a member of the commission that edited Fermat’s collected works; he never went to Hanoi, but he was sent to Rome in this connection. The mandarin Fer-Fer-Tam-Tam is evidently Fermat, and Lucas’s own work with the commission seems to have suggested the story.
How to find length when viewing at some angle?
Your pictures are misleading. If you look at the rectangle in image A from an angle, it will not look like the image in B. Make the shape in image B into a 3D image and you'll see that it is misleading.
Different forms for the exterior power of a module
$B \subset C$ doesn't imply $A/B \subset A/C$. It implies $A/B$ surjects onto $A/C$. The argument in Dummit & Foote must not be finished. To show what you want (which presumably comes later in Dummit & Foote): The key is to use $(x+y) \otimes (x+y) = x \otimes y + y \otimes x + x\otimes x + y\otimes y$. This shows that $x\otimes y + y \otimes x$ is in $A(M)$, which will allow you to "move" (modulo $A(M)$) the two $m$'s next to each other. For example, $x \otimes y \otimes x = - x \otimes x \otimes y + x \otimes (\text{stuff})$, where $\text{stuff} = (x+y)\otimes(x+y) - x\otimes x - y\otimes y \in A(M)$. Thus $x \otimes y \otimes x \in A(M)$. This reasoning in general shows $A(M) = J(M)$.
Test the convergence of the sum $\sum_{n= 0 }^\infty\frac{4^{n-1}+2^n}{5^{n+1}}$
You may also try comparison test:$$\frac{4^{n-1}+2^n}{5^{n+1}} \lt \dfrac{4^{n+1}+4^{n+1}}{5^{n+1}}=2\left(\dfrac{4}{5}\right)^{n+1}$$
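A numerical check of both the termwise bound and the value of the series, which (my addition, by summing the two geometric pieces) is $\sum_{n\ge0}\frac{4^{n-1}}{5^{n+1}}+\sum_{n\ge0}\frac{2^n}{5^{n+1}}=\frac14+\frac13=\frac7{12}$:

```python
def term(n):
    """General term (4^(n-1) + 2^n) / 5^(n+1), n starting at 0."""
    return (4.0 ** (n - 1) + 2.0 ** n) / 5.0 ** (n + 1)

def bound(n):
    """Geometric majorant 2 * (4/5)^(n+1) from the comparison test."""
    return 2 * (4 / 5) ** (n + 1)

assert all(term(n) < bound(n) for n in range(200))
partial = sum(term(n) for n in range(200))
assert abs(partial - 7 / 12) < 1e-12
```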
Is the map $u\mapsto |u|^2 u$ globally or locally Lipschitz continuous in the $H_0^1$ norm?
The map $F(u) = |u|^2 u$ is homogeneous of degree 3, meaning $F(\lambda u) = \lambda^3 F(u)$ for all positive scalars $\lambda$. A homogeneous map cannot be globally Lipschitz unless the degree of homogeneity is $1$. Indeed, $\lambda\mapsto \lambda ^d$ is not Lipschitz on $(0, \infty)$ unless $d=1$. The best one can hope for is an inequality of the form $$ \|F(u)-F(v)\| \leq L(\|u\|+\|v\|)^{d-1}\|u-v\| \tag1$$ where $d$ is the degree of homogeneity. Both sides of (1) are homogeneous of degree $d$; thus we can divide both $u, v$ by the same constant, reducing the problem to the case $\|u\|, \|v\|\le 1$. In other words: if $F$ is degree $d$ homogeneous and Lipschitz on the unit ball, then it satisfies (1). Unfortunately, $F(u)=|u|^2u$ does not satisfy any such bound in the $L^2$ norm. Consider the function on the interval $[0, 1]$ given by $$ u_\epsilon (t) = (t+\epsilon )^{-1/3}, \quad \epsilon>0 $$ It is smooth, is in $L^2$ and its $L^2$ norm stays bounded as $\epsilon \to0$. Yet, the $L^2$ norm of $|u_\epsilon|^2u_\epsilon$ blows up as $\epsilon\to 0$. A similar example for the $H^1$ norm can be obtained by increasing the exponent by $1$ (this will be taken out by the derivative): $$ u_\epsilon (t) = (t+\epsilon )^{2/3}, \quad \epsilon>0 $$ Or, one can put $\epsilon$ in the exponent in such a way that it will produce blow up for $F(u)$ but not for $u$. This kind of example can be turned into an $H^1_0([-1, 1])$ function by multiplying it with a smooth cut-off.
A space is Hausdorff iff the diagonal is closed
Your proof is not correct at the part "hence (X x X) - A = Union of all V(y)'s such that y doesn't belong to A". This question can be solved directly by using the fact that every point of $(X \times X) - A$ has a neighbourhood $U_1 \times U_2$ with $U_1, U_2$ open subsets of $X$. Try to use this :)
Let $\{B_t\}_{t \ge 0}$ a Brownian Motion, prove that for all $a \gt 0$: $P(B_t \ge a$ for some $t \ge 0 )=1$
Do not get confused between the $a$ in the title and the $a$ in the reflection principle. Take $a=0$ in the reflection principle. Note that $2P(B_t \geq b)=2P(X \geq b/\sqrt t) \to 1$ as $ t \to \infty$ (where $X$ is a standard normal variable). Hence $P(B_s \geq b \, \text{for some} \, s\leq t) \to 1$ as $t \to \infty$ which is what we want to prove.
How to determine if two planes have a point in common
Write the equation of the planes as $ax+by+cz=d$ where $a^2+b^2+c^2=1$, so that $d$ is the distance of the plane from the origin in $\mathbb R^3$. The only case when they don't intersect is when they are parallel but at different distances from the origin. Otherwise, they are either the same plane or their intersection is a line.
If $A_k\to \infty$ and $B_k=\sum\limits_{i=1}^k A_i$ then is the limit of $\frac{A_k}{B_k}$ always zero?
The limit could be pretty much anything between $0$ and $1$: Example I. $A_n=s^n$, $s>1$. Then $$ \frac{A_n}{\sum_{k=1}^n A_k}=\frac{(s-1)s^n}{s^{n+1}-1}\to \frac{s-1}{s}. $$ Example II. $A_n=n$, then $$ \frac{A_n}{\sum_{k=1}^n A_k}=\frac{2n}{n(n+1)}\to 0. $$ Example III. $A_n=n!$, then $$ \frac{A_n}{\sum_{k=1}^n A_k}\to 1. $$ Challenge. Construct a sequence $\{A_n\}_{n\in\mathbb N}$, for which we do not have convergence.
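These limits are easy to see numerically (a Python sketch; convergence in Example III is slow, at rate roughly $1/n$):

```python
import math

def ratio(A, N):
    """A_N / (A_1 + ... + A_N) for a sequence given as a function A."""
    return A(N) / sum(A(k) for k in range(1, N + 1))

# Example I with s = 2: the limit is (s - 1)/s = 1/2.
assert abs(ratio(lambda n: 2.0 ** n, 50) - 0.5) < 1e-10
# Example II: A_n = n, ratio = 2/(N + 1) -> 0.
assert ratio(lambda n: n, 1000) < 0.01
# Example III: A_n = n!, ratio ~ 1 - 1/N -> 1.
assert abs(ratio(math.factorial, 100) - 1) < 0.02
```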
Prove that $2AB$ is square
With help from the above comment we have: \begin{align} 2AB &= 2(1!\times2!\times\dots\times1002!)(1004!\times1005!\times\dots\times2006!) \\ &= 2\Big(\prod_{k=1}^{501}(2k-1)!\,(2k)!\Big)\Big(1004!\prod_{k=503}^{1003}(2k-1)!\,(2k)!\Big) \\ &= 2\Big(\prod_{k=1}^{501}\big((2k-1)!\big)^2\cdot2k\Big)\Big(1004!\prod_{k=503}^{1003}\big((2k-1)!\big)^2\cdot2k\Big) \\ &= 2\cdot2^{501}\cdot501!\cdot2^{501}\cdot\frac{1003!}{502!}\cdot1004!\cdot\Big(\prod_{k=1}^{501}(2k-1)!\prod_{k=503}^{1003}(2k-1)!\Big)^2. \end{align} Since $$501!\cdot\frac{1003!}{502!}\cdot1004!=\frac{1003!\cdot1004!}{502}=\frac{1004}{502}\,(1003!)^2=2\,(1003!)^2,$$ we get $$2AB=2^{1004}\,(1003!)^2\Big(\prod_{k=1}^{501}(2k-1)!\prod_{k=503}^{1003}(2k-1)!\Big)^2,$$ a perfect square.
Tough goniometric inequality (different period)
We have that $$\dfrac{1 - 2 \sin(x)}{2 \cos(x / 2) + \sqrt{3}} \leq 0$$ precisely when $1 - 2 \sin(x)$ and $2 \cos(x / 2) + \sqrt{3}$ are non-zero and have different signs, or when $1 - 2 \sin(x)=0$ while $2 \cos(x / 2) + \sqrt{3}\neq0$. Now for each of the following, determine the values of $x$ for which it is satisfied (if there are any): $1-2\sin(x)<0$ $1 - 2 \sin(x)=0$ $1 - 2 \sin(x)>0$ $2 \cos(x / 2) + \sqrt{3}<0$ $2 \cos(x / 2) + \sqrt{3}>0$ and from this, determine the values of $x$ for which the conditions are met.
Evaluating $\int_{0}^{1}\int_{0}^{1}\sqrt{x^{2}+y^{2}}dxdy$ using polar coordinates.
By symmetry we cut the square in half along the line $y=x$ and claim that the integral is equal to twice its value on the bottom triangle. The $r$ bounds go from the origin to the line $x=1$, which translates to $r =\sec\theta$. Then the integral becomes $$2 \int_0^{\frac{\pi}{4}} \int_0^{\sec\theta} r^2 dr d\theta = \frac{2}{3} \int_0^{\frac{\pi}{4}} \sec^3\theta d\theta$$ Which you can take from here with trig identities, or use the substitution $\tan\theta = \sinh(t)$: $$\frac{2}{3} \int_0^{\sinh^{-1}(1)} \cosh^2(t) \: dt = \frac{1}{3} \int_0^{\sinh^{-1}(1)} 1 + \cosh(2t)\:dt $$ $$ = \frac{t}{3} + \frac{1}{3}\sinh(t)\cosh(t)\Biggr|_0^{\sinh^{-1}(1)} = \frac{1}{3}\sinh^{-1}(1) + \frac{\sqrt{2}}{3}$$
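As a sanity check, a crude midpoint-rule quadrature over the unit square agrees with the closed form $\frac13\sinh^{-1}(1)+\frac{\sqrt2}3$ (a Python sketch; helper name is mine):

```python
import math

def midpoint_double(f, n=400):
    """Midpoint rule for the integral of f over the unit square, n x n cells."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

closed_form = math.asinh(1.0) / 3 + math.sqrt(2) / 3  # (1/3) sinh^{-1}(1) + sqrt(2)/3
numeric = midpoint_double(math.hypot)                  # integrand sqrt(x^2 + y^2)
assert abs(numeric - closed_form) < 1e-3
```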
Does the existence of a power of $2$ prime-representing function $\lfloor \alpha^{2^n} \rfloor$ imply Legendre's conjecture to be true?
The Legendre conjecture (for every $n$ there is a prime $p \in [n^2,(n+1)^2]$) is the same as: for every $a_0 \ge 3$ there exists $\alpha \in [a_0,a_0+1]$ such that $a_n = \lfloor \alpha^{2^n} \rfloor$ is prime for every $n \ge 1$. This is because for every $n$ and for every $k \in [a_n^2,(a_n+1)^2]$, we can refine $\alpha$ without changing $a_0, \ldots, a_n$ to obtain $$a_{n+1} = \lfloor \alpha^{2^{n+1}} \rfloor = k$$
how to show vector is an Imputation of the core?
An imputation $\mathbf{x}$ is in the core of a three-person game if it satisfies the following set of inequalities $$v(12) + v(13) +v (23) \le 2 v(123) \qquad v(1) + v(23) \le v(123) \qquad v(2) + v(13) \le v(123) \qquad v(3) + v(12) \le v(123).$$ Now, the following imputation is given $$\mathbf{x}=(x_{1},x_{2},x_{3})=(v(12)+v(13)-v(123),v(123)-v(13),v(123)-v(12)). $$ We have to check that these inequalities are satisfied by $\mathbf{x}$ to conclude that this imputation belongs to the core. Under the mentioned assumptions, and since the game is super-additive, we obtain $$ v(2) \le x_{2} = v(123) - v(13) \qquad v(3) \le x_{3} = v(123) - v(12). $$ Moreover, it holds that $$ \sum_{i \in N} x_{i} = v(12) + v(13) - v(123) + v(123) -v(13)+v(123)-v(12) = v(123).$$ Similarly, we get $$x(12) = x_{1}+x_{2} = v(12) \qquad x(13) = x_{1}+x_{3} = v(13) \qquad x(23) = x_{2}+x_{3} = v(23). $$ This shows that the imputation $\mathbf{x}$ is in the core.
Discrete Math dealing with Partition of Ordered Pairs.
Which elements are related to which elements? That is, $a R b$ if and only if there exists a cell (subset) $X$ in the partition such that $a\in X$ and $b \in X$. You will see this is indeed an equivalence relation. Looking at the cell of the partition given by $\{d, e\}$, we know that $(d, d) \in R, (e, e) \in R, (d, e) \in R, (e, d)\in R$. But we also know, for example, that $(a, d) \notin R$. I'll let you complete the listing of the ordered pairs in this equivalence relation $R$.
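A small Python sketch of this construction (the cell $\{a,b,c\}$ is a hypothetical completion of the partition, chosen only for illustration, since the question shows the cell $\{d,e\}$ and that $(a,d)\notin R$):

```python
from itertools import product

def relation_from_partition(cells):
    """All ordered pairs (a, b) with a and b in the same cell."""
    return {(a, b) for cell in cells for a, b in product(cell, repeat=2)}

# Hypothetical partition of {a, b, c, d, e} with the cell {d, e} as given.
R = relation_from_partition([{"a", "b", "c"}, {"d", "e"}])
assert {("d", "d"), ("e", "e"), ("d", "e"), ("e", "d")} <= R
assert ("a", "d") not in R

# R is reflexive, symmetric and transitive, i.e. an equivalence relation.
elems = {"a", "b", "c", "d", "e"}
assert all((x, x) in R for x in elems)
assert all((y, x) in R for (x, y) in R)
assert all((x, w) in R for (x, y) in R for (z, w) in R if y == z)
```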
Restriction of a div-free vector field to a plane?
Can any vector field on the plane be viewed as the restriction of some div-free field on $\Bbb R^3$? Yep. Suppose we are given a $F = [f_x,f_y] : \Bbb R^2 \to \Bbb R^2$. Define an $\tilde F = [\tilde f_x,\tilde f_y,\tilde f_z]: \Bbb R^3 \to \Bbb R^3$ so that $F(x,y) = [\tilde f_x(x,y,0), \tilde f_y (x,y,0)]$ by setting: $\tilde f_x(x,y,z) = f_x(x,y)$ $\tilde f_y(x,y,z) = f_y(x,y)$, and $$ \tilde f_z(x,y,z) = -z\left( \frac{\partial f_x}{\partial x}(x,y) + \frac{\partial f_y}{\partial y}(x,y) \right) $$ We can quickly verify that $\nabla \cdot \tilde F = 0$.
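A quick numerical check of the construction on a sample field (a Python sketch; the field $F=[xy,\sin x]$ and the helper names are my own choices):

```python
import math

def make_div_free(fx, fy, dfx_dx, dfy_dy):
    """Extend F = [fx, fy] on the plane to a divergence-free field on R^3."""
    def F3(x, y, z):
        return (fx(x, y), fy(x, y), -z * (dfx_dx(x, y) + dfy_dy(x, y)))
    return F3

# Sample field F = [x*y, sin(x)] with hand-computed partial derivatives.
F3 = make_div_free(lambda x, y: x * y,
                   lambda x, y: math.sin(x),
                   lambda x, y: y,      # d(x*y)/dx
                   lambda x, y: 0.0)    # d(sin x)/dy

# Central-difference divergence at a sample point should vanish.
h, (x, y, z) = 1e-5, (0.7, -1.2, 2.5)
div = ((F3(x + h, y, z)[0] - F3(x - h, y, z)[0]) +
       (F3(x, y + h, z)[1] - F3(x, y - h, z)[1]) +
       (F3(x, y, z + h)[2] - F3(x, y, z - h)[2])) / (2 * h)
assert abs(div) < 1e-8
```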
${\int\int\int}_B dxdydz$ where $B$ is the region delimited by $x²+y²+z² = 4$ and $x²+y²=3z$
This is the intersection between the paraboloid $z = \frac{x^2+y^2}{3}$ and the sphere $x^2+y^2+z^2=4$. Their intersection forms a circle with radius $\sqrt{3}$. Therefore we just need to find the height of the region by integrating with respect to $z$, from the paraboloid to the sphere: $$\int_{\frac{x^2+y^2}{3}}^{\sqrt{4-x^2-y^2}}dz = \sqrt{4-x^2-y^2}-\frac{x^2+y^2}{3}$$ then we need to integrate this height all over the circle with radius $\sqrt{3}$, which is our region $B_1$: $${\int\int}_{B_1}\left(\sqrt{4-x^2-y^2}-\frac{x^2+y^2}{3}\right)dx\,dy$$ To make this easier, I will integrate in polar coordinates via the substitution $x = p\cos(t)$, $y= p\sin(t)$, where $t$ goes from $0$ to $2\pi$ because our circle is centered at the origin, and $p$ goes from $0$ to $\sqrt{3}$. The Jacobian determinant of the transformation is $p$, so our integral becomes: $$\int_0^{2\pi}\int_0^{\sqrt{3}}\left(\sqrt{4-p^2}-\frac{1}{3}p^2\right)p \ dp \ dt$$ $$=\int_0^{2\pi}\int_0^{\sqrt{3}}\left(\sqrt{4-p^2}\ p-\frac{1}{3}p^3\right) \ dp \ dt $$ $$=-\frac{1}{2}\int_0^{2\pi}\int_0^{\sqrt{3}}\sqrt{4-p^2}\ (-2p) \ dp \ dt - \int_0^{2\pi}\int_0^{\sqrt{3}}\frac{1}{3}p^3 \ dp \ dt $$ $$=-\frac{1}{2}\int_0^{2\pi}\int_{4}^{1}\sqrt{u} \ du \ dt - \int_0^{2\pi}\frac{(\sqrt{3})^4}{12}\,dt $$ $$=\frac{1}{2}\int_0^{2\pi}\int_{1}^{4}\sqrt{u} \ du \ dt - \frac{3}{4}\int_0^{2\pi}dt $$ $$=\frac{1}{2}\int_0^{2\pi}\frac{2}{3}\Big[u^{3/2}\Big]_1^4 \ dt - \frac{3}{4}\cdot 2\pi $$ $$=\frac{1}{3}\int_0^{2\pi}\left(4^{3/2}-1^{3/2}\right) dt - \frac{3}{2}\pi $$ $$=\frac{7}{3}\cdot 2\pi - \frac{3}{4}\cdot 2\pi = \frac{19}{12}\cdot 2\pi = \frac{19\pi}{6}$$
Permutation of boxes with two colors
This is A177790 “Number of paths from (0,0) to (n,n) avoiding 3 or more consecutive east steps and 3 or more consecutive north steps” on OEIS. The closest thing to a formula given there is $$ a(n) = \sum_{i=0}^{\lfloor n/2\rfloor} 2\binom{n-i}{i}^2 + \binom{n-i}{i} \binom{n-i-1}{i+1} + \binom{n-i}{i}\binom{n-i+1}{i-1}. $$
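One can cross-check the OEIS formula against a direct dynamic-programming count of the paths (a Python sketch; helper names are mine):

```python
from math import comb
from functools import lru_cache

def paths(n):
    """Paths (0,0) -> (n,n) with steps E/N and no 3 consecutive equal steps."""
    @lru_cache(maxsize=None)
    def count(e, t, last, run):
        # e east and t north steps remaining; last step taken and its run length
        if e == 0 and t == 0:
            return 1
        total = 0
        if e > 0 and not (last == "E" and run == 2):
            total += count(e - 1, t, "E", run + 1 if last == "E" else 1)
        if t > 0 and not (last == "N" and run == 2):
            total += count(e, t - 1, "N", run + 1 if last == "N" else 1)
        return total
    return count(n, n, "", 0)

def C(a, b):
    # binomial coefficient that is 0 outside the usual range
    return comb(a, b) if 0 <= b <= a else 0

def formula(n):
    return sum(2 * C(n - i, i) ** 2
               + C(n - i, i) * C(n - i - 1, i + 1)
               + C(n - i, i) * C(n - i + 1, i - 1)
               for i in range(n // 2 + 1))

assert [paths(n) for n in range(5)] == [1, 2, 6, 14, 34]
assert all(paths(n) == formula(n) for n in range(1, 9))
```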
If $f$ is continuous with irrational period $\xi$ and $\int_0^\xi f(x)\,dx = 0$, is it possible that $f(1)+f(2)+f(3)+\cdots=\infty$?
It depends. Since $\xi $ is irrational, according to Kronecker's Approximation Theorem, $\left\{m+\xi n \mid m,n \in \mathbb{Z}\right\}$ is dense in $\mathbb{R}$. This, together with the fact that $f(x)$ is continuous and $f(x)=f(x+k\xi), \forall k \in\mathbb{Z}$, makes $f\left(m+\xi n\right)=f(m) \Rightarrow \{f(m) \mid m \in \mathbb{Z}\}$ also dense in $f$'s range. Now, one counterexample to your statement (since you already mentioned $\sin{x}$): $f(x)=\cos{x}$. From $$\left|\sum\limits_{k=1}^{n} \cos{k}\right|= \left|\frac{\sin\left(n+\frac{1}{2}\right)-\sin{\frac{1}{2}}}{2\sin{\frac{1}{2}}}\right| \leq \frac{1}{2}\left|\frac{1}{\sin{\frac{1}{2}}}\right|+\frac{1}{2}$$ we conclude that the partial sums of $\sum\limits_{k=1}^{\infty} \cos{k}$ stay bounded and never reach $\infty$. However, if we assume $f(x)\geq0$ and $f(x) \not\equiv 0$, then $f(x)$ (which is continuous) attains its maximum on $\left[0,\xi\right]$, $f(x_M)=M$. And because it is periodic, $f(x_M)=f(x_M+n\xi)=M$. Using the fact that $\left\{m+\xi n \mid m,n \in \mathbb{Z}\right\}$ is dense (it can be shown that $m$ can be a positive integer, since the theorem mentions positive $k$ in the link I provided), we have that $\exists m_M\in \mathbb{Z^+}$: $$\left|x_M-(m_M+\xi n_M)\right|<\delta \Rightarrow \left|f(x_M)-f(m_M+\xi n_M)\right| < \varepsilon \Rightarrow \\ f(m_M+\xi n_M)=f(m_M)>M-\varepsilon > 0$$ and this happens for all $x_M+n\xi$, thus we have an entire sequence $$f(m_{M,n}+\xi n_{M,n})=f(m_{M,n})>M-\varepsilon > 0$$ Thus $$\sum\limits_{k=1}f(k) > \sum\limits_{n=1}f(m_{M,n}) \geq \lim\limits_{n\rightarrow\infty} n(M-\varepsilon)=\infty$$ NOTE: While typing the last section I realised it doesn't consider $\int_0^\xi f(x)\,dx = 0$, but it's so technical it makes me sad to delete it.
Different approaches to N balls and m boxes problem
Note: The formulation of this question has to be considered carefully in order to find the correct approach. The situation describes $N$ indistinguishable balls which are to be distributed in $m$ distinguishable boxes. Looking at a simple example as suggested by @MickA is often a good starting point. Let's have a look at an example with $N=2$ indistinguishable balls and $m=2$ distinguishable boxes. So, after distributing the $2$ balls into the $2$ boxes, we can see one of three possible outcomes: \begin{array}{rcl} \text{Box $1$} &|& \text{Box $2$} \\ \hline \bullet\;\bullet &|& \\ \bullet &|& \bullet \\ &|&\bullet\; \bullet \end{array} We observe: if the question had asked for something like the number of possible outcomes or the number of different configurations, the ansatz would be, according to the first approach, \begin{align*} \binom{N+m-1}{N}\tag{1} \end{align*} which respects the fact that the balls are indistinguishable and the boxes are distinguishable. But this is not the question! The crucial text is: ... given that the balls have equal chances of arriving at any box. Therefore we have to count the number of possible arrivals \begin{array}{rclcc} \text{Box $1$} &|& \text{Box $2$} &|& \text{Nr of possibilities}\\ &|& &|& \text{to reach this configuration}\\ \hline \bullet\;\bullet &|& &|& 1\\ \bullet &|& \bullet &|& 2\\ &|&\bullet\; \bullet&|& 1 \end{array} We see, counting the number of possibilities or equivalently calculating the corresponding probabilities \begin{array}{rclcc} \text{Box $1$} &|& \text{Box $2$} &|& \text{probability}\\ &|& &|& \text{to reach this configuration}\\ \hline \bullet\;\bullet &|& &|& 0.25\\ \bullet &|& \bullet&|& 0.50\\ &|&\bullet\; \bullet&|& 0.25 \end{array} is a different situation than (1), leading to the second approach $$m^N$$ Result: Therefore, we finally get the probability: $$\left(\frac{m-1}{m}\right)^N$$
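The two approaches can also be compared computationally; a small Python sketch enumerating all $m^N$ equally likely arrival vectors confirms the result $\left(\frac{m-1}{m}\right)^N$ for the probability that a fixed box stays empty:

```python
from itertools import product
from fractions import Fraction

def p_box_empty(N, m):
    """Exact probability that box 0 gets no ball, enumerating all m^N
    equally likely arrival vectors (ball i -> box o[i])."""
    outcomes = list(product(range(m), repeat=N))
    empty = sum(1 for o in outcomes if 0 not in o)
    return Fraction(empty, len(outcomes))

assert p_box_empty(2, 2) == Fraction(1, 4)
assert all(p_box_empty(N, m) == Fraction(m - 1, m) ** N
           for N in range(1, 6) for m in range(2, 5))
```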
Expressing logarithms as ratios of natural logarithms
Consider $y=\log_b x$. Then, by definition, $b^y=x$ and so $y \ln b = \ln x$. Thus, $$\log_b x=\frac{\ln x}{\ln b}$$ A more sophisticated argument is the following. Consider a continuous function $L:\mathbb R^+ \to \mathbb R$ such that $L(xy)=L(x)+L(y)$. Then $F(x)=L(e^x)$ satisfies $F(x+y)=F(x)+F(y)$ and is thus a continuous automorphism of the additive group $\mathbb R$. It is easy to see that $F$ must be a scaling: $F(x)=ax$ for some $a$. Of course, $a=F(1)=L(e)$. When $L(x)=\log_b x$, we have $a=\log_b e=\dfrac1{\ln b}$. Since $L(x)=F(\ln x)$, we have, as before: $$\log_b x=\frac{\ln x}{\ln b}$$
Why is the residue field of $\mathbb{Q}_p$ isomorphic to $\mathbb{F}_p$?
We have the following exact sequence $$ 0\rightarrow \mathbb{Z}_p\rightarrow \mathbb{Z}_p\rightarrow \mathbb{Z}/p^n\mathbb{Z}\rightarrow 0, $$ where the first map is multiplication by $p^n$ and the second sends $x=(x_i)\in \mathbb{Z}_p=\lim_{\leftarrow}\mathbb{Z}/p^n\mathbb{Z}$ to its $n$th term. Thus $ \mathbb{Z}_p/p^n \mathbb{Z}_p\cong \mathbb{Z}/p^n\mathbb{Z}$, so take $n=1$.
Let $x>0$ and $c>0$. Does $x^{1/c}$ always exist?
The answer is yes - $x^y$ is continuous in both $x > 0$ and $y > 0$. If $y \leq 0$ things are still good, but if $x \leq 0$ then it gets ugly. As for a sketch of a proof, if you are ok with the fact that the inverse functions $\exp(x)$ and $\ln(x)$ are continuous for positive values of $x$, then it follows pretty simply: $x^\frac{1}{c} = \exp(\ln(x^\frac{1}{c})) = \exp(\frac{1}{c}\ln(x))$ So you're taking a continuous function of $x$, multiplying it by a constant, then taking a continuous function of the result. And, nicely enough, you also get: $(x^\frac{1}{c})^c = (\exp(\frac{1}{c}\ln(x)))^c = \exp(c \times \frac{1}{c} \ln(x)) = \exp(\ln(x)) = x$ Where to get that result you just have to accept that taking the exponential function to a power is the same as multiplying the argument of the exponential function by that value, which follows from the definition of the function.
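A Python illustration of the construction $x^{1/c}=\exp\big(\frac1c\ln x\big)$ and the identity $(x^{1/c})^c=x$ (helper name is mine):

```python
import math

def root(x, c):
    """x^(1/c) for x > 0 and c > 0, defined via exp((1/c) * ln x)."""
    return math.exp(math.log(x) / c)

# (x^(1/c))^c recovers x, as in the argument above.
for x in (0.5, 2.0, 9.0):
    for c in (0.5, 2.0, 3.0):
        assert abs(root(x, c) ** c - x) < 1e-9 * x
```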
Suppose $x$ and $y$ are integers and $x^2$ is a multiple of $y^2$. Is $x$ necessarily a multiple of $y$?
Hint Divide the equation $x^2=ky^2$ by the square of the gcd of $x,y$. You get $X^2=kY^2$ where $X,Y$ are coprime. Then prove that this equation is only possible if $k$ is a square. You can finally conclude that $x^2$ is a multiple of $y^2$ if and only if $x$ is a multiple of $y$.
Calculate a multidimensional integral
First of all, by symmetry this integral must be $0$. But if you want to use the divergence theorem, you get: $$\int_{\partial B_1} x_1 \, \mathrm{d}\sigma = \int_{\partial B_1} e_1 \cdot n \, \mathrm{d}\sigma = \int_{B_1} \nabla \cdot e_1 \, \mathrm{d}V = 0,$$ where $e_1$ is the first unit vector and $n$ is the outward normal vector field, which is just $x$ in this case.
Equation that always has a single solution, obtainable only by numerical methods.
Here is another idea. Use the $\Gamma$ function to pose the problem instead: $$f\left(t;\theta\right)=\Gamma\left(\theta t+2\right)-1.5$$ $\theta$ is the parameter you requested. A sufficient condition for a root to exist on $t\in\left(0,1\right)$ is $\theta\geq 1$, which gives you a very large range of parameters to use. I doubt they will find a direct way to compute the inverse of $\Gamma$. You can allow them to use one of $\Gamma$'s well-known approximations. One of the approximations featured on that page (at the time of writing) is: $$\Gamma\left(x\right) \approx x^{x-\frac{1}{2}}e^{-x}\sqrt{2\pi}\left(1+\frac{1}{12x}+\frac{1}{288x^2}-\frac{139}{51840x^3}-\frac{571}{2488320x^4}\right)$$ For $\theta=1$, the answer is approximately $0.6628$.
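A minimal numerical sketch of the suggested exercise, using Python's built-in `math.gamma` and plain bisection (the bracket $[0,1]$ and $\theta=1$ follow the answer above; everything else is an illustrative choice):

```python
import math

def f(t, theta=1.0):
    # f(t; theta) = Gamma(theta*t + 2) - 1.5, as defined above
    return math.gamma(theta * t + 2) - 1.5

# Bisection on [0, 1]: f(0) = Gamma(2) - 1.5 = -0.5 < 0 and
# f(1) = Gamma(3) - 1.5 = 0.5 > 0, and Gamma is increasing on [2, 3].
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)   # approximately 0.66
```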
Künneth formula in topology, show isomorphism
To prove that the natural linear map $$\bigoplus_{i+j = n}(\wedge^i(V) \otimes \wedge^j(W)) \to \wedge^n(V \oplus W)$$is an isomorphism, we first observe that both sides have the same dimension. Indeed, if $\dim V = d \ge 0$ and $\dim W = d' \ge 0$ then the right side has dimension ${d + d'}\choose{n}$ for all $n \ge 0$ $($this is understood to vanish if $n > d + d')$ whereas the left side has dimension$$\sum_{i+j = n} (\dim \wedge^i(V)) \cdot (\dim \wedge^j(W)) = \sum_{i+j = n} {d \choose i} \cdot {{d'} \choose j} = \sum_{i=0}^n {d \choose i} \cdot {d' \choose {n-i}}$$$($where again it is understood that ${d \choose i} = 0$ if $i > d$ and ${d' \choose j} =0$ if $j > d')$. The equality of this sum with ${{d + d'} \choose n}$ is obvious by combinatorics: to count the number of ways to choose $n$ items from an ordered set of $d + d'$ elements, we add up $($over $0 \le i \le n)$ the number of ways to pick $i$ elements from the first $d$ members of the list and $n-i$ elements from the last $d'$ members of the list. These two choices are independent, so the count for each $i$ is the product of the counts for the $i$-element choice and the $(n-i)$-element choice. Having established equality of dimensions, to prove the isomorphism property it suffices to check surjectivity. The target is spanned by elementary wedge products of the form $$(v_1, w_1) \wedge \dots \wedge (v_n, w_n),$$and by writing $(v, w) = (v, 0) + (0, w)$ and expanding out by bilinearity $($and using the skew-symmetry to shuffle terms in a wedge product at the expense of a sign$)$ we see that the target is spanned by elementary wedge products$$(v_1, 0) \wedge \dots \wedge (v_i, 0) \wedge (0, w_1) \wedge \dots \wedge (0, w_j)$$over all possible $i, j \ge 0$ with $i + j = n$. But this final product is the image of the tensor$$(v_1 \wedge \dots \wedge v_i) \otimes (w_1 \wedge \dots \wedge w_j) \in \wedge^i(V) \otimes \wedge^j(W),$$so we are done.
$i \notin \mathbb{Q}[\sqrt[4]{2}]$ without using topological properties of $\mathbb{R}$
One way would be to say that if $\Bbb Q(i) \subset \Bbb Q(\sqrt[4]{2})$, then $\Bbb Q(\sqrt[4]{2})/\Bbb Q(i)$ would necessarily be a quadratic extension. This would mean that $x^4-2$ would factor over $\Bbb Q(i)$. Any factorization $x^4-2 = (x^2+ax+b)(x^2+cx+d)$ must have $a = -c$ by consideration of the cubic term. If $a$ is nonzero we must have $b = d$ by consideration of the linear term, and if $a$ is zero we must have $b = -d$ by consideration of the quadratic term. This implies that either $2$ or $-2$ is a square in $\Bbb Q(i)$, which is easily seen to be false.
Why do we use Complex numbers and not other systems?
If an algebraically closed field $F$ contains (a copy of) the real numbers, then $F$ also contains (a copy of) the complex numbers. Indeed, if we have $\mathbb{R}\subset F$ and $F$ is algebraically closed, then $F$ contains a root $j$ of the polynomial $X^2+1\in \mathbb{R}[X]$. Therefore it contains $\mathbb{R}[j]$, but this field is isomorphic to $\mathbb{C}$: just define $a+bi\in\mathbb{C}\to a+bj\in \mathbb{R}[j]$ and check it's an isomorphism. The key point is that $\mathbb{C}$ is algebraically closed and algebraic over $\mathbb{R}$.
limit of sequence of functions define inductively using integral
$f^{2}(x)=\int_0^{x} f(t)dt$ gives $2f(x)f'(x)=f(x)$, so $f'(x)=\frac 1 2$ when $f(x) \neq 0$. Hence, if $f(x) >0$ for all $x>0$, then $f(x)=\frac x 2 +c$ on $(0,\infty)$, where $c$ is a constant. Also $f(0)^2=\int_0^0 f(t)dt=0$ forces $f(0)=0$, so by continuity $c=0$.
How to find probability density functions?
You have just discovered that the cumulative distribution function of $f(X)$, when $f$ is an invertible, monotonically increasing function, can be computed as: $$\mathbb{P}(f(X)<y)=\mathbb{P}(X<f^{-1}(y)) \; .$$
Rational morphism vs rational point over function field
To go from the rational map to the rational point, try answering these two questions: what (regular) maps are there from $K(X)$ to anything obvious; and if you have a rational map from $X$ to $Y$, what regular maps to $Y$ is it related to? To go from the rational point to the rational map, first try doing it when $X$ and $Y$ are both affine, which will show you why finite type is necessary. Then answer these questions: can $X$ be assumed to be affine; and can $Y$ be assumed to be affine? Some further intuition might be helpful, though I don't know if it will actually aid in solving the problem. A regular map $X \to Y$ is determined by its graph, an $X$-valued point of $X \times_k Y$. This problem is asking you to show that a rational map is also determined by its "graph" in $K(X) \times_k Y$. Figuring out what that means is, in some sense, the proof.
Legendre transform of a norm
Using the fact that $\|x\| \equiv \underset{y \in \mathbb{R}^n,\|y\|_* \le 1}{\sup }x^Ty$, you immediately get \begin{equation} \begin{split} \underset{x \in \mathbb{R}^n}{\text{sup }}x^Td - \|x\| &= \underset{x \in \mathbb{R}^n}{\sup}x^Td - \underset{y \in \mathbb{R}^n,\|y\|_* \le 1}{\sup }x^Ty = \underset{y \in \mathbb{R}^n, \|y\|_* \le 1}{\inf }\underset{x\in \mathbb{R}^n}{\sup}x^T(d - y) \\ &= \underset{y \in \mathbb{R}^n, \|y\|_* \le 1}{\inf }\begin{cases}0,&\mbox { if }y = d,\\+\infty, &\mbox{ otherwise}\end{cases}\\ &= \begin{cases}0,&\mbox { if }\|d\|_* \le 1,\\+\infty, &\mbox{ otherwise,}\end{cases} \end{split} \end{equation} where the second equality follows from Sion's minimax theorem (as an easy exercise, the reader should check that the hypotheses of the Theorem are satisfied).
(Hard Question) Calculate Number Of Vectors
The number of $k$-vectors where each element is an integer from $1$ to $n$ – i.e. the maximum is at most $n$ – is $n^k$. Thus the number of the same but with maximum exactly $n$ is $n^k-(n-1)^k$.
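A brute-force check of the counting argument, for the illustrative values $n=3$, $k=4$:

```python
from itertools import product

n, k = 3, 4
# Count k-vectors with entries in {1,...,n} whose maximum is exactly n
exact = sum(1 for v in product(range(1, n + 1), repeat=k) if max(v) == n)
formula = n ** k - (n - 1) ** k   # 81 - 16 = 65
```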
Finding polynomial p
Polynomials of degree $0$ and $1$ with rational coefficients will work. But polynomials of higher degree will not. Namely, suppose $p$ is a prime that does not divide the numerator or denominator of any of the coefficients of your polynomial $P(x)$. Then there is no rational $x$ such that $P(x)$ has denominator $p$: if $x$ does not have denominator divisible by $p$, $P(x)$ does not either, while if $x$ has denominator divisible by $p$, $P(x)$ has denominator divisible by $p^d$ where $d$ is the degree of $P$.
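An illustrative search (not a proof) for $P(x)=x^2+1$ and the prime $p=5$, which divides neither the numerator nor the denominator of any coefficient; by the argument above, no rational $x$ should give $P(x)$ denominator exactly $5$:

```python
from fractions import Fraction

def P(x):
    # P(x) = x^2 + 1; the prime 5 divides no coefficient
    return x * x + 1

hits = []
for b in range(1, 60):          # denominators to try
    for a in range(-60, 61):    # numerators to try
        y = P(Fraction(a, b))   # Fraction reduces to lowest terms
        if y.denominator == 5:
            hits.append((a, b))
```

Indeed, for coprime $a,b$ the value $P(a/b)=(a^2+b^2)/b^2$ is already in lowest terms, so its denominator is a perfect square and can never be $5$.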
Roots of $x^6+tx^3+t$ in $\mathbb{F}_3(t)[x]$
Hint: Substitute $$x^3=z$$ then you will get $$z^2+tz+t$$
Prove that 3 sleep at a time
Let $A_1,A_2,B_1,B_2,\ldots,E_1,E_2$, be the vertices of a graph $G$, representing the $10$ naps. Connect an edge between any two naps if they overlapped. I claim that a cycle in this graph represents a time when $3$ or more mathematicians slept simultaneously. Suppose $v_1,\ldots,v_n$ is a cycle in $G$, $n\ge 3$. If any nap represented by $v_i$ in this cycle began and ended during either nap $v_{i-1}$ or $v_{i+1}$, then it is clear that naps $v_{i-1}$, $v_i$, and $v_{i+1}$ all overlapped. If no nap was contained in any other nap, then without loss of generality we may assume that $v_1$ is the nap that began first in the cycle $v_1,\dots,v_n$. By our condition, $v_2$ must end after $v_1$ ends. Now since $v_n$ must overlap with nap $v_1$ by the rules of our graph, it follows that naps $v_1,v_2$, and $v_n$ all overlap at the end of nap $v_1$. For contradiction, assume that there is no time when $3$ or more mathematicians are simultaneously asleep. Then our graph can have no cycles, and is thus a forest. Therefore, it has at most $10-1=9$ edges, which is not enough to create the $\binom{5}{2}=10$ pairs of nappers.
The signum function expressed using a single formula
Using only the arithmetic operations, the identity function and the absolute value, this does not seem possible, because the $\text{sgn}$ function is discontinuous. As far as I see, a discontinuity can only appear with a division by zero, which results in an undefined value in at least one point.
Equivalent conditions of a Galois extension (Exercise VI.4 in Lang's Algebra)
You've shown $(1 \Longrightarrow 2)$ so I will only discuss the rest. Hints: For the implication $(2 \Longrightarrow 3)$ show the product $\sqrt{\alpha}\sqrt{\alpha'}\in E$ is actually in $F$. Consider how the non-trivial automorphism of $F/k$ acts on $\sqrt{\alpha}\sqrt{\alpha'}$, and show either $\sqrt{\alpha}\sqrt{\alpha'}\in k$ or $\sqrt{c}\sqrt{\alpha}\sqrt{\alpha'}\in k$. For the implication $(2 \Longrightarrow 1)$ investigate the possibilities for $\sigma(E)$, for $\sigma \in Gal(\overline{E}/k)$, where $\overline{E}$ is some fixed algebraic closure of $E$. What are the possibilities for $\sigma(\sqrt{\alpha})$? Recall that $E/k$ is Galois if and only if $\sigma(E)=E$ for all such $\sigma$. For the implication $(3 \Longrightarrow 2)$ notice that $F(\sqrt{\alpha})=F(\sqrt{\alpha'})$ if $\sqrt{\alpha} \sqrt{\alpha'} \in F$, or equivalently, if $\alpha \alpha' \in F^2$. When these conditions are satisfied, it's possible to give an automorphism of $E$ by specifying how it acts on $\sqrt{c}$ and on $\sqrt{\alpha}$. One can show $\sigma$ must have order $2$ if it fixes $\sqrt{\alpha}\sqrt{\alpha'}$. Solutions: $(2 \Longrightarrow 3)$: The non-trivial automorphism of $E/F=F(\sqrt{\alpha})/F$ sends $\sqrt{\alpha}$ to $-\sqrt{\alpha}$. If $E=F(\sqrt{\alpha'})$, it also sends $\sqrt{\alpha'}$ to $-\sqrt{\alpha'}$. Therefore it fixes $\beta = \sqrt{\alpha}\sqrt{\alpha'}$ and so $\beta \in F$. The non-trivial automorphism of $F/k$ sends $\sqrt{c}$ to $-\sqrt{c}$. Thus it sends $a+b\sqrt{c}$ to $a-b\sqrt{c}$, so it fixes $\beta^2=\alpha \alpha'$, and therefore it either fixes $\beta$ or sends it to $-\beta$. If it fixes $\beta$, $\alpha \alpha'=\beta^2$ is a square in $k$. Otherwise it sends $\beta$ to $-\beta$, so it fixes $\sqrt{c}\beta$, and in that case $c\alpha \alpha'=(\sqrt{c} \beta)^2$ is a square in $k$. $(2 \Longrightarrow 1)$: Suppose $\sigma$ is an automorphism of some algebraic closure $\overline{E}$ of $E$ that fixes $k$.
We have $\sigma(\sqrt{c})=\pm \sqrt{c}$. Either $\sigma(\sqrt{c})=\sqrt{c}$, in which case $\sigma(\sqrt{\alpha})=\pm \sqrt{\alpha}$, or $\sigma(\sqrt{c})=-\sqrt{c}$, in which case $\sigma(\sqrt{\alpha})=\pm \sqrt{\alpha'}$. Thus either $\sigma(F(\sqrt{\alpha}))=F(\sqrt{\alpha})$ or $\sigma(F(\sqrt{\alpha}))=F(\sqrt{\alpha'})$. Since the latter case does occur for some $\sigma$, it follows that $F(\sqrt{\alpha})=F(\sqrt{\alpha'})$ if and only if $\sigma(F(\sqrt{\alpha}))=F(\sqrt{\alpha})$ for every $\sigma$, i.e. if and only if $E$ is Galois over $k$. $(3 \Longrightarrow 2)$: If either $\alpha \alpha'$ or $c \alpha \alpha'$ belongs to $k^2$, then $\alpha \alpha' \in F^2$, so that $\beta=\sqrt{\alpha}\sqrt{\alpha'} \in F$. This implies $\sqrt{\alpha'}=\beta/\sqrt{\alpha} \in E$. Since $\sqrt{\alpha'}\in E$ has degree $2$ over $F$, $E=F(\sqrt{\alpha'})$. Suppose these conditions are satisfied and $Gal(E/k)$ is cyclic of order $4$, generated by $\sigma$. Note that $\sigma$ is determined by its action on $\sqrt{c}$ and $\sqrt{\alpha}$. Now $\sigma(\sqrt{c})=-\sqrt{c}$ necessarily, since otherwise all powers of $\sigma$ would act trivially on $F$; so $\sigma(\sqrt{\alpha})=\pm \sqrt{\alpha'}$. If $\alpha \alpha' \in k^2$, then $\sigma$ fixes $\sqrt{\alpha}\sqrt{\alpha'}$. In that case it would follow that $\sigma^2$ fixes $\sqrt{\alpha}$ and $\sigma$ would have order $2$, resulting in a contradiction. Therefore $c\alpha \alpha' \in k^2$ by the third property above. Conversely, if $c \alpha \alpha' \in k^2$, then we have an automorphism of $E/k$ of order $4$, determined by $\sigma(\sqrt{c})=-\sqrt{c}$ and $\sigma(\sqrt{\alpha})=-\sqrt{\alpha'}$.
Exercise 1.2.4 (From Grimmett and Stirzaker)
$\varnothing \in \mathcal{G}$ because $\varnothing =\varnothing \cap B$ and $\varnothing \in\mathcal{F}$. Suppose $A_i\in\mathcal{F}, \:i\in\Bbb{N}$. Then $$ \bigcup_{i=1}^{\infty}(A_i\cap B)=\left(\bigcup_{i=1}^{\infty}A_i\right)\cap B\in\mathcal{G} $$ because $\bigcup_{i=1}^{\infty}A_i\in\mathcal{F}$, since $\mathcal{F}$ is a $\sigma$-field. Since $\mathcal{F}$ is a $\sigma$-field, $A\in \mathcal{F}$ means $A^c\in \mathcal{F}$, and the complement of $A\cap B$ relative to $B$ is $B\setminus(A\cap B)=A^c\cap B\in \mathcal{G}$. So $\mathcal{G}$ is a $\sigma$-field of subsets of $B$.
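The same closure properties can be checked exhaustively on a tiny finite example (a hypothetical sketch: $\Omega=\{1,2,3,4\}$, $\mathcal F$ the power set, $B=\{1,2\}$):

```python
from itertools import combinations

omega = frozenset({1, 2, 3, 4})
B = frozenset({1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F = set(powerset(omega))   # the sigma-field of all subsets of omega
G = {A & B for A in F}     # the trace sigma-field {A intersect B : A in F}

ok_empty = frozenset() in G
ok_compl = all((B - A) in G for A in G)                 # complement within B
ok_union = all((A1 | A2) in G for A1 in G for A2 in G)  # closure under unions
```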
Positive recurrent random walk on Z
Suppose it was possible for the chain given by a sum of iid $X_k$ to be positive recurrent. This is true in the trivial case of $X_k$ being constant (and zero) with probability 1, so assume $X_k$ takes on at least 2 values with positive probability each. 1.) It should be immediate that $E[X_k]=0$ -- e.g. if instead the walk had negative drift, then for small $t\gt 0$ $P\big(S_n \gt 0\big) = P\big(t\cdot S_n \gt 0\big)=P\big(e^{t\cdot S_n} \gt 1\big)\leq E[e^{t\cdot S_n}] = E[e^{t\cdot X_1}]^n = M(t)^n= p^n$ for some $p\in(0,1)$, noting e.g. that by differentiating $M'(0) = \mu \lt 0$ and $M(0)= 1$, so for $t$ small enough $0\lt M(t)\lt 1$. Hence the expected number of visits to the positive integers is finite (geometric series), and thus the steady state probability of visiting positive integers is zero, which implies $X_k$ takes only non-positive values and the chain is transient -- a contradiction. (For the case of positive drift, rerun the argument on $S_n^* := -S_n$.) Alternative justifications: you can also do this with the SLLN, or adapt the CLT argument below to the case of non-zero drift. 2.) Since the chain is positive recurrent, we have, for any $m\gt 0$, $P\big(S_n \gt m\big) \rightarrow c$ for some $c\in \big(0,1\big)$, and as $m\to \infty$, $c\to 0$. This implies $P\big(\frac{S_n}{\sqrt{n}} \gt m\big) \rightarrow 0$, but by the Central Limit Theorem $\frac{S_n}{\sqrt{n}} \to N\big(0,\sigma^2\big)$, hence $P\big(\frac{S_n}{\sqrt{n}} \gt m\big) \rightarrow c'$ for $c'\in \big(0,1\big)$, which is a contradiction. Addendum: for a slightly different finish with a few more details spelled out, consider that since the chain starts at state 0 and by assumption it is positive recurrent, it suffices to tease out a contradiction with $P\big(S_n =0\big)$. By the positive recurrence assumption $P\big(S_n = 0\big)\to \pi_0 \in (0,1)$, or $\big \vert P\big(S_n = 0\big) - \pi_0\big \vert \lt \epsilon$ for any $\epsilon \gt 0$ for $n\geq n^*$.
And we also have $0\leq P\big(S_n = 0\big) = P\big(\frac{S_n}{\sqrt{n}} = 0\big) \lt \epsilon'$ for $n\geq n'$ by the CLT (and e.g. using Stein's Method or Berry–Esseen for explicit error bounds). Now select $\epsilon :=\epsilon' := \frac{\pi_0}{3}$ and $m:=\max\big(n',n^*\big)$ and we get $\frac{2}{3}\pi_0 \lt \Big \vert \big \vert\pi_0\big \vert- \big \vert P\big(\frac{S_{m}}{\sqrt{m}} = 0\big)\big \vert \Big \vert \leq \Big \vert \pi_0- P\big(\frac{S_{m}}{\sqrt{m}} = 0\big) \Big \vert=\big \vert P\big(S_{m} = 0\big) - \pi_0\big \vert \lt \epsilon = \frac{1}{3}\pi_0,$ which is a contradiction.
Homology class of real projective space inside complex projective space
We proceed by computing the unoriented self intersection number of $\mathbb{R}P^n $ inside $\mathbb{C}P^n$. In the $n$ even case, we will show this is $1$, and hence, that $\mathbb{R}P^n$ represents the nontrivial homology class. The unoriented self intersection number is equal to the $n$th Stiefel-Whitney class of the normal bundle of the inclusion, since the normal bundle controls perturbations of the embedding and the $n$th Stiefel-Whitney class counts mod 2 the number of zeroes of a generic section. Let us identify what the normal bundle is: We put a Riemannian metric on $\mathbb{R}P^n$ so that we may talk about disk bundles. We will describe a smooth way to embed the open disk bundle (of the tangent bundle) of $\mathbb{R}P^n$, so as to conclude that the normal bundle of the embedding is actually the tangent bundle, $T(\mathbb{R}P^n )$ itself. Around a given point $x$, we may identify the fiber of the disk bundle with the $\epsilon$-ball around $x$ in $\mathbb{R}P^n$. Locally, this is described as follows (playing fast and loose with $\epsilon$): we may pick a unit vector $(x_1,\dots,x_n)$ generating $x$ and consider the unit vectors within $\epsilon$ of $x$, each of which generate lines which together give our neighborhood of $x$. Hence, given some $x'$ in my ball, it makes sense to talk about $x-x' = (\delta_1,\dots,\delta_n) \in \mathbb{R}^n$. Now our embedding sends $x=\langle (x_1,\dots,x_n) \rangle$ to $\langle (x_1 +0i,\dots,x_n+0i) \rangle$. So we define $\iota: \operatorname{Disk}(T(\mathbb{R}P^n)) \rightarrow \mathbb{C}P^n$ by the fiberwise formula $x' \rightarrow \langle(x_1 + \delta_1 i, \dots, x_n + \delta_n i)\rangle$. Clearly, this extends the original embedding. It is not hard to see that if $\epsilon$ is small, this is well defined and an embedding. So we conclude the normal bundle of this embedding is $T(\mathbb{R}P^n)$, as claimed. Now we can directly calculate the $n$th Stiefel-Whitney class.
It is equal to the mod two reduction of the Euler characteristic of $\mathbb{R}P^n$. Hence, in odd dimensions it is trivial, and in even dimensions it is nontrivial.
Necessary condition for the extension $F(\alpha) /F$ to be algebraic?
A very easy counterexample is $\alpha \in \mathbb{R}$ any transcendental number over $\mathbb{Q}$. Since $\operatorname{Aut} (\mathbb{R} / \mathbb{Q}) = \{ 1_{\mathbb{R}}\}$, the set $\{ \varphi (\alpha) : \varphi \in \operatorname{Aut} (\mathbb{R} / \mathbb{Q}) \} = \{ \alpha \}$ is a finite set.
How to get the real frequency from an FFT output graph?
The Fast Fourier Transform is at its fastest when the number of points is a power of 2. So it pads your series with a string of 19 zeros until it reaches 64 points. It looks for sequences of the form $A(f)\exp(i[2\pi fn+\theta(f)])$. So the frequency $f$ for $\sin 5t$ is $f=5/(2\pi)$. If you change $f$ by an integer, you get the same sequence because $\exp(i2\pi)=1$. So it only looks for $f$ between $-1/2$ and $1/2$. If you give it a real sequence, then the amplitude $A(f)=A(-f)$. That's why it only gives the plot between $f=0$ and $f=1/2$. Your $0.203125$ is actually $1-5/(2\pi)$, and the other one is $2-10/(2\pi)$.
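The bin arithmetic can be reproduced with a hand-rolled DFT (a sketch that assumes 64 samples of $\sin 5n$, $n=0,\dots,63$, rather than the 45-plus-padding setup described above):

```python
import cmath
import math

N = 64
x = [math.sin(5 * n) for n in range(N)]

# Plain O(N^2) DFT; bin k corresponds to frequency f = k/N cycles per sample
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# For a real signal |X[k]| = |X[N-k]|, so it suffices to look at f in [0, 1/2]
peak = max(range(N // 2 + 1), key=lambda k: abs(X[k]))
peak_f = peak / N   # 13/64 = 0.203125, close to 1 - 5/(2*pi)
```

The peak lands in bin 13 because the frequency $5/(2\pi)\approx 0.796$ cycles per sample aliases to $1-5/(2\pi)\approx 0.204$, i.e. bin $\approx 13.07$ out of 64.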
How do you gather terms?
First of all, you missed one term: you should expect $2 \times 4 = 8$ terms, but you only have $7$. Then you should see that everything except $x - y$ cancels in pairs. But to make your life simpler, write $a = x^{1/4}, b = y^{1/4}$, and rewrite your expression as \begin{align} &(a - b) (a^{3} + a^{2} b + a b^{2} + b^{3}) \\=& \begin{matrix} a^{4} &+& a^{3} b &+& a^{2} b^{2} &+& a b^{3}\\ &-& a^{3} b &-& a^{2} b^{2} &-& a b^{3} &-& b^{4} \end{matrix} \\=& a^{4} - b^{4} \end{align} where you see clearly the terms cancelling in pairs. (In the two-row expression, the first row is the expansion of $a (a^{3} + a^{2} b + a b^{2} + b^{3})$, the second one the expansion of $-b(a^{3} + a^{2} b + a b^{2} + b^{3})$.)
Maximizing the Integral
The two cases $a, b\leq -1$ and $2\leq a, b$ can be excluded by noting that those integrals can't get positive, since the function is negative on $[a, b]$. All other cases can be covered in one fell swoop by saying that $\int_{-1}^af(x) dx$ and $\int_b^2f(x)dx$ are both non-negative, so $$ \int_{-1}^2f(x)dx = \int_a^bf(x)dx + \int_{-1}^af(x)dx + \int_b^2f(x)dx $$ shows that $\int_{-1}^2f(x)dx \geq \int_a^bf(x)dx$ (keeping in mind that $\int_p^qf(x)dx = -\int_q^pf(x)dx$).
Metric in the projective space $P^n$
How about this? Since $x-z=(x-y)+(y-z)=(x+y)-(y+z)$ and $x+z=(x-y)+(y+z)=(x+y)-(y-z)$, each of the four sums $\|x\mp y\|+\|y\mp z\|$ bounds either $\|x-z\|$ or $\|x+z\|$ from above. Hence \begin{align*} d([x],[z])&=\min(\|x-z\|,\|x+z\|)\\ &\leq \min_{\varepsilon,\delta\in\{\pm1\}}\big(\|x-\varepsilon y\|+\|y-\delta z\|\big)\\ &= \min(\|x-y\|,\|x+y\|)+\min(\|y-z\|,\|y+z\|)\\ &= d([x],[y])+d([y],[z]), \end{align*} where the last minimum splits as a sum because the two choices of sign are independent.
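A quick exhaustive check of the triangle inequality for this metric on a grid of concrete unit vectors in $\mathbb{R}^3$ (an illustrative sketch, not a proof):

```python
import math
from itertools import product

def norm(u):
    return math.sqrt(sum(c * c for c in u))

def d(u, v):
    # distance between the lines [u], [v] spanned by unit vectors u, v
    return min(norm([a - b for a, b in zip(u, v)]),
               norm([a + b for a, b in zip(u, v)]))

def unit(u):
    n = norm(u)
    return tuple(c / n for c in u)

vecs = [unit(v) for v in product((-1.0, 1.0, 2.0), repeat=3)]
triangle_ok = all(d(x, z) <= d(x, y) + d(y, z) + 1e-12
                  for x in vecs for y in vecs for z in vecs)
```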
What are the dimensions of the largest cylinder that can fit inside a sphere?
Let $r$ be the radius of the cylinder, $R$ be the radius of the sphere, and $h$ be half the height of the cylinder. In the cross section of the sphere through its center, the cylinder appears as an inscribed rectangle, and $R$ is the hypotenuse of a right triangle with legs $r$ and $h$. Then we see that $h^2 + r^2 = R^2 \Rightarrow h = \sqrt{R^2 - r^2}$.
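The answer above stops at the constraint; assuming the goal is to maximize the cylinder's volume $V=\pi r^2\cdot 2h=2\pi r^2\sqrt{R^2-r^2}$, a quick ternary search (with the illustrative choice $R=1$) recovers the known optimum $r=\sqrt{2/3}\,R$:

```python
import math

R = 1.0

def volume(r):
    # full cylinder height is 2h = 2*sqrt(R^2 - r^2)
    return 2 * math.pi * r * r * math.sqrt(R * R - r * r)

# Ternary search for the maximizer of the unimodal function on [0, R]
lo, hi = 0.0, R
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if volume(m1) < volume(m2):
        lo = m1
    else:
        hi = m2
best_r = 0.5 * (lo + hi)   # approximately sqrt(2/3) = 0.8165...
```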
Is smallest binary tree simply root node? Or does it need to have two child nodes?
There are three main conventions regarding rooted binary trees: 1. A node can have 0, 1 or 2 children with no distinction between left and right (a leaf being a node without children). The root alone is a proper tree. 2. A node can have one left child and one right child (so 0, 1 or 2 in total; leaves defined as in 1.). The root alone is a proper tree. 3. A node always has 2 children, but there are special placeholder nodes that always have 0 children (and here those are called leaves). It varies; usually it follows from the context whether the tree can be "empty" or not. Whatever the convention is, a parent node never has more than 2 children (hence binary tree). In the case of a finite tree, well-founded induction usually runs from the root to the leaves, but other directions (e.g. from the leaves to the root) are also possible. Also, it is helpful to know that one can reroot the tree (in case of convention 1 or 2) -- you just grab one of the vertices and, keeping the tree in the air, shake it a little, so all the other nodes will fall below the one you have in your hand (remember, do not shake it too much or the branches may snap...). To prove that a relation is well-founded you need to show that it doesn't contain infinite descending chains. There are many techniques that can be used, but the easiest (this is my opinion only) is to inject it into another relation that you know to be well-founded, i.e. to prove that $R$ is well-founded it is enough to show that there exists some well-founded $S$ and a map $f : R \to S$ such that $a < b \Rightarrow f(a) < f(b)$ (notice the strict inequality!).
Useful examples of $S$ are: natural numbers; lexicographic order on tuples of fixed length (each element of the tuple may be compared using a different order as long as all the orders are well-founded); lexicographic order on tuples of any length with its length (as a natural number with the standard ordering) at the first place; multiset order (this one is really nice); any DAGs (directed acyclic graphs), e.g. other trees; any other well-founded relation that you know. Finally, beware the non-strict inequalities, usually they only cause troubles in this context! Edit: I couldn't find any nice papers that define the usual multiset ordering (by Dershowitz and Manna), so here is the definition: Let $\mathbb{X}$ be any set and $M$ and $N$ be two multisets on $\mathbb{X}$, that is, functions $\mathbb{X}\to\mathbb{N}$ with finite support. Then $M <_{DM} N$ if and only if there exist two multisets $A$ and $B$ such that $\varnothing \neq A, A \subseteq N$, $M = (N - A) + B$, $\forall b \in B\ \ \exists a \in A.\ b <_\mathbb{X} a$. Intuitively it means that you can exchange one element of $N$ for any number of strictly smaller (with respect to some ordering $<_\mathbb{X}$ on $\mathbb{X}$) elements to obtain $M$. There is also an equivalent definition that reads: $M <_{DM} N$ if and only if they are not equal and $$\forall x \in \mathbb{X}.\ N(x) <_\mathbb{N} M(x) \Rightarrow \exists y \in \mathbb{X}.\ x <_\mathbb{X} y \land M(y) <_\mathbb{N} N(y).$$ Maybe that one is even easier to understand. Hope that helps ;-)
Identifying the dual space of a Hilbert space using the Riesz representation theorem.
You can restate the theorem as "for every $f \in H^*$ there exists a unique $v_f \in H$ such that $f(u)=\langle u,v_f \rangle$ for all $u \in H$." Which proof? Are you asking about the proof of "we can identify $H^*$ with $H$" from the statement I just gave? The point is that each element of $H^*$ corresponds to an element of $H$ (by the statement above) while each element of $H$ corresponds to an element of $H^*$ (by bilinearity of the inner product and Cauchy-Schwarz). $f=\langle \cdot,v_f \rangle$ means "$f(u)=\langle u,v_f \rangle$ for all $u$ in the domain of $f$".
What is the dimension of $S$?
$S$ can be written as the $p$-fold product of $\ker(A)$, since an element of $S$ consists of $p$ columns, each in $\ker(A)$. Thus, the dimension of $S$ is $p$ times that of $\ker(A)$. By rank-nullity, this gives the answer as $p (n - \text{rank}(A))$.
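A concrete numerical check of the formula $\dim S = p(n-\operatorname{rank}A)$, with an illustrative matrix and rank computed by row reduction:

```python
def rank(M, tol=1e-9):
    # Row reduction with partial pivoting, counting pivot rows
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A is 2x3 with rank 2 (nullity 1), and we take p = 4 columns
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
n, p = 3, 4
dim_S = p * (n - rank(A))   # 4 * (3 - 2) = 4
```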
How to find the height of the mountain
The formula for the height is wrong. We know $\angle{BCD}=\frac{\pi}{2}-\gamma$ and $\angle{BCA}=\frac{\pi}{2}-\alpha$ so therefore $\angle{DCA}=\frac{\pi}{2}-\alpha - (\frac{\pi}{2}-\gamma)=\gamma-\alpha$. We also know $\angle{DAC}=\alpha-\beta$. We therefore know that $\angle {ADC}=\pi +\beta-\gamma$. Using the law of sines we can find that $$\frac{\sin (\angle{ADC})}{AC} = \frac{\sin(\angle{DCA})}{a}$$ or $$\frac{\sin (\pi+\beta-\gamma)}{AC} = \frac{\sin(\gamma-\alpha)}{a}$$ or $$AC= \frac{a\sin(\gamma-\beta)}{\sin(\gamma-\alpha)}$$ As the height of the mountain is $h=AC\sin(\alpha)$, we arrive at the formula $$h =\frac{a\sin(\alpha)\sin(\gamma-\beta)}{\sin(\gamma-\alpha)}$$
Proving something is a spline of degree $n$
Let $f_k(x) = (x - kh)^n_+$. Each $f_k(x)$ is a piecewise polynomial with a single "break" or "knot" at $x=kh$. In fact $$ f_k(x) = 0 \quad \text{for } x \le kh \\ f_k(x) = (x - kh)^n \quad \text{for } x > kh \\ $$ Since $S_{(n)}(x)$ is just a sum of these $f_k(x)$ functions, it is again a piecewise polynomial, therefore it's a spline. It's easy to check that $f_k(x)$ has $n-1$ continuous derivatives at $x=kh$ -- just compare derivatives from the left and right: the derivatives of $(x-kh)^n$ of order $j<n$ vanish at $x=kh$, while the $n$th derivative jumps from $0$ to $n!$. And it's infinitely differentiable everywhere else, of course. So, $S_{(n)}(x)$ has $n-1$ continuous derivatives, also. The only places where $S_{(n)}(x)$ might not be infinitely differentiable are at the points $x=kh$ for $k=0,\ldots,n+1$. At these places, $S_{(n)}(x)$ might inherit a discontinuity in the $n$th derivative from one of the $f_k(x)$ functions.
$S:\mathbb{R}^2 \to \mathbb{R}$ , $S(x,y)=x+y$ is continuous.
It almost seems like you have it backwards. You should be fixing $\epsilon$ first and then finding a $\delta$ which is sufficiently small. You have correctly come to the inequality: $$d(S(x,y),S(a,b))=d(x+y,a+b) \leq |x-a| + |y-b|$$ What we want to ensure is that for a given $\epsilon$, there exists a sufficiently small $\delta$. In other words, for any chosen $\epsilon$, can we choose a $\delta$ such that $$d((x,y),(a,b)) < \delta \quad \Rightarrow \quad |x-a| + |y-b| < \epsilon \quad ?$$ For a start, let's evaluate $d((x,y),(a,b))$: $$d((x,y),(a,b)) = \sqrt{(x-a)^2 + (y-b)^2}$$ Note that this is greater than or equal to both $|x-a|$ and $|y-b|$. So $$2 \cdot d((x,y),(a,b)) \geq |x-a| + |y-b|$$ $$2 \cdot d((x,y),(a,b)) \geq d(S(x,y),S(a,b))$$ Which means that for a given $\epsilon$, we can ensure that the right hand side is strictly less than it by setting $$d((x,y),(a,b)) < \epsilon / 2$$ And so for a given $\epsilon$, a $\delta$ which will work in general is $\epsilon / 2$.
2D Coordinates of Projection of 3D Vector onto 2D Plane
Your goal should be finding a suitable $2\times3$ matrix which you multiply with your 3D vector to obtain the projected 2D vector. I assume that $e_1, e_2, e_3$ are unit length and mutually orthogonal, i.e. that you're dealing with an orthogonal coordinate system in 3D. All 3D vectors are assumed to be expressed in this coordinate system. Without orthogonality, you'd have trouble matching the relation of $e_1', e_2', n$ to that of $e_1,e_2,e_3$, as $n$ is orthogonal to $e_1',e_2'$. You first need to find a vector $e_1'$ which should be unit length, lie in the plane, but may be rotated about the origin in an arbitrary way. One way to achieve this is by choosing an arbitrary vector $v$ and computing the cross product $v\times n$. The resulting vector will always be orthogonal to $n$. If you are unlucky, $v$ might be parallel to $n$, in which case the cross product has length zero. So in the possible presence of numerical complications (i.e. rounding errors, so you won't get an exact zero), it might be easiest to try $e_1,e_2,e_3$ as $v$, and choose the result with maximal length. Then normalize its length to 1, and you have a suitable $e_1'$. Next, you compute $e_2'$ as the cross product of $e_1'$ and $n$. Depending on the way $e_1,e_2,e_3$ relate to one another, you'll have to do this either in one or in the other order to end up with the correct sign for $e_2'$. Simply try them out. The result should already be unit length, at least if $n$ had unit length and $e_1'$ was chosen as described above. Now that you have $e_1'$ and $e_2'$, you can simply use these as the rows of your desired projection matrix. The rationale here is as follows: the matrix times vector multiplication will compute two scalar products, which correspond to the portion of your input vector which lies in the direction of that vector, i.e. the length of the orthogonal projection along one direction.
Take two of these, and you have coordinates in a coordinate system within the plane, obtained from orthogonal projection onto that plane.
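The recipe above can be sketched in pure Python (the standard basis, the example normal, and helper names like `make_basis` are illustrative choices):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(u):
    n = math.sqrt(dot(u, u))
    return tuple(c / n for c in u)

def make_basis(n):
    # Try each standard basis vector, keep the cross product of maximal length
    candidates = [cross(e, n) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    e1p = normalize(max(candidates, key=lambda v: dot(v, v)))
    e2p = cross(n, e1p)            # already unit length if n is
    return e1p, e2p

def project(v, n):
    # Rows of the 2x3 projection matrix are e1' and e2'
    e1p, e2p = make_basis(n)
    return (dot(e1p, v), dot(e2p, v))

n = normalize((1.0, 2.0, 2.0))
uv = project(n, n)                 # the normal itself projects to (0, 0)
```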
Solve system of ODEs
First, get rid of one of the derivatives by integrating: $$ Cx'+Fx''=0 \quad\Longrightarrow\quad Cx(t)+Fx'(t)=Cx(0)+Fx'(0) $$ Then solve with respect to the highest derivative $$ x'=-(F^{-1}C)x+\big(F^{-1}Cx(0)+x'(0)\big) $$ Then multiply by $\exp(tF^{-1}C)$, letting $A=F^{-1}Cx(0)+x'(0)$: $$ \exp(tF^{-1}C)\big(x'+(F^{-1}C)x\big)=\exp(tF^{-1}C)\,A, $$ or $$ \frac{d}{dt}\big(\exp(tF^{-1}C)x(t)\big)=\exp(tF^{-1}C)\,A. $$ Finally, integrating over $[0,t]$, $$ \exp(tF^{-1}C)x(t)-x(0)=\int_0^t \exp(s\,F^{-1}C)\,A\,ds, $$ or \begin{align} x(t) &=\exp(-tF^{-1}C)x(0)+\int_0^t \exp\big(-(t-s)\,F^{-1}C\big)\,A\,ds \\ &= \exp(-tF^{-1}C)x(0)+\int_0^t \exp\big(-(t-s)\,F^{-1}C\big)\,\big(F^{-1}Cx(0)+x'(0)\big)\,ds. \end{align}
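A scalar ($1\times1$) sanity check of the final formula via finite differences (illustrative values; in the scalar case $\exp$ is the ordinary exponential and the integral evaluates to $M^{-1}(1-e^{-tM})A$ with $M=C/F$):

```python
import math

C, F = 2.0, 0.5          # scalar "matrices"
x0, v0 = 1.0, -0.3       # x(0) and x'(0)

M = C / F
A = M * x0 + v0          # A = F^{-1} C x(0) + x'(0)

def x(t):
    # x(t) = e^{-tM} x(0) + (1 - e^{-tM}) A / M
    return math.exp(-t * M) * x0 + (1 - math.exp(-t * M)) * A / M

# Check C x' + F x'' = 0 at a sample point via central differences
t, h = 0.7, 1e-5
x1 = (x(t + h) - x(t - h)) / (2 * h)               # x'(t)
x2 = (x(t + h) - 2 * x(t) + x(t - h)) / (h * h)    # x''(t)
residual = C * x1 + F * x2
```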
An alternative formula for computing curvature of a curve
The curvature is the norm of the second derivative of the curve; however, one uses not the usual derivative with respect to $t$, but the derivative with respect to the unit-speed parameter $s$. This derivative is given by $$ \frac{d}{ds} f(t) = \frac{1}{\| \dot \gamma \|} \frac{df}{dt}\,. $$ The unit-speed parameter is characterized by $$ \frac{ds}{dt} = \| \dot \gamma \|\,,$$ and taken together you obtain the suspicious-looking formula $$ \frac{d}{ds} f(t) = \frac{df/dt}{ds/dt}\,. $$
Where is $f(z) = |x^2 - y^2| + 2i|xy|$holomorphic?
$f$ can only fail to be complex differentiable where one of the absolute values changes sign, i.e. on the lines $x^2-y^2=0$, $x=0$ and $y=0$; away from them, $f$ is locally given by one of four smooth formulas. It's easy to show that $f(-z)=f(z)$ and $f(\bar z)=f(z)$ Consider $z\in \Bbb C \text{ s.t } x^2-y^2>0 \text{ and } xy>0$ $f(z)=z^2$ : holomorphic. Now for $z\in \Bbb C \text{ s.t } x^2-y^2<0 \text{ and } xy<0$ $f(z)=-z^2$ : holomorphic For $z\in\Bbb C \text{ s.t } x^2-y^2>0 \text{ and } xy<0$ $f(z)=\bar z^2$ : NOT holomorphic And finally for $z\in \Bbb C \text{ s.t } x^2-y^2<0 \text{ and } xy>0$ $f(z)=-\bar z^2$ : NOT holomorphic.
Asymptotic Value of $\sum_{n=1}^{N} \log(1+1/n) \log (\sum_{m=0}^{n} \binom{N}{m})$?
Not a complete answer, but maybe in the right direction. One way to attack this is to look at $h(N+1) = f(N+1)-f(N)$ and then use the fact that $\lim_{N \to \infty} \frac{f(N)}{N} = \lim_{N \to \infty} h(N)$ whenever the latter limit exists (Stolz–Cesàro). Here $h(N+1) = \log(1+\frac{1}{N+1}) \log \sum_{m=0}^{N+1} \binom{N+1}{m} + \sum_{n=1}^{N} \log(1+1/n)\big[ \log \sum_{m=0}^{n} \binom{N+1}{m} - \log \sum_{m=0}^{n} \binom{N}{m}\big]$. The first term is $\log(1+\frac{1}{N+1}) \log 2^{N+1} = \log 2 \cdot \log\big((1+\frac{1}{N+1})^{N+1}\big) \to \log 2 \cdot \log e = \log 2$ as $N \to \infty$. I'm not so sure about the second term. When I compute it in Matlab it seems to be slightly above 1. When I compute the original expression $f(N)/N$ in Matlab, I get a value around $\log 2 + 1$, which supports the conjecture that $\lim_{N \to \infty} h(N) = \log 2 + 1$ and hence $\lim_{N \to \infty} \frac{f(N)}{N}= \log 2 + 1$.
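The conjecture is cheap to probe numerically with exact integer binomials (a throwaway script of mine, using a prefix-sum table of the binomial coefficients):

```python
import math

def f_over_N(N):
    # prefix[n] = sum_{m=0}^{n} C(N, m), computed once with big ints
    prefix, s = [], 0
    for m in range(N + 1):
        s += math.comb(N, m)
        prefix.append(s)
    total = sum(math.log1p(1.0 / n) * math.log(prefix[n])
                for n in range(1, N + 1))
    return total / N

val = f_over_N(200)
target = math.log(2) + 1   # conjectured limit, about 1.693
```

For moderate $N$ the ratio can be compared directly against $\log 2 + 1$; convergence may be slow, so treat the finite-$N$ value only as supporting evidence.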
derivatives equivalent somehow
Hint: $$\frac x{1-x}=-1+\frac1{1-x}$$
drawbacks of non positive definite matrices
Linear systems with SPD matrices can be solved by the Conjugate Gradient (CG) algorithm and its variants, which are very efficient. This is because SPD matrices have many very pleasant properties. Also, it is usually quite easy to show that a matrix is SPD, but if it isn't SPD you have to resort to more complicated tools to show that it is invertible, and to more computationally expensive solution algorithms. But as long as you can use Matlab's backslash operator you don't have to worry about it: it can even recognize non-invertible matrices and use pseudo-inverses.
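A minimal illustration with SciPy's conjugate gradient on a made-up SPD system ($B^{\mathsf T}B + I$ is symmetric positive definite by construction):

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B.T @ B + np.eye(50)          # symmetric positive definite
b = rng.standard_normal(50)

x, info = cg(A, b, atol=1e-10)    # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

For a matrix that is merely invertible rather than SPD, you would fall back to something like GMRES or a direct factorization, which is the extra cost alluded to above.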
$W = \{x\in l_0 : \langle x,a\rangle =0\}$ where $a=(1,\dfrac12,\dfrac13,...)$ and $l_0$ is the space of sequences with finitely many non-zero terms. Show $W$ is separable
Let me show you that $W$ is not closed, by exhibiting an $x$ outside $l_0$ that is orthogonal to $a$. Let $$ x_n=\begin{cases}\dfrac1k,&\ n=k^2,\\ \\ -\dfrac{k^2+1}{k^3},&\ n=k^2+1,\\ \ \\0,&\ \text{otherwise.}\end{cases} $$ Then $$ \sum_{n=1}^\infty |x_n|^2=\sum_{k=1}^\infty\left(\dfrac1{k^2}+\dfrac{(k^2+1)^2}{k^6}\right)=\sum_{k=1}^\infty\dfrac1{k^2}\,\left(1+\dfrac{(k^2+1)^2}{k^4}\right)\leq\sum_{k=1}^\infty\dfrac5{k^2}<\infty. $$ If we write $x^{(n)}$ for the truncation of $x$ to its first $n$ entries (setting the rest to zero), we have: for $n=k^2$, $$ \langle x^{(n)},a\rangle =\dfrac1k\,\dfrac1{k^2}+ \sum_{j=1}^{k-1}\left(\dfrac1j\,\dfrac1{j^2}-\dfrac{j^2+1}{j^3}\,\dfrac1{j^2+1}\right)=\dfrac1{k^3}; $$ and for $k^2<n<(k+1)^2$, $$ \langle x^{(n)},a\rangle = \sum_{j=1}^{k}\left(\dfrac1j\,\dfrac1{j^2}-\dfrac{j^2+1}{j^3}\,\dfrac1{j^2+1}\right)=0. $$ In either case, $$ |\langle x^{(n)},a\rangle|\leq\dfrac1n\to0, $$ so $\langle x,a\rangle=0$. In particular the truncations $x^{(n)}$ with $k^2<n<(k+1)^2$ lie in $W$ and converge to $x$, which is not in $l_0$. So $W$ is not closed. It is separable, being a subspace of $l_0$, which is separable.
A surjective map which is not a submersion
Yes! How about $f:\mathbb{R}\to\mathbb{R}$, $f(x) = x^3$? At $x=0$, $df = f'(x) = 3x^2$ is not surjective. (In general, you can perturb a submersion to have points where the differential does not have full rank.) Edited in response to your question below: Sard's theorem guarantees that critical values of $f$ will be of measure zero. However, critical points of $f$ need not have measure zero. For example, the smooth map $$f(x) = \begin{cases} (x-2)e^{-(x-2)^{-2}}, & 2\leq x \\ 0, & -2<x<2 \\ -(x+2)e^{-(x+2)^{-2}}, & x\leq -2 \end{cases}$$ has a critical value at $0$, but its critical point set is $[-2,2]$. As another (stupid) example, take a disconnected manifold and project it onto one of its components by sending the other component to a point. For example, $S^1\sqcup S^1\to S^1$ by $id\sqcup *$.
Why, when $f$ has a local minimizer that is not a global minimizer, does $f$ have another critical point?
Let $x$ be the local minimum which is not global; there exist $u<x<v$ such that $f(u)>f(x)$, $f(v)>f(x)$, and $f(x)$ is the minimum of $f$ on $[u,v]$, since $x$ is a local minimum. Let $y$ be such that $f(y)<f(x)$. Suppose first that $y>v$. Since $f$ is continuous and $f(v)>f(x)>f(y)$, the intermediate value theorem gives $z\in [v,y]$ such that $f(z)=f(x)$. By the mean value theorem, $f(z)-f(x)=0=f'(c)(z-x)$ for some $c\in (x,z)$, and since $z\neq x$ this forces $f'(c)=0$. If instead $y<u$, the same argument produces $z\in[y,u]$ with $f(z)=f(x)$, and the mean value theorem on $[z,x]$ again gives a $c$ with $f'(c)=0$.
Zero element in a Hilbert space is orthogonal?
This is a question of unwinding the definitions, and also a matter of stating a theorem precisely. Definition of orthogonal: vectors $\mathbf{v}$ and $\mathbf{w}$ are orthogonal iff $\langle \mathbf{v}, \mathbf{w}\rangle=0$. This is clearly true if $\mathbf{v}=0$ and $\mathbf{w}$ is any vector whatsoever, so the zero vector is orthogonal to all vectors. Definition of a span $[\mathbf{v}_1,\ldots,\mathbf{v}_n]$: the set of all vectors that can be expressed in the form $a_1\mathbf{v}_1+\cdots+a_n\mathbf{v}_n$, for scalars $a_1,\ldots,a_n$. (For simplicity let's deal only with the finite dimensional case.) Clearly the zero vector is in the span, just by taking all the $a_i$'s equal to 0. Now the theorem you seem to have implicitly in mind is this: if $\mathbf{w}$ is orthogonal to all the vectors $\{\mathbf{v}_1,\ldots,\mathbf{v}_n\}$, and $\mathbf{w}$ is in the span $[\mathbf{v}_1,\ldots,\mathbf{v}_n]$, then $\mathbf{w}=\mathbf{0}$. (True provided the inner product is positive definite, which is part of the definition for Hilbert spaces.) This doesn't contradict either definition. Corollary: any vector orthogonal to all the vectors of a basis is zero. I suppose someone might carelessly state this result as, "No vector can be orthogonal to all vectors of a basis", rather than "Only the zero vector can be orthogonal to all the vectors of a basis". The careless version is false. Another example, suggested by one of your comments: $W\cap W^\perp=\{\mathbf{0}\}$. Someone might carelessly say, "A subspace and its orthogonal complement are disjoint", rather than "The intersection of a subspace and its orthogonal complement consists solely of the zero vector". Let me use this question as an excuse to ramble on about mathematical exposition in general. It's a special case of the expert problem: if you thoroughly understand a subject, it's hard to put yourself in the place of a novice. 
Imprecise, technically incorrect statements whose overall sense is true flow easily from the lips or the fingers, especially if the correct version is more awkward or verbose. I think learning to navigate around these "abuses of language" is part of what people mean by "mathematical maturity". A really great expositor knows how to avoid strewing these pebbles (or boulders) in the path of the reader, without sacrificing an easy conversational style. Another commenter suggested defining "orthogonal" as a relation between nonzero vectors, so saying $\mathbf{v}$ and $\mathbf{w}$ are orthogonal would automatically imply that $\mathbf{v}$ and $\mathbf{w}$ are both nonzero. I don't like this, for three reasons. First, it disagrees with standard terminology. Second, getting used to the special role of the zero vector is part of getting past the novice stage. Third, the standard definition is logically "cleaner". For example, $W^\perp$ is defined as the set of all vectors that are orthogonal to all vectors in $W$. But with this modified definition, you have to add, "plus the zero vector". (We want $W^\perp$ to be a subspace.) Or looked at categorically: the zero space is an initial object in the category of vector spaces, which makes it analogous, in some sense, to the empty set (the initial object for Set). In Set, "complement" implies $A\cap B=\varnothing$; so in Vect it should imply $A\cap B=\{\mathbf{0}\}$.
What does Pappus' theorem say?
The order matters, because the lines (e.g. $AB$, $DE$, $EF$, $BC$, etc.) with one point on $l_1$ and one on $l_2$ could be parallel, and so $L$, $M$, or $N$ might not exist. For example, if the points appeared in the order $A$, $E$, $C$ on $l_1$ and $B$, $D$, $F$ on $l_2$, then we could easily have $$AB \cap DE = \varnothing$$
Maximum of absolute value of linear combinations with i.i.d random variables
This does not fully answer your question, but it might be helpful. You can write $\mathbf{x}=r \mathbf{u}$ such that $r$ is an exponentially distributed random variable with mean $\frac{\sqrt{n}}{(n-1)!}$ and $\mathbf{u}$ is a vector uniformly sampled from the surface of the unit $\ell_1$ ball. With independence of $r$ and $\mathbf{u}$, and using the fact that linear (and in general convex) functions attain their maximum on the boundary of their domain, you can reduce your problem to finding $C$ that maximizes $\mathbb{E}_\mathbf{u}\left[\sup_{\mathbf{c}\in\mathcal{P}}\mathbf{c}^\mathrm{T}\mathbf{u}\right]$, where $\mathcal{P}$ is the convex hull of $\lbrace\mathbf{c}_i\rbrace_{i=1}^n$. So, basically you want to find a polytope with $n$ vertices inscribed in the unit Euclidean ball such that it has maximum width in the sense defined above. I'm not sure if you can solve this exactly, but using symmetries of the $\ell_1$ ball you might be able to find some properties of $\mathcal{P}$. For instance, I guess $\mathcal{P}$ would be invariant under permutation. Correction: Since $r=\Vert \mathbf{x}\Vert_1$ is the sum of $n$ iid $\text{Exp(1)}$ random variables, it should have $\text{Erlang}(n,1)$ distribution itself rather than exponential.
Is it possible to find a standard deviation for a sample with only its average or mean available?
If your 4 groups are given as the 84 samples of values, then you can calculate the unbiased empirical standard deviation estimator (you can look it up online) for each of the groups from the samples. If all you have is the means of the 4 groups, and not the samples, then no, there is nothing you can do to run a standard $t$-test. However if you can come up with a prior distribution on the standard deviations (based on your beliefs about the samples), and come up with a prior on the means under the hypothesis they are different (and an improper flat prior should be fine), then you can compute the posterior odds ratio for the hypothesis that the means are different versus the hypothesis that the means are the same. That would be a Bayesian approach.
What is the volume of the region $S =\{(x, y, z) : |x| + |y| + |z| ≤ 1\}$?
The region $S$ is a regular octahedron with vertices at $(\pm1,0,0)$, $(0,\pm1,0)$, $(0,0,\pm1)$. In the first octant it is the simplex $\{(x,y,z): x+y+z\le 1,\ x,y,z\ge 0\}$, whose volume is $\iiint_{\substack{x+y+z\le 1\\ x,\,y,\,z\,\ge\, 0}}\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z=\frac16$. By symmetry, the volume of $S$ is $8$ times the volume of this simplex: $\operatorname{vol}(S)=8\times\frac16=\frac43$.
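The simplex has volume $\frac16$, so $\operatorname{vol}(S)=8\cdot\frac16=\frac43$; a quick Monte Carlo estimate over the bounding cube $[-1,1]^3$ (my own throwaway check) agrees:

```python
import random

random.seed(1)
trials = 200_000
# count sample points of the cube [-1,1]^3 landing in |x|+|y|+|z| <= 1
hits = sum(
    1 for _ in range(trials)
    if sum(abs(random.uniform(-1, 1)) for _ in range(3)) <= 1
)
volume = 8.0 * hits / trials   # cube volume times hit fraction; exact: 4/3
```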
generating function for $\frac{n!}{(2n)!}$
It might be better to consider the exponential generating function in this case. \begin{align} a_{n} &= \frac{n!}{(2n)!} \\ f(t) &= \sum_{n=0}^{\infty} a_{n} \, \frac{t^{n}}{n!} = \sum_{n=0}^{\infty} \frac{t^{n}}{(2n)!} = \cosh(\sqrt{t}). \end{align} Linear generating function, using the duplication formula $(1)_{2n} = 4^{n} \left(\frac{1}{2}\right)_{n} (1)_{n}$: \begin{align} g(t) &= \sum_{n=0}^{\infty} a_{n} \, t^{n} = \sum_{n=0}^{\infty} \frac{(1)_{n} \, (1)_{n} \, t^{n}}{n! \, (1)_{2n}} = \sum_{n=0}^{\infty} \frac{(1)_{n}}{n! \, \left(\frac{1}{2}\right)_{n}} \left(\frac{t}{4}\right)^{n} = {}_{1}F_{1}\left(1; \frac{1}{2}; \frac{t}{4}\right) \\ &= 1 + \frac{\sqrt{\pi \, t}}{2} \cdot e^{t/4} \cdot \operatorname{erf}\left(\frac{\sqrt{t}}{2}\right). \end{align} Equation (21) of MathWorld then follows: \begin{align} \sum_{n=0}^{\infty} \frac{t^{2n+1}}{(2n+1)!!} &= \frac{1}{t} \, \sum_{n=0}^{\infty} \frac{2^{n+1} \, (n+1)! \, t^{2n+2}}{(2n+2)!} = \frac{1}{t} \, \sum_{n=1}^{\infty} \frac{2^{n} \, n! \, t^{2n}}{(2n)!} \\ &= \frac{1}{t} \, \left[ g(2 t^{2}) - 1 \right] = \sqrt{\frac{\pi}{2}} \cdot e^{t^{2}/2} \cdot \operatorname{erf}\left(\frac{t}{\sqrt{2}}\right).\end{align}
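Both closed forms are easy to verify numerically by truncating the series (a quick check of mine, at $t=0.9$):

```python
import math

def a(n):
    return math.factorial(n) / math.factorial(2 * n)

t = 0.9
# EGF: sum a_n t^n / n!  should equal  cosh(sqrt(t))
egf = sum(a(n) * t**n / math.factorial(n) for n in range(60))
egf_closed = math.cosh(math.sqrt(t))
# OGF: sum a_n t^n  should equal  1 + (sqrt(pi t)/2) e^{t/4} erf(sqrt(t)/2)
ogf = sum(a(n) * t**n for n in range(60))
ogf_closed = (1 + math.sqrt(math.pi * t) / 2
                * math.exp(t / 4) * math.erf(math.sqrt(t) / 2))
```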
Estimate of mean weight?
The problem is badly stated. Clearly the rancher's estimate for the mean weight is $64.3$ lbs. I believe you are supposed to come up with a $95\%$ confidence interval around that based on the $20$ samples and known variance (which should have units of lbs$^2$). The only other interpretation I can see would be that he desires the confidence interval to be $\pm (64.3 \cdot 0.05)$, but then you should be asked to specify the number of llamas to weigh to meet this requirement.
Difference between elements of a free product
Note that $\langle g\rangle *\langle h\rangle$ injects canonically into $G*H$, hence we may assume wlog that $G$, $H$ are cyclic (with generators $g,h$). To show that the free product is non-abelian, it suffices to find a group $X$ and group homomorphisms $\phi\colon G\to X$, $\psi\colon H\to X$ where the images of $g$ and $h$ do not commute in $X$. Let $X$ be the group of isometries of the $2$-sphere, $\phi(g)$ the rotation around the $x$-axis (i.e., the line $(1,0,0)\cdot\Bbb R$) by $\frac{2\pi}{|g|}$ (or by $\frac\pi2$ if $|g|=\infty$), and $\psi(h)$ the rotation around the line $\ell=(1,1,0)\cdot\Bbb R$ by $\frac{2\pi}{|h|}$ (or by $\frac\pi2$ if $|h|=\infty$). Then $\phi(g)\psi(h)\phi(g^{-1})$ is a non-trivial rotation around the line $\phi(g)\ell\ne \ell$. In particular, $\phi(g)\psi(h)\phi(g^{-1})\ne\psi(h)$, as desired. If preferred, the above can be spelled out in form of $3\times 3$ matrices. Also, one can work with orthogonal axes instead of the somewhat artificial $\ell$ except in the case $|g|=|h|=2$.
Result on the power of norm in Banach space?
$\|x+y\|^{\lambda} \leq \big(2\max \{\|x\|,\|y\|\}\big)^{\lambda}=2^{\lambda} \max \{\|x\|^{\lambda},\|y\|^{\lambda}\} \leq 2^{\lambda} \big(\|x\|^{\lambda}+\|y\|^{\lambda}\big)$.
how to find coordinate of unknown point given the distance against N known points
See this wiki page. (Location estimation in sensor networks), or read the gazillion papers on the subject :)
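In case the link rots, the standard workhorse is linearized least squares (a sketch of mine, not taken from the linked page): subtracting one squared-distance equation from the rest cancels the quadratic $|p|^2$ term and leaves a linear system in the unknown point $p$.

```python
import numpy as np

def locate(anchors, dists):
    """Estimate p from |p - a_i| = d_i by subtracting the first
    equation from the others, which yields a linear system in p."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    # |p|^2 - 2 a_i.p + |a_i|^2 = d_i^2 ; subtracting the i=0 equation:
    #   2 (a_0 - a_i).p = d_i^2 - d_0^2 - |a_i|^2 + |a_0|^2
    A = 2 * (a[0] - a[1:])
    b = d[1:]**2 - d[0]**2 - np.sum(a[1:]**2, axis=1) + np.sum(a[0]**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

anchors = [(0, 0), (4, 0), (0, 3), (5, 6)]
true_p = np.array([1.0, 2.0])
dists = [np.linalg.norm(true_p - np.array(a)) for a in anchors]
estimate = locate(anchors, dists)
```

With noisy distances and more anchors, the same `lstsq` call returns the least-squares fit instead of an exact solution.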
Constant rank theorem
You are not actually required to describe such cubes. What the remark tells you is that, up to composition by some diffeomorphism, you may assume that $\phi(U)$ and $\psi(V)$ are cubes of the same side length. This gives you a way to locally visualise the image of $N$ in $M$.
Prove Lindelöf implies separable (for metrizable spaces)
Let $X$ be metrizable and Lindelöf. For each $n$, the set $\{B_{1/n}(x):x\in X\}$ is an open cover for $X$; Lindelöf $\implies$ this reduces to a countable subcover $\{B_{1/n}(x_i^{(n)}):i\in\mathbb{N}\}$. Let $C_n=\{x_i^{(n)}:i\in\mathbb{N}\}$ (the set of centers). Then $C=\bigcup_{n=1}^{\infty}C_n$ is countable. To show it is dense, let $y\in X$ and $\epsilon>0$ both be arbitrary, and let $N$ be such that $1/N<\epsilon$. Since $\{B_{1/N}(x_i^{(N)}):i\in\mathbb{N}\}$ covers $X$, $y\in B_{1/N}(x_j^{(N)})$ for some $j$. But then $d(x_j^{(N)},y)<1/N<\epsilon\implies x_j^{(N)}\in B_{\epsilon}(y)\cap C$, and so $C$ is dense.
Why do we divide the expectation of the indicator function times X by P(B) for E[X|B]?
$E[1[S=1] P] = 13/10$, not $13/5$, assuming each possible outcome you listed has the same probability.
Language and Finite Models
HINT: Think of a triangle whose vertices represent the points of the model, and whose edges represent the relation $R$.
Probability to select all 3 male mice among 10 selected at random
How many ways can you choose a group of 10 mice without restrictions? $^{100}C_{10}$. How many ways can you choose a group of 10 mice including all three male mice? $^{3}C_{3}\,^{97}C_{7}=\,^{97}C_{7}$, i.e. choose all three males and seven other mice. Hence the probability is $\frac{^{97}C_{7}}{^{100}C_{10}}$. Your calculation is more suited to independent events, where we'd put each mouse back after choosing it and then just look at the genders of the 10 mice chosen in this way.
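A one-line numeric check (mine) of the final fraction:

```python
from math import comb

# P(all three males are among the 10 chosen from 100)
p = comb(97, 7) / comb(100, 10)
# cancelling factorials, this equals (10 * 9 * 8) / (100 * 99 * 98)
```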
This is regarding Chi square test
Alternative hypothesis (what the study is trying to find out): interest in statistics is not the same on average for people of all mathematical abilities. Null hypothesis (negation of the alternative hypothesis): interest in statistics is the same on average for people of all mathematical abilities. With four degrees of freedom, the statistic $13.277$ gives a p-value of about $0.01$, so the test is significant at the $0.01$ level (it would be significant at the $0.001$ level only with a single degree of freedom). The p-value can be found in R using the command 1 - pchisq(13.277, k) where k is the number of degrees of freedom. Alternatively, the p-value is given by $1-\frac{1}{\Gamma(\frac{k}{2})}\gamma(\frac{k}{2},\frac{x}{2})$ where $\gamma(s,x)=\int_0^xt^{s-1}e^{-t}dt$
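The right-tail probabilities can also be tabulated directly, here with SciPy instead of R (`chi2.sf` is the survival function $1-F$):

```python
from scipy.stats import chi2

# p-values of the observed statistic 13.277 for a range of degrees of freedom
pvals = {k: chi2.sf(13.277, df=k) for k in (1, 2, 3, 4, 5)}
```

The p-value grows with the degrees of freedom, so how many groups the table has matters for the conclusion.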
Proving a relation between 2 sets as antisymmetric
First of all, for what you write to make sense, you're not picking two partitions and defining a relation between those particular two partitions. You're defining a single relation on the set of all partitions. Second, I think you're overcomplicating things by considering a partition to be an indexed family of subsets of $U$. Indeed, if you consider the assignment of indices to be part of the partition, then what you want to prove is not true, because then the two partitions $$ A_1 = \{1\}, A_2 = \{2,3,\ldots,n\} $$ and $$ B_1 = \{2,3,\ldots,n\}, B_2 = \{1\} $$ would satisfy $A\succ B\succ A$, but $A\ne B$. So we need to work with partitions being unordered collections of subsets of $U$, and your relation should then be defined as $$ B\succ A \quad\iff \forall b\in B\; \exists a\in A: b\subseteq a$$ In order to prove that this is antisymmetric, we assume $A\succ B\succ A$ and seek to prove that $A \subseteq B$. (Then, since $A$ and $B$ were arbitrary, and we also have $B\succ A\succ B$, the same argument shows $B\subseteq A$, so $A = B$). In order to prove this, it is crucial that $A$ and $B$ are partitions, which requires among other things that (1) any two different members of $A$ must be disjoint, and (2) the empty set is not in $A$. Now, we're assuming that $A\succ B\succ A$. To prove $A\subseteq B$ consider any $x\in A$. One of the $\succ$s gives us $y\in B$ such that $x\subseteq y$, and the other gives $z\in A$ such that $x\subseteq y\subseteq z$. But because $x$ and $z$ are both in $A$, they must be either equal or disjoint. Since $x$ is non-empty and $x\subseteq z$ they can't be disjoint, so $x=z$. But then $x\subseteq y \subseteq x$, and $y$ must equal $x$. Since $y$ was in $B$, we have proved $x\in B$.