Find a point that divides a line segment in a specific way
Let $dAB$ be the distance between points $A(-12,5)$ and $B(12,29)$. We have: $$dAB=\sqrt{(x_b-x_a)^2+(y_b-y_a)^2}=\sqrt{(12-(-12))^2+(29-5)^2}=\sqrt{24^2+24^2}=24\sqrt2$$ Then, $\frac{3}{8}dAB=\frac{3}{8}\cdot24\sqrt2=9\sqrt2$. Now, we need to find a point $P(x,y)$ such that: $(1)$ it lies on the segment $AB$ and $(2)$ $dAP=\frac{3}{8}dAB=9\sqrt2$, from which it follows: $$dAP=\sqrt{(x-(-12))^2+(y-5)^2}=\sqrt{(x+12)^2+(y-5)^2}=9\sqrt2$$ Squaring both sides: $$\tag{2} (x+12)^2+(y-5)^2=162$$ We are done with condition $(2)$, but now we need the equation of the line through $A$ and $B$: $y-y_a=m(x-x_a)$ with $-12<x<12$ and $5<y<29$, where $$m=\frac{y_b-y_a}{x_b-x_a}=\frac{29-5}{12-(-12)}=\frac{24}{24}=1$$ Therefore, the equation we want is: $\tag{1} y-5=x+12 \Rightarrow y=x+17$ Substituting $(1)$ in $(2)$: $$(x+12)^2+(x+17-5)^2=162$$ $$(x+12)^2+(x+12)^2=162$$ $$2(x+12)^2=162$$ $$(x+12)^2=81$$ $$|x+12|=9$$ From which we get: $x_1=-3 \Rightarrow y_1=14$ or $x_2=-21 \Rightarrow y_2=-4$, but the point $(x_2,y_2)$ is not on the segment $AB$. Hence, $P(-3,14)$ is the desired point.
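As a quick numerical sanity check, here is a short Python sketch; the section formula $P=A+\frac38(B-A)$ gives the same point directly.

```python
import math

A = (-12, 5)
B = (12, 29)

# Section formula: P = A + (3/8)(B - A)
t = 3 / 8
P = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
print(P)  # (-3.0, 14.0)

# Check the distance condition dAP = (3/8) dAB
dAB = math.dist(A, B)
dAP = math.dist(A, P)
print(math.isclose(dAP, 3 / 8 * dAB))  # True
```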
How are eigenvectors of $A^T$ perpendicular to eigenvectors of $A$?
If $\def\vec#1{{\bf#1}}A\vec v=\lambda\vec v$ and $A^T\vec w=\mu\vec w$ with $\lambda\ne\mu$ then $$\eqalign{(\lambda-\mu)\vec v\cdot\vec w &=(\lambda-\mu)\vec v^T\vec w\cr &=(\lambda\vec v^T)\vec w-\vec v^T(\mu\vec w)\cr &=(A\vec v)^T\vec w-\vec v^T(A^T\vec w)\cr &=\vec v^TA^T\vec w-\vec v^TA^T\vec w\cr &=0\ ,\cr}$$ and $\lambda-\mu\ne0$ so $\vec v\cdot\vec w=0$.
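A quick numerical illustration with NumPy, assuming the eigenvalues of the random test matrix are distinct (the generic case):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

lam, V = np.linalg.eig(A)      # right eigenvectors of A
mu, W = np.linalg.eig(A.T)     # eigenvectors of A^T (left eigenvectors of A)

# For every pair with distinct eigenvalues, the (unconjugated) product v^T w
# should vanish.
cross = [abs(V[:, i] @ W[:, j])
         for i in range(4) for j in range(4)
         if abs(lam[i] - mu[j]) > 1e-6]
print(max(cross))   # ~1e-15, i.e. numerically zero
```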
Hilbert Space multiplication Operator, shift operator
As pointed out by MaoWao, consider the following Fourier transform operator ${\cal F}: \ell^2(\mathbb{Z}) \rightarrow L^2(S^1)$ $$ {\cal F}[f(n)](k) = \hat{f}(k) = \frac{1}{\sqrt{2 \pi}} \sum_n e^{i kn} f(n) $$ with inverse $$ {\cal F}^{-1}[g(k)](n) = \frac{1}{\sqrt{2 \pi}} \int_0^{2 \pi} dk e^{-i kn} g(k) $$ For point a) First compute the Fourier transform of the standard basis. Note that $e_n(m) = \delta_{n,m}$ (Kronecker delta). So \begin{align} \hat{e_n}(k) &= \frac{1}{\sqrt{2 \pi}} \sum_m e^{i km} \delta_{n,m}\\ & = \frac{e^{ikn}}{\sqrt{2\pi}} \end{align} Now use $$ {\cal F} H{\cal F}^{-1} {\cal F} e_n = i ({\cal F}e_{n+1} - {\cal F}e_{n-1}) $$ which means \begin{align} {\cal F} H{\cal F}^{-1} e^{ikn} &= i (e^{ik(n+1)}- e^{ik(n-1)} ) \\ &= -2\sin(k) e^{ikn} \end{align} Hence $H$ is unitarily equivalent to multiplication by $-2\sin(k)$. Note that the sign is conventional: a different choice of signs in the Fourier operator would have led to $2\sin(k)$. For point b) Now Fourier transform the equality $ \exp{itH}h = h$ to get $$ \exp{ ( it {\cal F} H{\cal F}^{-1} )} \hat{h} = \hat{h} \tag{1} $$ Hence you need to find a non-zero function $\hat{h}(k)$ in $L^2(S^1)$ such that $$ \exp{[-2it \sin(k)]} \, \hat{h}(k) = \hat{h}(k) $$ for all $k\in [0,2\pi)$ (and all t) which is impossible. Note added: In fact $\sin(k)$ is non-zero for all $k$ (in the interval) except $k=0,\pi$. Hence, in order to satisfy (1), the function $\hat{h}(k)$ should be zero for all $k$ except possibly $k=0,\pi$. This is clearly the zero function almost everywhere. To add some intuition you can reason as follows. If such a function existed, it would try to be "infinite" at $k=0$ (or $k=\pi$ or both), in order to have a finite (non-zero) norm. This is achieved by a (Dirac) delta function --which of course is not a function--. So, "in a sense", $\hat{h}(k) = \delta(k)$, is a solution of $\exp{itH}h=h$. If you Fourier transform back you get the function (vector) with components $$ h(n) = \frac{1}{\sqrt{2 \pi}} $$ You can plug this in your equation defining $H$ and you will notice that it leads to $H h = 0$. So it would seem that $h$ is an eigenvector of $H$ with eigenvalue zero. However the vector with components $h(n) \propto 1$ is not normalizable, hence such $h$ does not belong to $\ell^2(\mathbb{Z})$.
Proof of a proposition about recursion definition (Terence Tao's Analysis I)
One has to be careful while forming infinite sets. One cannot form the set $\{(n,\alpha(n)): n \in \mathbb{N}\}$ simply because $\alpha(n)$ is defined for each $n \in \mathbb{N}$. Existence: Let $\mathcal{C} = \{A \subseteq \mathbb{N} \times \mathbb{N}: (0, c) \in A, (n++, f(n, a(n))) \in A \text{ for all } n \in \mathbb{N}\}$ and order $\mathcal{C}$ by inclusion $\subseteq$. $\mathbb{N} \times \mathbb{N} \in \mathcal{C}$ and therefore the collection $\mathcal{C}$ is not empty. It is easy to see that $C = \bigcap_{A \in \mathcal{C}} A \in \mathcal{C}$ and is the smallest element of $\mathcal{C}$ with this property. We claim that $C$ is a graph ($(x,y) \in C$ and $(x,z) \in C \implies y = z$). Suppose $(0,c),(0,d) \in C$ with $c \not = d$. Then $C- \{(0,d)\} \in \mathcal{C}$, a contradiction. Similarly, if $(n++, f(n,a(n)))$ and $(n++, d) \in C$ with $d \not = f(n, a(n))$ for some $n \in \mathbb{N}$, then $C - \{(n++, d)\} \in \mathcal{C}$, which contradicts the minimality of $C$. Hence $C$ is a graph and therefore defines a function. Uniqueness can be proved using induction.
What does a *pair* mean in the definition of a topological space?
What I don't understand is what exactly is meant by the pair $(X,\tau)$. A pair is a pair: two things in a definite order. That's it. Seriously. It's like a bicycle being a pair of wheels together with a frame and other mechanisms. Of course in mathematics a pair has a concrete definition, but it is not really relevant. What matters is that a pair is an ordered collection of two things. So a topological space is a set together with some additional structure, called a topology. The reason we say "a topological space is a pair $(X,\tau)$ such that..." is because the topological space depends on both the underlying set $X$ and the topology $\tau$. I.e. on a given set $X$ we can define multiple different topological structures, e.g. $\tau=\{\emptyset, X\}$ (a.k.a. the antidiscrete topology) and $\tau=P(X)$ (a.k.a. the discrete topology) are two different (unless $|X|\leq 1$) topologies on $X$. What does an element from this "topological space" look like? Is it also a pair? Strictly, formally speaking, we never talk about elements of a topological space (since formally any topological space $(X,\tau)$ has exactly two elements: $X$ and $\tau$). When we say that $x$ is an element of a topological space $(X,\tau)$, what we typically mean is that $x\in X$, i.e. we only talk about elements of the underlying set. Is there a way I can understand this geometrically, e.g. what would $(\mathbb{R},\tau)$ look like? That depends on $\tau$. When $\tau$ is the Euclidean topology, then we deal with the standard Euclidean line. But there are infinitely many nonequivalent topologies $\tau$ on $\mathbb{R}$. Some of them are easy to understand, e.g. the discrete topology on $\mathbb{R}$ makes every point isolated. But some of them are quite abstract and hard to wrap one's mind around, e.g. high-dimensional spaces (note that since $\mathbb{R}$ is equinumerous with any $\mathbb{R}^n$, for any $n$ we can define a topology on $\mathbb{R}$ making it $n$-dimensional).
Determine the distribution of the entire function's value
Hint: For the first question, use the fact that if $f$ is entire, then $\overline{f(\overline{z})}$ is entire too (see If $f$ is analytic, prove that $\overline{f(\overline{z})}$ is also analytic), and put $g(z)=f(z)\overline{f(\overline{z})}$. What is $g(x)$ for $x\in \mathbb{R}$ ?
Find the inverse and use it to solve
You found that $-8$ is the inverse, not $8$. $-8\equiv 27 \pmod {35}$. $8\cdot 13 \equiv-1\equiv 34\pmod {35}$. To solve the equation, you proceed exactly as you do in basic algebra. You multiply both sides with $13^{-1}$ and reduce.
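In Python, the three-argument pow gives the modular inverse directly; a minimal sketch (the right-hand side $b$ below is a placeholder, since the original equation isn't quoted here):

```python
inv = pow(13, -1, 35)
print(inv)                    # 27, i.e. -8 mod 35
print(13 * inv % 35)          # 1

# To solve 13*x = b (mod 35), multiply both sides by the inverse and reduce:
b = 4                         # placeholder right-hand side
x = inv * b % 35
print(13 * x % 35 == b % 35)  # True
```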
How to solve this question?
Perhaps, as joriki suggests, the question is to show that for a base $\mathcal{B}$ for a topology on $X$, the family $\tau_{\mathcal{B}}$ of arbitrary unions of elements of $\mathcal{B}$ -- including, I suppose, the empty union, which gives the empty set -- is the smallest topology containing $\mathcal{B}$. If so, this is certainly true and can be shown as follows: Step 1: Since any topology is closed under arbitrary unions, any topology $\tau$ containing $\mathcal{B}$ must contain $\tau_{\mathcal{B}}$, so it is enough to show that $\tau_{\mathcal{B}}$ is a topology. Recall that what we are assuming about $\mathcal{B}$ is that (B1) $\bigcup_{B \in \mathcal{B}} B = X$ and (B2) For all $B_1, B_2 \in \mathcal{B}$, if $x \in B_1 \cap B_2$, then there exists $B_3 \in \mathcal{B}$ such that $x \in B_3 \subset B_1 \cap B_2$. Step 2: Thus $\emptyset, X$ are unions of elements of $\mathcal{B}$: the former by taking the empty union, the latter by (B1). Step 3: Being the set of all unions of a certain family of sets, $\tau_{\mathcal{B}}$ is certainly closed under arbitrary unions. Step 4: So the matter of it is to show that $\tau_{\mathcal{B}}$ is closed under finite intersections. For this, it is enough to show that if $U_1,U_2 \in \tau_{\mathcal{B}}$, then so is $U_1 \cap U_2$. To show this we need to use condition (B2), which notice has not yet been used. This verification takes two or three lines. I urge the OP to try it herself and tell us whether she succeeded and if not what she tried.
Draw Balls from an Urn Without Replacement with Probability of Drawing Proportional to Size of Ball.
Fix $i$ and $j$, and consider any drawing where both balls $B_i$ and $B_j$ are still in the urn. Let $W$ be the total weight of all other balls still in the urn. The probability that we draw $B_i$, resp. $B_j$, is given by $$P(i)={V_i\over V_i+V_j+W},\quad P(j)={V_j\over V_i+V_j+W}\ ,$$ and since these events are exclusive the probability that we draw one of them is $$P(i\vee j)={V_i+V_j\over V_i+V_j+W}\ .$$ It follows that the conditional probability of drawing $B_i$, given that one of $B_i$ or $B_j$ is drawn, comes to $$P(i\>|\> i\vee j)={V_i\over V_i+V_j}\ ,$$and this is valid for all drawings until one of $B_i$ or $B_j$ is drawn, whereby the game ends.
How to determine if a set of vectors forms a basis for a subspace
For $S$ to be a basis, it must be linearly independent and span $U$. As you noted, linear independence can be checked by computing the kernel of the matrix. To check if $S$ spans $U$, let $\vec{u}$ be an arbitrary vector in $U$ and show that $\vec{u}\in\text{span}(S)$. To this end, a general vector of $U$ is of the form: $$\begin{bmatrix}2y+z+w \\ y \\ z \\ w\end{bmatrix}.$$ So we want to find $a,b,c\in\mathbb{R}$ such that $$\begin{bmatrix}2y+z+w \\ y \\ z \\ w\end{bmatrix}=a\begin{bmatrix}1 \\ 0 \\ 1 \\ 0\end{bmatrix}+b\begin{bmatrix}2 \\ 1 \\ 0 \\ 0\end{bmatrix}+c\begin{bmatrix}1 \\ 0 \\ 0 \\ 1\end{bmatrix}$$ To do this, try and check which vectors on the right-hand side contain nonzero entries which are zero for the other two vectors. For example, the first vector contains a $1$ in the third row, whereas all the other vectors have zero in the third row! Because of this, we know that $a$ equals $z$. Keep doing this process. Eventually, you'll either find $a,b,c$ that satisfy this, or you'll be able to make up a counterexample! (In this case such $a,b,c$ making this equation hold exist, so there is no counterexample!) Then since $S$ is linearly independent and spans $U$, $S$ is a basis for $U$!
ODE, and Questioning Method
You can get for any function $g(y)$ and differentiable function $y(x)$ the formula $$ g(y(x))-g(y(4))=\int_4^xg'(y(s))y'(s)\,ds $$ by the fundamental theorem and the chain rule of differentiation. Now insert the ODE $y'=ye^{-x^2}$ of which $y(x)$ is a solution, $$ g(y(x))-g(y(4))=\int_4^xg'(y(s))y(s)e^{-s^2}\,ds. $$ This is in general useless, as the right side still contains the unknown solution function $y$. However in the special case $g'(y(s))y(s)=1$ the right side reduces to a simple quadrature, the integral of a known function. This condition implies that the function $y$ does not change the sign of its value inside the considered interval. As $y(4)=1$, the sign is positive. To achieve this, demand more generally $g'(y)=\frac1y$. As this is a well-known integration task, one easily finds $g(y)=\ln|y|$. (plus arbitrary integration constants that cancel in the next step.) Now insert that into the first formula to finally get $$ \ln y(x)-\underbrace{\ln y(4)}_{=0} =\int_4^xe^{-s^2}\,ds. $$ All the other formulas and calculation paths are just short-hands, known as "method of separation of variables", to replace the reference to substitution and chain rules with a more intuitive computation method.
Show that there exist no integer coordinates on the curve
This problem gives you a chance to review basic modulo arithmetic with the squares. So any integer can be either odd or even. Hence you can write: For any $x \in \mathbb{Z}$, $x = 2k$ or $x = 2k + 1$. Thus: $x^2 = (2k)^2 = 4k^2 \equiv 0 \pmod 4$ or $x^2 = (2k + 1)^2 = 4(k^2 + k) + 1 \equiv 1\pmod 4$. But your equation gives: $x^2 \equiv 3 \pmod 4$ and this can't happen. So the answer follows.
Subcomplexes of a Closed Combinatorial Surface
To the contrary, it is not generally the case. If $K$ is not homeomorphic to a sphere then it is never the case, because $K$ always has a simplicial circle $C$ which is not homotopic to a constant, namely the image of the shortest closed edge path which is not path homotopic to a constant. And if $K$ is homeomorphic to a sphere then it might be true, but still I would say that the simplicial structures on a sphere which have that property are an exception: imagine a very fine simplicial structure on the sphere for which the equator is a simplicial circle.
The open $(0,1) \times (0,1)$ square injectively mapped *into* the interval $(0,1)$
You have an injection $(0,1)^2\to(0,1)$ -- but it is not surjective, because there is nothing that maps to, for example, $$ \frac{1}{99} = 0.0101010101010\ldots $$ De-interleaving the digits of this would produce $\langle 0,\frac19\rangle$, but that is not in $(0,1)^2$. However, an injection is really all you need, because it is easy to find an injection in the other direction, and then the Schröder-Bernstein theorem does the work of stitching them together into a single bijection for you.
finding specific solution to $u_{tt}-u_{xx}=x^2-t^2$ with boundary conditions $u(x,0)=\frac{-x^4}{16}-x, \ u_t(x,0)=1, x \in \mathbb{R}$
$$u(x,t)=\frac{-(x^2-t^2)^2}{16}+F(x+t)+G(x-t)$$ $$u_t=\frac{(x^2-t^2)t}{4}+F'(x+t)-G'(x-t)$$ $\begin{cases} u(x,0)=-\frac{x^4}{16}+F(x)+G(x)=-\frac{x^4}{16}-x \quad\to\quad F(x)+G(x)=-x \\ u_t(x,0)=F'(x)-G'(x)=1 \quad\to\quad F(x)-G(x)=x+c \end{cases}$ $$\begin{cases} F(x)=\frac{c}{2} \quad\to\quad F(x+t)=\frac{c}{2}\\ G(x)=-x-\frac{c}{2} \quad\to\quad G(x-t)=-(x-t)-\frac{c}{2} \end{cases}$$ $u(x,t)=\frac{-(x^2-t^2)^2}{16}+\frac{c}{2}-(x-t)-\frac{c}{2}$ $$u(x,t)=\frac{-(x^2-t^2)^2}{16}-x+t$$
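A quick symbolic check of the final formula with SymPy (verifying the PDE and both initial conditions):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = -(x**2 - t**2)**2 / 16 - x + t

print(sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - (x**2 - t**2)))  # 0
print(sp.simplify(u.subs(t, 0) - (-x**4 / 16 - x)))                      # 0
print(sp.simplify(sp.diff(u, t).subs(t, 0) - 1))                         # 0
```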
integral of absolute value of a function.
I think you are complicating your life. Picture the graph of $\sqrt{3}x\sin(x)$. Since $|\sin(x)|\le1$, the curve lies between the lines $y=-\sqrt{3}x$ and $y=\sqrt{3}x$, and the set of segments joining every point of the curve to the origin is just the part of the plane between these two straight lines (imagine that whole region filled in). Since this region is infinite, as is the plane, the ratio is computed by evaluating the angular filling. The angles of the lines $y=\pm\sqrt{3}x$ are $\pm 60°$, so the overall angular fraction is $\dfrac{4\times60°}{360°}=\dfrac 23$
Name of this recursively defined sequence of prime numbers?
You can find the sequence on OEIS as A007097. The name given is "Primeth recurrence".
a contour integration involving $5$th roots of unity
Hint. By using the residue at infinity we have that $$\oint_{|z|=2} \frac{dz}{(z-3)(z^5-1)}=-2\pi i(\text{Res}(f,3)+\text{Res}(f,\infty)). $$ and those two residues are quite easy to evaluate for $f(z)=\frac{1}{(z-3)(z^5-1)}$. Recall that on the Riemann sphere, the algebraic sum of the complex residues of a meromorphic function is zero. See also the Inside-Outside Theorem.
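For a numerical sanity check: $\text{Res}(f,3)=\frac{1}{3^5-1}=\frac{1}{242}$ and $\text{Res}(f,\infty)=0$ (since $f(z)=O(z^{-6})$), so the hint gives the value $-\frac{\pi i}{121}$. A short NumPy sketch parametrizing $|z|=2$ agrees:

```python
import numpy as np

m = 200000
theta = 2 * np.pi * np.arange(m) / m
z = 2 * np.exp(1j * theta)
dz = 2j * np.exp(1j * theta)                 # dz/dtheta
integral = np.mean(1 / ((z - 3) * (z**5 - 1)) * dz) * 2 * np.pi
print(integral)                              # ~ -0.02596...j
print(-np.pi * 1j / 121)                     # -0.02596...j
```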
Continuous functions in a complete metric space
We can actually prove more with no real difficulty: we can show that the map $f$ is a closed map, meaning that $f[F]$ is closed for all closed $F\subseteq X$. HINT: Let $F$ be an arbitrary closed subset of $X$, and let $y\in\operatorname{cl}f[F]$ be arbitrary. $X$ is a metric space, so there is a sequence $\langle y_n:n\in\Bbb N\rangle$ in $f[F]$ converging to $y$. For each $n\in\Bbb N$ there is an $x_n\in F$ such that $f(x_n)=y_n$. (Why?) Use the fact that $d(y_m,y_n)\ge Cd(x_m,x_n)$ for all $m,n\in\Bbb N$ to show that the sequence $\langle x_n:n\in\Bbb N\rangle$ is a Cauchy sequence. Now use the fact that $X$ is complete to find an $x\in F$ such that $y=f(x)$.
$\bar f(y) = f(Ty)$, how to compute the Hessian of $\bar f(y) $?
Suppose you know the gradient $(g)$ and Hessian $(H)$ of a function in terms of the variable $x$ $$\eqalign{ f = f(x),\,\,\,\,\, g = \frac{\partial f}{\partial x},\,\,\,\,\,\, H = \frac{\partial g}{\partial x} }$$ You are then told that $x$ is not independent, but actually depends on another variable $(x = Sy).\,\,$ Note that the matrix $S$ does not need to be invertible. It might even be rectangular. Let's find the gradient $(p)$ and Hessian $(Q)$ with respect to this new variable, by way of differentials. $$\eqalign{ df &= g^Tdx = g^T(S\,dy) = (S^Tg)^Tdy = p^Tdy \cr p &= \frac{\partial f}{\partial y} = S^Tg \cr \cr dp &= S^T\,dg = S^T(H\,dx) = S^TH(S\,dy) = Q\,dy \cr Q &=\frac{\partial p}{\partial y} = S^THS \cr\cr }$$
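Here is a finite-difference sanity check of $p=S^Tg$ and $Q=S^THS$ in NumPy, using an arbitrary smooth test function and a rectangular $S$ (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 3))             # x = S y, with x in R^5 and y in R^3

def f(x):                                   # arbitrary smooth test function
    return np.sin(x).sum() + 0.5 * (x @ x) ** 2

def grad(fun, v, h=1e-5):                   # central-difference gradient
    g = np.zeros_like(v)
    for i in range(v.size):
        e = np.zeros_like(v); e[i] = h
        g[i] = (fun(v + e) - fun(v - e)) / (2 * h)
    return g

def hess(fun, v, h=1e-4):                   # finite-difference Hessian
    H = np.zeros((v.size, v.size))
    for i in range(v.size):
        e = np.zeros_like(v); e[i] = h
        H[i] = (grad(fun, v + e) - grad(fun, v - e)) / (2 * h)
    return H

y = rng.standard_normal(3)
x = S @ y
fbar = lambda yy: f(S @ yy)

print(np.allclose(grad(fbar, y), S.T @ grad(f, x), atol=1e-5))        # True
print(np.allclose(hess(fbar, y), S.T @ hess(f, x) @ S, atol=1e-2))    # True
```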
Does "homeomorphic" depend on the topology?
Of course! It's very easy to prove that $(C,\mathcal T_1)\cong (C,\mathcal T_2)$ where $\mathcal T_1$ is the topology induced by $\mathbb R^2$ and $\mathcal T_2$ the quotient topology.
non-homogeneous differential equation eigenvalues?
You can change coordinates by calling $$ x=u+2\text{ and }y=v+2 $$ from which you can easily differentiate with respect to $t$: $$ x'=u'\text{ and }y'=v' $$ and the system becomes $$ \left\{ \begin{array}{l} u'=-2u+v\\ v'=u-2v \end{array}\right. $$ The change of coordinates just preserves the direction of the coordinate axis and changes the origin from $(0,0)$ to $(-2,-2)$.
Show $L^2([0,1])$ is not complete with the $L^1$-norm
Let $f_n(x)=0$ for $x\in [0,1/n)$ and $f_n(x)=x^{-1/2}$ for $x\in [1/n,1].$ Let $f(0)=0$ and $f(x)=x^{-1/2}$ for $x\in (0,1].$ Details. Each $f_n$ is bounded and Lebesgue-measurable, so each $f_n$ belongs to $L^1[0,1] $ and to $L^2[0,1].$ And $\int_0^1|f(x)|dx=2<\infty$ so $f\in L^1[0,1].$ We have $\|f-f_n\|_1=\int_0^{1/n}|x^{-1/2}|dx=2n^{-1/2}\to 0$ as $n\to \infty.$ But $f\not \in L^2[0,1]$ because $\int_0^1|f(x)|^2dx=\int_0^1 x^{-1}dx=\infty.$
Definition of Sheaf of Rational Functions on Integral Scheme?
If $X$ is an integral scheme, then $X$ has a unique generic point - the point corresponding to the zero ideal in any open affine neighbourhood. We define the function field of $X$ to be the local ring at the generic point. The sheaf of rational functions $\mathcal K_X$ on $X$ is a constant sheaf. The group of sections over any non-empty open set $U$ is the function field of $X$. For practical purposes, this information is of little use unless we are able to identify $\mathcal O_X$ as a subsheaf of $\mathcal K_X$. We can do this as follows. First, we cover $X$ with a collection of open affines. In any open affine subset ${\rm Spec}\, A \subset X$, the generic point corresponds to the zero ideal $(0) \subset A$, and the function field is the localisation $A_{(0)}$. Given a point in ${\rm Spec}\,A$, represented by a prime ideal $\mathfrak{p} \subset A$, there is a natural inclusion morphism $i_{\mathfrak p} : A_{\mathfrak p} \to A_{(0)}$. For any open set $U \subset {\rm Spec}\,A$, we can define $\mathcal O_X (U) \subset \mathcal K_X(U)$ as the subring of $A_{(0)}$ consisting of all elements of $A_{(0)}$ that are in the image of $i_{\mathfrak p}$ for all $\mathfrak p \in U$. If $U$ intersects more than one open affine in our affine cover, then $\mathcal O_X(U)$ is the subring of the function field consisting of elements that obey this criterion in each open affine that $U$ intersects.
How to find the determinant of this $(2n+2)$ x $(2n+2)$ matrix?
Write w.l.o.g. your matrix in the form $$ A:=\begin{bmatrix} 0 & 0 & x^T & 0 \\ 0 & 0 & 0 & x^T \\ x & 0 & -I & I \\ 0 & x & I & -I \end{bmatrix}. $$ Take any nonzero vector $z$ orthogonal to $x$ ($z^Tx=0$), there is a whole $(n-1)$-dimensional subspace of them. Now $$ A\begin{bmatrix} 0 \\ 0 \\ z \\ z \end{bmatrix}=0. $$ Hence for $n>1$, $A$ is singular and $\det(A)=0$. For $n=1$, it is easy to verify that $\det(A)=x^4$.
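A quick numerical check of both claims (building $A$ from a sample $x$; for $n=1$, $x$ is a scalar):

```python
import numpy as np

def build(x):
    x = np.atleast_1d(np.asarray(x, dtype=float)).reshape(-1, 1)   # column vector
    n = x.shape[0]
    I = np.eye(n)
    z1, zn, zr = np.zeros((1, 1)), np.zeros((n, 1)), np.zeros((1, n))
    return np.block([[z1, z1, x.T, zr],
                     [z1, z1, zr, x.T],
                     [x,  zn, -I,  I ],
                     [zn, x,   I, -I ]])

print(np.linalg.det(build(3.0)))                      # ~81.0 = 3**4  (n = 1)
print(abs(np.linalg.det(build([1.0, 2.0, -1.5]))))    # ~0, singular  (n = 3)
```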
Integration by parts of $\frac{\ln(1+\sqrt{x})}{\sqrt{x}}$
\begin{align} & \int_0^1 \frac{\ln(1+\sqrt{x})}{\sqrt{x}} \,dx = 2\int_0^1 \ln(\ \underbrace{1+\sqrt{x}}_u\ ) \left(\ \underbrace{\frac{1}{2\sqrt{x}}\,dx}_{du}\ \right) \\[10pt] = {} & 2 \int_1^2 \ln u \, du = \underbrace{2\int w \, du = 2wu-2\int u\,dw}_{\text{integration by parts}} = 2u\ln u - 2\int u \frac{du}u \\[15pt] = {} & 2u\ln u - 2\int du = \cdots\cdots \end{align}
Terminology of adapted coordinate charts
Your problem is one of language, not of mathematics. In particular, your sentence Is it because $S$ has "adapted" the coordinate of $M$ in some sense? suggests that you're mistaking the word "adapted" for "adopted". As far as I can tell, the terminology simply comes from the fact that the chart on $M$ is a somewhat special chart, adapted to $S$ in order to give a nice form to $S\cap U$.
expectation maximization in coin flipping problem - one more ...
Okay Ranjan. There are two biased coins with different biases. If we only flipped one coin many times ($n$), we could estimate its bias θa, since (number of successes)/$n$ is a consistent and unbiased estimate of it. But we also have a second coin with a different bias, say θb. Now when we flip, we don't know which coin we flipped (this is how the missing information comes in). But if the coin is selected at random so that each coin has probability 1/2 of being selected, the expected value for any flip is (θa + θb)/2. According to the experiment you pick a coin 5 times, and each time you flip it 10 times. They are saying that in step 1 you have an initial guess at both θa and θb. If you pretend those are the actual values, then the first 10 flips come from either coin A or coin B, and you can use the data to calculate a likelihood for coin A and a likelihood for coin B. You calculate these two likelihoods and pick the coin with the larger likelihood. Now pretend that you made the right decision, say you selected A. Take that likelihood and find the θa that maximizes it. This revises your estimate of θa but doesn't change the estimate of θb. Now repeat this process with the new pair of estimates on the next set of flips. Doing this 5 times may allow you to revise both the θa and θb estimates. But 5 times will not be enough to converge to the solution. Repeat this many more times and the estimates of both θa and θb should converge. That will be the result of the EM algorithm. The first step was expectation: you take the estimates for the two parameters to determine the likelihoods. Then step 2 picks the maximum likelihood estimates. The new estimates are then used for the next expectation step. I hope this helps. I think describing the whole process in different words will answer your specific question and make the whole process clearer.
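To make the procedure concrete, here is a minimal Python sketch of the hard-assignment variant described above, on made-up data (5 trials of 10 flips each; the head counts and starting guesses are invented for illustration, and the classical EM algorithm would use weighted "soft" assignments rather than picking the single more likely coin):

```python
from math import comb

heads = [5, 9, 8, 4, 7]          # made-up data: heads out of 10 flips per trial
n = 10
theta_a, theta_b = 0.6, 0.5      # initial guesses

def lik(theta, h):
    """Binomial likelihood of h heads out of n flips for a coin of bias theta."""
    return comb(n, h) * theta**h * (1 - theta)**(n - h)

for _ in range(20):
    # "E"-like step: assign each trial to the coin that explains it better
    to_a = [h for h in heads if lik(theta_a, h) >= lik(theta_b, h)]
    to_b = [h for h in heads if lik(theta_a, h) < lik(theta_b, h)]
    # "M"-like step: maximum-likelihood re-estimate from the assigned trials
    if to_a:
        theta_a = sum(to_a) / (n * len(to_a))
    if to_b:
        theta_b = sum(to_b) / (n * len(to_b))

print(theta_a, theta_b)          # converges to 0.8 and 0.45 for this data
```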
Ramanujan's Tau function, an arithmetic property
Apply the operator $\Delta$ discussed in this question to the modular form $\Delta$ (sorry for the conflict of notation vis-a-vis $\Delta$!). The result is a modular form of weight $14$. What can you say about it?
If some vectors in $\mathbb Q^n$ are linearly independent over $\mathbb Q$ , then are they also linearly independent over $\mathbb C$?
Yes. We can prove that this is the case "by Gaussian elimination" as follows: Let $A$ be the matrix whose columns are $v_1,\dots,v_k$. Since these columns are linearly independent, the problem $$ A x = 0, \quad x \in \Bbb Q^k $$ has the unique solution $x = \vec 0$. It follows that there is an invertible matrix $E$ (a sequence of rational elementary row-operations) such that $R = EA$ is in reduced row-echelon form, with a pivot in each column. We then note that the solution set of $Ax = 0$ with $x \in \Bbb C^k$ is the same as the solution set of $Rx = 0$ for $x \in \Bbb C^k$. Since $R$ has a pivot in each column, the problem $Rx = 0$ has the unique solution $x = 0$. Thus, the only solution to $$ Ax = 0, \quad x \in \Bbb C^k $$ is $x=0$, which is to say that the columns of $A$ are independent over $\Bbb C$.
Linear Algebra: Expansion of the Inverse of the Difference of Two Matrices Multiplied by Another Matrix
You can use the Woodbury matrix identity if you want to expand $(A-B)^{-1}$. According to this identity, if $A$, $B$, and $A-B$ are invertible, it holds that $$(A-B)^{-1}=A^{-1}-A^{-1}\left(A^{-1}-B^{-1}\right)^{-1}A^{-1}.$$ You can then left-multiply by $C$ to get the expansion. In the special case that $B$ is a rank-one matrix, you can use the Sherman-Morrison formula to expand the inverse.
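A quick NumPy check of this special case for random matrices (the shift by $5I$ below is just to make $A$, $B$, and $A-B$ comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 5 * np.eye(4)
B = rng.standard_normal((4, 4))

lhs = np.linalg.inv(A - B)
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
rhs = Ai - Ai @ np.linalg.inv(Ai - Bi) @ Ai
print(np.allclose(lhs, rhs))   # True
```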
Formally show that $f(0)=0$, $|f'(x)|\leq|f(x)|$ on $[0,1]$ implies $f(x)=0$ on $[0,1]$
The mean value theorem is what you're after. Assume $f(x)$ is not $0$ on the whole interval, and take some $a_0\in (0,1)$ such that $f(a_0)\neq 0$. Then by the mean value theorem there must be some $a_1\in (0,a_0)$ such that $$ f'(a_1) = \frac{f(a_0) - f(0)}{a_0-0} = \frac{f(a_0)}{a_0}, $$ so $|f'(a_1)| = \frac{|f(a_0)|}{a_0}$. By the assumed property of $f$, we have $|f(a_1)|\geq |f'(a_1)| = \frac{|f(a_0)|}{a_0}$. Now again by the mean value theorem, there is an $a_2\in (0, a_1)$ such that $$ f'(a_2) = \frac{f(a_1) - f(0)}{a_1-0}, $$ and again we get $|f(a_2)|\geq\frac{|f(a_1)|}{a_1}\geq \frac{|f(a_0)|}{a_0^2}$. And we continue this pattern. Each step we create a new $a_n$ for which $|f(a_n)|\geq \frac{|f(a_0)|}{a_0^n}$. Since $a_0<1$, this means that $|f(a_n)|\to \infty$. But a continuous function on a closed and bounded interval must be bounded. So we reach a contradiction.
Definition of pointwise convergence
Pointwise convergence of $(f_n)_{n\in\mathbb{N}}$ to $f$ means that for each point $x \in I$, we have $\lim_{n \rightarrow \infty}{f_n(x)}=f(x)$. Essentially we take a point $x \in I$ and look at $f_n(x)$ as $n \rightarrow \infty$. If this converges to a limit and does so for all $x \in I$, then it makes sense to say $(f_n)_{n\in\mathbb{N}}$ converges to the function $f(x)=\lim_{n\rightarrow\infty}f_n(x)$. We call this pointwise convergence because it only looks at individual points rather than the functions as a whole, which is in contrast to something stronger like uniform convergence. An example would be $$f_n(x)=\frac{x^2}{n} (x \in \mathbb{R})$$ For any $x \in \mathbb{R}$ we then have $$ \lim_{n \rightarrow \infty}f_n(x) = \lim_{n \rightarrow \infty}\frac{x^2}{n} = x^2\lim_{n \rightarrow \infty}\frac{1}{n}=x^2 \cdot0 = 0$$ So $(f_n)_{n\in\mathbb{N}}$ converges pointwise to the null function $f(x)=0$
Boundedness of sequence of functions
Consider the sequence of functions $$ f_n = 1/n^2 \cdot \chi_{(n,n+1)}, $$ where $\chi_A$ is the indicator function of the set $A$. I will leave the details for you. The important observation is that the existence of a function $h$ as in your hypothesis would allow you to apply dominated convergence. This should help you to show that there is no such function. (One can also verify it directly).
Prove that if $f$ is integrable on $[0,1]$, then $\lim_{n→∞}\int_{0}^{1} x^{n}f(x)dx = 0$.
If you know that $f$ is bounded, so that $-M \le f \le M$, then you have $\int_0^1 x^n f(x)\,dx \le M \int_0^1 x^n\,dx$. But you can compute $\int_0^1 x^n\,dx$ directly and show that it converges to 0. Likewise, $\int_0^1 x^n f(x)\,dx \ge -M \int_0^1 x^n\,dx$. Use the squeeze theorem. Alternatively, use the fact that $$\left| \int_0^1 x^n f(x)\,dx \right| \le \int_0^1 x^n |f(x)|\,dx \le M \int_0^1 x^n\,dx.$$ (This assumes that "integrable" means "Riemann integrable". If it means "Lebesgue integrable", then integrable functions need not be bounded, and this proof doesn't work. But if you're working with Lebesgue integrable functions then you probably know the dominated convergence theorem and should just use that.) (Acknowledgement of priority: I just noticed that Jon's comment contains exactly the same hint.)
Factors of a sequence resulting from repeated exponentiation
Let $$a_i = 2^{x_i}, \qquad x_i = p_1^{p_2^{\cdots ^{p_i}}}$$ Then $b_i = a_i - a_{i-1} = a_{i-1} \cdot (2^{(x_i - x_{i-1})} - 1)$. Remove the trivial factor: $$\frac{b_i}{a_{i-1}} = 2^{(x_i - x_{i-1})}-1$$ This is also divisible by $3$, since the difference between the exponents is even (and $2^{2m}-1=4^m-1$ is divisible by $3$).
Roots of simultaneous power sum equations (numerically or otherwise)
The elementary symmetric polynomials may be computed from the power sums. With these you can then construct a single polynomial in one unknown which has all the $r_j$ as roots. Rather than write the solutions explicitly, the scheme is easier to remember in the form suited to recursive evaluation. Define: $$ e_1 = k_1 \\ 2e_2 = e_1k_1 - k_2 \\ 3e_3 = e_2k_1-e_1k_2 +k_3 \\ \cdots $$ Then (with $e_0=1$) the polynomial $$ P(x) = \sum_{k=0}^n (-1)^k e_k x^{n-k} $$ has roots $r_1, r_2,\dots$
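A small Python sketch of the recursion (using known roots, just to check that we recover the right polynomial; the general step is $m\,e_m=\sum_{i=1}^{m}(-1)^{i-1}e_{m-i}k_i$):

```python
roots = [1, 2, 3]
n = len(roots)
k = [sum(r**m for r in roots) for m in range(1, n + 1)]   # power sums k_1..k_n

e = [1]                                                    # e_0 = 1
for m in range(1, n + 1):
    e.append(sum((-1)**(i - 1) * e[m - i] * k[i - 1]
                 for i in range(1, m + 1)) / m)

print(e)   # [1, 6.0, 11.0, 6.0] -> P(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3)
```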
Let $T^t$ be a linear transformation from $W^* \to V^*$, how do I see this goes into $V^*$
For $\lambda\in W^\ast$, you want to check if $T^t(\lambda)\in V^\ast$. That is, does $T^t(\lambda)$ send elements of $V$ into $F$? So for $v\in V$, $$ T^t(\lambda)(v)=\lambda(Tv)\in F $$ since $Tv\in W$, and $\lambda$ sends elements of $W$ into $F$.
Proving a complex function is continuous.
I'm sure you will agree with me that for $z \not= 0$, $f(z)$ is continuous. The only point of concern is the continuity at $z=0$. Now consider $$|f(z)|=|\frac{z^2}{|z|}|=\frac{|z|^2}{|z|}=|z|$$ and we can use a result: $|f|$ has a limit $0$ as $z \rightarrow c$ $\iff$ $f$ has a limit $0$ as $z \rightarrow c$ And we have $$\lim_{z \to 0}|z|=0$$ So $$\lim_{z \to 0}f(z)=0=f(0)$$ and it follows that $f$ is continuous at $z=0$.
Special circulant matrix eigenvalues
This is $I-\frac1nE$, where $I$ is the identity and $E$ is the matrix with all entries $1$. All vectors orthogonal to the all-ones vector are eigenvectors of $E$ with eigenvalue $0$, so the $(n-1)$-dimensional orthogonal complement of the $1$-dimensional space spanned by that vector is an eigenspace of $E$ with eigenvalue $0$, and thus of $I-\frac1nE$ with eigenvalue $1$. The all-ones vector itself is an eigenvector of $E$ with eigenvalue $n$, hence of $I-\frac1nE$ with eigenvalue $0$.
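Numerically, for instance with $n=5$:

```python
import numpy as np

n = 5
M = np.eye(n) - np.ones((n, n)) / n
print(np.round(np.linalg.eigvalsh(M), 10))   # [0. 1. 1. 1. 1.]
```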
function of a function (composite function) with a function not equal to 3
$x \ne 3$ indicates that the domain of $g$ does not include the value $x=3$. Indeed, $g(x) = \dfrac {6}{3-x}$ is undefined at $x=3$. Basically, the rest of the question follows from what you have done, but for $g(h(x))$, the function is not defined for $h(x) = 3$, i.e. $x = -\frac15$, while $h(g(x))$ is not defined for $x=3$.
Use induction to prove that Legendre polynomials solve the corresponding differential equation
There are three recurrence relations that help here: $$P_{n+1}^{'} -P_{n-1}^{'} = (2n+1)P_n$$ $$(n+1)P_{n+1} = (2n+1)xP_n -nP_{n-1}$$ $$P_{n+1}-P_{n-1} = (x^2-1)\cdot \frac{2n+1}{n(n+1)}\cdot P_n^{'}$$ The induction step then looks as follows: \begin{equation} \begin{split} [(1-x^2)P_{n+1}^{'}]^{'} &=[(1-x^2)(P_{n-1}^{'}+(2n+1)P_n)]^{'} \\ &=[(1-x^2)P_{n-1}^{'}]^{'}+(2n+1)\left(-2xP_n + (1-x^2)P_n^{'}\right) \\ &=-(n-1)nP_{n-1}-2\{(n+1)P_{n+1}+nP_{n-1}\} + n(n+1)(P_{n-1}-P_{n+1}) \\ &=\{-(n-1)n-2n+ n(n+1)\}P_{n-1}+\{-2(n+1)-n(n+1)\}P_{n+1} \\ &=n\{-(n-1)-2+(n+1)\}P_{n-1}-(n+1)(2+n)P_{n+1} \\ &=-(n+1)(n+2)P_{n+1} \end{split} \end{equation} Q.E.D.
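The claim can also be spot-checked symbolically, e.g. with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    P = sp.legendre(n, x)
    lhs = sp.diff((1 - x**2) * sp.diff(P, x), x)
    assert sp.expand(lhs + n * (n + 1) * P) == 0
print("[(1-x^2) P_n']' = -n(n+1) P_n holds for n = 0..5")
```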
What is the use of $H_s$ for non-integer $s$?
The fractional Sobolev spaces are important when formulating elliptic boundary value problems in Sobolev spaces. Consider the Dirichlet problem for the Poisson equation. That is, you are interested in a solution $u$ to $\Delta u = g$ on a bounded open domain $\Omega \subset \mathbb{R}^n$ that satisfies a given boundary condition $u | \partial \Omega = f$ for some $f$ that is defined on $\partial{\Omega}$. How do you formulate "boundary" conditions when functions in Sobolev spaces aren't necessarily even continuous? The boundary $\partial \Omega$ of (a sufficiently nice open subset) $\Omega$ is of measure zero and functions in $H^k(\Omega)$ are defined a priori only a.e., and changing them on a measure zero subset doesn't affect them. Still, if $\partial \Omega$ is nice enough (say, a $C^k$ manifold), and if $k \geq 1$ is an integer, one can show that there are trace operators $T : H^k(\Omega) \rightarrow L^2(\partial \Omega)$ that extend continuously the usual restriction map $f \mapsto f| \partial \Omega$ on $C^{\infty}(\Omega) \cap H^k(\Omega)$. What is the image of $T$? That is, what are all the possible "boundary values" of a function in $H^k(\Omega)$? It turns out to be precisely $H^{k-\frac{1}{2}}(\partial \Omega)$! Using this, one can formulate and prove uniqueness and existence results for solutions of PDEs in Sobolev spaces. For example, one has that the map $$ u \mapsto (\Delta u, u|_{\partial \Omega}) = (\Delta u, Tu) $$ is an isomorphism of $H^1(\Omega) \rightarrow H^{-1}(\Omega) \times H^{\frac{1}{2}}(\partial\Omega)$, and so, given $g \in H^{-1}(\Omega)$ and $f \in H^{\frac{1}{2}}(\partial\Omega)$, the Dirichlet problem for the Poisson equation $\Delta u = g$, $u|_{\partial \Omega} = f$ has a unique solution in $H^1(\Omega)$. There are also higher order trace operators, which correspond to normal derivatives of various orders, whose image again lies in fractional Sobolev spaces.
Is there a function that is differentiable but not integrable?
It's important to know that if a function is continuous on a closed interval, it is integrable on that interval. This question seems to be getting at how strict that condition is. If a function is differentiable on an open interval, then it is automatically continuous on that interval. The question is whether it can be extended to a continuous function on the enclosing closed interval. And one way this can fail to happen is if the function has an infinite limit at an endpoint. So that is how we construct the counterexample. The function $$ f(x) = \begin{cases} 0 & x=0 \\ \frac{1}{x} & x > 0 \end{cases} $$ is defined on $[0,1]$, differentiable on $(0,1)$, but not integrable on $[0,1]$.
Proving properties for the Poisson-process.
Let $(X_t)_{t \geq 0}$ be a Poisson process with intensity $\lambda$. Step 1: $(X_t)_{t \geq 0}$ has almost surely increasing sample paths. Proof: Fix $s \leq t$. Since $(X_t)_{t \geq 0}$ has stationary increments and $X_{t-s}$ is Poisson distributed, we have $$\mathbb{P}(X_s>X_t) = \mathbb{P}(X_t-X_s < 0) = \mathbb{P}(X_{t-s}<0)=0,$$ i.e. $X_s \leq X_t$ almost surely. As $\mathbb{Q}_+$ is countable, this implies $$\mathbb{P}(\forall q \leq r, q,r \in \mathbb{Q}_+: X_q \leq X_r)=1.$$ Since $(X_t)_{t \geq 0}$ has càdlàg sample paths, this already implies $$\mathbb{P}(\forall s \leq t: X_s \leq X_t)= 1.$$ Step 2: $(X_t)_{t \geq 0}$ takes almost surely only integer values. We have $\mathbb{P}(X_q \in \mathbb{N}_0)=1$ for all $q \in \mathbb{Q}_+$. Hence, $\mathbb{P}(\forall q \in \mathbb{Q}_+: X_q \in \mathbb{N}_0)=1$. As $(X_t)_{t \geq 0}$ has càdlàg sample paths, we get $$\mathbb{P}(\forall t \geq 0: X_t \in \mathbb{N}_0)=1.$$ (Note that $\Omega \backslash \{\forall t \geq 0: X_t \in \mathbb{N}_0\} \subseteq \{\exists q \in \mathbb{Q}_+: X_q \notin \mathbb{N}_0\}$ and that the latter is a $\mathbb{P}$-null set.) Step 3: $(X_t)_{t \geq 0}$ has almost surely integer-valued jump heights. We already know from steps 1 and 2 that there exists a null set $N$ such that $X_t(\omega) \in \mathbb{N}_0$ and $t \mapsto X_t(\omega)$ is non-decreasing for all $\omega \in \Omega \backslash N$. Consequently, we have $$X_s(\omega) - X_t(\omega) \in \mathbb{N}_0$$ for any $s \geq t$ and $\omega \in \Omega \backslash N$. On the other hand, we know that the limit $$\lim_{t \uparrow s} (X_s(\omega)-X_t(\omega)) = \Delta X_s(\omega) $$ exists. Combining both considerations yields $\Delta X_s(\omega) \in \mathbb{N}_0$. (Check that the following statement is true: If $(a_n)_{n \in \mathbb{N}} \subseteq \mathbb{N}_0$ and the limit $a:=\lim_n a_n$ exists, then $a \in \mathbb{N}_0$.) Since this holds for any $s \geq 0$, we get $$\Delta X_s(\omega) \in \mathbb{N}_0 \qquad \text{for all $\omega \in \Omega \backslash N$, $s \geq 0$},$$ i.e. $$\mathbb{P}(\forall s \geq 0: \Delta X_s \in \mathbb{N}_0)=1.$$ Step 4: $(X_t)_{t \geq 0}$ has almost surely jumps of height $1$. By step 3, it suffices to show that $$\mathbb{P}(\exists t \geq 0: \Delta X_t \geq 2)=0.$$ Since the countable union of null sets is a null set, it suffices to show $$p(T) := \mathbb{P}(\exists t \in [0,T]: \Delta X_t \geq 2)=0$$ for all $T>0$. To this end, we first note that \begin{align*} \Omega_0 \cap \{\exists t \in [0,T]: \Delta X_t \geq 2\} &\subseteq \Omega_0 \cap \bigcup_{j=1}^{kT} \{X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2\} \\ &\subseteq \bigcup_{j=1}^{kT} \{X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2\} \end{align*} for all $k \in \mathbb{N}$ where $$\Omega_0 := \{\omega; s \mapsto X_s \, \, \text{is non-decreasing}\}.$$ Using that $\mathbb{P}(\Omega_0)=1$ (by Step 1) and the fact that the increments $X_{\frac{j}{k}}-X_{\frac{j-1}{k}}$ are independent Poisson distributed random variables with parameter $\lambda/k$, we get $$\begin{align*} p(T) &=\mathbb{P}(\Omega_0 \cap \{\exists t \in [0,T]: \Delta X_t \geq 2\}) \\ &\leq \sum_{j=1}^{kT} \mathbb{P}(X_{\frac{j}{k}}-X_{\frac{j-1}{k}} \geq 2) \\ &= kT \mathbb{P}(X_{\frac{1}{k}} \geq 2) = kT \left(1-e^{-\lambda/k} \left[1+\frac{\lambda}{k} \right]\right) \\ &= \lambda T \frac{1-e^{-\lambda/k} \left(1+\frac{\lambda}{k} \right)}{\frac{\lambda}{k}}. \end{align*}$$ Letting $k \to \infty$, we find $$p(T) \leq \lambda T \frac{d}{dx} (-e^{-x}(1+x)) \bigg|_{x=0} = 0.$$
Dirac Operators on $S^1$
I do not really see your problem, as everything you stated is basically correct and there is no real contradiction as far as I see. Still, I would like to guess what you might be stumbling over. It is true that the Dirac operators are "the same" in some sense, as they can both be written as $i d/dt$. However, they are not, as they act on different bundles. When writing the operator this way, one makes various identifications, as I explain below. You are right with your statement that for both spin structures, the spinor space is isomorphic to a different bundle. However, the isomorphisms differ. Let us write $S^1 = \mathbb{R}/\mathbb{Z}$. For the disconnected-cover-spin structure, the associated spinor space is then $$ \Sigma_1 = S^1 \times \mathbb{C},$$ a trivial bundle. One can write this as $$ \Sigma_1 = \mathbb{R} \times \mathbb{C} / \sim $$ where $\sim$ is the equivalence relation $$ (t, z) \sim (t^\prime, z^\prime) \Longleftrightarrow t - t^\prime \in \mathbb{Z} ~~ \text{and} ~~ z = z^\prime. $$ For the connected-cover-spin structure, the spinor space is $$ \Sigma_2 = \mathbb{R} \times \mathbb{C} / \sim$$ where $\sim$ is the equivalence relation $$ (t, z) \sim (t^\prime, z^\prime) \Longleftrightarrow t-t^\prime \in \mathbb{Z} ~~\text{and}~~ z = e^{i \pi (t-t^\prime)} z^\prime $$ This bundle is trivializable but not trivial itself. Now the operator $D = i d/dt$ is usually defined on the space $\mathbb{R} \times \mathbb{C}$, and of course it descends well to operators $D_1$ and $D_2$ on the factor spaces $\Sigma_1$ and $\Sigma_2$. However, $D_1$ and $D_2$ are different operators, and they have different properties; for example, as you mentioned, the kernel of $D_2$ is trivial while the kernel of $D_1$ is not. Edit: Like any differential operator, the Dirac operator is a local object. In fact, it is given by $$ \sum_{j=1}^n e_j \cdot \nabla_{e_j} $$ for an orthonormal basis $e_1, \dots, e_n$, where $\nabla$ is the Levi-Civita connection on spinors. Now in the case of $S^1$, the manifold is flat, so the Levi-Civita connection coincides with $d$. The spinor bundle is isomorphic to $\mathbb{C}$, by identifying the vector $e_1$ with $i$ (note that there is only one positively oriented local ONB $e_1$ on $S^1$ -- which btw is even globally defined). This is why in general, Dirac operators always have to coincide locally, in some sense. And also, this is the reason why the Dirac operator on $S^1$ is locally given by $i d/dt$ in any case.
Solving an absolute value inequality without using a certain property of the absolute value.
Hint: Look at $2x+3>0 \implies 2x+3\leq 4$ and $2x+3\leq 0 \implies -(2x+3)\leq 4$
Showing that a certain Lie Group is diffeomorphic to $\mathbb{R}^{3}$
Yes, the map $$S^1\times\mathbb{R}^2\to G,\quad (\theta,\alpha,\beta)\mapsto\left[ \begin{matrix} \cos(\theta) & \sin(\theta) & 0 & \alpha \\ -\sin(\theta) & \cos(\theta) & 0 & \beta \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix} \right] \ $$ is a diffeomorphism and hence $G$ is not diffeomorphic to $\mathbb{R}^3$. However, $S^1\times\mathbb{R}^2$ (and hence $G$) is diffeomorphic to $\mathbb{R}^3-\{z\text{-axis}\}$, as can be seen by rotating an open half plane around the $z$-axis. To see that $\mathbb{R}^3$ and $S^1\times\mathbb{R}^2$ are not diffeomorphic, you can look at their fundamental group: $\pi_1(\mathbb{R}^3)=1$ while $\pi_1(S^1\times\mathbb{R}^2)=\pi_1(S^1)\times\pi_1(\mathbb{R}^2)=\mathbb{Z}\times 1=\mathbb{Z}$.
Uncountable product of many copies of $\mathbb{Z}$ is not paracompact
To find facts about spaces, you can try looking them up at Pi Base. In this case, it says your space is $T_2$ and not normal, but paracompact + $T_2$ implies normal, so it is not paracompact.
Is the set of non-decreasing bounded continuous functions a compact set with the norm $d(f,g)=\sup|f-g|$?
Your set is unbounded and thus not compact (consider $f_n(x)=nx$).
Prove the following about an eigenvector, $w$
If $Aw = \lambda w$, then $Bw = A^2 w = A(Aw) = A(\lambda w) = \lambda A w = \lambda^2 w$
Entire function satisfying $f(z)=f(zi)$ for all $z\in \mathbb{C}$
Let $g(z):=f(z)-f(0).$ Then we have $g(0)=0.$ Differentiating $f(z)=f(iz)$ gives $f'(z)=if'(iz)$ and $f''(z)=-f''(iz)$; evaluating at $z=0$ gives $f'(0)=if'(0)$ and $f''(0)=-f''(0)$, hence $g'(0)=f'(0)=0$ and $g''(0)=f''(0)=0.$ Then there is an entire function $h$ such that $$g(z)=z^3h(z).$$ This shows that $\frac{g(z)}{z^3}$ has a removable singularity at $0$. Hence $\frac{g(z)}{z^3}$ is bounded for $|z|<1.$ From $|f(z)|\leq \alpha |z|^3$ for $|z| \ge 1$, we see that $g(z)/z^3$ is bounded on $ \mathbb C.$ It follows by Liouville that $g(z)/z^3$ is constant. Hence, there is $c$ such that $$f(z)=cz^3+f(0)$$ for all $z$. We then get $$cz^3+f(0)=f(z)=f(iz)=-icz^3+f(0)$$ for all $z$. It follows that $c=0.$ Conclusion: $f$ is constant.
Homology of free loop space and Hochschild cohomology
Let’s use the following grading: On a dual complex, $$((C_*)^\vee)^i=Hom(C_*,\mathbb{Z})^i=Hom(C_i,\mathbb{Z}).$$ On the Hochschild chain complex of $C^*(X)$, $$CH_i(C^*(X),C^*(X))=\bigoplus_{i_1+\cdots+ i_k=i+k} C^{i_1}(X)\otimes\cdots\otimes C^{i_k}(X).$$ Therefore $CH_*$ is honestly $\mathbb{Z}$-graded, and the Hochschild differential increases degree: $\partial^*+\delta: CH_*\to CH_{*+1}$. With the above grading, there is an isomorphism $$H^i(LX)\cong HH_i(C^*(X),C^*(X))\ \ \ \forall i$$ in case $X$ is simply-connected. The validity of dualizing the above isomorphism could come from the Universal Coefficient Theorem. For simplicity let’s work over $\mathbb{Z}$, and assume that homology groups are finitely generated in each degree. For a degree-decreasing complex $\{\cdots \to C_*\to C_{*-1}\to \cdots\}$, form its dual complex $C^*=(C_*)^\vee$, then $$F(H^i)\cong F(H_i),\ \ \ T(H^i)\cong T(H_{i-1}),$$ where $F,T$ denotes free part and torsion part, respectively. Similarly, if $\{\cdots \to C^*\to C^{*+1}\to \cdots\}$ is a degree-increasing complex, its dual complex $\bar{C}_*=(C^*)^\vee$ has $$F(\bar{H}_i)\cong F(H^i),\ \ \ T(\bar{H}_i)\cong T(H^{i+1}).$$ As a consequence, take $\bar{C}_*= C^{\vee\vee}_* $ to be the cocochain complex obtained by doubly dualizing $C_*$, we have $$F(\bar{H}_i)\cong F(H_i),\ \ \ T(\bar{H}_i)\cong T(H^{i+1})\cong T(H_i).$$ Therefore $C_*\to C_*^{\vee\vee}$ is a quasi-isomorphism. By this MO question I think the finite generation assumption is satisfied in our situation. Since $CH^*(C^*(X),C^*(X)^\vee)$ is exactly the dual complex of $CH_*(C^*(X),C^*(X))$, where the differential on the latter is degree-increasing, we have \begin{align*} T(H_i(LX))&\cong T(H^{i+1}(LX))\\ &\cong T(HH_{i+1}(C^*(X),C^*(X)))\\ &\cong T(HH^{i}(C^*(X),C^*(X)^\vee)). \end{align*} The free parts are of course identified in the same degree, so $$H_i(LX)\cong HH^{i}(C^*(X),C^*(X)^\vee).$$ Finally, the $C^*(X)$-bimodule quasi-isomorphism $C_*(X)\to C_*(X)^{\vee\vee}=C^*(X)^{\vee}$ induces an isomorphism $$HH^{i}(C^*(X),C_*(X)))\cong HH^{i}(C^*(X),C^*(X)^{\vee}),$$ and we are done.
True or False: Let $G, H$ be finite groups. Then any subgroup of $G × H$ is equal to $A × B$ for some subgroups $A<G$ and $B<H$.(TIFR GS2018)
For a more general result: Lemma. Let $G,H$ be finite groups. Then $\gcd(|G|, |H|)=1$ if and only if any subgroup of $G\times H$ is of the form $G'\times H'$ for some subgroups $G'\subseteq G$ and $H'\subseteq H$. Proof. "$\Rightarrow$" Let $K$ be a subgroup of $G\times H$. Let $e_G, e_H$ be the neutral elements of $G, H$ respectively. Now let $(x,y)\in K$. Since $\gcd(|G|, |H|)=1$, it follows that the orders of $y$ and $y^{|G|}$ are the same. Thus $y^{|G|}$ generates $\langle y\rangle$. In particular we have $$(x,y)\in K\ \Rightarrow$$ $$(x^{|G|}, y^{|G|})=(e_G, y^{|G|})\in K\ \Rightarrow$$ $$(e_G, y)\in K$$ The last implication holds because $y$ can be generated from $y^{|G|}$. Analogously if $(x,y)\in K$ then $(x, e_H)\in K$. Now if you consider the projections $$\pi_G:G\times H\to G$$ $$\pi_H:G\times H\to H$$ then you (obviously) always have $$K\subseteq \pi_G(K)\times\pi_H(K)$$ What we've shown is the opposite inclusion, which completes the proof. "$\Leftarrow$". Assume that $d=\gcd(|G|, |H|)\neq 1$. Let $p|d$ be a prime divisor and pick elements $x\in G, y\in H$ such that $|x|=|y|=p$ (they exist by Cauchy's theorem). Now consider the subgroup $K$ of $G\times H$ generated by $(x, y)$. This subgroup is of prime order $p$. In particular if $K=G'\times H'$ then either $G'$ or $H'$ has to be trivial (because the order of the product is the product of the orders and $p$ is prime). But that is impossible since $(x,y)\in K$ and neither $x$ nor $y$ is trivial. Contradiction. $\Box$ So what it means is that if the orders of $G$ and $H$ are not relatively prime then there is a subgroup of $G\times H$ that is not a product of subgroups of $G$ and $H$. This should give you plenty of examples, e.g. $$G=H=\mathbb{Z}_{n}$$ $$K=\langle(1,1)\rangle$$
Evaluate $\frac{\partial^2}{\partial t^2} \left[ \prod_{j=1}^k (1+t+\dots+t^{d_j -1}) \right]$ at $t=1$
Let $$f_i(x)=\sum_{n=0}^{d_i-1}x^n=1+x+\cdots+x^{d_i-1}$$ and let $$\Psi=\prod_{j=1}^kf_j(t)$$ We have that $$\Phi=\frac\partial{\partial t}\Psi=\Psi\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)$$ Therefore $$\frac{\partial^2}{\partial t^2}\Psi=\frac\partial{\partial t}\Phi=\Psi'\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)+\Psi\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)'=\Psi\left(\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)^2+\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)'\right)$$ Finally, after simplification, $$\Psi\left(\left(\sum^k_{j=1}\frac{f_j'(t)}{f_j(t)}\right)^2+\sum^k_{j=1}\frac{f_j(t)f_j''(t)-f_j'(t)^2}{f_j(t)^2}\right)$$ The rest is just computing the derivatives of $f_j$ at $t=1$, which simplify nicely in terms of $d_j$. Further Calculations $$f_j(1)=d_j$$ $$f_j'(1)=\frac{d_j(d_j-1)}{2}$$ $$f_j''(1)=\frac{d_j(d_j-1)(d_j-2)}{3}$$ Thus you may finally plug and chug. After the plug and chug I get $$\frac{\partial^2}{\partial t^2}\Psi\,\Bigg|_{t=1}=\left(\prod^k_{j=1}d_j\right)\left(\left(\sum^k_{j=1}\frac{d_j-1}2\right)^2+\sum^k_{j=1}\frac{(d_j-1)(d_j-5)}{12}\right)$$
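A SymPy check of this closed form for a sample choice of degrees (here $d=(2,3,4)$, chosen arbitrarily):

```python
import sympy as sp
from math import prod

t = sp.symbols('t')
d = [2, 3, 4]                          # arbitrary sample degrees

Psi = sp.Integer(1)
for dj in d:
    Psi *= sum(t**n for n in range(dj))

direct = sp.diff(Psi, t, 2).subs(t, 1)
closed = prod(d) * (sp.Rational(sum(dj - 1 for dj in d), 2)**2
                    + sum(sp.Rational((dj - 1) * (dj - 5), 12) for dj in d))
print(direct, closed, direct == closed)   # 196 196 True
```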
If $M$ is finitely generated as an $R$-module, is $M$ finitely generated as an $S$-module, and is $S$ finitely generated as an $R$-module?
If $M$ is finitely generated as an $R$-module, then since $R$ is a subring of $S$ we have that $M$ is finitely generated as an $S$-module (we just happen to be able to restrict the coefficients to be only elements of $R$ if we want, which are still elements of $S$). But $S$ need not be finitely generated as an $R$-module. For example, we could use $S=\mathbb Z^\omega$, $R$ the subring isomorphic to $\mathbb{Z}$ where the elements in each coordinate are the same, $M = \mathbb{Z}$ and $S$ acts on $M$ by multiplying by the first coordinate. Then $M$ is finitely generated as an $R$-module (and hence as an $S$-module) but clearly $S$ is not finitely generated as an $R$-module.
Calculate average using average value
You can't do this without a few more variables. You'll need to know $N$, the number of data points. Then you do what you did: $$Avg_{new} = \frac{(Avg_{old}*N + Data_{new})}{N+1}$$ And be sure to update $N = N+1$ afterwards.
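In code the update is just a couple of lines; a minimal sketch:

```python
def update_average(avg_old, n, data_new):
    """Fold one new data point into a running average of n points."""
    return (avg_old * n + data_new) / (n + 1), n + 1

avg, n = 0.0, 0
for x in [4, 8, 15, 16, 23, 42]:
    avg, n = update_average(avg, n, x)
print(avg, n)   # 18.0 6
```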
Any idea how to solve this integral?
I think this might help: $$A=\int \frac{-u^{3/2}\sin\left(\frac{\sqrt{11}}{2}\ln u\right)}{\frac{\sqrt{11}}{2}\,\ln u}\,du$$ and $$B=\int \frac{-u^{3/2}\cos\left(\frac{\sqrt{11}}{2}\ln u\right)}{\frac{\sqrt{11}}{2}\,\ln u}\,du$$ Now $$B+iA=\int \frac{-u^{3/2}\,e^{i\frac{\sqrt{11}}{2}\ln u}}{\frac{\sqrt{11}}{2}\,\ln u}\,du=\int \frac{-u^{3/2}\cdot u^{i\frac{\sqrt {11}}{2}}}{\frac{\sqrt{11}}{2}\ln u}\,du$$ I think this should be an easy integration by parts. After solving, just separate out the real and imaginary parts!
Tangent vectors to a point with zero gradient
Consider the origin in the (double) cone $x^2+y^2=z^2$. Your set of tangent vectors to curves gives you the entire cone. This is certainly not a vector space, and it spans all of $\Bbb R^3$. Relevant things to look up are tangent cone and Zariski tangent space.
When is it more appropriate to describe steps using English rather than mathematics?
I will only use notation if it's relevant in some way to the calculation I'm doing or simplifies the presentation of the ideas being discussed. If I ensure that the notation is useful in some meaningful way then I find I can justify using it, otherwise it may simply be clutter.
Given a curve, such as $1/x$, how to find which tangent is closest to its OWN interception with the y-axis
The tangent at $(a,1/a)$ is $$y-\frac1a=-\frac1{a^2}(x-a)$$ The intersection with the $x$-axis occurs when $y=0$: $$x=2a$$ Now, the distance from $(a,1/a)$ to $(2a,0)$ is $$\sqrt{a^2+\frac1{a^2}}$$ It is well known that the sum of a positive number and its reciprocal is minimal when the number is $1$; here that number is $a^2$, so $a^2=1$. The minimum distance is, thus, $\sqrt 2$.
Integral with polylogarithm
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Let $\ds{t \equiv {x - a \over b} \implies x = bt + a}$: \begin{align} \int{1 \over x}\ln\pars{x - a \over b}\,\dd x & = \int{1 \over bt + a}\ln\pars{t}\,b\,\dd t = -\int{\ln\pars{\bracks{-a/b}\bracks{-bt/a}} \over 1 - \pars{-bt/a}} \,\pars{-\,{b \over a}}\dd t \end{align} Let $\ds{y \equiv -\,{b \over a}\,t \implies t = -\,{a \over b}\,y}$: \begin{align} \int{1 \over x}\ln\pars{x - a \over b}\,\dd x & = -\int{\ln\pars{-ay/b} \over 1 - y}\,\dd y \,\,\,\stackrel{\mrm{IBP}}{=}\,\,\, \ln\pars{1 - y}\ln\pars{-\,{a \over b}\,y} - \int{\ln\pars{1 - y} \over y}\,\dd y \\[5mm] & = \ln\pars{1 - y}\ln\pars{-\,{a \over b}\,y} + \mrm{Li}_{2}\pars{y} = \ln\pars{1 + {b \over a}\,t}\ln\pars{t} + \mrm{Li}_{2}\pars{-\,{b \over a}\,t} \\[5mm] & = \ln\pars{1 + {b \over a}\,{x - a \over b}}\ln\pars{x - a \over b} + \mrm{Li}_{2}\pars{-\,{b \over a}\,{x - a \over b}} \\[5mm] & = \bbx{\ln\pars{x \over a}\ln\pars{x - a \over b} + \mrm{Li}_{2}\pars{1 - {x \over a}} + \pars{~\mbox{a constant}~}} \end{align}
easy calculus hw question on computing work from spring compression
Hint: The spring helps you lift the weight, since it is stretched when you begin stretching it.
$T: V \rightarrow W$ is surjective. Show there exists a right inverse.
You have the correct strategy here. What you've written is almost a complete solution already. Pick a basis $w_1, \ldots, w_n$ for $W$. For each $w_i$, there exists a $v_i \in V$ such that $T(v_i)=w_i$. Now define a linear map $S:W\to V$ by setting $S(w_i)=v_i$ and extending linearly; that is, $S(a_1 w_1 +\ldots +a_n w_n) = a_1 v_1 +\ldots +a_n v_n$. This is a well-defined linear map, and it is a right inverse for $T$. I don't think it's necessary to worry about where the axiom of choice is used here, but yes, you do need to pick a basis for $W$.
Refrigerator Binomial Distribution problem
Given that the store has accepted the batch, there must be 0, 1, or 2 failures. Using the binomial distribution function, these have a priori chances of 0.1661, 0.3059, and 0.2737 respectively. The chance of exactly one is therefore 0.3059/(0.1661 + 0.3059 + 0.2737) = 0.4102.
Prove that $ T $ is at most countable
Terminology note: most authors use "countable" to mean what Rudin calls "at most countable." I use Rudin's terminology below, for clarity, but I just wanted to point this out to forestall any confusion down the road if you look at other sources. Every finite set is contained in a countably infinite set - just take its union with $\mathbb{N}$. So suppose I have a collection of sets $B_\alpha$ ($\alpha\in A$), where $A$ is countable; and each $B_\alpha$ is either finite or countable. Let $C_\alpha=B_\alpha\cup\mathbb{N}$. Then each $C_\alpha$ is countable, so $\bigcup_{\alpha\in A} C_\alpha$ is countable. But $\bigcup_{\alpha\in A} B_\alpha\subseteq \bigcup_{\alpha\in A} C_\alpha$ since $B_\alpha\subseteq C_\alpha$ for all $\alpha\in A$, and any subset of a countable set is at most countable. Also, note that even if each $B_\alpha$ is finite, that does not mean that their union is finite! E.g. let $A=\mathbb{N}$ and set $B_\alpha=\{\alpha\}$. As to why the index set $A$ needs to be at most countable: otherwise the theorem is false! If $A$ is uncountable, let $B_\alpha=\{\alpha\}$ for each $\alpha\in A$. Then each $B_\alpha$ is at most countable, but $\bigcup_{\alpha\in A}B_\alpha=A$ is uncountable!
Maximum Minimum theorem
On a half-closed interval a continuous function need not be bounded above or below. For example, on $(0,1]$ the function $\frac 1 x \sin(\frac 1 x)$ is neither bounded above nor bounded below. Even if $f$ is bounded it need not attain its maximum or minimum: for $n$ even consider a non-negative continuous function on $[\frac 1 {n+1}, \frac 1 n]$ vanishing at the end points with maximum value $1-\frac 1 n$; for $n$ odd consider a non-positive continuous function on $[\frac 1 {n+1}, \frac 1 n]$ vanishing at the end points with minimum value $-(1-\frac 1 n)$. The functions together define a continuous function which neither attains its maximum nor attains its minimum.
Filling each box with one colored ball, with an infinite stock of balls
Each box can be filled in $C$ ways, as there are $C$ different colours of ball, so $N$ boxes can be filled in $C\cdot C\cdots C$ ($N$ times) $= C^{N}$ ways. Do it for $2, 3$ balls and it will become clearer.
proof of associativity of natural number with inductive principle with agda
After consulting a professor, I found the error is located at the base case. The base case is not merely idp; it should be given parameters according to the type of +assoc, that is:

+assoc : (a b c : ℕ) -> ((a + b) + c ) == (a + (b + c))
+assoc = nind (\_ _ -> idp ) (\ n p -> (\ b c -> ap s (p b c)))
What is $\log(n+1)-\log(n)$?
$\log(n+1)-\log n=\log(1+\frac1n)$. Using the Taylor series for $\log(1+x)$, this is $$\frac1n-\frac1{2n^2}+\frac1{3n^3}-\cdots\approx\frac1n.$$
Stability of a three-dimensional system
This is a wrap-up of the comments under the OP: Your understanding of the eigenvalues is correct. Since we know that your numerical scheme indeed shows the correct behaviour for the non-attractive eigenvector, the numerics reflect the expected behaviour. Now if you are at the equilibrium state and (numerical) perturbations occur only in the direction of the two stable eigenvectors, you stay at the equilibrium. If you get a perturbation in the direction of the unstable eigenvector, you should eventually(!) move away from it. Depending on the numerical scheme there might be the possibility of some artificial damping.
How many ways are there to win Settlers of Catan?
Rewriting slightly, you have $$s + v + 2(c + r + a) = 10$$ subject to $$0\leq s \leq 5 \\ 0 \leq v \leq 5 \\ 0 \leq c \leq 4 \\ 0 \leq r \leq 1 \\ 0 \leq a \leq 1 \\ s + c \ge 2$$ So clearly $s$ and $v$ must have the same parity. I think the easiest way to count solutions is to case split on $c+r+a$ ignoring the final constraint: $$\begin{array}{c|c|c|c|c} c+r+a & \textrm{Number of solutions} & s+v & \textrm{Number of solutions} & \textrm{Total} \\ \hline 0 & 1 & 10 & 1 & 1 \\ \hline 1 & 3 & 8 & 3 & 9 \\ \hline 2 & 4 & 6 & 5 & 20 \\ \hline 3 & 4 & 4 & 5 & 20 \\ \hline 4 & 4 & 2 & 3 & 12 \\ \hline 5 & 3 & 0 & 1 & 3 \\ \hline & & & \textrm{Grand total} & 65 \end{array}$$ and then subtract the cases $s+c < 2$. If $s=c=0$ then the others can only total 9, so there are two cases: $s=0, c=1$: $s+v \le 5$ and $c+r+a \le 3$ so the only case is $s=0, v=4, c=1, r=1, a=1$; $s=1, c=0$: $s+v \le 6$ and $c+r+a \le 2$ so the only case is $s=1, v=5, c=0, r=1, a=1$. Therefore we have 63 solutions. I disagree with you on the second case. I think that the correct way to state it is $$s + v + 2(c + r + a) = 11$$ subject to $$0\leq s \leq 5 \\ 0 \leq v \leq 5 \\ 0 \leq c \leq 4 \\ 0 \leq r \leq 1 \\ 0 \leq a \leq 1 \\ r + a \ge 1 \\ s + c \ge 2$$ Then the same case split gives $$\begin{array}{c|c|c|c|c} c+r+a & \textrm{Number of solutions} & s+v & \textrm{Number of solutions} & \textrm{Total} \\ \hline 1 & 2 & 9 & 2 & 4 \\ \hline 2 & 3 & 7 & 4 & 12 \\ \hline 3 & 3 & 5 & 6 & 18 \\ \hline 4 & 3 & 3 & 4 & 12 \\ \hline 5 & 3 & 1 & 2 & 6 \\ \hline & & & \textrm{Grand total} & 52 \end{array}$$ Here we only have one overcount case: if $s=0, c=1$ then we get $s=0, v=5, c=1, r=1, a=1$; if $s=1, c=0$ then we can't get a total of 11. So we get 51 solutions, which added to the previous 63 makes 114. (A brute-force verification of both counts is sketched below.)
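Here is the brute-force check referred to above, as a hedged Python sketch; I am reading the variables as s = settlements, v = victory-point cards, c = cities, r = longest road, a = largest army, which is an assumption on my part, but the code relies only on the ranges and constraints written out in the answer.

```python
from itertools import product

def count(total, need_road_or_army=False):
    """Count tuples (s, v, c, r, a) worth `total` points under the stated constraints."""
    n = 0
    for s, v, c, r, a in product(range(6), range(6), range(5), range(2), range(2)):
        if s + v + 2 * (c + r + a) != total:
            continue
        if s + c < 2:                       # the s + c >= 2 constraint
            continue
        if need_road_or_army and r + a < 1:
            continue
        n += 1
    return n

case_10 = count(10)                          # expect 63
case_11 = count(11, need_road_or_army=True)  # expect 51
print(case_10, case_11, case_10 + case_11)   # expect 114
```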
Suppose that $\Gamma$ is a curve $y = f(x) \in \mathbb{R}^2$, where $f$ is continuous. Show that $m(\Gamma) = 0.$
Start with an interval $[a,b]$. Let $\epsilon &gt; 0$. Since $f$ is uniformly continuous on $[a,b]$ there exists an index $n$ with the property that if $[a,b]$ is partitioned into $n$ equal pieces, the graph over each piece is contained in a rectangle of height $\epsilon$ and width $\frac{b-a}{n}$. Consequently the graph of $f$ over the whole interval $[a,b]$ is contained in a union of rectangles whose total area does not exceed $\epsilon(b-a)$. Now let $\epsilon \to 0^+$ to get that the graph of $f$ over $[a,b]$ has measure zero. The whole graph can be written as a countable union of such graphs restricted to bounded sets.
Inequality used to bound curvature terms
1, 2: we know only the first term $$ - 12\int_0^{2\pi } k^2(k'')^2(k''')^2$$ is negative. We are just using $$ 2ab \le \epsilon a^2 + b^2 /\epsilon,$$ and this holds even when $a, b$ are negative. 3: The inequality $$\int_0^{2\pi } {{{(k'')}^2}} \le {\left(\int_0^{2\pi } {{{(k'')}^4}} \right)^{\frac{1}{2}}}\sqrt {2\pi }$$ has nothing to do with the differential inequality. Just use Holder's inequality: $$\int_0^{2\pi } (k'')^2 = \int_0^{2\pi} (k'')^2 (1) \le \sqrt{\int_0^{2\pi} (k'')^4} \sqrt{\int_0^{2\pi} 1^2 }$$
Subset of matrix rows with half of column sums
Ok, I think the following works so I will sketch out the idea. Maybe you can check the details? We consider the following system: Let $C_1, \cdots, C_n$ be the $n$ column sums. Let $x_1, \cdots, x_m$ be $m$ variables, one for each row, and consider the following constraints: \begin{align*} x_1 + \cdots + x_m &\le \frac{m}2 + \frac{n}2 \ \text{(Type 1)}\\ a_{1i}x_1 + a_{2i}x_2 + \cdots + a_{mi}x_m &\ge \frac{C_i}2, \ 1 \le i \le n \ \text{(Type 2)} \\ x_j &\ge 0, \ 1 \le j \le m \ \text{(Type 3)} \\ x_j &\le 1, \ 1 \le j \le m \ \text{(Type 4)} . \end{align*} Type $1$ is the relaxed version of saying that we cannot pick more than $\frac{m+n}2$ rows, and the type $2$ constraints are saying that for any column, the sum of the values of the entries in that column (times the corresponding row weight) have to be at least half the column sum. Ideally we would want $x_j \in \{0, 1\}$ but we relax this. Now we note that the problem is easy if $n \ge m$: in this case, we can just take all of the rows! Thus, we can assume that $m > n$. If we put all of the constraints in a big matrix $B$, it will be $(1 + 2m + n )\times m$. Now I claim that the rank of $B$ is $m$. Indeed, the type $3$ constraints just give us a copy of the $m \times m$ identity matrix. Furthermore, the total rank of the type $2$ constraints is at most $n$. Now let $x^*$ be any feasible solution to the system (for example, letting all of the $x_j = \frac{1}2$). From the above discussion, we have an $(m-n) \ge 1$ dimensional subspace of $\mathbb{R}^m$ that is orthogonal to all of the type $2$ constraints. Let $\delta$ be a vector from that subspace. We now want to consider $x' = x^* + c \delta$ for a suitably chosen scalar $c$. We do not know the dot product of $\delta$ with the all-ones vector $\mathbf1 \in \mathbb{R}^m$ but we know that $\mathbf 1 \cdot \delta$ is the sum of $e_k \cdot \delta$ for the basis vectors $e_k$. (We are considering $\mathbf 1$ and $e_k$ since they are type $1$ and type $3$ constraints respectively!) Thus, we can pick $c$ such that $x'$ is also a solution to our system above and $x_{k} = 0$ or $1$ for some $k$ (essentially $x_{k}$ will be the first one to be tight as we change the value of $c$, due to our assumption; we just have to pick the sign of $c$ so that the type $1$ constraint isn't violated). Now we are almost done. There are two cases. If we have $x_k = 1$ then we can just remove the $k$th row and adjust the column sums accordingly. Furthermore, this only helps us with the type $1$ constraint since the LHS of the type $1$ constraint decreases by $1$ while the RHS only decreases by $\frac{1}2.$ This will result in a smaller instance of the problem which we can solve by induction. In the other case, we continue in this manner by making more and more variables equal to $0$. However, every time we make a variable equal to $0$, we must pick a new $\delta$ that is orthogonal to all the type $2$ constraints and all the type $3$ constraints that we just made tight. Thus, we can make some $m-n$ of the $x_j$'s equal to $0$ this way. Note that we cannot pick which of the $x_j$'s are $0$ since we cannot control the dot product $\mathbf 1 \cdot \delta$. In summary, we have a feasible solution $x'$ of the above system where $m-n$ of the variables are equal to $0$. Now just make the rest of the $n$ variables equal to $1$. This is possible since it only helps all of the type $2$ constraints, as all of the coefficients of $A$ are non-negative.
Furthermore, the type $1$ constraint is also satisfied since $$x_1 + \cdots + x_m = n < \frac{m}2 + \frac{n}2 $$ where the inequality follows from our assumption that $m > n$, and we are done!
For positive integers m and n. Which of the following statements are true?
For problem $a$ we first prove that $G_n$ divides $G_{n+1}$; this is easy: $2^{2^n}-1\mid(2^{2^n}+1)(2^{2^n}-1)=2^{2^{n+1}}-1=G_{n+1}$. So now take $n<m$. Notice $F_n\mid G_{n+1}$ because $2^{2^n}+1\mid(2^{2^n}-1)(2^{2^n}+1)=G_{n+1}$. We also have $G_{n+1}\mid G_m$ by iterating the first divisibility, so both $G_n$ and $F_n$ divide $G_m$. For problem $c$ we prove that $\prod\limits_{i=0}^n F_i=F_{n+1}-2$ (here $F_0=2^{2^0}+1=3$). This is because $\prod\limits_{i=0}^n (2^{2^i}+1)=1+2+2^2+2^3+\dots+ 2^{2^{n+1}-1}=2^{2^{n+1}}-1=F_{n+1}-2.$ From here take $n<m$ and notice $F_m-2=F_0F_1\dots F_n\dots F_{m-1}$, so $F_n$ divides $F_m-2$. Any common divisor of $F_n$ and $F_m$ therefore divides $F_m-(F_m-2)=2$, and since $F_n$ is odd we conclude $F_n$ and $F_m$ are coprime.
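As a quick numeric check of the product identity (added for illustration): $$F_0F_1F_2=3\cdot 5\cdot 17=255=257-2=F_3-2.$$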
calculating an incoherence property
Let us look at eq (5) in the paper you linked and give an example. This equation defines the following problem (though it does so in an awkward way using subscripted inner-products rather than matrices): $$ \min_\mathbf{x} \Vert \mathbf{x} \Vert_{1} \,\, \mathrm{s.t.} \,\, \mathbf{\Phi \Psi x} = \mathbf{y} $$ In the above, $\mathbf{y}$ is your signal and $\mathbf{y_0}$ is the sensed data (see below), $\mathbf{\Phi}$ is the sensing matrix, and $\mathbf{\Psi}$ would be a sparsifying basis. To give specifics, let's say that we have a time-based signal that is sparse in the frequency domain and that we have a sensor that samples the signal at constant intervals in the time domain and returns a vector $\mathbf{y}_0 \in \mathbb{C}^{N}$. To make the scenario applicable to $\ell_1$, suppose that a random subset of the original $N$ time domain samples from the sensor are corrupted or lost so that $\mathbf{y} \in \mathbb{C}^{\bar{N}}$, where $\bar{N} < N$. In this case, the sensing matrix $\mathbf{\Phi}$ is a row selection matrix that selects the known samples from the original set of $N$ samples. For instance, if $N=5$ and the $2^{nd}$ and $4^{th}$ samples were lost, then $$ \mathbf{\Phi} = \left[ \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right], $$ such that $\mathbf{\Phi} \, \mathbf{y_0} = \mathbf{y}$. Since it is known that the signal is sparse in the frequency domain, we would choose $\mathbf{\Psi}$ to be the inverse Fourier basis, i.e., $\mathbf{\Psi} = \operatorname{DFT}^{-1}$. Thus, when we minimize $\Vert \mathbf{x} \Vert_1$, we are minimizing the coefficients of the signal's discrete Fourier transform, which is the domain in which we know the signal to be sparse. To see this, look again at the constraints: $\mathbf{\Phi} \, \mathbf{\Psi} \, \mathbf{x} = \mathbf{y}$. This equation says that whatever the estimate of $\mathbf{x}$, when we put it back into the time domain ($\mathbf{\Psi}$) and look at its first, third, and fifth samples ($\mathbf{\Phi}$), they must equal the samples we got from the sensor ($\mathbf{y}$). Now, how accurately $\ell_1$ will reconstruct the missing samples (i.e., $\mathbf{\Psi} \, \mathbf{x} = \mathbf{y}_0$) has to do with the coherency of $\mathbf{\Psi}$ and $\mathbf{\Phi}$ and how sparse $\mathbf{y}_0$ is when expressed in terms of $\mathbf{\Psi}$, which is a pretty involved topic. The (In)Coherence Property $\mu$ (scalar) measures the maximum similarity between row and column vectors of two matrices $\Phi$ (the $\bar{N} \times N$ sensing basis) and $\Psi$ (the $N \times N$ representation basis), respectively: \begin{equation} \mu(\Phi,\Psi) = \sqrt N \max_{1\le k,j\le N} |\langle\Phi_k,\Psi_j \rangle |,\end{equation} where $N$ refers to the number of elements in the signal, $\bar{N}$ refers to the number of samples taken, and $k$ and $j$ are column and row indices. This is sometimes referred to as the Coherence Property. Incoherence bounds are given by $\mu(\Phi,\Psi) \in [1,\sqrt N]$, i.e. when $\mu(\Phi,\Psi) = 1$, $\Phi$ and $\Psi$ are maximally incoherent, and when $\Phi = \Psi$, $\mu(\Phi,\Psi) = \sqrt N$. One essentially needs a low level of similarity (an incoherence) between $\Phi$ and $\Psi$ to reconstruct a sparse signal.
The $\mu$ property helps to define the number of samples $\bar{N}$ required to reconstruct the sparse signal from the available measurements via the relationship (Candes and Romberg, 2007) \begin{equation} \bar{N} \ge C \mu^2(\Phi,\Psi)S\log N \end{equation} where $C$ is some positive constant and $S$ is the sparsity of the signal, i.e. a signal whose coefficients in some fixed basis have relatively few nonzero entries; say $y_0=\Psi x$, then $S$ represents the number of nonzero (or largest) coefficients in $x$ (Candes and Tao, 2006). Furthermore the signal is reconstructed with probability exceeding $(1- \delta)$ if $\bar{N} \ge C \mu^2(\Phi,\Psi)S\log (N/\delta)$. The takeaway message is that fewer samples are needed when the coherence is smaller; if $\mu = 1$ then on the order of $S\log N$ samples are needed to reconstruct a sparse signal (almost any set of $\bar{N}$ coefficients may be measured without loss of signal). "Some positive constant $C$": How is this usefully defined in practice? What is $\delta$ here and is this related to the restricted isometry constant $\delta_s$ below? The restricted isometry property (RIP) (Candes and Tao, 2006), given by \begin{equation} (1-\delta_s)\|y\|_{\ell_2}^2 \le \|A_s y\|_{\ell_2}^2 \le (1+\delta_s)\|y\|_{\ell_2}^2, \end{equation} is used to characterize matrices $A = \Phi \Psi$ which are nearly orthonormal, i.e. used to classify the robustness of compressed sensing in case of noise or also in case of only approximately sparse signals. A random sensing basis $\Phi$ coupled to any representation basis $\Psi$ will satisfy the RIP with high probability. One way to generate a random sensing basis is by selecting entries of $\Phi$ from a Gaussian or Bernoulli distribution. Related terms seem to be: the Restricted Isometry Condition (RIC), Uniform Uncertainty Principle (UUP) and the Exact Reconstruction Principle (ERP) (Candes and Tao, 2006). Not sure what these mean/imply currently. Some terms are briefly summarized on Tao's site. The restricted isometry constant(s): $\delta_s$ is the smallest number such that the RIP holds for all $S$-sparse vectors $y$. $A$ obeys the RIP of order $S$ if $\delta_s$ is smaller than one. Lastly, the sensing matrix $\mathbf{\Phi}$ is not always as simple as the selection matrix I used above, though it often is. Often times people will try to design the hardware to get a sensing matrix that maximizes incoherency. Whatever it is, remember that at its core it's nothing more than a linear transform of the original data.
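As an illustration (mine, not from the original answer), here is a small numpy sketch that builds the toy row-selection matrix $\mathbf{\Phi}$ from the example above together with a finite unitary inverse-DFT basis $\mathbf{\Psi}$, and evaluates the coherence formula; the $1/\sqrt N$ normalization is an assumption chosen to make $\Psi$ unitary, slightly different from the $1/\sqrt{2\pi}$ convention used earlier.

```python
import numpy as np

N = 5
kept = [0, 2, 4]                 # the 2nd and 4th samples were lost in the example
Phi = np.eye(N)[kept]            # row-selection (sensing) matrix, shape (3, 5)

# Unitary inverse-DFT basis: entries exp(+2*pi*i*k*n/N) / sqrt(N)
Psi = np.conj(np.fft.fft(np.eye(N))) / np.sqrt(N)

# mu(Phi, Psi) = sqrt(N) * max_{k, j} |<Phi_k, Psi_j>|
inner = np.abs(Phi @ Psi)        # magnitudes of all pairwise inner products
mu = np.sqrt(N) * inner.max()
print(mu)                        # ~1: spikes and sinusoids are maximally incoherent
```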
Angle in a triangle within a circle.
Let the radius be $r$; then in triangle ACO, using Pythagoras: $$\begin{align} AO^2 &= AC^2+CO^2\\ \\ r^2 &= 12^2+(r-5)^2\\ \\ 144+25-10r &= 0\\ \\ r &= 16.9\\ \end{align}$$ In triangle ACO, $$\begin{align} \sin\theta &= \dfrac{12}{r}\\ \\ \sin\theta &= \dfrac{12}{16.9}\\ \\ \theta &= 45.2^{\circ}\ \text{(1 d.p.)}\\ \end{align}$$
Introductory Book on Modular Forms
The first book you want to look at is Serre's "A course in arithmetic". The second is the aptly named "A first course in modular forms".
interpolation between 3 points
Let us write $v_i = (x_i, y_i)$ for $1 \le i \le 3$. If $v_1$, $v_2$, $v_3$ are affinely independent (i.e. there is no line containing all three of them), we can compute the (uniquely determined) affine function $f\colon \mathbb R^2 \to \mathbb R$ with $f(v_i) = D_i$ for $1 \le i \le 3$ and use it to interpolate other values. $f$ can be computed as follows: As an affine function, $f$ has the form $f(x,y) = ax+by+c$, with $a$, $b$, $c$ to be determined, so we plug in what we know, giving a linear system of equations \begin{align*} ax_1 + by_1 + c &= D_1\\ ax_2 + by_2 + c &= D_2\\ ax_3 + by_3 + c &= D_3 \end{align*} Under our assumptions on the $v_i$, the equations have a unique solution $(a,b,c)$. -- Edit: So I'll try to add something more about the "why". You should think about the one-dimensional case, that is: given points $x_1$, $x_2\in \mathbb R$, and values $D_1$, $D_2$, we try to inter- or extrapolate the value at some other point $x$. To do this, we find the line connecting the points $(x_1, D_1)$ and $(x_2, D_2)$ in the plane, take it as the graph of an affine-linear function $f\colon\mathbb R \to \mathbb R$, and interpolate at $x$ by computing the value $f(x)$. We follow this well-known (I hope so) procedure in our two-dimensional case. The analogue of the line in 1D is a plane, and indeed: the graph of an affine-linear function $f\colon \mathbb R^2 \to \mathbb R$ is a plane. The equation of such a plane is $f(x,y) = ax + by + c$; that is, to determine our plane we need to compute $a$, $b$ and $c$. As we want $f$ to interpolate the given values, we need $f(x_i, y_i) = D_i$, $1 \le i \le 3$. We compute them as above (solving the linear system $f(x_i, y_i) = D_i$). Then we just interpolate by evaluating $f$ (as we did in 1D).
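A minimal numerical sketch of the procedure above (mine, not the answerer's), solving the $3\times 3$ system with numpy; the sample points and values are made up purely for illustration.

```python
import numpy as np

def fit_affine(points, values):
    """Solve a*x + b*y + c = D_i for the three given points (x_i, y_i)."""
    (x1, y1), (x2, y2), (x3, y3) = points
    A = np.array([[x1, y1, 1.0],
                  [x2, y2, 1.0],
                  [x3, y3, 1.0]])
    return np.linalg.solve(A, np.asarray(values, dtype=float))

a, b, c = fit_affine([(0, 0), (1, 0), (0, 1)], [1.0, 3.0, 2.0])  # hypothetical data
print(a * 0.5 + b * 0.5 + c)   # interpolated value of the plane at (0.5, 0.5)
```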
$ax^2 -bx +c =0$ where $a,b,c$ are natural nos. and it has roots lying in the interval $(1,2)$...
If you multiply out your polynomial to get $ax^2-a(\alpha+\beta)x+a\alpha\beta$ we can see that $a$ can be $1$. It cannot be $0$ if we take "roots" to indicate there are two of them. If $a,\alpha,$ and $\beta$ are all positive, $b \lt 0$ and there is no solution to the problem as stated. If the polynomial is $ax^2-bx+c$ we have $b=a(\alpha+\beta)$ with $\alpha,\beta \gt 1$, so the minimum value of $b$ is $2$. Similarly, $c=a\alpha \beta$ must be at least $2$. These conditions are not independent. In fact $x^2-2x+2=0$ has no real roots at all. We need to increase $b$ to get the parabola to hit the axis at all. I don't find any polynomial that fits the requirement with roots at $1,2$ disallowed by the fact that the interval is open. If you close the interval, $x^2-3x+2=0$ has roots at $1,2$.
Resources to help me with convex analysis
I don't know if you are aware of this, but the same author of Nonlinear Programming, Bertsekas, also wrote a text known as Convex Optimization Theory, which can be used as a resource to learn convex analysis from. There are also helpful solutions provided on the website. As far as I understand, most of the texts will refer to the text Convex Analysis by Rockafellar as being the originator (or at least the best expositor) of the concepts used in convex analysis relevant to optimization. This text might require some "mathematical maturity", which is experience with writing and understanding proofs. For the topics that you mentioned in the post, they should be covered in texts on Real Analysis. There are many questions on StackExchange about the best book to study from for real analysis. I recommend either Rudin or Abbott, but Abbott imparts better intuition like your example in the post.
Proof Involving Generalized Mean
If $0 < p < q < \infty$, use Hölder's inequality with conjugate exponents $\frac{q}{p}$ and $\frac{q}{q-p}$ to get $$\sum_{k = 1}^n |x_k|^p \le n^{1 - \frac{p}{q}} \left(\sum_{k = 1}^n |x_k|^q\right)^{\frac{p}{q}}$$ Rearrange the inequality to obtain $g(p) \le g(q)$. To prove that $\lim_{p \to \infty} g(p) = \max\{|x_1|,\ldots, |x_n|\}$, show that for every $\varepsilon > 0$, $$n^{-1/p}(\max\{|x_1|,\ldots, |x_n|\} - \varepsilon) \le g(p) \le \max\{|x_1|,\ldots, |x_n|\} \quad (p \ge 1)$$
Show that the numerator of $1+\frac12 +\frac13 +\cdots +\frac1{96}$ is divisible by $97$
If you group the fractions in pairs, the first with the last, the second with the next-to-last, and so on, you get $$1+\frac 1{96}=\frac {97}{96}, \qquad \frac 12+\frac 1{95}=\frac {97}{190},\ \ldots$$ The sums of these pairs all have a numerator of $97$, and because $97$ is prime the common denominator will not have a factor of $97$, so in $\frac xy$, $x$ is a multiple of $97$.
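A one-line computational check (added here) using exact rational arithmetic:

```python
from fractions import Fraction

h = sum(Fraction(1, k) for k in range(1, 97))
print(h.numerator % 97)   # 0, so 97 divides the numerator of the reduced fraction
```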
Are Fibonacci numbers with a square prime index always divisible by $F_p$?
It is well known (see here) that $\gcd(F_m,F_n)=F_{\gcd(m,n)}$, and so $\gcd(F_p,F_{p^2})=F_p$, i.e. $F_p$ divides $F_{p^2}$.
Number of combinations in numbered, colored balls combinatorics problem
Having put the red balls in order you have seven spaces to put blue and yellow balls. Because of the conditions each space can only hold one color of balls and the colors have to alternate. If the space to the left is blue you need to distribute $7$ blue balls among $4$ spaces. There are $6$ places to put the breakpoints and you need to choose $3$ of them, so there are $6 \choose 3$ ways to place the blue balls by stars and bars. Having done that, you need to place the $10$ yellow balls in three places, which gives the $9 \choose 2$. The other term comes when the yellow balls are on the ends.
Bound on the index of an abelian subgroup in discrete subgroup of the euclidean group?
The index is the order of the crystallographic point group, in the cocompact case. It is then a finite subgroup of $GL(n,\mathbb{Z})$. There are several bounds known for the maximal order of such subgroups. Friedland proved in $1997$ that $$m\le 2^nn! $$ for $n\ge n_0$ and gave conditions where equality is attained (The maximal orders of finite subgroups of $GL(n,\mathbb{Q})$). Rockmore in $1995$ used a different description: For every $\epsilon>0$ there exists a constant $c(\epsilon)$ such that $m\le c(\epsilon)(n!)^{1+\epsilon}$.
Learning about geometric bezier splines
I recommend these on-line notes. I agree that Farin's book is very good. A more elementary one is this one by Mortenson.
Topological Intuition behind the 2-Homology of $RP^2$
Honestly, your definition of "wrapping around a surface" is not clear to me. I will list here what geometric meaning you can give to the homology groups of a manifold. The main "geometric" interpretation is the following theorem, which you can find in any book on Algebraic Topology: Thm: Let $M$ be a connected $n$-manifold without boundary. Then $H_n(M;\Bbb Z)$ is $\Bbb Z$ if and only if $M$ is closed and orientable, and $0$ otherwise. A geometric intuition (which holds in the smooth case) for this theorem is that in the orientable case, starting from an oriented triangulation of the manifold, you can build a non-exact closed $n$-cycle (could this count as a kind of wrapping?) which generates the top homology group. You can weaken the orientability requirement if you work with $\Bbb Z_2$ coefficients, and you obtain, with the same hypotheses: Thm: Let $M$ be a connected $n$-manifold without boundary. Then $H_n(M;\Bbb Z_2)$ is $\Bbb Z_2$ if and only if $M$ is closed, and $0$ otherwise. So you see that in your case $H_2(\Bbb RP^2; \Bbb Z)=0$ detects the non-orientability of the projective space. $$ \times \times \times$$ Cellular homology provides another visual interpretation, close to your concept of "wrapping", because it is the homology of the chain complex generated by the cells, whose differentials are degrees of certain maps; so in your case you are studying the $2$-cells (the usual disks) and how they attach to the $1$-skeleton of $\Bbb RP^2$. In our case, you see that the unique $2$-cell in the standard CW-structure of $\Bbb RP^2$ is attached to the unique $1$-cell via a degree $2$ map, therefore you get $0$ in the second homology group. Cellular homology makes crystal clear why the circle has trivial second (and higher) homology groups: it admits a CW-structure with only $0$- and $1$-cells. $$ \times \times \times$$ In the simply connected case, $H_2(M;\Bbb Z)$ is isomorphic to the second homotopy group $\pi_2(M)$, which is the group of pointed homotopy classes of maps $S^2\to M$, via the Hurewicz map.
Demonstrate the mean of the sample variance
The complexity of that demonstration gets reduced if you don't start by expanding the squared parenthesis. That is because at the end you will need the expression of the variance of your random process, and with the method you are proposing there is no way of getting that expression. The variance of a random process is: $\sigma^2_{x}=E[(x(n)-\mu_{x})^2]$ This is how I would do it: First of all, you start by subtracting the mean of the random process from both members of the squared parenthesis: $\widehat{\sigma^2_{x}}=\dfrac{1}{N}\sum_{n=0}^{N-1} {[(x(n)-\mu_{x})-(\widehat{\mu_{x}}-\mu_{x})]^2}$ Now you can expand the square, obtaining: $\widehat{\sigma^2_{x}}=\dfrac{1}{N}\sum_{n=0}^{N-1} \left[(x(n)-\mu_{x})^2-2(x(n)-\mu_{x})(\widehat{\mu_{x}}-\mu_{x})+(\widehat{\mu_{x}}-\mu_{x})^2\right]$ If you observe the first and the last term inside the sum, the first one gives you directly (after averaging) the variance of the random process, and the last one is the variance of the mean estimator. Replacing the last term by its value (which is $\dfrac{\sigma^2_{x}}{N}$) and taking the expectation of the whole expression, you get something like this: $E[\widehat{\sigma^2_{x}}]=\sigma^2_{x}+\dfrac{1}{N}\sigma^2_{x}-\dfrac{2}{N}\sum_{n=0}^{N-1} E[(x(n)-\mu_{x})(\widehat{\mu_{x}}-\mu_{x})]$ Note that, since $E[(x(n)-\mu_{x})^2]=\sigma^2_{x}$ for every $n$, the first sum contributes $N$ copies of $\sigma^2_{x}$ (the length of the sum), which is what cancels the leading $\dfrac{1}{N}$: $E[\widehat{\sigma^2_{x}}]=\dfrac{1}{N}\sigma^2_{x}N+\dfrac{1}{N}\sigma^2_{x}\dfrac{1}{N}N-\dfrac{2}{N}\sum_{n=0}^{N-1} E[(x(n)-\mu_{x})(\widehat{\mu_{x}}-\mu_{x})]$ The last step is to make a connection between the term inside the sum and the variance of the random process. This is simple if you replace the sample mean by the expression you have already written above: $\widehat{\mu_{x}}=\dfrac{1}{N}\sum_{m=0}^{N-1}x(m)$ Now we can focus on that expression: $\sum_{n=0}^{N-1} E\left[(x(n)-\mu_{x})\left(\dfrac{1}{N}\sum_{m=0}^{N-1}(x(m)-\mu_{x})\right)\right]$ Considering that the samples of $x(n)$ are mutually uncorrelated, the cross terms with $m\neq n$ vanish, because $E[(x(n)-\mu_{x})(x(m)-\mu_{x})]=E[x(n)-\mu_{x}]\,E[x(m)-\mu_{x}]=0$. (Here it matters that $\widehat{\mu_{x}}$ is computed from the same samples $x(n)$; if it came from an independent realization, the whole term $E[(x(n)-\mu_{x})(\widehat{\mu_{x}}-\mu_{x})]$ would simply be $0$.) So only the $m=n$ term of the inner sum survives, and it is exactly the variance of the random process divided by $N$, which leads us to: $\dfrac{2}{N}\sum_{n=0}^{N-1} E[(x(n)-\mu_{x})(\widehat{\mu_{x}}-\mu_{x})]=\dfrac{2}{N^2}\sum_{n=0}^{N-1}E[(x(n)-\mu_{x})^2]$ Writing again the expression of the mean of the sample variance: $\sigma^2_{x}+\dfrac{1}{N}\sigma^2_{x}-\dfrac{2}{N^2}\sum_{n=0}^{N-1}E[(x(n)-\mu_{x})^2] \Rightarrow \sigma^2_{x}+\dfrac{1}{N}\sigma^2_{x}-\dfrac{2}{N}\sigma^2_{x}$ Finally, once you have the variance in each term, you reach the end of the demonstration: $E[\widehat{\sigma_{x}^2}]=\left(1-\dfrac{1}{N}\right)\sigma^2_{x}$ Hope it helped.
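A quick Monte Carlo sanity check of the final result (added for illustration; the Gaussian samples and the particular $N$ and $\sigma^2_x$ are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
N, true_var, trials = 10, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(true_var), size=(trials, N))
biased_var = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

print(biased_var.mean())        # empirical mean of the sample variance
print((1 - 1 / N) * true_var)   # theoretical (1 - 1/N) * sigma^2 = 3.6
```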
If a family will have $7$ children, how many combinations of boys and girls can there be if order doesn't matter?
The number of boys can be $0, 1, 2, 3, 4, 5, 6,$ or $7$. The corresponding number of girls is $7,6,5,4,3,2,1,$ or $0$. This means there are $\boxed{8\,}$ possibilities. The reason you intuitively think of $7$ is perhaps forgetting to account for one of the edge cases of $0$ boys or $0$ girls. Generalization from comment: Suppose you are choosing $7$ fruits from a bowl. Each fruit can be an apple, pear, or banana. How many possible combinations of fruits are there, if order does not matter? In this problem, we can use a technique called stars and bars. Imagine $7$ identical balls and $2$ identical dividers. Placing the $9$ objects in a line can be corresponded with a combination of apples, pears, and bananas in the original problem. For each placement of dividers, we can say that the number of apples is the number of balls before the first divider. The number of pears is the number of balls between the first divider and the second divider. And finally, the number of bananas is the number of balls after the second divider. The number of combinations of apples/pears/bananas exactly corresponds to the number of ways to arrange $2$ identical dividers and $7$ identical balls in a line. This is equal to the number of ways to choose the two places in which there are dividers. There are nine possibilities, of which we must choose two. Mathematicians call this "nine choose two" and it is written and computed as a binomial coefficient: $$\binom{9}{2} = \frac{9 \cdot 8}{2} = \boxed{36\,}$$
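For the fruit example, here is a tiny Python check (added here) that the stars-and-bars count agrees with direct enumeration:

```python
from itertools import combinations_with_replacement
from math import comb

fruits = ["apple", "pear", "banana"]
count = sum(1 for _ in combinations_with_replacement(fruits, 7))
print(count, comb(9, 2))   # both are 36
```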
Is Hamiltonian path NL?
This describes a non-deterministic walk, not a path. You may end up counting the same vertex more than once. Given a start configuration of (v, 0) (meaning you're at vertex v and you've traversed 0 edges), when you're at configuration (w, n-1) (meaning now you're at vertex w and you've traversed n-1 edges), all you know is that you have a v->w walk of length n-1. If there's a Hamiltonian path, you will find it non-deterministically; however, you will not "know" that you've found it by examining the (w, n-1) state. This algorithm decides walks of length n-1.
Finding a surface normal of implicit surface f(x,y,z)
The unit surface normal of the implicit surface $f(x,y,z)=0$ is indeed the normalized gradient of $f$. You were also right that the unit surface normal of $z=g(x,y)$ is the normalized vector $(-g_x \ -g_y \ 1)$ (the cross product of $\mathbf{x}_u$ and $\mathbf{x}_v$, with $\mathbf{x} = (x \ y \ z)$ and $x=u$ and $y=v$). Here is a quick proof that the unit surface normal is the normalized gradient. The function $f(x,y, g(x,y))$ is a composition of the mapping $f$ from $\mathbb{R}^3$ to $\mathbb{R}$ and the function $h$ from $\mathbb{R}^2$ to $\mathbb{R}^3$ with $h(x,y)= (x,y,g(x,y))$. The total derivative of this function using the chain rule is $$ D(f\circ h) = (Df)(Dh) = (f_x + f_zg_x \ f_y+f_zg_y). $$ Setting this derivative to zero (since $f\circ h(x,y) = 0$ is the definition of the surface) yields $g_x = -f_x/f_z$ and $g_y = -f_y/f_z$. Substituting these values of $g_x$ and $g_y$ into $(-g_x \ -g_y \ 1)$ gives $(f_x \ f_y \ f_z) / f_z,$ a multiple of the gradient. You must have computed the partial derivatives of $f$ incorrectly, because the unit surface normal of an implicit surface $f(x,y,z)=0$ definitely is the normalized gradient of $f.$
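A minimal numerical sketch of "unit normal = normalized gradient" (added here, with a made-up example surface):

```python
import numpy as np

def unit_normal(grad_f, point):
    """Normalized gradient of f at a point on the implicit surface f = 0."""
    g = np.asarray(grad_f(*point), dtype=float)
    return g / np.linalg.norm(g)

# Example: the unit sphere f(x, y, z) = x^2 + y^2 + z^2 - 1
grad_sphere = lambda x, y, z: (2 * x, 2 * y, 2 * z)
print(unit_normal(grad_sphere, (0.0, 0.0, 1.0)))   # [0. 0. 1.]
```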
A simple problem about partition function and Young diagram
This is A000070 in the OEIS, following the explanations given there by Jon Perry or Thomas Wieder. (Both are pretty clearly equivalent to your problem but are not stated exactly the same way.) In particular, if we call the answer to your question $f(n)$, then $f(n) = \sum_{k=1}^n p(k)$ where $p(k)$ is the number of partitions of $k$.
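A short Python sketch (added for illustration) that computes $p(k)$ with the standard partition-counting recurrence and then the partial sums $f(n)$; note that A000070 includes $p(0)=1$, so its values may be offset by one from the sums produced here.

```python
def partition_counts(n_max):
    """p[k] = number of partitions of k, via the classic coin-change style DP."""
    p = [1] + [0] * n_max
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

p = partition_counts(10)
f = [sum(p[1:n + 1]) for n in range(1, 11)]
print(f)   # f(n) = p(1) + ... + p(n)
```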
Joint density function of $X$ and $X-Y$, where $X, Y\sim U(-1,1)$
If you are using the transformation formula, you should have got, for the joint density, $$f_{U,V}(u,v)=\frac{f_{X,Y}(x,y)}{\left |\frac{\partial(U,V) }{\partial(X,Y)}\right|}=\frac{1}{4}$$ but this is not the end of the story: you need to get the support of the transformed variables. To write $-2<U<2$ and $-1<V<1$ is not totally right: both inequalities are true, but they don't give you the support, because (say) you cannot have simultaneously $V=0.9$ and $U=1.9$. (You can also guess that something is wrong in that the integral of the density over the support must be one.) The correct way is to note that if we allow $V=X$ its full range $-1<V<1$, then we must put that dependence into the other variable: $U=Y-X=Y-V$, hence the range of $U$ is $(-1-V, 1-V)$. Then the support is $-1<V<1$ and $-1-V< U <1-V$, which corresponds to a parallelogram. (Notice BTW that two variables with uniform joint density over a straight rectangular support are independent - which is the case for $X,Y$, but it is not - it should not be - for $U,V$.) Sanity check: $$ \int f_{U,V}= \int_{-1}^1 \int_{-1-V}^{1-V} \frac{1}{4} dU dV= \frac{1}{4} \int_{-1}^1 2 dV= 1$$
Question about finitely generated modules
When $M$ is an $R$-module and $R$ is a $K$-vector space, then $M$ is also a $K$-vector space.
Show $\sum\limits_{n=0}^{\infty}{2n \choose n}x^n=(1-4x)^{-1/2}$
Or, by definition, \begin{eqnarray*} {-1/2\choose n}&=&{(-1/2)(-1/2-1)(-1/2-2)\cdots(-1/2-[n-1])\over n!}\cr &=&{(-1)^n\over 2^n} {(1)(3)(5)\cdots(2n-1)\over n!}\cr &=&{(-1)^n\over 2^n} {(1)(3)(5)\cdots(2n-1)\over n!}\cdot{2^n n!\over 2^n n!}\cr &=&{(-1)^n\over 4^n} {2n\choose n}. \end{eqnarray*} Hence $(1-4x)^{-1/2}=\sum_{n=0}^{\infty}{-1/2\choose n}(-4x)^n=\sum_{n=0}^{\infty}{2n\choose n}x^n$.
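A quick verification of the coefficient identity with exact arithmetic (added here for illustration):

```python
from fractions import Fraction
from math import comb, factorial

def binom_general(alpha, n):
    """Generalized binomial coefficient alpha*(alpha-1)*...*(alpha-n+1)/n!."""
    num = Fraction(1)
    for k in range(n):
        num *= alpha - k
    return num / factorial(n)

for n in range(8):
    lhs = binom_general(Fraction(-1, 2), n)
    rhs = Fraction((-1) ** n, 4 ** n) * comb(2 * n, n)
    assert lhs == rhs
print("identity holds for n = 0..7")
```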
Continuity on $\Bbb R$
Here are some hints: 1) Suppose we restrict the domain to $[-N,N]$ for some positive integer $N$. Then we can simply quote a big theorem to tell us that $f: [-N,N] \rightarrow \mathbb{R}$ is bounded and assumes a minimum and maximum value: what theorem is that? Thus we can "handle $f$" for small values of $x$. On the other hand: 2) Since $\lim_{x \rightarrow \pm \infty} f(x) = 0$, this tells us how to "handle $f$" for large values of $x$. Can you figure out how to put 1) and 2) together? Try to show the boundedness first. Added: I'll venture one more hint: first choose $N$ such that the maximum $M_N$ of $|f|$ on $[-N,N]$ is positive (it is very easy to handle the case in which there is no such $N$). Then handle the case of very large $x$... so large that $|f(x)|<M_N$. You are left with an intermediate range to handle, which again you can do using the Extreme Value Theorem.
Logic in closed symmetric monoidal categories; reference request.
I think there is a reference that fits perfectly with your requests (Question 1 and, indirectly, Question 0): Paul-André Melliès: Categorical Semantics of Linear Logic. Panoramas et Synthèses 27, Société Mathématique de France, 2009. This survey is designed to guide the reader in investigating the symbolic mechanisms of cut-elimination in proof theory and especially in linear logic, and their algebraic transcription as coherence diagrams in categories with structure. From the abstract: We start the survey by a short introduction to proof theory (Chapter 1) followed by an informal explanation of the principles of denotational semantics (Chapter 2) which we understand as a representation theory for proofs – generating algebraic invariants modulo cut-elimination. After describing in full detail the cut-elimination procedure of linear logic (Chapter 3), we explain how to transcribe it into the language of categories with structure. We review three alternative formulations of ∗-autonomous category, or monoidal category with classical duality (Chapter 4). Then, after giving a 2-categorical account of lax and oplax monoidal adjunctions (Chapter 5) and recalling the notions of monoids and monads (Chapter 6) we relate four different categorical axiomatizations of propositional linear logic appearing in the literature (Chapter 7). We conclude the survey by describing two concrete models of linear logic, based on coherence spaces and sequential games (Chapter 8) and by discussing a series of future research directions (Chapter 9) Chapter 4 is devoted to symmetric monoidal closed categories and their relationship with linear logic.
Finding the order of 3 modulo 242
We have $$3^1=3,\quad 3^2=9,\quad 3^3=27,\quad 3^4=81,\quad 3^5=243\equiv 1 \pmod{242},$$ so the order of $3$ modulo $242$ is $5$.
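For completeness, a small Python helper (added here; not part of the original answer) that computes the multiplicative order directly:

```python
from math import gcd

def multiplicative_order(a, m):
    """Smallest k >= 1 with a^k congruent to 1 mod m (requires gcd(a, m) == 1)."""
    assert gcd(a, m) == 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

print(multiplicative_order(3, 242))   # 5, since 3^5 = 243 = 242 + 1
```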