Evaluate the directional derivative
Your formula for the directional derivative is wrong: it is correct only if $y$ is a unit vector. In general you have to divide by the norm of the vector, which is $\sqrt 6$ in this case. It is worth observing that the directional derivative cannot depend on the length of the vector; it can only depend on its direction. EDIT: there are conflicting definitions of directional derivatives (as I just discovered), and whatever you have done is right with your definition. The answer provided is based on a different definition.
Two absolute value equations with two unknowns. Possible to retrieve sign?
We can't. $|c x_1 + d x_2 | = u \implies c x_1 + d x_2 = \pm u$ (and similarly for the second equation). Without further restrictions on $x_1$ and $x_2$, we can get valid solutions for $x_1$ and $x_2$ by choosing any one case from each of the $2$ sets of equations.
Probability that at least 2 edges of $\Gamma_{n,N}$ shall have a point in common
An important condition on $N$ in that paper is that $N = o(n^{1/2}).$ This allows us to more easily deal with the binomial coefficient terms. Namely, since $N = o(n^{1/2})$, we have that $$ {n \choose 2N} = (1+o(1)) \frac{n^{2N}}{(2N)!}, $$ and $$ {{n \choose 2} \choose N} = (1+o(1)) \frac{ {n \choose 2}^N}{N!}. $$ But we will need to know more about these $o(1)$ terms and so we have to go deeper.

To do this, let's expand the first binomial expression: \begin{align*} {n \choose 2N} &= \frac{1}{(2N)!} \prod_{i=0}^{2N-1} (n - i) \\&= \frac{n^{2N}}{(2N)!} \prod_{i=0}^{2N-1} \left( 1 - \frac{i}{n} \right). \end{align*} Further, note that \begin{align*} \prod_{i=0}^{2N-1} \left( 1 - \frac{i}{n} \right) &= \exp \sum_{i=0}^{2N-1} \log \left(1- \frac{i}{n}\right) \\ & = \exp \sum_{i=0}^{2N-1} \left( - \frac{i}{n} + O\left(\frac{N^2}{n^2}\right) \right) \\&= \exp \left( -\frac{N(2N-1)}{n} + O\left( \frac{N^3}{n^2}\right)\right) \\&= \exp \left(O \left( \frac{N^2}{n} \right)\right) = 1+ O\left( \frac{N^2}{n} \right). \end{align*} Thus $$ {n \choose 2N} = \left(1+O\left(\frac{N^2}{n}\right)\right)\frac{n^{2N}}{(2N)!}. $$ The asymptotic expression for the other binomial expression can be shown using a similar method to get $$ {{n \choose 2} \choose N} = \left(1+O\left(\frac{N^2}{{n\choose 2}}\right)\right)\frac{{n \choose 2}^{N}}{N!}. $$

Putting these all together, we have that \begin{align*} \frac{ { n \choose 2N} (2N)!} {2^N N! {{n \choose 2} \choose N}} &= \left(1+O\left(\frac{N^2}{n}\right)\right) \frac{ n^{2N}}{2^N {n \choose 2}^N} \\&= \left(1+O\left(\frac{N^2}{n}\right)\right) \frac{n^{2N}}{2^N n^N (n-1)^N/2^N} \\&= \left(1+O\left(\frac{N^2}{n}\right)\right) \frac{1}{\left(1-\frac{1}{n}\right)^N}. \end{align*} Now note that $\left(1-\frac{1}{n}\right)^N = 1+O\left(\frac{N}{n}\right).$ As a consequence, $$ \frac{ { n \choose 2N} (2N)!} {2^N N! {{n \choose 2} \choose N}} = 1+O\left(\frac{N^2}{n}\right). $$

In this case, we did not need to approximate the factorials since each factorial was divided out. If you want an asymptotic formula for a factorial, you would use Stirling's approximation. Namely, $$ n! = (1+o(1)) \sqrt{2\pi \, n} \, \left(\frac{n}{e}\right)^n. $$ If you wanted to approximate further, that $o(1)$ error term above is actually on the order of $1/n$.
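For a quick numerical sanity check of the final estimate, here is a short Python sketch using exact big-integer arithmetic; if the ratio really is $1+O(N^2/n)$, the last printed column, $(r-1)\,n/N^2$, should stay bounded as $n$ grows:

```python
from fractions import Fraction
from math import comb, factorial

def ratio(n, N):
    # C(n, 2N) * (2N)! / (2^N * N! * C(C(n,2), N)), computed exactly
    num = comb(n, 2 * N) * factorial(2 * N)
    den = 2**N * factorial(N) * comb(comb(n, 2), N)
    return Fraction(num, den)

for n in (10**3, 10**4, 10**5):
    N = 10  # N = o(sqrt(n)) in this range
    r = ratio(n, N)
    print(n, float(r), float((r - 1) * n / N**2))  # last column stays bounded
```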
How to calculate the degree of this Gauss map?
Draw a circle around the $0$. Then, at each point of this circle there will be a vector. As you travel around the circle counterclockwise, the direction of the vector at that point will change. The number of times this vector rotates about the origin, counting orientation, is the degree of the Gauss map. Try keeping track of the vector in case two as you travel around the $0$. You should notice that the vector rotates clockwise exactly once, and hence the degree is $-1$.
Correct Way to Write a Statement in First-Order Logic
What you've written is fine. It says that for all $p$ and $q$: if both $q\in r$ AND $p\lt_\mathbb Q q$, then... It says nothing about what follows when the relation $<_\mathbb Q$ is not defined for $p, q$: the consequent in the implication follows for all $p, q$ that meet the criteria of the antecedent. Just a simple example: if we let the domain of discourse be the set of animals, and let $B(x)$ denote "x is a bird" and $F(x)$ denote "x has feathers", then $$\forall x(B(x)\rightarrow F(x))$$ asserts something only about birds; what is true about non-birds isn't in question here. If you'd be more comfortable, perhaps add a third conjunct: $q\in r \land p\in \mathbb Q \land p<_\mathbb Q q$. That way, the conclusion is true for every $p, q$ that make the antecedent true.
what kind of b should be if $\cos(bx)=\sin^2x+1$
Well, since $|\cos y| \le 1$ for all $y$, and $\sin^2 x \ge 0$, we have $\cos(bx) \le 1 \le \sin^2 x + 1$; so equality forces $\sin^2 x = 0$ and $\cos(bx) = 1$.
Problem on Central limit theorem and Law of large numbers
By Borel-Cantelli, almost surely, only finitely many of $X_n$ are non-zero. Therefore, $$ a_n^{-1}\sum_{i=1}^n X_i $$ converges to zero almost surely (and thus in probability and in distribution) for any sequence $a_n$ that tends to infinity.
Hyperdoctrines and Contravariance
Baez and Weiss are talking about something Baez is calling "polyadic Boolean algebras", which are functors from $\mathbf{FinSet}\to\mathbf{Bool}$, for instance, $n\mapsto \mathcal{P}(V^n)=\mathbf{Set}(V^n,\{0,1\})$ for a fixed set $V$. This is covariant since a function $f:n\to m$ induces a function $V^f:V^m\to V^n$, which then induces the pullback function $\mathcal{P}(V^f):\mathcal{P}(V^n)\to \mathcal{P}(V^m)$. There is a closely related hyperdoctrine $\mathcal P:\mathbf{Set}^{\mathrm{op}}\to \mathbf{Bool}$ which is just $\mathcal P$ itself, but these aren't trying to talk about quite the same thing.
Is the ideal $(2,x+1)$ principal in $\mathfrak{o}_K$?
Hint $\ $ You have $\,(2,x\!+\!1)\supseteq (x\!+\!1)\,$ and both have norm $= 2,\,$ hence $\,\ldots$ Or, directly $\ f(-1) = -2\,$ so $\,f = (x\!+\!1)g - 2,\,$ so $\,{\rm mod}\ f\!:\ (x\!+\!1)g\equiv 2,\,$ i.e. $\,x\!+\!1\mid 2\ \ $
Notation for defining a homomorphism
The author means that the homomorphism $f$ is entirely determined by the value of $X$ under $f$ (i.e. the value $f(X)=a \in L$). Presumably $f$ is assumed to be a $K$-algebra homomorphism. For any polynomial $p(X) = b_n X^n +\ldots +b_0 \in K[X]$, using the properties of $K$-algebra homomorphisms, it follows that $$f(p(X)) = b_n a^n + b_{n-1}a^{n-1}+\ldots + b_0 \in L$$ That $f$ is entirely determined by the point $f(X)$ and that this choice is unrestricted is what we mean when we say that $K[X]$ is the free $K$-algebra on one generator.
Definition of 'equivalent atlases' and 'smooth structure'
The relation is only an equivalence on the set of smooth atlases. Nonsmooth atlases can essentially be excluded entirely, since no nonsmooth atlas is equivalent to a smooth one. As a side note, it's also common to define smooth structures as maximal smooth atlases, i.e. atlases which contain every smoothly compatible chart. This is equivalent (the maximal atlas is the union of the equivalence class) and may or may not be more intuitive.
Finding average value of function over given region
HINT: $$\int_{-4}^{8}\int_{2}^{-\frac{1}{2}x+6}f(x,y)\,dy\,dx$$ For the average value, divide this by the area of the region. The rest is only "hard work"
Integral over a contour in complex plane
You can use Cauchy's Integral Formula: \begin{equation} f(z) = \frac{1}{2 \pi i} \int_{C(z_{0},r)}\frac{f(w)}{w-z}dw \end{equation} To use this formula, $f$ must be holomorphic on an open set $\ U$ containing $\overline{D}(z_{0}, r)$, and $z \in D(z_{0}, r)$. In your case, $f(w) = w^{2}, \ z_{0} = 0, \ r = 2, \ z = 1$, so you can use the formula. Onto your second question: the same formula can be generalized to \begin{equation} \frac{f^{(n)}(z_{0})}{n!} = \frac{1}{2 \pi i} \int_{C(z_{0},r)}\frac{f(w)}{(w-z_{0})^{n+1}}dw \end{equation} where $f^{(n)}(z_{0})$ refers to the $n^{th}$ derivative. With this formula you can easily solve both questions.
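For a concrete sanity check (with $f(w)=w^2$, $z_0=0$, $r=2$, $z=1$ as above), a plain Riemann sum over the parametrized circle should return $f(1)=1$:

```python
import numpy as np

n = 4000
theta = 2 * np.pi * np.arange(n) / n
w = 2 * np.exp(1j * theta)            # the circle C(0, 2)
dw = 2j * np.exp(1j * theta)          # dw/dtheta
integral = np.sum(w**2 / (w - 1) * dw) * (2 * np.pi / n)
print(integral / (2j * np.pi))        # ~ 1.0 = f(1)
```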
Show that two discrete spaces are homeomorphic iff they have the same cardinality
If both sides have the discrete topology, then any map between them is continuous. So as long as you have a bijection, it is a homeomorphism. Thus this is true iff the underlying sets have the same cardinality.
Bounding the integral of the tails of a random variable.
We don't need a sequence, only the fact that if $X$ is a non-negative random variable, then $$\int_{\{X\geqslant M\}}X\mathrm d\mathbb P\leqslant \frac 1{M^\delta}\mathbb E[X^{1+\delta}].$$ To see that, notice that $$X\cdot\chi_{\{X\geqslant M\}}\leqslant \frac{X^{1+\delta}}{M^\delta},$$ then integrate.
Solve the following matrix equation.
Write the equation as $AX^2 B = C$, then $X^2 = A^{-1} C B^{-1} = D=\begin{bmatrix} 1+i & -1+2i \\ 1 & i\end{bmatrix}$. $D$ has distinct eigenvalues, hence is diagonalisable, hence it has a (not unique) square root $X$ such that $X^2 = D$.
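Numerically, one such square root can be obtained by diagonalizing $D$ and taking square roots of the eigenvalues (each sign choice gives a different root); a short sketch:

```python
import numpy as np

D = np.array([[1 + 1j, -1 + 2j],
              [1 + 0j, 1j]])
vals, V = np.linalg.eig(D)            # D = V diag(vals) V^{-1}, distinct eigenvalues
X = V @ np.diag(np.sqrt(vals)) @ np.linalg.inv(V)   # one choice of square root
print(np.allclose(X @ X, D))          # True
```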
How to prove the formula for the residue of $f$ at a pole of order $m$?
I think it's rather $$\operatorname{Res}_{z_0}(f)=\lim_{z\to z_0}\frac{1}{(m-1)!}\frac{d^{m-1}}{dz^{m-1}}\left[(z-z_0)^mf(z)\right]$$ To prove it, since $z_0$ is a pole of order $m$, $$f(z)=\frac{c_{-m}}{(z-z_0)^m}+\cdots+\frac{c_{-1}}{z-z_0}+c_0+\cdots$$ and thus $$(z-z_0)^mf(z)= c_{-m}+\cdots+c_{-1}(z-z_0)^{m-1}+c_0(z-z_0)^m+\cdots$$ Then, $$\frac{d^{m-1}}{dz^{m-1}}\left[(z-z_0)^mf(z)\right]=(m-1)!\,c_{-1}+(z-z_0)\big(m!\,c_0+\cdots\big).$$ Take the limit when $z\to z_0$ to conclude.
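A quick symbolic check of the formula on a hypothetical example, $f(z)=e^z/z^3$, which has a pole of order $m=3$ at $z_0=0$ with residue $1/2$:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.exp(z) / z**3                   # pole of order m = 3 at z0 = 0
m, z0 = 3, 0
lim = sp.limit(sp.diff((z - z0)**m * f, z, m - 1), z, z0) / sp.factorial(m - 1)
print(lim, sp.residue(f, z, z0))       # both 1/2
```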
Diagonalizability and linear independence of columns
No. Consider $\Phi=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, which is invertible but not diagonalizable. Moreover, $\Psi=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ is diagonalizable (since it is a diagonal matrix) but not invertible, hence its columns are linearly dependent.
relation between lowest and highest weight of representation of $\mathfrak{sl}_n $
Edit: Sorry, in an earlier answer I misread your post. This is indeed true, assuming that the fundamental weights $w_1, ..., w_{n-1}$ are ordered according to the standard ordering of the simple roots in the Dynkin diagram. Namely, in general, the lowest weight of a highest weight representation $L(\Lambda)$ is the negative of the highest weight of the dual representation $L(\Lambda)^*$; and this, in turn, is given by $-w_0(\Lambda)$ where $w_0$ is the longest element of the Weyl group. See Relationship between highest and lowest weights for a simple complex Lie algebra, finding highest weight of dual of a representation of a semisimple lie algebra, Highest weight of dual representation of $\mathfrak{sl}_3$, Irreducible Dual Representation. The upshot is: The lowest weight of $L(\Lambda)$ is $w_0(\Lambda)$. Now in a root system of type $A_{n-1}$, $w_0$ is given on the simple roots in the usual ordering by $w_0(\alpha_i) = -\alpha_{n-i}$, and is easily seen to do the same on the fundamental weights, i.e. $w_0(w_i) =-w_{n-i}$. Since its action is linear, this explains your formula.
How many letters to be generated to get a specific string?
This has nothing to do with the (presumed) normality of $\pi$. The probability that a given $8$-character string is ABCDEFGH is $26^{-8}$, so for the expected number of such strings to be $1$, by linearity of expectation, we need $26^8$ $8$-character strings, so we must have $26^8+7$ characters.
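Simulating the full $26^8$ case is impractical, but here is a small-alphabet analogue of the same computation: with alphabet size $b$ and a word of length $m$, a string of $b^m+m-1$ characters contains $b^m$ windows, so the expected number of occurrences is $1$:

```python
import random

b, word, trials = 2, "aab", 20000
m = len(word)
L = b**m + m - 1                      # analogue of 26^8 + 7
total = 0
for _ in range(trials):
    s = "".join(random.choice("ab") for _ in range(L))
    total += sum(s[i:i + m] == word for i in range(L - m + 1))
print(total / trials)                 # ~ 1.0
```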
Show that a $\sigma$-field $\mathcal F$ is countably generated if and only if there is some random variable $X$ s.t. $\mathcal F=\sigma(X)$
Proof of Sufficiency: Set $A_q := \{X \leq q\}$, where $q\in\mathbb{Q}$. It is easy to check $$\mathcal{F} = \sigma(X) = \sigma(A_q:q\in\mathbb{Q}).$$ Proof of Necessity: Suppose $\mathcal{F} = \sigma(A_n : n\in\mathbb{N})$. Set $$X := \sum_n 3^{-n}1_{A_n}.$$ Clearly, $X$ is $\mathcal{F}$-measurable, and so $\mathcal{F}\supset \sigma(X)$. Moreover, observe that $$\bigcup\left\{\left\{S + 3^{-n} \leq X \leq S + \tfrac{3}{2}\cdot 3^{-n}\right\} : S \in \left\{\sum_{i < n}\epsilon_i3^{-i}:\epsilon_i = 0\text{ or }1\right\}\right\} = A_n$$ (the tail $\sum_{i>n}3^{-i}$ is at most $3^{-n}/2$, so these bands are disjoint from those where $1_{A_n}=0$) and so $\mathcal{F}\subset \sigma(X)$.
$A\subseteq B, a\in A , a\notin B\setminus C\Rightarrow a\in C$
Your solution is correct. A typical more formal proof might look like this:

1. $a \in A$ — Reason: Given
2. $A \subset B$ — Reason: Given
3. $a \notin B\setminus C$ — Reason: Given
4. By contradiction: suppose $a \notin C$ — Reason: Contradiction hypothesis.
5. $a \in B$ — Reason: 1, 2
6. $a \in B \setminus C$ — Reason: 4, 5, definition of set-difference.
7. Contradiction — Reason: 3, 6 disagree.

Hence the contradiction hypothesis (4) is false, so $a \in C$. Normally this would be done in two columns, with the statements on the left and the reasons on the right, but I don't know how to make MathJax do that.
Is there a name for those elements $x$ of a commutative ring $R$ such that $Rx$ is maximal among all proper ideals?
These elements are called m-irreducible or i-atomic sometimes. In rings with zero-divisors, these are actually distinct from irreducible elements. I recommend the paper by D.D. Anderson and Valdes-Leon called Factorization in Commutative Rings with Zero-divisors if you want to read more. In a domain which is not a field, $(0)$ is not maximal among principal ideals (obviously) despite being irreducible, so the version you state is not quite true even in domains, although it is for non-zero elements. The paper I suggest gives you some non-trivial examples of all the various relations between the notions of irreducible, as well as examples which show the reverse inclusions are not true. Edit: I figured I would add the example just in case people cannot access that paper. In $\mathbb{Z} \times \mathbb{Z}$, the element $(0,1)$ is prime and therefore irreducible, but not maximal among principal ideals since it is properly contained in $(2,1)$.
Question about bounding PSD matrices.
The inequality is generally not true. Take, e.g., $$ A=\begin{bmatrix}2&-1\\-1&2\end{bmatrix}, \quad B=\begin{bmatrix}1\\1\end{bmatrix}. $$ Then $$ \mathrm{trace}[(B^TAB)^{-1}]=\frac{1}{2}< \frac{2}{3}=\frac{\mathrm{trace}(A^{-1})}{\lambda_{\max}(B^TB)}. $$
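For completeness, a quick numerical verification of this counterexample:

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.array([[1.0], [1.0]])
lhs = np.trace(np.linalg.inv(B.T @ A @ B))
rhs = np.trace(np.linalg.inv(A)) / np.linalg.eigvalsh(B.T @ B).max()
print(lhs, rhs)   # 0.5 < 0.666..., so the claimed inequality fails
```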
Find $n$ numbers which LCM is equal to $k$
(I suppose that "there is no number that equal to k" should mean "find $n$ proper divisors of $k$".) As noted by Henning, you can't have more of them than the number of divisors (in $\mathbb N$ or $\mathbb Z$ according to your wish). An algorithm: For the first two numbers, take $k/p$ and $k/q$ for two distinct primes $p,q$ that divide $k$. For the rest, take any divisors of $k$. If $k$ is a power of a prime, then the lcm of any of its proper divisors will be less than $k$, so you can't do it no matter what. Otherwise, you have $\operatorname{lcm}(k/p,k/q)=k$, and then adding any divisor of $k$ doesn't change this. Example. Let $k=120$. It has three prime divisors; we'll choose $p=5$ and $q=3$. Then the sequence starts with $120/5=24$ and $120/3=40$, for which indeed $\operatorname{lcm}(24,40)=120$. Then we add the other divisors in any order, obtaining for instance $$24,40,1,2,3,4,5,6,8,10,12,15,20,30,60.$$
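Here is a short Python sketch of the algorithm, reproducing the $k=120$ example:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

k, p, q = 120, 5, 3                  # p, q: distinct primes dividing k
first = [k // p, k // q]
rest = [d for d in range(1, k) if k % d == 0 and d not in first]
seq = first + rest                   # all proper divisors, the lcm-fixing pair up front
print(seq)
print(reduce(lcm, seq) == k)         # True
```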
Subspaces of $\mathbb{R}^3$ homeomorphic to quotient spaces
I am not quite sure about your answers. (a) $\Bbb R^2 / \Bbb D^2$ looks like $\Bbb R^2$ to me Consider the map $$(x,y,0) \mapsto ((\sqrt{x^2+y^2}-1)x,(\sqrt{x^2+y^2}-1)y,1) \text{ if } x^2+y^2\geq 1 \\ $$ and map $\Bbb D^2$ to $(0,0,1)$. Use the gluing lemma to obtain the quotient map and then use the Fundamental Theorem of Quotient Topology to conclude! (b) $\Bbb R^2/S^1$ looks like $\Bbb R^2 \lor S^2$ Consider the map $$(x,y,0) \mapsto ((\sqrt{x^2+y^2}-1)x,(\sqrt{x^2+y^2}-1)y,1) \text{ if } x^2+y^2\geq 1 \\ $$ For $\Bbb D^2$, consider the boundary identification map, i.e. $q: \Bbb D^2 \to S^2 $ (that identifies the boundary $\partial \Bbb D^2 =S^1$ to a point), and glue these two maps suitably, i.e. make $q(S^1)=(0,0,1)$. This will give a quotient map; then use the Fundamental Theorem of Quotient Topology to conclude! (c) $\Bbb R^2/(\Bbb R^2 -\Bbb B^2) $ looks like $S^2$ For $\Bbb D^2$, consider the boundary identification map, i.e. $q: \Bbb D^2 \to S^2 $ (that identifies the boundary $\partial \Bbb D^2=S^1$ to a point), and simply map $\Bbb R^2- \Bbb B^2$ to the point where $q$ maps $S^1$; you have your quotient map (again by the gluing lemma) and then use the Fundamental Theorem of Quotient Topology to conclude! After these identifications I guess it is trivial to just give some coordinates and make a concrete subspace.
Equivalence of definition for faithful functor?
Well, here you proved that if the functor $N\mapsto M\otimes N$ is faithful, then for all $N$, $M\otimes N = 0 \implies N = 0$. For the converse : Assume that $M\otimes N = 0 \implies N=0$. Let $N,N'$ be two modules, and assume that $f,g : N\to N'$ are such that $id_M \otimes f = id_M \otimes g$. Without loss of generality (consider $f-g$) we may assume $g=0$. Then let $K= \mathrm{Im}f$. Clearly $M\otimes K = 0$ (because $id_M \otimes f = id_M \otimes 0 = 0$). By the property, $K=0$, that is, $f$ is the zero map, and so $f=g$. Therefore, the functor is faithful.
If $S$ is a vector space, then $S$ is a vector subspace of $S$
If you're looking to show that if $S$ is a vector space, then $S$ is a subspace of $S$, then what you did is good. Your argument needs to go: $S$ is a vector space (by assumption), all elements of $S$ are elements of $S$, thus $S$ is a vector subspace of $S$. This is pretty trivial, though. In fact, $S$ and the zero subspace $\{0\}$ are often called the trivial subspaces of $S$ and many times are just ignored.
Prove by induction area of koch snowflake
Do you have to use induction? Let's say that the flake after $j$ steps has $n_j$ sides of length $s_j$. Since you replace each side with four of length $s_j/3$, thereby adding an equilateral triangle of side $s_j/3$ on each side, you end up with $s_{j+1} = s_j/3$, $n_{j+1} = 4n_j$, and the area added in this step is $n_j a_0 s_j^2/9$ (with $a_0$ the area of a unit-side equilateral triangle). Now it's easy to see from this that $s_j = s_03^{-j}$ and $n_j = n_04^j$, so the area added is $$\sum_{j=0}^N a_0 n_j s_j^2/9 = a_0\sum_{j=0}^N n_0 4^j s_0^2 3^{-2j}/9 = a_0n_0s_0^2\sum_{j=0}^N(4/9)^j/9$$ Then it's just a matter of putting it into the formula for geometric series: $$a_0n_0s_0^2{1 - (4/9)^{N+1}\over1-4/9}/9 = a_0n_0s_0^2{1-(4/9)^{N+1}\over 5/9}/9$$ This is however only the added area; the original triangle has to be added too: $$A_N = a_0s_0^2 + a_0n_0s_0^2 {1-(4/9)^{N+1}\over 5/9}/9$$ Now we start with a triangle, so $n_0=3$, and with unit side length we get: $$A_N = a_0\left( 1 + 3{1-(4/9)^{N+1}\over 5/9}/9\right) = a_0\left(1 + 3/5 - {3\over 5}(4/9)^{N+1}\right) = a_0\left({8\over5} - {3\over 5}\left({4\over 9}\right)^{N+1}\right)$$
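If you want to convince yourself numerically, here is a short comparison of the step-by-step accumulation with the closed form; the closed form with index $N$ corresponds to having performed the additions $j=0,\dots,N$:

```python
from math import sqrt

a0 = sqrt(3) / 4                # area of a unit-side equilateral triangle
area, n, s = a0, 3, 1.0         # initial triangle: area, side count, side length
N = 9
for j in range(N + 1):          # perform the additions j = 0..N
    area += n * a0 * s**2 / 9
    n, s = 4 * n, s / 3

closed = a0 * (8/5 - (3/5) * (4/9)**(N + 1))
print(area, closed)             # agree
```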
Necc. and suff. conditions for a canonical transformation.
Hints: Consider a coordinate transformation of the form $$P_i ~:=~ p_j A^j{}_i + B_{ij}q^j , \qquad Q^i := C^i{}_j q^j. \tag{1}$$ A canonical transformation is in this context a symplectomorphism. It should take Darboux coordinates $(q^i,p_j)$ into Darboux coordinates $(Q^i,P_j)$, so that $$ \{Q^i,Q^j\}_{q,p}~=~0, \qquad \{Q^i,P_j\}_{q,p}~=~\delta^i_j, \qquad \{P_i,P_j\}_{q,p}~=~0. \tag{2}$$ Inserting ansatz (1) into eqs. (2) leads to $$ \text{Automatically satisfied}, \qquad CA ~=~{\bf 1}_{n \times n} ,\qquad BA~=~(BA)^t,\tag{3}$$ respectively.
Show $\sin(\theta) + \sin(2\theta) + ... +\sin(n\theta) = \frac {\sin(n\theta/2)\sin((n+1)\theta/2)} {\sin(\theta/2)}$ using De Moivre's formula
$$\sin\theta + \sin2\theta + ... + \sin n\theta = \Im(1 + e^{i\theta} + e^{i2\theta} + \ldots + e^{in\theta}) $$ where $\Im (z)$ denotes the imaginary part of $z\in\mathbb C$. $1 + e^{i\theta} + e^{i2\theta} + \ldots + e^{in\theta}$ is a geometric progression with common ratio $e^{i\theta}$ and first term unity. I'm sure you can compute that $$1 + e^{i\theta} + e^{i2\theta} + \ldots + e^{in\theta} = \frac{1 - e^{i(n+1)\theta}}{1-e^{i\theta}}$$ Then, $$\sin\theta + \sin2\theta + ... + \sin n\theta = \Im\left(\frac{1 - e^{i(n+1)\theta}}{1-e^{i\theta}} \right)$$ Now, I'd ask you to use $e^{i\theta} = \cos\theta + i\sin\theta$ and half-angle trigonometric identities to simplify the denominator. You should be able to figure out $$\Im\left(\frac{1 - e^{i(n+1)\theta}}{1-e^{i\theta}} \right) = \frac {\sin(n\theta/2)\sin((n+1)\theta/2)} {\sin(\theta/2)}$$ If you face any trouble, let me know.
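If it helps, here is a quick numerical check of the target identity for arbitrary $n$ and $\theta$:

```python
import numpy as np

n, theta = 7, 0.9
lhs = sum(np.sin(k * theta) for k in range(1, n + 1))
rhs = np.sin(n * theta / 2) * np.sin((n + 1) * theta / 2) / np.sin(theta / 2)
print(lhs, rhs)   # agree
```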
Don't get why this is a transitive relation
A relation $R$ fails to be transitive only if you can find $a,b,c$ such that $(a,b)\in R$ and $(b,c)\in R$ are true but $(a,c)\in R$ is not true. Your relation is transitive since you cannot find such $a,b,c$ to contradict transitivity.
Not understanding the definition of a differential of a map.
The other answer covers it, but I think it is worthwhile to remark that the very definition of differential encodes how $(F_*)_p$ transforms tangent basis vectors, and so how it acts on $T_pN:$ if $p\in U\subseteq N$ and $(U,\phi)$ is a chart, then $\phi_*:T_pU\cong T_pN\to T_p \mathbb R^n$ is an isomorphism (because $\phi$ is a diffeomorphism) and so it makes sense to $\textit{define}$ for each $\ 1\le i\le n,\ \frac{\partial}{\partial x^i}:=\phi^{-1}_*(\frac{\partial}{\partial r^i})$, where $r^i$ are the standard coordinates on $\mathbb R^n$. The same analysis applied to $F(p)\in V\subseteq M$ using the chart $(V,\psi)$ gives tangent vectors $\frac{\partial}{\partial y^j}=\psi^{-1}_*(\frac{\partial}{\partial r^j})$ for $1\le j\le m$. Then, a direct calculation shows that $(F_*)_p\left (\frac{\partial}{\partial x^i}\right )=\sum ^m_{j=1}\frac{\partial (\psi^j\circ F\circ \phi^{-1})}{\partial r^i}\cdot \frac{\partial}{\partial y^j}$. The upshot of this is that the matrix of $F_*$ as a map from $T_pN$ to $T_{F(p)}M$ is precisely the Jacobian matrix of the function $\psi\circ F\circ \phi^{-1}$, which is a map between $\textit{Euclidean}$ spaces. And this result was ensured by the definitions.
Mathematical induction for Picard iteration
Use the induction hypothesis for $y_n$ inside the integral in the definition of $y_{n+1}$, a change of index in the sum and finally a "convenient $0$": \begin{align*} y_{n+1} &= 2+\int_0^x 2s(1+y_n(s))\,\mathrm ds, \\ &= 2+\int_0^x 2s\left(1-1+3\sum_{k=0}^n \dfrac{s^{2k}}{k!}\right)\,\mathrm ds , \\ &= 2+\int_0^x 6\sum_{k=0}^n \dfrac{s^{2k+1}}{k!}\,\mathrm ds , \\ &= 2+ 6\sum_{k=0}^n\int_0^x \dfrac{s^{2k+1}}{k!}\,\mathrm ds , \\ &= 2+ 6\sum_{k=0}^n \dfrac{x^{2k+2}}{k!(2k+2)}, \\ &= 2+ 3\sum_{k=0}^n \dfrac{x^{2(k+1)}}{(k+1)!}, \\ &= 2+ 3\sum_{k=1}^{n+1} \dfrac{x^{2k}}{k!}, \\ &= 2\color{red}{-3+3} +3\sum_{k=1}^{n+1} \dfrac{x^{2k}}{k!}= -1 + 3\sum_{k=0}^{n+1} \dfrac{x^{2k}}{k!}. \end{align*} Good job on your claim!
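The iteration itself is also easy to run symbolically, which is a nice way to double-check the induction:

```python
import sympy as sp

x, s = sp.symbols('x s')
y = sp.Integer(2)                                 # y_0 = 2
for _ in range(4):
    y = 2 + sp.integrate(2 * s * (1 + y.subs(x, s)), (s, 0, x))
print(sp.expand(y))   # 2 + 3*x**2 + 3*x**4/2 + ... = -1 + 3*sum_k x^(2k)/k!
```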
Surface area of revolution $x=\ln(y)$ about $x$ axis
Rewrite the argument of the square root function. Then use trigonometric substitution followed by integration by parts. In more detail: $$\int_1^e y\sqrt{1 + \frac{1}{y^2}}\,dy = \int_1^e \sqrt{y^2 + 1}\,dy.$$ Using trigonometric substitution with $y = \tan \phi$, we obtain $$\int_1^e \sqrt{y^2 + 1}\,dy = \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec^3 \phi \,d\phi.$$ Integrating by parts with $u = \sec\phi$, $dv = \sec^2\phi\,d\phi$ yields $$\int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec^3 \phi \,d\phi = e\sqrt{1 + e^2} - \sqrt{2} - \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec\phi \tan^2\phi\,d\phi.$$ Since $$\int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec\phi \tan^2\phi = \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec^3 \phi\,d\phi - \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec\phi \,d\phi,$$ it follows that \begin{align} 2 \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec^3 \phi\,d\phi &= e\sqrt{1 + e^2} - \sqrt{2} + \int_{\frac{\pi}{4}}^{\tan^{-1}(e)} \sec\phi\,d\phi \\ &= e\sqrt{1 + e^2} - \sqrt{2} + \ln\left(e + \sqrt{1 + e^2}\right) - \ln\left(1 + \sqrt{2}\right). \end{align} Hence the result on multiplying by $\pi$.
Function from normal groups, to normal groups of cosets.
Here is another approach, which may or may not be useful to you. Let $\mathcal{C}$ denote the set of normal subgroups of $G$ containing $N$, and let $\mathcal{D}$ denote the set of normal subgroups of $\gamma(G) = G/N$ Define $\psi: \mathcal{D} \to \mathcal{C}$ by $\psi(H) = \gamma^{-1}(H)$. We need to be sure that this definition makes sense, that is, for every $H \in \mathcal{D}$ that $\psi(H) \in \mathcal{C}$. It's clear that $\psi$ is well-defined on SUBSETS of $G/N$, but it's not immediately obvious $\psi(H)$ is a normal subgroup of $G$ containing $N$. Now if $n \in N$, then $\gamma(n) = e_{G/N} = N$, since $N = \text{ker }\gamma$. Since $N$ is the identity element of $G/N$, we have $N \in H$, whence $ n \in \gamma^{-1}(H)$. So, at the very least, $\psi(H)$ contains $N$ as a subset. Suppose $x,y \in \psi(H)$. This means $\gamma(x),\gamma(y) \in H$. Since $H$ is a subgroup of $G/N$, we have $\gamma(x)\gamma(y)^{-1} \in H$, thus $\gamma(xy^{-1}) \in H$ (since $\gamma$ is a homomorphism), so that $xy^{-1} \in \gamma^{-1}(H)$, and $\psi(H)$ is thus a subgroup of $G$. It is easy to see $\psi(H)$ is normal in $G$, for take any $x \in \psi(H)$ and any $g \in G$. Then $\gamma(gxg^{-1}) = \gamma(g)\gamma(x)\gamma(g)^{-1}$, and $\gamma(x) \in H$, which is normal in $G/N$, thus $\gamma(g)\gamma(x)\gamma(g)^{-1} \in H$. Since this equals $\gamma(gxg^{-1})$, we see $gxg^{-1} \in \gamma^{-1}(H)$, so $\psi(H)$ is normal in $G$. It's important to recognize that given $L \in \mathcal{C}$, $\phi(L)$ is just $L/N$. Furthermore, if $g \not\in L$, we cannot have $\gamma(g) \in L/N$, because this implies $g = xn$ for some $x \in L, n \in N$, whence $x^{-1}g \in N \subset L$, and thus $g \in Lx = L$. So the pre-image of $L/N$ under $\gamma$ is precisely $L$. That's the "ugly part", it's downhill from here: $\phi(\psi(H)) = \gamma(\gamma^{-1}(H)) = H$, for any $H \in \mathcal{D}$, and for any $L \in \mathcal{C}$: $\psi(\phi(L)) = \gamma^{-1}(\gamma(L)) = \gamma^{-1}(L/N) = L$, so that we see $\psi$ is a two-sided functional inverse to $\phi$ which is therefore bijective. I tried (to the best of my meager abilities) to shy away from invoking cosets, to keep the "layers" straight (we have group elements, cosets, subgroups, and sets of subgroups) and what is going on is just a correspondence between sets of subgroups. You can think of $\gamma$ as doing this: shrinking $N$ down to a point, but preserving the group structure "above $N$" faithfully. I find the notion of cosets a bit cumbersome myself, and prefer to think of quotient groups as "homomorphic images" (the homomorphism does some "collapsing" by pretending all elements of $N$ are the identity).
What would this the derivative be?
The chain rule can be extremely misleadingly taught and/or written down. I sympathize. Here, an application of the chain rule should look like $\frac{d}{dt} f(\mathbf{c}(t)) = \nabla f (\mathbf{c}(t)) \cdot \frac{d\mathbf{c}}{dt}(t)$
Prove there are no 3 collinear points on a non-degenerate conic
Change coordinates in the plane, to make the line $L$ joining the three points the $x$-axis (equation $y=0$). Let your conic $C$ now have equation $$Ax^2+Bxy+Cy^2+Dx+Ey+F=0.\tag1$$ The $x$-coordinates of the points of $C$ on the $x$-axis satisfy $$Ax^2+Dx+F=0.\tag2$$ We get this by putting $y=0$ into $(1)$. But $(2)$ can only have two or fewer solutions, unless all its coefficients are zero, that is $A=D=F=0$. Then $(1)$ becomes $$Bxy+Cy^2+Ey=0\tag3$$ which factorises as $y(Bx+Cy+E)=0$, so the conic is degenerate: it contains the whole line $y=0$.
Monty Hall Problem confusion
Probability relies heavily on the knowledge of the observer. The question might be, "What's the probability a randomly chosen man is taller than 6 feet?" If I know nothing more about him, the answer is one thing. If I know he's a basketball star, then the answer will be different. There are two observers lurking around the Monty Hall problem: the player, and the hypothetical alien who walks in later, after the first door is chosen and after Monty has revealed a goat behind one of the doors. To the alien, with his limited knowledge, the probability is 1/2 that the car is behind either door. But to the player, who knows a bit more information, the probability is different. He knows that he had a 1/3 chance with his first choice. So he knows that there's a 2/3 chance the car is behind one of the other two doors. If it is, then, by the rules of the game, he'll win the car by switching.
Recovering a restricted Lie algebra from its restricted enveloping algebra
This is a very old question but it may be useful for somebody reading this post to know that the set of primitive elements in $u(\mathfrak{g})$ is precisely $\mathfrak{g}$. A generalisation of this statement is proven for colour Lie $p$-superalgebras in Chapter 3, Theorem 2.11 of "Infinite dimensional Lie superalgebras" by Bahturin et al.
total order on finite dimensional vector space over $\mathbb{R}$
Let $<$ be such a total ordering on $V$ (which is, as in the question, finite dimensional). Then the set $C=\{v \in V: v\geq 0\}$ clearly is a convex cone. Since $C \cap (-C) =\{0\}$ and since $V=(-C) \cup C$, there is a $v_0 \in V$ which does not lie in the closure of $C$ (a dense convex subset of a finite-dimensional space is the whole space, so $\textrm{cl}(C)=V$ would force $C=V$, contradicting $C\cap(-C)=\{0\}$). Thus there is a nonzero linear form $l$ on $V$ such that $l \geq 0$ on $C$ (see below in "edit"). Now the zero set of $l$ is a hyperplane as Harald thought of: Let $l(v)<0$; then $v \not\in C$, thus $v<0$. Let $l(v)>0$ and assume $v \leq 0$; then $-v \in C$, which contradicts $l(-v)<0$. edit: This follows from biduality of convex cones, see http://arxiv.org/abs/1006.4894 (section 2.1): Assume that $C^*=\{0\}$ (notation as in the paper), then we have $V=\{0\}^*=(C^*)^*=\textrm{cl}(C)$, but we have $V \neq \textrm{cl}(C)$ ($\textrm{cl}(C)$ is the closure of $C$). I expect that you can find the proof of this biduality theorem (which uses the separation theorem that I mentioned in the first place) in a standard book about convexity, for example in Barvinok's "A First Course in Convexity".
How would this fly/bicycle question be solved using calculus?
It’s a geometric series. You can figure out how far the fly flies to meet the other bike for the first time. At that instant the bikes are much closer than at the start. The ratio of that distance to the original distance is the ratio of the series. There is a famous story about John von Neumann in which it is said someone asked him a question similar to this. He answered so quickly they said he must have known the trick. But he claimed to have summed the series.
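With hypothetical numbers (bikes $D$ apart, each at speed $v$, fly at speed $w$), summing the legs of the zig-zag numerically shows the geometric series converging to the same answer as the simple time argument:

```python
D, v, w = 20.0, 10.0, 15.0      # hypothetical: miles apart, bike mph, fly mph

total, gap = 0.0, D
for _ in range(60):             # 60 legs is plenty for convergence
    t = gap / (w + v)           # time until the fly meets the oncoming bike
    total += w * t
    gap -= 2 * v * t            # both bikes kept moving during that leg
print(total, w * D / (2 * v))   # both 15.0 miles
```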
Heuristic in quotient rings with linear relations?
For 5.4.(a), (b), (c): it can be a way to guess the solution, but then you still have to prove that it works. The fact that it works for these is not a coincidence, but reflects the fact that each time, modulo the $n$ that you find, $\alpha$ is some integer, so you "haven't actually adjoined anything". Indeed for (a), $3=0$, and $2\alpha =6$, so $2\alpha =0$ and so $\alpha = 3\alpha -2\alpha = 0$. For (b), you have $\alpha =10$, that's given; and for (c) there's only one ring in which $1=0$. For 4.3, you need to be careful: you won't always be in the situation where $\alpha$ (the class of $x$) is actually an integer, so you may have to deal with extensions of $\mathbb{Z}/(n)$
2-D Analogue of Pseudosphere
Mathematically the pseudosphere is a 2D surface of constant negative Gaussian curvature. It only looks 3D because of the way it is immersed in 3-space. There is no concept equivalent to negative curvature for 1D, so the nearest analogy is the ordinary 1-sphere of constant curvature, or circle.
nontrivial solutions in matrix
The system of linear equations $Ax=0$ has a non-trivial solution when the rank of the matrix $A$ is smaller than the number of unknowns. For your problem, adding a 3rd row or not, the rank of the matrix is still 2 and there are 3 unknowns, so there must be at least one non-trivial solution.
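As a numerical illustration with a hypothetical matrix: a non-trivial solution can be read off from the SVD whenever the rank is smaller than the number of unknowns:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])       # hypothetical: rank 2, three unknowns
_, _, Vt = np.linalg.svd(A)
x = Vt[-1]                            # right-singular vector of the zero singular value
print(np.allclose(A @ x, 0), x)       # True, a non-trivial solution
```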
How can I calculate the size of a square block and the number of rows and columns needed to fit a known number of blocks on a page of known size?
Note that if your grid is $6 \times 10$, you only have $5 \times 9$ gaps, so the resulting grid is $440 \times 260$. Otherwise your work is fine. There is no magic in choosing the size of the square, the size of the gap, and the arrangement. If your grid is $w$ squares wide, $h$ squares high, squares have a side of $s$, and the gap is $g$, the grid will occupy $(ws+(w-1)g) \times (hs+(h-1)g)$ pixels. Pick the constants so it looks good to you.
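As a function, with the unique square size and gap ($s=35$, $g=10$) consistent with the $440 \times 260$ figure above:

```python
def grid_pixels(w, h, s, g):
    """Pixel footprint of a w-by-h grid of s-pixel squares with g-pixel gaps."""
    return (w * s + (w - 1) * g, h * s + (h - 1) * g)

print(grid_pixels(6, 10, 35, 10))   # (260, 440), matching the 440 x 260 grid above
```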
Names of 3 input logic gates
There are actually (obviously) 256 such 3-input logic gates. You've already got the 2 that depend on no inputs, and the 6 that depend on one input; there are also 30 additional ones that depend only on two of the three inputs (and have names like "A and B" for 0000 0011), and the other 218 depend on all three. Important ones you're missing include XOR 0110 1001 and Carry 0001 0111, which together can be used to create a 3-bit, or "full", adder.
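Those two gates really do make a full adder; enumerating the truth table reproduces the bit patterns quoted above:

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin                            # 3-input XOR: 0110 1001
    cout = (a & b) | (a & cin) | (b & cin)     # carry/majority: 0001 0111
    return cout, s

for k in range(8):
    a, b, c = (k >> 2) & 1, (k >> 1) & 1, k & 1
    print(a, b, c, full_adder(a, b, c))
```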
Meaning of Lyapunov function in dissipative systems
The intuition behind the Lyapunov function in dissipative systems has been extensively discussed by Ilya Prigogine in his theory of irreversible (non-equilibrium) thermodynamics, dissipative structures and complex systems, as well as by Hermann Haken in his theory of synergetics. The mathematical intuition or physical meaning of the Lyapunov function is indeed identical with the Hamiltonian, and it is sometimes used by some authors as a Hamiltonian. But particularly in the theory of dissipative systems, where systems are described by order parameters and the eigenvalues of the linear part (control parameters), the Lyapunov function gains an important meaning as a potential function of the so-called master modes ("potential" relates terminologically to potential energy in classical thermodynamics) that in a very comprehensive way lets us understand phase-transition phenomena, such as critical fluctuations, bifurcations and symmetry breaking near equilibrium while the system is nonlinear and dissipative. The potential function also includes some significant characteristics of the attractors of the system, such as oscillation and stability, because it involves the linear eigenvalue properties as well as the nonlinear properties of the system. Depending on the system (and the field of application: chemistry, biology, ...) the Lyapunov function can also help to understand oscillatory phenomena and pattern formation (spatial and temporal). In pattern recognition the situation reverses: there a Lyapunov function is defined which has as attractors the patterns against which we recognize. Also in chaos theory the Lyapunov function gains special meaning, where it can be shown that certain properties of the exponent are prerequisite to deterministic chaos. Beyond the deterministic theory, the Lyapunov function can also be found, with similar (but distinct) interpretations, in the stochastic theory of dissipative systems, for instance when applying Fokker-Planck equations or Langevin equations. In fractal theory we can trace relations between the Lyapunov function and the fractal dimension - but this statement I raise very carefully. ... What I can suggest is to read through the interdisciplinary work of both Prigogine and Haken. With regard to your example, the one ecosystem can be regarded, for instance, as a stable near-equilibrium but nonlinear dissipative system where the potential function, after a phase transition, phenomenologically turns from a "V" into a "W", which is again stable. The other system may have its control parameters changed in a way that the "V" turns into a saddle and becomes unstable. Hope this is a good answer!

Addendum: yes, your comment is correct - I was writing fast; it is the Lyapunov function I am talking about. However, beware that there is a deep relation between the Lyapunov function and the Lyapunov exponent. This is why I wrote "but this statement I raise very carefully". When a system is in deterministic chaos, the behaviour of the trajectories can be traced by the iterative (often fractal) process, taking the Lyapunov exponent. One can show, at least phenomenologically, that the Lyapunov exponent encapsulates (entropic) information from the modal trajectories (i.e. in eigenspace) that is identical with the information that, in a possibly existing corresponding deterministic model, is supported by the Lyapunov function.
A trivial case to exercise, for instance, would be to identify the Lyapunov function (Hamiltonian, potential) of a population growth model (which under certain control parameters, like 3 unstable modes, can turn chaotic) and then the Lyapunov exponent (trivially related to its eigenvalues).
Inspection not working with an ode
This "inspection method" only works for linear ODE with constant coefficients, as then the basis solution are of the type "polynomial times exponential" (trigonometric functions are also combinations of exponentials). If then also the right side is also a sum pf terms of that type, then the particular solution can be constructed by this method.
Proving the given identity
Using the generating function of the Chebyshev polynomials with $-1<\rho<1$, $$ \frac{1-\rho^2}{1-2\rho\cos \theta+\rho^2}=1+2\sum_{s=1}^\infty \rho^s\cos s\theta$$ the integral can be written as $$ I(\rho)=\frac{1}{\pi}\int_{-\pi}^\pi F(\theta)\sum_{p=0}^\infty\rho^{2p}\cos(n(2p+1)\theta)\,d\theta$$ Inverting the summations, $$I(\rho)=\frac{1}{\pi}\sum_{p=0}^\infty\rho^{2p}\int_{-\pi}^\pi F(\theta)\cos(n(2p+1)\theta)\,d\theta$$ With the Fourier-cosine coefficients of $F$: $$I(\rho)=2\sum_{p=0}^\infty\rho^{2p}F_{n(2p+1)}$$ As $F$ is continuous and periodic, it can be represented by its Fourier series: $$F(\theta)=\sum_{-\infty}^\infty F_re^{ir\theta}$$ The limit of $I(\rho)$ for $\rho\rightarrow 1$ exists: $$I(1)=2\sum_{p=0}^\infty F_{n(2p+1)}$$ Now, using the Fourier representation, one can form the summation of the rhs of the question: $$\frac{1}{2n}\sum_{k=1}^{2n}(-1)^k\sum_{r=-\infty}^\infty F_re^{ir\theta}=\frac{1}{2n}\sum_{r=-\infty}^\infty F_r\sum_{k=1}^{2n}(-z)^k$$ where $z=\exp(\frac{ir\pi}{n})$. The inner summation is zero except when $r=n(2p+1)$, in which case its value is $2n$. As the coefficients are even with respect to $r$, $$rhs=2\sum_{p=0}^\infty F_{n(2p+1)}$$
Proof of a formula for the period of a $\sin$ function subtracting distances
Assume, without loss of generality, that $x_1 > x_2.$ Since you accept that the period of $f(x)$ is $(2\pi/a)$, this means that you accept the goal that $(x_1 - x_2) = (2\pi/a)$ which means that you accept the goal that $(ax_1 - ax_2) = 2\pi$. The explanation that you excerpted had $t = ax$ which means that $t_1 = ax_1, t_2 = ax_2$. Therefore, $t_1 - t_2 = 2\pi.$ Addendum, responding to the OP's comment: We want to find the period for this function: $f(x)=\sin ax.$ Without loss of generality, $a > 0.$ In order to find the period of $f(x)$, you have to define what it means for a function to have a period. Given a generic function $g:\mathbb{R}\to\mathbb{R}$, the specification that $r$ is the period of $g$ denotes the following: Constraint-1: For all $y \in \mathbb{R}, g(y) = g(y + r)$. Constraint-2: $r$ is the smallest possible positive value that satisfies Constraint-1. For example, suppose $g(y) = \sin(y).$ Then the period of $g$ would be designated as $(2\pi)$, rather than [for example] $(4\pi)$. This means that $(2\pi)$ is the smallest positive value $r$ such that for any real number $y,$ $g(y) = \sin(y) = \sin(y+r) = g(y+r).$ Now, the challenge is to show that the period of $f(x)$ is $(2\pi/a).$ The way to do that is to show that $(2\pi/a)$ satisfies both Constraint-1 and Constraint-2. Constraint-1: For any $x \in \mathbb{R},$ $f(x + [2\pi/a]) = \sin(a[x + 2\pi/a]) = \sin(ax + 2\pi) = \sin(ax) = f(x).$ Thus, $(2\pi/a)$ satisfies Constraint-1. Constraint-2: Suppose that there exists $s \in \mathbb{R}$, such that $0 < s < (2\pi/a)$ and for all $x \in \mathbb{R}, f(x) = f(x + s)$. Let $u$ denote $(as) \implies u < (2\pi).$ Since $s$ is assumed to be the period of $f$, for all $x \in \mathbb{R},$ $f(x+s) = f(x) \implies \sin(a[x+s]) = \sin(ax + u) = \sin(ax).$ This means that for any $x \in \mathbb{R}$, $\sin(ax + u) = \sin(ax)$. This means that $u$ satisfies Constraint-1, with respect to the sine function. This yields a contradiction, since the sine function is known to have a period of $(2\pi)$. This means that with respect to the sine function, Constraint-2 implies that there can be no positive real number $u < (2\pi)$ that satisfies Constraint-1. Thus, the assumption that $f(x)$ had some period less than $(2\pi/a)$ yielded a contradiction. Therefore, $(2\pi/a)$ not only satisfies Constraint-1, but also satisfies Constraint-2.
Toy example for computing stable homotopy group.
The Freudenthal Suspension Theorem states that the suspension homomorphism $\pi_{n+k}(S^n) \to \pi_{n+k+1}(S^{n+1})$ is an isomorphism for $n > k + 1$. Taking $n = 3$ and $k = 1$, we see that the suspension homomorphism $\pi_4(S^3) \to \pi_5(S^4)$ is an isomorphism as $3 > 1 + 1$. As we continue to take the suspension homomorphism, we maintain the same value of $k$ but $n$ increases, so $n > k + 1$ is still true, and hence the homomorphisms will be isomorphisms. That is, we get $$\pi_4(S^3) \cong \pi_5(S^4) \cong \pi_6(S^5) \cong \pi_7(S^6) \cong \dots$$ This group, whatever it is, is what we call the stable homotopy group $\pi_1^S = \pi_1^S(S^0)$. In this case, the group is $\mathbb{Z}_2$, so we would write $\pi_1^S \cong \mathbb{Z}_2$. Note, when $n \leq k + 1$, the groups $\pi_{n+k}(S^n)$ and $\pi_{n+k+1}(S^{n+1})$ may not be isomorphic, and therefore $\pi_k^S$ may not be isomorphic to $\pi_{n+k}(S^n)$ for every $n$. For example, $\pi_3(S^2) \cong \mathbb{Z}$ but $\pi_4(S^3) \cong \mathbb{Z}_2 \cong \pi_1^S$. However, $\pi_k^S \cong \pi_{n+k}(S^n)$ for $n > k + 1$, which justifies the notation $$\pi_k^S \cong \lim_{n\to\infty}\pi_{n+k}(S^n) = \lim_{n\to\infty}\pi_{n+k}(\Sigma^nS^0).$$ In particular, $\pi_k^S \cong \pi_{2k+2}(S^{k+2})$; the case above corresponds to $k = 1$. In general, for an $(n-1)$-connected CW complex $X$, the suspension homomorphism $\pi_k(X) \to \pi_{k+1}(\Sigma X)$ is an isomorphism for $k < 2n-1$. Now suppose $X$ is a CW complex and consider the sequence of suspension homomorphisms $$\pi_k(X) \to \pi_{k+1}(\Sigma X) \to \pi_{k+2}(\Sigma^2 X) \to \pi_{k+3}(\Sigma^3 X) \to \dots$$ Note that $\Sigma^n X$ is $(n-1)$-connected, so the suspension homomorphism $\pi_{k+n}(\Sigma^n X) \to \pi_{k+n+1}(\Sigma^{n+1} X)$ is an isomorphism for $k + n < 2n - 1$ (i.e. $n > k + 1$). So we see that $$\pi_{2k+2}(\Sigma^{k+2} X) \cong \pi_{2k+3}(\Sigma^{k+3} X) \cong \pi_{2k+4}(\Sigma^{k+4} X) \cong \pi_{2k+5}(\Sigma^{k+5} X) \cong \dots$$ This group is the stable homotopy group $\pi_k^S(X)$ and one writes $$\pi_k^S(X) = \lim_{n\to\infty}\pi_{n+k}(\Sigma^n X).$$ Note that $\pi_k^S(X) \cong \pi_{2k+2}(\Sigma^{k+2}X)$. From this point of view, the stable homotopy group you were trying to calculate was $\pi_4^S(S^3)$; so you would write $\pi_4^S(S^3) \cong \mathbb{Z}_2$. To see that $\pi_4^S(S^3)$ coincides with $\pi_1^S = \pi_1^S(S^0)$, note that for $i < k$ $$\pi_k^S(\Sigma^i X) = \lim_{n\to\infty}\pi_{n+k}(\Sigma^n\Sigma^i X) = \lim_{n\to\infty}\pi_{n+k}(\Sigma^{n+i} X) = \lim_{n\to\infty}\pi_{(n+i)+(k-i)}(\Sigma^{n+i} X) = \lim_{N\to\infty}\pi_{N + (k-i)}(\Sigma^N X) = \pi_{k-i}^S(X).$$ Therefore, $\pi_4^S(S^3) = \pi_4^S(\Sigma^3 S^0) = \pi_1^S(S^0) = \pi_1^S$.
Spaces with same homotopy type and deformation retract
Try to have a look at Corollary 0.21 in http://www.math.cornell.edu/~hatcher/AT/AT.pdf
Calculating $\sum_{k=1}^\infty (2+\frac{1}{k}-\frac{3}{k+1})$
HINT: When considering $\displaystyle \sum_n a_n$, if $a_n$ does not go to $0$, then the series diverges.
Derivative of quadratic form 3
So another detailed justification is the following. Expand $\alpha = x^TAx$ as you did, namely: $$\alpha = x^TAx = \sum_{i=1}^{m}\sum_{j=1}^{m}a_{ij}x_ix_j $$ Differentiating w.r.t. the $k^{th}$ element of $x$ (say $x_k$), you get $$\frac{\partial \alpha}{\partial x_k} = 2a_{kk}x_k + \sum\limits_{i=1 \\ i\neq k}^m (a_{ik} + a_{ki})x_i = \sum\limits_{i=1}^m (a_{ik} + a_{ki})x_i \qquad k = 1\ldots m$$ So $$\frac{\partial \alpha}{\partial x}= \begin{bmatrix} \frac{\partial \alpha}{\partial x_1} \\ \frac{\partial \alpha}{\partial x_2} \\ \vdots \\ \frac{\partial \alpha}{\partial x_m} \end{bmatrix} = \begin{bmatrix} \sum\limits_{i=1}^m (a_{i1} + a_{1i})x_i \\ \sum\limits_{i=1}^m (a_{i2} + a_{2i})x_i \\ \vdots \\ \sum\limits_{i=1}^m (a_{im} + a_{mi})x_i \end{bmatrix} =(A + A^T)x $$
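A finite-difference check of the result $(A+A^T)x$, for a random non-symmetric $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))        # not necessarily symmetric
x = rng.standard_normal(m)

grad = (A + A.T) @ x                   # claimed gradient of x^T A x
eps = 1e-6
num = np.array([((x + eps*e) @ A @ (x + eps*e)
               - (x - eps*e) @ A @ (x - eps*e)) / (2 * eps)
                for e in np.eye(m)])   # central differences
print(np.allclose(grad, num, atol=1e-5))   # True
```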
Why is the image of a C*-Algebra complete?
To expand on my comment, let us take as given the following two (nontrivial) facts: Lemma 1: If $U$ is a $C^*$-algebra and $I\subset U$ is a closed 2-sided $*$-ideal, then $U/I$ is a $C^*$-algebra. Lemma 2: If $U$ and $V$ are $C^*$-algebras and $\pi:U\to V$ is an injective $*$-morphism, then $\|\pi(A)\|=\|A\|$ for all $A\in U$. Now suppose $\pi:U\to V$ is any $*$-morphism of $C^*$-algebras; we wish to show $\pi(U)$ is closed. Let $I$ be the kernel of $\pi$; then $\pi$ factors through an injective $*$-morphism $\pi':U/I\to V$. By Lemma 1, $U/I$ is also a $C^*$-algebra, so we can apply Lemma 2 to $\pi'$ and conclude that $\pi'$ is an isometry. In particular, this implies the image of $\pi'$ is closed. But the image of $\pi'$ is the same as the image of $\pi$, hence the image of $\pi$ is closed. Let me now briefly sketch the proofs of the lemmas. In Lemma 1, it suffices to show that the usual quotient norm satisfies the $C^*$ identity. This can be shown by some calculations using an approximate identity for $I$ (the idea being that the best way to approximate $A\in U$ by elements of $I$ is to take $AB_\alpha$, where $B_\alpha\in I$ is an approximate identity for $I$). In Lemma 2, you can use the $C^*$ identity to reduce to the case that $A$ is self-adjoint, in which case the norms of $A$ and $\pi(A)$ can be computed as the spectral radius. It thus suffices to show that $\pi$ preserves spectra of self-adjoint elements. This can be shown using the continuous functional calculus (if $\sigma(A)$ were strictly larger than $\sigma(\pi(A))$, you could find two continuous functions on $\sigma(A)$ that agree on $\sigma(\pi(A))$, and this contradicts injectivity of $\pi$).
Finding a line that passes by 4 points, or as close a possible. Points are NOT random
For minimizing actual distances to the line (rather than the vertical distances in Argerami's answer) the method of finding a best fit is known as total least squares: https://en.wikipedia.org/wiki/Total_least_squares.
Estimation, bias, and mean square error
$\newcommand{\var}{\operatorname{var}}\newcommand{\E}{\operatorname{E}}$ \begin{align} \E(\bar X) & = \E\left( \frac {X_1+\cdots+X_n} n \right) \\[8pt] & = \frac 1 n \E(X_1+\cdots+X_n) = \frac 1 n (\E(X_1)+\cdots+\E(X_n)) \\[8pt] & = \frac 1 n \Big( n\E(X_1) \Big) = \E(X_1). \\[25pt] \var( \bar X ) & = \var \left( \frac {X_1+\cdots+X_n} n \right) \\[8pt] & = \frac 1 {n^2} \var(X_1+\cdots+X_n) = \frac 1 {n^2} (\var(X_1)+\cdots+\var(X_n)) \\[8pt] & = \frac 1 {n^2} \Big( n \var(X_1) \Big) = \frac 1 n \var(X_1). \end{align} Based on the above, you should be able to find the bias and the mean squared error.
How does it follow in this proof of the Rank-Nullity Theorem that $\mathcal{N}(A_0)=\mathcal{N}(A)\cap L=0$?
It doesn't follow from this: you use the fact that this intersection is $0$ and that equality to deduce that $A_0$ is an isomorphism. To show that this intersection has just the element $0$ in it, take an element in the intersection, and write it following the basis in two ways using the fact that it is in the intersection. What can you deduce from that?
What is the meaning of "open condition"?
It means that it defines an open set, as a strict inequality or a non-equality for a continuous function does: $\{x:f(x)>0\}$ and $\{x:f(x)\ne 0\}$ are open sets, thus one can call $f(x)>0$, $f(x)\ne 0$ "open conditions". The restriction to compact manifolds is to ensure that the underlying function is continuous. Here one could define the condition as $$ f(t)=\inf_{x\in M}|\det(I+A(t,x))|>0. $$ An easy counter-example for the non-compact situation is $$ f(t)=\inf_{x\in\Bbb R}|1-tx| $$ where $f(0)=1$ while $f(t)=0$ for $t>0$, a jump discontinuity.
Complexificantion of a Lie algebra.
I find it easiest to view the complexified Lie algebra $L_{\mathbb{C}}$ as the set of all formal elements $$A+iB$$ such that $A$ and $B$ lie in $L$. This is clearly a real vector space, and becomes a complex vector space if we define $$i(A+iB)=-B+iA$$ It's then not hard to show that the Lie bracket on $L$ uniquely extends to a Lie bracket on $L_{\mathbb{C}}$, using the $\mathbb{C}$-bilinearity. The calculations are very slightly messy, but quite intuitive. I agree that the tensor product viewpoint above is quite inelegant. It looks to me like notational overkill for a simple concept!
Polynomial and distinct roots
Let $a, b, c$ be the rational roots. Then we have $$a+b+c=2,\quad ab+bc+ca=-2, \quad m=-abc.$$ Replace $c=2-a-b$: $$a^2+b^2+ab-2(a+b)-2=0.$$ (If $a=b$, then $a$ is irrational.) Thus, $$\Delta_b=(a-2)^2-4(a^2-2a-2)=-3a^2+4a+12$$ must be the square of a rational. The rest is easy. Added after dinner. Since $$-3a^2+4a+12=-3\left(a-\frac{2}{3}\right)^2+\frac{40}{3}$$ is a square, let $\frac{3}{2}(a-\frac{2}{3})=\frac{p}{q}$ (suppose $\gcd(p,q)=1$); then $$-3p^2+30 q^2$$ is the square of an integer, say $3r$, thus $$-p^2+10q^2=3r^2.$$ It is clear that $2\mid p-r$. If both are even, then $q$ is even too, contradicting $\gcd(p,q)=1$. So $p$ and $r$ are both odd; then $p^2+3r^2\equiv 4 \pmod 8$, therefore $q$ is even, hence $10q^2\equiv 0 \pmod 8$, a contradiction with $p^2+3r^2=10 q^2$. If I didn't make a mistake, Conclusion: there are no such $a, b, c$, hence no such $m$.
Proving that a finite simple group (order < 100) is either abelian or has order 60
This is a classic problem which is an exercise in casework. I won't do the casework for you, but here are some key observations that will make your casework a lot easier. Let $G$ be a simple non-abelian group.

- $G$ can't have a prime power order (consider its center).
- $G$ is isomorphic to a subgroup of the alternating group on the left coset space of $G/H$ for any proper subgroup $H$.
- The Sylow number equals the index of the Sylow normalizer for any prime $p$.
- For any $H\subseteq G$, the order of $G$ divides $\frac{[G:H]!}2$.
- $G$'s order can't be $p\cdot q\cdot r$, where $p,q,r$ are distinct primes.
- $A_5$ is a simple non-abelian group.
Determining probabilities with limited knowledge - sample problem
Unfortunately you can only do some actual estimation once you have the information you mentioned for your second question.
Naive question: Why $B:Mon\rightarrow Top^{*}, \Omega: Top^{*}\rightarrow Mon$ is an adjoint functor?
This question is answered in my post on MathOverflow.
How to define the complex square root $ \sqrt{z} $?
There can be no continuous function $z\mapsto f(z)$ with $f^2(z)=z$ for all $z$ in a neighborhood $U$ of $0$. For given $z=re^{i\phi}\ne0$ the equation $w^2=z$ has exactly two solutions $w_1$, $w_2$ by the fundamental theorem of algebra. Using polar coordinates these can be written explicitly as $\pm \sqrt{r}e^{i\phi/2}$. It is then obvious that there are exactly two continuous square roots of $z$ in the complex plane minus the negative real axis ($=:H_r$), namely $$g_\pm(re^{i\phi})=\pm \sqrt{r}e^{i\phi/2}\qquad(-\pi<\phi<\pi)\ .$$ Similarly, there are exactly two continuous square roots of $z$ in the complex plane minus the positive real axis ($=:H_l$), namely $$h_\pm(-re^{i\psi})=\pm i \sqrt{r}e^{i\psi/2}\qquad(-\pi<\psi<\pi)\ .$$ Assume now for simplicity that $f(z)$ is a continuous square root of $z$ in a full neighborhood of $0$ containing the unit circle. Then $f(i)\in\{e^{i\pi/4},-e^{i\pi/4}\}$. If $f(i)=e^{i\pi/4}$ then $f(z)\equiv g_+(z)$ in $H_r$, and $f(z)\equiv h_+(z)$ in $H_l$ (check this!). But $-i\in H_r\cap H_l$, and $g_+(-i)=e^{-i\pi/4}\ne e^{3i\pi/4}=h_+(-i)$.
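One can watch this obstruction numerically: following a square root continuously once around the unit circle brings it back with the opposite sign, so no single-valued continuous branch can exist on a full neighborhood of $0$:

```python
import numpy as np

w = 1.0 + 0j                                    # sqrt(1), starting value
for phi in np.linspace(0, 2 * np.pi, 1000)[1:]:
    r = np.sqrt(np.exp(1j * phi))               # principal root of z = e^{i phi}
    w = r if abs(r - w) < abs(r + w) else -r    # pick the root nearest the previous value
print(w)                                        # ~ -1, not +1
```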
Calculate the area of the crescent
Assuming AD is the diameter of the smaller circle and C is the center of the larger circle. If $CD = x$ then $CE = 4+x$. Note that angle DEA is a right angle. We have by the similarity of triangles EDC and ACE that $\frac{x}{4+x} = \frac{4+x}{9+x}$ Solving gives $x = 16$. Thus the radius of the larger circle is $25$. The radius of the smaller circle is $\frac{x + 9+x}{2} = 20.5$ Area of the crescent = $\pi ((25)^2 - (20.5)^2) = 204.75 \times \pi$
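Both the similar-triangle equation and the final area can be checked mechanically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
sol = sp.solve(sp.Eq(x / (4 + x), (4 + x) / (9 + x)), x)
print(sol)                                    # [16]
R = 9 + sol[0]                                # 25
r = (sol[0] + 9 + sol[0]) / 2                 # 41/2 = 20.5
print(sp.simplify(sp.pi * (R**2 - r**2)))     # 819*pi/4 = 204.75*pi
```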
Baby Rudin vs. Abbott
They are both rigorous in that they both give complete proofs of their results. Rudin's problems, on the other hand, are challenging for newcomers. Abbott's problems are on a much lower level than Rudin's. I love Rudin's books, but there are mixed opinions on whether they should be used as introductions. I used Principles after my first year of analysis and loved it. I'd say first work through Abbott because he will likely provide more motivation. Later, get Rudin and push your boundaries of understanding. You might just become an analyst after that approach. It's what happened to me.
Changing variable of this double integral
If I may suggest a different substitution, with $x=2\rho\cos t,\,y=\sqrt{2}\rho\sin t$ the integral could be written as$$2\int_{1/2}^1\rho d\rho\int_{\arctan(2\sqrt{2})}^{\arctan(5\sqrt{2})}\tan t dt=\frac38\ln\frac{17}{3},$$in agreement with @DougM's result.
Condition for two elements of $A_n$ to be conjugate
Let $\sigma \in A_n$, and let $\sigma^{S_n}$ and $\sigma^{A_n}$ denote the conjugacy classes of $\sigma$ in $S_n$ and $A_n$ respectively. Then one has:

- If $\sigma$ commutes with an odd permutation, then $\sigma^{A_n} = \sigma^{S_n}$.
- If $\sigma$ does not commute with any odd permutation, then $$ \sigma^{S_n} = \sigma^{A_n}\sqcup ((12)\sigma(12))^{A_n} $$ and in particular, $|\sigma^{A_n}| = |\sigma^{S_n}|/2$ in this case.

Now, consider the 5-cycle $\sigma = (12345) \in A_5$, and let $C_{S_n}(\sigma)$ denote the centralizer of $\sigma$ in $S_n$. Since $|\sigma^{S_n}| = 24$, we have $$ |C_{S_n}(\sigma)| = 5 $$ Hence the only elements that commute with $\sigma$ are powers of $\sigma$ itself. In particular, $\sigma$ does not commute with an odd permutation, so $$ |\sigma^{A_n}| = 12 $$ by the above result.
Expected value and Variance calculation
To get you started: $$E(C^g)=\sum_{n=0}^\infty C^nP(g=n)=\sum_{n=0}^\infty C^n\mathrm e^{-\lambda}\frac{\lambda^n}{n!}=\mathrm e^{-\lambda}\sum_{n=0}^\infty \frac{(C\lambda)^n}{n!}=\ldots$$
solve system inequalities derived from a function
$$ \begin{cases} y^2-3 \geq 0\\ 16y^4-96y^2 \geq 0 \end{cases} $$ To satisfy condition 1: $y^2 - 3 \geq 0 \iff y^2 \geq 3 \iff y \leq -\sqrt{3}$ or $y \geq \sqrt{3}$. To satisfy condition 2: $16y^4-96y^2 = 16y^2(y^2-6) \geq 0 \iff y = 0$ or $y^2 \geq 6 \iff y = 0$ or $y \leq -\sqrt{6}$ or $y \geq \sqrt{6}$. Then, since $y=0$ violates condition 1, to satisfy both conditions $$y \in \left((-\infty, -\sqrt{3}]\cup[\sqrt{3}, +\infty)\right)\cap\left((-\infty, -\sqrt{6}]\cup\{0\}\cup[\sqrt{6}, +\infty)\right) = (-\infty, -\sqrt{6}] \cup [\sqrt{6}, +\infty)$$
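SymPy agrees with this solution set:

```python
import sympy as sp

y = sp.symbols('y', real=True)
sol = sp.reduce_inequalities([y**2 - 3 >= 0, 16*y**4 - 96*y**2 >= 0], y)
print(sol)   # a condition equivalent to y <= -sqrt(6) or y >= sqrt(6)
```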
How $\displaystyle \lim_{x \rightarrow \infty} \frac{x^c}{c^x}=0$, for $c>1$
Hint: $x^c = e^{c\ln x}, c^x = e^{x\ln c}$
Definition: expressions that can be evaluated versus those that can't
Here is the terminology from predicate logic. An expression such as $x + f(y)$ is called a term. In this particular term there are several free variables (e.g. $x$ and $y$ at least, and also $f$ if we view that as a variable for a function). If we replace the free variables by constants (also known sometimes as parameters), so that no free variables remain, the term is then a closed term. A term that has one or more free variables is called an open term. In the usual framework of predicate logic, each closed term with parameters from a model $M$ denotes a particular element of the model $M$. Not only do we need the model to have parameters to substitute for the variables, we also need the model to give meaning to symbols such as $+$ that appear in the term.
does this sequence necessarily converge?
No. For example, starting at $x_1 = 0$ let $x_{n+1} = x_n + 1/n$ until $x_n > 1$, then $x_{n+1} = x_n - 1/n$ until $x_n < 0$, then $+$ again ... Because $\sum_n 1/n$ diverges, you will have infinitely many terms $> 1$ and infinitely many $< 0$.
Divisibility by sum of digits
Robert Israel has given a hint (look at a sum of $n$ distinct powers of $10$) that quickly leads to a solution of the problem. We give a little added detail. Let $n=ab$, where $a$ is divisible by no prime other than possibly $2$ and/or $5$, and $b$ is relatively prime to $10$. By Euler's Theorem, we have $$10^{\varphi(b)}\equiv 1\pmod{b}.$$ Let $$S_b=10^{\varphi(b)}+10^{2\varphi(b)}+10^{3\varphi(b)}+\cdots +10^{n\varphi(b)}.$$ Then the decimal expansion of $S_b$ consists only of $0$'s and $1$'s, and the digit sum of $S_b$ is $n$. We have $10^{k\varphi(b)}\equiv 1\pmod b$, so $S_b\equiv n\equiv 0 \pmod{b}$. It is possible that $S_b$ is not divisible by $a$. Let $S$ be $S_b$ multiplied by a sufficiently high power of $10$ to ensure divisibility by $a$. The digit sum does not change.
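The construction is completely effective; here is a direct transcription in Python:

```python
from math import gcd

def phi(b):                      # Euler's totient, naive version
    return sum(1 for i in range(1, b + 1) if gcd(i, b) == 1)

def witness(n):
    """A multiple of n whose decimal digit sum is n, as in the proof."""
    b = n
    while b % 2 == 0: b //= 2    # strip the factors of 2 and 5,
    while b % 5 == 0: b //= 5    # leaving b coprime to 10
    S = sum(10**(k * phi(b)) for k in range(1, n + 1))   # digit sum n, divisible by b
    while S % n:                 # absorb the stripped 2s and 5s
        S *= 10                  # (multiplying by 10 keeps the digit sum)
    return S

for n in (7, 12, 45):
    S = witness(n)
    assert S % n == 0 and sum(map(int, str(S))) == n
print("ok")
```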
Exponential and Uniform Distribution Problems
For the decimal point to be 3, the number should be between 0.3 and 0.4, not 0.2 and 0.4. But this one you can solve with a very intuitive argument: between 0 and 1, each decimal "range" (0-0.1, 0.1-0.2, ...) has the same probability, so the answer should not depend on the fact that the question is about "3". Out of symmetry, the answer should be 0.1. Your solution for the second part is correct. Again, the probability of $\vert X \vert$ being in the range $[a,a+\Delta]$ (within $[0,3]$, of course) depends only on $\Delta$ and not on $a$ (it should be $\tfrac{2\Delta}{6}$ since $X$ is uniform), so $\vert X \vert$ is uniform on $(0,3)$. For the second question, simply calculate the distribution. For natural $t$: $$\Pr(Y=t)=\Pr(t-1<X<t)=\int\limits_{t-1}^t f_X(u)du = \cdots $$ Can you continue from here and show that the resulting probability looks like that of a geometric distribution?
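A one-line simulation of the first claim:

```python
import numpy as np

x = np.random.default_rng(1).uniform(0, 1, 10**6)
print(np.mean(np.floor(10 * x) == 3))   # ~ 0.1, the chance the first decimal is 3
```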
Which mappings are functors?
What you specified above is a bunch of mappings from some class to another. Let's denote one such by $f:C\to D$. Now, that mapping $f$ can always be extended to a functor. In fact in two (equally boring) ways. It all depends on how you want to consider the classes $C$ and $D$ as the objects of a category. One way is as the objects of a discrete category: the only arrows are the identity arrows. Another way is as the objects of an indiscrete category: for all objects $x,y$ there is precisely one arrow $x\to y$. If you equip $C$ and $D$, both, with either the discrete or the indiscrete category structure then $f$ trivially extends (uniquely) to a functor. Of course, things get more interesting when your class of objects comes with a naturally defined notion of morphisms. Then the question whether or not $f$ extends to a functor is less trivial. But there is no hope for an all encompassing answer to that due to the generality of the situation. Regardless, most often you start with categories and consider naturally arising functors. It is not so often that you concentrate only on the objects and just conjure some mapping and casually wonder 'can I extend it to a functor?'.
Types of Topologies: show how they compare to each other (finer, strictly finer, coarser...)
$\mathcal{T}_1$ is strictly coarser than $\mathcal{T}_2$, which is strictly coarser than $\mathcal{T}_3$. $\mathcal{T}_3$ is strictly coarser than both $\mathcal{T}_4$ and $\mathcal{T}_5$, but $\mathcal{T}_4$ and $\mathcal{T}_5$ are incomparable, and of course all of these topologies are strictly coarser than $\mathcal{T}_6$, the discrete topology. $[0,1) \in \mathcal{T}_4$ and $[0,1) \notin \mathcal{T}_5$ while $(0,1] \in \mathcal{T}_5$ and $(0,1] \notin \mathcal{T}_4$, hence the incomparability of these two topologies. That $\mathcal{T}_3 \subsetneq \mathcal{T}_4$ is clear by $(a,b) = \bigcup \{[x,b): a < x < b\}$ and $[0,1) \notin \mathcal{T}_3$. A similar argument can be made for $\mathcal{T}_3 \subsetneq \mathcal{T}_5$.
Beautiful triangle problem
This is a coordinate-based approach, making heavy use of tools from projective geometry. Without loss of generality, you can choose the coordinate system in such a way that the inscribed circle is the unit circle. On that you can use a rational parametrization, i.e. choose $a,b,c\in\mathbb R$ such that $A'=(1-a^2,2a)/(1+a^2)$ and likewise for $B'$ and $C'$. Or, even better, use homogeneous coordinates for these: $$ A'=\begin{pmatrix}1-a^2\\2a\\1+a^2\end{pmatrix}\qquad B'=\begin{pmatrix}1-b^2\\2b\\1+b^2\end{pmatrix}\qquad C'=\begin{pmatrix}1-c^2\\2c\\1+c^2\end{pmatrix} $$ From that, everything else follows. The tangent at a point on the circle is simply its polar line, which you get by multiplication with the matrix of the unit circle, namely the matrix $$U=\begin{pmatrix}1&0&0\\0&1&0\\0&0&-1\end{pmatrix}$$ You can join points using lines, and intersect lines to obtain points, simply by computing the cross product. $$A=(U\cdot B')\times(U\cdot C')$$ and likewise for $B$ and $C$, the other two corners of the triangle. Then you get $$G=(A\times A')\times(B\times B')$$ A circle through three points can be constructed as a conic through these points and the special points $I=(1,i,0)$ and $J=(1,-i,0)$ which have complex coordinates and lie on the line at infinity. Since they also lie on every circle, they are often called the ideal circle points. So we construct the matrix of the circle $\bigcirc GA'B'$ in several steps: \begin{align*} M_1 &= (A'\times I)\cdot(B'\times J)^T & M_2 &= (A'\times J)\cdot(B'\times I)^T \\ M_3 &= M_1 + M_1^T & M_4 &= M_2 + M_2^T \\ M_5 &= (G\cdot M_3\cdot G)M_4 - (G\cdot M_4\cdot G)M_3 & M_C &= iM_5 \end{align*} $M_1$ and $M_2$ describe degenerate conics through $A',B',I,J$. $M_3$ and $M_4$ are the same except using symmetric matrices. $M_5$ is a linear combination which also passes through $G$, so that's the circle. $M_C$ is a real matrix describing the same circle, avoiding all the purely imaginary entries of $M_5$. To intersect that circle with $AC$ (which is the same line as $B'C$) you compute $$C_A=(C^T\cdot M_C\cdot C)B'-2(B'^T\cdot M_C\cdot C)C$$ This is still a homogeneous coordinate vector of a point, e.g. some $(x,y,z)$. Dehomogenize that to $(x/z, y/z)$ then take the norm of that: \begin{align*} \lVert C_A\rVert =& \frac{\sqrt s}{t} \\ s =& \phantom+ (a^4b^4 + a^4c^4 + b^4c^4) \\& - 2\,abc\,(a^3b^2 + a^2b^3 + a^3c^2 + b^3c^2 + a^2c^3 + b^2c^3) \\& + 3\,a^2b^2c^2\,(a^2 + b^2 + c^2) \\& + 11\,(a^4b^2 + a^2b^4 + a^4c^2 + b^4c^2 + a^2c^4 + b^2c^4) \\& + 16\,abc\,(a^2b + ab^2 + a^2c + b^2c + ac^2 + bc^2) \\& - 20\,(a^3b^3 + a^3c^3 + b^3c^3) \\& - 20\,abc\,(a^3 + b^3 + c^3) \\& - 42\,a^2b^2c^2 \\& - 2\,(a^3b + ab^3 + a^3c + b^3c + ac^3 + bc^3) \\& + 3\,(a^2b^2 + a^2c^2 + b^2c^2) \\& + (a^4 + b^4 + c^4) \\ t =&\phantom+ (a^2 + b^2 + c^2) \\& - (ab + ac + bc) \\& + (a^2b^2 + a^2c^2 + b^2c^2) \\& - abc\,(a + b + c) \end{align*} This is the radius for one of the six points of your claimed circle. The other five can be obtained using the same computation, starting from a permutation of the three initial points. So the result will be the same except for a permutation of the parameters $a,b,c$. But the formula stated above is invariant under such a permutation, therefore all six points lie on a circle as claimed. Its center is the center of the coordinate system, i.e. the incenter of the triangle.
Its radius will be the fraction described by the lengthy expressions stated above.
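Since the computation above is intricate, here is a numerical sanity check (a sketch using NumPy; the helper names `touch` and `radius` are mine). It transcribes the formulas step by step and confirms that all six permutations of $(a,b,c)$ yield the same radius:

```python
import numpy as np
from itertools import permutations

U = np.diag([1.0, 1.0, -1.0]).astype(complex)   # matrix of the unit circle
I_ = np.array([1, 1j, 0])                       # ideal circle points I, J
J_ = np.array([1, -1j, 0])

def touch(t):
    # rational parametrization of the unit circle, homogeneous coordinates
    return np.array([1 - t*t, 2*t, 1 + t*t], dtype=complex)

def radius(a, b, c):
    Ap, Bp, Cp = touch(a), touch(b), touch(c)        # touch points A', B', C'
    A = np.cross(U @ Bp, U @ Cp)                     # vertices as intersections
    B = np.cross(U @ Cp, U @ Ap)                     # of pairs of tangents
    C = np.cross(U @ Ap, U @ Bp)
    G = np.cross(np.cross(A, Ap), np.cross(B, Bp))   # Gergonne point
    M1 = np.outer(np.cross(Ap, I_), np.cross(Bp, J_))
    M2 = np.outer(np.cross(Ap, J_), np.cross(Bp, I_))
    M3, M4 = M1 + M1.T, M2 + M2.T
    MC = 1j * ((G @ M3 @ G) * M4 - (G @ M4 @ G) * M3)  # circle GA'B'
    CA = (C @ MC @ C) * Bp - 2 * (Bp @ MC @ C) * C     # 2nd hit on line B'C
    x, y, z = CA
    return float(np.hypot((x / z).real, (y / z).real))

a, b, c = 0.4, 1.9, -2.6
print({round(radius(*p), 9) for p in permutations((a, b, c))})
# a single value: all six points are equidistant from the incenter
```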
Invertibility of Fractional Laplacian in Lebesgue spaces
To the first question, I don't think there is a way to extend the fractional Laplacian to all tempered distributions. The difficulty is exactly as you stated, that multiplication by $|\xi|^{\alpha}$ doesn't preserve the Schwartz class $\mathcal{S}$. That said, if the distribution is such that its Fourier transform agrees with a measure or function in a neighborhood of the origin, we can make things work by keeping the multiplication by $|\xi|^{\alpha}$ on the distribution rather than the Schwartz function, at least in that neighborhood. To the second question, there's a slightly different way of looking at things that avoids the difficulty in the calculation. To begin, even though the fractional Laplacian $(-\Delta)^{\alpha/2}$ doesn't preserve the Schwartz space, it does map this space into $L^r$ for $1 \leq r \leq \infty$. (By looking at the inverse Fourier transform of $|\xi|^{\alpha} \mathcal{F} \varphi(\xi)$ and using integration by parts, we can obtain generic decay at infinity on the order of $|x|^{-n - \alpha}$.) Then for $f \in L^p$, and letting $u = I_{\alpha} f \in L^q$ (with $1/q = 1/p - \alpha/n$), one (weak type) sense in which $(-\Delta)^{\alpha/2} u = f$ is that, for $\varphi \in \mathcal{S}$, $$ \int u(x) \overline{(-\Delta)^{\alpha/2} \varphi(x)} \, dx = \int f(x) \overline{\varphi(x)} \, dx. $$ Heuristically the same thing is going on as in the not-quite-formally-correct distributional calculation, of course. Perhaps the key thing to keep in mind is the basic fact that distributions given by $L^p$ functions act via integration, and so the integration point of view can help with some of the formalities.
What are the subsets of the unit circle that can be the points in which a power series is convergent?
Try this POST or, indeed, the whole thread. A quote: The convergence set has to be $F_{\sigma\delta}$, since the (pointwise) convergence set for any sequence of continuous functions is $F_{\sigma\delta}$. Herzog and Piranian (together) proved in 1949 that any $F_{\sigma}$ subset of $|z| = 1$ can be the convergence set of some power series with radius of convergence 1. Lukasenko proved in 1978 that some $G_{\delta}$ subsets of $|z| = 1$ cannot be the convergence set of any power series with radius of convergence 1. For a fairly elementary survey of the problem of characterizing the convergence set for a power series in $\mathbb C$ (complex numbers), see Thomas W. Korner, "The behavior of power series on their circle of convergence" [pp. 56-94 in "Banach Spaces, Harmonic Analysis, and Probability Theory", Springer Lecture Notes in Mathematics 995, Springer-Verlag, 1983]. This is a beautifully written paper that contains detailed proofs of virtually everything and is pitched at the level of a beginning graduate student in math. Dave L. Renfro
Find the solution to $y' = \frac{1+y}{x^2+x}$ that satisfies $y(1) = -1$
If you consider $x < 0$ too, there will not be a unique solution to the IVP. Indeed, any solutions to the differential equation on the three intervals $(-\infty, -1), (-1, 0), (0, \infty)$ can be spliced together to form a solution to the differential equation. So long as the function is constantly $-1$ on $(0, \infty)$ (with whatever other solution on the other two intervals), it will satisfy the IVP. For example, the following satisfies the IVP: $$y(x) = \begin{cases} \frac{-3x}{x + 1} - 1 & \text{if } x < -1 \\ \frac{2x}{x + 1} - 1 & \text{if } -1 < x < 0 \\ -1 & \text{if } x > 0. \end{cases}$$ The question presumably wants the unique solution locally around $x = 1$ and the largest domain on which it is unique, though it does not make this explicit.
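For concreteness, separation of variables shows where those pieces come from: $$\frac{dy}{1+y}=\frac{dx}{x(x+1)}\implies \ln|1+y|=\ln\left|\frac{x}{x+1}\right|+c\implies y=K\,\frac{x}{x+1}-1,$$ with a separate constant $K$ allowed on each of the three intervals; the condition $y(1)=-1$ forces $K=0$ only on $(0,\infty)$, and the example above takes $K=-3$ and $K=2$ on the other two intervals.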
How to show that $E\left[X\mathbb{1}_{\{X \geq \lambda\}}\right] \geq \lambda P(X \geq \lambda)$ for $\lambda >0$?
This is essentially Markov's inequality. The random variable $(X-\lambda)\mathbb{1}_{X \geqslant \lambda}$ is nonnegative, so its expectation is nonnegative, i.e. $$ 0 \leqslant E[(X-\lambda)\mathbb{1}_{X \geqslant \lambda}] = E[X\mathbb{1}_{X \geqslant \lambda}] - \lambda E[\mathbb{1}_{X \geqslant \lambda}], $$ and the latter term is just $P(X \geqslant \lambda)$.
How do I show convergence of this sequence?
I think I can approach the problem as follows. Define a new sequence $\{b_n\}$ by $b_1=a_1+a_2+\cdots+a_k$, $b_2=a_{k+1}+a_{k+2}+\cdots+a_{2k}$, ..., $b_n:=a_{(n-1)k+1}+a_{(n-1)k+2}+\cdots+a_{nk}$. Then the sequence $\{b_n\}$ is monotone and $\sum\limits_{n=1}^\infty b_n<\infty.$ Moreover, $(n+1)b_{n+1}\leq (n+1)b_n=nb_n+b_n$. If we now define $\rho_n=nb_n$, then $\rho_{n+1}\leq \rho_n+b_n$, and by induction, for $m>n$, $0\leq \rho_m\leq \rho_n+\sum\limits_{j=n}^{m-1} b_j$. Letting $m\to\infty$ and then $n\to\infty$, the tail sums vanish, so $\limsup\limits_m \rho_m \leq \liminf\limits_n \rho_n$ and $\{\rho_n\}$ has a limit. Suppose now that $\rho_n$ does not converge to $0$. Then there exists $\epsilon_0>0$ such that $\rho_n\geq \epsilon_0$ for all sufficiently large $n$. Thus $b_n\geq \frac{\epsilon_0}{n}$ for all such $n$, and hence $\sum\limits_{n=1}^\infty b_n =\infty$, a contradiction.
Theorems true "with probability 1"?
One sometimes hears "The Riemann Hypothesis is true with probability 1." What's meant is that the Riemann Hypothesis is true if a certain statement $S$ about the Möbius function $\mu(n)$ is true; $\mu(n)$ is a member of a very natural family of functions, a family on which there is a natural probability measure; with probability 1, a function chosen from this family satisfies $S$.
Must a generator matrix of a code $C$ have linearly independent rows?
A generator matrix doesn't have to have linearly independent rows, but linear independence of rows is generally a good property to have. If the rows are linearly dependent, then two or more different data vectors will be mapped onto the same code vector, and thus, even in the absence of any channel errors, the decoder will not be able to determine which data vector was transmitted. In the example that you have constructed, data vectors $00, 11, 22$ all are mapped onto the same code vector $000$.
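To make the collision concrete, here is a tiny sketch over $GF(3)$ (the matrix is my own hypothetical example with row 2 equal to twice row 1, not necessarily the one from the question):

```python
import itertools
import numpy as np

# Hypothetical generator matrix over GF(3) with linearly dependent rows
G = np.array([[1, 1, 1],
              [2, 2, 2]])   # row 2 = 2 * row 1 (mod 3)

for u in itertools.product(range(3), repeat=2):
    print(u, '->', np.array(u) @ G % 3)
# (0, 0), (1, 1) and (2, 2) all encode to [0 0 0],
# so the encoder is not injective.
```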
Solution of functional equation $f(x)=-f(x-a)$
We may assume $a=\pi$. Simple examples that come to mind are the sine and the cosine function. Unfortunately these functions have zeros, but a combination of the two allows the following construction: If $$f(x)\equiv-f(x-\pi)\qquad (*)$$ then the function $g(x):=e^{ix}f(x)$ is a $\pi$-periodic complex valued function (check it!). Conversely: If $x\mapsto g(x)\in{\mathbb C}$ is an arbitrary $\pi$-periodic function then $f(x):=e^{- ix} g(x)$ satisfies the functional equation $(*)$. Now I assume you are interested in solutions of $(*)$ that are real-valued for $x\in{\mathbb R}$. In this respect note that the real part ${\rm Re}f(x)$ of a solution of $(*)$ automatically is a solution of $(*)$ as well. Doing the computations one can say the following: Any real solution of $(*)$ can be written in the form $$f(x)=a(x)\cos x+b(x)\sin x\ ,$$ where the functions $a(\cdot)$ and $b(\cdot)$ are real-valued and $\pi$-periodic; but this representation is not unique.
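The "check it!" is one line: using $f(x+\pi)=-f(x)$ (which is $(*)$ shifted by $\pi$) and $e^{i\pi}=-1$, $$g(x+\pi)=e^{i(x+\pi)}f(x+\pi)=\left(-e^{ix}\right)\left(-f(x)\right)=e^{ix}f(x)=g(x).$$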
Is it always necessary to use the definition to show that a function is not differentiable at a given point?
In general, this must be shown from the definition. For instance, $f(x)=x^2\sin(1/x)$ (with $f(0):=0$) has derivative $2x\sin(1/x)-\cos(1/x)$ for $x\neq 0$ but is still differentiable at $0$. There are a few general things that can be said, though; while derivatives can be discontinuous, as the previous example shows, they cannot have jump discontinuities (by Darboux's theorem, a derivative has the intermediate value property). So if the formula giving the derivative on $B$ would lead to a jump discontinuity if extended to more of $A$, then you know the function is not differentiable at the missing points.
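For the example above, differentiability at $0$ follows directly from the definition, since $|\sin(1/h)|\leq 1$: $$f'(0)=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=\lim_{h\to 0}h\sin(1/h)=0,$$ even though $f'(x)=2x\sin(1/x)-\cos(1/x)$ has no limit as $x\to 0$.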
SPECIAL CASE first order PDE, characteristic curves method
This is only a partial answer (as requested by OP in the comments to the question). We start by treating $u(x_0,y)$ as a known function depending on $y$. Then the resulting linear first order PDE can be treated explicitly via the method of characteristics, resulting in: $$ u(x,y)=C(y-x)e^{\int_{x_0}^xa(s)u(x_0,s+y-x)ds} $$ for an arbitrary function $C$. Note that the lower bound for the integration is arbitrary (different values are soaked up by the function $C$) but I went with $x_0$ because it will make the following steps easier. Now basically you have two steps left: to remove the implicit dependency on $u$ and to match the initial data $f$. As mentioned, I got a partial result by tackling the implicit dependency on $u$ first: If we evaluate our solution at $x_0$ we achieve: $$ u(x_0,y)=C(y-x_0)\cdot1 $$ Thus we get an explicit representation of $u(x_0,y)$ which we can plug back into the formula: $$ u(x,y)=C(y-x)e^{\int_{x_0}^xa(s)C(s+y-x-x_0)ds} $$ If we evaluate at $y=0$: $$ u(x,0)=C(-x)e^{\int_{x_0}^xa(s)C(s-x-x_0)ds}=f(x)\\ \Leftrightarrow \int_{z}^{-x_0}a(t+x_0-z)C(t)dt=\ln(f(-z))-\ln(C(z)) $$ with $z=-x$ and $t=s-x-x_0$. The remaining task is to solve this integral equation for $C$, but I was only able to do this for constant $a$. Assume $a$ is constant; then the equation reduces to $$ a\cdot\int_{z}^{-x_0}C(t)dt=\ln(f(-z))-\ln(C(z))\\ \Rightarrow -a\cdot C(z)=-\frac{f'(-z)}{f(-z)}-\frac{C'(z)}{C(z)} $$ This ODE can be solved explicitly and the solution reads: $$ C(z)=\frac{f(-z)}{-a\cdot\int_{-x_0}^z f(-s) ds+1} $$
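The constant-$a$ solution can be checked against the ODE with a computer algebra system. A sketch with SymPy, using the concrete test choice $f(x)=e^x$ (an assumption made purely so the integral is explicit):

```python
import sympy as sp

z, a, x0, s = sp.symbols('z a x_0 s')
f_neg = sp.exp(-z)                 # f(-z) for the test choice f = exp
D = 1 - a * sp.integrate(sp.exp(-s), (s, -x0, z))
C = f_neg / D                      # candidate solution C(z)
# Note d/dz f(-z) = -f'(-z), so -f'(-z)/f(-z) = (d/dz f(-z)) / f(-z)
lhs = -a * C
rhs = sp.diff(f_neg, z) / f_neg - sp.diff(C, z) / C
print(sp.simplify(lhs - rhs))      # 0: C(z) solves the ODE for this f
```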
How could we define the existence of an object/element in the Euclidean space?
Elements (points) of Euclidean geometry are taken as primitive notions, with no further explanation. Their existence is postulated. Every other thing existing "in" the space would be described as collections of these points.
Are intersections of simple subsets of $\mathbb{R}^2$ necessarily simple?
The result becomes easy by using a nice (not fully trivial) result: A set $A\subseteq \mathbb R^2$ is simple if and only if its complement in $\overline{\mathbb R^2}:=\mathbb R^2\cup\{\infty\}$ is connected. Now if $A_i$ is simple for each $i\in I$, then $$ \overline{\mathbb R^2}\setminus \bigcap _{i\in I} A_i = \bigcup_{i\in I}\left(\overline{\mathbb R^2}\setminus A_i\right)$$ is the union of connected subsets that all have the point $\infty$ in common and hence is connected.
Given that $a+b+c=0$, show that $2(a^4+b^4+c^4)$ is a perfect square
Square $a+b+c=0$: \begin{eqnarray*} a^2+b^2+c^2=-2(ab+bc+ca). \end{eqnarray*} Square this: \begin{eqnarray*} a^4+b^4+c^4+2(a^2b^2+b^2c^2+c^2a^2)=4(a^2b^2+b^2c^2+c^2a^2)+8abc(a+b+c) \end{eqnarray*} The last term is zero; rearrange: \begin{eqnarray*} 2(a^4+b^4+c^4)=4(a^2b^2+b^2c^2+c^2a^2)=(a^2+b^2+c^2)^2. \end{eqnarray*} The last equality follows from the first equation.
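A quick numerical spot check of the identity (plain Python, values chosen arbitrarily):

```python
from math import isqrt

for (a, b) in [(3, 5), (-2, 7), (11, -4)]:
    c = -a - b                     # enforce a + b + c = 0
    n = 2 * (a**4 + b**4 + c**4)
    assert n == (a**2 + b**2 + c**2) ** 2
    assert isqrt(n) ** 2 == n      # n is a perfect square
```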
Prove that $f(x) > g(x)$ where both functions are convex and have the same value and slope at $0$
Indeed, in your setting, $f''\geqslant g''$ implies $f\geqslant g$. To show this, note that, since $f(0)=g(0)=0$, for every nonnegative $x$, $$f(x)=\int_0^xf'(t)\,\mathrm dt,\qquad g(x)=\int_0^xg'(t)\,\mathrm dt.$$ Using the hypothesis that $f'(0)=g'(0)=0$, an integration by parts of these identities yields $$f(x)=\int_0^x(x-t)f''(t)\,\mathrm dt,\qquad g(x)=\int_0^x(x-t)g''(t)\,\mathrm dt.$$ Since $x-t\geqslant0$ for every $t$ in $(0,x)$, this shows that $f(x)\geqslant g(x)$. Surely you can adapt the argument to the case when $x\leqslant0$... :-)
Rotation about an axis by matrix multiplication
No (it is called “non-commutativity”). First hint: rotate 90° around z, then 90° around x, then −90° around z. Once the first exercise is complete, try rotating −90° around z, then 90° around x, then 90° around z ☺
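Here is a sketch of both exercises with explicit rotation matrices (NumPy; the helpers `Rz`, `Rx` are mine, and matrices act on column vectors, so the rotation applied last is the leftmost factor):

```python
import numpy as np

def Rz(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

def Rx(deg):
    t = np.radians(deg)
    return np.array([[1, 0,          0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

# exercise 1: 90 deg about z, then 90 deg about x, then -90 deg about z
print(np.round(Rz(-90) @ Rx(90) @ Rz(90)))
# exercise 2: the same three rotations in the reverse order
print(np.round(Rz(90) @ Rx(90) @ Rz(-90)))
# the two results differ: rotations about different axes do not commute
```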
Prove $F_n(z)=\frac1{2i}\left(\left(1+\frac{iz}n\right)^n-\left(1-\frac{iz}n\right)^n\right)...$
We have $$\begin{align} F_n(z) &= \frac{1}{2i}\left(\left(1 + \frac{iz}{n}\right)^n - \left(1-\frac{iz}{n}\right)^n\right)\\ &= \frac{1}{2i}\left(\left(1 + iz + O(z^2)\right) - \left(1 - iz+ O(z^2)\right)\right)\\ &= z + O(z^2). \end{align}$$ The coefficient of $z$ on the left is $1$, and on the right in the last formula, the coefficient is $a$. However, the formula $$F_n(z) = z\prod_{k=1}^m (z^2-z_k^2)$$ is not correct; there's a constant factor missing. The coefficient of $z^n$ on the right is $1$, but on the left it is $$\frac{1}{2i}\left(\left(\frac{i}{n}\right)^n - \left(\frac{-i}{n}\right)^n\right) = \frac{1}{2in^n}\left(i^{2m+1} - (-i)^{2m+1}\right) = \frac{2i^{2m+1}}{2in^n} = \frac{(-1)^m}{n^n}.$$ So we have $$\frac{1}{2i}\left(\left(1 + \frac{iz}{n}\right)^n - \left(1 - \frac{iz}{n}\right)^n\right) = \frac{(-1)^m}{n^n}z\prod_{k=1}^m \left(z^2-n^2\tan^2\left(\frac{k\pi}{n}\right)\right),$$ since both sides are polynomials of degree $n$ with the same zeros and the same leading coefficient. Now factoring out the $-n^2\tan^2\left(\frac{k\pi}{n}\right)$ leads to $$\begin{align} \frac{1}{2i}\left(\left(1 + \frac{iz}{n}\right)^n - \left(1 - \frac{iz}{n}\right)^n\right) &= \frac{(-1)^m}{n^n}\prod_{k=1}^m \left(-n^2\tan^2\left(\frac{k\pi}{n}\right)\right)z \prod_{k=1}^m \left(1 - \frac{z^2}{n^2\tan^2\frac{k\pi}{n}}\right)\\ &= \frac{1}{n}\prod_{k=1}^m \tan^2\left(\frac{k\pi}{n}\right)\cdot z\prod_{k=1}^m \left(1 - \frac{z^2}{n^2\tan^2\frac{k\pi}{n}}\right), \end{align}$$ and comparing the coefficient of $z$, which is $1$ on the left and $$\frac{1}{n}\prod_{k=1}^m \tan^2\left(\frac{k\pi}{n}\right)$$ on the right proves $$\prod_{k=1}^m \tan \left(\frac{k\pi}{2m+1}\right) = \sqrt{2m+1}.$$
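The final identity is easy to spot-check numerically (a short sketch in Python; names are mine):

```python
from math import tan, pi, sqrt, prod

for m in range(1, 8):
    n = 2 * m + 1
    lhs = prod(tan(k * pi / n) for k in range(1, m + 1))
    print(m, lhs, sqrt(n))   # the two columns agree
```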
Linear transformations in Lebesgue measure theory
If $f$ is of rank less than $n$, the image of $f$ has measure zero and the property holds. Otherwise, let $A$ be a measurable set and recall that there is a Borel set $B \subseteq A$ such that $m(A \setminus B) = 0$. Since $f$ has full rank, $f^{-1}$ exists and is linear. Note that in $\mathbb{R}^n$, every linear function is continuous. Now write $f(A) = (f^{-1})^{-1}(B) \cup f(A \setminus B)$. The first set in the union is the continuous preimage of a Borel set, so it's Borel. The second set has measure zero. One way to see this is to notice that $f$ is Lipschitz, so there is some $k \in \mathbb{R}$ (depending on the Lipschitz constant of $f$ and on the dimension $n$) such that for any set $S \subseteq \mathbb{R}^n$, we have $k\,m(S) \geq m(f(S))$. But $m(A \setminus B) = 0$, so we're done.
Figuring out the mean and variance of the $n$th order statistic of a uniform distribution on $(\theta, \theta+1)$
For the first question regarding integration, let $u = y$ and $dv = n(y- \theta)^{n-1}\, dy.$ It follows that $du = dy$ and $v = (y - \theta)^n$, and integrating by parts we obtain $$\int ny(y - \theta)^{n-1} \, dy = y(y- \theta)^n - \int(y - \theta)^n \, dy \\ = y(y- \theta)^n - \frac{(y - \theta)^{n+1}}{n+1} + C.$$ Using this anti-derivative, the expected value of the $n$th order statistic is $$\begin{align}E(y_{(n)}) &= \int_\theta^{\theta+1} ny (y- \theta)^{n-1} \, dy \\ &= \left. \left[y(y- \theta)^n - \frac{(y - \theta)^{n+1}}{n+1}\right] \right|_\theta^{\theta + 1} \\ &= \theta + \frac{n}{n+1}\end{align} $$ The second moment is $$\begin{align} E(y_{(n)}^2) &= \int_\theta^{\theta+1} ny^2 (y- \theta)^{n-1} \, dy \end{align}.$$ A first integration by parts yields $$\begin{align} E(y_{(n)}^2) &= \left.y^2(y- \theta)^n\right|_\theta^{\theta+1} - \int_\theta^{\theta+1} 2y (y- \theta)^{n} \, dy \\ &= (\theta+1)^2 - \int_\theta^{\theta+1} 2y (y- \theta)^{n} \, dy \end{align}$$ A second integration by parts yields $$\begin{align} E(y_{(n)}^2) &= (\theta+1)^2 - \left. \frac{2y(y - \theta)^{n+1}}{n+1} \right|_\theta^{\theta + 1} + \frac{2}{n+1}\int_\theta^{\theta+1} (y- \theta)^{n+1} \, dy \\ &= (\theta+1)^2 - \frac{2(\theta+1)}{n+1} + \frac{2}{(n+1)(n+2)}\end{align}$$ The variance of the $n$th order statistic is then $$E(y_{(n)}^2) - E(y_{(n)})^2 = \frac{n}{(n+1)^2(n+2)} \underset{n \to \infty}{\longrightarrow} 0$$
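A quick Monte Carlo check of both formulas (a sketch with NumPy; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 5, 500_000
y_max = rng.uniform(theta, theta + 1, size=(trials, n)).max(axis=1)
print(y_max.mean(), theta + n / (n + 1))            # both ~ 2.8333
print(y_max.var(),  n / ((n + 1)**2 * (n + 2)))     # both ~ 0.01984
```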
Bijection function $f(x,y)= (x-y+1, 2x-2y+2)$
We have $f(0,0)=f(1,1)$. It is not an injection and hence not a bijection.
Finding the derivative of a quadratic function at a particular point
Equivalently to @T's post, $f'(-1)$ exists when the following limit exists and is finite: $$\lim_{x\to -1}\frac{f(x)-f(-1)}{x-(-1)}=\lim_{x\to -1}\frac{-5x^2+x-3+9}{x+1}=\lim_{x\to -1}\frac{-5x^2+x+6}{x+1}$$ $$=\lim_{x\to -1}\frac{-(x+1)(5x-6)}{x+1}=11$$