Computing the UMVUE for $U(0,2\theta+1)$
If you can find a function of $Y_n$ that is an unbiased estimator of $\theta$, then the Lehmann-Scheffé Theorem tells you that it will be the UMVUE, since $Y_n=\max_i X_i$ is a complete sufficient statistic here. So, first find $EY_n$ and then see if you can find a function of it that is unbiased. The density function for $Y_n$ is $n[F(x)]^{n-1}f(x)$ for $x \in [0,2\theta+1]$. Thus, $$EY_n=\int_0^{2\theta+1} xn \left[\frac{x}{2 \theta+1} \right]^{n-1}\frac{1}{2 \theta+1}dx=\frac{n}{(2 \theta+1)^n}\frac{(2 \theta+1)^{n+1}}{n+1}=\frac{n(2\theta+1)}{n+1}$$ Finally, $E\left[\frac{\frac{n+1}{n}Y_n-1}{2} \right]=\frac{\frac{n+1}{n}EY_n-1}{2}=\theta$. So, $\frac{\frac{n+1}{n}Y_n-1}{2} $ is the UMVUE.
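A quick Monte Carlo sketch in Python (not part of the proof; the values of $\theta$, $n$, and the number of replications are arbitrary choices) confirming unbiasedness:

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.5, 10, 200_000            # arbitrary test values
samples = rng.uniform(0, 2*theta + 1, size=(reps, n))
y = samples.max(axis=1)                      # Y_n, the sample maximum
est = ((n + 1)/n * y - 1) / 2                # the proposed UMVUE
print(est.mean())                            # should be close to theta = 1.5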
Prove that there exists $n\in \mathbb{N}$ s.t. $x_n=\frac12$
Let $f(x)=x^2-\frac{5}{2}x+\frac{3}{2}$, so that $x_{n+1}=f(x_n)$. Suppose that $x_n$ converges to $l$, where $l\in\lbrace \frac{1}{2},3\rbrace$. If $x_n=l$ for some $n$ we are done (the sequence is then eventually constant), so assume $x_n\neq l$ for all $n$. Then $$ y_n=\frac{x_{n+1}-l}{x_n-l}=\frac{f(x_{n})-f(l)}{x_n-l} \to f'(l) \textrm{ when } n \to \infty \tag{1} $$ Note that $f'(\frac{1}{2})=-\frac{3}{2}$ and $f'(3)=\frac{7}{2}$. So $|f'(l)| \geq \frac{3}{2}$ in both cases, and hence $|f'(l)| \gt \frac{5}{4}$ in both cases. It follows that there is an $n_0$ such that $|y_n|\gt \frac{5}{4}$ for all $n\geq n_0$. Then $$|x_{n+1}-l| \geq \frac{5}{4} |x_n-l| \textrm{ for all } n\geq n_0 \tag{2}$$ By induction, we deduce $$|x_n-l| \geq \big(\frac{5}{4}\big)^{n-n_0}|x_{n_0}-l| \textrm{ for all } n\geq n_0 \tag{3}$$ Since $x_{n_0}\neq l$, this would give $\lim_{n\to\infty}{|x_n-l|}=\infty$, which contradicts $x_n\to l$. So the assumption fails: $x_n=l$ for some $n$, which finishes the proof.
Topological dimension and derham cohomological dimension
De Rham cohomology is meaningless for general topological spaces. One can, however, use other cohomology theories to define cohomology and cohomological dimension. One can define Čech cohomology groups of $X$ with sheaf coefficients, $H^*(X; F)$, where $F$ is a sheaf on $X$. Then one defines the cohomological dimension of $X$ (say, over the integers) as $$ cd(X)=\sup \{n: \exists F \ \hbox{such that}\ H^n(X; F)\ne 0\}, $$ where $F$ ranges over sheaves of abelian groups on $X$. (One can also define cohomological dimension over fields in a similar fashion, by considering sheaves of vector spaces.) By taking the constant sheaf (where the abelian group is ${\mathbb Z}$) one obtains the trivial inequality $$ cd(X)\ge \dim_{const}(X):=\sup\{n: H^n(X; {\mathbb Z})\ne 0\}. $$ One more ingredient one needs to know is that $$ \dim(X)\ge cd(X) $$ where $dim$ is the covering dimension. This inequality is essentially a tautology once you get through the definitions. Thus, you get $$ \dim_{const}(X)\le \dim(X). $$ See "Dimension theory" by Engelking for (many more) details.
Conditional Expectation Die Roll
If two rolls are independent, then by symmetry (there is no difference between which one is the first roll and which one is the second), $E(X\mid Y=y)=\frac{y}{2}$ for $y \geq 2$.
The set of all 2x3 matrices A satisfying A(1;2;3)=(0;0) is a subspace of R^(2x3); give its dimension.
Saying that there are "four independent variables" does not prove that the set is a subspace. For example, the subset of all vectors in $\Bbb{R}^2$ such that $a_1+a_2+1=0$ is not a subspace despite there being an independent variable. Instead, you should use the subspace test. Let $A, B \in V$ and $c \in \Bbb{R}$. We must show that: $$A+cB \in V$$ This can be otherwise stated as: $$(A+cB)\pmatrix{1\\2\\3}=\pmatrix{0\\0}$$ Distribute on the left-hand side to get: $$(A+cB)\pmatrix{1\\2\\3}=A\pmatrix{1\\2\\3}+cB\pmatrix{1\\2\\3}$$ Now, we are given $A, B \in V$, so each of those matrices times $\pmatrix{1\\2\\3}$ is $\pmatrix{0\\0}$: $$(A+cB)\pmatrix{1\\2\\3}=\pmatrix{0\\0}+c\pmatrix{0\\0}=\pmatrix{0\\0}$$ This proves the equation we wanted, so $V$ is a subspace. Now, to find the dimension, we set up what you did the first time: $$\text{Let }A=\pmatrix{a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6}$$ $$\pmatrix{a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6}\pmatrix{1\\2\\3}=\pmatrix{0\\0}$$ $$a_1+2a_2+3a_3=0 \\ a_4+2a_5+3a_6=0$$ If we express $A$ as a column vector with $a_i$s rather than a matrix, the above corresponds to the following matrix-vector product equation: $$\pmatrix{1 & 2 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 3}\pmatrix{a_1\\a_2\\a_3\\a_4\\a_5\\a_6}=\pmatrix{0\\0}$$ Now, basically, we are finding the dimension of the null space of the matrix on the left. There are two pivot columns and six columns in all, giving us a dimension of $4$.
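A quick numerical sketch (using numpy, with the coefficient matrix hard-coded from above) confirming the nullity:

import numpy as np

M = np.array([[1, 2, 3, 0, 0, 0],
              [0, 0, 0, 1, 2, 3]])
rank = np.linalg.matrix_rank(M)
print(M.shape[1] - rank)   # nullity = 6 - 2 = 4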
Number theory book recommendation.
The greatest of all classical books on this subject is An Introduction to the Theory of Numbers, by G. H. Hardy and Edward M. Wright.
About Linear Systems on Curves.
Base-point-freeness. A rational map $C\dashrightarrow V\subset\mathbb P^n$ from a curve $C$ to a projective variety $V$ is defined at all the smooth points. In particular, taking $V=\mathbb P^n$, any linear series $f:C\dashrightarrow\mathbb P^n$ on a smooth projective curve is in fact base-point-free. Recall that the locus where $f$ is not defined coincides with the base locus of the corresponding linear series; but the latter is defined as the intersection of all the divisors in the system, namely $$\textrm{Bl}(f)=\bigcap_{H\subset\mathbb P^{n}}f^{-1}(H),\,\,\,\,\,\,\,\,\,\,\,\,\,\,H\,\,\textrm{all the hyperplanes in}\,\,\mathbb P^n,$$ and when $n=1$, a point $P\in C$ cannot satisfy $f(P)=x$ for all $x\in \mathbb P^1$, so in the case you are interested in the base locus is empty for this very concrete reason. Completeness. Let $\textrm{Pic}^{e}(C)$ denote the group of degree $e$ line bundles on $C$. As you said, the gonality being $d$ means that any $L\in \textrm{Pic}^{d-1}(C)$ is such that any vector subspace $U\subseteq H^0(C,L)$ has $\dim U\leq 1$. But we have a $g^1_d$, namely a couple $(\mathscr L,V)$ where $\mathscr L\in\textrm{Pic}^{d}(C)$ and $V\subseteq H^0(C,\mathscr L)$ has dimension 2. We want to show that in fact $h^0(C,\mathscr L)=2$. Let us take any point $P\in C$. Then, $L=\mathscr L(-P)\in \textrm{Pic}^{d-1}(C)$, so by what we said we must have $h^0(C,L)\leq 1$. But the inclusion $H^0(C,L)\subset H^0(C,\mathscr L)$ produces, in general, either no dimension jump, or a jump of one dimension. This means that $h^0(C,\mathscr L)\leq 2$, hence it is in fact $=2$.
Do these "exponentially convex" functions have a standard name?
I am not aware of a standard name, but you may perhaps try convex up to an exponential, if that makes sense in English (I am not a native English speaker...). You should of course precisely define this notion before using it.
What is the notation for a stepped range
The only thing that springs to mind for me is something like this: $$\big\{4k\mid k\in\{-1,0,...,3\}\big\} $$ Otherwise, you may have to define your own notation.
What is the relation between basis vectors of a vector space to those of its subspace?
Example: $V=\mathbb{R}^2$, with basis $\{(1,0),(0,1)\}$. Let $B=Span((1,1))$, a diagonal line. Neither basis vector from $V$ is in $B$. In short, the bold statement is incorrect. Try Math.SE next time instead of Yahoo answers. Hint for question that led to all this: Given finite-dimensional vector space $V$ and subspaces $A,B$, we have $$\dim(A)+\dim(B)=\dim(A+B)+\dim(A\cap B)$$
Calculate a given limit
Observe \begin{align} \frac{3+3^{1/2}+3^{1/3}+\ldots + 3^{1/n} -n}{\ln(n^2+1)} = \frac{(e^{\ln 3}-1)+(e^{\frac{1}{2}\ln 3}-1)+\ldots + (e^{\frac{1}{n}\ln 3}-1)}{\ln(n^2+1)}. \end{align} Using the fact that \begin{align} x\leq e^x-1 \leq x+2x^2 \end{align} when $0 \leq x\leq 2$, you can show that \begin{align} \alpha \ln 3\leq e^{\alpha \ln 3} -1 \leq \alpha \ln 3+2(\alpha \ln 3)^2 \end{align} for all $0\leq \alpha\leq 1$. Hence it follows \begin{align} \frac{\ln 3\cdot (1+\frac{1}{2}+\ldots+\frac{1}{n})}{\ln (n^2+1)}\leq&\ \frac{(e^{\ln 3}-1)+(e^{\frac{1}{2}\ln 3}-1)+\ldots + (e^{\frac{1}{n}\ln 3}-1)}{\ln(n^2+1)}\\ \leq&\ \frac{\ln 3\cdot (1+\frac{1}{2}+\ldots +\frac{1}{n})+2(\ln 3)^2\cdot(1+\frac{1}{2^2}+\ldots + \frac{1}{n^2})}{\ln(n^2+1)}. \end{align} Using the fact that \begin{align} \lim_{n\rightarrow \infty}\left(1+\frac{1}{2}+\ldots +\frac{1}{n} - \ln n\right) = \gamma \end{align} where $\gamma$ is the Euler-Mascheroni constant, i.e. the limit exists, it follows \begin{align} \lim_{n\rightarrow \infty}\frac{\ln 3\cdot (1+\frac{1}{2}+\ldots+\frac{1}{n})}{\ln (n^2+1)}= \lim_{n\rightarrow \infty}\frac{\ln 3\cdot [(1+\frac{1}{2}+\ldots+\frac{1}{n})-\ln n]+\ln 3\cdot \ln n }{\ln (n^2+1)} = \frac{\ln 3}{2} \end{align} and, since $\sum 1/k^2$ converges, \begin{align} &\lim_{n\rightarrow \infty}\frac{\ln 3\cdot (1+\frac{1}{2}+\ldots +\frac{1}{n})+2(\ln 3)^2\cdot(1+\frac{1}{2^2}+\ldots + \frac{1}{n^2})}{\ln(n^2+1)}\\ &= \lim_{n\rightarrow \infty} \frac{\ln 3\cdot [(1+\frac{1}{2}+\ldots +\frac{1}{n})-\ln n]+\ln 3\cdot \ln n+2(\ln 3)^2\cdot(1+\frac{1}{2^2}+\ldots + \frac{1}{n^2})}{\ln(n^2+1)} = \frac{\ln 3}{2}. \end{align} Hence by the squeeze theorem, we have that \begin{align} \lim_{n\rightarrow \infty} \frac{3+3^{1/2}+3^{1/3}+\ldots + 3^{1/n} -n}{\ln(n^2+1)}= \frac{\ln 3}{2}. \end{align}
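A numerical sanity check in Python (a sketch): the ratio approaches $\frac{\ln 3}{2}\approx 0.5493$, though the correction term decays only like $1/\ln n$, so the agreement at $n=10^6$ is still loose:

import math

n = 10**6
s = sum(3**(1/k) for k in range(1, n + 1)) - n
print(s / math.log(n**2 + 1))   # ~0.62 at n = 10^6, drifting toward...
print(math.log(3) / 2)          # ~0.5493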
Find all triples $(a,b,c)$ of integers
Hint: multiply the first equation by 2 and the second by 4, then add them and add 1 to both sides. You'll get $(1+2a)(1+2b)(1+2c) = 8075 = 5^2 \cdot 17 \cdot 19$.
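If one wants to sanity-check the hint numerically, here is a hedged brute-force sketch in Python (search bounds arbitrary; it finds triples satisfying the factored equation, which the original system may restrict further):

target = 8075
sols = set()
for a in range(-200, 201):
    for b in range(-200, 201):
        d = (1 + 2*a) * (1 + 2*b)   # always odd, never zero
        if target % d == 0:
            q = target // d          # q is odd since target and d are odd
            c = (q - 1) // 2         # so q = 1 + 2c for an integer c
            sols.add(tuple(sorted((a, b, c))))
print(sorted(sols))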
If $S : \mathbb{R}^3 \to \mathbb{R}^3$ is a shear map with respect to the unit vector $n$
HINT In the basis $\{m_1,n,m_2\}$ with $m_1\perp n$ and $m_2\perp n,m_1$ we have that $$T_S=\begin{bmatrix}1&k&0\\0&1&0\\0&0&1\end{bmatrix}$$
Boolean-like algebra
Such algebras are in fact called de Morgan algebras; they are often classified as a branch of algebraic logic. For more, see this wikipedia page. Hope this helps. Cheerio, and as always, Fiat Lux!!!
Application of Lebesgue density theorem
Apply the Lebesgue Differentiation Theorem to the locally integrable function $\chi_{E}$: \begin{align*} \chi_{E}(x)=\lim_{r\to 0}\dfrac{1}{m(B(x,r))}\int_{B(x,r)}\chi_{E}(y)\,dy,~~~~\text{a.e.}~x. \end{align*}
Relative abundance of rationals in Cantor's bijection?
Let $q_1, q_2, \ldots$ be the enumeration. Concerning the problem, consider $$\lim\limits_{k\to\infty}\frac{|\{x \in \mathbb{R}\mid n < x \leq n+1\} \cap \{q_1, q_2, \ldots, q_k\}|}{|\{x \in \mathbb{R} \mid 0 < x \leq 1\} \cap \{q_1, q_2, \ldots, q_k\}|}$$ It has been conjectured that the ratio is $$\frac{2}{(n+1)(n+2)},$$ but a proof is still missing.
Linear Optimization: Duality in Simplex
Observe that any feasible solution to $(P)$ is also a solution to $(D)$, since you set $A = A^t$. In words, a primal feasible solution satisfies the dual. From the hypothesis of the question, we fix a feasible solution $x_0$ in $(P)$. You are almost there with the weak duality: $c^tx \ge c^ty$ for any feasible $x$ in $(P)$ and $y$ in $(D)$. For any (non-optimal) feasible $y$ in $(D)$, we have $c^tx_0 \ge c^ty$ thanks to the weak duality. Then, we apply the first paragraph to $x_0$, so that $x_0$ also satisfies $(D)$. Since the choice of $y$ is arbitrary in $(D)$, $x_0$ is an optimal solution for $(D)$. Now, we apply the Strong Duality Theorem (p.14 of 45) to conclude that $(P)$ has an optimal solution $x_1$, and $c^t x_0 = c^t x_1$. Summarise the above points so as to see that $x_0$ is in fact an optimal solution for $(P)$:
$x_0$ is a feasible solution for $(P)$. (by construction)
$x_1$ is an optimal solution for $(P)$. (existence of such $x_1$ proved by Strong Duality)
$c^t x_0 = c^t x_1$. (part of the statement of the Strong Duality)
Conclusion: $x_0$ is an optimal solution for $(P)$. $\tag*{$\square$}$
Prove that there are infinitely many tautologies.
Just show a pattern of tautologies that clearly has infinitely many wff's. For example, $$A\implies A,\ B\implies B,\ C\implies C,\ldots$$ or $$A\implies A,\ (A\land A)\implies (A\land A),\ (A\land A\land A)\implies (A\land A\land A),\ldots$$ or many other patterns. Make one that you like.
Is every finite group the outer automorphism group of a finite group?
Here is a positive answer for finite abelian groups. For $k > 2$ and any $n > 0$, the simple group $\operatorname{PSp}_{2k}(2^n)$ has outer automorphism group isomorphic to the cyclic group $C_n$. So for any finite cyclic group, there exist infinitely many non-abelian finite simple groups with outer automorphism group $C_n$. Let $A$ be a finite abelian group, so $A = C_{n_1} \times \cdots \times C_{n_t}$ for some $n_i > 0$ by the fundamental theorem of finitely generated abelian groups. It follows from the main theorem of [1] that if $G_1$, $\ldots$, $G_t$ are pairwise non-isomorphic nonabelian simple groups, then for $G = G_1 \times \cdots \times G_t$ we have $\operatorname{Out}(G) \cong \operatorname{Out}(G_1) \times \cdots \times \operatorname{Out}(G_t).$ So for example by choosing $G_i = \operatorname{PSp}_{2k_i}(2^{n_i})$ with $2 < k_1 < \cdots < k_t$, we get $\operatorname{Out}(G) \cong A$. EDIT: To give another example, all symmetric groups are outer automorphism groups. If $H$ is indecomposable and non-abelian with $\operatorname{Hom}(H,Z(H)) = 1$, then by Theorem 3.1 in [2] we have $\operatorname{Aut}(H^n) \cong \operatorname{Aut}(H)^n \rtimes S_n$, where $S_n$ acts on $H^n$ by permuting the direct factors. Thus $\operatorname{Out}(H^n) \cong S_n$ for example if $H = S_3$, more generally if $H$ is complete (trivial center and all automorphisms inner). [1] J. N. S. Bidwell, M. J. Curran, and D. J. McCaughan, Automorphisms of direct products of finite groups, Arch. Math. 86, 481 – 489 (2006). [2] J. N. S. Bidwell, Automorphisms of direct products of finite groups II, Arch. Math. 91, 111–121 (2008).
Prove that there is a bounded $B : H\to H$ such that $B^n = A$.
By the Spectral Theorem, $A$ is of the form $$ A=\sum_j\lambda_j\,P_j, $$ where $\lambda_j\in\mathbb R$ and $P_j$ is a finite-rank projection for all $j$. Now choose numbers $\mu_j\in\mathbb C$ with $\mu_j^n=\lambda_j$ and define $$ B=\sum_j\mu_jP_j. $$
Demonstration of the largest product between segments
The way to come up with the solution by Azif00 is to simplify the function. An absolute value is generally not 'simple', so you find the values of $x$ for which the argument of the absolute value function is positive/negative, and try to simplify the expression based on that. In our case, the absolute value is $|x+2|$, so we have to look at 2 cases: $$x \ge -2,$$ $$x < -2.$$ In the first case ($x \ge -2$), we have $|x+2| = x+2$, so we get $$\text{If }x \ge -2 \text{, then } f(x)=\frac{(x+2)-x}{x+1} = \frac2{x+1}$$ So you may or may not have an idea if that part of the function is injective, but it's certainly a simpler description than the original formulation. In the second case ($x < -2$) we have $|x+2| = -(x+2)$, so we get $$\text{If }x < -2 \text{, then } f(x)=\frac{-(x+2)-x}{x+1} = \frac{-2x-2}{x+1} =-2$$ So, this part of the function is certainly not injective; it's kinda the opposite: the value of the function is the same for all $x < -2$. That's why Azif00 asked you to evaluate $f(-2), f(-3), f(-4)$; they knew that the function values would all be the same ($-2$), which gives you exactly a counterexample. Of course, such problems are not always that easy. Generally you may need to apply methods from calculus, but in this case it got easy because the simplification showed that for $x< -2$ the function was really easy to understand!
What are some applications of the Kronecker product of matrices?
This can easily be found with a Google search; see e.g. https://www.hindawi.com/journals/jam/2013/296185/ On a more personal note, quantum mechanics uses such products quite frequently.
Showing if $f^{-1}(D_1)$ $\subseteq$ $f^{-1}(D_2)$ then $D_1 \subseteq D_2$ under certain conditions
Let $x \in D_1$. Since $f$ is a surjection, $\exists y \in A$ with $f(y)=x$, so $y \in f^{-1}(D_1)$. Since $f^{-1}(D_1) \subseteq f^{-1}(D_2)$, $y \in f^{-1}(D_2)$, hence $x=f(y) \in D_2$. Hence $D_1 \subseteq D_2$. To construct an example showing the result may not hold without surjectivity, let $f: (-1,1) \rightarrow \mathbb{R}$, $f(x)=x^2$. Let $D_1=[0,2]$, $D_2=[0,1]$. Can you verify that $f^{-1}(D_1) \subseteq f^{-1}(D_2)$ but $D_1 \not \subseteq D_2$ ? To answer your last question, let $D_1=D_2=[0,1)$. Can you verify whether $f:(-1,1) \rightarrow D_1$ where $f(x)=x^2$ is a bijection?
How does one define a dimension of $V(F) = \{F = 0\}$ over the reals?
It is always true that the scheme-theoretic dimension of $\operatorname{Spec}\Bbb R[x_1,\cdots,x_m]/(f_1,\cdots,f_k)$ is the Krull dimension of $\operatorname{Spec}\Bbb R[x_1,\cdots,x_m]/(f_1,\cdots,f_k)$ as a ring, which is equal to the Krull dimension of $\Bbb C[x_1,\cdots,x_m]/(f_1,\cdots,f_k)$ as a ring which is equal to the scheme-theoretic dimension of $\operatorname{Spec}\Bbb C[x_1,\cdots,x_m]/(f_1,\cdots,f_k)$. This may be seen by considering what $-\otimes_{\Bbb R} \Bbb C$ does to chains of ideals. On the other hand, the dimension of the $\Bbb R$ points need not behave so nicely. For $X\subset \Bbb A^n_{\Bbb R}$, the set of real points may have any dimension between $0$ and $n$: consider the variety cut out by $\sum_{i=1}^d x_i^2$. One saving grace is that if $X$ is a smooth $\Bbb R$ variety, then the dimension of the real points of $X$ as a manifold is either the algebraic dimension of $X$ or the set of real points of $X$ is empty. This may be seen by considering the Jacobian criterion for smoothness and the implicit function theorem. One can then upgrade this to say that a general variety has the expected dimension of its real points if you can find a smooth point somewhere, by considering $X^{sm}\subset X$. Unfortunately, I've found that there's not even much standardized terminology about the mismatch of these dimensions - I searched diligently and asked a question on this website nearly 3 years ago about the matter and it got no responses.
Are there functions f and g whose compositions are not commutative
Yes. First, just to make sure we are using the same notation, I assume that $0\in\mathbb{N}$. Let $f:\mathbb{N}\rightarrow\mathbb{N}$ be $f(x)=x+1$ and $g:\mathbb{N}\rightarrow\mathbb{N}$ be $g(x)=x-1$ whenever $x\not = 0$ and $g(x)=0$ if $x=0$. Now, $g\circ f (x) = g(x+1)=x+1-1=x$, hence $g\circ f = \operatorname{Id}$. On the other hand, $f\circ g(0)=f(0)=1$, hence $f\circ g$ is not the identity.
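A two-line Python sketch of the same pair of functions:

def f(x): return x + 1
def g(x): return x - 1 if x != 0 else 0

print(all(g(f(x)) == x for x in range(10)))   # True: g∘f is the identity
print(f(g(0)))                                # 1, so f∘g is not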
What are the domain and name of a function that takes a vector of arbitrary length as argument?
In the first place there is a name for the domain of such a function. Given a ground set $X$ one can consider words of length $n\geq0$ over the alphabet $X$, which are nothing else but maps $v:\>[n]\to X$, including the empty word $\Lambda$ (or similar letter). The set of all these words when $n$ runs through all the integers $\geq0$ is denoted by $X^*$ (see Formal Language in Wikipedia). This means that the domain of the functions $f$ you are envisaging is the set $$X^*=\bigcup_{n=0}^\infty X^n\ .$$
Reference for undergraduates for differential calculus in Banach spaces
I would highly recommend Lang's book "Real and Functional Analysis". It's mentioned in the OP of the page you linked, so I'm not sure if you consider it too advanced, but the nice thing about the book is that it keeps its different topics separate, so that they can be read independently. The chapters on calculus in Banach spaces are the best I've ever seen on the topic, and assume nothing but basic knowledge of Banach spaces, which your students presumably already have.
$L^p(\mu)$ space relations
This can also be done from the Hölder inequality, in the form: if $1/\alpha+ 1/\beta = 1$, then $$ \int |FG|\;d\mu \le \left(\int |F|^\alpha\;d\mu\right)^{1/\alpha}\;\left(\int |G|^\beta\;d\mu\right)^{1/\beta} \tag1$$ Given $p<s<r$ and $f \in L_p \cap L_r$ as in the problem, let $$ \alpha = \frac{r-p}{r-s},\quad \beta = \frac{r-p}{s-p} \quad\text{so that}\quad \frac{1}{\alpha}+\frac{1}{\beta} = 1\quad\text{and}\quad \frac{p}{\alpha} + \frac{r}{\beta} = s, $$ and let $$ F = |f|^{p/\alpha},\qquad G = |f|^{r/\beta} $$ Plug into $(1)$ to get $$ \int|f|^s\;d\mu \le \left(\int |f|^p\;d\mu\right)^{(r-s)/(r-p)} \;\left(\int |f|^r\;d\mu\right)^{(s-p)/(r-p)} < \infty. $$ We will need a different (easier) argument in the case $r = \infty$.
Non-converging Taylor series of $1/(1-x)$
There are Summation Methods for series that, under the usual definition of convergence, do not converge. Some of these methods have applications. The list of such summation methods is very long: Euler summation, Cesaro summation, Abel summation, Borel summation, many others. I have mentioned these so that you will see that mathematicians of the first rank have considered this kind of problem. Some methods actually do assign sum $-1$ to $1+2+4+\cdots$. The ordinary rule of algebra that the sum of positive numbers is positive breaks down for any such summation method. Thus, if we use such a summation method, we must be very careful. By way of contrast, doing what comes naturally is ordinarily correct with summation as defined in the usual way. The field is very large. For a beginning, you may want to start here and chase down some of the references.
Power series $ f(g(z)) = z $
Yes, yes, and yes. $f$ is bijective in a neighbourhood of $0$ - since $f'(0) = 1 \neq 0$ - hence the inverse exists in a neighbourhood of $0$. The inverse is holomorphic too, hence has a power series expansion, and that starts with $0 + 1\cdot z$ since $g(0) = f^{-1}(0) = 0$ and $g'(0) = \frac{1}{f'(0)} = 1$. Power series expansions are always uniquely determined as witnessed by the identity theorem.
Is a function in an ideal? Verification by hand and Macaulay 2
I tried the lex order. In your computation you misinput $-4x^2y^2z^2+y^6+3z^5$ as $4x^2y^2z^2+y^6+3z^5$. Otherwise using lex order can easily give you remainder $0$. $$ \require{enclose} \begin{array}{cc} &-4z^2 + 1 \\[-3pt] x^3-z^2& \enclose{longdiv}{-4x^2y^2z^2+y^6+3z^5} \\[-3pt] x^2y^2-z^3 & \underline{-4x^2y^2z^2+4z^5}\phantom{+3z^5} \\[-3pt] xy^4-z^4 & y^6-z^5 \\[-3pt] xz-y^2 & \underline{y^6-z^5}\\[-3pt] y^6-z^5& 0 \end{array} $$ Apparently then $f=-4z^2(x^2y^2-z^3)+(y^6-z^5)$. Here is a program in Sage that I adapted from http://www.math.utah.edu/~carlson/cimat/lecture1.pdf:

def div(f, g):
    n = len(g)
    p, r, a = f, 0, [0 for x in range(0, n)]
    while p != 0:
        i, divisionoccured = 0, False
        while i < n and divisionoccured == False:
            if g[i].lt().divides(p.lt()):
                a[i] = a[i] + p.lt() // g[i].lt()
                p = p - (p.lt() // g[i].lt()) * g[i]
                divisionoccured = True
            else:
                i = i + 1
        if divisionoccured == False:
            r = r + p.lt()
            p = p - p.lt()
    return a, r

Results from running the above algorithm:

P.<x,y,z> = PolynomialRing(QQ, order='lex')
I = ideal(x*z - y^2, x^3 - z^2)
B = I.groebner_basis(); B
[x^3 - z^2, x^2*y^2 - z^3, x*y^4 - z^4, x*z - y^2, y^6 - z^5]
f = -4*x^2*y^2*z^2 + y^6 + 3*z^5
div(f, B)
([0, -4*z^2, 0, 0, 1], 0)
$A$ is strongly dense in its double commutant $A^{''}$
(1) Since $K$ reduces $u$ and $x=\operatorname{id_H}(x)\in K$, $u(x)=u\circ \operatorname{id_H}(x)\in K$. (2) For different $x\in H$, the corresponding sequence $v_n$ is different. (3) As updated.
Locus of fixed points of an involution in a surface
This is a case of a more general statement: for a finite order automorphism of a complex manifold, the fixed point set is a collection of complex submanifolds (of possibly different dimensions). Firstly choose a $\mathbb{Z}_{2}$-invariant Hermitian metric on $X$, just by choosing any Hermitian metric and averaging. Now, consider a fixed point $p$. Then $\mathbb{Z}_{2}$ acts complex linearly on $T_{p}X$. The fixed point set of this representation is the Eigenspace for the Eigenvalue $1$, hence a complex subspace $V \subset T_{p}X$. Now consider the exponential map of the corresponding Riemannian metric, which is equivariant since the metric is invariant: the image of $V$ under it is precisely the fixed point set (in some neighbourhood of $p$), hence the fixed point set is a complex submanifold. Note that if the original complex manifold was a projective variety then the components of the fixed point set are smooth subvarieties by Chow's theorem. Note that the fixed point set does not necessarily have to contain a curve i.e. the fixed point set can be just a finite collection of points. Consider the minimal surface $\mathbb{P}^1 \times \mathbb{P}^{1}$ with the involution $$([a_{1},a_{2}],[b_{1},b_{2}]) \mapsto ([-a_{1},a_{2}],[-b_{1},b_{2}]) .$$ Note that this involution has $4$ fixed points.
Is it the C*-norm in disguise? [Yes, within the realm of commutativity.]
Try $x=\begin{bmatrix}0&1\\0&0\end{bmatrix}$, which has $\|x\|=1$, and $\|\cos(t)h-\sin(t)k\|=\frac12$ for all $t$. Looking at the easiest example of a nonnormal element of a $C^*$-algebra is natural because the equality would hold for normal elements. Equivalently, this holds in $C(X)$ when $X$ is a compact Hausdorff space. In that case, for $f\in C(X)$ there exists $x_0\in X$ such that $$\|f\|=\max\limits_{x\in X}|f(x)|=|f(x_0)|=|h(x_0)+ik(x_0)|=\sqrt{h(x_0)^2+k(x_0)^2},$$ and $$\cos(t)h(x_0)+\sin(t)k(x_0)=\sqrt{h(x_0)^2+k(x_0)^2}\sin(t+\theta)$$ for some $\theta\in[0,2\pi)$, so by taking $t=\frac\pi2-\theta$ you get $\cos(t)h(x_0)+\sin(t)k(x_0)=\|x\|$.
Calculation of rank of a matrix
Hint: What's the difference between consecutive rows (or consecutive columns)? So how many rows (columns) do you need in addition to the first one to generate all the others?
probability of a line not crossing spheres
If the diameter of the spheres is comparable to the distance between them, this is a hard problem whose outcome depends stochastically on the geometry of the sphere-packing. Consider that a primitive cubic lattice has sight-lines parallel to the crystal axes which extend to infinity. You mention the context of radiation physics. In that field an “interaction cross section,” which has units of area, is typically measured in “barns,” where $\rm 1\,barn=(10\,fm\times10\,fm)=10^{-28}\,m^2$ is substantially larger than the physical size of a nucleus. However the distance between scattering centers is the atomic scale, $\rm1\,Å=10^{-10}\,m$. Let’s consider a lattice with number density $n$ scatterers per unit volume, where each scatterer has an interaction cross section $\sigma$. Consider a thin slice of the material with thickness $\ell$. The number of scatterers per unit area is $n\ell$, so the fraction of the slice which is occluded is $\sigma n\ell$ and the probability your ray reaches the other side of the slice is $(1-\sigma n \ell)$. In a symmetric material with $n\approx\rm1\,Å^{-3}$ a single layer would have $\ell\approx\rm1\,Å$, so barn-scale interactions give an interaction probability $\sigma n \ell \approx 10^{-8}$ for each layer. If you traverse many slices with total thickness $L=N\ell$, the transmission probability is the product of the transmission probabilities for each thin layer: $$ T(L) = \lim_{N\to\infty}\left(1 - \frac{\sigma n L}{N}\right)^N = e^{-\sigma n L} $$
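To make the numbers concrete, here is a small Python sketch of the final formula (a hedged illustration using the barn-scale values quoted above; all values are just examples):

import math

barn = 1e-28                  # m^2
sigma = 1.0 * barn            # assumed interaction cross section
n = 1e30                      # scatterers per m^3 (about 1 per cubic angstrom)
for L in (1e-3, 1e-2, 1e-1):  # path lengths in metres
    print(L, math.exp(-sigma * n * L))   # transmission probability e^(-sigma*n*L)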
Graph automorphism group
As a simple counterexample, take a graph on $3$ vertices, with one isolated vertex $u$, and two adjacent vertices $v,w$. Every automorphism must fix $u$, and permute $v,w$, so not every permutation is an automorphism. Perhaps your misunderstanding is this . . . It's true that given an arbitrary permutation of the vertices, if you apply the permutation to the vertices, and also permute the corresponding edges (i.e., change the old edges to new ones based on the permutation), then the new graph is isomorphic to the old one. But by definition, an automorphism is a permutation of the vertices that maps edges to corresponding edges without changing the edges (i.e., with no change with regard to which vertices are adjacent).
What is the sum that the square root button on calculator does so I can do it without the calculator button
Here is an extremely simple iteration: $$ \frac{a}{b}\to\frac{a+2b}{a+b}$$ For instance, to get the square root of 2 you can start with any two counting numbers, like a = 1 and b = 2. $\frac{1}{2}\to\frac{1+2 (2)}{1+2}=\frac{5}{3}\to \frac{5+2(3)}{5+3}=\frac{11}{8}\to \frac{11+2(8)}{11+8}=\frac{27}{19}\to \frac{27+2(19)}{27+19}=\frac{65}{46}...$ Continue as long as you need, and notice that $\frac{65}{46}=1.4130434\ldots$ is already close. This method was derived by Terry Goodman and John Bernard. For $ \sqrt{n} $ you would use $ \frac{a}{b}\to\frac{a+nb}{a+b}$ The advantage is that you can keep the intermediate answers in fraction form, which means you only have to multiply and add whole numbers. Closer initial estimates of $a$ and $b$ will yield better answers more quickly.
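Here is a short Python sketch of the iteration (the function name is illustrative), keeping exact fractions as suggested:

from fractions import Fraction

def sqrt_approx(n, steps, a=1, b=2):
    """Iterate a/b -> (a + n*b)/(a + b) to approximate sqrt(n)."""
    for _ in range(steps):
        a, b = a + n*b, a + b
    return Fraction(a, b)

r = sqrt_approx(2, 10)
print(r, float(r))   # converges toward sqrt(2) ≈ 1.41421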
What is the real distinction between algebra and geometry?
You might want to look at Robin Hartshorne's book "Foundations of Projective Geometry", which starts from four axioms for an affine plane, and a corresponding four axioms for a projective plane. There are three more axioms that often get added for projective planes to restrict to "more interesting" ones. If you start with these seven purely geometric axioms (they are things like "for any two distinct points there is a unique line containing them") you can prove a quite remarkable theorem, namely, the existence of coordinates in the affine plane that results from deletion of any line from the projective plane. In short: at least in the context above, you can do everything algebraic purely geometrically, because the geometric axioms imply the existence of the algebraic structure. Of course in practice, working with algebra is often easier and quicker, and I'm a big fan of it.
Can objects repeat in commutative diagrams?
Yes, of course! A functor $J \to C$ need not be injective on objects. Another example is when you want to take the product of an object with itself.
Given $a+b+c=0$, simplify the following.
Your idea that $a^3+b^3+c^3=3abc$ is a good one. Considering your numerator $x^3+y^3+z^3$, where $x = a^3-abc$ and so forth, try to see what $x+y+z$ is, and what the relationship between $x^3+y^3+z^3$ and $3xyz$ is.
Finding the equation to the variables with imaginary numbers
Hint: $(x-i)(x+i)(x-2) = 0$; multiply the left-hand side all out! You've used the following fact: if $z$ is a zero, then $x-z$ is a factor of the polynomial being discussed. How many zeroes do you consider? The point is you need not mention both $\pm i$ as zeroes: once you admit one of them is a zero, you automatically get the other, because the polynomial has real coefficients (the complex conjugate root theorem). That is, if $z \in \mathbb{C}$ is a zero of a polynomial $P(x)$ with real coefficients, then $\overline{z}$ is also a zero.
Diophantine approximation problem
A similar question was answered on MathOverflow. It cited these notes which refer to this algorithm.
Diagonalization with orthogonal matrix?
Guide: Suppose you have found two eigenvectors $\{ u_1, u_2\}$ for the same eigenvalue, but they are not orthogonal to each other: $$Au_1 = \lambda u_1$$ $$Au_2 = \lambda u_2$$ Let $v_1 = u_1$ and $v_2 = u_2 - \frac{v_1\cdot u_2}{\|v_1\|^2}v_1$; then $v_2$ and $v_1$ are orthogonal to each other. To make them orthonormal, divide each by its length. You might want to check out the Gram-Schmidt process.
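A minimal numerical sketch of this orthogonalization step (the example vectors are arbitrary):

import numpy as np

def orthonormalize(u1, u2):
    """Gram-Schmidt on two vectors: returns an orthonormal pair."""
    v1 = u1
    v2 = u2 - (v1 @ u2) / (v1 @ v1) * v1
    return v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)

u1, u2 = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])
e1, e2 = orthonormalize(u1, u2)
print(e1 @ e2)   # ~0: the resulting vectors are orthogonal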
Finding the probability generating function for X and then using it to compute its first three moments
Here is a selection of tools:
The probability mass function for $X\sim\mathcal{Bin}(n,p)$ is $$\mathsf P_X(x)=\binom n x p^x(1-p)^{n-x}$$
The expectation of a function $g$ of $X$ is $$\mathsf E(g(X)) = \sum_{x=0}^n g(x)\,\mathsf P_X(x)$$
The probability generating function is: $$\mathsf F_X(t)=\mathsf E(t^X)$$
More easily obtained by observing that for independent random variables $X_i$: $$\begin{align}\mathsf F_{\sum_{i=1}^nX_i}(t) ~=~& \prod_{i=1}^n\mathsf F_{X_i}(t)\end{align}$$
And when using the indicator random variables for success of the $i^\text{th}$ trial of the Binomial experiment: $$\begin{align}X~=~&\sum_{i=1}^n X_i\\\mathsf P_{X_i}(x)~=~&p\mathbf 1_{x=1}+(1-p)\mathbf 1_{x=0}\\\ \mathsf E(t^{X_i})~=~&(1-p)+tp\end{align}$$
The moment generating function, where $m_n$ is the $n^\text{th}$ moment, is $$\mathsf M_X(s) = \mathsf F_X(e^s) = 1+\sum_{n=1}^\infty \frac{s^n m_n}{n!}$$
Edit: and yes, this is the long way around to finding the three moments.
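A sketch of the mechanics using sympy (mirroring the formulas above: substitute $t=e^s$ into the PGF and differentiate at $s=0$ to read off the raw moments):

import sympy as sp

t, s, p, n = sp.symbols('t s p n', positive=True)
F = (1 - p + t*p)**n          # PGF of Binomial(n, p)
M = F.subs(t, sp.exp(s))      # MGF via t = e^s
for k in (1, 2, 3):
    mk = sp.simplify(sp.diff(M, s, k).subs(s, 0))   # k-th raw moment
    print(k, sp.expand(mk))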
If $|g| = k$ then $|g^m| = k/\operatorname{lcm}(k, m)$
Note: The result is $$|g^m|=\frac{k}{\color{red}{\gcd(k,m)}}.$$ Let $|g^m|=t$, then $(g^m)^t=e$. This means $k | tm$ ($\because$ $|g|=k$). Consider $$ak=tm \qquad \text{ for some } a \in \Bbb{N}.$$ Then, $$ak=tm \implies a\,\,\frac{k}{\gcd(m,k)}=t\,\,\frac{m}{\gcd(m,k)}.$$ Since $\gcd\left(\frac{m}{\gcd(m,k)}, \frac{k}{\gcd(m,k)}\right)=1$, this means $\frac{k}{\gcd(m,k)}$ divides $t$. Conversely, $(g^m)^{k/\gcd(m,k)}=(g^k)^{m/\gcd(m,k)}=e$, so $t$ divides $\frac{k}{\gcd(m,k)}$. Thus $$t=\frac{k}{\gcd(m,k)}$$
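A quick sketch verifying the formula in the additive group $\Bbb{Z}_k$ (values arbitrary; here $g=1$ has order $k$):

from math import gcd

def order_mod(g, m, k):
    """Order of m*g in the additive group Z_k, where g has order k."""
    t, x = 1, (m * g) % k
    while x != 0:
        x = (x + m * g) % k
        t += 1
    return t

k, g = 12, 1
for m in range(1, 13):
    assert order_mod(g, m, k) == k // gcd(k, m)
print("all checks passed")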
How to find out the maximum value of absolute error?
$$|f(x,y)-L(x,y)|=|x^2-3xy-x+6y-2|=$$ $$=|(x-2)^2-3(x-2)(y-1)|\leq$$ $$\leq(x-2)^2+3|(x-2)(y-1)|\leq0.1^2+3\cdot0.1\cdot0.1=0.04.$$ Equality occurs for $x=1.9$ and $y=1.1$, which says that $0.04$ is the answer. Done!
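A numerical sanity check of this bound over the square $|x-2|\le 0.1$, $|y-1|\le 0.1$ (a sketch; the grid resolution is arbitrary):

import numpy as np

xs = np.linspace(1.9, 2.1, 201)
ys = np.linspace(0.9, 1.1, 201)
X, Y = np.meshgrid(xs, ys)
err = np.abs(X**2 - 3*X*Y - X + 6*Y - 2)
print(err.max())   # ~0.04, attained at (1.9, 1.1) (and at (2.1, 0.9))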
Finding other representations for a sum of two squares
Let me first say a bit about how the problem goes if you don't have a representation to start from. Writing $c$ as a sum of two squares $a^2+b^2$ is equivalent to factoring $c$ as $(a+bi)(a-bi)$ over the Gaussian integers. The way to list out all of these would be to solve the problem for each prime factor of $c$ of the form $4k+1$, and combine those in all possible ways.

For example, let $c = 65$, which factors as $5\cdot 13$. We have $5 = 2^2+1^2$, or $5 = (2+i)(2-i)$, and $13 = 2^2 + 3^2$, or $13 = (2+3i)(2-3i)$. We can write $65$ as a sum of two squares by picking one of the complex conjugates from each factor:
Taking $(2+i)(2+3i) = 1+8i$ tells us $65 = 1^2 + 8^2$.
Taking $(2+i)(2-3i) = 7-4i$ tells us $65 = 7^2 + 4^2$.
The other two possibilities give us $1-8i$ and $7+4i$, which are redundant with the above.

For a more complicated example, let $c = 1170$, which factors as $2 \cdot 3^2 \cdot 5 \cdot 13$. Since $3$ is a prime of the form $4k+3$, it cannot be written as a sum of two squares, so all we can do with it is treat $3^2$ as $3^2+0^2 = (3+0i)(3-0i)$. Also, $2$ is special (it ramifies in $\mathbb Z[i]$): we can write $2 = 1^2+1^2=(1+i)(1-i)$, but this will not help us get more solutions. So we still only consider the two possibilities from $5$ and $13$:
Taking $(1+i)(3)(2+i)(2+3i) = -21 + 27i$ tells us $1170 = 21^2 + 27^2$.
Taking $(1+i)(3)(2+i)(2-3i) = 33 + 9i$ tells us $1170 = 33^2 + 9^2$.
The other possibilities give us back the same representations as above.

Doing the above steps is hard for large $c$, because factoring is hard. In fact, solving the problem is more or less as hard as factoring: knowing all the ways to write $c$ as a sum of two squares tells us what all its prime factors of the form $4k+1$ are. To see this, consider $1170$ again. If we know that $$1170 = 21^2 + 27^2 = 33^2 + 9^2,$$ we can:
compute $\gcd(21+27i, 33+9i) = 3+9i$ to conclude that $(3+9i)(3-9i)=90$ divides $1170$, getting $1170/90=13$ as a factor;
compute $\gcd(21+27i,33-9i) = 15+3i$ to conclude that $(15+3i)(15-3i)=234$ divides $1170$, getting $1170/234=5$ as a factor.

Having one representation $c = a^2+b^2$ doesn't change the fact that factoring is hard, so we still have to do that work. However, having one representation helps us tremendously with the "easy" step afterwards (which can still be quite hard when the primes are large): writing each prime factor of the form $4k+1$ as a sum of two squares. We can do this by taking the GCD again. For example, if we know that $1170 = 2 \cdot 3^2 \cdot 5 \cdot 13$, and also that $1170 = 21^2 + 27^2$, we can:
compute $\gcd(5, 21+27i) = 1+2i$, concluding that $5 = 1^2 + 2^2 = (1+2i)(1-2i)$;
compute $\gcd(13, 21+27i) = 3+2i$, concluding that $13 = 3^2 + 2^2 = (3+2i)(3-2i)$.
Then we can combine these factors in other ways, as before, to get the other representations $a^2 + b^2 = c$. (In this case, there's just one more: $(1+i)(3)(1+2i)(3-2i) = 9+33i$, for $1170 = 9^2 +33^2$.)

One caveat: if a prime of the form $4k+1$ appears to a higher power, then some representations $a^2+b^2$ might not help with that prime. For example, if we instead take $5850 = 2 \cdot 3^2 \cdot 5^2 \cdot 13$, then there are three representations: $$5850 = 51^2+57^2 = 33^2+69^2 = 15^2 + 75^2.$$ The first two will help us out with all the primes, but the last will just give us $\gcd(5, 15+75i) = 5$, so it will not tell us how to write $5$ as a sum of two squares.
(The first two ways to write $5850$ are relying on nontrivial representations of $5^2=25$ as $3^2 + 4^2$; the last relies on the trivial solution $25 = 5^2+0^2$.)
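For numbers of this size, a brute-force Python sketch also lists all representations directly (the factorization-based method above is what scales to large $c$):

import math

def two_squares(c):
    reps = []
    a = 0
    while 2 * a * a <= c:          # enumerate 0 <= a <= b
        b2 = c - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            reps.append((a, b))
        a += 1
    return reps

print(two_squares(1170))   # [(9, 33), (21, 27)]
print(two_squares(5850))   # [(15, 75), (33, 69), (51, 57)]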
Find the harmonic conjugate
$$u(x,y) =x^3 -6x^2 y -3xy^2 +2y^3 =(x^3 -3xy^2 ) +2 (y^3 -3x^2 y) =\operatorname{Re}\left( (x+iy)^3 \right) - \operatorname{Im}\left(2 (x+iy)^3 \right)= \operatorname{Re} z^3 -2\operatorname{Im} z^3 = \operatorname{Re} [(1+2i)z^3 ]$$ Hence the conjugate is equal to $$v(x,y)=\operatorname{Im} [(1+2i) z^3 ]$$ $$|a+b +v(1,1) |=|-9 +\operatorname{Im} [(1+2i) (1+i)^3]|=|-9 -2|=11$$
Describe the quotient space that results from the partition of $\mathbb{R}$ into the equivalence classes on $\mathbb{R}$ given by $x-y\in \mathbb{Z}$
The quotient space can be naturally identified with $[0,1)$, true, but not as a group and not as a topological space. The quotient space is $\mathbb R/\mathbb Z$, which is isomorphic (as a group) and homeomorphic to the circle $S^1$. You can map from $[0,1)$ via $t\mapsto e^{2\pi i t}$, and this actually gives you the quotient map from $\mathbb R$ that induces the topology. If you're not looking for any kind of structure-preserving map, then identifying it with $[0,1)$ is easy and fine. But you mention isomorphism, for which $[0,1)$ would not be appropriate algebraically or topologically.
Probability of choosing the correct stick out of a hundred. Challenge from reality show.
The probability of getting the first wrong is $\dfrac{99}{100}$. The probability of getting the second right given that the first is wrong is $\dfrac{1}{99}$; the probability of getting the second wrong given that the first is wrong is $\dfrac{98}{99}$. And this pattern continues. Let's work out the probability of getting the correct one on the fourth try: it is the probability of getting the first three wrong $\dfrac{99}{100}\times\dfrac{98}{99}\times\dfrac{97}{98}$ times the probability of getting the fourth correct given the first three were wrong $\dfrac{1}{97}$. It should be obvious that the answer is $\dfrac{1}{100}$. It will still be $\dfrac{1}{100}$, no matter which position you are considering. This should not be a surprise, as each position of the stick is equally likely.
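A quick Monte Carlo sketch (stick labels and the chosen position are arbitrary) agreeing with $\frac{1}{100}$:

import random

def trial(n=100, position=4):
    """True if the correct stick turns up on the `position`-th draw."""
    sticks = list(range(n))
    random.shuffle(sticks)
    return sticks[position - 1] == 0   # stick 0 is the 'correct' one

reps = 200_000
print(sum(trial() for _ in range(reps)) / reps)   # ~0.01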
What does this definition of $C^k$ surface mean?
Typically, one does not have an atlas at hand. For this reason it is not beneficial to describe a surface as being the image of an atlas, only that one exists (which is what he is claiming by saying that $M$ is contained in the images of such charts or, there is a chart for each $p\in M$). They are actually not different definitions. Suppose for every $p$ there is such a chart. Then the collection of these charts forms an atlas. Conversely, if one has such an atlas, then at every point $p\in M$ there is a chart. The definition given is the typical definition for a $C^k$ differentiable structure on $M$.
How would I use the properties of the Riemann integral to prove this?
By the triangle inequality and the trivial bound on $\sin$, $$ \left |\int_0^{2\pi}x^2\sin(e^x)\,\mathrm dx\right | \leq \int_0^{2\pi} \left |x^2\sin(e^x)\right |\, \mathrm dx \stackrel{|\sin y|\leq 1}{\leq }\int_0^{2\pi}x^2\,\mathrm dx=\frac{(2\pi)^3}{3} $$
Prove that $R$ is a ring with division.
If the only part you are missing is to show that $ab = 1 \Rightarrow ba = 1$ then proceed as follows: $$ab = 1 \Rightarrow bab = b $$ Now, by the hypothesis of the statement, $b$ has a right inverse $c$: $$babc = bc \Rightarrow ba = 1$$ And we are done. @Alex Wertheim also provides us with a different approach in the comment section. Be sure to check it.
What does having a basepoint buy us in algebraic topology?
Without a specified basepoint, it is difficult to define the fundamental group or higher homotopy groups. In particular, loops that are freely homotopic needn't represent the same element of $\pi_1(X,x_0)$; in fact, elements of $\pi_1(X,x_0)$ are freely homotopic iff they're conjugate in the group. So if we forget our basepoint, we lose our group structure. Higher (basepointed) homotopy groups are abelian, so this particular issue goes away; two maps $S^n \to X$ (that both preserve a chosen basepoint in $S^n$ and $X$) are basepoint-homotopic iff they're freely homotopic. But the most natural way of defining a sum of two elements - collapsing the equator of $S^n$ and composing with the map $f_1 \vee f_2: S^n \vee S^n \to X$ - requires that we've chosen a basepoint of $S^n$ in the first place. As Ronnie Brown comments above, there's certainly a way to work with more than one basepoint, or without a basepoint at all; the MO question he links is helpful for this perspective.
Find analytic function such that $u = \phi \left( \frac{x^2+y^2}{x} \right) $
Writing $u=\phi(t)$ with $t=\frac{x^2+y^2}{x}$, we have $u_{xx}+u_{yy}=\phi''(t)\,|\nabla t|^2+\phi'(t)\,\Delta t$, and one computes $|\nabla t|^2=\frac{t^2}{x^2}$ and $\Delta t=\frac{2t}{x^2}$. So from the equation $u_{xx} + u_{yy} = 0$ we get the ODE $$\phi''(t)=-\frac{2\phi'(t)}{t},\quad t=\frac{x^2+y^2}{x}.$$ Then $$\phi(t)=\frac{c_1}{t}+c_2,$$ and (taking $c_1=1$, $c_2=0$) $$u=\frac{x}{x^2+y^2}.$$
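A quick symbolic check (a sketch with sympy) that the resulting $u$ is harmonic:

import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x / (x**2 + y**2)
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # 0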
Continuity of a function between metric spaces
Let $\epsilon > 0$ and $x, z\in X$ with $d(x,z) < \delta$. Suppose first that the infimum in the function definition is achieved, say at some $y_0$ for $x$ and at $y_1$ for $z$; this is guaranteed when, e.g., $A$ is compact ($d(x,y)$ is bounded below and continuous in $y$ for fixed $x$, so it attains its $\inf$ on a compact set). For a general closed $A$ one can run the same argument with points $y_0,y_1\in A$ that come within an arbitrary $\eta>0$ of the infima and let $\eta\to 0$ at the end. WLOG assume $d(x,y_0) \ge d(z,y_1)$. (It is symmetric). Then $$ \begin{align*} |f(x) - f(z)| &= |d(x, y_0) - d(z, y_1)|\\ &\le |d(x,y_1) - d(z,y_1)| \;\;\; &\text{since}\ d(x,y_0)\ \text{is the inf}\\ & \le d(x,z) &\text{Reverse triangle inequality}\\ & < \delta \end{align*} $$ so take $\delta = \epsilon$ and you are done for continuity. If $x\in A$ then it is clear that $d(x,A)=0$. Otherwise, let $x\in A^c$. If $d(x,A) = 0$ then there is a sequence in $A$ that approaches $x$, so $x$ is in the closure of $A$. This implies that $x\in A$ since $A$ is closed, which is a contradiction.
Explicit expression for $b$ as a function of $a$ where $\log_b a = (a/b)^{1/2}$
Welcome to the world of the Lambert function! The solutions are given by $$b=\frac{4 a }{\log ^2(a)}W\left(\pm\frac{\log (a)}{2 \sqrt{a}}\right)^2$$ $$a=\frac{4 b}{\log ^2(b)} W\left(\pm\frac{\log (b)}{2 \sqrt{b}}\right)^2$$ The plot consists of a straight line for $0 \leq a \leq e^2$ corresponding to $b=a$, and for $a > e^2$ it is a decreasing function "looking like a hyperbola".
Maclaurin series expansion for $e^{-1/x^2}$
$e^{-1/x^2}$ itself is not defined at $x=0$. But you can take $$f(x)=\begin{cases}e^{-1/x^2}&x\neq0\\0&x=0\end{cases}$$ and since $\lim_{x\to0}e^{-1/x^2}=0$, $f$ is continuous. Now you can take $f$'s derivative at $0$, but not so much by using the chain rule, exponential rule, and power rule, but rather by using the definition of the derivative. That limit-based definition gives $f'(0)=0$:$$\begin{align}f'(0)&=\lim_{h\to0}\frac{f(0+h)-f(0)}{h}\\&= \lim_{h\to0}\frac{e^{-1/h^2}-0}{h}\\&= \lim_{h\to0}\frac{e^{-1/h^2}}{h}\\&= \lim_{h\to0}\frac{h^{-1}}{e^{1/h^2}}\\&= \lim_{h\to0}\frac{-h^{-2}}{-2h^{-3}e^{1/h^2}}\\&= \lim_{h\to0}\frac{h}{2e^{1/h^2}}=0\end{align}$$ For $f''(0)$, you need to fully understand $f'$. It works out as $$f'(x)=\begin{cases}\frac{2}{x^3}e^{-1/x^2}&x\neq0\\0&x=0\end{cases}$$ and you have a similar issue to before. Find $f''(0)$ using the definition of the derivative applied to $f'$. And so on, until you see a pattern, and prove that pattern continues for all $n$th derivatives at $x=0$.
On the space $l_2$ we define an operator $T$ by $Tx=(x_1, {x_2\over2}, {x_3\over3}, . . . )$. Show that $T$ is bounded, and find its adjoint.
We have to show that $T$ maps $l^2$-sequences to $l^2$-sequences. If $x \in l^2$, then $||x||=(\sum_{i=1}^{\infty} x_i^2)^{1/2} < \infty$. But $(x_i/i)^2 \leq x_i^2$ for all $i \in \mathbb{N}$, and hence $||Tx||=(\sum_{i=1}^{\infty} (x_i/i)^2)^{1/2} \leq (\sum_{i=1}^{\infty} x_i^2 )^{1/2}< \infty$, so $T$ is bounded. For the adjoint, note that $\langle Tx, y\rangle = \sum_{i=1}^\infty \frac{x_i y_i}{i} = \langle x, Ty\rangle$ for all $x,y \in l^2$, so $T^*=T$.
Show $\frac{d}{dx} \int\limits^{a(x)}_{0} f(x,y)dy = \int\limits^{a}_{0} \frac{\partial f}{\partial x}dy + a'(x)f(x,a)$
We set $$ F(x,y)=\int_0^yf(x,s)\,ds, $$ and assume that $\partial_1F$ and $\partial_2F$ exist. This is the case, e.g. when $f$ is continuous in $y$ and differentiable in $x$. Then $$ \partial_1F(x,y)=\int_0^y\frac{\partial f}{\partial x}(x,s)\,ds,\quad \partial_2F(x,y)=f(x,y). $$ (See, e.g. Prove that $h'(t)=\int_{a}^{b}\frac{\partial\phi}{\partial t}ds$. for the first identity). It follows that \begin{eqnarray} \frac{d}{dx}\int_0^{a(x)}f(x,y)\,dy&=&\frac{d}{dx}(F(x,a(x)))=\partial_1F(x,a(x))+a'(x)\partial_2F(x,a(x))\\ &=&\int_0^{a(x)}\frac{\partial f}{\partial x}(x,y)\,dy+a'(x)f(x,a(x)). \end{eqnarray}
How is this integral trigonometric?
Use the substitution $t=8x,\quad\mathrm d t=8\,\mathrm dx$. The integral becomes $$\int \frac{3}{\sqrt{1-64x^2}}\,\mathrm dx =\int \frac{3}{\sqrt{1-t^2}}\,\frac{\mathrm dt}8=\frac38\int \frac{\mathrm dt}{\sqrt{1-t^2}}.$$
Is my textbook wrong? Simple trig equation
No. You need to be careful about $0^\circ\lt \theta \lt 360^\circ\ (\iff 0^\circ \lt 3\theta\lt 1080^\circ)$. We have $$3\theta =180^\circ+30^\circ+360^\circ \cdot k\ \text{ or }\ 3\theta=360^\circ -30^\circ +360^\circ \cdot k.$$ So, the answer is $$3\theta=210^\circ, 330^\circ, 570^\circ, 690^\circ, 930^\circ, 1050^\circ$$ $$\iff \theta=70^\circ, 110^\circ, 190^\circ, 230^\circ, 310^\circ, 350^\circ.$$
How do I find the closest point to a line, on the same side as another point?
Your question is a little imprecise: are you looking at a line or at a line segment? If $g$ is a line (not a line segment) in $\mathbb{R}^2$ then the line subdivides $\mathbb{R}^2$ into two halfspaces. If $n$ is a unit normal to $g$ then the signed distance of a point $p$ to $g$ (with respect to $n$) is positive iff $p$ is in the halfspace into which $n$ points, otherwise it's negative (or zero if $p$ is lying on $g$). The absolute value of the signed distance is just the usual euclidean distance of the point to the line. The signed distance is easily calculated, this is standard vector calculus: if $q$ is any point on $g$ and $v_q$ its position vector, and if $v_p $ is the position vector of $p$, then the signed distance from $p$ to $g$ is just the length of the projection of $v_p-v_q$ onto (any) normal line to $g$, which is simply $$d= \langle v_p-v_q,\, n\rangle$$ ($\langle\cdot,\cdot\rangle$ denoting the scalar product). Similar statements are true for hyperplanes in higher dimensional Euclidean space; the key word to look for is 'Hesse normal form'. I never heard about a signed squared distance, but I'd assume it is just the squared distance with a sign attached to it as described in the preceding paragraph. You should look up the definition, though. Now if you are looking at a line segment $g_{AB}$ from $A$ to $B$, then the above formula is only true if $p$ is actually lying on a line normal to $g_{AB}$ which really passes through some point on $g_{AB}$; otherwise it's just the euclidean distance to one of the end points $A$ or $B$ (whichever is smaller).
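A minimal Python sketch of the signed-distance formula (the function name and the example line are illustrative, not from the question):

import numpy as np

def signed_distance(p, q, n):
    """Signed distance from point p to the line through q with unit normal n."""
    return float(np.dot(p - q, n))

q = np.array([0.0, 0.0])                   # a point on the line y = x
n = np.array([-1.0, 1.0]) / np.sqrt(2)     # unit normal to that line
print(signed_distance(np.array([0.0, 1.0]), q, n))   # +1/sqrt(2)
print(signed_distance(np.array([1.0, 0.0]), q, n))   # -1/sqrt(2)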
Prove that $f(W)$ is the graph of $y_{n+1} = \varphi(y_1,\cdots,y_n)$
You are very near the truth. I shall denote the points $y\in{\mathbb R}^{n+1}$ by $(y',y_{n+1})$ with $y'=(y_1,\ldots, y_n)$, and let $\pi:\>{\mathbb R}^{n+1}\to{\mathbb R}^n$ be the projection forgetting the last coordinate. Choose an arbitrary point $p\in U$, and let $f(p)=:q=(q',q_{n+1})$. Since we have everywhere $${\rm det}\left({\partial f_i\over\partial x_k}\right)_{1\leq i,\,k\leq n}\ne0$$ there is a neighborhood $W$ of $p$ such that the map $$f':=\pi\circ f=(f_1,f_2,\ldots, f_n)$$ maps $W$ diffeomorphically onto a neighborhood $V\subset{\mathbb R}^n$ of the point $q'\in{\mathbb R}^n$. There is a $C^1$-inverse $$g:=\bigl(f'\bigr)^{-1}:\quad V\to W\ .$$ The $C^1$ function $$\phi:=f_{n+1}\circ g:\quad V\to{\mathbb R}$$ gives for each point $y'\in V$ the last coordinate $y_{n+1}$ of a point $y=(y',y_{n+1})\in{\mathbb R}^{n+1}$. The graph of this $\phi$ is the set $${\cal G}=\bigl\{\bigl(y',\phi(y')\bigr)\in{\mathbb R}^n\times{\mathbb R}\bigm| y'\in V\bigr\}\ .$$ Note that $f(W)=(f\circ g)(V)$. From $$(f\circ g)(y')=\bigl((f'\circ g)(y'),(f_{n+1}\circ g)(y')\bigr)=\bigl(y',\phi(y')\bigr)\qquad(y'\in V)$$ it finally follows that indeed $f(W)={\cal G}$.
Prove cosh(x) and sinh(x) are continuous.
Your reasoning looks good, except when it comes to $e^{-x}$. True, dividing $1$ by $e^{x}$ still gives something continuous, but why? The reason is that $e^{x}\neq 0$ for all $x\in\mathbb{R}$, and hence $e^{-x}=\frac{1}{e^{x}}$ is continuous as well, since $e^{x}$ is.
Sigma-algebra: $\sigma(\mathcal{A})=\mathcal{A}$?
Yes. By definition, $$\sigma(\mathscr A)=\bigcap\{\mathscr B\,|\,\mathscr B\text{ is a $\sigma$-algebra and }\mathscr A\subseteq\mathscr B\},$$ which can be shown to be the smallest (in the sense of set inclusion) $\sigma$-algebra containing $\mathscr A$, so that $\mathscr A\subseteq\sigma(\mathscr A)$. But $\mathscr A$ is already a $\sigma$-algebra and $\mathscr A\subseteq\mathscr A$ trivially, which implies that $\sigma(\mathscr A)\subseteq\mathscr A$.
Continuously embedded subspace (with own topolog. structure) of a separable real Hilbert space is itself separable
Let $H$ be a separable Hilbert space, let $V$ be a nonseparable Hilbert space, and let $T : V \to H$ be linear, injective, and $\|T(v)\|_H \le C\|v\|_V$. That is, $T$ is a bounded linear operator. We must show this is impossible. Let $T^* : H \to V$ be the adjoint operator. Since $T$ is injective, $T^*$ has dense range. Indeed, suppose $T^*(H)$ were not dense in $V$. Then there is $v \in V$ with $v \ne 0$ and $v \perp T^*(H)$. For every $h \in H$,$$0 = \langle v,T^*h\rangle_V = \langle Tv,h\rangle_H$$ Thus $Tv=0$, and $T$ is not injective, a contradiction. Now $T^*(H)$ is a continuous image of a separable set, so $T^*(H)$ is itself separable. But it is dense in $V$, so $V$ is separable.
Finding $6$ pairwise non-isomorphic semi-direct products of order $42$
Let us recall the list given by the comments above:
$1)$ $\mathbb{Z}_{21}\times \mathbb{Z}_{2}=\mathbb{Z}_{42}$
$2)$ $\mathbb{Z}_{21}\rtimes \mathbb{Z}_2=D_{42}$ (dihedral group; the action on $\mathbb{Z}_{21}$ sends $x$ to $-x$)
$3)$ $\mathbb{Z}_{21}\rtimes \mathbb{Z}_2=\mathbb{Z}_7\times D_{6}$ (the action of $\mathbb{Z}_2$ on $\mathbb{Z}_{21}=\mathbb{Z}_7\times \mathbb{Z}_3$ is trivial on $\mathbb{Z}_7$ but sends $x$ to $-x$ on $\mathbb{Z}_3$)
$4)$ $\mathbb{Z}_{21}\rtimes \mathbb{Z}_2=\mathbb{Z}_3\times D_{14}$ (the action of $\mathbb{Z}_2$ on $\mathbb{Z}_{21}=\mathbb{Z}_7\times \mathbb{Z}_3$ is trivial on $\mathbb{Z}_3$ but sends $x$ to $-x$ on $\mathbb{Z}_7$)
$5)$ $\mathbb{Z}_{2}\times (\mathbb{Z}_7\rtimes \mathbb{Z}_3)$ (the action of $\mathbb{Z}_3$ on $\mathbb{Z}_7$ sends $x$ to $2x$ and is of order $3$ since $2^3=8=1$ in $\mathbb{Z}_7$)
It remains to show that these really give five non-isomorphic examples of groups. In each case, the group $G$ contains a unique subgroup $H$ of order $21$, which is moreover normal. The group $H$ is non-abelian only in the last case, so we can forget $5)$. You then look at the action of $G$ on $H$ and consider the subgroup of fixed points. This gives a subgroup of order $21$, $1$, $7$, $3$ respectively. This completes the proof.
Properties of integration a delta function
Integrate out $t_1$ first to get $\int_0^t[t_2\in(0,\,t^\prime)] dt_2$, where for any proposition $p$ the Iverson bracket $[p]$ is $1$ if $p$ is true or $0$ if $p$ is false. This single integral is $\int_{[0,\,t]\cap[0,\,t^\prime]}1 dt_2=\min\{t,\,t^\prime\}$.
evalf integration with multiple free constants
Pull the symbolic coefficient outside the integral, so that the definite integral is then purely numeric and may evaluate to a float under evalf.

restart;
eq := int(c*x^(4/5)*exp((1+x)^(1/7)), x = 1..2);
        int(c*x^(4/5)*exp((1+x)^(1/7)), x = 1 .. 2)
expand(eq);
        c*int(x^(4/5)*exp((1+x)^(1/7)), x = 1 .. 2)
evalf(%);
        4.320521355*c
Classification of conformally equivalent annuli via periods
It seems details can be found in Nehari's Conformal Mapping, Chapter 1, section 9. Perhaps a better reference is Chapter 6, section 5 of Ahlfors' Complex Analysis.
How many ways are there to place $l$ balls in $m$ boxes each of which has $n$ compartments (2)?
The answer given here is actually to be considered an addendum to the answer given to the other post, where we considered the number of ways $C(m,n,l)$ to arrange:
- $l$ indistinguishable balls
- into $m$ distinguishable boxes, each provided with $n$ distinguishable compartments
- counting only the arrangements that contain no empty boxes = at least $1$ ball per box.
Let's now call $C_{2}(m,n,l)$ the number of ways to arrange balls in boxes the same as above, but
- counting only the arrangements that contain at least $2$ balls per box.
It is evident that the approach described in the previous answer, leading to the involutory recurrence 4) therein, can be applied here as well. Briefly,
- imagine having the sketch of all the configurations pertaining to $C_{2}(m,n,l)$;
- add an additional box aside;
- transfer one ball at a time from the original block to the new box;
- starting from the 2nd ball, you are constructing a partition of the $(m+1, \, n,\, l)$ arrangement, that at each step accounts for $C_{2}(m,n,l-k) \cdot C_{2}(1,n,k)$;
- the process stops when $k=n$ or $k=l$, but we can sum for $0 \leqslant k \leqslant l$ because by definition $C_{2}(m,n,j) = 0$ for $j < 2m$.
Thus we have $$ C_{\,2} (m + 1,n,l)\quad \left| {\;0 \leqslant \text{integer}\;m,n,l} \right.\quad = \sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,l} \right)} {C_{\,2} (m,n,l - k)\;C_{\,2} (1,n,k)} \tag {4.a} $$ that is the same recurrence as in the previous answer. But now the initial conditions will be $$ C_{\,2} (1,n,l)\quad = \quad \begin{array}{*{20}c} {n\backslash l} &| & 0 & 1 & {2 \leqslant l} \\ \hline 0 &| & 0 & 0 & 0 \\ 1 &| & 0 & 0 & 0 \\ {2 \leqslant n} &| & 0 & 0 & {\left( \begin{gathered} n \\ l \\ \end{gathered} \right)} \\ \end{array} \quad \quad = \quad \left( \begin{gathered} n \\ l \\ \end{gathered} \right) - \left( \begin{gathered} 0 \\ l \\ \end{gathered} \right) - n\left( \begin{gathered} 0 \\ l - 1 \\ \end{gathered} \right) $$ so, we have $$ \begin{gathered} C_{\,2} (2,n,l) = \sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,l} \right)} {\left( {\left( \begin{gathered} n \\ l - k \\ \end{gathered} \right) - \left( \begin{gathered} 0 \\ l - k \\ \end{gathered} \right) - n\left( \begin{gathered} 0 \\ l - k - 1 \\ \end{gathered} \right)} \right)\;\left( {\left( \begin{gathered} n \\ k \\ \end{gathered} \right) - \left( \begin{gathered} 0 \\ k \\ \end{gathered} \right) - n\left( \begin{gathered} 0 \\ k - 1 \\ \end{gathered} \right)} \right)} = \hfill \\ = \left( \begin{gathered} 2n \\ l \\ \end{gathered} \right) - \left( \begin{gathered} n \\ l \\ \end{gathered} \right) - \left( \begin{gathered} n \\ l - 1 \\ \end{gathered} \right) - \left( \begin{gathered} n \\ l \\ \end{gathered} \right) + \left( \begin{gathered} 0 \\ l \\ \end{gathered} \right) + n\left( \begin{gathered} 0 \\ l - 1 \\ \end{gathered} \right) - n\left( \begin{gathered} n \\ l - 1 \\ \end{gathered} \right) + n\left( \begin{gathered} 0 \\ l - 1 \\ \end{gathered} \right) + n^{\,2} \left( \begin{gathered} 0 \\ l - 2 \\ \end{gathered} \right) = \hfill \\ = \left( {\left( \begin{gathered} 2n \\ l \\ \end{gathered} \right) - 2\left( \begin{gathered} n \\ l \\ \end{gathered} \right) + \left( \begin{gathered} 0 \\ l \\ \end{gathered} \right)} \right) - 2n\left( {\left( \begin{gathered} n \\ l - 1 \\ \end{gathered} \right) - \left( \begin{gathered} 0 \\ l - 1 \\ \end{gathered} \right)} \right) + n^{\,2} \left( \begin{gathered} 0 \\ l - 2 \\ \end{gathered} \right) = \hfill \\ = 
\sum\limits_{\left( {0\, \leqslant } \right)\;j\,\,\left( { \leqslant \,2} \right)} {\left( { - 1} \right)^{\,j} \left( \begin{gathered} 2 \\ j \\ \end{gathered} \right)\left( {\sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,2} \right)} {\left( { - 1} \right)^{\,k} \left( \begin{gathered} 2 - j \\ k \\ \end{gathered} \right)\left( \begin{gathered} \left( {2 - j - k} \right)n \\ l - j \\ \end{gathered} \right)} } \right)n^{\,j} } \hfill \\ \end{gathered} $$ Bringing the convolution a few steps forward, we can realize that the pattern of the upper and lower terms of the binomials follows the same pattern as that of $$ \text{conv}\left( {\left( \begin{gathered} n \\ l \\ \end{gathered} \right) - \left( \begin{gathered} 0 \\ l \\ \end{gathered} \right) - n\left( \begin{gathered} 0 \\ l-1 \\ \end{gathered} \right)} \right) = \sum\limits_{k,\;j} {c_{\,k,\;j} \left( \begin{gathered} k \\ l-j \\ \end{gathered} \right)} \quad \overset {\text{pattern}} \longleftrightarrow \quad \left[ {x^{\;k} y^{\;j} } \right]\text{mult}\left( {x^{\,n} - 1 - ny} \right) $$ so that we arrive at: $$ \begin{gathered} C_{\,2} (m,n,l)\quad \left| {\;0 \leqslant \text{integer}\;m,n,l} \right.\quad = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\;j\,\,\left( { \leqslant \,m} \right)} {\left( { - 1} \right)^{\,j} \left( \begin{gathered} m \\ j \\ \end{gathered} \right)\left( {\sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,m} \right)} {\left( { - 1} \right)^{\,k} \left( \begin{gathered} m - j \\ k \\ \end{gathered} \right)\left( \begin{gathered} \left( {m - j - k} \right)n \\ l-j \\ \end{gathered} \right)} } \right)n^{\,j} } = \hfill \\ = \sum\limits_{\left( {0\, \leqslant } \right)\;j\,\,\left( { \leqslant \,m} \right)} {\left( { - 1} \right)^{\,m - j} \left( \begin{gathered} m \\ j \\ \end{gathered} \right)\left( {\sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,m} \right)} {\left( { - 1} \right)^{\,k} \left( \begin{gathered} j \\ k\\ \end{gathered} \right)\left( \begin{gathered} \left( { j - k} \right)n \\ l-m+j \\ \end{gathered} \right)} } \right)n^{\,m - j} } \hfill \\ \end{gathered} \tag {5} $$ which I could check by computer for a small range of the parameters, and it looks correct. This time, for $m=0$, by definition the result should be null, for whichever values of $n$ and $l$. The formula above instead gives $C_{2}(0,n,0) = 1$, which does not look plausible. So we shall limit its validity to $1 \leqslant m$, or correct it for $m=l=0$. For an algebraic verification of 5), let us consider an additional relation. We can partition the $C(m,n,l)$ arrangements with no empty box into those that contain:
- $0$ boxes with exactly 1 ball, ${m \choose 0}$ ways to choose them, and $C_{2}(m,n,l)$ boxes with at least $2$ balls
- $1$ box with exactly 1 ball, ${m \choose 1}$ ways to choose the box and $n$ ways to place the ball in it, and $C_{2}(m-1,n,l-1)$ boxes with at least $2$ balls
...
- $k$ boxes with exactly 1 ball, ${m \choose k}$ ways to choose the boxes and $n^k$ ways to place the balls, and $C_{2}(m-k,n,l-k)$ boxes with at least $2$ balls
...
- $m$ boxes with exactly 1 ball, ${m \choose m}$ ways to choose the boxes and $n^m$ ways to place the balls, and $C_{2}(0,n,l-m)$ boxes with at least $2$ balls.
We can see here that , for $l=m$, the $n^m$ arrangements with exactly $1$ ball per box are legitimate and that this demands that $$C_{2}(0,n,0) = 1$$ The above translates into: $$ C(m,n,l) = \sum\limits_{\left( {0\, \leqslant } \right)\;k\,\,\left( { \leqslant \,m} \right)} {\left( \begin{gathered} m \\ k \\ \end{gathered} \right)n^{\,k} \,C_{\,2} (m - k,n,l - k)} \tag {6} $$ Computing this relation numerically, for $0 \leqslant n,m \leqslant 10$ and $0 \leqslant l \leqslant nm$ returns a full check, and it is also fully demonstrable algebraically. Then each addendum in formula 6) is exactly what you are looking for.
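Since the answer mentions a computer check, here is a minimal Python sketch of such a verification (the value $n=4$ and the test ranges are my illustrative choices, not from the original): it builds $C_{2}$ from the recurrence 4.a), compares it against the closed form 5), and checks the partition identity 6), using the convention $C_{2}(0,n,0)=1$ established above.

```python
from math import comb
from functools import lru_cache

def binom(a, b):
    # binomial coefficient that vanishes outside 0 <= b <= a
    return comb(a, b) if 0 <= b <= a else 0

N = 4  # compartments per box (illustrative; N >= 2 assumed)

@lru_cache(maxsize=None)
def C2(m, l):
    """At least 2 balls per box, via the recurrence (4.a)."""
    if m == 0:
        return 1 if l == 0 else 0          # the convention C2(0,n,0) = 1
    if m == 1:
        return binom(N, l) if l >= 2 else 0
    return sum(C2(m - 1, l - k) * C2(1, k) for k in range(l + 1))

def C2_closed(m, l):
    """Formula (5), first form."""
    return sum((-1) ** j * binom(m, j) * N ** j
               * sum((-1) ** k * binom(m - j, k) * binom((m - j - k) * N, l - j)
                     for k in range(m - j + 1))
               for j in range(m + 1))

@lru_cache(maxsize=None)
def C1(m, l):
    """At least 1 ball per box: the C(m,n,l) of the previous answer,
    computed from the same kind of convolution recurrence."""
    if m == 0:
        return 1 if l == 0 else 0
    return sum(C1(m - 1, l - k) * (binom(N, k) if k >= 1 else 0)
               for k in range(l + 1))

for m in range(0, 6):
    for l in range(0, m * N + 1):
        assert C2(m, l) == C2_closed(m, l)                     # check (5)
        assert C1(m, l) == sum(binom(m, k) * N ** k * C2(m - k, l - k)
                               for k in range(min(m, l) + 1))  # check (6)
print("all checks passed")
```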
Method to solve probability of chips
Your sample space size isn't 36. How many choices are there for the first chip, and how many for the second (without replacement)?
Possible graphs with 8 vertices, 13 edges, $\delta(G)=2$ and $\Delta(G)=4$
We can set up the following equations relating $n_2,n_3,n_4$, since there are 8 vertices and 13 edges: $$n_2+n_3+n_4=8$$ $$2n_2+3n_3+4n_4=2\cdot13=26$$ Subtracting twice the first equation from the second, we get $$n_3+2n_4=10\tag1$$ All three variables are positive integers ($\delta(G)=2$, $\Delta(G)=4$, and it is given that $n_3\ge1$). It is easy enough to list all the solutions to $(1)$ under this condition: $$(n_3,n_4)=(2,4),\ (4,3),\ (6,2)$$ From here $n_2$ can be found, but $(n_3,n_4)=(6,2)$ forces $n_2=0$, which has been excluded. Thus the only possible tuples $(n_2,n_3,n_4)$ are $$(2,2,4)\quad(1,4,3)$$ Explicit graphs realizing both of these degree sequences can be drawn.
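For what it's worth, the case analysis can be brute-force checked in a few lines of Python (my own sketch, not part of the original answer):

```python
# Enumerate all solutions of n2 + n3 + n4 = 8 and 2*n2 + 3*n3 + 4*n4 = 26
# with every variable at least 1, matching the case analysis above.
solutions = [(n2, n3, n4)
             for n2 in range(1, 9)
             for n3 in range(1, 9)
             for n4 in range(1, 9)
             if n2 + n3 + n4 == 8 and 2 * n2 + 3 * n3 + 4 * n4 == 26]
print(solutions)   # [(1, 4, 3), (2, 2, 4)]
```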
How to derive the formula for the Zeta function at even integers
$$f(z)=\frac{\pi^2}{\sin^2(\pi z)}-\sum_n \frac1{(z+n)^2}$$ is a $1$-periodic entire function that tends to $0$ as $|\operatorname{Im} z|\to\infty$; being continuous on a closed period strip, it is bounded there, hence bounded on all of $\mathbb{C}$, and therefore constant by Liouville's theorem: $f(z)=f(i\infty)=0$. Next expand $\sum_n \frac1{(z+n)^2}$ in a Laurent series to make $\zeta(2k)$ appear, and relate $\frac{\pi^2}{\sin^2(\pi z)}$ to $\frac{z}{e^z-1}$ to make the Bernoulli numbers appear.
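Both steps can be sanity-checked numerically; here is a sketch using mpmath (assumed available; the sample point $z=0.3$ and the precision are my choices):

```python
from mpmath import mp, mpf, pi, sin, nsum, inf, bernoulli, factorial, zeta

mp.dps = 30
z = mpf('0.3')

# the 1-periodic function f(z) vanishes identically:
lhs = pi**2 / sin(pi * z)**2
rhs = nsum(lambda n: 1 / (z + n)**2, [-inf, inf])
print(lhs - rhs)                     # ~ 1e-30, i.e. f(z) = 0

# the resulting closed form: zeta(2k) = (-1)^(k+1) B_{2k} (2 pi)^{2k} / (2 (2k)!)
for k in range(1, 5):
    closed = (-1)**(k + 1) * bernoulli(2*k) * (2*pi)**(2*k) / (2 * factorial(2*k))
    print(k, closed - zeta(2*k))     # ~ 0 for each k
```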
Prove $\int_0^\infty \frac{e^{\frac{-1}{x}}-1}{x^{\frac{2}{3}}}dx$ converges
Note that $1-e^{-u} \sim u$ as $u \to 0$, by Taylor expansion. Consequently $e^{-1/x}-1 \sim -x^{-1}$ as $x \to \infty$, and therefore $x^{-2/3}(e^{-1/x}-1) \sim -x^{-5/3}$ as $x \to \infty$, which is integrable at infinity. Near $x=0$ the integrand behaves like $-x^{-2/3}$ (since $e^{-1/x}-1\to-1$ there), which is likewise integrable. What can you conclude?
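As a numeric sanity check (my addition; mpmath assumed available), the integral indeed comes out finite; in fact the substitution $u=1/x$ turns it into $\int_0^\infty u^{-4/3}(e^{-u}-1)\,du=\Gamma(-1/3)$:

```python
from mpmath import mp, mpf, quad, exp, gamma, inf

mp.dps = 20
f = lambda x: (exp(-1/x) - 1) / x**(mpf(2)/3)
print(quad(f, [0, 1, inf]))   # ~ -4.0624, a finite value
print(gamma(mpf(-1)/3))       # matches Gamma(-1/3)
```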
Is proving $O(3)\cong \mathbb{Z}_2 \otimes SO(3)$ equivalent to proving $O(3)/SO(3) \cong \mathbb{Z}_2$
As commenters point out, the notation $\otimes$ is not used for groups, only for vector spaces. What you are thinking about is the direct product $\times$. For groups $G$ and $H$, the group $G \times H$ consists of all pairs $(g, h)$ with $g \in G$ and $h \in H$, where the group operation is just 'pointwise': for multiplicative groups we have $(g_1, h_1)(g_2, h_2) = (g_1g_2, h_1h_2)$ and for additive groups we have $(g_1, h_1) + (g_2, h_2) = (g_1 + g_2, h_1 + h_2)$. So everything happens inside the separate copies; there is no interaction between the groups, so to speak. There are some subtleties. We have that $\mathbb{Z}_2 \times \mathbb{Z}_3 \cong \mathbb{Z}_6$, but we do not have that $\mathbb{Z}_2 \times \mathbb{Z}_2 \cong \mathbb{Z}_4$. This latter example is an answer (of the 'no: see this counter-example' type) to your question, because we do have that $\mathbb{Z}_4/\mathbb{Z}_2 \cong \mathbb{Z}_2$. I recommend you write out these really small examples explicitly to understand what is going on here. The 'four maps' I was talking about in my answer to your other question are the following. There are two 'outgoing' homomorphisms $G \times H \to G$ and $G \times H \to H$, given by $(g, h) \mapsto g$ and $(g, h) \mapsto h$. There are also two 'incoming' maps: one map $G \to G \times H$ given by $g \mapsto (g, e)$, where $e$ is the neutral element of $H$, and one $H \to G \times H$ given by $h \mapsto (e', h)$, where $e'$ is the neutral element of $G$. Writing $B$ (for big) for the big group $G \times H$, we have a copy of $G$ sitting inside $B$ as $(G, e)$ (i.e. the image of the incoming map from $G$). This copy of $G$ is a normal subgroup, so there exists a quotient $B/G$. It is not hard to see that this quotient must be isomorphic to $H$, and in a sloppy sense the outgoing map to $H$ gives you the isomorphism. However: in the general situation where you have a normal subgroup isomorphic to $G$ in a big group $B$ such that $B/G \cong H$, we do not have that $B = G \times H$. For that to be true we moreover need a normal subgroup of $B$ isomorphic to $H$ that intersects the copy of $G$ trivially and together with it generates $B$; then indeed $B \cong G \times H$ (and the quotient $B/H$, which then also exists, is isomorphic to $G$). In other words: you need to do twice as much work to show $B = G \times H$ as to show that $B/G \cong H$. I hope this helps.
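Writing out the small example explicitly, as suggested, can even be done mechanically. Here is a throwaway Python sketch (entirely my own) that distinguishes $\mathbb{Z}_4$ from $\mathbb{Z}_2 \times \mathbb{Z}_2$ by comparing the multisets of element orders:

```python
from itertools import product

def order(add, zero, g):
    # order of g in a finite additive group given by (add, zero)
    k, h = 1, g
    while h != zero:
        h = add(h, g)
        k += 1
    return k

z4 = [0, 1, 2, 3]
orders_z4 = sorted(order(lambda a, b: (a + b) % 4, 0, g) for g in z4)

z2xz2 = list(product([0, 1], repeat=2))
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
orders_z2xz2 = sorted(order(add, (0, 0), g) for g in z2xz2)

print(orders_z4)     # [1, 2, 4, 4]
print(orders_z2xz2)  # [1, 2, 2, 2] -> not isomorphic to Z_4
```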
Number of ways of selection
What you thought is completely correct. The source problem does not clearly state that the balls are, apart from their colors, distinguishable. This means that the black balls are numbered from $1$ to $5$, and the red balls from $1$ to $6$. Given these numberings there are the $200$ listed selections.
If the $T_B$ space $X$ has a non-Hausdorff compact subset, then $X \times X$ is not $T_B$? Why?
If $(X,\tau)$ has a compact subset $K$ which is not Hausdorff, then $X\times X$ has the compact subset $K\times K$. Now the diagonal $\triangle\cap(K\times K)$ is homeomorphic to $K$, thus compact. If it were closed, then $K$ would be Hausdorff. So it cannot be closed. This shows that $X\times X$ is not $T_B.$
Joint probability of geometric random variables
For a geometric random variable $Y$ with parameter $y$, $$\begin{align} P\{Y > n\} &= \sum_{i=n+1}^\infty P\{Y = i\} = \sum_{i=n+1}^\infty y(1-y)^{i-1}\\ &= y(1-y)^n[1 + (1-y) + (1-y)^2 + \cdots ]\\ &= y(1-y)^n \times \frac{1}{1-(1-y)}\\ &=(1-y)^n. \end{align}$$ Actually, the end result is more easily derived by arguing that the event $\{Y > n\}$ occurs if and only if the first $n$ trials ended in failure, and the probability of this is just $(1-y)^n$. Regardless of which way you computed $P\{Y > n\}$, you can compute $$P\{X = n, Y> n\} = P\{X = n\}P\{Y > n\} = x(1-x)^{n-1}(1-y)^n$$ and hence $$\begin{align} P\{X < Y\} &= \sum_{n=1}^\infty P\{X = n, Y > n\}\\ &= \sum_{n=1}^\infty x(1-x)^{n-1}(1-y)^n\\ &= x(1-y) [1 + (1-x)(1-y) + ((1-x)(1-y))^2 + \cdots]\\ &= x(1-y)\times \frac{1}{1 - (1-x)(1-y)}\\ &= \frac{x(1-y)}{x+y-xy}. \end{align}$$ This answer too has a nice interpretation. Let us carry out the two $n$-th trials simultaneously. Then we can ignore all instances in which failures were recorded on both trials (the next pair of trials must then be conducted) and concentrate on the very first time that success occurred on at least one of the paired trials, possibly on both. Conditioned on at least one success, an event of probability $x+y-xy$ (remember $P(A\cup B) = P(A)+P(B)-P(A)P(B)$ for independent events?), the probability that $X < Y$ is just the conditional probability that the $X$-trial succeeded and the $Y$-trial did not, which has conditional probability $\frac{x(1-y)}{x+y-xy}$. Thus we have $$P\{X < Y\} = \frac{x(1-y)}{x+y-xy}, ~~ P\{X > Y\} = \frac{(1-x)y}{x+y-xy}, ~~ P\{X = Y\} = \frac{xy}{x+y-xy}$$ without any need for summing series, or indeed without even writing down the joint pmf.
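Both routes to $P\{X<Y\}$ are easy to cross-check numerically. Here is a small Python snippet (the parameters $x=0.3$, $y=0.5$ are arbitrary choices of mine) comparing a truncation of the series with the closed form:

```python
x, y = 0.3, 0.5   # success probabilities, arbitrary illustrative values

# truncate sum_{n>=1} x (1-x)^(n-1) (1-y)^n; the tail is geometrically small
series = sum(x * (1 - x)**(n - 1) * (1 - y)**n for n in range(1, 200))
closed = x * (1 - y) / (x + y - x * y)
print(series, closed)   # both ~ 0.2307692...
```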
Does this Manifold exist?
Your argument seems fine to me. There is just one ambiguity about one chart 'getting' the whole manifold. By your work it seems as if the chart map is surjective, but if we know the manifold is compact, a chart could miss a few points and still give us enough information to determine the whole manifold. In that sense we would have a chart that gets the whole manifold without actually surjecting onto it. An example of this would be charting a sphere by mapping an open set onto everything but the north pole. From this chart you could determine that the manifold in question is definitely the sphere, and hence you 'get it' from the single chart.
$F(x,y)=2x^4-3x^2y+y^2$. Show that $(0,0)$ is a local minimum of the restriction of $F$ to every line that passes through $(0,0)$.
A direct way is to restrict $F$ to each line through the origin and check the cases. Along the $y$-axis, $F(0,y)=y^2$, which has a strict local minimum at $y=0$. Along the line $y=tx$,
$$
g(x)=F(x,tx)=2x^4-3tx^3+t^2x^2 .
$$
If $t\neq 0$, then $g(0)=0$, $g'(0)=0$ and $g''(0)=2t^2>0$, so $x=0$ is a strict local minimum of $g$. If $t=0$, then $g(x)=2x^4\geqslant 0$, again a local minimum. Hence the restriction of $F$ to every line through $(0,0)$ has a local minimum there. Note, however, that $(0,0)$ is not a local minimum of $F$ itself: since
$$
F(x,y)=2x^4-3x^2y+y^2=(y-x^2)(y-2x^2),
$$
we get $F\left(x,\tfrac32x^2\right)=-\tfrac14x^4<0$ at points arbitrarily close to the origin.
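A quick symbolic check of both claims, using sympy (my addition):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
F = 2*x**4 - 3*x**2*y + y**2

# restriction to the line y = t*x: positive second derivative at 0 for t != 0
g = F.subs(y, t*x)
print(sp.diff(g, x, 2).subs(x, 0))          # 2*t**2

# restrictions to the y-axis and to the x-axis:
print(F.subs(x, 0), F.subs(y, 0))           # y**2, 2*x**4

# yet F is negative on the parabola y = (3/2) x^2, arbitrarily close to 0:
print(sp.simplify(F.subs(y, sp.Rational(3, 2)*x**2)))   # -x**4/4
```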
Let $G$ be a vector field defined in $A = \mathbb{R}^n \setminus \{0\}$ with $\operatorname{div}G = 0$ in $A$.
Let me attempt to answer my own question, but I may be wrong. Let $K$ be the unit normal vector field on $\partial M_1 \cup \partial M_3$ pointing outwards from $M_1 \setminus \operatorname{Int} M_3$. By the divergence theorem,
$$
\int_{M_1 \setminus \operatorname{Int} M_3} \operatorname{div} G \, dV
= \int_{\partial M_1 \cup \partial M_3} \langle G, K \rangle \, dA
= \int_{\partial M_1} \langle G, K \rangle \, dA + \int_{\partial M_3} \langle G, K \rangle \, dA
= \int_{\partial M_1} \langle G, N_1 \rangle \, dA - \int_{\partial M_3} \langle G, N_3 \rangle \, dA ,
$$
since $K = N_1$ on $\partial M_1$, while on $\partial M_3$ the outward normal of the region points into $M_3$, so $K = -N_3$ there. The splitting of the boundary integral is just the linearity of the inner product and of the integral. Finally, since $\operatorname{div} G = 0$ the left-hand side vanishes, which gives $\int_{\partial M_1} \langle G, N_1 \rangle \, dA = \int_{\partial M_3} \langle G, N_3 \rangle \, dA$.
Searching for numerical algorithm realization
Here is a link that outlines lots of different options, and the code that is linked there is generally well documented and readable. For nonlinear solvers, my opinion is that Netlib has better stuff (such as MINPACK) than LAPACK, but there are bound to be many other good implementations as well. Most of this stuff will be in C or Fortran, but there are some examples in Matlab, which should be closer to the usual linear-algebra form. You should also consider iterative methods like gradient descent, Newton's method, and quasi-Newton methods like the Davidon-Fletcher-Powell method. Added: Since you expanded on the question in the comments, it seems clear that Newton's method will work for you. Your problem is small (only 3 variables) and has a known analytical form, so you can calculate the Jacobian matrix $J$ by hand. By reformulating the update equation of Newton's method, you do not need to compute the inverse of $J$: you'll only need a linear solver. (See here for more details.) In your case, the vector of solutions will be $x_{k}$, and you initialize $x_{0} = (p_{0}, s_{1,0}, s_{2,0})^{T}$ as the vector of initial guesses. Then just run the iterative scheme for $k=1,2,\ldots$: $$ x_{k}\textrm{ is the solution to }J(x_{k-1})[x_{k} - x_{k-1}] = -F(x_{k-1})$$ where, with it understood that $x = (p,s_{1},s_{2})^{T}$ as above (so you would rewrite the expression below using $x_{i}$ for the $i$th unknown): $$ F\biggl( (p,s_{1},s_{2})^{T} \biggr) = \begin{pmatrix} \frac{s_2 - K_2}{ps_2^{\gamma_1} + (1 - p)s_2^{\gamma_2}} - \frac{K_1 - s_1}{ps_1^{\gamma_1} + (1 - p)s_1^{\gamma_2}} \\ \frac{s_2}{s_2 - K_2} - \frac{p\gamma_1s_2^{\gamma_1} + (1 - p)\gamma_2 s_2^{\gamma_2}}{ps_2^{\gamma_1} + (1 - p)s_2^{\gamma_2}} \\ \frac{s_1}{s_1 - K_1} - \frac{p\gamma_1s_1^{\gamma_1} + (1 - p)\gamma_2 s_1^{\gamma_2}}{ps_1^{\gamma_1} + (1 - p)s_1^{\gamma_2}} \end{pmatrix} $$ You continue this until you reach some termination criterion, such as hitting a maximum number of iterations, observing that the norm $||x_{k}-x_{k-1}||$ has become sufficiently small for your purposes, or seeing that the value of the objective function at the current solution is small: $||F(x_{k})|| < tol$ for some tolerance $tol$. These are all things that you can do with existing library functions in almost any scientific computing package; a minimal sketch of the loop is given after the references below. Almost any standard textbook on numerical analysis will offer a better explanation of this method, and of how to check whether your particular problem falls into any of its pitfalls. Textbooks will also include references to computer code for solving such a problem. But your first step should be to calculate the Jacobian by hand in any case, since it is available to you. Two textbooks that I am familiar with and which should give good guidance for this problem are: Numerical Analysis and Scientific Computation by Jeff Leader (link) Numerical Methods in Engineering with Python by Jaan Kiusalaas (link) Both of these are highly readable and focus on giving you an understanding of the methods rather than just formulas for using them (which you can find in the LAPACK and Netlib implementations, in Numerical Recipes, or in dozens of other places). The first book focuses on examples in Matlab if you prefer that language; most of its examples can also be used directly in Octave, which is a free, open-source Matlab clone. The second book shows examples in Python, and if you go the Python route, I recommend working with the NumPy library, because pure Python will be very inefficient.
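To make the scheme concrete, here is a minimal NumPy sketch of the loop described above. The parameter values, the initial guess, and the use of a finite-difference Jacobian (in place of the hand-computed one recommended above) are all placeholder choices of mine; whether the iteration converges depends on those choices, exactly as the pitfalls mentioned above suggest.

```python
import numpy as np

K1, K2, g1, g2 = 0.8, 1.2, 0.5, 2.0        # illustrative parameter values only

def F(v):
    p, s1, s2 = v
    d1 = p * s1**g1 + (1 - p) * s1**g2     # denominator at s1
    d2 = p * s2**g1 + (1 - p) * s2**g2     # denominator at s2
    return np.array([
        (s2 - K2) / d2 - (K1 - s1) / d1,
        s2 / (s2 - K2) - (p*g1*s2**g1 + (1 - p)*g2*s2**g2) / d2,
        s1 / (s1 - K1) - (p*g1*s1**g1 + (1 - p)*g2*s1**g2) / d1,
    ])

def jac(F, v, h=1e-7):
    # central finite differences; swap in the hand-computed Jacobian if you have it
    J = np.empty((v.size, v.size))
    for i in range(v.size):
        e = np.zeros_like(v); e[i] = h
        J[:, i] = (F(v + e) - F(v - e)) / (2 * h)
    return J

v = np.array([0.5, 0.5, 1.5])              # placeholder initial guess
for k in range(50):
    step = np.linalg.solve(jac(F, v), -F(v))   # solve J dx = -F: no inverse needed
    if not np.all(np.isfinite(step)):
        break                               # iteration wandered into a singularity
    v = v + step
    if np.linalg.norm(step) < 1e-12:
        break
print(k, v, np.linalg.norm(F(v)))           # inspect the residual
```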
$\pi_0$ in the long exact sequence of a fibration and quaternionic projective space
$Sp(1)$ doesn't have two path components. It's just the group of unit quaternions, which is $S^3$, and in particular path-connected, so $\pi_0(Sp(1))$ is trivial.
Does this proof use a tautological statement?
It is fine to assume that $n(\epsilon/5, \zeta)$ is finite, because it cannot be infinite. We take $n(\epsilon, z)$ to be the least positive integer such that (24.3) holds. I am going to assume that at some point before the excerpt you quoted, it is shown that the sequence $F_p(z)$ converges to some limit $F(z)$ pointwise, and therefore (24.3) must hold for some integer. The set of all integers where (24.3) holds then has a (finite) least element. Meanwhile, $N$ is not automatically finite, because $N$ is defined as a limit: $$N = \lim_{p \to \infty} n(\epsilon, z_p).$$ This limit could be infinite, even if every $n(\epsilon, z_p)$ is finite. It is only once we show that the sequence $n(\epsilon, z_p)$ has an upper bound that we can conclude that $N$ must be finite as well.
Ahlfors method for partial fraction decomposition
You would write $R(\beta_i + 1/\zeta)$ as a quotient of polynomials, in your example, multiply numerator and denominator with $\zeta^4$ to obtain $$R(1 + 1/\zeta) = \frac{(\zeta+1)^4}{\zeta\bigl((\zeta+1)^3 - \zeta^3\bigr)},$$ then normalise and do polynomial division. However, the method used is not introduced for practical purposes, since it usually takes more work than a direct partial fraction decomposition. The method is used for purely theoretical purposes, it quickly establishes the existence of the partial fraction decomposition of a rational function.
Sequence convergence to zero
Not sufficient. The sequence $a_n=\frac{1}{2} + \frac{1}{n}$ is decreasing, bounded below by zero, monotone, and all of its terms are positive, yet its limit is not zero. Of course, its infimum is also not zero, but you haven't shown that $0$ is the infimum of $x_n$ either. Hint: if $\frac{x_{n+1}}{x_n}$ were exactly $L$ for every $n$, then $x_n=L^nx_0$, which tends to $0$ because $L<1$. Of course, you don't have $\frac{x_{n+1}}{x_n}=L$, but can you find an $L'<1$ such that $\frac{x_{n+1}}{x_n}<L'$ at least for all $n>N$, for some $N$?
Electromagnetism & the Gauge Theory
Gauge theories are now regarded as fiber bundles with a connection. If the gauge group is $U(1)$, one gets electromagnetism; when a more complex Lie group is used, such as $SU(3)$ (quantum chromodynamics), one gets a Yang-Mills theory. Non-abelian gauge theories are very complicated. The connection is normally described by a vector potential $A^j_\nu$, where $j$ refers to a group generator index and $\nu$ is a spacetime index.
What is a more formal name for the wedgey group?
I think you have a nil-2 product of $n$ copies of $F$ (rather than two copies of $V$, as I originally wrote in comments). If $A_1,\ldots,A_n$ are abelian groups, then their 2-nilpotent product is their free product modulo the second term of the lower central series of the free product (in general, you take the free product and mod out by the intersection of the second term of the lower central series with the cartesian, the kernel of the map from the free to the direct product). For the abelian case, the elements of the 2-nil product can be written uniquely as $$a_1\cdots a_n\gamma$$ where $a_i\in A_i$, and $\gamma$ is in the commutator subgroup. The commutator subgroup is isomorphic to $$\sum_{1\leq i\lt j\leq n} A_j\otimes A_i.$$ (In the general case, you take the abelianizations). The multiplication rule is $$(a_1\cdots a_n\gamma)(b_1\cdots b_n\delta) = (a_1b_1\cdots a_nb_n)\left(\gamma+\delta+\sum_{1\leq i\lt j\leq n}b_i\otimes a_j\right).$$ If we take each $A_i$ isomorphic to $F$, then the commutator subgroup will be isomorphic to an $F$-vector space of dimension $\binom{n}{2}$, with basis given by $v_{ji}$, where $1\leq i\lt j\leq n$ (identifying $1_j\otimes 1_i$ with $v_{ji}$). If you make the "convention" that $v_{ij} = -v_{ji}$ and $v_{ii}=0$, then you get exactly the wedge $F^n\wedge F^n$. I am a little worried about that factor of $2$ in the commutator; there would be no problem with defining the $2$-nil product of groups of exponent $2$ and getting a 2-nil group (of exponent 4), which you cannot do with your set up. Indeed, if I think of this as the $2$-nilpotent product of $n$-copies of $F$, with elements expressed as sums of multiples of $\mathbf{1}_1,\ldots,\mathbf{1}_n$, I would write: \begin{align*} [\alpha_1\mathbf{1}_1+\cdots+\alpha_n\mathbf{1}_n,\beta_1\mathbf{1}_1+\cdots+\beta_n\mathbf{1}_n] &= \prod_{i,j=1}^n [\alpha_i\mathbf{1}_i,\beta_j\mathbf{1}_j]\\ &= \prod_{1\leq i\lt j\leq n} [\mathbf{1}_j,\mathbf{1}_i]^{\alpha_j\beta_i-\alpha_i\beta_j}. \end{align*} On the other hand, you would have \begin{align*} 2(\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n)\wedge(\beta_1\mathbf{v}_1 + \cdots + \beta_n\mathbf{v}_n) &= 2\sum_{i,j=1}^n \alpha_i\mathbf{v}_i\wedge\beta_j\mathbf{v}_j\\ &= 2\sum_{i,j=1}^n (\alpha_i\beta_j)\mathbf{v}_i\wedge\mathbf{v}_j\\ &= 2\sum_{1\leq i\lt j\leq n} (\alpha_j\beta_i - \beta_j\alpha_i)\mathbf{v}_i\wedge\mathbf{v}_j, \end{align*} that is, twice as much as I do.
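If it helps to experiment, here is a small Python sketch (entirely my own) of the nil-2 product of $n$ copies of $F=\mathbb{Z}/5\mathbb{Z}$, encoding an element $a_1\cdots a_n\gamma$ as a vector over $F$ together with a strictly lower-triangular matrix of commutator coefficients, and checking the commutator formula above on random elements:

```python
import numpy as np

p, n = 5, 3                                 # F = Z/5Z, n copies (my choices)
rng = np.random.default_rng(0)

def mult(x, y):
    # (a, g)(b, d) = (a + b, g + d + T), where T[j, i] = a_j * b_i for j > i
    # records the commutators [1_j, 1_i] created when sorting the product
    (a, g), (b, d) = x, y
    T = np.tril(np.outer(a, b), k=-1)
    return ((a + b) % p, (g + d + T) % p)

def inv(x):
    # solve mult(x, x_inv) = identity for x_inv
    a, g = x
    return ((-a) % p, (np.tril(np.outer(a, a), k=-1) - g) % p)

def comm(x, y):                             # [x, y] = x^{-1} y^{-1} x y
    return mult(mult(inv(x), inv(y)), mult(x, y))

zero = np.zeros((n, n), dtype=int)
a, b = rng.integers(0, p, n), rng.integers(0, p, n)
_, g = comm((a, zero), (b, zero))
# predicted exponent of [1_j, 1_i] is a_j b_i - a_i b_j  (for j > i)
pred = np.tril(np.outer(a, b) - np.outer(b, a), k=-1) % p
print(np.array_equal(g, pred))              # True
```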
Flip a fair coin repeatedly. What is the probability that the first sequence of heads is exactly two heads long?
We assume this means that, for example, the first few tosses are something like $TTTHHT\dots$. Let $p$ be the required probability. Given that the first toss is $H$, the required probability is $\frac{1}{4}$, for then that $H$ has to be followed by $HT$. Given that the first toss is $T$, the required probability is $p$. Thus $$p=\frac{1}{2}\cdot\frac{1}{4}+\frac{1}{2}p.$$ Solve this linear equation for $p$.
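Solving the equation gives $p=\tfrac14$, which a quick simulation confirms (a throwaway Python check of mine):

```python
import random
random.seed(1)

def first_head_run_length():
    flip = lambda: random.random() < 0.5    # True = heads, fair coin
    while not flip():                       # discard the leading tails
        pass
    run = 1                                 # that flip was the first head
    while flip():                           # count any further heads
        run += 1
    return run

trials = 100_000
p_hat = sum(first_head_run_length() == 2 for _ in range(trials)) / trials
print(p_hat)                                # close to 1/4
```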
Trying to prove product rule of integration
The product rule for differentiation says that $$(uv)'=uv'+vu'\implies uv'=(uv)'-vu',$$ integrating which gives $$\int uv'=\int(uv)'-\int vu'\implies\int uv'=uv-\int vu'.$$ You can do a similar thing for the quotient rule too, i.e. $$\left(\frac{u}{v}\right)'=\frac{vu'-uv'}{v^2}\implies\int\frac{uv'}{v^2}=\int\frac{u'}{v}-\frac{u}{v}.$$
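A concrete instance, checked with sympy (my example, taking $u=x$ and $v=e^x$):

```python
import sympy as sp

x = sp.Symbol('x')
u, v = x, sp.exp(x)                         # so u' = 1 and v' = e^x
lhs = sp.integrate(u * sp.diff(v, x), x)    # the integral of u v'
rhs = u * v - sp.integrate(v * sp.diff(u, x), x)
print(sp.simplify(lhs - rhs))               # 0: the antiderivatives agree
```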
Is a certain inverse image under an equivariant map a submanifold?
Suppose $f:X\to G/K$ is smooth and $G$-equivariant, with $G$ a Lie group and $K$ a closed subgroup. Then $f$ must be a submersion: if $p\in X$ and $\xi\in\mathfrak g$, then $f(gp)=gf(p)$ gives $f(\exp(t\xi)p)=\exp(t\xi)f(p)$, and differentiating at $t=0$ shows that $df_p$ sends the fundamental vector field of $\xi$ at $p$ to the fundamental vector field of $\xi$ at $f(p)$. Since $G$ acts transitively on $G/K$, the fundamental vector fields span every tangent space of $G/K$, so $df_p$ is surjective. Thus every $gK\in G/K$ is a regular value, and $f^{-1}(gK)$ is a (closed embedded) submanifold for every $gK\in G/K$.
Valid way to show that the origin is the only critical point?
Yes, your method is correct. $$x'=y+kx(x^2+y^2), \ \ y'=-x+ky(x^2+y^2), \ \ \ \ k\in\mathbb{R}.$$ Note that $$rr'= xx'+yy'$$ $$rr'= xy+kx^2(x^2+y^2)-xy+ky^2(x^2+y^2)=k(x^2+y^2)^2=kr^4,$$ which implies $$r'=kr^3.$$ The origin is the only equilibrium.
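The computation is easy to confirm symbolically (sympy, my addition):

```python
import sympy as sp

x, y, k = sp.symbols('x y k', real=True)
xdot = y + k*x*(x**2 + y**2)
ydot = -x + k*y*(x**2 + y**2)
# r r' = x x' + y y' should equal k r^4 = k (x^2 + y^2)^2
print(sp.simplify(x*xdot + y*ydot - k*(x**2 + y**2)**2))   # 0
```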
Cyclotomic cosets modulo 26
From the discussion in the comments, it turns out that the question concerns the cyclotomic cosets of $3$ modulo $26$. So, first we look at the powers of $3$, modulo $26$. Since $3^3=27\equiv1\bmod{26}$, the powers of $3$ modulo $26$ are just $\{\,1,3,9\,\}$. So this is $C_1$ (and it's also $C_3$, and $C_9$). Then $C_2=\{\,2,6,18\,\}$, $C_4=\{\,4,12,10\,\}$, $C_5=\{\,5,15,19\,\}$, $C_7=\{\,7,21,11\,\}$, $C_8=\{\,8,24,20\,\}$, $C_{13}=\{\,13\,\}$, $C_{14}=\{\,14,16,22\,\}$, $C_{17}=\{\,17,25,23\,\}$ and I suppose you need $C_0=\{\,0\,\}$.
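For reference, the cosets can also be generated mechanically; here is a short Python sketch (mine) for the cyclotomic cosets of $q=3$ modulo $n=26$:

```python
n, q = 26, 3
seen, cosets = set(), []
for s in range(n):
    if s in seen:
        continue
    coset, x = [], s
    while x not in coset:        # multiply by q until the orbit closes
        coset.append(x)
        x = (x * q) % n
    cosets.append(sorted(coset))
    seen.update(coset)
print(cosets)
# [[0], [1, 3, 9], [2, 6, 18], [4, 10, 12], [5, 15, 19], [7, 11, 21],
#  [8, 20, 24], [13], [14, 16, 22], [17, 23, 25]]
```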
How would you write a function from $A\sqcup B$ to $C\sqcup B$?
The disjoint union operation is universal for maps out, in the following sense. Given any maps $f:A \to X$ and $g:B \to X$, there is a map $$ f \sqcup g: A \sqcup B \to X, $$ defined (in the obvious way) by $$ (f \sqcup g)(t) = \begin{cases} f(t) & \text{if }\, t \in A, \\ g(t) & \text{if }\, t \in B. \end{cases} $$ Conversely, given any map $f \sqcup g: A \sqcup B \to X$, we can recover the maps $f$ and $g$ by restriction. This construction is an example of the coproduct construction in the category of sets. Coproducts take different forms in different categories, but they are always defined by the universal mapping property described above. Incidentally, the coproduct is dual to the product construction (universal for maps in); the Cartesian product plays the role of product in the category of sets. By the way, in your example, the second map is $\operatorname{id}_B:B \to B$, and suppressing the natural embeddings $C \subseteq C \sqcup B$ and $B \subseteq C \sqcup B$, the function would be written $$ f \sqcup \operatorname{id}_B. $$ If we were being really careful, we'd name the embeddings $$ \iota_C: C \hookrightarrow C \sqcup B \qquad\text{and}\qquad \iota_B: B \hookrightarrow C \sqcup B, $$ and the map you're interested in would be the coproduct of these compositions, each of which lands in $X = C \sqcup B$: $$ (\iota_C \circ f) \sqcup (\iota_B \circ \operatorname{id}_B). $$ By the way, when the meaning is clear, it's typical not to be so careful (i.e., pedantic) in choosing notation. But when you're first learning, it can be useful to see it written carefully.
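The universal property is easy to mirror in code. Here is a small Python sketch (the names `inl`, `inr`, `copair` are my own) modelling the disjoint union as tagged pairs and building the map $(\iota_C \circ f) \sqcup (\iota_B \circ \operatorname{id}_B)$:

```python
def inl(a): return ('L', a)            # embedding A -> A ⊔ B
def inr(b): return ('R', b)            # embedding B -> A ⊔ B

def copair(f, g):
    # the map f ⊔ g : A ⊔ B -> X, defined case-wise on the tags
    def h(t):
        tag, v = t
        return f(v) if tag == 'L' else g(v)
    return h

# the map (iota_C . f) ⊔ (iota_B . id_B) : A ⊔ B -> C ⊔ B from the answer:
f = lambda a: a * 2                    # some f : A -> C
h = copair(lambda a: inl(f(a)), lambda b: inr(b))
print(h(inl(3)), h(inr('x')))          # ('L', 6) ('R', 'x')
```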
An exercise on the noncentral $\chi^2$ distribution.
Since the $Y$s are independent normals with equal variances, their joint density is spherically symmetric about the mean vector, so in any rotated orthonormal coordinate system the components are again independent normals with variance $\sigma^2$. Let $P$ be the orthogonal projection of the column vector $(Y_1,\ldots, Y_n)^T$ onto the one-dimensional space $x_1=\cdots=x_n$, and let $Q=I-P$ be the complementary orthogonal projection onto the $(n-1)$-dimensional space $x_1+\cdots+x_n=0$. Then $$ Q\begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} Y_1-\bar Y \\ \vdots \\ Y_n-\bar Y \end{bmatrix} \text{ and }Q\begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} a_1-\bar a \\ \vdots \\ a_n-\bar a \end{bmatrix}. $$ If we express everything in a rotated coordinate system with one axis in the direction of the space $x_1=\cdots=x_n$ and the others orthogonal to that, this is expressed as $$ Q\begin{bmatrix} U_1 \\ U_2 \\ \vdots \\ U_n \end{bmatrix} = \begin{bmatrix} 0 \\ U_2 \\ \vdots \\ U_n \end{bmatrix} \text{ and }Q\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = \begin{bmatrix} 0 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}. $$ Then $$ \frac{1}{\sigma^2}(U_2^2+\cdots +U_n^2) \sim \chi^2_{n-1,\;(b_2^2+\cdots+b_n^2)/\sigma^2}, $$ since the $U_j/\sigma$ are independent normals with unit variance and means $b_j/\sigma$. But $$ (Y_1-\bar Y)^2 + \cdots+(Y_n-\bar Y)^2 = U_2^2+\cdots+U_n^2 $$ and $$ (a_1-\bar a)^2+\cdots+(a_n-\bar a)^2 = b_2^2+\cdots+b_n^2. $$
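A Monte Carlo sanity check of the conclusion (NumPy/SciPy assumed available; the values of $a$ and $\sigma$ are arbitrary choices of mine):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = np.array([1.0, 2.0, 4.0, 7.0]); sigma = 1.5; n = len(a)
lam = np.sum((a - a.mean())**2) / sigma**2   # noncentrality parameter

# sum((Y_i - Ybar)^2) / sigma^2 should be noncentral chi-square(n-1, lam)
Y = rng.normal(a, sigma, size=(200_000, n))
S = np.sum((Y - Y.mean(axis=1, keepdims=True))**2, axis=1) / sigma**2

print(S.mean(), stats.ncx2.mean(df=n - 1, nc=lam))   # both ~ (n-1) + lam
print(stats.kstest(S, stats.ncx2(df=n - 1, nc=lam).cdf).statistic)  # small
```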
function limits and continuity
Continuity of $f$ at $p$ is indeed achieved if and only if $\lim_{x\to p}f(x)=f(p)$. (It is implicit that the limit must exist.) If you compare the definitions of a limit and of continuity, you will notice that they just differ in $L$ vs. $f(p)$.
The Hausdorff property versus closedness of the diagonal in the context of convergence spaces
These statements are still equivalent, and we can apply the same idea as with topological spaces. Convergence of (proper) filters in a product space is componentwise, so a filter converges to two different points if and only if its image under the diagonal map $X \to X \times X$ converges to a point outside of the diagonal (that is, exactly when the diagonal is not closed). The same can be done with nets instead.